People who think that open source suffers from poor quality often air this myth as well. It seems superficially to make sense, because malicious attackers can read open code and find bugs they can exploit. Such bugs are often called “zero-day vulnerabilities” because an intruder may find and exploit a flaw before the legitimate developers and security researchers even know it exists, leaving them zero days to fix it.
But consider this: Why are modern security tools (such as the encryption methods used to send data securely over the Web) open source?
In fact, security researchers prefer tools that are open source, because open code can be reviewed by a wide range of experts. Proprietary tools generally receive far less outside scrutiny from security experts, so flaws can linger undiscovered.
Yes, open source tools still have security flaws, but at roughly the same rate as proprietary software. And keeping code closed offers little protection: malicious attackers can use disassemblers and other tools to slice through the obscurity of proprietary code and discover its flaws.
There is a practice in the computer field called “security through obscurity.” This practice is based on the hope that nobody will break into your system because they won’t find it or won’t know where its weaknesses lie.
For instance, because many tools such as Google Docs assign URLs or file names containing long strings of random characters, many people think they don’t have to protect the documents any further. Security through obscurity is the principle behind hiding source code.
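The mechanism behind such “unguessable” links is simple enough to sketch. The following Python snippet illustrates the general idea only; it is not any particular product’s code, and the domain and function name are made up:

```python
import secrets

BASE_URL = "https://docs.example.com/d/"  # hypothetical sharing domain, for illustration only

def make_share_link() -> str:
    """Generate a link whose only protection is that its token is hard to guess."""
    token = secrets.token_urlsafe(32)  # 32 random bytes -> roughly 43 URL-safe characters
    return BASE_URL + token

if __name__ == "__main__":
    # Anyone who learns this URL can open the document; nothing checks *who* is asking.
    print(make_share_link())
```

The token is effectively impossible to guess, but the scheme still grants access to anyone who obtains the URL, which is why it counts as obscurity rather than real access control.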
Security through obscurity is sometimes useful in conjunction with other, more robust practices such as encryption. But the principle is generally disparaged by security experts because sophisticated attackers can find ways around obscurity. In an age of fast, massive computation that can sift through terabytes of data, it becomes less and less feasible to protect something simply by keeping it secret.