[vox] Open Source and Security
ME
vox@lists.lugod.org
Mon, 1 Mar 2004 14:37:13 -0800 (PST)
Byron Roberts said:
> I feel like I'm totally missing something here....I thought that one of
> the big advantages of
> OSS was increased security, precisely because the code is accessible and
> able to be
> modified? Or as a newbie is there some piece of information that I'm
> lacking?
Many Open Source projects are started by people who know very little about
the problem they are tackling -- especially about the security risks in
their code. (There are obvious exceptions to this.) Many are doing their
work as an academic exercise in self-improvement, toward better
understanding a problem. As a result, many will make fundamental mistakes
in the organization of the project -- forcing a "new version" to be
created which starts over from a new list of System Requirements.
Many universities teach programmers how to code, and some of the first
examples in introductory C and C++ courses hand young programmers code
with security holes in it. There is very little in the way of a
comprehensive security education for CS majors. To most CS majors,
security is an optional elective -- something to learn and forget.
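A concrete example of the kind of textbook hole I mean (illustrative code, not from any particular course): the classic "read a name with gets()" exercise has no bounds check at all, so any input longer than the buffer smashes the stack. A bounded version is barely longer:

```c
#include <stdio.h>
#include <string.h>

/* read_line: bounded read of one line into buf, stripping the
 * trailing newline.  Returns buf on success, NULL on EOF/error. */
static char *read_line(char *buf, size_t size, FILE *in)
{
    if (fgets(buf, (int)size, in) == NULL)
        return NULL;
    buf[strcspn(buf, "\n")] = '\0';
    return buf;
}

/* The insecure textbook version, for contrast:
 *
 *     char name[16];
 *     gets(name);    -- no way to limit how much gets() writes
 */
```

With a 16-byte buffer, gets() will happily write 100 bytes of attacker input past the end; fgets() stops at 15 characters plus the terminator.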
Corporations selling proprietary products rely on a well-known fallacy in
security called "security by obscurity." Their idea is simple: "if we do
not show people how we build our software, we can better hide our
security problems."
Second, corporations will likely *prefer* to have a certain number of
security holes in their software. Why?
Consider a case where a company wants the consumer to buy a new release
of a product. However, release 2 does not have many more features than
release 1, and release 1 works without requiring the consumer to buy
anything new.
Now the company chooses to abandon support for release 1, and focus on
support for release 2 while they engineer release 3.
A "security hole" is found that impacts release 1 and 2, but the company
says, "We will only provide a patch for release 2."
Now the consumers face a difficult decision:
- keep using the insecure software (and not pay for the upgrade),
- pay for the upgrade to the patched release, or
- stop using the software entirely and switch to an alternative.
I believe businesses accept the risk of security issues, but also plan
for a development cycle fast enough that older versions with security
holes are considered "obsolete" before "too many holes" are discovered.
Here is another way to view it:
Compare the number of bugs and security holes found in MS code to the
number found in Open Source code.
Now suppose the source code for OpenSource software is available for legal
peer review.
Also suppose the source code for the commercial software is not available
for legal peer review.
Which project will have its bugs and security holes found sooner?
Are people (by percent of users) more willing to examine code if the
source is available, or if they must break a law or license and reverse
engineer an application?
Now suppose the vendor stops supporting the "obsolete" project. The
consumer has no recourse if the source is closed. However, if the source
is open, the consumer could still hire a coder to fix the problems.
Certainly, the Linux Kernel has had a few issues over the last year, but
at least these issues have been fixed. Consider the various holes in MS
Windows which were not actually fixed, but patched to prevent *specific*
exploits instead of repairing fundamental flaws.
It is like going to a witch doctor and saying, "when John pushes me on
the neck, it hurts," and the witch doctor replying, "here is an amulet of
'John Pushing You in the Neck' protection." Of course, this won't work
when Ed pushes your neck.
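The difference can be sketched in C (a hypothetical illustration, not code from any real product): the "amulet" patch rejects the one payload that is publicly known, while the real fix bounds the copy and closes the whole class of overflows.

```c
#include <stdio.h>
#include <string.h>

#define KNOWN_EXPLOIT "AAAA\x90\x90"   /* signature of one known payload */

/* "Amulet" patch: refuse the payload we know about.  Returns 0 if
 * blocked, 1 if handled.  The unbounded strcpy() remains, so ANY other
 * over-long input still smashes the stack -- Ed can still push. */
static int patched_handler(const char *input)
{
    char buf[32];
    if (strstr(input, KNOWN_EXPLOIT) != NULL)
        return 0;                   /* blocks John, and only John */
    strcpy(buf, input);             /* fundamental flaw untouched */
    (void)buf;
    return 1;
}

/* Real fix: bound the copy, which defeats every overflow of this
 * buffer -- known exploits and future ones alike. */
static int fixed_handler(const char *input)
{
    char buf[32];
    snprintf(buf, sizeof buf, "%s", input);  /* truncates safely */
    (void)buf;
    return 1;
}
```

patched_handler falls to the next payload someone invents; fixed_handler does not care what the payload looks like.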
-ME