Corrections to "A Rough Year for OpenSSH"
Following the posting of Jose Nazario's article, "A Rough Year for OpenSSH", on the Linux Journal web site on January 2nd, we received an e-mail from Theo de Raadt, the OpenSSH project founder. His purpose in writing was to clarify a couple of errors and misrepresentations in the article.
Regarding the crc32 deattack code, Jose's article states:
Due to the nature of the vulnerability, this issue was addressed immediately by both the SSH developers and the OpenSSH team. Since SSH version 1.2.32 and OpenSSH version 2.3.0, this issue has been fixed. All SSH users should have upgraded as this is being actively exploited.
In actuality, OpenSSH's fix was made available in October 2000, several months before the hole was publicly disclosed. The fix was included in OpenSSH 2.3.0, which shipped right around that time--October 2000, not 2001--as can be seen in the revision information Theo sent:
revision 1.10
date: 2000/10/31 13:18:53;  author: markus;  state: Exp;  lines: +2 -2
branches: 1.10.2;
so that large packets do not wrap "n"; from netbsd
Therefore, the article's statement that "this issue was addressed immediately by both the SSH developers and the OpenSSH team" is incorrect--the OpenSSH fix was available months earlier.
Theo also points out another inaccuracy in the above statement: SSH.com took roughly three months to put out an official release containing the fix.
Heather Mead is Associate Editor of Linux Journal.