If you want an example of evolution in action, look at the GNU/Linux password system. Although it includes the basic UNIX password structure as a vestigial organ, natural selection in the form of crackers has forced the evolution of shadow passwords, the MD5 algorithm and PAM. Passwords have even found new niches in the form of boot managers, remote login formats and advanced security systems. All these tools are used daily, but a closer look at them can help you make your system a little more secure.
Back in the Jurassic days of the 1970s, the standard UNIX password structure appeared. Nowadays, many users access it graphically through gdm, but otherwise its structure hasn't changed much. To log in to an account, users enter a password of up to eight characters (it was six or seven in the past), and the password is encrypted into a key using the DES (Data Encryption Standard) algorithm. This key is stored in the second column of /etc/passwd, where any user can view it.
What has changed is the competition. Nowadays, the DES algorithm can be cracked in seconds. And, to make matters worse, there's little choice in the standard structure except to store the key for each password in a public place, where any intruder can find it. The alternative is to place severe restrictions on ordinary users and prevent them from using basic commands like ls -l. Although some security experts would be happy with a computer that was turned off and left in a lead-lined vault several miles beneath the surface of the earth, these restrictions are largely unacceptable.
By the mid-1990s, with the Internet's popularity giving crackers more opportunities, the competition was becoming intense. Because of this pressure, defenses began to evolve. At first available only as add-ons, by the dawn of modern times in the late nineties, these defenses were in a symbiotic relationship with one another and were standard parts of every distribution.
Shadow passwords get their name because they are the hidden counterparts of basic passwords. The difference is that, instead of being native to a public file like /etc/passwd, their habitat is the second column of /etc/shadow, a file readable only by the root user. If no password is set for an account, the column is marked by an asterisk or an exclamation mark, depending on the distribution. Only an “x” in the password column of /etc/passwd is left to mark their passage.
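The layout of a shadow entry is easy to inspect. Here is a minimal sketch that splits the colon-separated fields with the shell; the user "alice", her hash and the dates are all made up for illustration:

```shell
# A hypothetical /etc/shadow entry (user, hash and dates are invented):
line='alice:$1$u8s2f$Xk9hT0:19500:0:99999:7:::'

# The fields are, in order: name, hashed password, last change (in days
# since 1970-01-01), minimum age, maximum age, warning period, and the
# inactivity, expiry and reserved fields.
IFS=: read -r name hash lastchg min max warn _ <<<"$line"
echo "user=$name last-changed-day=$lastchg max-age=$max"
```

On a live system, the same entry can be viewed with `sudo grep '^alice:' /etc/shadow`, since only root may read the file.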
Since shadow passwords became standard, manual configuration has all but disappeared. All the same, shadow passwords generally come with a toolset. pwconv and grpconv keep user and group entries in /etc/shadow and /etc/passwd in sync, but this housekeeping is generally done automatically when a password is created for the account. Similarly, pwunconv and grpunconv allow the creation of regular passwords, but few modern systems ever require this devolution. About the only useful shadow password tool is Debian's shadowconfig, whose on-and-off options can tell you quickly whether shadow passwords are enabled.
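In the rare case that you do need the toolset, the commands are invoked as root and take no arguments worth mentioning; a sketch of the typical calls:

```shell
# Move any plain-text hashes from /etc/passwd into /etc/shadow
# (and likewise for groups); normally done automatically:
sudo pwconv && sudo grpconv

# Reverse the process -- the "devolution" few systems need:
sudo pwunconv && sudo grpunconv

# On Debian, enable (or check) shadow passwords:
sudo shadowconfig on
```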
Another evolutionary dead end is the set of additional columns in /etc/shadow. Superficially, these columns promise extra control over when passwords are changed, when warnings of the need to change are given and when an account is disabled if its password is not changed. These columns could be a major survival trait. Unfortunately, they have to be entered individually for each user. More importantly, they must be entered in days since January 1, 1970. This measurement is so cumbersome to calculate that many system administrators leave most of the columns blank and fill the rest with impossibly large numbers so they can ignore them.
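The day counts themselves are just epoch seconds divided by 86,400, which GNU date makes straightforward to compute. A quick sketch, using an arbitrary expiry date as the example:

```shell
# Today's date as days since 1970-01-01 -- the unit /etc/shadow expects:
days_now=$(( $(date +%s) / 86400 ))
echo "today is day $days_now"

# The same calculation for an arbitrary date (GNU date syntax):
days_expiry=$(( $(date -d '2014-01-01' +%s) / 86400 ))
echo "that date is day $days_expiry"
```

The chage command performs the same arithmetic for you and is usually less error-prone than editing the columns by hand.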
Another adaptation of the basic password structure is the use of the MD5 encryption algorithm. MD5 is the latest in an evolutionary line of algorithms developed by Ronald Rivest, an MIT professor and a founder of RSA Security, the dominant company in encryption for over a decade. It is a descendant of MD2, an algorithm optimized for 8-bit machines, and a modification of MD4, an algorithm for 32-bit machines that Rivest and his collaborators felt was rushed into release. First released to the public domain in 1991, MD5 was further modified in 1994.
Today, MD5 remains a standard in authentication, even though Rivest insists that it was never intended for that use. More elaborate algorithms have been developed in recent years, such as IDEA, Skipjack or Blowfish, but none has been proven to outperform MD5 consistently enough to replace it.
MD5 became available as an add-on for GNU/Linux in the mid-nineties and is now a standard part of most distributions. From a security perspective, the advantages of MD5 over the DES used in the standard password structure are that it allows for longer passwords and provides more sophisticated encryption. When MD5 is enabled, passwords of up to 256 characters are possible. Regardless of the password's actual length, MD5 passes it through four rounds of hashing to create a 128-bit key. Since this process is not reversible (at least, not without considerable effort), MD5 is classified as a “one-way hash”.
MD5 is an option during the installation of every major distribution. Although MD5 can create problems with network information systems such as NIS, on most modern workstations or networks there is no reason not to use it. If you are unsure whether MD5 is enabled, check whether the password column in /etc/shadow starts with $1$, or search the files in /etc/pam.d for lines that end in “md5”. If it isn't enabled, locating the necessary files and reconfiguring the system is time-consuming enough that a new user might be tempted to upgrade or re-install instead.
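Both of those checks can be scripted. In this sketch, the shadow entry fed to awk is a made-up example (the user "carol" and her hash are invented); the commented-out lines show the equivalent checks on a live system:

```shell
# A hypothetical shadow entry whose key carries the MD5 marker:
entry='carol:$1$salt1234$h4shh4shh4shh4shh4shh4:15000:0:99999:7:::'

# The $1$ prefix on the second field identifies an MD5 hash:
echo "$entry" | awk -F: '$2 ~ /^\$1\$/ { print $1 " uses MD5" }'

# On a live system, the same test against the real file (requires root):
#   sudo awk -F: '$2 ~ /^\$1\$/ { print $1 }' /etc/shadow
# and the PAM side of the check:
#   grep -rw md5 /etc/pam.d/
```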
-- Bruce Byfield (nanday)