Tighter SSH Security with Two-Factor Authentication
We need to close a potential vulnerability before using this system in the wild.
Using ssh-agent and agent forwarding allows processes on the remote SSH server to use the private keys cached by the agent on your local computer. However, if you use this system to log in to multiple computers, an intruder on one machine can potentially hijack the forwarded agent connection to break into another machine. In that case, this system could be more dangerous than one using static passwords.
To illustrate the problem, let's expand our example network from two to three nodes by adding machine3 to the mix. Create key pairs for both bob and root on machine3, as described in Examples 1 and 2, and add root's private key to ssh-agent on machine1.
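In shell terms, that setup looks roughly like the following; the ssh-keygen options and the public-key installation step are assumptions that simply mirror the convention from Examples 1 and 2:

# On machine1: create key pairs for bob and root on machine3,
# keeping the private keys on the USB disk.
ssh-keygen -t rsa -f /media/usbdisk/key-rsa-bob@machine3
ssh-keygen -t rsa -f /media/usbdisk/key-rsa-root@machine3

# Install the matching .pub files in the authorized_keys files for
# bob and root on machine3, then cache root's private key in the agent:
ssh-add /media/usbdisk/key-rsa-root@machine3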
Now, ssh to machine3 as bob using the agent-forwarding option -A, and run ssh-add -l on machine3. You'll see the public keys for both machine2 and machine3.
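A sketch of those two steps follows; the key file used for bob here is an assumption based on the same naming convention:

ssh -A -i /media/usbdisk/key-rsa-bob@machine3 bob@machine3
ssh-add -l

The resulting listing shows both cached keys: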
2048 fa:5c:4b:73:88:...: ... /media/usbdisk/key-rsa-root@machine2 (RSA)
2048 26:b6:e3:99:c1:...: ... /media/usbdisk/key-rsa-root@machine3 (RSA)
In this example, ssh-agent on machine1 caches the private keys for machine2 and machine3. This single agent allows us to log in as root on either computer. However, using the single agent also potentially allows an intruder on machine2 to log in as root on machine3 and vice versa. This is not good.
Fortunately, we can fix this problem with the ssh-add -c option, and we can add further security by running an individual ssh-agent instance to store one root key for each remote machine. The -c option tells ssh-agent to have the user confirm each use of a cached key. Devoting one ssh-agent instance to each host prevents any as-yet-unknown ssh-agent vulnerability from exposing one machine's key to another.
Using the ssh-add confirm option is easy; simply add the -c option whenever adding a key to ssh-agent. Let's give it a try:
ssh-add -c /media/usbdisk/key-rsa-root@machine2
ssh-add -c /media/usbdisk/key-rsa-root@machine3
You'll be asked to confirm use of the key when you ssh to machine2 and machine3.
You also can use separate ssh-agents to store each key. Let's give it a try; start two agents on machine1, specifying predefined sockets:
ssh-agent -a /tmp/ssh-agent-root@machine2
ssh-agent -a /tmp/ssh-agent-root@machine3
Once again, I'm using an arbitrary yet descriptive naming convention. Set the environment variable, and add the key for machine2:
export SSH_AUTH_SOCK=/tmp/ssh-agent-root@machine2
ssh-add -c /media/usbdisk/key-rsa-root@machine2
Repeat this process for machine3, making the appropriate substitutions:
export SSH_AUTH_SOCK=/tmp/ssh-agent-root@machine3
ssh-add -c /media/usbdisk/key-rsa-root@machine3
Now, log in to machine3 (we go to machine3 first because we just set SSH_AUTH_SOCK to point to machine3's agent):
ssh -A -i /media/usbdisk/key-rsa-bob@machine3 bob@machine3
Run the following command to see which keys you can query from the agent on machine1:

ssh-add -l
You see only the key for root on machine3.
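The output contains only the machine3 entry from the earlier listing, something like:

2048 26:b6:e3:99:c1:...: ... /media/usbdisk/key-rsa-root@machine3 (RSA)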
Exit from machine3, change the environment variable to the machine2 ssh-agent socket, and log in to machine2:
export SSH_AUTH_SOCK=/tmp/ssh-agent-root@machine2
ssh -A -i /media/usbdisk/key-rsa-bob@machine2 bob@machine2
Check your keys again:

ssh-add -l
Checking your keys on machine2 and machine3 reveals only the root key for that machine. In the previous example, by using a single ssh-agent, you would have seen the keys for both machine2 and machine3.
Using separate ssh-agent instances for each machine you log in to requires more work.
Resetting the SSH_AUTH_SOCK variable every time you want to log in to another machine is impractical, so I've written a simple script, tfssh (two-factor ssh), to simplify the process. Its syntax is:
tfssh [username@]host [keydir]
The script [Listing 1 on the LJ FTP site at ftp.linuxjournal.com/pub/lj/listings/issue152/8957.tgz] starts ssh-agent when necessary, sets the environment variable, adds the root key to ssh-agent and logs in to the remote machine as the given user. You also can tell tfssh to look in an arbitrary directory ([keydir]) for its keys and to set a timeout for the key cache.
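The full listing isn't reproduced here, but a minimal sketch of a script in that spirit might look like the following. This is not the author's Listing 1; the option handling, one-hour key timeout and default key directory are assumptions:

#!/bin/sh
# Minimal tfssh-style wrapper (a sketch, not the published Listing 1).
# Usage: tfssh [username@]host [keydir]
target="$1"
keydir="${2:-/media/usbdisk}"
host="${target##*@}"                 # strip an optional username@ prefix
sock="/tmp/ssh-agent-root@${host}"

# Start a per-host agent on a predictable socket if one isn't already running.
if [ ! -S "$sock" ]; then
    ssh-agent -a "$sock" >/dev/null
fi
export SSH_AUTH_SOCK="$sock"

# Cache the root key for this host, asking for confirmation on each use
# and dropping it from the cache after an hour.
ssh-add -c -t 3600 "${keydir}/key-rsa-root@${host}"

# Log in with agent forwarding enabled.
exec ssh -A "$target"

With a script like this in your PATH, logging in becomes a single command, for example, tfssh bob@machine3.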