ssh: Secure Shell
The ssh protocol is designed to be flexible and supports multiplexing several communication channels within a single TCP stream. This design choice has two effects: on the one hand, the implementation of the protocol is more elaborate than that of other TCP-based protocols; on the other, the end user can exploit the added flexibility to achieve new goals. One of these goals is establishing secure communication channels between the X server and client applications. This feature is enabled by default whenever an ssh session is established.
The idea behind X11 forwarding is quite straightforward. The ssh application runs locally and is able to connect to the local X server without resorting to the network (via local Unix-domain sockets). Remote graphic programs, on the other hand, can connect locally to the sshd server which spawned the remote shell (via the loopback network interface). The remote sshd can encapsulate graphic data in the secure communication channel it owns, to complete the path linking the graphic application and the X server.
Figure 1 shows how a remote X application (running on a computer named sandra) securely connects to the local X server (on morgana).
When you log into a remote computer through ssh, the DISPLAY environment variable is automatically set to a proper value, and no user intervention is needed to establish the graphical channel. The following screenshot shows the automatic assignment of DISPLAY:

morgana% ssh sandra env | grep DISPLAY
DISPLAY=sandra.systemy.it:10.0
It is apparent that any graphic program invoked on sandra by the ssh session will connect to a local display (i.e., sandra:10).
The ssh/sshd programs can also forward other TCP channels, according to the user's needs. This capability can be activated by specifying command-line switches to the client ssh program. I won't describe the mechanisms here, as the manual page for ssh is well written.
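As a quick sketch of what those switches look like (the host name sandra and the port numbers are illustrative), a local port can be tunnelled with -L and a remote one with -R:

```shell
# Forward local port 8080 to port 80 as seen from sandra; any
# connection to localhost:8080 travels encrypted up to sandra first.
ssh -L 8080:localhost:80 sandra

# The reverse direction: connections to port 2525 on sandra are
# tunnelled back to port 25 on the local host.
ssh -R 2525:localhost:25 sandra
```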
The main problem when establishing a connection through an insecure network is performing reliable authentication. The ssh package is quite pedantic about authentication, and you'll be prompted for your password more frequently than usual. Typing passwords over and over is distressing and can be avoided by proper configuration of system files. Note also that any password you type is transmitted only after the encrypted communication channel has been established.
You can invoke ssh -v (verbose) to get a report of what is happening. The information returned is very useful if you are unexpectedly prompted for a password. Now, let's look at the steps ssh performs to authenticate a user to the remote server.
First, if the target account has no password, access is granted. If it does, different kinds of authentication engines are tried, each of which can be enabled or disabled in the server. For example, by default “PasswordAuthentication” and “RhostsRSAAuthentication” are enabled, and “RhostsAuthentication” is disabled.
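With those default settings, the relevant fragment of /etc/sshd_config looks like the following (a sketch; your file may spell out the defaults differently or omit them entirely):

```
PasswordAuthentication  yes
RhostsRSAAuthentication yes
RhostsAuthentication    no
```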
The following is the sequence of actions when you try to log into a server running with the default configuration—which can be changed in /etc/sshd_config.
The client receives the public key of the server. If the key is not recognized, ssh asks the user interactively whether to continue the connection. By confirming, the user trusts that the remote host matches its name, and the server's public key is saved on the client in the file $HOME/.ssh/known_hosts. This step is skipped if the server host is known system-wide (i.e., it appears in /etc/ssh_known_hosts).
The client tries authenticating through “RhostsRSA”. This requires that “Rhosts” authentication succeeds: either .rhosts in the user's home directory or /etc/hosts.equiv allows login. sshd is more pedantic than rlogind in checking these files and denies permission if any of them is group-writable or world-writable. Needless to say, entries beginning with the “plus” character in either file are disregarded. Moreover, .rhosts is not even used if the home directory of the user is group-writable or world-writable, and /etc/hosts.equiv is not used to authorize root logins. In addition to the standard files, sshd also checks .shosts in the home directory of the user and /etc/shosts.equiv. These files are useful if you still wish to run rshd or rlogind on the server hosts while trusting fewer hosts than you trust via ssh.
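sshd's permission checks are easy to satisfy with chmod. The following sketch uses a scratch directory to stand in for a user's home directory (the path is illustrative):

```shell
#!/bin/sh
# Stand-in for a user's home directory.
demo=$(mktemp -d)
touch "$demo/.rhosts"

chmod go-w "$demo"          # home must not be group- or world-writable
chmod 600 "$demo/.rhosts"   # .rhosts readable and writable by owner only

ls -ld "$demo" "$demo/.rhosts"
```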
If the previous step succeeds, RSA host authentication is tried (RSA is the Rivest-Shamir-Adleman public-key algorithm). This technique consists in the server sending a challenge to the client, which must reply correctly. The challenge consists of random data encrypted using the public key of the client host; the client must decrypt the data with its private key and return a checksum of it. The server can build such a challenge only if it knows the public key of the client, which it knows only if the remote user agreed to trust the client (local) host. RSA is used to prevent authorizing untrusted hosts which forge DNS records or which temporarily steal the IP address of a trusted host.
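The challenge-response exchange can be re-enacted with the openssl command-line tool. This is a sketch of the idea only, not the actual ssh wire protocol:

```shell
#!/bin/sh
# Toy re-enactment of an RSA challenge between a "server" and a "client".
dir=$(mktemp -d)

# The client host owns a key pair; the server holds the public half.
openssl genrsa -out "$dir/client_key.pem" 2048 2>/dev/null
openssl rsa -in "$dir/client_key.pem" -pubout \
        -out "$dir/client_pub.pem" 2>/dev/null

# Server side: encrypt random data with the client's public key.
head -c 32 /dev/urandom > "$dir/challenge"
openssl pkeyutl -encrypt -pubin -inkey "$dir/client_pub.pem" \
        -in "$dir/challenge" -out "$dir/challenge.enc"

# Client side: decrypt with the private key and return a checksum.
openssl pkeyutl -decrypt -inkey "$dir/client_key.pem" \
        -in "$dir/challenge.enc" -out "$dir/reply"
md5sum < "$dir/reply" | cut -d' ' -f1 > "$dir/reply.md5"

# Server side: the checksums match only if the client held the private key.
md5sum < "$dir/challenge" | cut -d' ' -f1 > "$dir/challenge.md5"
cmp -s "$dir/challenge.md5" "$dir/reply.md5" && echo "challenge passed"
```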
If either of the previous steps fails, i.e., if “RhostsRSA” authentication as a whole fails, the client falls back to “Password Authentication” and asks the local user for a password.
If your .rhosts file is correctly configured and you are still prompted for a password, the problem is most likely that RSA authentication is failing. The easiest way to store the client's public key on the server is to invoke ssh right away to connect back to the client computer. When you confirm the connection, the server (now acting as a client) downloads the public key of the local host (now acting as a server).
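Using the host names of Figure 1 (illustrative), the exchange looks like this:

```shell
# On morgana, log into sandra as usual...
ssh sandra
# ...then, from the remote shell on sandra, connect back once:
ssh morgana true
# Answering "yes" at the prompt stores morgana's public host key in
# ~/.ssh/known_hosts on sandra, so RSA host authentication can succeed.
```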