Speed Up Multiple SSH Connections to the Same Server


If you run a lot of terminal tabs or scripts that all need to make OpenSSH connections to the same server, you can speed them all up with multiplexing: making the first one act as the master and letting the others share its TCP connection to the server.

If you don't already have a config file in the .ssh directory in your home directory, create it with permissions 600: readable and writeable only by you.
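If you're setting this up from scratch, a minimal sketch of creating the file with safe permissions (standard OpenSSH locations assumed):

```shell
# Create ~/.ssh and the config file with safe permissions.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"          # directory: accessible only by you
touch "$HOME/.ssh/config"
chmod 600 "$HOME/.ssh/config"   # file: readable and writeable only by you
```

ssh will refuse to use a config file that other users can write to, so the permissions matter.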

Then, add these lines:

Host *
   ControlMaster auto
   ControlPath ~/.ssh/master-%r@%h:%p

ControlMaster auto tells ssh to try to start a master if none is running, or to use an existing master otherwise. ControlPath is the location of a socket for the ssh processes to communicate among themselves. The %r, %h and %p are replaced with your user name, the host to which you're connecting and the port number—only ssh sessions from the same user to the same host on the same port can or should share a TCP connection, so each group of multiplexed ssh processes needs a separate socket.
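An aside not covered in the article: newer OpenSSH releases (5.6 and later) also support a ControlPersist option, which keeps the master connection open in the background for a while after the first session exits, so later sessions still get the speedup. A sketch:

```
Host *
   ControlMaster auto
   ControlPath ~/.ssh/master-%r@%h:%p
   ControlPersist 10m
```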

To make sure it worked, start one ssh session and keep it running. Then, in another window, open another connection with the -v option:

~$ ssh -v example.com echo "hi"

And, instead of the long verbose messages of a normal ssh session, you'll see a few lines, ending with:

debug1: auto-mux: Trying existing master
hi

Pretty fast.
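You can also ask ssh directly whether a master is running for a given host, and tell it to shut down, using the -O control commands (supported by reasonably recent OpenSSH clients; `example.com` is a placeholder):

```shell
# Ask the master (found via the configured ControlPath) for its status.
ssh -O check example.com

# Tell the master to exit once its sessions are done.
ssh -O exit example.com
```

With no master running, `-O check` prints an error and exits nonzero, which makes it handy in scripts.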

If you have to connect to an old ssh implementation that doesn't support multiplexed connections, you can make a separate Host section:

Host antique.example.com
   ControlMaster no

For more info, see man ssh and man ssh_config.

______________________

Comments


Multiple SSH connections

Koen K:

In fact, there are good reasons to speed up multiple SSH connections.
On my home server I back up data for some family members over SSH with rsync+DeltaCopy. These are scheduled jobs, and yes, I have scheduled the different backups at different times of day. But now and then those backups take a bit longer and they "collide", and I still want remote access without having to care whether a backup is running.

In short: nice tip!

put it in /tmp, not ~/.ssh

Johannes Berg:

I used to do this, but sometimes my machine crashes, and it's annoying to have to remove the sockets from ~/.ssh/ manually after a crash, so I put them into /tmp/ now.

But you don't!

FeRD:

But you don't have to remove the sockets from ~/.ssh/ -- ssh will simply DTRT if it finds an old socket lying around. There's no problem with those sockets being left in ~/.ssh/, even in the face of a crash.

I suppose, if you're an obsessive neat freak (and connect to a lot of different ssh hosts frequently, and crash your machine a lot!), you might not like seeing the dead sockets lying around in ~/.ssh/. (Who normally looks in ~/.ssh/ at all?) I'd just put them in ~/.ssh/master/ or something, then, so they'll be out of sight.

I'll grant you, there's ample precedent for keeping them in /tmp/. (/tmp/.esd-$UID, /tmp/pulse-$NOISE, /tmp/orbit-$USER, ...) I think my concern would be that if I have really long-life ControlMaster connections, they might run afoul of various tmpcleaning activities or something. A path under ~/.ssh/ isn't likely to be bothered by the system.

(And, of course, the permissions on your ~/.ssh directory should offer better protection than /tmp/ against other local users spying on your active .ssh connections, target hostnames, and usernames on those other hosts. That could be a concern if you use very-public systems. Less-than-honorable types could potentially make nefarious use of that information.)

it is great,

duanjigang:

It is great, trying it now...

query

Tigers2:

It is not working for me:

My user name is tiger, the destination hostname is bulls and the port is 22. I modified the config to:

Host *
ControlMaster auto
ControlPath ~/.ssh/master-%tiger@%bulls:%22

Is it correct?

Sorry, Tigers2, that's not

FeRD:

Sorry, Tigers2, that's not correct -- the config should read:

Host *
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p

exactly as written, no matter what the parameters of your connection are. The %r, %h and %p will be replaced with the proper values by ssh itself when it creates the connection.

Thus, if you have those lines above in your ~/.ssh/config, exactly as written, and then you were to type
ssh -p 22 tiger@bulls
ssh would substitute the correct %-parameters, and create the file ~/.ssh/master-tiger@bulls:22 as your ControlMaster socket.

A few notes

Michael Raugh:

As a sysadmin I often open multiple sessions into the same host. The reasons vary. Sometimes it's because I want to tail -f a log file in one terminal while running a process in another, or because I want to keep the MySQL client open to type in commands for reference while I'm editing script code with vim. Sometimes I'm just too impatient to wait for the process in terminal A to finish before starting the next one. (Hey, I didn't say they were necessarily good reasons!)

I also frequently open two SSH sessions to the same Cisco switch when working with port setups: one window displaying the current configuration (which I get by ssh hostname "show config" | less so I can search, scroll, copy and paste as needed) and another with the live command line session. Thus, I discovered that my switches (Catalyst 4xxx series) don't support multiplexed connections -- if I close and re-run the configuration view command while I still have the command-line session up, the SSH call fails and I don't get my configuration view.

BTW, Don didn't mention it, but entries in the .ssh/config file are treated like firewall rules -- the first matching one is used. So if you do have hosts that don't support multiplexing, put the entries for them ahead of the "Host *" section in the file.
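A sketch of that ordering, combining the article's two examples (the specific host comes first, so it wins):

```
Host antique.example.com
   ControlMaster no

Host *
   ControlMaster auto
   ControlPath ~/.ssh/master-%r@%h:%p
```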

<MR>

Or you could use screen(1)

Anonymous:

Or you could use screen(1) like everyone else.

One more note...

RichS:

If you log out of the original connection, you don't get your shell back until other sessions using the same connection are ended too.

Logging out of the original connection doesn't end the other sessions.

screen doesn't do everything

RichS:

The article doesn't mention it, but this works for scp and sftp too. Open one ssh session, and multiplex your file copying operations.

It also doesn't mention that -v is simply the verbose option, which has nothing to do with multiplexing. Marti is only using it to show that it worked. To be thorough, you should test this before making changes to ~/.ssh/config, to compare the output.

But you're right -- if all you wanted to do was create multiple ssh sessions.

This breaks the sftp kioslave

Gheesh:

This article is awesome, especially for those of us who open a lot of remote terminal sessions. However, this solution breaks sftp folder access under KDE (I haven't tested in GNOME).

SOLVED

Gheesh:

I just used an alternate configuration file for my terminal sessions (using the -F switch) and left the default one for everything else.

Problems with secure VNC sessions?

FeRD:

One caveat is that this seems to cause problems with SSH-tunneled VNC sessions (using the -via command line argument to vncviewer), at least with the vnc-4.1.2-35.fc10 package from Fedora 10. It works fine if it's the first connection, but not so if it tries to piggyback on to an existing master.

There may be some alternate ssh invocation that can be specified via the VNC_VIA_CMD envvar that will make things functional, but I haven't found one. When -via connections are attempted by vncviewer using the default invocation:
VNC_VIA_CMD='/usr/bin/ssh -f -L "$L":"$H":"$R" "$G" sleep 20'
the debug output from the control master shows that it's sending through the 'sleep 20', but vncviewer still isn't able to connect via the forwarded port. Assuming the port is even forwarded, in this scenario.

Fix for secured VNC invocations

FeRD:

Turns out, adding a -M flag to the /usr/bin/ssh invocation used by vncviewer (indicating it should create a NEW master connection) is all that's required to fix 'vncviewer -via' connections when using ControlMaster.

So,

export VNC_VIA_CMD='/usr/bin/ssh -M -f -L "$L":"$H":"$R" "$G" sleep 20'

will enable vncviewer to tunnel to remote hosts even in the presence of an existing ControlMaster connection. It won't take advantage of the existing connection, but neither will it interfere with OTHER connections using the configured ControlMaster (which still owns the ControlPath socket).

The ControlMaster tip is a great find, the speedup to my gvfs ssh:/// mounts and terminal scp connections when re-using an existing connection is unbelievable! (From on the order of "a half-second or more" to "instantaneous", and that's just on the LAN!)

Thanks much for the info!

For me the "-M" fixes part

Anonymous:

For me the "-M" fixes part of the problem. It lets me enter my password, but then it hangs for quite some time before I can log in to the VNC server.
Can't one make it reuse the existing master?

Your secure VNC session

FeRD:

Your secure VNC session won't be able to use the shared master connection, because it needs to create an SSH tunnel for the VNC display port as part of its SSH connection. And since that tunnel will change depending on what VNC display you're connecting to, you can't anticipate it for opening during the ControlMaster connection setup. (If there's a way to command an existing ControlMaster ssh instance to create a new tunnel on the fly, I don't know it.)

I suppose, if you only ever connect to one VNC server, you might be able to define the proper tunnel and other VNC params as part of the ControlMaster setup. Then you'd have it always in place, and when vncviewer picked up the ControlMaster connection it would have the tunnel it needs.

But that just doesn't sound like a very reliable setup, to me.

I don't know why your connection is hanging; it shouldn't -- even without reusing the ControlMaster, the time vncviewer takes to connect when using VNC_VIA_CMD with the -M flag should be no longer than the time it normally takes for the first connection to that host. I wonder if, perhaps, something run on the remote host is blocking on that "sleep 20" in VNC_VIA_CMD? You could try lowering the sleep to 10 or 5 seconds, experimentally.

My other suggestion would be to set up ssh keys and do key-based authentication, instead of using passwords. It's more secure _and_ it might solve your connection-delay problem.

Opening tunnels on an existing SSH connection

Michael Mior:

Not sure how useful this will be to anyone here, but I thought I would point out that it is possible to open new SSH tunnels on an existing connection. Unfortunately, the only way I know to do this is manually. Hit Enter and then ~C (tilde followed by a Shift+C). Then you can enter standard port forwarding options as you would on the command line (-L, -R). For help, just type ? at the command prompt. For a list of other things you can do with the SSH escape sequence, enter ~?
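For example, at an existing session's prompt (the port numbers here are made up for illustration, as for a VNC display on port 5901):

```
~C
ssh> -L 5901:localhost:5901
```

Remember that the ~ escape is only recognized at the start of a line, hence hitting Enter first.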

Wow, that's snappy!

Joachim Nilsson:

I have a quick build/media/etc server in our basement that I connect to using SSH for building stuff. Now my Emacs can use the same connection to edit the files I build! I simply issue C-x C-f and type in the mdns name after a leading slash — /myserver:/home/jocke/trunk/work/file.c — tab-completion and everything is so much more snappy than before.

Imagine sitting in the sofa at home to do your work, with the TV showing various episodes of StarTrek running in the background. That's me, with an oldish ThinkPad in my lap, good keyboard, big 15" 4:3 screen and decent wireless, but too slow disk and CPU. It's the perfect thin client and now I can't even notice the difference! :-)

Thanks a bunch! Articles like these really help us bums that are too lazy to keep track of the ever so rapid development of OpenSSH!

Happy New Year, from Sweden to everyone!

practical instance to the SSH multiplexing tip

Eliran Itzhak:

Thank you very much for this SSH tip. It's very useful.
I have a script that connects via SSH to a remote server and copies files from there, if they exist.
Before implementing this tip, SSH/SCP connections took a long time, and the script was not very efficient.
Now I've added a check at the beginning of the script to see whether we have a working ssh connection,
and it works much faster than it used to.

Eliran.

a few more thoughts on the subject

dm:

jetole,

glad you replied.

Right now I am inclined to the view that establishing multiplexed ssh sessions to submit simple commands faster (what you showed) sounds great and so forth, but it's basically kid's stuff. Doubly so for 1 versus 0.1 seconds.

> I havn't even mentioned SCP/file transfers yet

Now that would make a great addition to the article.
Run multiple scenarios, accumulate sufficient statistics and show times for say 128k data chunk transfer versus master/slave multiplexed ssh sessions transferring "concurrently" 64k each. Also include network specs whether it is wireless or infiniband or classical 1 gigabit (how many cards per a node? describe network topology.) I suspect if you did that, it would convert a lot of viewers into subscribers... :-)

Thank you!

Why suggestion?

jetole:

dm! Sure, the numbers you asked for sound helpful, especially if you are not a very network-oriented individual who already knows why YOU need multiple SSH connections, or if you don't understand TCP well enough to see the performance gain in a shared TCP connection. I can't tell you why you need multiple connections to the same server, even though I personally find them helpful on my twin-monitor workstation at the office and my twin-monitor desktop at home, and of course I haven't even mentioned SCP/file transfers yet. If real numbers are what you need, take this excerpt from my tests:

Pre suggested changes on a secondary connection to the same server:
[jetole@XXXXXX ~]$ time ssh XXXX.XXXXXXXX.com exit

real 0m1.208s
user 0m0.008s
sys 0m0.000s

Post suggested changes on a secondary connection to the same server:
[jetole@XXXXXX ~]$ time ssh XXXX.XXXXXXXXX.com exit

real 0m0.163s
user 0m0.000s
sys 0m0.004s

Results:
[jetole@XXXXXX ~]$ calc '1.208/0.163'
~7.41104294478527607362

So it appears that if you can find a reason to make a secondary connection to a server, that connection is likely to run approximately 7.4 times faster. Although I ran these tests 5 times on each server and took the mean, this is by no means a proper benchmark; it is, however, the closest thing to proper numbers I'm willing to provide at the moment. It seems the only thing you're lacking is a reason to make multiple simultaneous SSH connections to the same server, and although I have my own, you'll really have to come up with your own reasons for making the second connection.

suggestion

dm:

Great topic & article. Thanks!

Just a suggestion. It'd be nice to see a practical instance: why/when one would need multiple ssh connections to the very same host (a preamble to the story, if you wish), plus a quantitative, measurable improvement (not just "pretty fast" but actual numbers). It would then be easier to gauge whether the benefits are worth the trouble of modifying ssh_config. If there is sufficient interest in speeding up ssh, perhaps consider including High Performance SSH from the Pittsburgh Supercomputing Center: http://www.psc.edu/networking/projects/hpn-ssh/

Slight correction

Mark B:

Actually, your sentence above should be corrected to "only ssh sessions from the same user ON THE SAME LOCAL HOST to the same DESTINATION host on the same port can or should share a TCP connection, so each group of multiplexed ssh processes needs a separate socket."

I.e., your ControlPath setting should include %l, e.g. ControlPath ~/.ssh/master-%l-%r@%h:%p

In general this is needed as you may be sharing your ~/.ssh via an NFS mounted home area, etc.
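A sketch of the suggested config with the local-hostname token included (%l expands to the local host name):

```
Host *
   ControlMaster auto
   ControlPath ~/.ssh/master-%l-%r@%h:%p
```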
