Accessing Remote Files Easily and Securely
The secure shell, ssh, and its companion, scp, are tools that I use more or less daily. Being able to move files between machines without having to set up Samba or NFS is very handy when working with multiple systems. All you need is to enable the secure shell daemon, sshd.
Before we go into the details of sshfs, let's run through a quick recap of ssh. The secure shell daemon listens on port 22 by default and makes it possible to run an encrypted shell session. With the -Y flag, you even get X11 forwarding, allowing you to run X11, i.e. graphical, programs on the remote machine and display their windows on the terminal you are sitting at.
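For example, assuming the remote machine has an X11 program such as xterm installed, you can start it over ssh and have its window appear on your local display:

$ ssh -Y user@example.org
$ xterm &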
You can configure sshd through the /etc/ssh/sshd_config file (that is the location on my Kubuntu machine). Here, you can disable root access, older protocols, X11 forwarding and so on. The idea is that the more limits you put on remote access, the more secure your system is against potential attacks. You might also want to tune your hosts.allow and hosts.deny files if you plan to expose sshd to the Internet. There are many guides on hardening servers and ssh, so I will not go into details.
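As a sketch, a few commonly used directives in /etc/ssh/sshd_config look like this (the account name in AllowUsers is hypothetical; adapt it to your own setup):

# disable direct root logins
PermitRootLogin no
# refuse the old, weaker protocol version 1
Protocol 2
# allow graphical forwarding with -X/-Y
X11Forwarding yes
# only let the listed accounts log in
AllowUsers user

Remember to restart sshd after editing the file for the changes to take effect.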
To get things up and running, you need to install sshd. In Ubuntu, that means the openssh-server package. For external access, you also need to forward port 22 through your router/firewall and find out your external IP address.
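On Ubuntu and its derivatives, installing the server is a single command (shown here with apt-get; your distribution's package manager may differ), and the daemon starts automatically:

$ sudo apt-get install openssh-server

With the daemon running, you should be able to log onto your machine using your normal user credentials.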
$ ssh user@example.org
user@example.org's password:
Having entered the password, you should now have full access to the remote system.
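You are not limited to interactive sessions; ssh can also run a single command on the remote machine and return, for example:

$ ssh user@example.org uname -a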
The handy scp command, secure copy, works in much the same way. To copy the file test.txt to the user's remote home directory, simply enter:
$ scp test.txt user@example.org:
As before, you are prompted for a password. You can copy in the other direction as well. The command below demonstrates how to copy a file with an absolute path, i.e. one not in the user's home directory, to your local machine.
$ scp user@example.org:/var/log/messages remote-messages
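scp can also copy whole directories with the -r flag. A quick sketch, assuming a projects directory in the remote home:

$ scp -r user@example.org:projects projects-backup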
These two commands mean that you can browse the remote file system and freely copy files between machines. What sshfs does is expose this functionality as a file system that you can mount. Before we look at how, let's take a quick look at what sshfs is.
sshfs is implemented using FUSE and relies on the SFTP part of ssh to access the remote computer. As a remote file access protocol, sshfs is not very good. For instance, multiple users writing to the same file at once can create havoc. The benefits are the inherent security and that it is easy to set up.
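This means that the sshd on the remote machine must have the SFTP subsystem enabled. On Ubuntu it is enabled by default; you can check for a line like the following in /etc/ssh/sshd_config (the path to the helper binary varies between distributions):

Subsystem sftp /usr/lib/openssh/sftp-server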
So, how do you use it? Let's look at a very short demonstration.
$ sshfs user@example.org: remote-home
$ ls remote-home
Desktop Documents Downloads Music
$ fusermount -u remote-home
The initial sshfs command mounts the remote user's home directory onto the local directory remote-home (which must exist beforehand). You can specify another path after the colon to mount any other part of the remote file system. Access is restricted only by the user's rights on the remote machine.
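For instance, to mount the remote /var/log instead of the home directory (assuming an existing local mount point named remote-logs):

$ sshfs user@example.org:/var/log remote-logs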
Using ls, or any other ordinary command, will work as if the remote home directory were mounted locally. All tools work. For instance, you can mount a source tree from a remote machine and build it using your locally installed build tools.
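As a sketch of that workflow, with a hypothetical project path:

$ sshfs user@example.org:projects/myapp myapp
$ cd myapp
$ make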
To unmount the file system, use the fusermount command with the -u flag, from the FUSE utilities package.
To summarize, sshfs is an easy-to-set-up remote file access tool. It needs to be used with care if multiple users are involved, but it makes it dead easy to temporarily access remote file systems, to mount file systems from virtual machines for easier access and monitoring, and to do remote installation, compilation and debugging. All in all, it is one of the tools I always keep handy in my toolbox.
Johan Thelin is a consultant working with Qt, embedded and free software. On-line, he is known as e8johan.