Secure File Transfer

File transfer between Linux systems (and perhaps all POSIX systems in general) is in some ways a neglected subject. The arcane protocols in common use are far from secure, and the SSH replacements offer too much power and complexity. Servers holding highly sensitive data (such as credit card numbers, SSNs, birthdates and so on) often must accept file transfers, yet greatly restrict remote visibility and administration, which is hard to do with the well-known tools.

File transfers with RFC 1867 can offer a number of benefits over most other methods: the highest security and optional encryption, all without requiring entries in /etc/passwd or other credentials for the operating system.

The tools I cover in this article to implement this protocol are sthttpd, an upload CGI utility, stunnel and curl. The examples here were developed on Oracle Linux 7.1, but most of the code is portable and should run on other platforms with minimal changes (with the exception of the systemd configuration).

Why Not FTP?

There have been substantial improvements in security and performance through the years in the FTP server software that is commonly bundled with Linux. It remains easy to configure FTP clients for batch activity with automatic logins:

# ftp.example.com is a placeholder for your FTP server
echo 'machine ftp.example.com login YourName password a_Password' >> ~/.netrc
chmod 600 ~/.netrc
echo -e 'ls -l \n quit' | ftp ftp.example.com

Unfortunately, this is a terrible idea that gets progressively worse with the passage of time:

  • The login, password and file payload are all sent in clear text over the wire in the normal configuration, and there are many utilities to capture them that might be used over an untrusted network.

  • Classic FTP servers listening on port 21 must run as root. If attackers find and exploit a weakness, your OS belongs to them.

  • In "active" FTP, the client and server switch roles in running the connect() and listen() system calls. This causes the TCP connections to open in both directions, introducing problems for firewalls.

  • Unless the FTP server supports chroot() and it is individually and specifically configured for each target user, that user can recursively fetch all accessible files on the system that have world-read permission (see the configuration sketch after this list).

  • An FTP account created for a few files can thus give visibility to just about everything, and most modern FTP clients support such recursive transfers. Worse, an FTP user requires an entry in /etc/passwd on the server, which creates an OS account; if not properly managed, this allows the remote user to log in to a shell or otherwise gain unwanted access.

  • Password aging often is mandated in high-security environments, requiring synchronized password changes on the client and server (usually after a failed overnight batch run).
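
As an illustration of the chroot() point above, vsftpd can jail local users in their home directories with a few directives. This is only a sketch; the file location and option availability vary by version and distribution:

# excerpt from vsftpd.conf (often /etc/vsftpd/vsftpd.conf)
chroot_local_user=YES           # jail local users in their home directories
allow_writeable_chroot=YES      # needed on newer vsftpd if the jail root is writable
chroot_list_enable=YES          # users listed in the file below are NOT jailed
chroot_list_file=/etc/vsftpd/chroot_list

Even with such settings, each remote user still needs an OS account in /etc/passwd, so the remaining objections above still apply.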

Later revisions to the FTP protocol do add TLS/SSL encryption capabilities, but it is unwise to implement them:

man vsftpd.conf | col -b | awk '/^[ ]*ssl_enable/,/^$/'
        If enabled, and vsftpd was compiled against OpenSSL, 
        vsftpd will support secure connections via SSL. This 
        applies to the control connection  (including  login) 
        and also data connections. You'll need a client with 
        SSL support too. NOTE!!  Beware enabling this option.  
        Only enable it if you need it. vsftpd can make no
        guarantees about the security of the OpenSSL libraries. 
        By enabling this  option, you are declaring that you 
        trust the security of your installed OpenSSL library.

The reason for the above warning is that the FTP server runs as root, exposing the encryption library to remote connections with the highest system privilege. There have been many encryption security flaws through the years, and this configuration magnifies their impact.

The OpenSSH suite of communication utilities includes "sftp" clients and servers, but this also requires an account on the operating system and special key installation for batch use. The recommended best practice for key handling requires passwords and the use of an agent:

Our recommended method for best security with unattended SSH operation is public-key authentication with keys stored in an agent....The agent method does have a down side: the system can't continue unattended after a reboot. When the host comes up again automatically, the batch jobs won't have their keys until someone shows up to restart the agent and provide the passphrases to load the keys.—SSH, the Secure Shell, 2nd Edition, Daniel J. Barrett, Richard E. Silverman and Robert G. Byrnes.

Those who blindly rush from FTP to sftp due to security pressures do not understand the complexities of key generation, the ssh-agent and ssh-add. Forcing such sophisticated utilities on a general population that is attempting to migrate away from FTP is sure to end badly.
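
To illustrate the burden, here is a rough sketch of keyed, agent-based sftp for batch use; the hostname, filenames and user below are placeholders, and the server must also be configured to accept the public key:

ssh-keygen -t ed25519 -f ~/.ssh/batch_key    # prompts for a passphrase
eval "$(ssh-agent -s)"                       # start an agent for this session
ssh-add ~/.ssh/batch_key                     # passphrase typed again here
ssh-copy-id -i ~/.ssh/batch_key.pub sftpuser@host.example.com
echo 'put report.csv' | sftp -b - sftpuser@host.example.com

As the quote above notes, none of this survives a reboot until someone returns to restart the agent and re-enter the passphrase.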

In the default configuration, OpenSSH also grants the connecting client the ability to run a shell. It is possible to constrain a user to file transfers only and to confine that user with a higher-security chroot(), but extensive modifications to the server configuration must be performed to implement this. The main focus of SSH is secure interactive login; file transfers are a sideline, and the lack of "anonymous" sftp or keyed file dropoff highlights this limited focus.
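
A rough sketch of those modifications in sshd_config follows; the user name and directory are assumptions, and the chroot directory must be owned by root and not writable by the user:

# excerpt from /etc/ssh/sshd_config
Subsystem sftp internal-sftp        # replaces the default sftp-server Subsystem line

Match User dropoff
    ChrootDirectory /var/sftp/dropoff
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

Even then, the "dropoff" user still requires an /etc/passwd entry, and the chroot tree must be prepared by hand.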

The classic Berkeley R-Utilities include an rcp program for remote file copy. This does eliminate the clear-text password, but improves little else. The use of these utilities is highly discouraged in modern systems, and they are not installed and configured by default.

None of the above programs work well for secure batch file copy when receiving files from untrusted sources, and for these reasons, let's turn to RFC 1867.

thttpd in a chroot()

RFC 1867 is the specification behind the "file upload gadget" found on Web pages. The HTML to implement the gadget is relatively simple:

<form action="script.cgi" enctype="multipart/form-data"
 method="post">
<input type="file" name="Whatever">
<input type="submit" value="Upload">
</form>

Various browsers render the gadget with a slightly different appearance, but the function is the same (Figures 1–3).

Figure 1. Google Chrome

Figure 2. Microsoft Internet Explorer

Figure 3. Mozilla Firefox

For this article, I will be using the non-graphical, command-line tool "curl" to perform file transfers using this protocol. Since the RFC 1867 protocol is implemented over HTTP, a Web server is needed. The server software choice here will be unconventional, because I'm going to require native support for the chroot() system call, which confines running processes to a subtree of the filesystem. This prevents access to powerful programs in /sbin and any other sensitive data stored in restricted locations.
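
As a preview of what is to come, a single curl command can drive the upload form above. This is only a sketch; the URL is a placeholder, and the field name must match the name attribute in the HTML:

# send /tmp/report.csv as the form field "Whatever" via an RFC 1867 multipart POST
curl -F 'Whatever=@/tmp/report.csv' https://upload.example.com/script.cgi

The -F option tells curl to emit a multipart/form-data POST, exactly as a browser would when the Upload button is pressed.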


Charles Fisher has an electrical engineering degree from the University of Iowa and works as a systems and database administrator for a Fortune 500 mining and manufacturing corporation.