Secure File Transfer

File transfer between Linux systems (and perhaps all POSIX systems in general) is in some ways a neglected subject. The arcane protocols in common use are far from secure, and the SSH replacements offer too much power and complexity. Servers holding highly sensitive data (such as credit card numbers, SSNs, birthdates and so on) often must accept file transfers, but greatly restrict remote visibility and administration, which is hard with the well known tools.

File transfers with RFC 1867 can offer a number of benefits over most other methods: strong security with optional encryption, all without requiring entries in /etc/passwd or other credentials for the operating system.

The tools I cover in this article to implement this protocol are sthttpd, an upload CGI utility, stunnel and curl. The examples here were developed on Oracle Linux 7.1, but most of the code is portable and should run on other platforms with minimal changes (with the exception of the systemd configuration).

Why Not FTP?

There have been substantial improvements in security and performance through the years in the FTP server software that is commonly bundled with Linux. It remains easy to configure FTP clients for batch activity with automatic logins:

echo machine login YourName password 
 ↪a_Password >> ~/.netrc
chmod 600 ~/.netrc
echo -e 'ls -l \n quit' | ftp

Unfortunately, this is a terrible idea that gets progressively worse with the passage of time:

  • The login, password and file payload are all sent in clear text over the wire in the normal configuration, and there are many utilities to capture them that might be used over an untrusted network.

  • Classic FTP servers listening on port 21 must run as root. If attackers find and exploit a weakness, your OS belongs to them.

  • In "active" FTP, the client and server switch roles in running the connect() and listen() system calls. This causes the TCP connections to open in both directions, introducing problems for firewalls.

  • Unless the FTP server supports chroot() and is individually and specifically configured for a target user, that user can recursively fetch every accessible world-readable file on the system.

  • An FTP account created for a few files can give visibility to just about everything. Most modern FTP clients allow such recursive transfers. An FTP user requires an entry in /etc/passwd on the server that creates an OS account. If not properly managed, this allows the remote user to log in to a shell or otherwise gain unwanted access.

  • Password aging often is mandated in high-security environments, requiring synchronized password changes on the client and server (usually after a failed overnight batch run).

Later revisions to the FTP protocol do add TLS/SSL encryption capabilities, but it is unwise to implement them:

man vsftpd.conf | col -b | awk '/^[ ]*ssl_enable/,/^$/'
        If enabled, and vsftpd was compiled against OpenSSL, 
        vsftpd will support secure connections via SSL. This 
        applies to the control connection  (including  login) 
        and also data connections. You'll need a client with 
        SSL support too. NOTE!!  Beware enabling this option.  
        Only enable it if you need it. vsftpd can make no
        guarantees about the security of the OpenSSL libraries. 
        By enabling this  option, you are declaring that you 
        trust the security of your installed OpenSSL library.

The reason for the warning is that the FTP server runs as root, exposing the encryption library to remote connections at the highest system privilege. There have been many encryption security flaws through the years, and this configuration is dangerous.

The OpenSSH suite of communication utilities includes "sftp" clients and servers, but this also requires an account on the operating system and special key installation for batch use. The recommended best practice for key handling requires passwords and the use of an agent:

Our recommended method for best security with unattended SSH operation is public-key authentication with keys stored in an agent....The agent method does have a down side: the system can't continue unattended after a reboot. When the host comes up again automatically, the batch jobs won't have their keys until someone shows up to restart the agent and provide the passphrases to load the keys.—SSH, the Secure Shell, 2nd Edition, Daniel J. Barrett, Richard E. Silverman and Robert G. Byrnes.

Those who blindly rush from FTP to sftp due to security pressures do not understand the complexities of key generation, the ssh-agent and ssh-add. Forcing such sophisticated utilities on a general population that is attempting to migrate away from FTP is sure to end badly.

OpenSSH also extends the ability to run a shell to the client in the default configuration. It is possible to constrain a user to file transfers only and to configure a higher-security chroot(), but extensive modifications to the server configuration are required. The main focus of SSH is secure interactive login; file transfers are a sideline, and the lack of "anonymous" sftp or keyed file dropoff highlights this (lack of) focus.

The classic Berkeley R-Utilities include an rcp program for remote file copy. This does eliminate the clear-text password, but improves little else. The use of these utilities is highly discouraged in modern systems, and they are not installed and configured by default.

None of the above programs work well for secure batch file copy when receiving files from untrusted sources, and for these reasons, let's turn to RFC 1867.

thttpd in a chroot()

RFC 1867 is the specification behind the "file upload gadget" found on Web pages. The HTML to implement the gadget is relatively simple:

<form action="script.cgi" enctype="multipart/form-data" method="post">
<input type="file" name="Whatever">
<input type="submit" value="Upload">
</form>

Various browsers render the gadget with a slightly different appearance, but the function is the same (Figures 1–3).

Figure 1. Google Chrome

Figure 2. Microsoft Internet Explorer

Figure 3. Mozilla Firefox

For this article, I will be using the "curl" non-graphical, command-line tool to perform file transfers using this protocol. Since the RFC 1867 protocol is implemented over HTTP, a Web server is needed. The server software choice here will be unconventional, for I'm going to require native support for the chroot() system call, which isolates running processes in the filesystem tree. This prevents access to powerful programs in /sbin and any other sensitive data stored in restricted locations.
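Since both the browser gadget and curl speak the same wire format, it is worth seeing what an RFC 1867 request body looks like. The following sketch assembles one by hand; the boundary token and filename are arbitrary examples, and the field name "Whatever" matches the form below:

```shell
# Sketch: build an RFC 1867 multipart/form-data body by hand.
# The boundary is an arbitrary client-chosen token that must not
# appear in the payload; each part carries a Content-Disposition.
boundary="----demo1867boundary"
body=$(printf '%s\r\nContent-Disposition: form-data; name="Whatever"; filename="hello.txt"\r\nContent-Type: text/plain\r\n\r\nHello, world!\r\n%s--\r\n' \
    "--$boundary" "--$boundary")
printf '%s\n' "$body"
```

This is the body that curl -F constructs automatically, along with a Content-Type: multipart/form-data header naming the boundary.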

Liberal use of chroot() and privilege separation recently saved OpenBSD's new mail system from disaster in a code audit:

First of all, on the positive side, privileges separation, chrooting and the message passing design have proven fairly efficient at protecting us from a complete disaster. [The] Worst attacks resulted in [the] unprivileged process being compromised, the privileged process remained untouched, so did the queue process which runs as a separate user too, preventing data loss....This is good news, we're not perfect and bugs will creep in, but we know that these lines of defense work, and they do reduce considerably how we will suffer from a bug, turning a bug into a nuisance rather than a full catastrophe. No root were harmed during this audit as far as we know.

The common Web servers on Linux, Apache and Nginx, have repeatedly refused to implement native chroot() security:

OpenBSD has run its Web servers in a chroot for many years; Apache and nginx have been patched to run chroot'ed by default. These patches have never been accepted by upstream, but yet they provide a significant benefit.

Although this refusal precludes the use of Apache and Nginx in high-security applications, the recently updated sthttpd Web server does offer this capability. thttpd lacks many modern features (FastCGI, SPDY and SSL/TLS), but the native chroot() trumps the disadvantages. Here are the steps to download and install it:

tar xvzf sthttpd-2.27.0.tar.gz
cd sthttpd-2.27.0/

./configure
make
make install exec_prefix=/home/jail

mkdir /home/jail/etc
mkdir /home/jail/logs
mkdir /home/jail/htdocs
mkdir /home/jail/upload
chown nobody:nobody /home/jail/logs /home/jail/upload

echo 'port=80
dir=/home/jail
chroot
data_dir=/htdocs
user=nobody
cgipat=**.xyz
logfile=/home/jail/logs/thttpd.log
pidfile=/home/jail/logs/thttpd.pid' > /home/jail/etc/thttpd.conf

Note above the cgipat=**.xyz line for executing programs that adhere to the Common Gateway Interface. The thttpd documentation mentions using the conventional .cgi extension, but I suggest you pick your own random extension and rename any CGI applications that you deploy to make them harder for an attacker to find and exploit.
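Choosing the obscure extension can itself be scripted. A small sketch, assuming /dev/urandom is available; the three-letter length is an arbitrary choice:

```shell
# Sketch: derive a random CGI extension for the cgipat setting,
# so deployed CGI programs are harder for an attacker to guess.
ext=$(tr -dc 'a-z' < /dev/urandom | head -c 3)
echo "cgipat=**.$ext"
```

Rename your deployed CGI programs to end in whatever extension this produces, and set cgipat to match.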

After you have installed the thttpd Web server, you can start a copy with the following command:

/home/jail/sbin/thttpd -C /home/jail/etc/thttpd.conf

If you point a Web browser at your machine (first try http://localhost—your firewall rules might prevent you from using a remote browser), you should see a directory listing:

Index of /
    mode  links  bytes  last-changed  name
    dr-x   2           6  Oct 22 22:08  ./
    dr-x   6          51  Oct 22 22:08  ../

If you wish, you can explore your new chroot() environment by downloading a copy of BusyBox. BusyBox is a statically linked collection of "miniature" UNIX/POSIX utilities, with several tools specific to Linux. When BusyBox binaries are prepared in such a way that they have no external library linkage, they are perfect for running inside a chroot():

cd /home/jail/sbin
chmod 755 busybox-x86_64

ln -s busybox-x86_64 sh

cd ../htdocs
echo 'Keep out! This means you!' > index.html

echo '#!/sbin/sh

echo Content-type: text/plain
echo ""
/sbin/busybox-x86_64 env
echo "---"
/sbin/busybox-x86_64 id
echo "---"
/sbin/busybox-x86_64 ls -l /
echo "---"
/sbin/busybox-x86_64' > env.xyz

chmod 755 env.xyz

Notice first that an index.html blocks the directory list. Ensure that your CGI applications are protected this way, so they are not seen unless you have chosen to expose them as a <FORM> action. Also observe that a softlink was created from /sbin/busybox-x86_64 to /sbin/sh. Calling BusyBox with the link changes the program's behavior and turns it into a Bourne shell. The program examines $argv[0], and if the contents match an "applet" that has been compiled into it, BusyBox executes the applet directly.
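The argv[0] dispatch that BusyBox performs can be sketched in a few lines of shell: one file examines the name it was invoked under and acts accordingly. The file and applet names here are invented for the demonstration:

```shell
# Sketch of BusyBox-style multi-call dispatch on argv[0].
demo=$(mktemp -d)
cat > "$demo/multicall" <<'EOF'
#!/bin/sh
# Act according to the name used to invoke us.
case "$(basename "$0")" in
    hello) echo "applet: hello" ;;
    bye)   echo "applet: bye" ;;
    *)     echo "no such applet" ;;
esac
EOF
chmod 755 "$demo/multicall"
ln -s multicall "$demo/hello"    # same trick as the sh -> busybox link
ln -s multicall "$demo/bye"
out1=$("$demo/hello")
out2=$("$demo/bye")
echo "$out1 / $out2"
```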

If you now load http://localhost/env.xyz with your browser, the shell script will run, and you should see:

HTTP_USER_AGENT=Mozilla/5.0 (X11; Linux x86_64; rv:38.0) 
 ↪Gecko/20100101 Firefox/38.0
SERVER_SOFTWARE=thttpd/2.27.0 Oct 3, 2014
uid=99 gid=99 groups=99
total 0
drwxr-xr-x    2 0    0        24 Oct 22 22:08 etc
drwxr-xr-x    2 0    0        40 Oct 24 15:03 htdocs
drwxr-xr-x    2 0    0        40 Oct 22 22:10 logs
drwxr-xr-x    2 0    0        97 Oct 24 15:02 sbin
BusyBox v1.24.0.git (2015-10-04 23:30:51 GMT) multi-call binary.
BusyBox is copyrighted by many authors between 1998-2015.
Licensed under GPLv2. See source distribution for detailed
copyright notices.

Usage: busybox [function [arguments]...]
  or: busybox --list[-full]
  or: busybox --install [-s] [DIR]
  or: function [arguments]...

    BusyBox is a multi-call binary that combines many common 
    Unix utilities into a single executable. Most people will 
    create a link to busybox for each function they wish to 
    use and BusyBox will act like whatever it was invoked as.

Currently defined functions:
    [, [[, acpid, add-shell, addgroup, adduser, adjtimex, arp, 
    arping, ash, awk, base64, basename, beep, blkid, blockdev, 
    bootchartd, bunzip2, bzcat, bzip2, cal, cat, catv, chat, 
    chattr, chgrp, chmod, chown, chpasswd, chpst, chroot, chrt, 
    chvt, cksum, clear, cmp, comm, conspy, cp, cpio, crond, 
    crontab, cryptpw, cttyhack, cut, date, dc, dd, deallocvt, 
    delgroup, deluser, depmod, devmem, df, dhcprelay, diff,
    dirname, dmesg, dnsd, dnsdomainname, dos2unix, du, dumpkmap,
    dumpleases, echo, ed, egrep, eject, env, envdir, envuidgid,
    ether-wake, expand, expr, fakeidentd, false, fatattr, fbset, 
    fbsplash, fdflush, fdformat, fdisk, fgconsole, fgrep, find, 
    findfs, flock, fold, free, freeramdisk, fsck, fsck.minix, 
    fstrim, fsync, ftpd, ftpget, ftpput, fuser, getopt, getty, 
    grep, groups, gunzip, gzip, halt, hd, hdparm, head, hexdump, 
    hostid, hostname, httpd, hush, hwclock, i2cdetect, i2cdump, 
    i2cget, i2cset, id, ifconfig, ifdown, ifenslave, ifup, inetd,
    init, insmod, install, ionice, iostat, ip, ipaddr, ipcalc, 
    ipcrm, ipcs, iplink, iproute, iprule, iptunnel, kbd_mode, 
    kill, killall, killall5, klogd, less, linux32, linux64, linuxrc, 
    ln, loadfont, loadkmap, logger, login, logname, logread, 
    losetup, lpd, lpq, lpr, ls, lsattr, lsmod, lsof, lspci, lsusb, 
    lzcat, lzma, lzop, lzopcat, makedevs, makemime, man, md5sum, 
    mdev, mesg, microcom, mkdir, mkdosfs, mke2fs, mkfifo,
    mkfs.ext2, mkfs.minix, mkfs.vfat, mknod, mkpasswd, mkswap, 
    mktemp, modinfo, modprobe, more, mount, mountpoint, mpstat, 
    mt, mv, nameif, nanddump, nandwrite, nbd-client, nc, netstat, 
    nice, nmeter, nohup, nslookup, ntpd, od, openvt, passwd, patch, 
    pgrep, pidof, ping, ping6, pipe_progress, pivot_root, pkill, 
    pmap, popmaildir, poweroff, powertop, printenv, printf, ps, 
    pscan, pstree, pwd, pwdx, raidautorun, rdate, rdev, readahead, 
    readlink, readprofile, realpath, reboot, reformime, remove-shell, 
    renice, reset, resize, rev, rm, rmdir, rmmod, route, rpm,
    rpm2cpio, rtcwake, run-parts, runsv, runsvdir, rx, script,
    scriptreplay, sed, sendmail, seq, setarch, setconsole, setfont,
    setkeycodes, setlogcons, setserial, setsid, setuidgid, sh, 
    sha1sum, sha256sum, sha3sum, sha512sum, showkey, shuf, slattach, 
    sleep, smemcap, softlimit, sort, split, start-stop-daemon, stat, 
    strings, stty, su, sulogin, sum, sv, svlogd, swapoff, swapon, 
    switch_root, sync, sysctl, syslogd, tac, tail, tar, tcpsvd, tee, 
    telnet, telnetd, test, tftp, tftpd, time, timeout, top, touch, 
    tr, traceroute, traceroute6, true, truncate, tty, ttysize, 
    tunctl, ubiattach, ubidetach, ubimkvol, ubirmvol, ubirsvol, 
    ubiupdatevol, udhcpc, udhcpd, udpsvd, uevent, umount, uname, 
    unexpand, uniq, unix2dos, unlink, unlzma, unlzop, unxz, unzip, 
    uptime, usleep, uudecode, uuencode, vconfig, vi, vlock,
    volname, watch, watchdog, wc, wget, which, whoami, whois, xargs, 
    xz, xzcat, yes, zcat, zcip

A few things to point out regarding each section above:

  1. The environment in the first section above will include a QUERY_STRING if you have referenced it from a GET-method form—that is, if you append ?abc=123 to the URL, you will see QUERY_STRING=abc=123 as standard GET-method parameters.

  2. User 99 above actually is defined as nobody in the local /etc/passwd on the test system. Because there is no /etc/passwd file in the chroot(), all user IDs will be expressed numerically. If you want users to resolve to names for some reason, define those users in a separate passwd file copy in the jail.

  3. It is obvious that the root directory above is confined within the jail. These files also are resolving to numeric ownership—if an entry for root is placed in the passwd jail file, named owners will appear.

BusyBox is useful for exploring a chroot(), but it should not be left on a production server, as it introduces far too much power. This is confirmed on the thttpd Web site, which carries words of warning on the contents of the jail:

Also: it is actually possible to break out of chroot jail. A process running as root, either via a setuid program or some security hole, can change its own chroot tree to the next higher directory, repeating as necessary to get to the top of the filesystem. So, a chroot tree must be considered merely one aspect of a multi-layered defense-in-depth. If your chroot tree has enough tools in it for a cracker to gain root access, then it's no good; so you want to keep the contents to the minimum necessary. In particular, don't include any setuid-root executables!

The recent "Towelroot" vulnerability demonstrated an ordinary C program compiled into a binary executable with no special permissions that was able to escalate privilege to root on a great many Linux systems by exploiting a mutex bug. If your jail includes the ability to download a binary image and mark it executable, such a flaw could allow an attacker to smash out of the jail and take ownership of your system. Beware of providing such tools.

If you would like to copy utilities from your host operating system for use in the jail, you can use the ldd command to find their shared object dependencies. For example, to move a functional copy of GNU AWK into the jail, examine the dependent objects:

# ldd /bin/gawk
    linux-vdso.so.1 =>  (0x00007ffe9f488000)
    libdl.so.2 => /lib64/libdl.so.2 (0x00007f7033e38000)
    libm.so.6 => /lib64/libm.so.6 (0x00007f7033b36000)
    libc.so.6 => /lib64/libc.so.6 (0x00007f7033776000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f7034053000)

These object targets are usually soft links, requiring a chain of files and links to be moved, as libdl demonstrates below:

# ll /lib64/libdl.so.2
lrwxrwxrwx. 1 root root 13 Mar 10  2015 
 ↪/lib64/libdl.so.2 -> libdl-2.17.so
# ll /lib64/libdl-2.17.so
-rwxr-xr-x. 1 root root 19512 Mar  6  2015 
 ↪/lib64/libdl-2.17.so
To copy these objects and re-create their links on Oracle Linux 7.1, follow these steps:

mkdir /home/jail/lib64
cd /home/jail/lib64

cp /lib64/libdl-2.17.so .
ln -s libdl-2.17.so libdl.so.2

cp /lib64/libm-2.17.so .
ln -s libm-2.17.so libm.so.6

cp /lib64/libc-2.17.so .
ln -s libc-2.17.so libc.so.6

cp /lib64/ld-linux-x86-64.so.2 .
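This chain of copies and links can be automated. The following sketch parses ldd output and copies each resolved object into a jail directory; it assumes GNU ldd's output format, and /bin/sh stands in for the binary being jailed:

```shell
# Sketch: copy a binary's shared-object dependencies into a jail.
# ldd emits "libc.so.6 => /lib64/libc.so.6 (0x...)" for libraries
# and a bare "/lib64/ld-linux-x86-64.so.2 (0x...)" for the loader.
jail=$(mktemp -d)       # stands in for /home/jail
bin=/bin/sh             # stands in for the binary being jailed
mkdir -p "$jail/lib64"
for obj in $(ldd "$bin" |
    awk '$2 == "=>" && $3 ~ /^\// { print $3 }
         $1 ~ /^\// { print $1 }')
do
    cp -L "$obj" "$jail/lib64/"   # -L dereferences the soname link
done
ls "$jail/lib64"
```

Because cp -L dereferences, the copies land under the soname names that the dynamic loader actually requests, so the symlink chains need not be re-created.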

Then, copy the gawk binary and create a test script:

cp /bin/gawk /home/jail/sbin

echo '#!/sbin/gawk -f
BEGIN {
print "Content-type: text/plain"
print ""
print "Hello, world!"
print ""
for(x in ENVIRON) print x,ENVIRON[x]
}' > /home/jail/htdocs/hello.xyz
chmod 755 /home/jail/htdocs/hello.xyz

If you load http://localhost/hello.xyz, you will see the output of the script above. This means that, with the added libraries, you are free to write CGI scripts in GNU AWK if you wish, even if you remove BusyBox:

Hello, world!

HTTP_ACCEPT text/html,application/xhtml+xml,
AWKPATH .:/usr/share/awk
HTTP_HOST localhost
SERVER_SOFTWARE thttpd/2.27.0 Oct 3, 2014
SERVER_NAME localhost.localdomain
PATH /usr/local/bin:/usr/ucb:/bin:/usr/bin
HTTP_USER_AGENT Mozilla/5.0 (X11; Linux x86_64; 
 ↪rv:38.0) Gecko/20100101

GNU AWK is not the best example, as it does provide network connectivity. Brian Kernighan's "One True AWK" might be a better choice, as it lacks the extended network functions.

Let's consider additional startup parameters, using systemd to control the thttpd server. If you don't have systemd, examine the following unit file and replicate it with your init system. First, if you are still running thttpd, kill it:

kill $(</home/jail/logs/thttpd.pid)

Then, direct systemd to start it:

echo "[Unit]
Description=thttpd web service

[Service]
Type=forking
ExecStart=/bin/ksh -c 'ulimit -H -f 48828; ulimit 
 ↪-H -m 48828; /home/jail/sbin/thttpd -C 
 ↪/home/jail/etc/thttpd.conf'

[Install]
WantedBy=multi-user.target" > /etc/systemd/system/thttpd.service

systemctl start thttpd.service

Note above the ulimit commands executed by the Korn shell before launching thttpd (you may need to install this shell):

ulimit -H -f 48828
ulimit -H -m 48828

These commands set maximum limits on the size of files written and the memory used by thttpd and all of its child processes. They are specified in blocks of 1,024 bytes; 48,828 blocks equate to roughly 50 megabytes of maximum usage. These are hard limits that cannot be raised. The reason for imposing them will become clear in the next section.
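The effect of such a file-size limit can be sketched in a subshell with toy values: when a child process tries to write past the limit, the kernel delivers SIGXFSZ and the file stops growing at the limit:

```shell
# Sketch: an 8-block file-size limit stops a 400KB write attempt.
# (bash counts ulimit -f in 1,024-byte blocks; some shells use 512.)
work=$(mktemp -d)
( ulimit -f 8                    # limit applies to this subshell only
  dd if=/dev/zero of="$work/big" bs=4096 count=100 2>/dev/null
) 2>/dev/null
size=$(stat -c %s "$work/big")
echo "bytes actually written: $size"
```

The same mechanism caps thttpd's children, including the upload CGI, which is the point of the 48828-block limits above.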

The thttpd Web server records activity with the system syslog when able, but when running in a chroot(), the /dev/log socket does not exist unless created manually. The rsyslog dæmon can be instructed to listen on an additional socket in /home/jail/dev/log, like so:

echo '$ModLoad imuxsock
$AddUnixListenSocket /home/jail/dev/log
$umask 0000' > /etc/rsyslog.d/thttpd.conf

mkdir /home/jail/dev
chmod 755 /home/jail/dev
chcon -v --type=device_t /home/jail/dev

systemctl restart rsyslog.service
systemctl restart thttpd.service

After restarting, you should see thttpd entries in /var/log/messages. If you are running on an older Linux system that uses sysklogd, the following option will be of interest to you:

-a socket: Using this argument you can specify additional sockets from that syslogd has to listen to [sic]. This is needed if you're going to let some dæmon run within a chroot() environment. You can use up to 19 additional sockets. If your environment needs even more, you have to increase the symbol MAXFUNIX within the syslogd.c source file.

You also may find it useful to move or copy the /home/jail/sbin/thttpd binary to a location outside of the chroot(). If a copy remains in the jail, it can be tested at startup and compared to the protected copy. If the files differ, your startup script can mail an alert that your jail has been compromised. The thttpd.conf file might be similarly treated.
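That startup comparison can be sketched with cmp; the file names here are stand-ins, and the alert action is only a comment:

```shell
# Sketch: compare a jailed binary against a protected reference
# copy; any difference suggests the jail has been tampered with.
ref=$(mktemp) ; jailed=$(mktemp)
printf 'pretend-thttpd-binary\n' > "$ref"
cp "$ref" "$jailed"
if cmp -s "$ref" "$jailed"
then status=clean
else status=COMPROMISED          # a real script would mail an alert
fi
echo "before tampering: $status"

printf 'trojan\n' >> "$jailed"   # simulate an attacker's change
cmp -s "$ref" "$jailed" || status=COMPROMISED
echo "after tampering: $status"
```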

upload.cgi

In 2000, Jeroen C. Kessels released Upload-2.6.tar.gz, which you easily can find using the major search engines. Although the software is quite old, it is likely the most concise implementation of RFC 1867 available (from the perspective of system resources).

Assuming that you have a copy of Upload-2.6.tar.gz, run the default compile with these commands:

tar xvzf Upload-2.6.tar.gz
cd Upload-2.6/sources/
ldd upload

Note that the ldd command should not be run as root on untrusted software, as documented in the manual page (run the build as a regular, non-root user).

The final ldd command will list the shared object dependencies for the binary:

    linux-vdso.so.1 =>  (0x00007ffcbe5e1000)
    libc.so.6 => /lib64/libc.so.6 (0x00007fbeffdad000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fbf00183000)

If you previously loaded libraries above for GNU AWK, you will have all of the needed shared objects in place to run this program in the chroot(). If you have not elected to place copies of the shared objects in /home/jail/lib64, recompile the program with static linkage (assuming that your compiler is capable of it—some distributions lack the static libc.a):

gcc -static -O -o upload.static upload.c

Copy your chosen binary to /home/jail, and set the configuration:

cp upload /home/jail/upload.cgi
cp ../html/BadPage.html /home/jail/htdocs/test-fail.html
cp ../html/OkPage.html /home/jail/htdocs/test-good.html

sed 's/action=[^ ]*/action="test.xyz"/' ../html/index.html > \
 /home/jail/htdocs/test.html

cd /home/jail/htdocs
ln -s ../upload.cgi test.xyz

echo 'Config          = Default
  Root          = /upload
  FileMask      = *
  IgnoreSubdirs = YES
  Overwrite     = YES
  LogFile       = /logs/upload.log
  OkPage        = /htdocs/test-good.html
  BadPage       = /htdocs/test-fail.html
  Debug         = 0' > test.cfg

If you now point your browser at http://localhost/test.html, you will see a file upload form; test it with a random file. With luck, you should see a success page, and the file that you transferred should appear in /home/jail/upload. You also should see a log of the transfer in /home/jail/logs/upload.log.

You can use the curl binary for batch transfers with this mechanism—for example:

curl -F file=@/etc/passwd http://localhost/test.xyz

Curl should return the HTML to your standard output:


File uploaded: passwd<br>
Bytes uploaded: 2024


Uploaded files were configured to be stored in /home/jail/upload in this case:

# ll /home/jail/upload
total 1012
-rw-r--r--. 1 nobody nobody 1028368 Oct 25 10:26 foo.txt
-rw-r--r--. 1 nobody nobody    2024 Oct 25 10:29 passwd

This configuration is powerful in the respect that it removes a client's ability to browse your file store if you so choose. In FTP or its descendants, the ability to PUT into a batch directory also grants GET; with this mechanism, you can constrain your clients to transmit only, with no capability to catalog or retrieve any content.

One potential problem with this upload program is memory consumption. Let's examine the source code to upload.c:

/* Allocate sufficient memory for the incoming data. */
Content = (char *)malloc(InCount + 1);
p1 = Content;
RealCount = 0;
/* For some reason fread() of Borland C 4.52 barfs if the 
   bytecount is bigger than 2.5Mb, so I have to do it 
   like this. */
while (fread(p1++,1,1,stdin) == 1) {
  RealCount++;
  if (RealCount >= InCount) break;
  }
*p1 = '\0';

You can see above that the entire file is read from the network (from standard input) and stored in memory. This is a potential denial-of-service exploit, thus the 50MB ulimits set in the previous section. Adjust these ulimits to meet your needs but prevent abuse. It also might be possible to use the tmpfile() function to spool to disk instead of memory, but extensive modifications to the C code would be required.

Because there isn't much code behind upload.cgi, it is relatively easy to extend. Consider these additional blocks for the ShowOkPage() function:

if (strnicmp(p1,"<insert sha256sum>",18) == 0) {
    char scratch[BUFSIZ];
    FILE *H;

    *p1 = '\0';
    strcpy(s1, "/sbin/sha256sum '"); strcat(s1, scratch); 
     ↪strcat(s1, "'");

    if((H = popen(s1, "r")) != NULL && fgets(scratch, BUFSIZ, 
     ↪H) != NULL)
    { sprintf(s1,"%s%s%s",Line,scratch,p1+18); strcpy(Line,s1);
     ↪pclose(H); }
}

if (strnicmp(p1,"<insert md5sum>",15) == 0) {
    char scratch[BUFSIZ];
    FILE *H;

    *p1 = '\0';
    strcpy(s1, "/sbin/md5sum '"); strcat(s1, scratch); 
     ↪strcat(s1, "'");

    if((H = popen(s1, "r")) != NULL && fgets(scratch, 
     ↪BUFSIZ, H) != NULL)
    { sprintf(s1,"%s%s%s",Line,scratch,p1+15); 
     ↪strcpy(Line,s1); pclose(H); }
}

Compiling in these fragments allows you optionally to report the md5 and sha256 signatures of the on-disk data received from the client, if you so specify in the template, enabling the client to confirm that the server's on-disk copy is correct:

$ curl -F file=@Upload-2.6.tar.gz http://localhost/test.xyz

File uploaded: Upload-2.6.tar.gz<br>
Bytes uploaded: 2039<br>
sha256sum: bed3540744b2486ff431226eba16c487dcdbd4e60
md5sum: d703c20032d76611e7e88ebf20c3687a  



Such a data verification feature is not available in any standard file transfer tool, but it was easily implemented in this elderly code because the approach is flexible. Adding custom processing on file receipt would be a nightmare for FTP, but it's relatively straightforward for upload.cgi. The need for such features is echoed in Solaris ZFS, which performs aggressive checksums on all disk writes: controller firmware makes mistakes, and the ability to report such errors is mandatory for some applications. Also note the signatures for Jeroen C. Kessels' package above, and further be warned that md5 signatures are vulnerable to tampering; they are useful for detecting media errors, but they do not guarantee that data is free of malicious alteration.
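On the client side, the reported digest is easy to check against a locally computed one. A sketch, with the server's value simulated rather than parsed out of a real OkPage response:

```shell
# Sketch: confirm a server-reported sha256 matches the local file.
f=$(mktemp)
printf 'payload\n' > "$f"
local_sum=$(sha256sum "$f" | awk '{print $1}')
server_sum="$local_sum"    # in practice, scraped from the returned HTML
if [ "$local_sum" = "$server_sum" ]
then verdict="transfer verified"
else verdict="MISMATCH - retransmit"
fi
echo "$verdict"
```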

Other useful changes to the upload.c code include a prefix always added to the incoming filename read from the .CFG file (I added this feature in three lines), and replacement of the strcat()/strcpy() functions with the safer strlcat()/strlcpy() from OpenBSD's libc.

There is also an extended CGI processing library in the C programming language written by Tom Boutell (author of the GD graphics library). The CGIC library offers more efficient RFC 1867 processing (without excessive memory consumption), but building applications with it is beyond the scope of this discussion.

Stunnel TLS

Because thttpd has no encryption support, those living in areas where encryption is legal can use Michal Trojnara's stunnel "shim" network encryption dæmon to provide https services on port 443. First, install the package from the standard Oracle Linux repositories:

yum install stunnel

You also can install stunnel from source. The package pulled by yum is in fact quite old (4.56, one major version behind the current 5.25), but it also includes stunnel security contexts for SELinux, so it is recommended that you install the package even if you plan to build a newer release.

After installation, stunnel will require a keypair for TLS. The public portion of the key can be signed by a Certificate Authority (CA) if you wish, which will allow error-free operation with most browsers. You also may use a "self-signed" key that will present browser errors, but will allow encryption.

Free signed SSL certificates should be available by the time you read this article from the Let's Encrypt project. Instructions should appear shortly on how to generate and maintain signed keys that are honored as valid by most browsers. Preliminary documentation on the Let's Encrypt Web site indicates that the tools will use .PEM files, which likely can be used by stunnel.

If you want to purchase a valid key for stunnel, there is a guide on the stunnel Web site on having your key signed by a CA.

For more informal use, you can generate a self-signed key with the following commands:

cd /etc/pki/tls/certs
make stunnel.pem

The process of key generation will ask a number of questions:

Generating a 2048 bit RSA private key
writing new private key to '/tmp/openssl.hXP3gW'
You are about to be asked to enter information that will 
be incorporated into your certificate request.
What you are about to enter is what is called a 
Distinguished Name or a DN. There are quite a few fields 
but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:IL
Locality Name (eg, city) [Default City]:Chicago
Organization Name (eg, company) 
 ↪[Default Company Ltd]:ACME Corporation
Organizational Unit Name (eg, section) []:Widget Division
Common Name (eg, your name or your server's hostname) 
Email Address []

The key produced above will be set for expiration 365 days from the day it was created. If you want to generate a key with a longer life, you can call openssl directly:

openssl req -new -x509 -days 3650 -nodes -out 
 ↪stunnel.pem -keyout stunnel.pem

The key will look something like this (abbreviated):

# cat /etc/pki/tls/certs/stunnel.pem
-----BEGIN PRIVATE KEY-----
(base64 text removed)
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
(base64 text removed)
-----END CERTIFICATE-----
The PRIVATE KEY section above is the most sensitive portion of the file; ensure that it is not seen or copied by anyone you do not trust, and any recordings on backup media should be encrypted. The BEGIN CERTIFICATE section is presented to TLS clients when they connect to stunnel.
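Either section can be examined with the openssl tool. A sketch that builds a throwaway self-signed pair noninteractively (the -subj value is an example) and then reads the certificate back out of the combined .pem:

```shell
# Sketch: generate a throwaway key and self-signed certificate
# without prompts, concatenate them stunnel-style, and inspect.
work=$(mktemp -d)
openssl req -new -x509 -days 365 -nodes \
    -subj "/C=US/CN=localhost" \
    -keyout "$work/key.pem" -out "$work/cert.pem" 2>/dev/null
cat "$work/key.pem" "$work/cert.pem" > "$work/stunnel.pem"
subject=$(openssl x509 -noout -subject -in "$work/stunnel.pem")
openssl x509 -noout -enddate -in "$work/stunnel.pem"
echo "$subject"
```

openssl x509 skips the PRIVATE KEY block and reads the first CERTIFICATE block it finds, so the concatenated file works unmodified.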

It also is wise to compute custom primes for the Diffie-Hellman key exchange algorithm, following guidance from the stunnel manual page:

openssl dhparam 2048 >> stunnel.pem

The previous command will add another section to your stunnel.pem file that looks like this:

-----BEGIN DH PARAMETERS-----
(base64 text removed)
-----END DH PARAMETERS-----

Once the key is in place in /etc/pki/tls/certs/stunnel.pem, a stunnel configuration file must be created, like so:

echo 'FIPS    = no
options = NO_SSLv2
options = NO_SSLv3
ciphers =
syslog  = yes
#debug  = 6 #/var/log/secure
chroot  = /var/empty
setuid  = nobody
setgid  = nobody
cert    = /etc/pki/tls/certs/stunnel.pem
connect =' > /etc/stunnel/https.conf

The cipher settings above are from Hynek Schlawack's Web site on the subject, and they represent the current best practice for TLS encryption. It is wise to visit this site from time to time for news and advice on TLS, or perhaps consider following Hynek's Twitter feed.

The FIPS and NO_SSL options above are the default settings starting with stunnel version 5. If you are running the version 4.56 package bundled with Oracle Linux, you must provide them for best practice TLS.

The above configuration sets stunnel as an inetd-style service that is launched for each connection, with each process confined to a chroot() in /var/empty. It also is possible to run stunnel as a standing dæmon that forks for each new client. If you do so, remember to restart the dæmon each time an OpenSSL update arrives, and note that the chroot() might need more careful preparation. With the inetd approach, library updates apply to all new connections immediately, but you must take care not to exceed NPROC under high usage. There is a performance penalty for running inetd-style, but the ease of security administration is likely worthwhile for all but the heaviest usage.

The following commands configure stunnel for inetd-style socket activation under systemd:

echo '[Unit]
Description=https stunnel
[Socket]
ListenStream=443
Accept=yes
[Install]
WantedBy=sockets.target' > /etc/systemd/system/https.socket

echo '[Unit]
Description=https stunnel service
[Service]
ExecStart=-/usr/bin/stunnel /etc/stunnel/https.conf
StandardInput=socket' > /etc/systemd/system/https@.service

systemctl enable https.socket
systemctl start https.socket

At this point, use your browser to visit https://localhost, and you will see your index page. Visit https://localhost/test.html, and you can upload over a secure channel. You also can use curl:

curl -k -F file=@/etc/group https://localhost/

Note above the -k option, which disables the client's validation of the server certificate against a certificate authority (CA). You will need this option if you are using a self-signed key or if your curl binary lacks access to an appropriate repository of CAs (provided by your operating system):


File uploaded: group<br>
Bytes uploaded: 842<br>
sha256sum: 460917231dd5201d4c6cb0f959e1b49c101ea
md5sum: 31aa58285489369c8a340d47a9c8fc49  
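
The checksums in the receipt above allow end-to-end integrity checks: the client can compute the same digests locally before sending and compare them against the CGI's response. A minimal sketch, using a hypothetical example file in /tmp:

```shell
# Create a sample payload standing in for the file to be uploaded
printf 'example payload\n' > /tmp/upload-example.txt

# Compute the same digests the upload CGI reports, for comparison
sha256sum /tmp/upload-example.txt | awk '{print $1}'
md5sum /tmp/upload-example.txt | awk '{print $1}'
```

If the sums printed here match those in the HTML receipt, the file arrived intact.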



If you are using an older Linux distribution that uses xinetd, this configuration might prove useful:

service https
{
    disable      = no
    socket_type  = stream
    wait         = no
    user         = root
    server       = /usr/sbin/stunnel
    server_args  = /etc/stunnel/https.conf
}

And if you are in an environment that is still using inetd, this line will enable stunnel:

https stream nobody nowait root /usr/sbin/stunnel stunnel /etc/stunnel/https.conf

If you have problems with stunnel, try using telnet to connect to port 443—you may see helpful status messages there. For example:

# cd /etc/stunnel/

# mv https.conf https.conf.tmp

# busybox-x86_64 telnet localhost 443
Clients allowed=500
stunnel 4.56 on x86_64-redhat-linux-gnu platform
Compiled/running with OpenSSL 1.0.1e-fips 11 Feb 2013
Reading configuration from file 
Cannot read configuration
stunnel [<filename>] | -fd <n> | -help | -version | -sockets
    <filename> - use specified config file
    -fd <n>    - read the config file from a file descriptor
    -help      - get config file help
    -version   - display version and defaults
    -sockets   - display default socket options
str_stats: 1 block(s), 24 data byte(s), 58 control byte(s)
Connection closed by foreign host

Note that if you reference files (keys or configuration files) that are not in the standard paths above, the "enforcing" SELinux on Oracle Linux 7.1 might deny read permissions. If you see such errors in your syslog, try applying the following:

chcon -v --type=stunnel_etc_t /alternate/path/to/https.conf
chcon -v --type=stunnel_etc_t /alternate/path/to/stunnel.pem

If your local Linux firewall is enabled, you can open the port for stunnel on https and allow remote browsers to connect. If you leave port 80 closed, you are enforcing TLS-encrypted communication for all connections:

iptables -I INPUT -p tcp --dport 443 --syn -j ACCEPT

Please note that one drawback to https with stunnel is that the REMOTE_ADDR environment variable shown in the above CGI scripts always will be set to 127.0.0.1. If you want to determine the source of a particular connection or transfer from the thttpd logs, you must cross-reference them with the stunnel connection logs. However, this property might be useful for upload.cgi: if getenv("REMOTE_ADDR") != "127.0.0.1", you should call the exit() function. The net effect is that the Web site can be visible in clear text on both port 80 and via TLS on port 443, but file uploads will fail if attempted in clear text.
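The guard just described can be sketched as a shell CGI fragment. The 127.0.0.1 value is an assumption that holds when stunnel and thttpd share a host, so that all TLS-terminated connections arrive over loopback:

```shell
# Refuse the request unless it arrived via the local stunnel (loopback).
# Assumes stunnel and the Web server run on the same host.
require_tls() {
    if [ "${REMOTE_ADDR:-}" != "127.0.0.1" ]; then
        echo "Status: 403 Forbidden"
        echo
        return 1
    fi
    return 0
}

# Demonstration: a loopback source passes, anything else is refused
REMOTE_ADDR=127.0.0.1 ; require_tls && echo "upload allowed"
REMOTE_ADDR=203.0.113.5 ; require_tls || echo "upload refused"
```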

Finally, if your client must ensure the identity of the server, but you do not want to obtain a signed certificate, you can run stunnel locally on the client, configured to force verification against a particular key.

Extract and save the CERTIFICATE section from your server's stunnel .PEM (abbreviated below):

-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----

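The extraction can be scripted: openssl x509 reads the first CERTIFICATE block in a file and ignores the private key and DH sections that precede it. A sketch follows, using a throwaway self-signed key generated under /tmp to stand in for the real server .PEM:

```shell
# Build a sample combined PEM (key + certificate), standing in for stunnel.pem
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=example \
    -keyout /tmp/key.pem -out /tmp/cert.pem 2>/dev/null
cat /tmp/key.pem /tmp/cert.pem > /tmp/stunnel-example.pem

# Extract only the certificate; the private key is not copied
openssl x509 -in /tmp/stunnel-example.pem -out /tmp/publickey.pem

# Show the header of the extracted certificate
head -n 1 /tmp/publickey.pem
```

The resulting publickey.pem is safe to distribute to clients, since it contains no private material.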
Transfer this file to your client, and set the client's stunnel configuration file:

echo 'FIPS    = no
client  = yes
verify  = 4
cafile  = /path/to/publickey.pem
accept  = 127.0.0.1:65432
connect =' > 

The configuration above will run on Windows and a variety of other platforms. If you are running on a UNIX variant, consider also adding the chroot() option in a similar manner as was set on the server. Note, however, that if you intend to use the HUP signal to reload the stunnel configuration, you must copy all of the required files inside the chroot() to which you have confined stunnel. Although this likely would never be done in an inetd-style configuration, this is one of several drawbacks for chroot() operation.

Clients then can point their browser at http://localhost:65432, and they will be routed over TLS to the remote Web server. The curl utility similarly can use the local 65432 port in clear text, allowing stunnel to handle the TLS session.

When client connections are launched, the client stunnel will open a connection to the server's port 443, then thoroughly exercise the server's key to ensure the correct identity and prevent a "man in the middle" from intercepting the transmitted data.

The curl utility does have a number of options to import certificate stores, but it does not appear capable of verifying a specific certificate as I have just demonstrated with stunnel.

The author of stunnel also noted that the rsync utility and protocol can be used for anonymous write access. The standard network configuration for rsync is clear text, but rsync also can be wrapped in either OpenSSH or stunnel, though only a dated guide to transferring files with rsync over stunnel appears to be available. A benefit of RFC 1867 is that curl is the only utility required for command-line transfers; a more complex configuration is required to wrap an rsync binary in the services provided by an stunnel binary.


Special thanks to Michal Trojnara, the author of stunnel, for his helpful comments on this article and his greater work in stunnel development. Commercial support, licensing and consulting for stunnel are available from his organization; please visit his site for the latest release.

Hurry Up and Downgrade

The classic UNIX file transfer utilities are woefully inadequate for ensuring identity, integrity and privacy. FTP fails so completely on these questions that the only justification for its continued use is general familiarity with the command set. It is unbelievable that modern IT has been constrained in this way for so long.

We need a file transport protocol with casual dropoff, integrity checking, encryption, privilege separation (of the TLS state machine and the HTTP file processing), chroot() security and hooks for custom processing on arrival. There is no mainstream utility that offers all of those features.

Until such time, RFC 1867 will allow you to "roll your own" transport. While this protocol has many implementations (Perl, PHP, Python and so on), it is rare to find any with chroot() security. Hopefully, this does not remain the case.
