Paranoid Penguin - Rehabilitating Clear-Text Network Applications with Stunnel

Put modern crypto onto your legacy applications without modifying them. Mick Bauer shows how to bring pre-SSL programs up to date.
Configuring Stunnel's Global Settings

Once you've got a suitable server certificate, it's time to configure yourself a tunnel. This is considerably simpler than the previous task, but it's also much more version-specific. In versions of Stunnel prior to 4.0, Stunnel accepted all of its configuration parameters as command-line options. In current versions, however, the only command-line argument it expects is a nondefault path to its configuration file.

If you installed Stunnel from source with default compile-time options, Stunnel expects its configuration file to reside in /usr/local/etc/stunnel. If you installed from a binary package, this path is more likely to be /etc/stunnel. Listing 2 shows the global settings from an abbreviated sample stunnel.conf file (abbreviated mainly in that I've omitted comment lines).

The cert parameter tells Stunnel where to look for its server certificate; it therefore follows that you need this parameter only on your Stunnel server host, not on client hosts. The chroot parameter tells Stunnel which directory to chroot itself into (that is, reset / to) after starting up; this happens after Stunnel has read its configuration and server certificate files. You probably need to create this chroot jail manually and populate it with a few things, for example, its own etc/hosts.allow and etc/hosts.deny files, if you want to use TCP wrappers-style access controls.
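For example, a minimal jail could be populated along these lines (the jail path and the service name telnets are my assumptions; the path must match whatever you set chroot to in stunnel.conf):

```shell
# Sketch: populate a chroot jail for Stunnel. The jail path is an
# assumption -- it must match the "chroot" setting in stunnel.conf.
JAIL="${JAIL:-/tmp/stunnel-jail}"   # e.g., /var/lib/stunnel in production

# etc/ for the access-control files, var/run/ for the PID file
mkdir -p "$JAIL/etc" "$JAIL/var/run"

# TCP wrappers-style access controls, consulted after Stunnel chroots;
# the service name "telnets" is assumed from the example scenario.
echo 'telnets: ALL' > "$JAIL/etc/hosts.allow"
echo 'ALL: ALL'     > "$JAIL/etc/hosts.deny"
```

Remember that once Stunnel has chrooted itself, any file it needs to read or write must live under this directory.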

pid tells Stunnel where to write its process ID. This path is relative to that set by chroot; that is, Stunnel writes its PID after chrooting itself.

setuid and setgid tell Stunnel which user and group to demote itself to after starting. If Stunnel is to listen on any TCP port lower than 1024, it must be started as root, but it demotes itself after reading its configuration file, reading its server certificate and binding to the privileged port.

By default, Stunnel sends its log messages of severity notice or higher to the local dæmon syslog facility. Fedora's version sends them to authpriv, which in turn logs to /var/log/secure. You can use the debug option to set a different log level; seven is the highest level and is best if you're having trouble getting Stunnel to work. You can use the output option to tell Stunnel to send its messages to a specific file rather than handing them off to syslog.

The last line in Listing 2 sets the client parameter to yes, which means that on this particular system, I intend to initiate SSL transactions, not receive them. On the server with which I intend to communicate, I need to leave this parameter set to its default, no.
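Listing 2 itself isn't reproduced here, but a global section exercising the parameters just discussed might look something like this (the paths and the account names are assumptions; adjust them for your system):

```ini
; Global settings (a sketch -- paths and account names are assumptions)
cert   = /etc/stunnel/stunnel.pem
chroot = /var/lib/stunnel
; pid is interpreted relative to the chroot directory above
pid    = /var/run/stunnel.pid
setuid = nobody
setgid = nobody
; log at maximum verbosity to a file while troubleshooting
debug  = 7
output = /var/log/stunnel.log
client = yes
```

Run stunnel with no arguments to use the compiled-in configuration path, or pass an explicit path (stunnel /etc/stunnel/stunnel.conf) if the file lives somewhere else.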

Configuring a Tunnel

Now, finally, we come to the payoff—an actual tunnel. For this example, we're going to tunnel telnet from the host nearclient to the server farserver. The global section in farserver's stunnel.conf file can be almost identical to the one in Listing 2, except that the client needs to be set to no. The major difference between the two hosts' configurations is in their service definitions.

Before I dive into that, however, let's flesh out the example scenario a little more. Suppose farserver already is configured as a telnet server; it already accepts telnet sessions on TCP port 23. But, we don't want nearclient to connect to the clear-text port; we need to use something else for an SSL connection. As it happens, IANA has already designated a port for SSL-enabled telnet (aka telnets): TCP 992.

Therefore, we want a tunnel from nearclient to TCP 992 on farserver. But how will our non-SSL-enabled telnet command and our equally non-SSL-savvy telnet server process know how to use this tunnel? That's a trick question; the tunnel is completely transparent to the sending and receiving telnet processes.

nearclient's Stunnel process accepts the connection on the usual port (TCP 23, although this is user-definable) and then encrypts the packets with SSL before forwarding them to TCP port 992 on farserver. farserver decrypts the packets and then forwards them to its local telnet process on TCP 23. Actually, xinetd or inetd receives the packets before in.telnetd does, but you get the picture.

In this way, when users on nearclient want to connect to farserver, they enter the command telnet 127.0.0.1, and the connection is encrypted, forwarded to farserver and decrypted. farserver's reply packets follow the same path but backward. Each telnet process (telnet and in.telnetd) thinks it's communicating with a local user, but the packets in fact are traversing an SSL-encrypted Stunnel session. All of which is a wordy way of explaining the six total lines that comprise Listings 3 and 4.
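Listings 3 and 4 themselves aren't shown here, but service definitions in this spirit would produce the behavior just described (the section name and address forms are my assumptions, three lines per host):

```ini
; nearclient's stunnel.conf -- accept local telnet, forward over SSL
[telnets]
accept  = 127.0.0.1:23
connect = farserver:992

; farserver's stunnel.conf -- accept SSL on 992, hand off to telnetd
[telnets]
accept  = 992
connect = 127.0.0.1:23
```

With both dæmons running, telnet 127.0.0.1 on nearclient should land you at farserver's login prompt.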


