Configuring and Using an FTP Proxy
And now, finally, it's time to configure your proxy dæmon. As I mentioned, this is done in the file ftp-proxy.conf, which resides either in /etc/proxy-suite or in /usr/local/etc/proxy-suite. You may be confused or annoyed by SuSE's use of the term “suite” to refer to a single application. Hopefully, additional proxies will be completed soon, and if they're as useful as ftp-proxy, I, for one, will forgive them for this minor conceit.
The quickest way to explain this file is to list a brief example and dissect it (see Listing 1).
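Listing 1 itself isn't reproduced here, but a minimal ftp-proxy.conf built from the parameters discussed in this section might look something like the following sketch. The directive names are real ftp-proxy options; every value shown is an illustrative assumption, not the article's actual listing.

```
# ftp-proxy.conf -- illustrative sketch, not the article's Listing 1.
# Directive names are ftp-proxy's own; the values are assumptions.
ServerType          standalone       # or "inetd" if run from inetd/xinetd
User                ftpproxy         # unprivileged UID to drop to
Group               ftpproxy         # unprivileged GID to drop to
LogDestination      daemon           # local syslog; a file or pipe also works
LogLevel            INF              # DBG (maximum) for troubleshooting
PidFile             /var/run/ftp-proxy.pid
ServerRoot          /var/ftp-proxy   # chroot jail; comment out to skip chroot
AllowTransProxy     yes              # accept Netfilter-redirected connections
DestinationAddress  ftp.example.com  # default destination FTP server
AllowMagicUser      no               # "yes" permits user@host destinations
```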
The first parameter, ServerType, determines whether to run ftp-proxy as a standalone dæmon or from inetd. Although I've been calling it a dæmon, ftp-proxy can be run either way. I personally avoid running inetd or even xinetd on my public servers, because that way I don't need to disable the unnecessary things that tend to get run by default, and because of the performance benefit of running things as dæmons. If your needs are different, you can set ServerType to inetd (which also works if you run xinetd rather than inetd).
User and Group, obviously enough, determine the UID and GID under which ftp-proxy runs after initialization. It's a good idea to set these to an unprivileged UID and GID in order to lessen the consequences of an attacker somehow hijacking an ftp-proxy process.
LogDestination specifies where ftp-proxy should send log messages. This can be either dæmon (the local syslog facility), a file or a pipe. LogLevel determines the quantity of information to be logged; for most users the default of INF is best, but DBG (the maximum setting) is useful for troubleshooting.
PidFile tells ftp-proxy where to store the process ID of its master process. This is used by the startup script when it's invoked with the stop command and upon system halt. It isn't used, however, if ftp-proxy is run in inetd mode.
ServerRoot specifies the path to ftp-proxy's chroot jail. Leave it commented out if you don't want to run ftp-proxy chrooted (see the “Problem with 1.9 and chroot” Sidebar).
The next three commands in Listing 1 are important. They determine whether your proxy will be transparent. In most situations, a transparent proxy is preferable: end users won't need to configure their FTP client software to explicitly support the proxy. To achieve this, ftp-proxy works in conjunction with the kernel's Netfilter code, which redirects FTP packets to your proxy dæmon rather than sending them to the host to which they're actually addressed.
When ftp-proxy receives FTP client packets that have been redirected in this way, it uses their destination IP as the destination of the new FTP connection it initiates to the desired FTP server. The parameter DestinationAddress specifies the default destination to use.
If you want to allow users to use the proxy non-transparently, i.e., by initiating their FTP sessions directly to the proxy, set the parameter AllowMagicUser to “yes”. I don't recommend doing so if your proxy is to be used by external users, however, as in the case of a public FTP server: AllowMagicUser will cause your proxy to act as an open proxy that external users may use to connect to other, external FTP servers, possibly for the purpose of attacking them.
If you've configured Netfilter to accept connections to the proxy from trusted (internal) users only, however, and you set AllowMagicUser to “yes”, users will be able to specify their FTP destination by attaching it to their user name with an @ sign, e.g., email@example.com. AllowMagicUser may be used regardless of whether AllowTransProxy is set to yes or no. But note that if both are set to no, all FTP sessions will use DestinationAddress.
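With AllowMagicUser enabled, a non-transparent session from an internal host might look like this. The hostnames are hypothetical, and the exact prompt wording varies by FTP client:

```shell
# Hypothetical session; proxy.example.com and ftp.example.com are
# illustrative names, not taken from the article.
$ ftp proxy.example.com
Connected to proxy.example.com.
Name: anonymous@ftp.example.com    # "magic user": username@destination-host
Password:                          # password for the destination server
```

The proxy strips everything after the @ sign, uses it as the destination host and logs the user in to that server with the remaining user name.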
Other parameters include MaxClientsString and DestinationTransferMode. See the ftp-proxy.conf(8) man page for the complete list and for more information on the ones we've covered here.
For transparent proxying to work you need to use iptables to redirect FTP packets to the local proxy (i.e., you need to run Netfilter on your proxy host, which this article assumes you're doing), and of course, you'll need rules allowing FTP connections to and from the proxy. You will not, however, need any rules in the FORWARD chain.
First, you'll need to load several modules for your Linux 2.4 firewall to support transparent proxying: ip_conntrack_ftp is required for FTP connection tracking, ip_nat_ftp for NAT of FTP data connections, and ipt_REDIRECT for the REDIRECT rule target. Most distributions' stock 2.4 kernels include these modules.
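Loading these by hand (or from your firewall script) might look like the following; module names follow the 2.4 Netfilter tree, though your distribution may load some of them automatically:

```shell
# Load the FTP connection-tracking helper, the FTP NAT helper
# and the REDIRECT rule target (requires root).
modprobe ip_conntrack_ftp
modprobe ip_nat_ftp
modprobe ipt_REDIRECT
```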
Once the modules are loaded, you can add firewall rules like these to your Netfilter startup script (Listing 2).
The first two commands of Listing 2 instruct the firewall to redirect all packets received on its external and internal interfaces (eth2 and eth0, respectively) that have a destination port of TCP 21 (the FTP server port). Note that these packets won't be rewritten (mangled) in any way; they'll simply be redirected to the local FTP proxy dæmon.
The third and fourth commands in Listing 2 tell the firewall to accept all incoming packets sent to TCP port 21 of the public FTP server (where the variable PUBLIC_FTP contains its IP address) and all incoming FTP packets sent by internal users (where the variable INTERNAL_HOSTS contains an IP range in CIDR notation, e.g., 192.168.99.0/24). Per the first two lines, any packets matching lines three and four will be diverted to the local proxy.
The fifth and sixth lines in Listing 2 allow the local ftp-proxy dæmon to initiate proxied FTP connections to the specified public FTP server and to external FTP servers (i.e., hosts reachable from its external Ethernet interface, in this example, eth2).
The lines in Listing 2 do not form a self-contained Netfilter rulebase. They represent the lines you could add to an existing script already properly configured for NAT, etc., and already containing definitions for the variables PUBLIC_FTP and INTERNAL_HOSTS. It's good practice to use custom variables like this to make your rules more readable.
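Since Listing 2 itself isn't shown here, rules matching the six lines described above might look like the following sketch. The interface names and the variables PUBLIC_FTP and INTERNAL_HOSTS follow the article; the sample values, chain choices and everything else are assumptions about how such a script could be written, and the real listing may differ in details such as state matching.

```shell
#!/bin/sh
# Illustrative sketch of the six rules described; not the article's Listing 2.
PUBLIC_FTP="10.0.0.25"              # example address of the public FTP server
INTERNAL_HOSTS="192.168.99.0/24"    # internal client range in CIDR notation

# Lines 1-2: redirect FTP control packets arriving on the external (eth2)
# and internal (eth0) interfaces to the local ftp-proxy daemon.
iptables -t nat -A PREROUTING -i eth2 -p tcp --dport 21 -j REDIRECT
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 21 -j REDIRECT

# Lines 3-4: accept FTP packets addressed to the public FTP server and
# FTP packets sent by internal users (these are the packets the
# REDIRECT rules divert to the proxy).
iptables -A INPUT -p tcp -d "$PUBLIC_FTP" --dport 21 -j ACCEPT
iptables -A INPUT -p tcp -s "$INTERNAL_HOSTS" --dport 21 -j ACCEPT

# Lines 5-6: let the local proxy initiate connections to the public FTP
# server and to external FTP servers reachable via eth2.
iptables -A OUTPUT -p tcp -d "$PUBLIC_FTP" --dport 21 -j ACCEPT
iptables -A OUTPUT -o eth2 -p tcp --dport 21 -j ACCEPT
```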