Designing and Using DMZ Networks to Protect Internet Servers
It would seem to be common sense that each host on a DMZ must be thoroughly nailed down. But sure enough, one commonly encounters organizations paranoid (prudent) enough to have a DMZ but not quite paranoid enough to secure their DMZ properly. The good news is that with a little time and a properly suspicious attitude, you can significantly lower any system's exposure to compromise by script kiddies.
Always run the latest stable version of your operating system, software and kernel, and keep current with security patches as they are released.
If everyone followed this simple and obvious tenet, the “Rumors from the Underground” list of hacked web pages on www.hackernews.com would be a lot shorter. As we discussed last month in “Securing DNS”, the vast majority of DNS-based hacks don't apply to the most recent versions of BIND; the same can safely be said for most other Linux network software packages. We all know this, but we don't always find the time to follow through.
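For example, on an RPM-based system you can quickly check which version of BIND you're running and compare it against the latest release announced by the ISC (the package name bind is typical but assumed here):
rpm -q bind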
A program you don't use for anything important is a program you've got little reason to maintain properly and is, therefore, an obvious target for attackers. This is an even easier thing to fix than old software. At setup time, simply delete or rename all unneeded links in the appropriate runlevel directory in /etc/rc.d/.
For example, if you're configuring a web server that doesn't need to be its own DNS server, you would enter something like the following:
mv /etc/rc.d/rc2.d/S30named /etc/rc.d/rc2.d/disabled_S30named
(Note that your named startup script may have a different name and may exist in different or additional subdirectories of /etc/rc.d.)
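If you're not sure what the script is called or where its links live, a quick find will list every match under /etc/rc.d:
find /etc/rc.d -name '*named*'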
While any unneeded service should be disabled, the following deserve particular attention:
RPC services: Sun's Remote Procedure Call protocol (which is included nowadays on virtually all flavors of UNIX) lets remote systems execute commands and share resources via rsh, rcp, rlogin, nfs, etc. Unfortunately, it isn't a very secure protocol, especially for use on DMZ hosts. You shouldn't be offering these services to the outside world—if you need their functionality, use ssh (the Secure Shell), which was specifically designed as a secure replacement for the r-commands. Disable (rename) the nfsd and nfsclientd scripts in all subdirectories of /etc/rc.d in which they appear, and comment out the lines corresponding to any r-commands in /etc/inetd.conf, as shown below. (Warning: local processes sometimes require the RPC portmapper, aka rpcbind—disable this with care, and try re-enabling it if other things stop working.)
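The r-command entries in /etc/inetd.conf typically begin with shell, login and exec; this grep lists any that are still enabled:
grep -E '^(shell|login|exec)' /etc/inetd.conf
Comment out (prefix with #) anything it turns up, then tell inetd to reread its configuration (e.g., killall -HUP inetd).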
inetd: The Internet Dæmon is a handy way to use a single process (i.e., inetd) to listen on multiple ports and to invoke the services on whose behalf it's listening on an as-needed basis. Its useful life, however, is drawing to a close: even with TCP Wrappers (which allow one to turn on very granular logging for each inetd service), it isn't as secure as simply running each service as a standalone dæmon. (An FTP server really has no reason not to be running FTPD processes all the time.) Furthermore, most of the services enabled by default in inetd.conf are unnecessary, insecure or both. If you must use inetd, edit /etc/inetd.conf to disable all services you don't need (or have never heard of), as in the sample entries below. Note: many RPC services are started in inetd.conf.
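A disabled service in /etc/inetd.conf is simply a commented-out line. Entries like these are typical (the dæmon paths are illustrative and vary by distribution):
#ftp stream tcp nowait root /usr/sbin/tcpd in.ftpd -l -a
#telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd
#shell stream tcp nowait root /usr/sbin/tcpd in.rshd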
linuxconfd: While there aren't any known exploitable bugs in the current version of linuxconf (a system-administration tool that can be accessed remotely), CERT reports that this service is commonly scanned for and may be used by attackers to identify systems with other vulnerabilities (CERT Current Activity page, 7/31/2000, www.cert.org/current/current_activity.html). If you don't administer the host with linuxconf remotely, disable its listener as shown below.
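On systems that offer remote linuxconf access, it's usually started from /etc/inetd.conf, so checking for (and commenting out) its entry is straightforward:
grep linuxconf /etc/inetd.conf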
sendmail: Many people think that sendmail, which is enabled by default on most versions of UNIX, is necessary even on hosts that send e-mail only to themselves (e.g., administrative messages such as crontab output sent to root by the crond dæmon). This is not so: sendmail (or postfix, qmail, etc.) is needed only on hosts that must deliver mail to or receive mail from other hosts. sendmail is usually started in /etc/rc.d/rc2.d or /etc/rc.d/rc3.d.
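As with named earlier, disabling sendmail at startup is just a matter of renaming its link; the name S80sendmail is typical of Red Hat-style systems and may differ on yours:
mv /etc/rc.d/rc2.d/S80sendmail /etc/rc.d/rc2.d/disabled_S80sendmail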
Telnet, FTP and POP: These three protocols have one very nasty characteristic in common: they require users to enter a username and password that are sent in clear text over the network. Telnet and FTP are easily replaced with ssh and its file-transfer utility scp; e-mail can either be automatically forwarded to a different host, left on the DMZ host and read through an ssh session, or downloaded via POP using ssh's "local forward" capability (i.e., piped through an encrypted Secure Shell session), as shown below. All three of these dæmons are usually started by inetd.
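For example, to pick up mail from a DMZ host via POP without sending your password in the clear, you could forward an arbitrary unprivileged local port (1100 here) through ssh to the host's POP port; the hostname and user name are placeholders:
ssh -L 1100:localhost:110 you@dmzhost.example.com
While that session is open, point your POP client at localhost, port 1100, and the entire exchange travels inside the encrypted tunnel.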