Protect Your Ports with a Reverse Proxy
In a previous article, I discussed Apache Tomcat, which is the ideal way to run Java applications from your server. I explained that you can run those apps from Tomcat's default port 8080, or you can configure Tomcat to use port 80. But what if you want to run a traditional Web server and host Java apps on port 80? The answer is to run a reverse proxy.
The only assumption I make here is that you have a Web-based application running on a port other than port 80. This can be a Tomcat app, like I discussed in my last article, or it can be any application that exposes its interface via the Web (such as Transmission, Sick Beard and so on). The other scenario I cover here is running a Web app from a second server, even if it's on port 80, but you want it to be accessed from your central Web server. (This is particularly useful if you have only one static IP to use for hosting.)
The way reverse proxying works, at least with the Apache Web server, is that every application is configured as a virtual host. Just like you can host multiple Web sites from a single server using virtual hosting, you also can host separate Web apps as virtual hosts from that same server. It's not terribly difficult to configure, but it's very useful in practice. First things first. On your server, you have the Web server installed (Figure 1). You also have a Web application on port 8080 (Figure 2). Along with the working Apache Web server, you need to make sure virtual hosting (by name) is enabled.
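To make the virtual hosting idea concrete, here is a minimal sketch of two name-based virtual hosts sharing one Apache server on port 80. The hostnames and document roots are illustrative, not from the article; substitute your own:

```apache
# Two name-based virtual hosts on one server (hostnames hypothetical).
# Apache picks the host whose ServerName matches the request's Host header.
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName   www.example.com
    DocumentRoot /var/www/main
</VirtualHost>

<VirtualHost *:80>
    ServerName   app.example.com
    DocumentRoot /var/www/app
</VirtualHost>
```

A reverse-proxy virtual host works the same way, except that instead of a DocumentRoot, it hands the request off to a back-end server.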
Figure 1. I have Apache installed, and it's hosting a very simple page on port 80.
Figure 2. I have a Web application running on port 8080 on the server located at 192.168.1.11.
Enabling Name-Based Virtual Hosts
Enabling name-based virtual hosting on Apache is extremely common, and it's very simple to do. Depending on what distribution you're using, the "proper" location for enabling name-based virtual hosting may differ. The nice thing about Apache, however, is that generally as long as the directive is specified somewhere in the configurations, Apache will honor it.
My local test server is running Ubuntu. In order to determine where the "proper" place to enable name-based virtual hosting is, I simply went to the /etc/apache2 directory and executed:
grep NameVirtualHost *
That command searches for the NameVirtualHost directive, and it returned this:
root@server:/etc/apache2# grep NameVirtualHost *
ports.conf:NameVirtualHost *:80
ports.conf:    # If you add NameVirtualHost *:443 here,
ports.conf:    # you will also have to change
Those results tell me that the NameVirtualHost directive is specified in the /etc/apache2/ports.conf file. (Note that grep will return only the lines that contain the search term, which is why it shows those two out-of-context lines above. The important thing is the filename ports.conf, which is what I was looking for.) Again, with Apache, it generally doesn't matter where you specify directives, but I like to stick with the standards of the particular distribution I'm using, if only for the sake of future administrators.
To enable name-based virtual hosting, you simply uncomment:

NameVirtualHost *:80

from the file, and save it. If you can't find a file that contains such a directive commented out, just add the line to your apache.conf or httpd.conf file. Then you need to specify a VirtualHost directive for the virtual host you want to create. This process is the same whether you're making a traditional virtual host or a reverse proxy virtual host.
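As a sketch of what such a reverse-proxy virtual host might look like, the following proxies a hostname on port 80 through to the Web app running on 192.168.1.11:8080 (the back-end address from Figure 2; the ServerName is hypothetical):

```apache
# Hypothetical reverse-proxy virtual host: requests for java.example.com
# on port 80 are forwarded to the back-end app on 192.168.1.11:8080.
# Requires mod_proxy and mod_proxy_http to be enabled first, e.g. on
# Debian/Ubuntu:  a2enmod proxy proxy_http  then reload Apache.
<VirtualHost *:80>
    ServerName java.example.com

    ProxyPass        / http://192.168.1.11:8080/
    ProxyPassReverse / http://192.168.1.11:8080/
</VirtualHost>
```

ProxyPass forwards incoming requests to the back end; ProxyPassReverse rewrites Location headers in the back end's responses so redirects still point at the proxy rather than the internal address.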