I have two network interfaces (eth0 and eth1) on a machine running Fedora 8. Each one has its own publicly visible IP address. Apache is listening on port 80 only on eth0. I have another server (not Apache) also speaking HTTP (call it server_b) which can be made to listen on any port. Obviously it cannot listen on port 80, as that is already taken by Apache.
What I would like to do is run server_b listening on some free port (e.g., 6677) and somehow redirect packets arriving on eth1 destined for port 80 to port 6677. Packets arriving on eth0 destined for port 80 will go to Apache as usual. This redirection needs to happen before arriving packets reach Apache. I am hoping for a solution in iptables or something like it. It seems possible to distinguish between packets meant for Apache and those meant for server_b, as they arrive on different network interfaces even though both are destined for port 80. The problem is that I do not know enough about iptables to be able to do this.
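From what I have read, the nat table's PREROUTING chain can match on the input interface, so I suspect a REDIRECT rule along these lines might do it (a sketch only, assuming eth1 and port 6677 as described; I have not verified it):

```shell
# Rewrite the destination port of TCP packets arriving on eth1 for port 80
# so they are delivered to server_b on local port 6677. The -i eth1 match
# should leave traffic arriving on eth0 (Apache) untouched.
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 6677
```

If I understand connection tracking correctly, replies from server_b should be un-translated automatically, so they would appear to come from port 80 without any extra rules.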
Another possibility is to make Apache listen on both interfaces and use its URL rewriting capabilities to forward the appropriate requests to port 6677. However, I do not know whether Apache can take the response from port 6677 and send it back out on eth1. In any case, I know even less about URL rewriting in Apache than about iptables.
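If I went the Apache route, I gather that mod_proxy (rather than URL rewriting) is the usual tool for this, and it does handle relaying the response back to the client. Something like the following in a virtual host bound to eth1's address might work (203.0.113.2 is a placeholder for eth1's real IP, and this assumes mod_proxy and mod_proxy_http are loaded):

```apache
# Hypothetical vhost on eth1's address; forward everything to server_b
# on the loopback interface and fix up redirect headers on the way back.
<VirtualHost 203.0.113.2:80>
    ProxyPass        / http://127.0.0.1:6677/
    ProxyPassReverse / http://127.0.0.1:6677/
</VirtualHost>
```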
It is probably easiest to simply allow access to port 6677 from the outside. However, the sysadmins I work with will allow that only as a last resort.
Of course, outgoing packets generated by server_b will need to be manipulated as well. They will need to look as if coming from port 80 rather than 6677. There might be other issues which I have not thought about yet.
As an experiment (on another machine), I started httpd listening on port 80 and then tried to redirect packets arriving on port 8080 to port 80 using iptables. If the redirection succeeds, index.html should become accessible at http://localhost:8080/. I tried the nat table, but it seems that packets destined for the machine itself only see the INPUT chain of the filter table.
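Reading about the netfilter packet flow, I suspect my experiment failed because locally generated packets (such as a request to localhost) never traverse nat/PREROUTING; they go through the nat table's OUTPUT chain instead. So testing from the same machine might need a rule like this (again, an unverified sketch):

```shell
# Locally generated packets to 127.0.0.1:8080 pass through nat/OUTPUT,
# not nat/PREROUTING, so the redirect must be placed there for this test.
iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 8080 -j REDIRECT --to-port 80
```

A request from a different machine, on the other hand, would exercise the PREROUTING rule, which is closer to my real use case anyway.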
Any suggestions, solutions or guidance will be much appreciated.