Paranoid Penguin - Linux VPNs with OpenVPN, Part V
In my four previous columns, I showed, in painstaking detail, how to set up OpenVPN to allow remote users to create secure remote-access connections—Virtual Private Network (VPN) tunnels—over the Internet back to your personal or corporate network. By now, you should understand how VPN technologies in general, and TLS/SSL-based VPNs in particular, work and how to create working server and client configurations for OpenVPN.
This month, I wrap up the series with some miscellaneous but important notes about the previous columns' client-server scenario, including instructions on enabling IP forwarding, some tips on using a Web proxy and enforcing DNS use through the tunnel, and advice on “hiding” all VPN clients' IP addresses behind that of your OpenVPN server.
Throughout this series, I've been implementing the OpenVPN server configuration shown in Listing 1, which causes OpenVPN to run in server mode. In my example scenario, I've got only one remote user connecting to this OpenVPN server, but if you have more, you should edit the max-clients parameter accordingly. Remember, because I've also set fairly liberal tunnel timeouts in order to minimize the odds that a tunnel will go down due to network problems, you should add 1 or 2 to the actual number of maximum concurrent client connections you expect.
Listing 1. Server's server.ovpn File
port 1194
proto udp
dev tun
ca 2.0/keys/ca.crt
cert 2.0/keys/server.crt
key 2.0/keys/server.key  # This file should be kept secret
dh 2.0/keys/dh1024.pem
tls-auth 2.0/keys/ta.key 0
server 10.31.33.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp"
keepalive 10 120
cipher BF-CBC  # Blowfish (default)
comp-lzo
max-clients 2
user nobody
group nogroup
persist-key
persist-tun
status openvpn-status.log
verb 3
mute 20
The other setting in Listing 1 that I need to review is push "redirect-gateway def1 bypass-dhcp", which pushes the OpenVPN server's local default gateway setting to all clients. This has the effect of causing VPN clients to route all their Internet traffic through the VPN tunnel, which (as I discuss shortly) has important security benefits.
The client configuration file that corresponds to Listing 1 is shown in Listing 2. This file works equally well on Linux and Windows client systems. Remember that the parameter remote specifies the IP address or hostname of your OpenVPN server and the port on which it's accepting connections.
Remember also that the files ca.crt, minion.crt, minion.key and ta.key specified by the parameters ca, cert, key and tls-auth, respectively, need to be generated beforehand and placed alongside the configuration file itself in /etc/openvpn. The certificate and key specified by ca and cert should be unique for each client system!
Listing 2. Client's client.ovpn File
client
proto udp
dev tun
remote 188.8.131.52 1194
nobind
ca ca.crt
cert minion.crt
key minion.key
ns-cert-type server
tls-auth ta.key 1
cipher BF-CBC
comp-lzo
user nobody
group nogroup
persist-key
persist-tun
mute-replay-warnings
verb 3
mute 20
Again, the purpose of the server configuration in Listing 1 and the client configuration in Listing 2 is to allow a remote user to connect from over the Internet back to the “home” network on which the OpenVPN server resides. (This may or may not be your residence. By home network, I mean “trusted corporate or personal network”, as opposed to the remote network from which you're trying to connect.) Last month, however, I forgot to mention a critical step that you must perform on your OpenVPN server if you want remote clients to be able to communicate with anything besides the server itself: enabling IP forwarding.
By default, almost any Linux system is configured not to allow network packets entering one network interface to be forwarded to and sent out of a different network interface. This is a Linux security feature. It helps reduce the likelihood of your Linux system linking different networks together in undesirable or unintended ways.
But, generally you do want an OpenVPN server to link different networks. The exceptions to this are when:

- All services and resources your remote users need are housed on the OpenVPN server itself.

- Proxy applications running on the OpenVPN server can broker connections to services not hosted on it.
In the first case, once remote users have connected to the OpenVPN server successfully, they can connect to other services hosted on that server by targeting the server's real/local IP address rather than its Internet-facing address. For example, the client configuration in Listing 2 is targeting a server address of 188.8.131.52, which is Internet-routable. Suppose that this is actually a router or firewall address that is translated to your OpenVPN server's address 10.0.0.4.
To ssh to the OpenVPN server after you've established a tunnel to it, you'd target 10.0.0.4, not 188.8.131.52. The same would apply to Samba, NFS, HTTP/S or any other service running on the OpenVPN server.
In the second case, to reach other resources on the remote network, you would configure the applications running on your client system to use the OpenVPN server's real/internal address (10.0.0.4) as its proxy address. The best example of this is Squid. If all the resources you wanted to reach on your remote network involve Web services, you could run Squid on the OpenVPN server and configure your client's Web browser to use 10.0.0.4 as its proxy address (although this will work only when the tunnel is up).
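To make the Squid scenario concrete, a minimal squid.conf sketch might look like the following. This is an illustration, not Squid's shipped default: the subnet comes from Listing 1's server directive, 3128 is Squid's conventional listening port, and the ACL name vpn_clients is arbitrary.

```
# Hypothetical minimal squid.conf fragment for proxying VPN clients
http_port 3128                        # Squid's conventional listening port
acl vpn_clients src 10.31.33.0/24     # the tunnel subnet from Listing 1
http_access allow vpn_clients         # allow proxying only from VPN clients
http_access deny all                  # refuse everyone else
```

With something like this in place, you'd point each client browser's proxy setting at 10.0.0.4, port 3128.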
In either of the above scenarios, you don't need IP forwarding enabled on the OpenVPN server, because all direct communication between VPN clients and your home network terminates on the OpenVPN server. If, however, your clients need to reach other hosts on the home network or beyond, without using the OpenVPN server as a proxy, you do need to enable IP forwarding.
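If, besides forwarding, you also want to hide your VPN clients' addresses behind the server's own (as mentioned at the start of this column), the usual approach is source NAT. Here's a sketch in iptables-restore format; eth0 is an assumption for the server's LAN-facing interface, and the subnet is the tunnel network from Listing 1, so adjust both for your environment.

```
# Hypothetical NAT fragment (iptables-restore format); eth0 is assumed
*nat
-A POSTROUTING -s 10.31.33.0/24 -o eth0 -j MASQUERADE
COMMIT
```

The equivalent one-off command would be iptables -t nat -A POSTROUTING -s 10.31.33.0/24 -o eth0 -j MASQUERADE. MASQUERADE rewrites each outbound packet's source address to that of the outgoing interface, so hosts beyond the server see only the server's address, not the clients'.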
This is very simple. To turn on IP forwarding without having to reboot, simply execute this command:
bash-$ sudo sysctl -w net.ipv4.ip_forward=1
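To confirm the change took effect, you can read the flag back from the /proc filesystem, which exposes the same kernel parameter:

```shell
# Read the IP forwarding flag: 1 means forwarding is enabled, 0 means off
cat /proc/sys/net/ipv4/ip_forward
```

Equivalently, sysctl net.ipv4.ip_forward (without -w) prints the current value.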
To make this change persistent across reboots, uncomment the following line in /etc/sysctl.conf (you'll need to su to root or use sudo to edit this file):

net.ipv4.ip_forward=1