Linux Means Business: A Case Study of Pakistan On-Line
At POL, we have been using Apache from the beginning, and we have found it an excellent web server as far as stability and performance are concerned. We run many virtual domains on a single web server, using only one IP address. Basic configuration and virtual hosting for Apache are quite simple. Apache also supports SSL, URL caching and protected user directories. All POL users have their own web pages, kept in their home directories on the Apache server, and they are free to update these pages at any time.
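As an illustration, name-based virtual hosting of the kind described above needs only a few directives in the Apache configuration file. This is a minimal sketch; the IP address, host names and paths are hypothetical examples, not POL's actual setup:

```apache
# Two virtual domains sharing one IP address (Apache 1.x style).
# Addresses and names below are examples only.
NameVirtualHost 192.168.1.5

<VirtualHost 192.168.1.5>
    ServerName www.client-one.com.pk
    DocumentRoot /home/www/client-one
</VirtualHost>

<VirtualHost 192.168.1.5>
    ServerName www.client-two.com.pk
    DocumentRoot /home/www/client-two
</VirtualHost>

# Per-user pages served out of home directories, e.g.
# http://www.example.net.pk/~user/
UserDir public_html
```

Apache picks the right virtual host from the Host: header the browser sends, which is what makes many domains on one IP address possible.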
The Apache server is of great commercial use, since you can connect it to databases through interfaces such as ODBC. In fact, we have tested it with mSQL, MySQL and ODBC at Pakistan On-Line. Some clients are interested in co-locating their database servers at the POL premises and linking them to the Apache server to publish on-line databases. We are confident this will be a great success for us here in Pakistan.
Pakistan On-Line is using the Linux kernel features for firewalling, IP masquerading and transparent proxying. We have installed Linux-based firewalls for some of our corporate clients. We have tested both packet filtering and proxy-based firewalls. For a proxy-based firewall, we have used Trusted Information Systems (TIS) Firewall Toolkit (FWTK), which is freely available in source code form on the Internet.
IP masquerading has been useful in environments where IP addresses are scarce. This situation prevails in most developing countries, where we do not have enough IP addresses for corporate clients. In those circumstances, IP masquerading plays a very important role: a single Linux box can let a virtually unlimited number of computers connect to the Internet through one legal IP address.
Some of POL's corporate clients have dozens of computers on their private networks where they need Internet access. It is difficult for us to assign as many IPs to these customers as they want. Instead, we use the Linux IP masquerading feature on a computer running Linux, which then acts as a gateway for the private LAN. The company is then free to add as many computers to its private LAN as it wants, without consulting the ISP again and again.
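On the 2.0-series kernels in use at the time, such a masquerading gateway is set up with the ipfwadm utility. The commands below are a sketch only; the network number is an example, and the commands must run as root on a kernel built with masquerading support:

```shell
# Sketch: IP masquerading gateway for a private LAN (2.0 kernel, ipfwadm).
# 192.168.1.0/24 is an example private network, not POL's real numbering.

# Turn on IP forwarding in the kernel
echo 1 > /proc/sys/net/ipv4/ip_forward

# Deny forwarding by default...
ipfwadm -F -p deny

# ...then masquerade everything from the private LAN to anywhere
ipfwadm -F -a m -S 192.168.1.0/24 -D 0.0.0.0/0
```

Every machine on the private LAN then uses the Linux box as its default gateway, and all of its traffic appears on the Internet as coming from the gateway's single legal address.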
This arrangement has been very useful for us and has given POL an edge over other ISPs here. Other ISPs try to provide a similar solution based on Windows NT Server and Microsoft Proxy Server, which is costly in terms of both hardware and software. The NT-based system also does not pass all protocols through as transparently as Linux does.
The transparent proxy feature in the Linux kernel is also very useful. We use it, with the help of Cisco routers, to force all WWW traffic to pass through our proxy server. On the Cisco router, extended access lists redirect all outgoing traffic on port 80 through the Linux server. The tproxyd package has also proven very useful in our operations; more information on tproxyd can be obtained from its web site (see Resources). A sample Cisco configuration that does the job might look like this (addresses are examples):
interface Ethernet0
 ip address 192.168.1.1 255.255.255.0
 ip policy route-map proxy-redir
!
access-list 101 permit tcp 192.168.1.0 0.0.0.255 any eq www
!
route-map proxy-redir permit 20
 match ip address 101
 set ip default next-hop 192.168.1.10
!

Now, when the router receives an IP packet with a destination port of 80 from any computer on the local network 192.168.1.0/24, it redirects that packet to 192.168.1.10, the Linux server running as the proxy. The Linux ipfwadm utility is then used to manage this traffic on the proxy server.
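On the Linux side, ipfwadm's redirect option completes the transparent proxy setup by steering the policy-routed web traffic into the local proxy port. This is a sketch; the network number and the proxy port (8080) are examples, and the kernel must be compiled with transparent proxy support:

```shell
# Sketch: accept incoming port-80 traffic from the example LAN and
# redirect it to the local proxy listening on port 8080
# (2.0 kernel with CONFIG_IP_TRANSPARENT_PROXY enabled).
ipfwadm -I -a accept -P tcp -S 192.168.1.0/24 -D 0.0.0.0/0 80 -r 8080
```

The browsers on the LAN need no configuration at all; as far as they are concerned, they are talking directly to the remote web servers.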
As far as caching is concerned, Squid has remained our choice; it provides very good performance as well as stability. We also use it together with the transparent proxy feature of Linux for extra benefit. A number of tools that analyze Squid's performance and present its logs graphically are available on the Internet.
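With the Squid releases current at the time, transparent operation required the httpd_accel directives in squid.conf. The fragment below is a minimal sketch under that assumption; the port number matches the example redirect rule above and is not POL's actual configuration:

```
# squid.conf fragment for transparent proxying (Squid 1.1/2.x era).
# Port 8080 is an example; it must match the ipfwadm redirect port.
http_port 8080
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
```

The httpd_accel_uses_host_header directive is what lets Squid recover the original destination from the browser's Host: header, since a transparently intercepted request no longer carries a full proxy-style URL.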
The accounting and billing system is the most important part of an ISP's operation, since it is the process that generates revenue; it needs to be very accurate and stable. We have three types of systems that support dial-in users:
Linux-based servers with multiport cards. The login and logout information from these is obtained through syslogd.
Xyplex Max 1640 servers that can send both RADIUS and syslogd information. We are using syslogd information for billing purposes.
Cisco remote access servers, which send accounting information to the RADIUS server.
We have developed a billing system in C that gets information from all three types of servers and generates user log files. Any user can see his billing information whenever required. We also use shell scripts for some housekeeping jobs in our billing system. It has proven to be a very good and user-friendly system, and two other small ISPs now use this same billing system.