Cost-Effective Services for the Office
Aerofil Technology had been using cc:Mail to support internal e-mail needs for about a year and a half when I joined the organization in 1995 as an Information Services Technician. The IS manager at that time had implemented cc:Mail along with a major upgrade of computers to Windows 3.11. Networking was installed to allow connections to a Novell server for file and print services as well as access to the corporate accounting and manufacturing software.
During my first year, the computer inventory, user base and cc:Mail usage all grew, requiring the purchase of additional licenses. In 1996, my boss left to do independent consulting and I was given the opportunity to run Information Services. A one-person department has its advantages; in particular, I could make all decisions regarding technology. It also has its disadvantages, such as limited funding for projects and necessities like software licenses.
During this time, our local phone company began providing Internet service and Aerofil signed up for the corporate option, including 20 e-mail accounts which were quickly issued to a privileged few. No grandiose plan was in place with the initial sign-up, but once a few employees had Internet e-mail, everyone wanted it. Also, more and more of our customers wanted to communicate in this way. Having the ISP maintain 70+ e-mail accounts would obviously be costly, and we would not have control over the accounts. The ability to change passwords was important. So, I asked for help from our ISP in obtaining a domain name and in hosting that domain for the company.
Because so many users required access to e-mail, I set up a Linux machine with diald and IP masquerading and named it “Gatekeeper”: an Intel P133 with 32MB of RAM. This machine was a vast improvement over the previous configuration, in which certain individuals had their own modems and a modem server allowed only one person at a time to connect. With Gatekeeper running, everyone had equal access. Whoever needed the link first triggered the dial-out, and once the connection was up, everyone could use it immediately. Typically, once I had my e-mail client running, the connection stayed up all day. This was not a problem for our ISP, since their heaviest usage occurred between 8 PM and 10 PM; we were on-line only between 7 AM and 5 PM.
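Both pieces of this setup are driven by small configuration files. The sketch below shows roughly what a diald plus kernel-2.0 masquerading configuration of that era might have looked like; the device name, chat script path and address ranges are illustrative assumptions, not the actual values used on Gatekeeper.

```
# /etc/diald/diald.conf -- dial the ISP on demand over PPP
mode ppp
device /dev/ttyS1                       # assumed modem port
speed 115200
connect "chat -f /etc/diald/connect.chat"
local 192.168.0.1                       # placeholder addresses for diald's proxy link
remote 192.168.0.2
dynamic                                 # accept whatever addresses the ISP assigns
defaultroute

# Kernel 2.0-era masquerading, typically run once from a boot script
# (internal LAN range is an assumption):
#   ipfwadm -F -p deny
#   ipfwadm -F -a m -S 192.168.1.0/24 -D 0.0.0.0/0
```

With rules like these in place, any outbound packet from the LAN triggers diald to bring the link up, which matches the “first user dials, everyone rides along” behavior described above.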
With the Linux machine in place, I investigated setting up fetchmail so that we could handle all of our own e-mail accounts. At the same time, I also began looking at ways to gain more control of our web site maintenance. Our ISP required that any changes be e-mailed to them for implementation. They eventually set up a configuration to do this using Samba, but it was still troublesome due to name-mangling problems.
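fetchmail itself is configured through a simple run-control file. A minimal sketch, assuming POP3 service at the ISP and delivery to the local mail system; the host name and accounts below are placeholders, not Aerofil's actual details:

```
# ~/.fetchmailrc -- poll the ISP's mailboxes and hand mail to the local MTA
set daemon 300                          # poll every five minutes while running

poll "mail.isp.example" protocol pop3
    user "jsmith" password "secret" is "jsmith" here
    user "sales"  password "secret" is "sales"  here
```

Each `user ... is ... here` clause maps an ISP mailbox to a local account, which is what makes it practical to consolidate many accounts on one machine.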
Once our ISP began offering DSL (Digital Subscriber Line) service, I decided to wait until DSL could be implemented before pursuing direct e-mail and web site account management. Our IS budget would not allow us to implement DSL as a solution until January 1998.
In the meantime, in order to enable TCP/IP on our network, I set up a second Linux machine, appropriately named IPkeeper, to act as the DHCP (Dynamic Host Configuration Protocol) server—an old Intel 486SX33 with 16MB of RAM. This allowed us to specifically assign IP addresses by hardware address or simply from a set range of addresses. I set this up on a separate Linux machine in order to simplify the Apache configuration on Gatekeeper and to segregate the Intranet from the Internet.
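ISC dhcpd supports exactly this mix of assignment by hardware address and assignment from a range. A minimal dhcpd.conf sketch, with placeholder addresses and a hypothetical fixed-address host entry:

```
# /etc/dhcpd.conf -- ISC dhcpd sketch (all addresses and the MAC are placeholders)
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;      # pool for ordinary workstations
    option routers 192.168.1.1;             # Gatekeeper as the default gateway
    option domain-name-servers 192.168.1.1;
    default-lease-time 86400;
}

# Pin a specific machine to a fixed address by its Ethernet hardware address.
host payroll-pc {
    hardware ethernet 00:60:8c:12:34:56;
    fixed-address 192.168.1.10;
}
```

Keeping certain machines at known, fixed addresses also pays off later, since firewall rules that grant access per IP address depend on those addresses being stable.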
In January of 1998, I placed the order for the DSL connection and added an additional Ethernet card to Gatekeeper for the DSL equipment connection. By February, Aerofil had a 128K connection to the Internet. Testing showed the connection speed to be adequate for hosting our web page. I then notified the ISP to update DNS so that our domain pointed at Gatekeeper. I moved all of our web pages from the ISP to Gatekeeper and set up all the e-mail accounts that had existed at the ISP, as well as adding accounts for the other users who required e-mail. At this time, the company drafted an Internet and E-mail Usage Policy to discourage inappropriate use of the service. In hopes of conserving bandwidth, Internet access was restricted to those users with a justified need. This was accomplished by modifying the IP masquerading rules on Gatekeeper to allow access only from specified IP addresses.
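Restricting access this way amounts to enumerating the approved source addresses in the forwarding rules. On a 2.0-series kernel, the ipfwadm commands would look roughly like the following; the individual addresses are placeholders standing in for the real list of approved workstations:

```
# Flush the forwarding rules, default to deny, then masquerade
# only traffic from approved workstations.
ipfwadm -F -f
ipfwadm -F -p deny
ipfwadm -F -a m -S 192.168.1.10/32 -D 0.0.0.0/0    # user with a justified need
ipfwadm -F -a m -S 192.168.1.11/32 -D 0.0.0.0/0
```

Because the default policy is deny, adding or revoking a user's Internet access is a one-line change to the boot script that loads these rules.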
With IPkeeper on the network, even more opportunities were available to us. Ongoing discussions had been held to decide how to “computerize” corporate documents and make them accessible to every PC on the network. A variety of software solutions for Windows NT were looked into and found to be too expensive for our company. Instead, I used existing Internet technology and set up IPkeeper to host our Intranet, upgrading it to an Intel PII-233 with 32MB of RAM.
In June of 1998, the company hired a college student for the summer to assist in getting our existing documents on-line. Some of the documents were in a word-processor format, some were flowcharts done in Visio, and some existed only in paper form. Several methods were used to get this information onto the computer: scanning, converting to PDF format and coding in HTML. I set up separate user areas, such as Human Resources, Information Services, Aerofil Process Descriptions, Quality Control and Safety Process Descriptions, to segregate the documents. Each area had security set up to ensure that only designated users could add or modify the information contained in it.
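On a Linux web server, per-area write control of this kind can be handled with ordinary Unix groups: one group per document area, with the setgid bit keeping new files owned by that group. A sketch, assuming a hypothetical document root and group name (the real layout and account names would differ):

```shell
# One group per document area; only members may add or modify files there.
mkdir -p /home/httpd/html/hr            # hypothetical area for Human Resources
groupadd hr-docs                        # hypothetical group of designated editors
chgrp -R hr-docs /home/httpd/html/hr
chmod -R 2775 /home/httpd/html/hr       # group-writable; setgid keeps group ownership
gpasswd -a nobody hr-docs               # 'nobody' stands in for a real editor account
```

Apache only needs read access to serve the pages, so everyone can browse every area while write access stays limited to each area's designated editors.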