Linux and Networking: The Next Revolution
Revolution is defined as a sudden and fundamental change, and having seen the effects of one revolution does not always allow us to foresee those of the next. Revolutions can cause new waves of innovation and shifts in power and control, changes that are unnerving and exciting at the same time.
One might say that the computer world is experiencing revolutions in the areas of both hardware and software. What might happen in the networking arena when they intersect and amplify one another? Of particular interest is how possible changes will affect the functionality of the solutions available to end users, now that it is suddenly possible to build a new generation of network appliances that are more powerful and incorporate richer functionality than traditional router appliances and other networking equipment. The potential impact of Linux on networking is even more significant than the impact it has already had on the server market.
During the 1970s and 1980s, computer networks were predominantly host-based, with mainframes or midrange systems constituting most of the processing power. Users were directly connected to the host using simple, non-intelligent terminals.
In the early 1990s, computer network protocols converged into a few standards and were adopted for use with desktop computers. Local area networks became a requirement for almost any type of business. While network services were distributed among many servers, network connectivity migrated from host computers to specialized hardware designed and built to perform internetworking functions. These specialized network appliances (routers, switches and access servers) allowed more reliable, cost-effective and efficient networking. Network connectivity (provided by a router) and network services (running on the servers) are seen as separate entities. This is how most people understand networking today, but cheap hardware and open-source software are beginning to change this.
The PC revolution that consolidated in the late 1980s extended computing power to almost every office desktop. Fueled by a high level of competition and the establishment of industry-standard hardware and software, PCs became more affordable and more powerful. The Internet then triggered an explosion in demand for home computers. PC manufacturers were able to leverage the resulting volumes, and prices of PC-related hardware dropped sharply. An article in the May 1995 issue of PC Magazine said, “As of spring, the touchstone price is $1,999 (US) for a Pentium/75 multimedia system with 8MB RAM, a 700MB hard disk, and a 15-inch monitor.” In January of 1997, when the first PCs for under $1,000 were offered, the same magazine wrote, “So what does $999 get you? You can buy a 120MHz or 133MHz system, for less than $1,000.” As we start a new decade, consumer PC prices have dropped further. Today, we can buy desktop computers with CPUs running over 500MHz, 128MB of RAM and many built-in peripherals for a few hundred dollars. That is about the same price you would pay for a typical access router with much less impressive hardware specifications.
Because the architectures of servers and desktops are similar, manufacturers can take advantage of the low cost of components to build inexpensive server systems. Although cost was one of the important factors in the substitution of routers for servers, this is no longer necessarily the case. Standard hardware components are becoming so inexpensive that it is almost impossible for the manufacturer of a proprietary hardware device to be competitive. This trend is reaching a threshold where the addition of a catalyst could trigger a paradigm shift.
Linux may be that catalyst. Distributed without restrictions on use and installation, it is always provided with source code. It is not necessarily free (zero cost), but anyone can change or improve it to meet specific requirements. Frequently, those requirements are shared by others, and the changes or improvements are fed back to the community.
According to the latest IDC numbers (August 2000), Linux was the second most popular server OS in 1999, with 24% of new server licenses, and it is the fastest growing one (the Windows platforms together have 36%). The favorite for Internet-related applications, it is growing quickly in the enterprise market for corporate applications. Because of its roots in the Internet, Linux developed strong networking support, better than that of commercial operating systems. Because it is open source and receives the contributions of a huge developer community, Linux is also more flexible and evolves much faster. The Linux operating system has features, security and robustness comparable to a specialized internetworking operating system. Put it together with commodity hardware and the result is a very powerful network platform.
It's obvious that Linux is changing the server market landscape: 24% of the market is substantial, regardless of your preference. While it remains to be seen whether Linux will change the client/desktop market, its impact on the networking market is sure.
In this early stage of the revolution, there is still a need for technology integrators to make these benefits widely available. Some technical users are doing the integration themselves. They get communication boards, integrate them with standard PC hardware, and build their own Linux-based network boxes. One such example is Internet Service Providers who, instead of buying a PPP remote access server to provide dial-up Internet access, use Linux servers with multiport serial boards connected to modem banks to perform the same function. Some technology integrators, however, are already delivering a successful new generation of network appliance products. For example, the Cobalt Qube is an all-in-one Internet gateway for small- and medium-size businesses that can be fitted with a routing board for WAN connectivity. The whole solution integrates all the Internet functionality needed, including network services and connectivity, is very easy to set up and manage, and costs about the same as the less functional access router it replaces.
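A minimal sketch of that ISP setup, assuming pppd answers each modem line (the file name, addresses and DNS server below are illustrative placeholders, not details from this article):

```
# Illustrative pppd options for one dial-in modem line,
# e.g. /etc/ppp/options.ttyS0 -- all values are placeholders
auth                    # require the caller to authenticate (PAP/CHAP)
proxyarp                # publish the caller's address on the local LAN
ms-dns 192.0.2.1        # DNS server handed to dial-up clients
192.0.2.1:192.0.2.50    # local:remote IP addresses for this line
```

One such fragment per serial port, multiplied across a multiport board, is essentially what replaces a dedicated access server.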
But this new network device is not simply a more affordable replacement for the traditional router. It has the added advantages of expandability and flexibility. As routers were once better adapted for the network of the past, the new network appliance is better adapted to the realities of the future.
Users end up getting new products that are cheaper, better and easier to use; products that replace the router or access server, incorporate new networking services, and can be easily customized to each different application. It is a different kind of product.
To fully understand the impact of Linux in this mix, it's necessary to consider the latest IDC numbers (available at www.idc.com/itforecaster/itf20000808.stm). The total client and server OS market was about $17 billion in 1999. Windows generated almost $8 billion in revenues, while Linux generated less than $100 million. Considering that Linux now has a substantial share of the market, those numbers are shocking. From the user standpoint, the value of a solution is the same, independent of the OS being used; so one would expect revenues to be proportional to the market share. If Microsoft is making $8 billion on Windows, where are the Linux revenues? Because Linux combines open architecture and the business models implied in the open-source model, its “revenues” translate almost directly into savings for end users, savings that can be used to pay for integration and services that produce better solutions for each user.
This gives some idea of the impact and the shift in control and power that Linux brings to the table. But our focus is networking, rather than general-purpose operating systems. According to the latest Data Communications market forecast, the size of the network equipment market in 1999 was $70 billion in the U.S. and $120 billion worldwide. All of this money is going to the equipment manufacturers, holders of proprietary software and hardware technology, such as Cisco and Nortel.
So, when open architectures replace proprietary boxes, a lot of money will change hands. In the networking market this is happening in both software and hardware at the same time, so the impact is amplified.
In the past, proprietary solutions were used because they were cost-effective compared to server-based solutions. Linux and standard hardware frees the market, allowing technology integrators to produce better solutions and be competitive without the need to drive large volumes (the large volumes are already integral to open-source and standard hardware). The need for better solutions drives the change, and open-source and commodity hardware enables it.
These changes have important consequences. Today, users depend on proprietary networking box solutions for features and functionality. They work well for connectivity but cannot have services added or be customized in terms of functionality. Users are driven by economics to separate network connectivity from network services, and to choose solutions that are often more cumbersome and difficult to manage. In the near future, however, additional functionality will be incorporated into the connectivity product, and using an open platform will make incorporating new hardware or software technologies simpler and quicker. Users and technology integrators will not depend on a sole technology provider, and control will shift toward the end user.
As with any change, at first it is not easy to perceive all the benefits of a new approach. For networking, all the elements are in place and changes are coming. New possibilities will be discovered along the way, and those discoveries will lead to a new, more powerful approach to networking.
Marcio Saito (email@example.com) is director of technology for Cyclades Corporation.
Practical Task Scheduling Deployment
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
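For instance, the .log search described above is a one-line combination of find and grep; the sketch below demonstrates it on a scratch directory (swap the scratch directory for /home and "ERROR" for your own pattern in real use):

```shell
#!/bin/sh
# Demonstrate the classic find+grep combination on a scratch directory.
dir=$(mktemp -d)
mkdir -p "$dir/alice" "$dir/bob"
echo "ERROR: disk full" > "$dir/alice/app.log"
echo "all quiet"        > "$dir/bob/app.log"

# Find every .log file under $dir; grep -l prints only the names of
# files that contain at least one match for the pattern.
find "$dir" -name '*.log' -type f -exec grep -l 'ERROR' {} +

rm -rf "$dir"
```

Here only the alice log is printed, because grep -l reports file names rather than matching lines.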
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
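As a point of reference, a traditional cron entry is just a time specification followed by a command; the script path below is a hypothetical example, not something from the webinar:

```
# min hour day-of-month month day-of-week  command
# Run a (hypothetical) log-rotation script every day at 02:30.
30 2 * * * /usr/local/bin/rotate-logs.sh
```

Cron's limits appear when jobs need dependencies, retries or cross-machine coordination, which is exactly where the "is it enough?" question comes in.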
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.