EOF - The Universal Internet Time Source
In theory, setting a computer's clock over the network is easy: send a query to a time server and get the current time in return. For low-precision use on a trusted network, this really does work fine, as the old UNIX “time” protocol demonstrates. On today's Internet, however, and at millisecond (ms) or even sub-ms precision, problems such as authentication, the reliability of the time servers and network delays all need to be considered. This is where the Network Time Protocol (NTP), with its reference implementation, steps in. The specification and the reference implementation are written by Professor David Mills of the University of Delaware, his graduate students and many other volunteers.
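The "send a query, get the time back" idea can be seen in the simple client (SNTP-style) subset of the protocol: one 48-byte UDP packet out, one back, with the server's transmit timestamp counted in seconds since 1900. The following is a minimal Python sketch of that exchange; it ignores everything NTP adds on top (delay estimation, multiple samples, authentication), which is precisely the point of the article's comparison:

```python
import socket
import struct

# Offset between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
NTP_UNIX_OFFSET = 2208988800

def build_request():
    # First byte: LI = 0, version = 3, mode = 3 (client);
    # the rest of the 48-byte packet is zeroed.
    return b"\x1b" + 47 * b"\x00"

def parse_reply(packet):
    # The seconds part of the server's transmit timestamp lives at
    # bytes 40-43, as a big-endian unsigned 32-bit integer from 1900.
    ntp_seconds = struct.unpack("!I", packet[40:44])[0]
    return ntp_seconds - NTP_UNIX_OFFSET

def query(server="pool.ntp.org", timeout=5.0):
    # One UDP round trip to port 123; returns Unix time in seconds.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(build_request(), (server, 123))
        packet, _ = sock.recvfrom(512)
    return parse_reply(packet)
```

A client built this way gets second-level accuracy at best; it makes no attempt to compensate for the network delay the article mentions, which is what a full ntpd does by exchanging and filtering several timestamps per packet.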
To allow everybody to use NTP to synchronise computers' clocks over the public Internet, Prof. Mills has long maintained a list of public time servers. Most of these servers are operated by universities or national standardisation organisations. Today, this list is maintained by the NTP Public Services Project, under the umbrella of the Internet Systems Consortium. However, the growth of the Internet and the prevalence of small, cheap appliances, such as cable or DSL routers, with built-in NTP clients has led to a rapidly growing load on these public time servers. One of the most famous cases involved a severe firmware problem in a range of such devices, resulting in more than 150Mbps of NTP traffic to the University of Wisconsin's NTP server.
After reading the discussion of one time server operator's request to be taken off the public time servers list, I wondered if there was a better approach to this whole problem—instead of having tens of thousands of clients targeting one single time server, the load should be distributed on many different time servers all over the network. So I went ahead and created the original time.fortytwo.ch DNS round-robin in January 2003. The project quickly acquired many interested volunteers and was well received by Prof. Mills and his team. It soon became the pool.ntp.org project with a somewhat more official status.
During the next two years, the project continued to grow, thanks to all the people who mentioned it in various Web forums, HOWTO documents and the like. Today, the project consists of more than 300 servers, offering service to, by a very rough estimate, tens of thousands of clients. Also, pool.ntp.org is now the default time server in several operating system distributions, including Debian GNU/Linux, NetBSD and Gentoo Linux.
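In a distribution that defaults to the pool, the shipped ntp.conf typically lists several pool hostnames rather than one, so a single failing server costs the client nothing. An illustrative fragment (not copied from any particular distribution; the numbered zones resolve to different subsets of the pool):

```
# Illustrative /etc/ntp.conf fragment -- not from any specific distribution.
# The numbered zones give each client several independent servers;
# iburst speeds up the initial synchronisation.
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
```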
So far, the growth in servers has more or less matched the growth of the project's user base. However, the future remains challenging, and discussions on the project's mailing list have shown that the project needs to deal with an inherent conflict between providing easy service for as many clients as possible and assuring good quality of the time servers participating in the project. That aside, the big challenges for the near and medium future are:
More automation—currently, I process server additions and removals mostly manually.
Better, more novice-friendly documentation on the Web.
Of course, we always need more servers too.
And above all, we need to deal with abusive clients. In one example, the six worst clients were responsible for 25% of the traffic on one time server.
Although the first three items are not technically difficult and the “getting more servers” plan should see a big leap ahead with the publication of this article, we don't currently have a good plan to educate the hundreds of users with sub-optimally configured clients. Due to their number, they are a serious problem for the project. At the same time, the bandwidth per client is small enough that the big ISPs' abuse departments are not prepared to help in any way.
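Pending help from ISPs, individual server operators can at least rate-limit the worst offenders locally. The reference ntpd implementation provides restriction and rate-limiting facilities along these lines; this is a sketch only, and the exact option set should be checked against the documentation of the ntpd version actually installed:

```
# Sketch of local rate limiting with the reference ntpd
# (verify option availability for your ntpd version).
# kod: answer over-eager clients with a Kiss-o'-Death packet
# limited: enforce the rate limits set by "discard"
restrict default kod limited nomodify notrap nopeer noquery
restrict 127.0.0.1

# Require an average inter-packet spacing of at least 2^5 = 32 seconds
discard average 5
```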
In the medium to long term, we will need to face the issue that DNS round-robin, as currently implemented, is not a good solution for load balancing on the scale of several hundred servers with a hundred thousand or more clients. Wide deployment of IP multicast, together with the existing multicast support in ntpd, would be a good solution to this problem, but obviously not one the NTP and pool.ntp.org crew can deploy on their own. Another possible solution is to make the ntpd dæmon aware of the pool.ntp.org project and, in some generic way, similar databases, and have the dæmon configure itself to use such a resource.
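To see why round-robin DNS balances load only coarsely, consider how it works: the name server answers each query with the same list of A records, rotated by one position, and most clients simply use the first entry. The sketch below simulates that rotation with made-up addresses. Load spreads evenly only when every query reaches the authoritative server; once a caching resolver sits in front of thousands of clients, all of them see the same rotation for the lifetime of the cached answer, and the balancing degrades:

```python
from collections import Counter

def rotated(records, start):
    # A round-robin DNS server answers each query with the record list
    # rotated by one position; most clients then use the first entry.
    n = len(records)
    return records[start % n:] + records[:start % n]

# Hypothetical pool member addresses (documentation range, not real servers)
servers = ["198.51.100.1", "198.51.100.2", "198.51.100.3"]

# Nine clients whose queries each reach the authoritative server:
# the load spreads evenly, three clients per server.
picks = Counter(rotated(servers, i)[0] for i in range(9))
```

With caching in the picture, `start` stays fixed for every client behind the same resolver, which is one reason a pool-aware dæmon that re-resolves and spreads its associations itself would scale better.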
Finally, on a personal note, I can honestly say that it was fun to get this project started and see it grow, but I now see the need for somebody new, with fresh ideas, to take over from here. Indeed, as I write this, I am talking with several people about the project's future, and I am certain that the involvement of a new “father figure” will do the project much good as new ideas are looked at and implemented by a new crew.
Resources for this article: /article/8454.
Adrian von Bidder graduated with a degree in computer science from the Federal Institute of Technology in Zurich, Switzerland, in 2004. He is running the pool.ntp.org project in his spare time. His day job is developing the SEPP e-mail encryption gateway at Onaras AG in Wettingen, Switzerland. He can be contacted at email@example.com.