A Brief Introduction to XTide
An illustrated version of the XTide README can be accessed at universe.digex.net/~dave/xtide/. It contains examples of almost every kind of output that XTide can generate and includes full instructions and a FAQ.
You can learn a lot about tides and tide prediction by reading the National Ocean Service's Tide and Current Glossary. An old version is preserved at universe.digex.net/~dave/xtide/tidegloss.html to provide definitions for the technical terms used in the XTide README. The latest version, currently accessible at www-ceob.nos.noaa.gov/tidegloss.html, has been split into many smaller web pages for easier browsing.
The canonical reference for tide prediction is the Manual of Harmonic Analysis and Prediction of Tides, Special Publication No. 98, Revised (1940) Edition, United States Government Printing Office, 1941. However, much of the traditional lore on tide prediction is not digestible unless you like swimming through pages of equations. Probably the easiest introduction to the subject for programmers is to read the source for the Java applets provided in the XTide distribution. These were written to be as small and simple as possible, and you can easily see where the tides are generated.
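The core idea behind harmonic tide prediction is simpler than the equations in Special Publication No. 98 suggest: the predicted height is a mean water level plus a sum of cosine terms, one per tidal constituent, each defined by an amplitude, a fixed angular speed, and a phase lag. Here is a minimal sketch in Python; the constituent speeds are the standard published values, but the station name, amplitudes, phases, and mean level are made-up illustrative numbers, not data for any real location.

```python
import math

# Hypothetical harmonic constants for an imaginary station.
# Each entry: (name, amplitude in feet, speed in degrees/hour, phase lag in degrees).
# The speeds are the standard values for these constituents; everything
# else here is invented for illustration.
CONSTITUENTS = [
    ("M2", 1.50, 28.9841042, 110.0),  # principal lunar semidiurnal
    ("S2", 0.35, 30.0000000, 140.0),  # principal solar semidiurnal
    ("K1", 0.40, 15.0410686,  95.0),  # lunisolar diurnal
    ("O1", 0.30, 13.9430356,  80.0),  # lunar diurnal
]
MEAN_LEVEL_FT = 2.0  # mean water level above the chart datum (made up)

def tide_height(t_hours, constituents=CONSTITUENTS, mean=MEAN_LEVEL_FT):
    """Predicted tide height in feet, t_hours after the epoch."""
    # Sum the cosine contribution of each constituent at time t.
    return mean + sum(
        amp * math.cos(math.radians(speed * t_hours - phase))
        for _name, amp, speed, phase in constituents
    )

# Print a few hourly predictions.
for t in range(6):
    print("t=%dh  height=%.2f ft" % (t, tide_height(t)))
```

A real predictor (as in XTide) also applies slowly varying node factors and equilibrium arguments to each constituent, but the inner loop is just this weighted sum of cosines, which is exactly what you can see in the Java applet source.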
Although tide prediction is almost a definition of the term niche market, XTide has attracted an amazing number of users, and I hope that it will continue to serve their needs for years to come.
David Flater (email@example.com) is a Computer Scientist (actual job title) living in the vicinity of Washington, D.C. He escaped grad school two years ago with a Ph.D. in Computer Science and is still trying to regain his sense of humor. All things considered, he'd rather be John Carmack.