Satellite Remote Sensing of the Oceans
Modelling of phytoplankton blooms and the resulting chlorophyll concentrations is also done here at the University in conjunction with satellite ocean colour data. These data reveal information about pigment concentration, which is a measure of the biological activity in the water. The pigments are part of the phytoplankton's strategy for capturing energy from sunlight (photosynthesis). The study of phytoplankton blooms is important for understanding the carbon cycle and its global warming implications.
Figure 6 shows an interesting image from the Coastal Zone Colour Scanner (CZCS), an instrument that flew on the Nimbus 7 satellite. The instrument is no longer functional but operated well between 1978 and 1985. The image data were acquired on 14/9/80 and show the western coast of the Iberian Peninsula. The image shows pigment concentration during a strong upwelling event: equatorward winds push the surface water away from the coast, and cool water from beneath the surface is drawn upwards near the coast. The pigments are produced by phytoplankton.
Subsurface waters near the coast are generally cooler than those at the surface, and in Figure 7 this appears on a coincident thermal image from the AVHRR instrument as the blue, cool area. The high pigment concentrations in the CZCS image can therefore be explained by the upwelling event observed in the thermal image, which has brought the pigments closer to the surface, where they are more visible to the CZCS instrument. Nutrients also upwell with the phytoplankton, and because the phytoplankton are closer to the surface, where there is more light, they can photosynthesise more effectively and thus form large blooms. This multi-sensor approach to oceanography (using complementary data from different sources, e.g., SAR, thermal and visible imagery) provides a more comprehensive view of a region than would be obtained using only one source of data.
For serious image processing you need a fast machine with good graphics support. For satellite images you also need vast amounts of storage. So I will talk about these in turn, bearing in mind that cost is always a factor.
At the moment Intel P200 and AMD K6 processors are very fashionable, although price-wise a P166 will give comparable performance for much less money. It's difficult to make price comparisons, though, because here in the UK electronic components are generally more expensive than in most other countries. A motherboard based on the Intel 430TX chipset is the one I would choose at the moment, since USB and Ultra DMA support come as standard.
Depending on how much time you spend using your machine for graphics, I would recommend at least a 17-inch colour monitor. We do have some Iiyama 21-inch monitors, but at the moment those extra few inches double the price of the monitor. A fast graphics card with plenty of on-board RAM will update the display much faster, especially if you are using large images. Any S3-based card (e.g., the S3 Trio64V+) with 2MB or more on board should cope with most demands, although a 4MB card gives plenty of scope for dealing with vast displays, especially when driving the monitor at its highest resolution.
Our group has about 10GB of storage space allocated on the network server, which is almost enough. If you need speed, you need plenty of disk space local to the machine. The local hard disks of workstations are rarely backed up, however, so beware of depending on them too much. About 3GB of local hard disk space is sufficient, and these days E-IDE is about as quick as SCSI and certainly cheaper. New IDE disks support Ultra DMA, which allows a 33MB/s transfer rate, double that of the old IDE interface, although you will need at least a 430TX motherboard to take advantage of this rate.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
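The find-and-grep combination described above takes only a couple of lines of shell. The directory, file names and the search string "ERROR" below are invented purely for a self-contained demonstration; against a real system you would point find at /home and grep for whatever entry you are after:

```shell
# Self-contained demo: build a throwaway directory holding two .log files,
# then chain find and grep exactly as described in the text.
tmp=$(mktemp -d)
echo "ERROR: disk full" > "$tmp/app.log"
echo "all quiet"        > "$tmp/other.log"

# find locates every .log file; grep -l prints only those containing the entry.
# On a real system this would be: find /home -name '*.log' -exec grep -l 'ERROR' {} +
find "$tmp" -name '*.log' -type f -exec grep -l 'ERROR' {} +

rm -rf "$tmp"
```

Only app.log is printed, since other.log does not contain the search string. The `{} +` form hands find's results to grep in batches, which is far quicker than spawning one grep per file.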
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
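For readers who have not looked at cron lately, its interface is a crontab file: five time fields followed by a command. The entries below are purely illustrative (the script paths are hypothetical), and they also hint at cron's limits, since there is no built-in way to say "run the report only if log rotation succeeded" — the kind of dependency that dedicated job schedulers handle:

```shell
# minute hour day-of-month month day-of-week  command
# Rotate application logs at 02:30 every night (script path is hypothetical):
30 2 * * *  /usr/local/bin/rotate-logs.sh
# Produce a weekly report at 06:00 every Monday:
0 6 * * 1   /usr/local/bin/weekly-report.sh
```

Edit the table with `crontab -e` and list it with `crontab -l`; cron itself records only that a job was launched, not whether it succeeded, which is one of the gaps the webinar addresses.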
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Google's SwiftShader Released
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Interview with Patrick Volkerding
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Non-Linux FOSS: Caffeine!
- SuperTuxKart 0.9.2 Released
- Tech Tip: Really Simple HTTP Server with Python
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide