burn-in: One of the quality tests performed on electrical circuits in computer equipment during manufacturing. During burn-in, a computer or its components are operated while the temperature is varied from below freezing to above 100 degrees Fahrenheit; in some tests, the input voltage is varied as well.
latency: Delay between when a computer receives an address to which data is to be transferred and when it actually starts the transfer.
message-passing: A mechanism by which tasks communicate with one another in distributed and multiprocessing operating systems.
MIMD: Multiple Instruction, Multiple Data. A massively parallel processing architecture in which the processors work as a team, solving large problems by dividing them up. Each processor has its own memory and manipulates different data independently. The number of processors in a MIMD system varies from 16 to 2,000.
parallel programming: Writing a program so that separate elements of it are executed at the same time. Concurrent C/C++ is an example of a language designed for parallel programming. (A rough shell-level sketch of the idea follows this glossary.)
PCI bus: Peripheral Component Interconnect bus. A local bus standard developed by Intel Corp. that allows the central processing unit to transfer data to 16 devices at 33MHz along a 32- or 64-bit pathway. It is a separate bus, isolated from the CPU.
RS-232: Standard for the cable and 25-pin electrical connection between computers and peripheral devices using serial binary data interchange. Used for slower communications, at speeds of no greater than 20Kbps, with a standard cable-length limit of 75 feet.
SIMD: Single Instruction, Multiple Data. A massively parallel processing architecture in which large numbers of processors work on a single problem while sharing distributed memory. SIMD computers have between 1,000 and 16,400 processors.
virtual: Anything that appears to be other than what it actually is, e.g., virtual memory is the apparent expansion of the computer's memory by using disk space to store programs and data.
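As a rough illustration of the parallel programming entry above, here is a shell-level sketch (not Concurrent C/C++, and the file names are made up) that runs two independent tasks at the same time:

```
# Compress two log files concurrently: each gzip runs as a
# background job (&), and 'wait' blocks until both finish.
gzip /tmp/app1.log &
gzip /tmp/app2.log &
wait
echo "both jobs finished"
```

The same divide-the-work idea underlies the MIMD and SIMD architectures defined above.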
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
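That find-plus-grep combination might look something like the following one-liner (a sketch only; "ERROR" is a placeholder for whatever entry you're searching for):

```
# Find every .log file under /home and search each one for the
# placeholder string "ERROR", printing matches with file names.
find /home -name '*.log' -exec grep -H 'ERROR' {} +
```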
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
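For context, cron's scheduling is driven by one-line crontab entries such as the following (a hypothetical example; the script path is made up):

```
# min  hour  day-of-month  month  day-of-week  command
# Run a nightly maintenance script at 2:30am (path is hypothetical).
30 2 * * * /usr/local/bin/nightly-maintenance.sh
```

When jobs start depending on one another's success or failure, or on conditions across machines, standalone entries like this become hard to manage, and that is the point at which the webinar's question becomes relevant.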
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Google's SwiftShader Released
- Interview with Patrick Volkerding
- SUSE LLC's SUSE Manager
- Tech Tip: Really Simple HTTP Server with Python
- My +1 Sword of Productivity
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Returning Values from Bash Functions
- SuperTuxKart 0.9.2 Released
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide