burn-in: One of the quality tests performed on electrical circuits in computer equipment during the manufacturing process. During the burn-in process, the temperature may be varied from below freezing to above 100 degrees Fahrenheit to test the circuits in a computer or its components while they are operating. In some tests, the input voltage may be varied.
latency: Delay between when a computer receives an address to which data is to be transferred and when it actually starts the transfer.
message-passing: In distributed multiprocessing operating systems, a mechanism by which tasks communicate with one another by exchanging messages rather than by sharing memory.
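A minimal sketch of the message-passing idea, using Python's `multiprocessing` module (the names `worker`, `inbox`, and `outbox` are illustrative, not part of any standard): two processes share no memory and communicate only by sending messages through queues.

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    """Receive a task message, compute, and send the result back."""
    msg = inbox.get()       # block until a message arrives
    outbox.put(msg * 2)     # reply with a message of our own

def demo_message_passing():
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put(21)           # send a task to the worker process
    result = outbox.get()   # receive its reply
    p.join()
    return result
```

The two processes never touch each other's variables; all coordination happens through the queues, which is the defining property of a message-passing design.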
MIMD: Multiple Instruction, Multiple Data machine. Massively parallel processing architecture in which the processors work as a team, solving large problems by dividing them up. Each processor has its own memory. The number of processors in a MIMD system varies from 16 to 2000. Each processor manipulates different data independently.
parallel programming: Writing a program so that separate elements of it are executed at the same time. Concurrent C/C++ is an example of a language written for parallel programming.
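As a hedged sketch of the definition above (using Python's standard `concurrent.futures` rather than Concurrent C/C++; the helper names are illustrative): separate elements of the program, here partial sums over slices of the data, are executed at the same time by a pool of workers.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """One independent element of the program: sum a slice of the data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    """Split the data into chunks and sum the chunks concurrently."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

Each chunk is processed independently, so the partial sums can run in parallel; the final reduction combines their results.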
PCI bus: Peripheral Component Interconnect bus. The local bus standard developed by Intel Corp., which allows the central processing unit to transfer data to 16 devices at 33MHz along a 32- or 64-bit pathway. This version is a separate bus isolated from the CPU.
RS-232: Standard for cable and 25-pin electrical connection between computers and peripheral devices using a serial binary data interchange. Used for slower communications, with speeds of no greater than 20Kbps and a standard cable-length limit of 75 feet.
SIMD: Single instruction, multiple data. Massively parallel processing architecture with large numbers of processors working on a single problem but sharing distributed memory. SIMD computers have between 1000 and 16,400 processors.
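To contrast with the MIMD entry above, here is a rough sketch of the SIMD idea in Python (real SIMD is a hardware architecture, not a library call; `scale` and `simd_style_map` are illustrative names): every worker process executes the same single instruction, each on its own portion of the data.

```python
from multiprocessing import Pool

def scale(x):
    """The single instruction that every processor executes."""
    return x * 10

def simd_style_map(data, nprocs=4):
    """Each worker applies the identical operation to different data,
    mirroring the single-instruction, multiple-data pattern."""
    with Pool(nprocs) as pool:
        return pool.map(scale, data)
```

In a MIMD system, by contrast, each processor could be running a different function entirely; here the operation is fixed and only the data varies.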
virtual: Anything that appears to be other than what it actually is, e.g., virtual memory is the apparent expansion of the computer's memory by using disk space to store programs and data.