Focus: Science & Engineering
New technology has always found a home in the science labs of universities and the research departments of both scientific and engineering companies. Schools and new companies looking for their niche in the marketplace often have restricted budgets and a need for highly robust systems. For both these reasons, Linux has been embraced from the beginning by research departments in universities, business and government.
I have always had an interest in science. My degree is a B.S. (math and physics), one of my hobbies is astronomy, my job for many years was programming geophysical applications, and my husband is a physicist. As a result, our Science & Engineering issue is always one of my favorites. I enjoy reading about the cool ways Linux is being put to use in these fields, and this issue is no exception—we even have an article about geophysics.
Ed Petron's article on teaching computers to think brings science fiction to everyday reality. This new method of programming is quite a step from the usual algorithmic approach. Speaking of science fiction, take a look at the pictures of Fermilab and the great article by Jon Hall.
Inside, Wolf-Rainer Novender describes SCEPTRE, a simulation tool for electric circuits, and on the Web, Alasdair McAndrew gives us a comprehensive tutorial for the mathematical tool MuPAD. Also in “Strictly On-line” are articles about using GPS technology to do precision farming and calculating underground water quality using parallel algorithms—two unique uses of Linux that I would never have dreamed up.
As long as students continue to be exposed to Linux at school in their science labs, Linux will continue to make inroads into engineering and scientific applications.
Ransom Love of Caldera dropped by our office in April to give us a copy of their new release, OpenLinux 2.2. I installed it on a test machine to see if the much-touted “new and easy” Lizard install truly worked. Basically, it did, but I had one problem: the install hung while probing for the SCSI device, a 2-channel UW Adaptec on-board controller. A message from Caldera support recommended I use the boot parameter er=cautious. I did, and it worked. I also had to use the custom install option and define partitions, since the machine I was using already had BeOS installed. If the OS had been Windows, Lizard would have automatically built the partitions using PartitionMagic. Even with the custom install, the entire procedure took about ten minutes, and I had a working Linux system with KDE, WordPerfect 8, StarOffice and other goodies. It was so fast that by the time it offered to let me play Tetris, the installation was complete. When Linux detractors say Linux needs an easy install, this is what they want. We'll have a full review next month.
Marjorie Richardson, Editor in Chief