Why can't you guys be a little more consistent with the focus on topics?
The ULB issue has been coming out in different months in the last few years,
and it's kind of hit or miss, because I can't seem to find anywhere on your
Web site a little section that would say what's going to be in the next
issue. And, what is up with the ULBs being nothing more than just
high-end gaming PCs? Whatever happened to real workstations? Is nobody
using those anymore?
You can find our editorial calendar with upcoming topics at www.linuxjournal.com/xstatic/author/topicsdue. As for your ULB question, we've been working with readers to find out exactly what should constitute a ULB these days. Stay tuned.—Ed.
I am having lots of problems with installers on recent “Linuxes”. They all assume that just because I am installing their OS, it will be my primary OS. FAIL! My partitions are Kubuntu-swap-spare, and I use the spare (usually ext3) partition to test new OSes. Every time I do so, I have to use a live CD to get the bootloader straightened out. What a mess! And, until I fix it, I can't get any work done with my primary OS.
Slackware gave me a choice: put the bootloader in the MBR or in the partition superblock. That made chainloading easy. Why can't I get that choice in any of the Debian/Ubuntu family? They overwrite menu.lst without warning. I plan on trying a BSD on the spare partition; won't that be fun?
I also wish GRUB used a single config file. Then, I could save that file to
a thumbdrive and fix things with one command, rather than digging out a
live CD and fumbling with Yet Another Shell Syntax. Say what you will about
LILO; at least it was simple and consistent.
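For readers unfamiliar with the choice the letter describes, here is a sketch of how GRUB legacy handles it. The device names, partition numbers and title are hypothetical examples, not instructions for any particular system:

```
# Install GRUB legacy to a partition's boot sector rather than the
# MBR, so the existing primary bootloader stays untouched:
#   grub-install /dev/sda3
#
# Then chainload that partition from the primary bootloader's
# menu.lst with an entry like:
title  Test OS on the spare partition
root   (hd0,2)
chainloader +1
```

Because the test OS lives entirely in its own partition's boot sector, reinstalling it never disturbs the primary menu.lst.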
As a longtime software developer working in health care (health insurance-related applications), I read Doc Searls' “Why We Need Hackers to Fix Health Care” with great interest [LJ, October 2008]. Although our software is Windows-based and proprietary, my company struggles daily with interoperability issues between our applications and the systems from other vendors with which they communicate. Fortunately, no patient lives are at risk when our applications encounter issues like those Doc described.
I'm curious about one thing that Doc said. He said that he was “using a
Web browser on one of the nursing workstations there. I was surfing for
about ten seconds when every screen in sight went blue.” Just what was Doc
doing that caused the issue? Or was it mere coincidence of timing?
He was just surfing. Hard to say whether it was coincidence or timing.—Ed.
Greetings. I just received my October 2008 issue of LJ, and one of the wonderful articles I saw was the review of the HP Media Vault mv series product (mv2xxx and mv5xxx products). Having played with one for a few months now, I was surprised at the amount of research that did not go into the review.
For example, take the failure to mention the rather extensive hacking guide posted at www.k0lee.com/hpmediavault and written by one of the HP engineers responsible for this product. How can a review of these devices fail to mention this site? It has links to the source code for the product, how to replace a drive, re-flash instructions and so on.
Otherwise, it's nice to see an open-source-friendly NAS being reviewed—especially one that is open and hackable.
It was merely an oversight. Thank you for pointing out the hacking guide.—Ed.
Since you recently mentioned some interest for the FoxyTag speed-camera
warning system [see New Projects, July 2008], I invite you to consult the latest press release at
www.foxytag.com/blog/?p=48. This article has been copied in many
blogs and some popular Web sites, including mashable.com.
Dr Michel Deriaz
I have been using Linux since the mid-1990s when I had to load a huge stack of floppies to get a command-line version running. Currently, on my primary PC, I dual-boot Windows XP and PCLinuxOS. This machine is a five-year-old home-built machine with an AMD, Socket-A ASUS A7VBX-X motherboard. I keep using it, because it works great with both XP and Linux. My laptop is an old Dell C400 that runs Ubuntu 8.04 wonderfully well.
Linux always has worked well for me with the hardware of yesteryear, but that no longer appears to be the norm. I recently decided to build a dual-core machine to replace my old Socket-A machine. I built up a BioStar TF8200 A2+ with 4GB of RAM, a SATA primary drive and two more SATA drives for a RAID-1 /home. I soon discovered that today's new hardware is not very Linux-friendly. I have tried many distributions and no distribution can correctly process audio from the motherboard's onboard Realtek ALC888 audio chip combined with the NVIDIA support chipset. Likewise, there are problems with the onboard NVIDIA GF8200 graphics. Only Sabayon Linux can use it “out of the box”. It is a nightmare. Of course, Windows XP runs all of the hardware just fine.
My issue is that, today, it is very, very difficult to build a modern
system that is Linux-compatible. I encourage you to work with motherboard
and peripheral-board makers to advertise which products are Linux-compatible. Given the dearth of computer makers seriously selling Linux
computers and the difficulty of building a modern Linux-compatible system,
I am concerned there never will be a serious mainstream proliferation of
the Linux OS.
Have a photo you'd like to share with LJ readers? Send your submission to email@example.com. If we run yours in the magazine, we'll send you a free T-shirt.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
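The find-plus-grep combination described above can be sketched in a couple of lines. The directory, file names and search string here are illustrative stand-ins, not taken from any real system:

```shell
# Build a small example tree (illustrative paths only)
mkdir -p /tmp/demo_logs/app
printf 'ok\nERROR: disk full\n' > /tmp/demo_logs/app/web.log
printf 'all fine\n'             > /tmp/demo_logs/app/db.log

# find selects every .log file under the tree; grep -l prints only
# the names of files that contain the entry we're looking for
find /tmp/demo_logs -name '*.log' -exec grep -l 'ERROR' {} +
```

Only web.log is printed, because db.log contains no matching entry; swapping `/tmp/demo_logs` for `/home` gives the exact tool the paragraph describes.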
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
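As a quick refresher on what cron itself provides, a crontab entry is five time fields followed by a command. The scripts named here are hypothetical examples:

```
# m   h   dom mon dow  command
  30  2   *   *   *    /usr/local/bin/backup.sh       # 02:30 every day
  0   0   1   *   *    /usr/local/bin/rotate-logs.sh  # midnight, 1st of month
```

Anything beyond this model, such as job dependencies, retries or cross-host scheduling, is where the "is it enough?" question starts.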
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SuperTuxKart 0.9.2 Released
- Google's SwiftShader Released
- Parsing an RSS News Feed with a Bash Script
- SourceClear Open
- Doing for User Space What We Did for Kernel Space
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide