In the past month, the development team I lead and I went through the same search for the appropriate language or SDK with which to write software destined to run on multicore systems (in my case, 8-core/32-thread processors from Raza Microelectronics as well as future Intel 8-core CPUs) as well as single-core systems.
So Nicholas Petreley's article “Is Hardware Catching Up to Java?” in the November 2007 issue was of great interest, though in the end we came to different conclusions.
Nicholas picked Java because it has some multithreading support built in, though he admits it is far from a slam dunk because of issues related to garbage collection.
I don't think GC's implementation is what is most important. I think what is most important is being able to write multithreaded software with as few bugs as single-threaded software. In my experience, once you get past the simple, large-scale pieces of the software that can be run on separate threads, you hit a wall. For example, it is usually easy in server software to run each client's requests in a different thread. That is easy because the number of places where two client threads interact, and the amount of data they share, is limited and well defined. (Well, if it isn't, it's going to crash.)
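That thread-per-request pattern can be sketched in a few lines of Python; `handle_request` here is a hypothetical handler standing in for real per-client work, and the point is that each thread touches only its own client's data:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-client handler: each request works only on its own
# data, so the places where threads interact stay limited and defined.
def handle_request(client_id):
    return f"reply to client {client_id}"

# One worker thread per in-flight request, as in the server pattern
# described above; pool.map preserves the order of the inputs.
with ThreadPoolExecutor(max_workers=4) as pool:
    replies = list(pool.map(handle_request, range(3)))

print(replies)
```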
But, how do you get beyond that and do things like running a for loop (in C or Java) in parallel and knowing the implementation is right, and will remain right, over the next five years as new software developers alter the rest of the software?
Java cannot help you there, no more than C, C++ or Python can, because they all share something: shared state. In all these languages, the default is that data is shared. Any thread can write to anything to which it has a pointer. There is no guarantee beyond documentation and code reviews and the good intentions of future developers that the data your threads use isn't changing in ways that will crash them.
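A minimal Python sketch of the hazard: the increment below is a read-modify-write, and only the lock keeps concurrent writers from silently losing updates (the counter total and thread count here are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    # Without the lock, the read-modify-write in "counter += 1" could
    # interleave with another thread's, silently losing updates.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000, but only because every writer takes the lock
```

Nothing in the language stops a future developer from adding another writer that skips the lock; that guarantee exists only in documentation and code review.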
The conclusion of my search was that the proper language for multicore software is a single-assignment language: Erlang or Haskell. In these languages, the default is that software cannot alter a value after it is assigned. Thus, data structures can be shared between threads without laying down rules about how they can and cannot be used (locks, lock-free algorithms and so on). In these languages, the variables that act like normal Java or C variables are the exception, and are defined differently from the rest. In fact, in Haskell, they are extremely well marked—to the point that any function that accesses them (even to read) is marked as well.
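Erlang and Haskell enforce this at the language level; as a rough approximation of the idea in Python, a frozen dataclass gives you a value that cannot be altered after construction, so it can be shared between threads without locks (the `Config` record here is invented for illustration):

```python
from dataclasses import dataclass, FrozenInstanceError

# An approximation of a single-assignment value: once constructed,
# the record cannot be altered, so threads can share it freely.
@dataclass(frozen=True)
class Config:
    host: str
    port: int

cfg = Config("localhost", 8080)

try:
    cfg.port = 9090          # any write is rejected at runtime
except FrozenInstanceError:
    print("immutable")
```

The difference, of course, is that Python checks this at runtime and only for this one class, whereas in Haskell immutability is the default and the compiler enforces it everywhere.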
In the end, we decided to develop in Haskell, using its C interface to connect it with our existing C code. I've previously worked with developers who swore by Erlang (and thought at the time that we were nuts to code in C++).
PS. You mentioned Python. Python (more precisely, the CPython interpreter, the one everyone uses and for which we have all the nice plugin and tool support) has an Achilles' heel: the global interpreter lock (GIL). It may be multithreaded, and Stackless Python is perfect for multithreaded server software. But, the GIL means that Python code cannot run on more than one core at a time.
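One common way around the GIL for CPU-bound work is to use processes instead of threads, since each process gets its own interpreter and its own GIL. A sketch using the standard `multiprocessing` module (the workload `burn` is a made-up stand-in for real computation):

```python
import multiprocessing as mp

def burn(n):
    # CPU-bound work: under the GIL, threads running this cannot
    # overlap, but separate processes each have their own interpreter.
    total = 0
    for i in range(n):
        total += i
    return total

def parallel_sum(jobs=4, n=100_000):
    # The worker processes run burn() truly in parallel on a multicore box.
    with mp.Pool(processes=jobs) as pool:
        return sum(pool.map(burn, [n] * jobs))

if __name__ == "__main__":
    print(parallel_sum())  # prints 19999800000
```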
I have known Dave Taylor for many, many years, having interacted with him at various USENIX conferences. His discussions of shell programming in his Work the Shell column are useful to all of us.
Unfortunately, he should have chosen another application area instead of numerology for his recent article in the January 2008 issue of Linux Journal. By writing such articles, even more people are led to believe that there is validity in traditional numerology. There isn't.
Systematics (www.systematics.org) on the other hand, a discipline developed by John Bennett and others, asserts that numbers do, in fact, have “qualitative significance”. Instead of “associating numbers with letters”, Dave could have presented a shell script to, for example, enumerate the various “inner connections” within each of Systematics' primary “systems” (monad, dyad, triad, tetrad, pentad and so forth).
Let's not encourage useless, unreal “disciplines” by publishing
articles involving them. Rather, Linux Journal should
focus on what is true and of value.
Kenneth Hood Jacker
Dave Taylor replies: Interesting...there are 17 letters in your name, and the letters sum up to 77. When I started programming, one of the languages I learned was Fortran 77. Coincidence? Maybe not. In any case, thanks for your note, Kenneth.
I have an update on this [see Letters, LJ April 2008]. I finally got tired of the old notebook running out of memory and migrated to the new Lenovo. I'm getting by using mostly one workspace, with all the windows overlapping, which, apparently, I hate about as much as my wife hated the pannable virtual desktop. Having recently re-installed Linux on my home desktop (going from Red Hat 9 to Ubuntu 7.10), I got a taste of Compiz and all its fancy features. That made me wonder why, on the Lenovo, Compiz wouldn't let me enable any visual effects.
It turns out this is yet another case of the Intel X server sucking. It seems under this X server, you can either have Xv accelerated video playback or Compiz. Ubuntu “solved” this problem by blacklisting the Intel X server. I found I could get around this blacklisting by adding SKIP_CHECKS=yes to /etc/xdg/compiz/compiz-manager, but the next time I tried to play a video file, I found I could not. There are workarounds, configuring the various video player apps to use something other than the default (Xv) for video output, but those result in slower or buggier (video always on top) behavior.
Some have suggested running the i810 X server rather than the newer Intel one, but when I tried that, X wouldn't run at all.
Had I known how bad the X server support is for this video chipset, I would have blacklisted machines using it while shopping for a new notebook.
I'm still waiting for Xi to get the necessary programming info from Intel so they can produce an Intel X server that hopefully doesn't suck.
As a side note, the ASUS Eee PC also uses a similar Intel video chipset and
suffers all these same problems. I recently got an Eee at work, and that
tiny screen just begs for a virtual/pannable desktop. Too bad it uses the
Intel X server. Frequently, windows pop up that have to be moved
(Alt-click-drag) partially off the screen to get to the buttons on them.
These things aren't as big of a deal for me on the Eee, as I wanted it
primarily as a router config terminal and “go anywhere” portable Internet
terminal, and I knew before we ordered it that I wouldn't be happy with
the screen. The Eee would be great if it was just a bit bigger (making
the keyboard less cramped), had a bit more screen resolution and size and,
of course, a non-Intel video chipset with an X server that doesn't suck.
In regard to the letter from Nick Couchman in the March 2008 LJ, “More Business Content, Please”, I agree with Nick to a point but must express that he may have missed the business side of some articles. As he says, articles about LTSP for schools and such are great, but has he ever considered using it as a free (as in beer) connection broker for VDI? With XP licenses as the only pay-for product, I use LTSP to boot old machines with Etherboot or PXE into an rdesktop screen pointed at that person's XP virtual machine on VMware Server. Linux all the way to the VM. I'd also like to call attention to Dave Richards' blog (davelargo.blogspot.com). He has more than 500 thin clients deployed in the city of Largo, Florida. The whole city operation runs Linux, Evolution, OpenOffice.org—beautiful.
I would like to see more business-related articles, such as using
Coraid's AoE product in a VMware server or ESX environment. But, part
of the fun is being able to read an LJ article and
think “Hey! I can
adapt that to my business.”
I am writing regarding the article in the March 2008 issue of LJ titled “Desktop Must-Haves” by Dan Sawyer.
First off, I want to say that the article was great and well written and quite lucid. I have no problems with anything that Mr Sawyer said in the article, and agree with many of his choices for good Linux desktop applications.
What I, personally, have had issues with in moving from my Mac OS X platform to Linux as a desktop is the pro audio realm. I have yet to see any program that replaces three or four of my “must have” applications. I am learning that there may be replacements out there, and if I can find one that suits my needs, I would replace my Mac with a nice Core Duo Intel box, most likely running Debian. The applications that I need to replace are Logic Express or another audio package like Adobe Audition 2 (Cool Edit) for multitrack recording and MegaSeg (which is DJ software, www.megaseg.com). These are my biggest holdouts. I haven't been too keen on the iTunes replacement offerings, but admittedly have not looked at any of the projects since 2006.
My profession is Web development, and I do use *AMP. On Linux, I have found that the Bluefish Editor is my editor of choice and does most of what I need for the Web. I am also very open to using The GIMP or Krita, as Mr Sawyer pointed out, but the main reason I haven't switched is GIMP's lack of support for the third-party plugins I use all the time, namely those from Alien Skin Software. If they would write Xenofex for GIMP, I would use it in a heartbeat. Yes, going from Photoshop to GIMP involves a bit of a learning curve, if only because you have to learn what the authors of GIMP call your favorite tools. Once you are past that, you should be able to do everything in GIMP that you do in Photoshop (in my opinion), except for the aforementioned plugins, whose effects I have not yet figured out how to produce otherwise. Also, Photoshop's Layer Styles seem to be missing from the open-source counterparts.
It would be nice to sell my Mac and go totally Linux (Debian for me),
but I remain unconvinced that everything I do is covered, as of yet.
J. Mike Needham
Dave Taylor, in his March 2008 article “Understanding Shell Script Shorthand”, says that Ada makes it easy for programmers to abbreviate their code (“abbreviate their code to make it shorter”! Well, yes, Dave, so it would!) to the point of obfuscation.
I've never (in 25 years) met an Ada programmer who thought it was
clever, funny or macho to write code that's hard to understand.
Indeed, the designers of the language rejected “neat” constructs that
might make code easier to write if it was felt that they would make
code harder to read.
Have a photo you'd like to share with LJ readers? Send your submission to firstname.lastname@example.org. If we run it in the magazine, we'll send you a free T-shirt.
Practical Task Scheduling Deployment
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.