/var/opinion - Is Hardware Catching Up to Java?
I had a wonderful experience chatting with the folks at RapidMind. The company provides C/C++ programmers with a very clever tool to help them exploit the parallel processing capability of multicore CPUs. The idea is to make parallel processing at least somewhat transparent to the programmer. Programmers can use RapidMind to improve the multicore performance of existing applications with minimal modifications to their code and to build new multicore applications without having to deal with complexities like thread synchronization.
RapidMind and any other solutions like it are invaluable and will remain so for a long, long time. C/C++ is an extremely well-entrenched legacy programming platform. I can't help but wonder, however, if the trend toward increased parallel processing in hardware will create a slow but steady shift toward inherently multithreaded, managed programming platforms like Java.
The inherent trade-off in adding cores to a CPU poses an interesting problem for any programming platform. As you add cores, you increase power consumption, which generates more heat. One way to reduce heat is to lower the clock speed of the individual cores, or at the very least, keep them from advancing in speed as quickly as they would have if single-core designs had continued on their historical trajectory.
As a result, any single thread of execution would run faster on a speedy single-core CPU than it would on a slower core of a multicore CPU. The seemingly obvious answer is to split up the task into multiple threads. Java is multithreaded by nature, right? Therefore, all other things being equal, a Java application should be a natural for multicore CPU platforms, right?
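To make that idea concrete, here is a minimal sketch of my own (not from the column) using JDK 5.0's java.util.concurrent package to split a summation across a pool of worker threads; the class name, thread count and data are all placeholders:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical example: divide an array into chunks and sum the
// chunks on separate threads, so each core can work on its own slice.
public class ParallelSum {
    public static long sum(final long[] data, int nThreads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        ExecutorCompletionService<Long> done =
            new ExecutorCompletionService<Long>(pool);
        int chunk = (data.length + nThreads - 1) / nThreads;
        int tasks = 0;
        for (int start = 0; start < data.length; start += chunk) {
            final int lo = start;
            final int hi = Math.min(start + chunk, data.length);
            // Each Callable sums one slice of the array.
            done.submit(new Callable<Long>() {
                public Long call() {
                    long s = 0;
                    for (int i = lo; i < hi; i++) s += data[i];
                    return s;
                }
            });
            tasks++;
        }
        long total = 0;
        for (int t = 0; t < tasks; t++) total += done.take().get();
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        long[] data = new long[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(sum(data, 4)); // sum of 1..1000 = 500500
    }
}
```

On a single fast core this buys you nothing, but on a multicore CPU the slices can genuinely run side by side — which is exactly the bet the hardware is making.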
Not necessarily. If Java were a slam-dunk superior way to exploit parallel processing, I would have posed this shift to Java as a prediction, not a question. It's not that simple. For example, don't let anyone tell you that Java was built from the ground up to exploit multiple processors and cores. It ain't so. Java's famous garbage collection got in the way of parallel programming at first. Java versions as late as 1.4 used a single-threaded garbage collector that stalls your Java program whenever it runs, no matter how many CPUs you may have.
But Java's multithreaded design more easily exploits parallel processing than many other languages, and it ended up lending itself to improvements in garbage collection. JDK 5.0 includes various tuning parameters that may minimize the impact of garbage collection on multiple cores or CPUs. It's not perfect, and you have to take into account the way your application is designed, but it is a vast improvement, and the improvement is made possible by the fact that Java was well conceived from the start.
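As a sketch of what such tuning looks like — the flag values are illustrative, not recommendations, and "MyApp" is a placeholder class — the 5.0-era HotSpot JVM accepts options like these:

```shell
# Throughput collector: young-generation GC work is spread across
# several threads, e.g. one per core on a quad-core machine.
java -XX:+UseParallelGC -XX:ParallelGCThreads=4 MyApp

# Alternative: the mostly-concurrent collector, which trades some
# throughput for shorter pauses (use instead of, not with, the above).
java -XX:+UseConcMarkSweepGC MyApp
```

Which collector and thread count wins depends entirely on your application's allocation behavior, which is why these are tuning knobs rather than defaults.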
Obviously, these features aren't enough. IBM builds a lot of additional parallelism into its WebSphere application server. In addition, IBM and other researchers are working on a Java-related language, X10, designed specifically to exploit parallel processing (see x10.sourceforge.net/x10home.shtml).
It is also interesting to note that when Intel boasts about its quad-core performance on Java, its numbers are based on the BEA JRockit JVM, not the Sun JVM. See www.intel.com/performance/server/xeon/java.htm for the chart and more information. I suspect Intel chose this JVM because its garbage collection algorithms are more efficient on multiple cores.
The fact that Intel used a non-Sun JVM makes this whole question of the future of Java on multicore CPUs interesting. I won't discount the possibility that Intel may have tuned its code to work best with the BEA JVM. But it is a significant plus for Java that you can choose a JVM best suited for the hardware you have. The big plus is that you still have to learn only one language, Java, and this does not limit your choice of architecture or platforms. If you run your application on a multicore platform, you can choose between JVMs and JVM-specific tuning parameters to squeeze extra performance out of your application.
Now, think a bit further into the future. Imagine parallelism showing up in a future generation of cell phones or other small devices. What better language than a platform-neutral one to take advantage of this future?
Some disclaimers are in order, though. First, I don't think any tool or language will soon make optimal programming for multiple cores or CPUs totally transparent. Java has the potential to exploit multiple cores with less impact to the programmer than many other languages, however, which is why I think Java's future is getting even more promising with this new hardware trend. Second, although I think Java has the edge, other managed, multithreaded languages will no doubt ride this same wave. Python, Ruby and your other favorite language in this genre probably have an equally bright future.
Nicholas Petreley is Editor in Chief of Linux Journal and a former programmer, teacher, analyst and consultant who has been working with and writing about Linux for more than ten years.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
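That find-plus-grep combination can be sketched in a few lines; a throwaway temporary directory stands in for /home and "ERROR" for the entry being sought, so the example is self-contained:

```shell
# Build a stand-in directory tree (substituting for /home).
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/alice" "$tmpdir/bob"
echo "ERROR: disk full" > "$tmpdir/alice/app.log"
echo "all quiet today"  > "$tmpdir/bob/app.log"

# find selects every .log file; grep -l then names each file that
# contains the entry we are after.
matches=$(find "$tmpdir" -name '*.log' -exec grep -l "ERROR" {} +)
echo "$matches"

rm -rf "$tmpdir"
```

Swap in /home and your own search string and the one-liner does exactly what the paragraph describes: two single-purpose tools strung together into a more powerful one.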
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide