That First Gulp of Java
Most modern languages are “fully compiled.” The compiler generates the “native code” of a specific target platform, that is, the machine language appropriate to a particular operating system running on a particular processor. Once a program is installed on a user's machine, the operating system executes its instructions directly—an arrangement that achieves efficiency at the expense of portability. For example, if you are running Linux on a Pentium-based PC and creating a C program with GNU's gcc compiler, the resulting executable will run just fine on your machine and ones like it, but not on a Pentium running OS/2, and not on a DEC Alpha running Linux.
If you want to distribute your program widely, you will need to recompile it for a dismaying number of platforms, probably using a number of different development tools. Oh, and you want to keep supporting your software after sale, as well? Nice of you—start hiring. Experience has shown that the long-term effort of maintaining software products on multiple platforms far exceeds the effort of developing them in the first place. And the costs are proportional to the effort—better hunt up some heavy financing to pay all those people.
Java eliminates the complexity of cross-platform development and support through its reliance on a “virtual machine,” the Java Virtual Machine (JVM).
As the word “virtual” implies, a Java compiler's target is a machine that does not actually exist. Instead of generating the native code of a particular platform, it produces “bytecode”, a sequence of 8-bit codes that no actual machine can execute directly. Your program will run, however, and not only on your Linux box, but on any platform that supports Java—and these days that's the same as saying “on any popular platform.”
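To make this concrete, here is a minimal class (the name Hello is arbitrary). Compiling it with javac produces a Hello.class file containing bytecode rather than native machine instructions, and javap -c will list those bytecode instructions if you are curious what they look like.

    // Hello.java: a trivial class used only to illustrate compilation to bytecode
    public class Hello {
        public static void main(String[] args) {
            // javac Hello.java produces Hello.class (bytecode, not native code)
            System.out.println("Hello from the virtual machine");
        }
    }

The resulting Hello.class file can be copied, unchanged, to any platform that has a Java run-time system and executed there.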
To execute a Java program, a machine must have a Java Run-time System (JRTS), an implementation of the JVM for that platform—but that is all it needs to run any program written in Java. The JRTS executes the bytecode much as an operating system executes native machine code. Because the run-time system handles all those nasty machine-specific issues for every program, the program itself does not have to.
It is a common mistake to confuse the run-time system with the virtual machine. Even people who should know better sometimes refer to “a program running on [a particular computer's] virtual machine”—and thereby conceal a crucial distinction. Part of Java's unique character is that only one piece of software, the JRTS, knows anything about the particular platform. Programs themselves remain blissfully ignorant of hardware dependencies—and so do programmers. They write their code for a machine that does not exist, serenely confident that doing so makes it portable to any popular platform.
A JRTS loads compiled classes as needed, performs security checks, and dynamically binds calls to methods. At that point many run-time systems begin executing the bytecode, interpreting each instruction as it is encountered. This continuous interpretation limits execution speed and was the source of many early complaints about poor performance. An increasing number of Java implementations solve this problem by performing a second compilation step, “just in time.”
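The “as needed” part is easy to see in a small sketch. The class name Plugin below is hypothetical and stands in for any class on the class path; nothing is loaded until the program first asks for it, and Class.forName asks the run-time system to locate, verify, and load a class by name.

    // Loader.java: a sketch of on-demand class loading.
    // "Plugin" is a hypothetical class name used only for illustration;
    // if no such class is on the class path, Class.forName throws
    // ClassNotFoundException.
    public class Loader {
        public static void main(String[] args) throws ClassNotFoundException {
            System.out.println("About to load Plugin...");
            // The JRTS locates, verifies and loads the class only when
            // this call executes; its bytecode is then interpreted or
            // JIT-compiled like any other.
            Class<?> c = Class.forName("Plugin");
            System.out.println("Loaded " + c.getName());
        }
    }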
Native-code compilers produce fast executables at the expense of portability. Java compilers that produce bytecode achieve portability at the expense of speed—if the JRTS interprets each instruction every time it is encountered.
Many run-time systems don't. In place of the interpreter they include a just-in-time (JIT) compiler. The first time the JRTS loads a portion of bytecode, the JIT compiler translates it into native code. Thereafter, the run-time system executes the native code instead of interpreting the bytecode; execution speed improves dramatically.
It is worth stressing that users get the speed of fully compiled programs without sacrificing portability. The JIT compiler is part of the JRTS, not the Java source-code compiler, so all platform-specific knowledge resides only in the run-time system, on the user's machine, where it belongs. Software developers continue to compile and distribute the same portable, architecture-neutral bytecode files.
A second compilation step is not as expensive as it might sound. JIT compilation is actually quite fast in practice, because the most time-consuming tasks are completed in the first translation, from original Java source code to bytecode. JIT-compiled code is currently running 20 to 30 times faster than interpreted bytecode; this level of performance compares favorably with that of object-oriented code written in C++. Future improvements could boost this ratio to 50 or more, which would put Java executables on a par with optimized C code.
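One rough way to feel that difference on a HotSpot-style run-time system is to time the same loop with the JIT compiler disabled and then enabled; the -Xint flag forces pure interpretation. This is a sketch, not a rigorous benchmark, and the exact numbers will vary widely by implementation and hardware.

    // Bench.java: a crude timing loop for comparing interpreted and
    // JIT-compiled execution; not a rigorous benchmark.
    public class Bench {
        public static void main(String[] args) {
            long start = System.nanoTime();
            long sum = 0;
            for (int i = 0; i < 100_000_000; i++) {
                sum += i;
            }
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println("sum=" + sum + " in " + ms + " ms");
        }
    }

Running java -Xint Bench (interpreter only) and then java Bench (JIT enabled) shows the gap, though a loop this trivial tends to exaggerate it.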