Getting to Know Mono
If you have ever written an application for the Linux desktop, or even looked into writing one, you are familiar with the multitude of language bindings available for the various GUI toolkits. This is one of the strengths of writing GUI applications for Linux; you are not locked into a particular programming language. Unfortunately, you quickly come to realize that different language bindings offer varying amounts of API completeness. A widget you used from one language may not yet be supported by the bindings for another. This is the downside of supporting multiple languages. The amount of work needed to maintain an API increases with each set of bindings. A change or update to the original API must be replicated in each of the language wrappers.
Now imagine a single GUI toolkit, accessible from any programming language without having to rely on API wrappers—a toolkit that offers the same functionality to every language that uses it. Mono has the potential to provide this, plus much more, by offering programming language independence as well as programming language interaction.
Life for Mono began about two years ago at the Linux software company Ximian, Inc. Ximian is known for their Ximian Desktop, Evolution PIM/e-mail client, Red Carpet upgrade system and enthusiastic CTO Miguel de Icaza. Recognizing the potential in a couple of newly proposed standards, Miguel de Icaza began prototyping what would later become the Mono Project.
So what were these standards that caught Miguel de Icaza's eye? It's no secret that they were ECMA-334 and ECMA-335, the specifications for the core technologies in Microsoft's .NET development platform. At this point, it probably is important to point out that there is a difference between the .NET development platform and the blanket term “.NET”. Microsoft covers a whole slew of products and services, including operating systems, development tools, network services and applications, with the expansive .NET term. We are concerned with only a portion of .NET.
In October 2000, Microsoft, Hewlett-Packard and Intel jointly submitted the specifications for a runtime environment known as the Common Language Infrastructure (CLI) and a newly developed object-oriented language named C#. By the second half of 2001, Ximian officially had launched the Mono Project to provide an open-source implementation of the .NET development platform based on the proposed standards. In December 2001, the European Computer Manufacturers Association (ECMA) officially ratified as standards the specifications for the CLI and C# language.
The CLI lays out a base class library and a runtime environment that provides services such as Just In Time (JIT) compilation, memory management, exception handling, loading and linking, and security management. To illustrate this better, it helps to compare it to the traditional method of compiling source code.
Traditionally, source code is converted by the compiler to machine-specific instructions. The instructions are then executed directly on the processor. A program compiled for the x86 processor line will fail to execute on a PPC processor without first being recompiled for that processor. This makes it difficult for software to target multiple hardware platforms, as a different version of compiled code must be kept for each one.
As an alternative, source compiled for a runtime environment is converted to an intermediate set of instructions that are not dependent on the underlying hardware. The intermediate instructions then can be executed in a couple of different ways. One method is to use an interpreter. The interpreter loads the intermediate instructions and then executes them, in essence acting as a virtual machine. In a second method, the intermediate form is JIT-compiled at runtime or installation time into machine-specific instructions and then executed directly. Because JIT compiling produces native platform instructions, compiling can be optimized for the target processor. The JIT compiler can increase execution speed further by converting only the portions that are being used into native instructions and then storing those in memory for subsequent calls.
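The interpreter approach can be sketched in a few lines of Python. The instruction names below are invented for this illustration; real CIL opcodes (such as ldc.i4 and add) are similar in spirit but far more numerous:

```python
# Toy interpreter for a hardware-independent intermediate
# instruction set. The runtime, not the compiled program,
# is what knows about the host CPU, so the same instruction
# list runs unchanged on x86, PPC or anything else.
def interpret(code):
    stack = []
    for instr in code:
        op = instr[0]
        if op == "push":
            stack.append(instr[1])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError("unknown opcode: " + op)
    return stack.pop()

# (2 + 3) * 4, expressed as stack-machine instructions
program = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
print(interpret(program))  # prints 20
```

The per-instruction dispatch loop is exactly where an interpreter loses time; a JIT compiler removes that loop by translating the instructions to native code first.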
The trade-off for the platform independence of a runtime environment is execution speed. Compared to the traditional method of compiling to native instructions, the runtime is slower. How much slower depends on the specific situation and which method of execution the runtime uses. Generally, though, an interpreter provides the slowest execution speed. The performance of a JIT compiler is much closer to that of traditional compiling, because both produce native instructions. The overhead of the runtime itself still keeps performance slightly behind that of natively compiled code.
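The compile-once, reuse-thereafter behavior that lets a JIT approach native speed can be sketched with a toy method-at-a-time "JIT" in Python, where a generated Python function stands in for native code. All names here are invented for this sketch:

```python
# Toy "JIT": on the first call for a method, translate its
# intermediate code into a real function once and cache it.
# Subsequent calls reuse the compiled version and skip the
# per-instruction dispatch loop entirely.
_cache = {}

def jit(name, code):
    if name not in _cache:
        # "Compile": emit one statement per instruction,
        # then build a callable from the generated source.
        body = []
        for instr in code:
            if instr[0] == "push":
                body.append("stack.append(%d)" % instr[1])
            elif instr[0] == "add":
                body.append("b, a = stack.pop(), stack.pop(); stack.append(a + b)")
            else:
                raise ValueError("unknown opcode: " + instr[0])
        src = ("def fn():\n    stack = []\n    "
               + "\n    ".join(body)
               + "\n    return stack.pop()")
        ns = {}
        exec(src, ns)
        _cache[name] = ns["fn"]
    return _cache[name]

f = jit("main", [("push", 2), ("push", 3), ("add",)])
print(f())  # prints 5; later jit("main", ...) calls hit the cache
```

A real JIT emits processor instructions rather than Python source, but the caching structure is the same: translation cost is paid once per method, so hot code runs at near-native speed afterward.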
I know what you are thinking. An object-oriented language, a base class library, a runtime environment—this sounds a lot like Java. Well, you are right. The components of the CLI are very similar to those found in Java. However, there is one fundamental difference. The Java runtime was designed only for the Java language. Although it is true that a handful of other languages have been ported to output Java bytecode and run on the JVM, this still falls short of the language neutrality supported by the CLI. From the ground up, the CLI was designed to be the execution environment for many programming languages. The data type system of the CLI can support imperative languages, like C or Pascal, as well as object-oriented languages. Not only does the CLI have facilities to execute multiple languages (language independence), it also provides the framework to allow those languages to share data with each other (language interaction), including cross-language exception handling. An object created in one language can be inherited in another. Details of how the CLI achieves this level of language neutrality can be found by examining its core components.
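To make the idea of language independence concrete, imagine two tiny "front ends" for different surface syntaxes that both emit the same intermediate instructions, which one shared runtime then executes. Everything below is invented for illustration and is vastly simpler than the CLI's real instruction set and common type system:

```python
# Two toy compilers targeting one intermediate format.
def compile_infix(expr):
    # Source in "2 + 3" style
    a, op, b = expr.split()
    return [("push", int(a)), ("push", int(b)), (op,)]

def compile_prefix(expr):
    # Source in "+ 2 3" style
    op, a, b = expr.split()
    return [("push", int(a)), ("push", int(b)), (op,)]

def run(code):
    # One shared runtime executes output from either front end.
    stack = []
    for instr in code:
        if instr[0] == "push":
            stack.append(instr[1])
        elif instr[0] == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

print(run(compile_infix("2 + 3")))   # prints 5
print(run(compile_prefix("+ 2 3")))  # prints 5
```

Because both front ends produce identical intermediate code, the runtime cannot tell which language a program came from. The CLI extends this idea to whole object systems: a shared type system is what allows a class written in one language to be inherited and extended in another.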