One of the important lessons from C was that real systems sometimes need to be programmed essentially at the machine level. This power has been nicely integrated into Modula-3.
Any module that is marked as unsafe has full access to machine-dependent operations such as pointer arithmetic, unconstrained allocation and de-allocation of memory, and machine-dependent arithmetic. These capabilities are exploited in the implementation of the Modula-3 I/O system. The lowest levels of the I/O system are written to make heavy use of machine-dependent operations to eliminate bottlenecks.
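As a minimal sketch of what such a module might look like (the module and procedure names here are my own, not from the SRC sources), address arithmetic and LOOPHOLE are legal only inside a module marked UNSAFE:

```modula-3
UNSAFE MODULE Peek;

PROCEDURE ByteAt (base: ADDRESS; offset: INTEGER): CHAR =
  (* Address arithmetic ("base + offset") and the unchecked type
     conversion LOOPHOLE are only permitted in UNSAFE modules. *)
  VAR p := LOOPHOLE (base + offset, UNTRACED REF CHAR);
  BEGIN
    RETURN p^
  END ByteAt;

BEGIN
END Peek.
```

A safe client never sees any of this; the compiler rejects these operations anywhere outside an UNSAFE module or interface.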
In addition, existing (non-Modula-3) libraries can be imported. Many existing C libraries make extensive use of machine-dependent operations. These can be imported as “unsafe” interfaces. Then, safer interfaces can be built on top of these while still allowing access to the unsafe features of the libraries for those applications that need them.
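As a hedged illustration (assuming the Ctypes support interface from the SRC distribution), a C routine such as getenv might be declared in an unsafe interface like this:

```modula-3
(* The raw, unsafe view of the C routine. *)
UNSAFE INTERFACE CEnv;

IMPORT Ctypes;

<*EXTERNAL*>
PROCEDURE getenv (name: Ctypes.char_star): Ctypes.char_star;

END CEnv.
```

A safe wrapper interface can then convert between TEXT and C strings and hide the raw pointers, so that ordinary clients never import CEnv directly.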
How often have you avoided changing a base header file in a C/C++ system because you didn't want to recompile the world? How many times have you restructured your header files, not because it was the right thing to do, but because you needed to cut down on the number of recompilations after each change?
The SRC implementation of Modula-3 has a rather elegant solution to this problem. If an item in an interface is changed, only those units that depend on that particular item will be recompiled. That is, dependencies are recorded on an item basis, not on an interface file basis. This means much less recompilation after each set of changes.
m3gdb is a version of GDB that has been modified to understand and debug Modula-3 programs. One of the nice features of m3gdb is that it understands Modula-3 threads and allows you to switch from thread to thread when debugging a problem.
Also very exciting is Siphon, a set of servers and tools to support multi-site development. The basic idea is that you create a set of packages, where a package is just a collection of source files and documentation. Siphon provides a simple model for checking packages out and in. Checking out a package locks it so that no one else can check it out and modify its contents. This probably doesn't sound that exciting. The exciting thing that Siphon does is to automatically propagate modified files to other sites when the package is checked back in. It does this in such a way that a package is never seen in a “half-way” state, where some of its sources have been copied but not yet all of them. Further, it does this in the face of failure. One of the really hard parts of multi-site development is making sure that everyone has the most recent copy of the sources, especially when communication links can go down. Siphon takes care of all of these problems for you and can save you considerable work if you are involved in multi-site development. By the way, Siphon is not restricted to Modula-3 source files; it can manage any type of source or documentation file.
A good, simple object-oriented language makes a nice starting point, but that in itself probably doesn't provide sufficient motivation for considering a new language. Real productivity comes about when there are good reusable libraries. This is one of the real strengths of the SRC Modula-3 system. It provides a large set of “industrial strength” libraries. Most of these libraries are the result of a number of years of use and refinement. They are as well documented as, or better documented than, most commercially available libraries.
Libm3 is the workhorse library for Modula-3. It is the Modula-3 equivalent of libc (the standard C library), but it is considerably richer.
Libm3 defines a set of abstract types for I/O; these are called “readers” and “writers”. Readers and writers present an abstract interface for reading from and writing to “streams”, which represent buffered input and output. Stdin, stdout, and stderr are streams that are familiar to most programmers. The streams package was designed to make it easy to add new kinds of streams.
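As a small sketch of the reader/writer style (error handling abbreviated with a FATAL pragma), a program that copies standard input to standard output line by line might look like:

```modula-3
MODULE Echo EXPORTS Main;

IMPORT Rd, Wr, Stdio;

<*FATAL ANY*>

BEGIN
  TRY
    LOOP
      (* Rd.GetLine strips the newline, so we put one back. *)
      WITH line = Rd.GetLine (Stdio.stdin) DO
        Wr.PutText (Stdio.stdout, line & "\n")
      END
    END
  EXCEPT
    Rd.EndOfFile => Wr.Flush (Stdio.stdout)
  END
END Echo.
```

Because Rd.T and Wr.T are abstract, the same loop works unchanged over files, pipes, or in-memory text streams.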
In addition to the standard I/O streams, one can open file streams and text streams (that is, streams over character strings). There is also a set of abstractions for unbuffered I/O. In addition to the File type, there are Terminal and Pipe. The Fmt interface provides a type-safe version of C's printf. A big source of errors in C programs is passing one kind of data to printf but formatting it as a different kind of data. The Fmt interface was designed to have the flexibility of printf without introducing its problems.
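The trick is that each value is converted to TEXT by a type-specific routine before it ever reaches the formatter, so a mismatch is a compile-time type error. A brief sketch (using the IO convenience interface from libm3):

```modula-3
MODULE FmtDemo EXPORTS Main;

IMPORT Fmt, IO;

(* Fmt.Int and Fmt.Real each accept only their own type, so there is
   no way to hand Fmt.F an integer while asking for a float. *)
BEGIN
  IO.Put (Fmt.F ("%s items (%s full)\n",
                 Fmt.Int (42), Fmt.Real (87.5)))
END FmtDemo.
```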
Libm3 also defines a simple set of “container” types as generic interfaces. The basic container types include tables, lists, and sequences. A table is an associatively indexed array. The list type is the familiar “lisp” style list. A sequence is an integer-indexed (CARDINAL, actually) array that can grow in size.
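As a hedged sketch (assuming the stock TextIntTbl instantiation of the generic Table interface, with TEXT keys and INTEGER values):

```modula-3
MODULE Count EXPORTS Main;

IMPORT Fmt, IO, TextIntTbl;

VAR
  t := NEW (TextIntTbl.Default).init ();  (* default hash table *)
  n: INTEGER;
BEGIN
  (* put returns TRUE if the key was already present; we ignore it. *)
  EVAL t.put ("apples", 3);
  EVAL t.put ("pears", 5);
  IF t.get ("apples", n) THEN
    IO.Put (Fmt.Int (n) & " apples\n")
  END
END Count.
```

Because Table is generic, the same interface can be instantiated over any key and value types, with the instantiation checked at compile time.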
Finally, Libm3 provides a simple persistence mechanism called Pickles. Writing code to convert complex data structures to and from some disk format is tedious and error prone. Many programmers don't do it unless they absolutely have to. With the Pickle package, you no longer need to write this kind of code. Since the runtime knows the layout of every object in memory, it can use this information to walk a set of structures and read them from or write them to a stream. The programmer does not have to write object-specific code for writing an object to a stream, although he or she can if a better representation is known. For example, the programmer of a hash table may choose to write out individual entries if the table is below a certain size.
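A sketch of the idea (the Node type and file name are my own; the Pickle and FileWr interfaces are from libm3):

```modula-3
MODULE Save EXPORTS Main;

IMPORT FileWr, Pickle, Wr;

<*FATAL ANY*>

TYPE
  Node = REF RECORD value: INTEGER; next: Node := NIL END;

BEGIN
  WITH list = NEW (Node, value := 1, next := NEW (Node, value := 2)),
       wr   = FileWr.Open ("list.pkl") DO
    (* One call walks the whole linked structure, using the runtime's
       knowledge of each object's layout. *)
    Pickle.Write (wr, list);
    Wr.Close (wr)
  END
END Save.
```

Reading the pickle back is symmetric: a single read call over a reader stream reconstructs the entire structure, sharing and all.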
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Tech Tip: Really Simple HTTP Server with Python
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
- Google's SwiftShader Released
- SuperTuxKart 0.9.2 Released
- Doing for User Space What We Did for Kernel Space
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide