Porting MS-DOS Graphics Applications
I first started VGA programming under MS-DOS, using the popular DJGPP C compiler. In recent years, this protected-mode 32-bit compiler, which is basically an MS-DOS port of gcc, has established itself as one of the preferred compilers in the MS-DOS game programmers' community. DJGPP was in fact the MS-DOS compiler of choice for id Software's game Quake. For the Linux console port of Quake, the Linux Super VGA library, SVGALIB, was used.
When I first decided that I was going to port my own 3-D Model Viewer, jaw3d, from MS-DOS to Linux, it seemed logical to use the same approach. SVGALIB is very intuitive and allows me to easily maintain and further develop my 3-D Model Viewer for both platforms.
I found the easiest way to work with one set of source files for both platforms was to use preprocessor directives in places where different code was needed. Since I had already written and used DJGPP's low-level VGA and mouse routines for the MS-DOS version, I simply added the equivalent SVGALIB Linux code in each instance and separated the MS-DOS and Linux code with the preprocessor directive #ifdef. The following snippet shows one of the many ways this can be done:
#ifdef __DJGPP__
    ...
    ...
#endif
#ifndef __DJGPP__
    ...
    ...
#endif
__DJGPP__ is automatically defined by the DJGPP compiler, and is not defined by gcc under Linux.
An additional advantage of using SVGALIB under Linux is that there is also a DJGPP version of SVGALIB. Let's try not to get confused at this point: SVGALIB is a graphics library that does some behind-the-scenes, low-level VGA and mouse work for you. Although SVGALIB was first developed for Linux, someone eventually released a version that works with DJGPP under MS-DOS. Why not use SVGALIB for both MS-DOS and Linux? That would allow 100% identical code on both platforms.
I don't recommend this approach, however, for two reasons. First, when I made speed comparisons of my 3-D engine between the two platforms, I noticed that graphics performance with SVGALIB under DJGPP was sluggish compared with SVGALIB under Linux. Second, the MS-DOS executable was unnecessarily big, because it had to be statically linked with the SVGALIB library. Using SVGALIB under Linux did not present either problem: thanks to shared libraries, the dynamically linked executable remained tiny, and graphics performance was actually slightly better under Linux than under MS-DOS. For the sake of performance and executable size, I found it best to use DJGPP's low-level routines under MS-DOS and SVGALIB under Linux. This makes a difference, especially for 3-D engines, where every frame per second counts.
The advantage of the DJGPP port of SVGALIB is that you can test your SVGALIB Linux code under MS-DOS without having to reboot. Except for speed and executable size, both versions of SVGALIB behave identically.
Note that the DJGPP port of SVGALIB is still in beta, but I ran across only one minor problem, and it was easily fixed. The file vgakeybo.h included with the DJGPP port of SVGALIB differed from the file vgakeyboard.h under Linux, which made cross-compilation impossible wherever keyboard code was used. The two files should, of course, be identical; the solution is to copy the Linux version of the include file over the DOS one.
The three platform-specific areas of code are VGA access, mouse input and keyboard input. If you have already completed an MS-DOS graphics application, you may be using much of this code and can quickly add the equivalent SVGALIB code. On the other hand, if you have no previous graphics programming experience, you will find the code shown in Listings 1 through 4 very useful.
In the case of my 3-D Model Viewer, jaw3d, a complete frame is first rendered into a buffer with the same dimensions as the screen and then copied to video memory all at once, which allows frequently updated screens to be displayed in succession without any flickering. This is done as follows:
memcpy (video_buff, image_buffer, DIM_X * DIM_Y);
/* video_buff was initialized above */

dosmemput (image_buffer, DIM_X * DIM_Y, 0xA0000);
/* 0xA0000 is the video memory in VGA mode 13h */