Linux as a Work Environment Desktop
As we've seen over the last 18 months or so, Linux has stormed up the league tables in terms of server-side operating systems. Many people, in fact, are now beginning to see that there are options beyond a small number of large companies that traditionally held sway over the high-end operating system market.
The next logical step in this evolution of Linux is the migration from the back-end services, where it has proved itself admirably, to the desktop. This switch is more difficult to accomplish, as Linux was traditionally developed with server-side processing in mind. Until recently, there had been no concerted effort to develop applications for the end user that could compete against the dominant players.
While many argue that Linux is still not ready to act as a desktop OS, there isn't too far to go before this scenario becomes feasible. In this article, I aim to give you a feel for some of the challenges that Linux enthusiasts may meet when setting out to make this an environment more appealing to their sensibilities, as well as some tips on resolving them.
I work as a developer in a small office environment. The project that I'm working on at the moment has put me in an ideal situation to experiment with Linux as a serious desktop environment. I admit that this scenario may not be appropriate for everyone, but I attempt to deal with handling some of the more common tasks under Linux.
The project concerns Java development, with the main development platform being Solaris. The tasks that I've had to perform include designing the project, developing and testing code, tracking project defects, and general day-to-day administration, such as mail, research, and document reading and writing.
Most of the design work that takes place in the office involves the use of UML as the modeling language. In UML there are a number of conventions for representing different aspects of object-oriented program design. While there is a plethora of diagramming tools available for the Windows environment, tools for the Linux environment are generally lesser known, harder to come by, and perhaps not as polished as their counterparts in the Microsoft world. Because the development environment that I work in is predominately Java, I had the advantage of not having to base my search wholly in the Linux application area; a good tool written in Java is cross-platform by nature. While searching for suitable tools, I came across two likely candidates.
One of these tools is called ArgoUML. The web site for the tool, listed in Resources, describes ArgoUML as a cognitive design tool. It attempts to examine your design and provide suggestions on what may be lacking, or point out discrepancies in your design. Only a development version of ArgoUML was available from the web site at the time of writing; the project seems to have recently moved to a new site and revamped its development efforts.
On Windows, the Rational Rose UML tool has a pretty powerful presence in the object-oriented design world. Being able to read and write project files that are compatible with Rose is a great advantage. MagicDrawUML is a commercial UML design tool. Written in Java, it is completely cross-platform, and by exporting your projects in Rational Rose format, you can also share your work with others. As a side note, many people claim that Java can be quite slow, so what the developers did with this tool is pretty impressive. The user interface is as responsive as any you might use in a Windows or Linux environment. The downside of this product is that it doesn't come free. However, the license fees are a fraction of Rational Rose's, and in a commercial development environment, it may be a worthwhile investment.
In this section, I try to give you some idea of the tools that are available for Linux that can be used to develop and build a Java project.
The build environment we use has been set up with Make. This tool is pretty standard on Linux installations. Make is available on a wide variety of platforms, so it is a good choice for cross-platform projects. The bulk of platform-independent macros and targets can be defined in standard Makefiles. By using the following strategy, you can minimize the fuss involved in building over multiple platforms.
First, define an environment variable ARCH, which is set to whatever the uname command returns. This can be done at login time by adding the following command to your .bash_profile (for bash users):
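A minimal version of that command (assuming bash) might look like this:

```shell
# Add to ~/.bash_profile: capture the platform name once per login.
# uname prints "Linux" on Linux and "SunOS" on Solaris.
export ARCH=$(uname)
```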
Then, move all your platform-specific Make routines and definitions to a makefile with a name such as Makefile.$ARCH, where $ARCH is what is returned by uname on the particular platform. Finally, in your main makefile, add the following line to include the platform-specific definitions in your make commands:
include Makefile.$(ARCH)

This results in the correct makefile definitions being automatically loaded by make at runtime.
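As a sketch of the resulting layout (the variable names and paths below are only illustrative), the platform-specific file carries anything that differs between machines, and the shared Makefile pulls it in by name:

```make
# --- Makefile.Linux: loaded only when $(ARCH) is "Linux" ---
JAVA_HOME = /usr/local/jdk        # illustrative path
JAVAC     = $(JAVA_HOME)/bin/javac

# --- Makefile: shared macros and targets for every platform ---
include Makefile.$(ARCH)

classes:
	$(JAVAC) -d build src/*.java
```

On Solaris, the same main Makefile would instead include Makefile.SunOS, with no other changes.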
The next stage in your build environment is finding a Java compiler that has been ported to Linux. At the moment, there are a number of Java compiler and interpreter projects in the pipeline. But so far, for production work, the options are the Java development kit that has been ported by the Blackdown team, or the JDK that is available from IBM. Depending on your situation, you may have reason to prefer one JDK over the other. Both these JDKs support the latest specifications from Sun.
Another advantage of using Java is the cross-platform nature of the byte code that your source code is compiled into. This means that if you need to use extra class libraries or jar files, there is no need to go hunting for platform-specific implementations. However, many Java applications do in fact include native libraries, and this limits execution of those applications to the platforms they have been ported to. Depending on how much you need to run these applications, this may not be a problem. For example, the project that I'm working on makes use of the Java Message Service (JMS) specification, defined by Sun. The Java Message Queue (JMQ) is a product released by Sun that implements JMS. JMQ has not yet been ported to Linux, and as a result, I am unable to test my code against it in Linux. I do need to compile my code against it, and because the libraries are mostly in jar format, I can copy the jar libraries onto my local machine and still successfully build my application against them.
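In practice that means the jar only has to be on the compile-time classpath; the running service is not needed to build. A sketch (the jar name, paths, and class name here are invented for illustration):

```shell
# Jar copied over from the Solaris machine; only needed at compile time.
CLASSPATH="lib/jmq.jar:."
export CLASSPATH
# The build step would then be:
#   javac -classpath "$CLASSPATH" MyJmsClient.java
```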
Applications to help you develop code are two a penny. Each editor or IDE has its evangelists, so making a recommendation in this area is always a touchy subject, but I will mention my favorite editor. Due to job requirements, I was forced to learn vi a few years ago. Since then, I have moved on to vim and gvim and never looked back. I've tried other IDEs now and again, such as JBuilder and even Microsoft's Developer Studio, but until they implement the vi keymaps, those IDEs will only ever get a passing look from me. Where these IDEs do have advantages over vi is in debugging, but sometimes a well-placed println() will do just as good a job.
Vim and gvim have color syntax highlighting, auto-indentation and other features that are too numerous to list. These editors can even handle write-build-debug, if set up correctly. An extremely powerful and easy-to-use function of vi clones is macros. A little thought about repetitive tasks in an editor can result in a simple but powerful macro to help you do what might have taken a lot longer in a less powerful editor.
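As a small illustration (the editing task here is invented), a vi/vim macro is recorded into a register and then replayed, optionally with a count:

```vim
" qa starts recording keystrokes into register a; the final q stops it.
" This example comments out the current line Java-style, then moves down:
"   qa I// <Esc> j q
" Replay once with @a, or apply it to the next 20 lines with:
"   20@a
```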
What we're left with, then, are two of the more project-management type features that are usually left until last in any self-respecting project: source code management and bug/defect tracking. The most widespread source code management packages are SCCS, RCS and CVS. SCCS isn't available as an open-source application, which leaves CVS and RCS. RCS is suitable for smaller projects. I recently had the opportunity to work with CVS. While, for the most part, it uses the same history-file format as RCS, the user interface to CVS is much more elegant. Added to that, you have multiple-developer support and remote-client support. That last feature comes in quite handy in my setup. Our CVS repository is stored on a Solaris machine. In my aim to use all things Linux and to prove that it can actually be done, I installed a CVS client on my Linux machine (this would probably come standard on some installations). By setting the CVSROOT environment variable to access the CVS repository on the Solaris machine remotely, I can manage my source code locally. CVS uses rlogin to execute the CVS commands remotely, so make sure that you have set up the proper access on the remote machine. To point at a remote repository, the CVSROOT environment variable must take the following form:
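A typical remote CVSROOT looks like the following; the user name, host name, and repository path below are only placeholders:

```shell
# :ext: tells CVS to reach the repository over rsh/rlogin
# (the access method assumed by the setup described above).
CVSROOT=":ext:jdoe@solarisbox:/export/cvsroot"
export CVSROOT
```

With that set, commands such as cvs checkout and cvs commit run locally but operate on the repository stored on the Solaris machine.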
Good open-source bug/defect tracking software is hard to come by. I could only find two packages that looked at all stable. One of these was dropped because it didn't fully meet our requirements. The only real possibility available to us was Bugzilla, the bug-tracking application used by the Mozilla team. This application has a decent web-based user interface, with a MySQL database back-end. It is highly configurable, and once we got over a few quirks, it was up and running reasonably quickly.