Linux as a Work Environment Desktop
Some of the typical Linux utilities come in very handy when developing a project, not just for the work involved in coding and testing, but also for simply making life a bit easier. For example, the commands find, cat, awk and egrep tend to be used quite a lot for general system administration, finding your way around your source files, and writing small scripts that make a job easier. Without installing something like Cygwin for Windows, you would be hard-pressed to find similar “life-improving” utilities on Microsoft platforms.
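As a small sketch of the kind of one-liner I mean, the following lists every Java source file under the current tree that mentions a given class name (the class name “JmsQueue” is just a made-up example):

```shell
# find emits every .java file below the current directory;
# egrep -l prints only the names of files that match the pattern.
find . -name '*.java' -print | xargs egrep -l 'JmsQueue'
```

The same pattern works for any file type and any regular expression, which is why these utilities get so much daily use.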
A particular class of utility more directly related to code development is the “pretty printer”: a utility that tidies your code so that it conforms to a particular standard, such as the Sun Microsystems Java coding standard or your own company's coding standard. The indent utility is available on most Linux systems, and by specifying a range of options, you can control how the final output is formatted. At the moment, our project conforms to the Sun Microsystems coding standard. When I first went looking for the indent options that would format Java code to the correct standard, I came across the Jindent utility. This is an indenter written in Java, so again, it is completely cross-platform. By default, it formats to the Sun standard, and by creating your own configuration file, you can modify its output.
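To give a feel for how indent's options work (indent itself is aimed at C source; the file name myprog.c is a placeholder):

```shell
# -kr selects Kernighan & Ritchie brace style, -i4 sets a
# four-space indent, and -l80 wraps lines at 80 columns.
# indent rewrites the file in place, keeping a .c~ backup.
indent -kr -i4 -l80 myprog.c
```

Dozens of other flags control brace placement, comment alignment and so on, which is how a single tool can serve many different house standards.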
For backup purposes, we keep most of our development work on UNIX servers. The majority of our client machines are Windows NT. The UNIX machines we work with are equipped with NFS and SAMBA. By sharing out these drives, we can access those resources locally without having to log into the remote machine. A Linux machine can mount both NFS and SAMBA drives that have been shared out. By keeping the same directory structure as that on the remote machine, you reduce the need to produce machine-specific makefiles. For example, if a required jar library is located under /opt/FSUNjmq/lib/jms.jar on the remote machine, the makefiles that you developed can run on both the remote UNIX machine and your local Linux machine without any modification, by mapping /opt/FSUNjmq to /opt/FSUNjmq on your local machine. Commands for sharing out remote drives are similar to the following. For NFS:
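A minimal sketch, assuming a server named unixserver sharing /opt/FSUNjmq to a client called mylinuxbox (both names are hypothetical, and the /etc/exports syntax shown is the Linux form; Solaris and some other UNIX flavors use share/shareall instead):

```shell
# On the server: add the export as one line in /etc/exports --
#   /opt/FSUNjmq   mylinuxbox(ro)
# then activate the new export list (run as root):
exportfs -a

# On the local Linux machine: create the mount point and mount the
# share at the same path it has on the server, so makefiles need
# no per-machine changes (run as root):
mkdir -p /opt/FSUNjmq
mount -t nfs unixserver:/opt/FSUNjmq /opt/FSUNjmq
```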
For SAMBA, edit the /etc/smb.conf file: copy one of the example shares, modify it to point to the correct directory, and make sure to give your user name access permission.

It is not always possible to work solely off your local machine; there will be times when you need to log in to the remote machine to run various processes, such as when there isn't a port available for the application you wish to test against, or when you need to copy, move or access large amounts of data. In addition, logging in to the remote machine will give you better network performance. The rlogin command gives you easier access to remote machines than telnet. With rlogin, you can set up access permissions so that you are not required to enter a user name and password on every login. By creating a .rhosts file in your remote home directory and adding your local machine name to it, you will be able to drop into the remote machine easily. Just make sure that your .rhosts is set to read/write permission for the owner only, with no group or global access allowed; otherwise, your automatic authentication will not work.
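The .rhosts setup boils down to two commands on the remote machine (“mylinuxbox” and “myuser” below are placeholder names for your local host and account):

```shell
# Append the trusted local machine and user to ~/.rhosts:
echo "mylinuxbox myuser" >> "$HOME/.rhosts"

# rlogind ignores the file unless only the owner can read or
# write it, so lock the permissions down:
chmod 600 "$HOME/.rhosts"
```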
One of the annoying things I found about logging into remote machines is that you have to set up all your shell preferences a second time. This can be overcome by configuring a common profile file and making it available to both your local machine and your remote machines. Create a .commonProfile file in your remote home directory, and mount your remote home directory locally. So, in addition to your /home/username directory, you also have a /remote/username directory. In your local and remote profiles, you can then source your .commonProfile. Any changes you need to make can now be made only once, and they will be reflected in both environments.
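As a sketch, assuming the remote home directory is mounted locally under /remote/username (a hypothetical mount point; on the remote machine itself the path would simply be your home directory), both profiles could end with something like:

```shell
# Where the remote home directory lives on this machine.
REMOTE_HOME=/remote/username

# Pull in the shared settings if the file is reachable:
if [ -f "$REMOTE_HOME/.commonProfile" ]; then
    . "$REMOTE_HOME/.commonProfile"
fi
```

The guard keeps login working even when the remote mount happens to be unavailable.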
One important point about remote access to your UNIX servers is that when setting up a user account for yourself on your local machine, you should set the same username, uid and gid that you have been assigned on the remote machine. This makes life much easier in terms of write, mount and login permissions. Some of the more experienced users may be familiar with NIS; this is a method for keeping a central repository of users and passwords across a number of UNIX machines. With a bit of research, and a suitable network configuration, it may be possible to set up your Linux machine to use NIS to allow other users to access your machine. This avoids the need to duplicate each user id and group id when you add a new user to your machine.
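As a sketch, if the id command on the remote machine reported uid 1001 and gid 100 (hypothetical values, as is the account name), the matching local account could be created like this:

```shell
# Run as root on the local Linux machine. The -g flags pin the
# numeric gid and the -u flag pins the uid to the values assigned
# on the remote server, so file ownership matches across NFS mounts.
groupadd -g 100 users
useradd -u 1001 -g 100 username
```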
An extremely useful ability is being able to run X Window clients from the remote machine on your local machine. This gives you access to the remote machine in a graphical environment. However, even though you can run remote applications in a window on your local X Window display, in some cases there are incompatibilities between what the remote application is expecting from a UNIX X Window system and what your local X Window server provides. A classic case would be if your X Window server is running in 16-bit color, but the remote application can only run in 8-bit color. Rather than having to shut down your X Window session and restart in 8-bit color, you can either start up a second X Window session with 8-bit color locally, or you can run the remote window manager so that it displays on your local machine. The command for this is:
X :1 -query remotehost
The “:1” means that you want X to run as your second display. This requires you to set the display variable on the remote machine to your second display:
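In a Bourne-style shell on the remote machine, that looks like the following (“localmachine” stands in for your own machine's hostname):

```shell
# Send X clients on the remote machine to the second display (:1)
# of the local X server:
DISPLAY=localmachine:1.0
export DISPLAY
```

Any X application started from that shell will now appear in the second local session.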