Linux as a Work Environment Desktop
Some of the typical Linux utilities come in very handy when developing a project, not just for the work involved in coding and testing, but also for simply making life a bit easier. For example, the commands find, cat, awk and egrep tend to be used a great deal for general system administration, for finding your way around your source files, and for writing small scripts that make a job easier. Without installing something like Cygwin for Windows, you would be hard-pressed to find similar “life-improving” utilities on Microsoft platforms.
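As a quick illustration, a pipeline like the following (the pattern and file extension are only examples) lists every Java source file under the current directory that contains a TODO marker:

find . -name '*.java' | xargs egrep -l 'TODO'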
A class of utility more directly related to code development is the “pretty printer”: a tool that tidies your code so that it conforms to a particular standard, such as the Sun Microsystems Java coding standard or your own company's coding standard. The indent utility is available on most Linux systems, and by specifying a range of options, you can control how the final output is formatted. At the moment, our project conforms to the Sun Microsystems coding standard. When I first went looking for indent options that would format Java code to that standard, I came across the Jindent utility instead. Jindent is written in Java, so it is completely cross-platform. By default, it formats to the Sun standard, and by creating your own configuration file, you can modify its output.
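Jindent's options are best taken from its own documentation, but for C sources, GNU indent does the same kind of job. A sketch, assuming a file named myprog.c, that reformats it in place to the Kernighan and Ritchie style using spaces instead of tabs:

indent -kr -nut myprog.c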
For backup purposes, we keep most of our development work on UNIX servers, while the majority of our client machines run Windows NT. The UNIX machines we work with are equipped with NFS and SAMBA, and by sharing out their drives, we can access those resources locally without having to log in to the remote machine. A Linux machine can mount both NFS and SAMBA drives that have been shared out. By keeping the same directory structure as on the remote machine, you reduce the need to produce machine-specific makefiles. For example, if a required jar library is located under /opt/FSUNjmq/lib/jms.jar on the remote machine, the makefiles you develop can run on both the remote UNIX machine and your local Linux machine without modification, simply by mounting the remote /opt/FSUNjmq at /opt/FSUNjmq on your local machine. Commands for sharing out remote drives are similar to the following. For NFS:
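# On the remote machine, add an entry like this to /etc/exports
# (host names and paths here are placeholders; the exact syntax
# varies by UNIX flavor):
/opt/FSUNjmq    localmachine(rw)

# On the local Linux machine, mount the export at the same path:
mkdir -p /opt/FSUNjmq
mount -t nfs remotehost:/opt/FSUNjmq /opt/FSUNjmq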
For SAMBA, edit the /etc/smb.conf file: copy one of the example shares, modify it to point to the correct directory, and make sure to give your user name access permission (a sample share is sketched below, after the discussion of rlogin).

It is not always possible to work solely off your local machine; there will be times when you need to log in to the remote machine to run various processes, such as when there isn't a port available for the application you wish to test against, or when you need to copy, move or access large amounts of data. In addition, logging in to the remote machine will give you better network performance. The rlogin command gives you easier access to remote machines than telnet. With rlogin, you can set up access permissions so that you are not required to enter a user name and password on every login. By creating a .rhosts file in your remote home directory and adding your local machine name to it, you will be able to drop into the remote machine easily. Just make sure that your .rhosts is readable and writable by the owner only, with no group or global access allowed; otherwise, your automatic authentication will not work.
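As illustrations, a minimal smb.conf share and a matching .rhosts setup might look like the following (the share name, path, host name and user name are all placeholders):

[devshare]
   comment = Shared development tree
   path = /opt/FSUNjmq
   valid users = username
   writable = yes

# On the remote machine, allow rlogin from your local machine
# without a password:
echo 'localmachine username' >> $HOME/.rhosts
chmod 600 $HOME/.rhosts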
One of the annoying things I found about logging in to remote machines is that you have to set up all your shell preferences a second time. This can be overcome by keeping a common profile file and making it available to both your local machine and your remote machines. Create a .commonProfile file in your remote home directory, and mount the remote home directory locally, so that in addition to your /home/username directory, you also have a /remote/username directory. In both your local and remote profiles, you can then source .commonProfile. Any changes you need to make can now be made once, and they will be reflected in both environments.
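A minimal sketch of how each profile might pick up the shared file, assuming Bourne-style shells and the paths above:

# On the remote machine, ~/.profile sources the file from the home directory:
. $HOME/.commonProfile

# On the local machine, ~/.profile sources it through the mounted remote home:
. /remote/username/.commonProfile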
One important point about remote access to your UNIX servers is that when setting up a user account for yourself on your local machine, you should set the same username, uid and gid that you have been assigned on the remote machine. This makes life much easier in terms of write, mount and login permissions. Some of the more experienced users may be familiar with NIS; this is a method for keeping a central repository of users and passwords across a number of UNIX machines. With a bit of research, and a suitable network configuration, it may be possible to set up your Linux machine to use NIS to allow other users to access your machine. This avoids the need to duplicate each user id and group id when you add a new user to your machine.
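For example, if the server assigned you uid 1234 and gid 100, the matching local account could be created like this (the names and numbers are placeholders):

groupadd -g 100 devgroup
useradd -u 1234 -g 100 -m username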
An extremely useful ability is being able to run X Window clients from the remote machine on your local machine, giving you access to the remote machine in a graphical environment. However, even though you can run remote applications in a window on your local X Window display, in some cases there are incompatibilities between what the remote application expects from a UNIX X Window system and what your local X Window server provides. A classic case is an X Window server running in 16-bit color when the remote application can run only in 8-bit color. Rather than having to shut down your X Window session and restart it in 8-bit color, you can either start up a second X Window session with 8-bit color locally, or run the remote window manager so that it displays on your local machine. The command for this is:
X :1 -query remotehost
The “:1” means that you want this X server to run as your second display (display numbers start at :0). This requires you to set the display variable on the remote machine to your second display:
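# On the remote machine (Bourne-style shell; localmachine is a placeholder
# for your local machine's host name):
DISPLAY=localmachine:1
export DISPLAY

Remote clients started from that shell will then appear on your local second display.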