Focus on Software
Training and certification: who needs them anyway? Well, many organizations, particularly larger ones, like certification a lot. Why? Because it tells them several things. First, it tells them the candidate is dedicated enough to the profession to get certified (with or without training). Second, it tells them the candidate actually knows something about what he/she claims to know; they don't have to take the candidate's word for it. Frankly, in many larger places, the folks who do the initial screening don't have a clue about operating systems or computers beyond pulling up a word processor or spreadsheet and using it. This should be obvious from their blind insistence that all correspondence be submitted in Word format; otherwise, they can't figure out how to read it.

Do I insist on certification? No. I actually like someone who has little or no training or experience, so I can teach things my way. Take it from someone who's been there: there are at least three ways to go about any task in Linux, and I happen to like my way. This debate is something the folks at LPI struggle with, too; I know because I volunteer many hours working with them. Ensuring that the questions are relevant, unambiguous and not biased toward any given distribution; that the correct answers are correct (grammatically and syntactically, as well as technically); and that the wrong answers are wrong (or at least more wrong than the right ones) is a time-consuming process. So, if you have a little time and even a little knowledge, you're invited to help devise and submit questions for consideration; you don't need to be an expert. While you're at it, why not take the exams? It won't hurt, and it might even get you a foot in a large and otherwise closed door.
This little utility won't make you an iptables expert, but it will help you create, view and edit iptables rules. Based on correspondence I've had, the most difficult part of iptables seems to be the concept of separate tables of chains, with rules placed according to where in packet processing they apply. The good part is that this tool is curses-based, not X-based. After all, X on a firewall isn't the best idea, although I recognize that under some circumstances it will happen anyway. Requires: cursel, objc, sh.
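To make the tables-of-chains idea concrete, here's a minimal sketch of my own (not part of the utility). Filtering decisions live in the filter table's INPUT/OUTPUT/FORWARD chains, while address rewriting lives in the nat table's PREROUTING/POSTROUTING chains. The wrapper function only echoes each command (a dry run), so you can inspect the table selection without touching a live firewall; the interface and port are illustrative.

```shell
# Dry-run wrapper: print each iptables command instead of executing it,
# so the table/chain structure can be inspected safely.
ipt() { echo "iptables $*"; }

# filter table: packet-acceptance decisions for traffic to this host
ipt -t filter -A INPUT -p tcp --dport 22 -j ACCEPT

# nat table: address rewriting as packets leave via eth0
ipt -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

Drop the echo (and run as root) to apply the rules for real; the point here is just that the same -A chain syntax lands in different tables.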
I don't personally know many folks who can write man pages. In fact, this is one area where nonprogrammers can help out. Perhaps you just want to improve the grammar or spelling of existing man pages, or add some comments of your own. This utility should help you do all of the above and more in a nice graphical environment. Requires: libgtk, libgdk, libgmodule, libglib, libdl, libXext, libX11, libm, glibc.
I remember that in chemistry class we had to draw chemical structures to visualize the bonding. I'm not sure I really learned anything from it, but it was required. Well, chemtool does all this better than I ever could. When your creation is done, you can export it to various formats, including PostScript and Xfig. Some examples and templates are included to get you started. Requires: libgtk, libgdk, libgmodule, libglib, libdl, libXext, libX11, libm, glibc.
Project Clock: http://members.optushome.com.au/starters/pclock/
This small, lightweight utility can be used to keep track of how much time you devote to various projects during the day. You can start it easily at login, then select the project to add time to as you go. Projects are simple to add, and an included report generator will show you what you need to know come billing time. Requires: tcl/tk, tix.
While not incredibly useful, this program is fun. After all, who doesn't like fractals? This program lets you view fractals, cycle colors and more. Requires: libgtk, libgdk, libgmodule, libglib, libdl, libXext, libX11, libm, libpng, libz, glibc.
MRTG Remote Data Collector: http://pandora.sytes.net/mrdc/
MRTG does one thing very well: graphing bandwidth usage. It doesn't track much else, though. To help, mrdc can collect and present other kinds of data for MRTG to graph. For example, mrdc can pass load data so you can watch a system's load over time. Or, it can graph physical memory versus virtual (swap) memory, or the number of running processes against the total. Requires: Perl, snmp on the system from which to collect data and MRTG.
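As a sketch of the underlying idea, MRTG can graph anything a script hands it in its documented four-line external-command format: two values, an uptime string and a target name. The script below is my own illustration (not part of mrdc, whose remote protocol differs); it feeds the 1- and 5-minute load averages, scaled by 100 because MRTG graphs integers.

```shell
# Emit system load in MRTG's four-line external-command format:
#   line 1: first value   (1-minute load average x 100)
#   line 2: second value  (5-minute load average x 100)
#   line 3: uptime string
#   line 4: target name
mrtg_load() {
  read l1 l5 rest < /proc/loadavg
  awk -v a="$l1" -v b="$l5" 'BEGIN { printf "%d\n%d\n", a * 100, b * 100 }'
  cut -d' ' -f1 /proc/uptime   # uptime, in seconds
  hostname                     # target name
}
mrtg_load
```

Point an MRTG target's Target directive at a script like this (backtick syntax) and it graphs load instead of octets.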
Input/Output Grapher: http://www.dynw.com/iog/
When MRTG is overkill, or you just don't want to configure anything that complex for a simple bandwidth monitor, IOG might be what you need. It uses bar graphs instead of MRTG's line graphs and is easier to set up and run. You will need to know the ifInOctets and ifOutOctets index numbers for your device, but a walk of the SNMP tree will show those quickly enough. Requires: Perl, snmp on the system to be monitored.
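For instance, a walk of the interface description table maps interface names to the index you'd plug into ifInOctets/ifOutOctets. The snippet below works on canned output so it runs without a live agent; in practice you'd generate the input with something like snmpwalk -v2c -c public <host> IF-MIB::ifDescr (community string and host are placeholders).

```shell
# Canned output from a hypothetical 'snmpwalk ... IF-MIB::ifDescr' run:
walk='IF-MIB::ifDescr.1 = STRING: lo
IF-MIB::ifDescr.2 = STRING: eth0'

# Pull the index for eth0; ifInOctets.<idx> and ifOutOctets.<idx> are
# then the counters a tool like IOG polls.
idx=$(printf '%s\n' "$walk" | awk -F'[. ]' '/eth0/ { print $2 }')
echo "eth0 is interface index $idx"   # -> eth0 is interface index 2
```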
If you are a realtor or know any realtors, this software will be of interest. It claims to be simple enough for a realtor to set up, and I imagine that means even technoneanderthal realtors. Someone will need to make adjustments to the index.php page, but beyond that, this is the simplest package to administer I've seen in a while. I wish realtors had had something like this set up the last time I was looking for a house in the States. If you're not in the US, you might need to make some adjustments (including translations), but that would be a trivial undertaking. Requires: web server w/ MySQL and PHP4, MySQL, web browser.
Until next month.
David A. Bandel (email@example.com) is a Linux/UNIX consultant currently living in the Republic of Panama. He is coauthor of Que Special Edition: Using Caldera OpenLinux.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful ones, such as a command that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
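That find-plus-grep combination can be sketched directly. The function below builds a throwaway /home-like tree in a temp directory (the user names and the "ERROR" string are invented for illustration), then uses find to locate every .log file and grep -l to print only the files containing the entry.

```shell
find_errors() {
  demo=$(mktemp -d)
  mkdir -p "$demo/home/alice" "$demo/home/bob"
  echo "ERROR: disk full"  > "$demo/home/alice/app.log"
  echo "all quiet today"   > "$demo/home/bob/app.log"
  # find selects the .log files; grep -l lists only the ones that match
  find "$demo/home" -name '*.log' -exec grep -l 'ERROR' {} +
  rm -rf "$demo"
}
find_errors   # prints only the path of alice's log
```

Against a real system you'd run the find line as-is with /home in place of the temp tree; swapping grep -l for plain grep shows the matching lines themselves.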
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to recognize when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
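For readers who haven't lived in it, cron's whole interface is the crontab: five time fields followed by a command. A hypothetical entry (the script path is made up) that runs a report at 2:30 every morning looks like this:

```
# min  hour  day-of-month  month  day-of-week  command
30     2     *             *      *            /usr/local/sbin/nightly-report.sh
```

Everything beyond that (dependencies between jobs, retries, scheduling across hosts) is where the "is it enough?" question comes in.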
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Google's SwiftShader Released
- Interview with Patrick Volkerding
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Tech Tip: Really Simple HTTP Server with Python
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide