Focus on Software
The time is upon us once again. The “feature freeze” for 2.3 was just announced. By the time you read this, it will be down to last-minute testing and making sure all is well before the final release. What's new, other than more device drivers? I don't yet know everything that's new, but I do know that once again, I'll need to learn new firewalling software. For 2.0, it was ipfwadm: not bad, but no fine-grained control. For 2.2, it is ipchains: I liked the control, but heard many complaints about its complexity, and I found very few configuration tools for this beast. So I'm off to download and compile the latest kernel to test netfilter, which will be 2.4's packet-mangling software. Here's hoping for configurability and simplicity in one package.
This aptly named software is, you guessed it, a firewall configuration program. Basically, mason learns about the traffic passing through your gateway (soon to be your firewall) and records it so you can build a firewall brick by brick (or chain by chain, as it were). Each observed connection is recorded as a rule line that can be used by mason itself or by the ipchains-restore script. When the software fires up, it checks what type of system you have: if it is a 2.0.x system, it will use ipfwadm; if 2.2.x, it will use ipchains. The new netfilter rules should not be significantly different from ipchains, and support will be added before the 2.4.x release if it hasn't been already (some of the code was in place but disabled in the version I tested). The software does require you to review the rules, so you need to be able to read and understand them to decide which ones to keep. It requires bash, ipchains or ipfwadm, and a kernel built with firewall support.
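To give a flavor of what you will be reviewing, here is a hypothetical sketch of the kind of ipchains rule lines a tool like mason might record; the addresses and ports are illustrative, not output from mason itself:

```
# Allow outbound web traffic from the internal LAN through the gateway
ipchains -A forward -p tcp -s 192.168.1.0/24 -d 0.0.0.0/0 80 -j ACCEPT
# Allow incoming mail to the gateway itself
ipchains -A input -p tcp -s 0.0.0.0/0 -d 192.168.1.1 25 -j ACCEPT
```

Reading a rule means checking the chain (`input`, `forward` or `output`), protocol, source and destination (with optional port) and the target (`ACCEPT`, `DENY` or `REJECT`) before deciding whether it belongs in your firewall.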
This command-line system information utility will fill pages. si will tell most folk more than they ever cared to know about their system: what resources (IRQs, DMAs, etc.) are being used, what programs are running, how much memory they're using, and so on. The same information can be obtained from other programs, but it would take several of them. In fact, I'm not sure what more information you could get or want. While I haven't verified it, I suspect this program is reading a good part of the /proc tree to return all this information; at least, it matches the information I know to be available there, just not as easily readable in raw /proc. It requires glibc.
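As a rough sketch of the sort of /proc reads a tool like si likely performs (the file names are standard Linux /proc entries, but the exact set si reads is my guess, not documented):

```shell
# Pull a few of the facts si reports straight from /proc
grep MemTotal /proc/meminfo   # total physical memory
cat /proc/loadavg             # load averages and running/total processes
cat /proc/interrupts          # IRQ assignments per device
```

Everything here is plain text, which is exactly why a formatter like si is handy: it gathers dozens of these files into one readable report.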
Going from information overload to almost underwhelming by comparison, this utility will provide one page of information nicely formatted in HTML—great for putting something up on a web page. I looked, and while it had a fair amount of information for only one page's worth, it was innocuous enough. I would feel safe putting this on a public web page, whereas the utility above is more information than even a wannabe cracker would want (or need). It requires Perl.
DNS sleuth: atrey.karlin.mff.cuni.cz/~mj/linux.html
This little jewel is a DNS checker. With both a command-line interface and a web interface, sleuth will check whether the configuration of your DNS complies with the RFCs. It will give you warnings for some things and errors when it sees something completely wrong. The best part is it will tell you what is wrong and reference the RFC so you can see for yourself why it's bad and how to fix it. No more guessing if it's correct or not—fast and thorough. It requires Perl and the Net::DNS Perl module.
Have a large LAN? Thankfully, I don't anymore. But if I did, particularly one that spans buildings (much less floors) and covers two or more /24 (class C) networks, I'd be using something like this database to keep it sorted. It really is overkill for a small network, though. I think I'd add a few comment fields to hold a contact name and a number or two for when problems arise. It makes a nice complement to a resource manager like MOT (Ministry of Truth) or IRM (IT Resource Manager). It requires Perl, the CGI, DBI and DBD modules, MySQL and a web server.
yafc is yet another FTP client. You may be thinking, “I already have both graphical and command-line FTP tools, and ncftp (a command-line client to which this is a competitor) fills the latter niche nicely.” However, the nice thing about competition is that the newcomer has to offer something that works better than the incumbent, or why bother? Well, this one does—at least for me. Side by side, I found yafc easier to use (important even to a command-line junkie like myself) and better designed. It has a few parameters you can set, such as caching behavior. It requires libncurses, libreadline and glibc.
It's been a while since I looked at any kind of HTML markup editor, and I don't remember them being all that friendly or easy to use, so my HTML editor of choice has always been vi. Now, you've probably guessed I'm not much of a webmaster (it's true, I'm not); I'm into substance over form. About the only thing I didn't see in august, but would like to, is some markup selections for PHP. It requires Tcl/Tk.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
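The find-plus-grep combination described above can be written as a one-liner; the directory and search string here are illustrative:

```shell
# List every .log file under /home whose contents mention "error".
# find locates the files; grep -l prints only the names of files that match.
find /home -name '*.log' -exec grep -l 'error' {} +
```

The `-exec ... {} +` form hands find's results to grep in batches, which is the "stringing tools together" idea in miniature: neither tool knows about the other, yet together they answer a question neither could alone.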
Cron traditionally has been considered another such a tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
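For reference, traditional cron scheduling is just a line in a crontab; the script path below is hypothetical:

```
# min hour day-of-month month day-of-week  command
30   2    *            *     *            /usr/local/sbin/rotate-logs.sh
```

This runs the script every night at 02:30. The simplicity is cron's strength, and also the limitation the webinar addresses: there is no built-in notion of job dependencies, retries or cross-machine coordination.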
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Doing for User Space What We Did for Kernel Space
- Tech Tip: Really Simple HTTP Server with Python
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
- Rogue Wave Software's Zend Server
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, nor does it consider the advantages of real processing power, high availability and the ability to multithread like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide