Years ago, in the days before the Web was popular, the ultimate machine was universally equated with servers. These systems had the hottest, fastest CPU(s) one could afford, obscene amounts of memory (often as much as 64MB of RAM) and huge, fast SCSI hard drives (usually two or three to spread the load), and they were never commodity (Intel) systems; they were DEC, Sun SPARC, IBM RS or HP UNIX servers. Back in those days, the big data centers belonged to Wang and others. Today, most server systems I install are, or could be, low-end systems: 1GHz processors, 128MB of RAM and 18GB IDE hard drives that absolutely fly (at least compared to the clunky MFM or RLL drives of years gone by), and they're still overkill. Now, more than ever, you're likely to find the ultimate machine on the boss' desktop. After all, we can't have him or her waiting ten seconds for Outlook or Netscape to open, and how will he or she watch CNN while working in the bloated, monstrous word processor (with 95% of its “features” totally unknown to most users)? Today's graphics cards have more RAM than my first disk drives had storage space. And computing is only in its infancy. In a few years we'll look back on today and shake our heads, wondering how we ever got along with such slow, primitive systems.
I find it hard to believe I've overlooked reviewing this particular package, because I use it all the time. (All programs in this column are built from source.) checkinstall is run in place of make install when installing packages from source, and it builds (albeit crudely) RPM, DEB and TGZ (Slackware) packages. This will help control the cruft on your system as you install and remove source packages. I even use it on my Linux From Scratch systems (I install RPM and checkinstall early on). This is a must-have/must-use for all systems—production, test, whatever. Requires: bash, glibc.
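The workflow described above can be sketched as follows. This is the conventional checkinstall usage (run interactively, it prompts for package details); consult your distribution's documentation for the exact options:

```shell
# Build from source as usual...
./configure
make

# ...then, instead of "make install", have checkinstall
# perform the install and wrap it in a native package
# (RPM, DEB or TGZ, depending on your system):
checkinstall

# Because a real package was created, the package manager
# can later remove it cleanly, for example:
#   rpm -e <packagename>    or    dpkg -r <packagename>
```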
CRM allows you to track incidents (entities), assign them to folks for resolution, assign due dates, priorities and so on, then check up on all the activity. If you're running a service-oriented business, this particular application will be worth investigating. You even can set alarms on projects you don't want to extend past the due date. Easy to install and use. Requires: MySQL, Apache with PHP and MySQL, web browser.
Many years ago I used to sit around with some friends for weekends at a time and play games like NATO Division Commander. I haven't done that in a long time, but this game brought back memories. I don't have the time now to sit around all weekend playing war games, and even if I did, my significant other would likely object. But I can play LGames anytime, even on sleepless nights, as long as I keep the volume down. Requires: libSDL_mixer, libSDL, libpthread, glibc, libm, libdl, libvorbisfile, libvorbis, libogg, libsmpeg, libartsc, libX11, libXext.
Crossword Generator www.ldc.usb.ve/~96-28234/crossword-0.8.tar.gz (download only)
If you like crossword puzzles, this program will provide you with all the puzzles you could want. You create the board and provide a list of words, and the program does the rest. What's missing is a tie-in to a thesaurus, so the clues could provide synonyms or definitions rather than the words themselves. Documentation is provided in Spanish (as are the dictionaries, etc.), but that's easily remedied. Requires: libstdc++, libm, glibc, TeX, LaTeX.
If there's one thing users like, it's simple, easy-to-use tools. But above all, they like graphical tools. The find++ utility will search your hard drive for words or phrases contained either in the filename or inside the file. Once a document is found, if the file type has been associated with a program, you can launch that program and open the file. It doesn't get much easier than this. Requires: libgtk, libgdk, libgmodule, libglib, libdl, libXext, libX11, libm, glibc.
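Under the hood, this is the same job the standard find and grep utilities do from the command line. A minimal non-graphical equivalent (the directory and search terms here are made up for illustration):

```shell
# Create a small demo tree to search:
mkdir -p /tmp/fpp-demo
echo "quarterly budget figures" > /tmp/fpp-demo/report.txt
echo "meeting notes" > /tmp/fpp-demo/notes.txt

# Match on the filename, as find++ does:
find /tmp/fpp-demo -name '*report*'
# → /tmp/fpp-demo/report.txt

# Match on words inside the file (-r recurses,
# -l prints only the names of matching files):
grep -rl "budget" /tmp/fpp-demo
# → /tmp/fpp-demo/report.txt
```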
Need to find out where a particular domain name entry is coming from? This will trace the authoritative information back to its source. The program has a lot of options for controlling how the query is run. Requires: glibc.
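For comparison, the stock BIND dig utility performs the same kind of authoritative trace with its +trace option, following the delegation chain from the root servers down; the domain below is the reserved example domain, not a real target:

```shell
# Walk the delegation chain from the root name servers
# down to the servers authoritative for the zone:
dig +trace example.org NS
```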
ippl (Internet Protocol Logger) pltplp.net/ippl
This month's pick from three years ago wasn't easy, as a number of good choices are still available, but ippl is probably the most useful. If you need to keep an eye on the types of traffic you have, ippl will do that well, and it's somewhat improved since three years ago. Probably its best feature is that you can configure it easily to log only those protocols in which you're interested. Its drawback is a lack of support for protocols other than the standard TCP, UDP and ICMP, but few folks would need more anyway. Requires: libthread, glibc.
Until next month.
David A. Bandel (firstname.lastname@example.org) is a Linux/UNIX consultant currently living in the Republic of Panama. He is coauthor of Que Special Edition: Using Caldera OpenLinux.