Best of Technical Support
I'm building a series of servers for a friend of mine, and they must stay on 24 hours a day, 7 days a week, 52 weeks a year (well, if they don't burn, obviously).
Is there a way to automatically recheck a mounted filesystem? —Franco Favento dei Favento da Trieste, firstname.lastname@example.org
Switch to reiserfs, a journaling filesystem for Linux. Even without it, I have had Linux servers running for years without filesystem glitches, but reiserfs is more reliable. If you really want to do a filesystem check, boot from a very minimal root filesystem (better still, a ramdisk) that holds only the executables needed to run a minimal system. That will allow you to unmount the filesystems you actually need to check. —Chad Robinson, email@example.com
Only a read-only check can be done on a mounted filesystem. It cannot be repaired:
fsck.ext2 -fn some-device
If it detects errors, you can plan downtime to repair the filesystem. —Keith Trollope, firstname.lastname@example.org
I am happy to report that my employers are moving to Linux in a big way for seismic data crunching, and they've given me a nice symmetric multipenguin. However, I routinely use files greater than 2GB. I've got reiserfs and a 2.4.2 kernel on the RH 7.0 installation now, but still can't deal with files greater than 2GB. —Adam Cherrett, email@example.com
You should look at http://www.suse.de/~aj/linux_lfs.html. In addition to kernel and glibc support, you need to do one of the following in your programs:

1) Compile your programs with gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE. This forces all file access calls to use the 64-bit variants. Several types change also, e.g., off_t becomes off64_t. It's therefore important to always use the correct types and not to use, for example, int instead of off_t.

2) Define _LARGEFILE_SOURCE and _LARGEFILE64_SOURCE. With these defines you can use the LFS functions like open64 directly.

3) Use the O_LARGEFILE flag with open to operate on large files. —Marc Merlin, firstname.lastname@example.org
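To make option 1 concrete, here is a minimal sketch (an editorial illustration, not part of the original answer; the filename bigfile.dat and the 3GB offset are made up). Compiled as shown in the first comment, the ordinary open(), lseek() and write() calls can address offsets beyond 2GB because off_t becomes 64 bits wide:

/* Sketch only: build with
 *   gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -o lfs-demo lfs-demo.c
 */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int fd;
    off_t offset;

    /* bigfile.dat is an arbitrary name used only for this example */
    fd = open("bigfile.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Seek to the 3GB mark; with _FILE_OFFSET_BITS=64, off_t can hold it */
    offset = (off_t)3 * 1024 * 1024 * 1024;
    if (lseek(fd, offset, SEEK_SET) == (off_t)-1) {
        perror("lseek");
        return 1;
    }

    /* Writing a single byte here leaves a sparse file larger than 2GB */
    if (write(fd, "x", 1) != 1) {
        perror("write");
        return 1;
    }

    printf("wrote 1 byte at offset %lld\n", (long long)offset);
    close(fd);
    return 0;
}

Built without those two defines on a 32-bit system, off_t stays 32 bits and the same lseek cannot represent that offset.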
I have recently installed Linux on my PC with 256MB RAM. I wanted to use it as my experimental system with an Oracle server and for some Java development.
To my surprise, the system was starved for resources, especially memory. So I booted it fresh, and without an X session I checked memory usage with free. It showed that about 110MB was used. How can I find out how much memory each process is using? —Dhimant Patel, email@example.com
Be sure to note the “-/+ buffers” line when running the free command. Linux will automatically use available RAM to buffer I/O requests, and this memory will be freed for program use as necessary. Your primary concern should be the “used” indicator for your swap space; it should be small, less than a few megabytes. You can use the ps aux | less command to examine the memory usage of each running process. Only the resident set size (RSS) value should be important here, but be aware that the memory indicated is not necessarily used exclusively by each process: ps shows all the memory a process uses, including shared libraries, even though those are loaded only once for all processes that need them. —Chad Robinson, firstname.lastname@example.org
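One extra command that can help (not from the original answer, and assuming the ps from the procps package, which accepts a --sort option) lists the processes with the largest resident sets first:

ps aux --sort=-rss | less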
I compiled kernel 2.2.16 from SuSE using make zdisk. When I boot my system, the progress dots go to the end of the screen, and it bombs out with the error message “out of memory”. —Eskinder Mesfin, email@example.com
You need to compile with make bzdisk to solve that problem; it shuffles memory around in different ways to make bigger kernels work. —Marc Merlin, firstname.lastname@example.org
I made some backup tapes using ftape (HP/Colorado Travan 1). I could read them fine on Red Hat 5.2, but on later releases (e.g., 6.2, 7.0) it acts as if nothing is on the tape. —Jim Haynes, email@example.com
Newer versions of ftape use a fixed block size on the tape rather than the variable block size used previously. To switch back to a variable block size, type:
mt -d /dev/qft0 setblk 0
Information on the old and new versions can be found on the ftape home page, http://www.instmath.rwth-aachen.de/~heine/ftape/. —Keith Trollope, firstname.lastname@example.org
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
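For example, a quick sketch of that combination (the /home path comes from the description above; "some entry" is just a placeholder pattern) could be:

find /home -name "*.log" -print0 | xargs -0 grep "some entry"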
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide