Best of Technical Support
I'm building a series of servers for a friend of mine, and they must stay on 24 hours a day, 7 days a week, 52 weeks a year (well, unless they burn out, obviously).
Is there a way to automatically recheck a mounted filesystem? —Franco Favento dei Favento da Trieste, email@example.com
Switch to reiserfs, a journaling filesystem for Linux. Even without it, I have had Linux servers running for years without filesystem glitches, but reiserfs is more reliable. If you really do want to run a filesystem check, boot from a very minimal root filesystem (better still, a ramdisk) that holds only the executables needed to run a minimal system. That lets you unmount, and therefore safely check, the filesystems you actually care about. —Chad Robinson, firstname.lastname@example.org
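If you do stay with ext2 on some partitions, you can at least make sure a check happens automatically at boot every so often. A minimal sketch using tune2fs, with /dev/hda1 standing in for your device:
# Force a boot-time fsck every 20 mounts or every 30 days,
# whichever limit is reached first
tune2fs -c 20 /dev/hda1
tune2fs -i 30d /dev/hda1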
Only a read-only check can be done on a mounted filesystem; it cannot be repaired while it is mounted:
fsck.ext2 -fn some-device
If it detects errors, you can plan downtime to repair the filesystem. —Keith Trollope, email@example.com
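To automate that read-only check, one approach is a cron job that mails you the log whenever fsck exits nonzero (i.e., finds problems). This is only a sketch; /dev/hda2 and the recipient are placeholders:
# Nightly read-only check; mail the log to root if errors turn up
30 3 * * * fsck.ext2 -fn /dev/hda2 > /tmp/fsck.log 2>&1 || mail -s "fsck found errors" root < /tmp/fsck.log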
I am happy to report that my employers are moving to Linux in a big way for seismic data crunching, and they've given me a nice symmetric multipenguin. However, I routinely use files greater than 2GB. I've got reiserfs and a 2.4.2 kernel on the RH 7.0 installation now, but still can't deal with files greater than 2GB. —Adam Cherrett, firstname.lastname@example.org
You should look at http://www.suse.de/~aj/linux_lfs.html. In addition to kernel and glibc support, you need to do one of the following in your programs:
1) Compile your programs with gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE. This forces all file access calls to use the 64-bit variants. Several types change as well, e.g., off_t becomes off64_t, so it is important always to use the correct types and not to use, for example, int instead of off_t.
2) Define _LARGEFILE_SOURCE and _LARGEFILE64_SOURCE. With these defines you can use LFS functions such as open64 directly.
3) Use the O_LARGEFILE flag with open to operate on large files. —Marc Merlin, email@example.com
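As a quick sanity check of both sides, a sketch like the following compiles a test program of yours (testlfs.c here is hypothetical) with the LFS flags, then uses dd to confirm the filesystem itself accepts files past 2GB (this assumes your dd binary was itself built with large-file support):
# Build with 64-bit file offsets (option 1 above)
gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -o testlfs testlfs.c
# Seek 3GB into a sparse file and write one block to confirm
# the kernel and filesystem handle >2GB files at all
dd if=/dev/zero of=bigfile bs=1M seek=3072 count=1
ls -l bigfile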
I have recently installed Linux on my PC with 256MB RAM. I wanted to use it as my experimental system with an Oracle server and for some Java development.
To my surprise, the system was starving for resources, especially memory. So I rebooted it and, without starting an X session, checked memory usage with free. It showed about 110MB in use. How do I find out which processes are using how much memory? —Dhimant Patel, firstname.lastname@example.org
Be sure to note the “-/+ buffers” line when running the free command. Linux automatically uses available RAM to buffer I/O requests, and this memory is freed for program use as necessary. Your primary concern should be the “used” figure for your swap space; it should be small, less than a few megabytes. You can use ps aux | less to examine the memory usage of each running process. The resident set size (RSS) value is the important one here, but be aware that the memory it reports is not necessarily used exclusively by each process: ps counts all memory a process touches, including shared libraries, even though those are loaded only once for all the processes that need them. —Chad Robinson, email@example.com
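If you would rather see the worst offenders at a glance than page through the whole list, the ps from procps (standard on most distributions) can sort for you; a quick sketch:
# Show the ten largest processes by resident set size
# (the first line of output is the column header)
ps aux --sort=-rss | head -11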
I compiled kernel 2.2.16 from SuSE using make zdisk. When I boot my system, the row of progress dots reaches the end of the screen and then it bombs out with the error message “out of memory”. —Eskinder Mesfin, firstname.lastname@example.org
You need to compile with make bzdisk to solve that problem; it shuffles memory around in different ways to make bigger kernels work. —Marc Merlin, email@example.com
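For reference, a typical 2.2-series build sequence looks something like this sketch (choose whichever config target you prefer):
cd /usr/src/linux
make menuconfig   # or make config / make xconfig
make dep          # required on 2.2.x kernels before building
make bzdisk       # build a big compressed kernel and write it to the floppy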
I made some backup tapes using ftape (HP/Colorado Travan 1). I could read them fine on Red Hat 5.2, but on later releases (e.g., 6.2, 7.0) it acts as if nothing is on the tape. —Jim Haynes, firstname.lastname@example.org
Newer versions of ftape use a fixed block size on the tape rather than the variable block size used previously. To switch back to a variable block size, type:
mt -f /dev/qft0 setblk 0
Information on the old and new versions can be found on the ftape home page, http://www.instmath.rwth-aachen.de/~heine/ftape/. —Keith Trollope, email@example.com
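Putting it together, here is a sketch for listing an old backup, assuming it is a tar archive on the first ftape drive (/dev/qft0):
mt -f /dev/qft0 rewind    # start at the beginning of the tape
mt -f /dev/qft0 setblk 0  # restore the variable block size
tar tvf /dev/qft0         # list the archive contents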