Running Linux with Broken Memory
When you make a patch like this, and certainly when you're “SlashDotted” over it, you receive a lot of interesting e-mail, some with ideas even weirder than those I thought of.
One proposal I've often heard is to perform a memory test at boot time. Although this may seem interesting, it is not practical. memtest86 does an excellent job, but it requires several hours. You don't want this at boot time, and you also don't want to have a bad memory test (we already have that in most PC BIOSes, anyway, and BadRAM modules usually pass that test). The option of making a LILO boot alternative for memtest86, or of having a memtest86 boot floppy, has always been my preference.
I have had some reports of motherboard faults that corrupted a particular physical memory address, perhaps due to a short circuit with an on-board peripheral. One reporter told me he had installed four 512M modules in his machine to reduce the chance of hitting the erroneous address, but his problems were resolved entirely by throwing out a single memory page with the BadRAM patch. Discarding 4K out of 2G made his machine work flawlessly.
I have also talked with someone who owns a PC from a large PC-at-home project. The rather proprietary architecture of his Compaq Deskpro offers no option to switch off the 15M-16M memory hole needed for some ISA cards. So, expanding memory to 24M did not work on Linux 2.2, because setting mem=24M makes the kernel treat the region 15M-16M as ordinary RAM when it really is, you guessed it, a memory hole. But after adding badram=0x00f00000,0xfff00000 to inform the kernel that the hole should be treated as BadRAM, he got his additional memory going and was ready to add even more.
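The badram= parameter above is an address/mask pair: a page is excluded when its address matches the fault address on every bit the mask selects. Here is a small sketch of that matching rule (my own toy model, not the kernel's code), applied to the Deskpro pair:

```python
PAGE_SIZE = 4096

def badram_pages(fault, mask, mem_bytes):
    """Enumerate page addresses matched by a BadRAM fault/mask pair.

    A page at address a is considered bad when (a & mask) == (fault & mask),
    mirroring the pattern-matching idea behind the badram= parameter.
    """
    return [a for a in range(0, mem_bytes, PAGE_SIZE)
            if (a & mask) == (fault & mask)]

# The pair from the Deskpro example, on a 24M machine: the mask keeps the
# top 12 address bits, so exactly the 15M-16M megabyte is marked bad.
pages = badram_pages(0x00F00000, 0xFFF00000, 24 * 1024 * 1024)
print(hex(pages[0]), hex(pages[-1]), len(pages))  # 0xf00000 0xfff000 256
```

One pair can thus describe anything from a single 4K page (mask 0xFFFFF000) up to a whole repeating pattern of faulty pages across the module.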
Finally, I received romantic responses from people who had worked with older systems that detected memory faults (using either parity or ECC schemes) and assigned a lower level of trust to faulty pages. While a page was “under evaluation”, it would, at most, be used to load program code, the idea being that program code is a verbatim copy of what is on the hard disk and can simply be reloaded if the error strikes again. If the page worked well for a while, the error apparently had been spurious, and the page would be upgraded to “trustworthy” again. However, if program storage also failed, the page would be taken out of use. This scheme would be very interesting to support, but it would take up CPU cycles at runtime and, therefore, should be seen as a very fancy feature.
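The trust scheme those readers described is essentially a small state machine per page. This sketch is my own reconstruction with invented state names, not code from any of those systems:

```python
# Hypothetical sketch of the per-page trust levels described above.
# State names and policy are invented for illustration.
TRUSTWORTHY = "trustworthy"          # any use allowed
UNDER_EVALUATION = "under-evaluation"  # only restorable program code
OUT_OF_USE = "out-of-use"            # page retired

class PageTrust:
    def __init__(self):
        self.state = TRUSTWORTHY

    def on_fault(self):
        # A detected fault (parity/ECC) demotes the page one level.
        if self.state == TRUSTWORTHY:
            self.state = UNDER_EVALUATION
        else:
            self.state = OUT_OF_USE  # second strike: retire the page

    def on_clean_interval(self):
        # A while without faults suggests the error was spurious.
        if self.state == UNDER_EVALUATION:
            self.state = TRUSTWORTHY
```

A page that faults once is restricted to program code; a clean spell promotes it back, while a second fault retires it for good, which is exactly the runtime bookkeeping that makes the feature costly.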
There are a few ways to extend BadRAM's efficiency. It should certainly be possible to exploit ECC modules, both those with recoverable and those with unrecoverable errors. Unfortunately, since I lack both types of module, this is beyond my current ability. It would also burn CPU cycles and thus falls into the category of fancy features.
Another option would be to exploit the slab allocator in the kernel. Slabs are small, uniform, reusable memory blocks that the kernel allocates in arrays spanning, as closely as possible, an integer number of memory pages. The error-address information could be exploited in more detail by handing BadRAM pages to the slab allocator and never placing a slab over an error address. Ideally, this could reduce the memory loss to absolutely zero. In practice, however, it would not improve much on standard BadRAM performance, because an average system does not use many slab pages. For this reason, I doubt that the additional CPU overhead and coding effort would be worthwhile.
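To make the slab idea concrete, here is a toy model of placing fixed-size objects in a page while skipping any slot that overlaps a known-bad byte (my own illustration, not the kernel's slab code):

```python
def place_slabs(page_size, obj_size, bad_offsets):
    """Return object offsets within a page, skipping any slot whose byte
    range would cover a known-bad offset (toy model of the idea above)."""
    placed = []
    for off in range(0, page_size - obj_size + 1, obj_size):
        if any(off <= bad < off + obj_size for bad in bad_offsets):
            continue  # this slot would sit on the faulty byte
        placed.append(off)
    return placed

# A 4K page of 256-byte objects with one bad byte at offset 0x123:
# only the one slot covering 0x100-0x1FF is lost, 15 of 16 remain usable.
print(len(place_slabs(4096, 256, [0x123])))  # 15
```

The arithmetic shows why the gain is modest: even in the best case, each bad byte reclaims all but one object's worth of an otherwise discarded page, and only for the few pages the slab caches actually need.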
It is my sincere hope that memory marketing companies will pick up this idea and start to sell (cheap) memory modules built from broken memory. I propose a scheme to classify such modules by a logarithmic degree of “badness” as part of the in-kernel documentation for the BadRAM patch.
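As a rough illustration of what a logarithmic classification might look like (this is my guess at the flavor of such a scheme, not the one in the BadRAM documentation), one class per decade of bad-page fraction would work like this:

```python
import math

def badness_class(bad_pages, total_pages):
    """Hypothetical 'badness' grade: one class per decade of bad-page
    fraction, so class 5 means roughly one bad page per hundred thousand.
    Invented for illustration only."""
    if bad_pages == 0:
        return math.inf  # a flawless module is off the scale
    return math.floor(-math.log10(bad_pages / total_pages))

# One bad 4K page in a 2G module (524288 pages) would be class 5.
print(badness_class(1, 524288))  # 5
```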
All that remains now is to wish you good luck with your chase for broken memory. Perhaps some befriended user of a less mature operating system could spare it.