You installed a new Linux system but forgot to set aside enough swap space for your needs. Do you need to repartition and reinstall? No: the swap utilities on Linux let you create an ordinary file and use it as swap space.
The trick is to create a file and then tell the swapon program to use it. Here's how to create, for example, a 64 MB swap file on your root partition (of course, make sure you have at least 64 MB free). The # below denotes a root prompt.
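If you want to double-check that free space first, a quick df works (a generic sanity check, not part of the original article):
# df -h /
With enough room confirmed, create the file: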
# dd if=/dev/zero of=/swapfile bs=1024 count=65536
This makes a 64 MB (about 67 million bytes) file on your hard drive. You now need to initialize it as swap:
# mkswap /swapfile 65536
(On recent versions of mkswap, the size argument is optional and deprecated; # mkswap /swapfile will set up the whole file.)
And you can then add it to your swap pool:
# swapon /swapfile
With that, you have 64 MB of swap added. Don't forget to add the swapon command to your startup files so it runs again at each reboot.
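The article doesn't show what that startup entry looks like. One common approach is a line in /etc/fstab, so the swap file is activated automatically at boot; the sketch below also adds a chmod step, a widely recommended precaution that isn't in the original:
# chmod 600 /swapfile
# echo '/swapfile none swap sw 0 0' >> /etc/fstab
# swapon -a        # activate every swap area listed in /etc/fstab
# free -m          # confirm the extra 64 MB of swap shows up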
Read more at http://www.lynuxstuff.com/lynux/index.php?option=com_content&view=article&id=60&Itemid=66