RAID0 Implementation Under Linux
Now you're ready to actually create a RAID0 device. The compilation created several tools for the task: mdadd, mdrun and mdstop. mdadd is used to add block devices to an md device. If you want to use sda1, sdb1 and sdc1, you issue the command:
/sbin/mdadd /dev/md0 /dev/sda1 /dev/sdb1 \
  /dev/sdc1
This command adds sda1, sdb1 and sdc1 to md0. This same result can also be accomplished by giving these commands:
/sbin/mdadd /dev/md0 /dev/sda1
/sbin/mdadd /dev/md0 /dev/sdb1
/sbin/mdadd /dev/md0 /dev/sdc1
Remember that the order in which the devices are added is significant. If you change the order, any data previously written will be lost. I recommend adding the devices in what seems like a logical order and then sticking to it.
Now we must start the device. mdrun has the following command syntax:
/sbin/mdrun -px /dev/mdy
where x indicates the mode: l for linear, 0 for RAID0 and 1 for RAID1. To start the device we just made, the command would be:
/sbin/mdrun -p0 /dev/md0
When using RAID devices, another option you can use is -cnk to specify chunk size, where n is the chunk size in KB (n must be a power of two). For example, -c8k indicates an 8KB chunk size. The default value is your kernel's PAGE_SIZE. The best value for chunk size is the average request size, so chances are good that two requests will write to different physical disks. If you plan to use the md device for swap space, stick with the default.
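Because mdrun rejects chunk sizes that are not powers of two, a quick sanity check before invoking it can save a failed run. The following is a hypothetical helper sketch, not part of the md tools; the mdrun line is shown commented out since it requires a configured md device:

```shell
# Check that a chunk size (in KB) is a power of two before passing
# it to mdrun. A power of two has exactly one bit set, so
# n & (n - 1) is zero only for valid values.
chunk=8
if [ "$chunk" -gt 0 ] && [ $(( chunk & (chunk - 1) )) -eq 0 ]; then
    echo "chunk size ${chunk}k is a power of two"
    # /sbin/mdrun -p0 -c${chunk}k /dev/md0
else
    echo "chunk size ${chunk}k is invalid" >&2
fi
```

Running this with chunk=6 would take the error branch, which is exactly the case mdrun would refuse.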
Once the device is running, you can create a file system and mount it. For example:
/sbin/mkfs.ext2 /dev/md0
mount /dev/md0 /var/spool/news
This will create an ext2 file system and then mount it as the news spool. Your RAID0 device is now ready for data. To check its status, type:
cat /proc/mdstat
and receive the following output:
Personalities : [2 raid0]
read_ahead 120 sectors
md0 : active raid0 sda1 sdb1 sdc1 168588 blocks 4k chunks
md1 : inactive
md2 : inactive
md3 : inactive
This report tells you which modes are supported, the current read_ahead value and the state of each md device: its mode, physical parts, total size and chunk size.
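If you ever want to script against this status report, for instance to alert when a device drops out, the active devices can be pulled out with awk. A minimal sketch, run here against a saved copy of the sample output above; on a live system you would read /proc/mdstat directly:

```shell
# Save a copy of the sample /proc/mdstat output for demonstration.
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [2 raid0]
read_ahead 120 sectors
md0 : active raid0 sda1 sdb1 sdc1 168588 blocks 4k chunks
md1 : inactive
md2 : inactive
md3 : inactive
EOF
# Print the name of every md device reported as active.
awk '/: active/ {print $1}' /tmp/mdstat.sample
```

For the sample above, this prints md0, the only active device.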
At this point we have our RAID device running and mounted; as soon as the machine is rebooted, we will have to rerun mdadd, mdrun and mount. All of this can easily be added to your rc.local file, but there is a better way. mdcreate automatically creates an /etc/mdtab file. The mdtab file serves a function similar to the /etc/fstab file, informing the system of the component devices, modes and mount points. The syntax is:
mdcreate [-cnk] mode md_dev dev0 dev1 ...
To create an mdtab file for our example device we would use:
/sbin/mdcreate raid0 /dev/md0 /dev/sda1 \
  /dev/sdb1 /dev/sdc1
cat /etc/mdtab
# mdtab entry for /dev/md0:
/dev/md0 raid0,4k,0,fe8a9ffb /dev/sda1 /dev/sdb1 /dev/sdc1
With this file in place, we can reduce the mdadd command to mdadd -a, or mdadd -ar to automatically add the devices and run them. This also ensures that the devices will always be added in the correct order.
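With /etc/mdtab in place, the boot-time setup collapses to two lines. Here is a sketch of what the rc.local addition might look like, assuming the mount point from our example; this is a boot-script fragment, so the md tools and the mdtab file must already be on the system:

```shell
# rc.local fragment: add and run every md device listed in
# /etc/mdtab, then mount the array at its usual mount point.
/sbin/mdadd -ar
mount /dev/md0 /var/spool/news
```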
If there is ever a need to stop the device, first unmount it and then use mdstop. mdstop will free the physical devices and flush the buffers. For our example device, we would first stop the news server if it was running.
Then, we could unmount it using:
umount /var/spool/news
/sbin/mdstop /dev/md0
md0 is now inactive, and the physical partitions can be used elsewhere. Remember, if the device is stopped, none of the data that was written to the md device is accessible.
With md, the implementation and management of RAID devices is made easy. As development continues, we will see RAID1 and the tools necessary for mirror management and recovery. To stay current on the development process, join the linux-raid mailing list. To subscribe, send an email to Majordomo@vger.rutgers.edu with a one-line body that says:
subscribe linux-raid
Be sure to look at the documentation that comes with the md package. It's tools like this one that are helping Linux find a place in the business world.
Jay Munsterman has just relocated to Atlanta, GA from Washington DC, where he works with a variety of Unix platforms, Linux being his favorite. In his spare time he likes to spend time with his soon-to-be wife, Denessa, and their dog Melman. Jay can be reached at firstname.lastname@example.org.