Kernel Korner - ATA Over Ethernet: Putting Hard Drives on the LAN

With ATA hard drives now being cheaper than tape, this simple new storage technology enables you to build storage arrays for archives, backup or live use.
Backup

Because ATA over Ethernet puts inexpensive hard drives on the Ethernet network, some sysadmins might be interested in using AoE in a backup plan. Often, backup strategies involve tier-two storage—storage that is not quite as fast as on-line storage but also is not as inaccessible as tape. ATA over Ethernet makes it easy to use cheap ATA drives as tier-two storage.

But with hard disks being so inexpensive and seeing that we have stable software RAID, why not use the hard disks as a backup medium? Unlike tape, this backup medium supports instant access to any archived file.

Several new backup software products are taking advantage of filesystem features for backups. By using hard links, they can perform multiple full backups with the efficiency of incremental backups. Check out the BackupPC and rsync backup links in the on-line Resources for more information.
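For example, here is a minimal sketch of the hard-link technique using rsync's --link-dest option; the paths and dates are only illustrative:

# Sunday: a normal full backup
rsync -a /home/ /backups/2005-07-03/

# Monday: looks like another full backup, but every file that is
# unchanged since Sunday is stored as a hard link to Sunday's copy,
# so it consumes almost no additional disk space
rsync -a --link-dest=/backups/2005-07-03/ \
      /home/ /backups/2005-07-04/

Each dated directory then appears to be a complete snapshot, yet the total space used grows only by the files that actually changed.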

Conclusion

Putting inexpensive disks on the local network is one of those ideas that make you think, “Why hasn't someone done this before?” Only with a simple network protocol, however, is it practical to decouple storage from servers without expensive hardware, and only on a local Ethernet network can a simple network protocol work. On a single Ethernet we don't need the complexity and overhead of a full-fledged Internet protocol such as TCP/IP.

If you're using storage on the local network and if configuring access by creating Ethernet networks is sufficient, then ATA over Ethernet is all you need. If you need features such as encryption, routability and user-based access in the storage protocol, iSCSI also may be of interest.

With ATA over Ethernet, we have a simple alternative that has been conspicuously absent from Linux storage options until now. With simplicity comes possibilities. AoE can be a building block in any storage solution, so let your imagination go, and send me your success stories.

Acknowledgements

I owe many thanks to Peter Anderson, Brantley Coile and Al Dixon for their helpful feedback. Additional thanks go to Brantley and to Sam Hopkins for developing such a great storage protocol.

Resources for this article: /article/8201.

Ed L. Cashin has wandered through several academic and professional Linux roles since 1997, including Web application developer, system administrator and kernel hacker. He now works at Coraid, where ATA over Ethernet was designed, and he can be reached at ecashin@coraid.com. He enjoys music and likes to listen to audio books on his way to martial arts classes.

______________________

Comments


Definitely very helpful

Ace Winget

Definitely very helpful. I was kind of looking into doing this on Linux, and now I'm pretty positive that I can handle it.

distributed network raid configuration

pr0mjr

with mirrored disks on the same shelf, as described in your post, redundant packets are sent to that one shelf. this will saturate its switch ports unnecessarily.

consider distributing the raid 1 mirrors between two or more shelves as follows:

mdadm -C /dev/md1 -l 1 -n 2 \
/dev/etherd/e0.0 /dev/etherd/e1.0
mdadm -C /dev/md2 -l 1 -n 2 \
/dev/etherd/e0.1 /dev/etherd/e1.1
mdadm -C /dev/md3 -l 1 -n 2 \
/dev/etherd/e0.2 /dev/etherd/e1.2
mdadm -C /dev/md4 -l 1 -n 2 -x 2 \
/dev/etherd/e0.3 /dev/etherd/e1.3 \
/dev/etherd/e0.4 /dev/etherd/e1.4

then stripe those mirrors as previously suggested...

mdadm -C /dev/md0 -l 0 -n 4 \
/dev/md1 /dev/md2 /dev/md3 /dev/md4

considering the server may be bonded to gigabit ethernet uplinks in a round-robin or similar configuration, the switch will saturate each of the fast ethernet ports dedicated to the shelves before saturating the server uplinks.

the other advantage to a distributed raid mirror shows up when a single shelf fails. all of its drives are mirrored on another shelf, so it's business as usual for the server.

with the improvements mentioned above you get both improved read and write throughput and a more robust system that keeps running despite multiple disk failures or the loss of a whole shelf.
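here's a quick sketch, assuming the device names above, of how you might confirm that an array stays up when a shelf's disk goes away:

cat /proc/mdstat                        # overall state of all md arrays

# mark one half of a mirror as failed (simulating a lost shelf)
mdadm /dev/md1 --fail /dev/etherd/e0.0

# md1 should now report itself as degraded but still active
mdadm --detail /dev/md1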

cheers! ;-)

It works in my lab!

Davester

That doesn't make it enterprise.

Out of order packets aren't the silent killer here.
Faulty checksum hardware will silently allow corruption of your data.
Cheap NICs can kill your data, silently and thoroughly.
Fsck early. Fsck often.

References:

google: tcp checksum hardware error

of particular note:

http://portal.acm.org/citation.cfm?doid=347059.347561

To quote: "Even so, the highly non-random distribution of errors strongly suggests some applications should employ application-level checksums or equivalents."

I guess the Coraid folks don't have google.

"I guess the Coraid folks

Ziggy Stardust

"I guess the Coraid folks don't have Google." ?? I guess Davester can't read.

What's the relevance of TCP checksum/CRC issues when this is all done at layer 2 and TCP isn't even involved? Here - let me answer that for you: NONE.

As noted in the article, avoiding TCP also avoids a lot of other issues. This is a layer 2 (Ethernet) solution. No TCP. No UDP. No IP. That's the lovely simplicity of this solution.
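You can even see this on the wire. AoE frames carry their own registered EtherType, 0x88A2, so a capture filter on that type (a minimal sketch, assuming the interface is eth0) shows the traffic without an IP header anywhere in sight:

# capture ATA over Ethernet frames by EtherType;
# there is no IP, UDP or TCP layer to filter on
tcpdump -i eth0 -e ether proto 0x88a2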

maybe zfs is the answer

Anonymous

ZFS checksums all blocks, so there won't be any silent corruption.

Single write multiple read

Al

"Given linux software RAID is not cluster-aware you cannot share the array between multiple AoE clients".

I presume this is only in the case with multiple writing clients?

Is it therefore possible to have a single write client but any number of read clients accessing the array via AoE ? Are there any examples/users doing this ?

great article by the way..

regards
Al

Nope, you need a cluster-aware FS

Anonymous

Nope, you need a cluster-aware FS like GFS or CXFS even for one writer and multiple readers.

Packet-ordering dependent?

Anonymous

Hi,

Aren't you hosed if the switch decides to deliver frames out-of-order? Is there anything in the protocol that dictates ordering at the frame level?

Thanks,
--S

Packet-ordering dependent?

eclectic

It is a requirement/feature of Ethernet/IEEE 802.3 that frames are not re-ordered. It is also a requirement that frames that are delivered are error-free within the capabilities of the 32-bit CRC. It is not a requirement (in the connectionless mode of operation) that all frames are delivered, so there must be a retransmission/error-correction mechanism.

Out of order

AlanCox

There is a complicated answer to this, but as an arm-waving simple case the answer is "no". The Linux block layer will not issue an overlapping write to a device until the previous write covering that sector has completed. In fact, usually it'll merge them together.

Don't know, but I don't think it's a big problem

Anonymous

Don't know, but I don't think it's a big problem... recent SATA drives have Native Command Queuing, which reorders the commands in the drive's buffer to increase performance.

Cluster-aware RAID

Anonymous

Nice article. I am a relative newcomer to the field of storage. Could you please explain what you meant by the term cluster-aware RAID? Is there currently any implementation of it?

cluster aware RAID

yacc

A cluster-aware RAID would be a block device driver that coordinates with other hosts while writing to the RAID or rebuilding it.

Andreas

Good article

Andrew

Over a year later this article is still relevant and informative. Thanks.

software used in read/write tests

Anonymous

What exactly did you use to perform the read/write tests? If it's just a simple shell script, would you mind pasting it here? I'd assume you used hdparm -Tt, except IIRC this doesn't do any write tests.
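For what it's worth, a simple sequential test can be done with dd. This is only an illustrative sketch (the device name and sizes are made up), not necessarily what the author used, and the write test destroys whatever is on the target device:

# sequential write test: 1GB of zeros, bypassing the page cache
dd if=/dev/zero of=/dev/etherd/e0.0 bs=1M count=1024 oflag=direct

# sequential read test on the same device
dd if=/dev/etherd/e0.0 of=/dev/null bs=1M count=1024 iflag=direct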

Great article!

--
Adam Monsen

Could someone describe the differences

Adrian

Could someone describe the differences between AoE and network block devices (nbd)? Thanks.

AoE and nbd

Anonymous

AoE is a network protocol for ethernet. The aoe driver for Linux allows AoE storage devices (targets) to be usable as local block devices.

nbd is not a network protocol but a Linux feature. It's analogous to the aoe driver, not the AoE network protocol. Instead of AoE, it uses TCP over IP as the network protocol for transmitting information and data.

TCP is more complex than AoE. AoE can be implemented by low-cost hardware.

AoE is not a routable protocol, so for using remote storage devices over long, unreliable network links, nbd (using TCP) might be a nice choice. On the other hand, AoE is great for using nearby storage devices. Interestingly, AoE could be tunneled through other protocols (like TCP), or even encrypted sessions.
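As a rough sketch of the contrast (the host name, port and path here are made up, and the old-style nbd-server command line is assumed), using nbd looks something like this:

# on the server: export a file over TCP port 2000
nbd-server 2000 /srv/export.img

# on the client: attach the remote export as a local block device
modprobe nbd
nbd-client filer.example.com 2000 /dev/nbd0

With AoE there is no IP address or port to configure; the target simply appears on the Ethernet segment as a shelf/slot pair, such as /dev/etherd/e0.0.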

Sharing drives over AoE

dchouinard

Is it possible to use an old box and share its drive over AoE? One could use some older machines to build a disk array for a more powerful machine.

Yes, there's an AoE target that runs in user space

Anonymous

Yes, there's an AoE target that runs in user space:

http://freshmeat.net/projects/vblade

... with which you could export any file or block device using ATA over Ethernet.
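For example, here is a rough sketch of exporting a spare disk, assuming vblade's usual argument order (shelf, slot, interface, device) and made-up device names:

# on the old box: export /dev/sdb as shelf 0, slot 1 via eth0
vblade 0 1 eth0 /dev/sdb &

# on the machine using the storage: load the aoe driver and the
# export appears as a local block device
modprobe aoe
fdisk -l /dev/etherd/e0.1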

But for the application you're considering, it sounds like PVFS is just the thing.

http://www.parl.clemson.edu/pvfs/

Each host has some storage, and all the hosts communicate in order to share the storage efficiently to create a large, fast filesystem.

AoE target as loadable module

Anonymous

there is also an AoE target that runs in kernel space now:

http://lpk.com.price.ru/~lelik/AoE/

unfortunately, it doesn't seem to be documented very well

vblade user mode

Elix

The vblade user-mode implementation is slow (vblade at 100% CPU on an Athlon64 3200 with gigabit Ethernet). Client and server are running Ubuntu 6 desktop.

Single SATA disk:
  on the server:            hdparm -tT /dev/sda         = 58 MB/s
  on the client (P4 3GHz):  hdparm -tT /dev/etherd/e0.0 = 50 MB/s

RAID 0 of SATA disks:
  on the server:            hdparm -tT /dev/md0         = 115 MB/s
  on the client (P4 3GHz):  hdparm -tT /dev/etherd/e0.0 = 75 MB/s (!)

Thanks for your benchmark

Art

Thanks for your benchmark numbers!

Two nice facts are included in your posting:
1) It seems that about 8 MB/s is lost to ATA over Ethernet (in your first benchmark).
2) You are hitting the throughput limit of your gigabit Ethernet NIC here (75 MB/s is a very good value; when I benchmarked three GbE NICs some time ago, the fastest was an Intel Pro with about 78 MB/s maximum throughput).

I'm wondering if channel bonding would help here. With two GbE NICs per machine, the throughput should again be at ~150 MB/s (or 115 MB/s for your disks). Sadly, this concept won't scale very well :(

See:
http://www.howtoforge.com/nic_bonding
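For reference, a minimal round-robin bonding sketch (the interface names and IP address are made up) might look like:

# load the bonding driver in round-robin mode with link monitoring
modprobe bonding mode=balance-rr miimon=100

# bring up the bond and enslave two gigabit NICs to it
ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1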

Bonding

Evan

I use bonding in my clusters. I would suspect that bonding will not give you much of a benchmark increase, but it should provide a more consistent, higher access rate under heavy workloads (lots of users, heavy video editing, web serving) than a single NIC.

holy comment spam, batman.

Anonymous

holy comment spam, batman. Why aren't you guys at least using a captcha?
