Note: This web page was automatically created from a PalmOS "pedit32" memo.

RAID notes


Some brief definitions of common RAID levels, and a link to information on a lot of RAID levels:
Linear mode - concatenation. Disk 0 fills up first, then disk 1, and so on. No fault tolerance, but it can be large. Most of the time it won't be that fast unless each individual disk is.
RAID-0 - striping. No fault tolerance, but it can be large, and it can be fast.
RAID-1 - mirroring. Highly fault tolerant.
RAID-5 - striped parity. Somewhat fault tolerant.
See also http://linux.cudeso.be/raid.php
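As a rough sketch (not from the original memo), here's how those levels might be created with Linux's mdadm; the /dev/sd* names and /dev/md0 are hypothetical:
  # Linear (concatenation) - disks appended one after another, no redundancy:
  mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb /dev/sdc
  # RAID-0 (striping) - large and fast, no redundancy:
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
  # RAID-1 (mirroring) - highly fault tolerant:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  # RAID-5 (striped parity) - survives the loss of any one disk:
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd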
Experimental RAID or RAID-like systems:
2005-03-17: Daniel Phillips <phillips@redhat.com> has been working on a "distributed data raid" or "ddraid", which is likely to show up on sourceforge/freshmeat eventually. He refers to it as "RAID 3.5", and it allows any one server to go down without loss of data.
CLVM
These notes cover Linux software RAID (including on RHEL 3) with md, LVM2, NBD, ENBD and so on:
https://stromberg.dnsalias.org/~strombrg/RHEL-software-RAID.html
https://stromberg.dnsalias.org/~strombrg/nbd.html
This URL covers Solaris' software RAID (DiskSuite/SVM), and includes scripts for setting up mirroring of a root disk or RAID 5 easily: https://stromberg.dnsalias.org/~strombrg/SunOS-software-RAID.html
Here are some notes on Lustre and GFS, which are actually distributed filesystems:
https://stromberg.dnsalias.org/~strombrg/Lustre-notes.html
https://stromberg.dnsalias.org/~strombrg/gfs_procedures.html
http://www.nacs.uci.edu/~lopez/esmf_aix/gfs_procedures.html
Comparison of "software raid" and "hardware raid": A "hardware solution" should have the following advantages (but won't always, so check!) : 1) Better performance 2) Simple, relatively driver-error-free maintenance. We don't need a game of "step on a crack and break your mother's back" (IE, doing bad things to the RAID should be hard, and doing good things with the RAID should be easy) with important data. 3) Can be pulled off and moved easily to another host (at least if they have the same OS, but maybe even with two different OS's) #1 is a very minor consideration IMO. The more important ones are #2 and #3.
If you're using a hardware RAID controller under Linux, and the controller needs a driver upgrade for whatever reason, make sure the new driver is not only in your /lib/modules directory (or directories), but also that the right version is in your initrd (unless the driver isn't in your initrd at all, in which case /lib/modules is enough).
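For example (a minimal sketch for a RHEL-style system, not from the original memo; "megaraid" is just a placeholder module name - use your controller's driver):
  # Check what's installed under /lib/modules for the running kernel:
  modinfo megaraid
  # Rebuild the initrd so it includes the new module (keep a backup of the old one):
  cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
  mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)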
We've had good experiences with these folks: http://www.raidweb.com/ We've been buying their RAID units that use IDE disks and present a SCSI interface to the computer. The only problem we've had with them is that older versions of their units come with firmware that shrieks until you give it a new disk when there's an error - you can't just acknowledge the error to keep the machine room folks from being driven up the wall. But I anticipate that what they're selling now won't have this problem. Oh, BTW, we have a Western Scientific hardware RAID unit in our machine room that looks almost identical to a raidweb unit - I'm guessing they're buying stuff from a common source and changing the labels....
For information on my experiences with 3Ware RAID controllers and Maxtor SATA disks, more specifically in the ESMF storage systems initially spec'd by IBM, see https://stromberg.dnsalias.org/~strombrg/3ware-maxtor-notes.html .
This vendor appears to sell older disk drives, which might be helpful when you need to replace a disk in an older RAID array: http://www.foxtec.com/
Some distributed filesystems:
GFS: https://stromberg.dnsalias.org/~strombrg/gfs_procedures.html
GFS2
Lustre: https://stromberg.dnsalias.org/~strombrg/Lustre-notes.html
OCFS
OCFS2
Great comparison of Linux filesystem limits by SuSE: http://www.suse.de/~aj/linux_lfs.html
And some notes from the linux-lvm list: In case anyone is interested, there seem to be some limitations:
32-bit machines:
ext3: 2 TB (I managed to create a 4 TB fs for some reason)
xfs: 16 TB
reiser: 16 TB
64-bit:
ext3: 2 TB (? not sure about this one)
xfs and reiser: very large (> a million TB)
One of the hardware support people said there was a 2 TB limit in the SCSI protocol (not sure about this), so I had to use LVM to create the large volume from a subset of RAID volumes even though the native capacity of the RAID unit was 16 TB. So the process was to divide and recollect the RAID volumes using LVM and xfs.
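A minimal sketch of that divide-and-recollect approach (not from the list posting; device names, the volume group name and the mount point are hypothetical):
  # Turn each smaller RAID LUN into an LVM physical volume:
  pvcreate /dev/sdb /dev/sdc /dev/sdd
  # Collect them into one volume group:
  vgcreate bigvg /dev/sdb /dev/sdc /dev/sdd
  # Carve out one logical volume spanning all free space (%FREE needs a reasonably recent LVM2):
  lvcreate -l 100%FREE -n biglv bigvg
  # Put xfs on it and mount it:
  mkfs.xfs /dev/bigvg/biglv
  mount /dev/bigvg/biglv /export/big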
Sun filesystem limits:
filesystem      | max filesystem size | max file size
SunOS 4.x UFS   | 2 GB                | 2 GB
Solaris UFS     | 1 TB                | 2 GB
Solaris 2.6 UFS | 1 TB                | 1 TB
VxFS            | 8,000 TB            | 8,000 TB
QFS             | 1 PB                | 1 PB
ZFS             | ?                   | ?
Monitoring how well an "md" RAID array is doing:
chkconfig mdmonitor on
service mdmonitor start
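Some quick manual checks as well (a sketch, not from the memo; /dev/md0 is a hypothetical array name):
  # Overall state of all md arrays, including rebuild progress:
  cat /proc/mdstat
  # Detailed status of one array:
  mdadm --detail /dev/md0
  # mdmonitor sends mail based on /etc/mdadm.conf, e.g. a line like:
  #   MAILADDR root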
ACM on why you shouldn't just build your own 16 terabyte RAID array: http://delivery.acm.org/10.1145/870000/864077/opinion.htm?key1=864077&key2=6665939111&coll=GUIDE&dl=GUIDE&CFID=47965038&CFTOKEN=81819877 A little bit dated, but perhaps still relevant (2005-06-21).
Comparing hardware and software RAID:
> > I'm very interested in the relative SW raid / HW raid performance. I
> > have both in service (two raid 5 sets are actually the same size with
> > the same number of components) and see roughly the same as you mention.
> > One difference that I see is that HW raid should generate fewer
> > interrupts and lower bus traffic.
>
> In the early days of RAID, people always used to say that for speed, you
> had to get a hardware RAID controller instead of doing software RAID.
>
> However, I saw an interesting comment on a local linux user group
> mailing list recently. That comment was to the effect that hardware
> RAID controllers tend to have nowhere near the horsepower of a modern
> desktop CPU - so the claim was (I've not verified this!) that using
> software RAID would be much faster.
>
> I'm thinking that if you can do software RAID with a dedicated box
> that's doing nothing but RAID and a little network, then yes, that
> likely would be faster than some scaled-down hardware RAID controller -
> but it would also likely cost more, be less likely to have good hot
> swap, and so on.
This is very true, especially with 64-bit machines. Software RAID is much faster and more efficient. However, on servers that are using a lot of CPU resources it can become a bottleneck. Our in-house application is proven faster with hardware RAID than with software RAID, mainly due to the 100% CPU utilization; being able to offload the I/O to the MegaRAID card helps. I can't wait to test it with the new 320-2E. The CPU is twice as fast as on the 320-2X we use now.
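A very rough way to compare the two yourself (a sketch; the mount points are hypothetical, and a real comparison should use a proper benchmark like bonnie++ or iozone):
  # Sequential write throughput on a filesystem backed by hardware RAID:
  dd if=/dev/zero of=/mnt/hwraid/testfile bs=1M count=4096 oflag=direct
  # ...and on one backed by a software (md) array:
  dd if=/dev/zero of=/mnt/mdraid/testfile bs=1M count=4096 oflag=direct
  # In another terminal, watch how much CPU each run burns:
  vmstat 1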
It appears these folks have a variety of OS-independent RAID cards, variously combining PATA, SATA, SCSI, and Fibre Channel: http://www.areca.com.tw/index/html/
Interesting comparison of some SATA RAID controllers, in which Areca topped the list: http://www.tweakers.net/reviews/557/1
Copious info about "SMART", which is for monitoring disks:
http://smartmontools.sourceforge.net/#references
http://www.ariolic.com/activesmart/smart-attributes
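For example, with smartmontools (a sketch; /dev/sda is a hypothetical device name):
  # Overall health self-assessment:
  smartctl -H /dev/sda
  # All SMART attributes and error logs:
  smartctl -a /dev/sda
  # Kick off a short self-test (check results later with -l selftest):
  smartctl -t short /dev/sda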

