Tuesday, January 29, 2008

Fakeraid, Softraid, Hardraid, Hell ???

I am definitely getting the feeling there is no hard and fast rules with RAID on any level.

It looks like a few of the big players in the hardware RAID business at the SOHO market level are:

3ware
Areca
High Point Technologies
LSI
Promise Technology

A quick look at Newegg revealed 361 RAID cards with prices ranging from $11.99 to $9999.99 Gulp.

I limited my search to cards supporting a minimum of four SATA II connections in a RAID 5 or 10 configuration. The cheapest new card I could find was a High Point RocketRAID 1740, a PCI SATA I/SATA II controller card supporting RAID 0/1/5/10/JBOD, for $121.00.
This is very doable for a small business solution but probably at the upper end of the price spectrum for most home applications.

Software RAID was starting to look good at this point. A little more digging revealed that most people running open source software believe software RAID is the way to go. The reasons cited are best summarized at Linux: Why software RAID?

Why prefer Linux software RAID?
  • Potential for increased hardware and software biodiversity
  • Kernel engineers have much greater ability to diagnose and fix problems, as opposed to a closed source firmware. This has often been a problem in the past, with hardware RAID.
  • Disk format is public, thus no vendor lock-in: your data is not stored in a vendor-proprietary format.
  • A controller-independent, vendor-neutral layout means disks can be easily moved between controllers. Sometimes a complete backup+restore is required even when moving between hardware RAID models from the same vendor.
  • Eliminates single points of failure (SPOF) compared to similar configurations of hardware RAID.
  • RAID speed increases as host CPU count (multi-thread, multi-core) increases, following current market trends.
  • Cost. A CPU and memory upgrade is often cheaper and more effective than buying an expensive RAID card.
  • Level of abstraction. Linux software RAID can distribute data across ATA, SCSI, iSCSI, SAN, network or any other block device. It is block device agnostic. Hardware RAID most likely cannot even span a single card.
  • Hardware RAID has a field history of bad firmwares corrupting data, locking up, and otherwise behaving poorly under load. (certainly this is highly dependent on card model and firmware version)
  • Hardware RAID firmwares have a very limited support lifetime. You cannot get firmware updates for older hardware. Sometimes the vendor even ceases to exist.
  • Each hardware RAID has a different management interface, and level of feature support.
  • Your hardware RAID feature set is largely locked in stone, at purchase time. With software RAID, the feature set grows with time, as new features are added to Linux... no hardware upgrade required.
  • Additional RAID mode support. Most hardware controllers don't support RAID-6 as Linux software RAID does, and Linux will soon be adding RAID-5E and RAID-6E support.
  • Many ATA-based hardware RAID solutions either (a) fail to manage disk lifetimes via SMART, or (b) manage SMART diagnostics in a non-standard way.
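
Since several of the points above come down to the host CPU doing the parity math itself, here is a minimal sketch of the XOR parity scheme RAID 5 is built on. This is purely illustrative (tiny 4-byte "stripes", made-up data); in a real array the Linux md driver does this per stripe across whole disks.

```python
# Illustration of RAID 5-style XOR parity. The "disks" below are
# hypothetical 4-byte blocks, not real devices.

def xor_blocks(*blocks):
    """XOR corresponding bytes of equal-length blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three "data disks", each holding one stripe of the array.
disk0 = b"\x01\x02\x03\x04"
disk1 = b"\x10\x20\x30\x40"
disk2 = b"\xAA\xBB\xCC\xDD"

# The parity block is simply the XOR of the data blocks.
parity = xor_blocks(disk0, disk1, disk2)

# If disk1 dies, its contents can be rebuilt by XOR-ing the
# surviving data blocks with the parity block.
rebuilt = xor_blocks(disk0, disk2, parity)
assert rebuilt == disk1
```

This is why a faster, multi-core CPU directly speeds up a software array: the parity computation scales with the host, not with a fixed chip on a card.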

Why prefer Linux hardware RAID?

  • Software RAID may saturate PCI bus bandwidth long before a hardware RAID card does (this presumes multiple devices on a single PCI bus).
  • Battery backup on high end cards allows faster journalled rebuilds.
  • Battery-backed write-back cache may improve write throughput.
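
The bus-saturation point deserves a back-of-the-envelope check. The sketch below uses the classic 32-bit/33 MHz PCI figure of roughly 133 MB/s (shared by every device on the bus); the key difference is that a software mirror sends each write across the bus once per disk, while a hardware card receives the data once and duplicates it on its own side of the bus.

```python
# Illustrative arithmetic only: 133 MB/s is the theoretical peak of
# classic 32-bit/33 MHz PCI, shared across all devices on the bus.

PCI_BANDWIDTH_MBPS = 133.0

def sustainable_write_mbps(bus_crossings_per_write):
    """Peak write rate before the shared bus, not the disks, is the limit."""
    return PCI_BANDWIDTH_MBPS / bus_crossings_per_write

# Two-way software mirror: each logical write crosses the bus twice.
soft = sustainable_write_mbps(bus_crossings_per_write=2)
# Hardware mirror: one crossing; the card fans the write out itself.
hard = sustainable_write_mbps(bus_crossings_per_write=1)

print(f"software RAID 1 ceiling: ~{soft:.1f} MB/s")
print(f"hardware RAID 1 ceiling: ~{hard:.1f} MB/s")
```

On a modern PCI Express slot with a lane per device this ceiling largely disappears, which is part of why the software-RAID camp shrugs the objection off.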
After reading all of this (some of which, I must say, I do not fully understand yet), it sounds like there are some compelling reasons to go with software RAID. Another article supporting this, from Unix Pro News, describes a hardware solution (3ware) that failed, after which the author went back to software RAID. He concluded his article by saying:

Let's just say I've been burned a few times in the past.

Anyway, soon I can finally migrate the data for this site and several others off my old (going on 6 years old) server in Ohio (happily running Software RAID).

In retrospect, I was adding complexity and a new point of failure to a system that had always worked fine in the past. I've learned my lesson.

During all of this I kept seeing how one should avoid FakeRAID. I had no clue what this was so I looked it up and found a reference to it at Wikipedia:

Hybrid RAID implementations have become very popular with the introduction of inexpensive RAID controllers, implemented using a standard disk controller and BIOS (software) extensions to provide the RAID functionality. The operating system requires specialized RAID device drivers that present the array as a single block based logical disk. Since these controllers actually do all calculations in software, not hardware, they are often called "fakeraids", and have almost all the disadvantages of both hardware and software RAID.


A more humorous description was over at Snowflakes in Hell:

Whoever decided that “FakeRAID”, which is a highly technical term used to describe the types of Serial ATA RAID appearing on some cheaper motherboards, was a good idea needs a severe beating. It appears that FakeRAID is just basically a BIOS hint, requiring the CPU on the machine to do the majority of the work with regards to creating and maintaining the array. I was trying to make Ubuntu do the FakeRAID thing on a server at work, but I think I’m just going to use the Linux software RAID, which seems to be the conventional wisdom these days anyway.

Now back to your regularly scheduled gun blogging.

I guess I will not worry too much about what RAID levels are supported by any particular motherboard during future purchasing decisions...
