Dead Hard Drive - I think ...
Forum Index -> Banter
Author Message
BotFodder

Wicked Sick!

Joined: 01/13/2006 15:23:41
Messages: 1239
Location: Florida
Offline

I'm working my way through some computer problems that may involve a faulty hard drive on a RAID 0 array. Fun! I've already contacted support for a replacement, but I'm in the middle of trying to get my data ghosted off so that I can restore it later. I do have an earlier version tucked away, but I'm hoping to get something a bit more recent ...

Anyways, I'm "here" but I won't be on the game much until I resolve this; I have UT2004 installed on my XP x64 install, so as soon as I get the system stable and get at least a successful backup, I should be on a little ...

I use the Futurama Prof. Farnsworth Skin: http://www.disastrousconsequences.com/dcforum/posts/list/1595.page
WM: (DC)BotFodder 170
MM: (DC)BotDoctor 141
AM: (DC)BotBooster 147
http://ericdives.com/ - My DC Newbie FAQ: http://tinyurl.com/lz229
Twitter: http://twitter.com/ericdives
Moof

Wicked Sick!
Joined: 06/24/2006 19:42:44
Messages: 433
Location: College Park, MD
Offline

RAID 1, RAID 5. Never stripe =(

Moof, Scholar of Ni

Moof (W); Dr. Moof (M); Engimoof (E); Moofgineer (E beta)
BotFodder

Wicked Sick!

Joined: 01/13/2006 15:23:41
Messages: 1239
Location: Florida
Offline

I'm toying with buying another hard drive just so I can go RAID 5 when the RMA gets here.

Spacey

Wicked Sick!

Joined: 01/07/2005 21:28:14
Messages: 589
Location: Da'Burgh (Pittsburgh) PA
Offline

Moof wrote:
Raid 1, Raid 5. Never Stripe =( 


Actually, as someone who has worked with RAID for many, many years, I can tell you that RAID 0 has its uses. For one thing, RAID 0 has much higher performance than a single disk, and it can get you much larger volumes. The key is, if you use plain RAID 0, you had better hope that whatever you have on it is something you can afford to lose, because it has no redundancy and a much lower MTBF: a single drive failure takes out the entire plex. So if you have N drives with MTBFs of 100K hours in a RAID 0 array, your effective MTBF is 100K hours/N (e.g. two drives will have an MTBF of 50K hours, four drives an MTBF of 25K hours, etc.).
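That divide-by-N arithmetic is easy to sanity-check in a couple of lines (a back-of-the-envelope sketch; it assumes identical, independent drives, which real drives only approximate):

```python
def raid0_mtbf(drive_mtbf_hours: float, n_drives: int) -> float:
    """Effective MTBF of a RAID 0 stripe.

    Any single drive failure loses the whole plex, so the failure
    rates of the N drives add, and the effective MTBF divides by N.
    """
    return drive_mtbf_hours / n_drives

# 100K-hour drives, as in the example above:
print(raid0_mtbf(100_000, 2))  # 50000.0
print(raid0_mtbf(100_000, 4))  # 25000.0
```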

At the radio observatory, we have a RAID 0 array on a multi-TB file server. Since at our maximum data acquisition rate of >1 TB/day we cannot keep data around for long anyway, RAID 0 makes perfect sense there.

Now, regarding RAID 1 and RAID 5. One thing folks forget when building a high-performance system is that with these types of arrays you have multiple writes, and in the case of RAID 5, multiple reads, when doing I/O on a single block. At Bell Labs, while doing the R&D for the next generation of voice mail/IVR systems we produced (ever hear of Audix or Conversant? Their big brother is at companies like VZW, Cingular, and other providers, and is what I am speaking about), we did very extensive testing of multiple HW controllers, SW drivers, etc., and found that out of all the RAID arrangements which provide both large plexes and redundancy, the best out there was RAID 0+1 (mirror and stripe).

When we did RAID 5 on a 6-drive plex, every block of I/O resulted in 6 reads or 6 writes for every single operation. This killed our performance, even though RAID 5 has much lower space overhead. Other RAID strategies gave similar problems. Only with RAID 0+1 or 1+0 could we get the huge plex required to store the 100K hours of voicemail for 1M+ mailboxes per sub-node and still get the performance needed to support the hundreds of active sessions per sub-node, because we reduced the I/O operations per block from 6 to 2 for writes and 1 for reads. Yes, these levels have 50% space overhead (e.g. from six 250GB drives you will only get 750GB), but with this setup a single VM system for one wireless provider covered the entire state of Ohio until they exceeded the maximum number of mailboxes, sessions, or hours of messages.
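The capacity side of that trade-off is simple to work out (a simplified sketch that ignores controller metadata and assumes identical drives; the RAID 5 figure uses the usual one-drive's-worth-of-parity rule):

```python
def usable_capacity_gb(n_drives: int, drive_gb: int, level: str) -> int:
    """Rough usable capacity for a few RAID levels (metadata ignored)."""
    if level == "raid0":
        return n_drives * drive_gb            # no redundancy at all
    if level in ("raid10", "raid01"):
        return n_drives * drive_gb // 2       # mirror+stripe: 50% overhead
    if level == "raid5":
        return (n_drives - 1) * drive_gb      # one drive's worth of parity
    raise ValueError(f"unknown level: {level}")

# Six 250GB drives, as in the post:
print(usable_capacity_gb(6, 250, "raid10"))  # 750
print(usable_capacity_gb(6, 250, "raid5"))   # 1250
```

So RAID 5 gives you 500GB more out of the same six drives, which is exactly why its per-block I/O cost is so easy to overlook.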

BTW... there is a much better way of doing this sort of thing for most folks (face it, none of us needs the amount of fault tolerance a RAID array would give you; you can lose a few hours or more with little problem, if you truly think about it). If you want performance and just want a backup in case the drive goes bad, rather than running a RAID array where the drives are constantly connected, think of a second drive (or drive set) for a snapshot. You periodically tell the SW/HW involved to make a snapshot onto the second drive, let it complete, take your system idle, flush to disk (beware of your controller's buffering), and then break the snapshot back apart. Besides performance, this reduces your vulnerability to single points of failure above the physical drives, such as cables, drive controllers, the OS, etc. When you hit those, particularly failures at the HDC or OS level and up, RAID will do you no good, as all copies will be corrupted. And if you spin the drives down between snapshots, you use less power as well.
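On Linux, one way to script that split-mirror cycle is with mdadm (a sketch only, not a tested procedure; `/dev/md0` and `/dev/sdb1` are placeholder names, and the quiesce step depends on your workload):

```shell
# Sketch of a split-mirror snapshot cycle using Linux mdadm.
# Placeholder device names -- adjust /dev/md0 and /dev/sdb1 for your system.

# 1. Attach the backup drive to the mirror and let it resync.
mdadm /dev/md0 --add /dev/sdb1
#    (wait for the resync to finish -- watch /proc/mdstat)

# 2. Take the system idle, then flush everything to disk.
#    Beware of write-back caching on the controller itself.
sync

# 3. Break the snapshot back apart: the removed drive now holds a
#    point-in-time copy you can shelve or spin down until next time.
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
```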

Now, if you are really insistent on using RAID, consider a controller which supports hot sparing and hot swapping.

BTW... the most extreme RAID I have seen was a system that needed ultra-high availability (better than six nines). It was a Tandem NonStop cluster, with HW RAID 5 arrays (with hot swap/hot spares) on multiple nodes, and VxFS doing software mirroring between the arrays across the nodes. The nodes in the cluster were split between multiple data centers with a multi-ring SONET connection between each data center. Talk about mega-cool, ultra-geek factor!

*BEL*_e (spacey), BEL Clan General -- You Frag em, I'll Slag em!
LA -- *BEL*_e (level 283 - Extreme AM), LW -- *BEL*_o (level 26) MM - ?? ( *BEL*_Rolaids ?? *BEL*DrWho??, Engineer... *BEL*BS_E_E [BSEE '89, Ohio U] (level 22)
