I'm finally going to bite the bullet and rebuild my trusty linux box
with Debian Sarge (once it's out) and ATA RAID. I'm considering giving
up my web host/mail host, but want some redundancy before I make the switch...
I've found some cheap ATA-133 RAID controllers with the Silicon Image
680 chipset; initial googling looks good re: linux support. I'm
thinking one of those, two 120 gb drives, and consolidate all of my
network storage onto it.
Re: ATA Raid under Linux?
By: Poindexter Fortran to All on Wed May 04 2005 17:29:00
I'm finally going to bite the bullet and rebuild my trusty linux box
with Debian Sarge (once it's out) and ATA RAID. I'm considering giving
up my web host/mail host, but want some redundancy before I make the switch...
I've found some cheap ATA-133 RAID controllers with the Silicon Image
680 chipset; initial googling looks good re: linux support. I'm
thinking one of those, two 120 gb drives, and consolidate all of my network storage onto it.
I'd strongly suggest you configure the controllers to treat the drives as "JBOD" and use Linux software RAID.
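With mdadm (shipped with Sarge, as far as I recall) a two-drive mirror on the JBOD'd disks is only a couple of commands. Device names here are just an example -- adjust for wherever the drives land on your controller:

  # build a RAID-1 mirror from one partition on each drive
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
  mkfs.ext3 /dev/md0
  mkdir -p /mnt/raid && mount /dev/md0 /mnt/raid
  # watch the initial resync
  cat /proc/mdstat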
I'm finally going to bite the bullet and rebuild my trusty linux box
with Debian Sarge (once it's out) and ATA RAID. I'm considering giving
up my web host/mail host, but want some redundancy before I make the switch...
I've found some cheap ATA-133 RAID controllers with the Silicon Image
680 chipset; initial googling looks good re: linux support. I'm
thinking one of those, two 120 gb drives, and consolidate all of my
network storage onto it.
Come to think of it, if this is going to be a repository for all my
data, I should think bigger. 250 GB drives seem to be a nice sweet
spot.
I don't have any experience with the SI RAID cards. However, the 3ware cards have *excellent* Linux support, including kernel and user-level monitoring of the raidset. 3ware cards are available for both IDE and SATA in 2-16 channel boards. The 2-channels are fairly inexpensive considering the RAID is a true hardware-based raid, and not some BIOS trickery.
Re: Re: ATA Raid under Linux?
By: Funar to Poindexter Fortran on Tue Jul 05 2005 14:46:00
I don't have any experience with the SI RAID cards. However, the 3ware cards have *excellent* Linux support, including kernel and user-level monitoring of the raidset. 3ware cards are available for both IDE and SATA
in 2-16 channel boards. The 2-channels are fairly inexpensive considering the RAID is a true hardware-based raid, and not some BIOS trickery.
Funny -- I always considered that "some BIOS trickery" had definite advantages over any RAID card that considered a 2-disk mirror to be
the ultimate in advanced technology.
Not at all. If you're relying on the BIOS and special drivers to handle the configuration and operation of the RAID, you're wasting valuable processor cycles on I/O that could be used for CPU intensive applications such as video capture, editing, etc. If you're going that route, you may as well use straight software-based RAID. True hardware-based cards take all the mapping, data striping, and configuration away from the main system CPU.
I used to use Promise FastTrak cards. Their 2 and 4 channel cards are BIOS mapped cards. I was seeing frame drops when encoding HDTV w/5.1 sound - this on a dual Opteron. When I went to a 3ware card of a similar class, the problems went away and my I/O throughput nearly doubled. That was enough for me.
I've since installed 3ware cards in every server I've installed.
Given the many advantages of software RAID, I have to say that unless you have money to spare and need the performance, I'd recommend staying away from hardware RAID solutions.
I used to use Promise FastTrak cards. Their 2 and 4 channel cards are BIOS mapped cards. I was seeing frame drops when encoding HDTV w/5.1 sound - this on a dual Opteron. When I went to a 3ware card of a similar class, the problems went away and my I/O throughput nearly doubled. That was enough for me.
I have used Promise FastTrak cards as well, but always in JBOD mode. I
used software RAID on top of these to give me the redundancy that I
wanted. I've also used "big-iron" RAID solutions with multiple ranks of multiple disks. And I've experienced RAID failure, which was a load of
fun. Wanna guess which failed -- hardware or software?
Angus McLeod wrote:
Given the many advantages of software RAID, I have to say that unless you have money to spare and need the performance, I'd recommend staying away from hardware RAID solutions.
Dunno, I think RAID should be handled outside of the OS myself, just a personal view;
along with performance, it eliminates the need for the OS to have
that additional layer of overhead running.
I generally only want a single raid-1 setup for things, and usually the cheaper cards work fine for this..
I haven't had a bad raid card, but actually did have an OS issue with
raid before, don't remember the OS in particular (was redhat or suse, I didn't admin the box), I just did the db dumps from another machine...
I've also seen windows software raid eat itself when one of the drives
crashed as well...
In my own experience, hardware solutions deal with a drive failure better...
The SATA raid card in one of my servers supports hotswap & rebuild.. I don't have hotswap bays setup, but if I did, I could swap a drive out while running etc.. and this is on a <$100 card...
Given the many advantages of software RAID, I have to say that unless you have money to spare and need the performance, I'd recommend staying away from hardware RAID solutions.
Dunno, I think RAID should be handled outside of the OS myself, just a
personal view,
Why?
along with performance, it eliminates the need for the OS to have
that additional layer of overhead running.
Well, "performance" and "the need to have an additional layer running" are the same thing, aren't they? Yes, you can get better performance by using hardware RAID, but you can buy a faster CPU to counter that issue, and in any event most SOHO-type users are not looking for performance anyway, but reliability. Certainly, the original poster who asked about RAID stated directly that they were looking for redundancy, rather than performance...
I generally only want a single raid-1 setup for things, and usually the
cheaper cards work fine for this..
Software raid gives you additional options (such as RAID 5) at no
additional cost.
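For instance (a rough sketch only -- device names invented, any three or more disks or partitions will do), a three-disk RAID 5 is one command:

  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda1 /dev/hdc1 /dev/hde1
  # watch the parity build
  cat /proc/mdstat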
I haven't had a bad raid card, but actually did have an OS issue with
raid before, don't remember the OS in particular (was redhat or suse, I
didn't admin the box), I just did the db dumps from another machine...
The thing is, if you have a RAID card fail on you, and you can't locate a replacement, what do you do? The data on the disks are going to be in a proprietary format, so you will HAVE to find a replacement card or lose
the data. With Software RAID this is not an issue. You can hook up the disks to ANY machine that supports the RAID software, and reacquire the data.
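Something like this, assuming the array was /dev/md0 and the member disks show up as hda/hdc on the new box (names invented) -- the md superblock on each disk carries everything the software needs:

  # read the RAID superblock straight off a member disk
  mdadm --examine /dev/hda1
  # reassemble the array on the new machine
  mdadm --assemble /dev/md0 /dev/hda1 /dev/hdc1
  # or let mdadm hunt for members (may want DEVICE/ARRAY lines in /etc/mdadm.conf)
  mdadm --assemble --scan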
I've also seen windows software raid eat itself when one of the drives
crashed as well...
Windows software? What a surprise!
In my own experience, hardware solutions deal with a drive failure
better...
My experiences are quite the opposite. A very expensive RAID unit failed and cost nearly $100,000 to rejuvenate, whereas I've never experienced a problem with software RAID.
The SATA raid card in one of my servers supports hotswap & rebuild.. I don't have hotswap bays setup, but if I did, I could swap a drive out while running etc.. and this is on a <$100 card...
SATA drives make life much easier, of course. But I'm sure you know that Linux software RAID supports hot-spares with automatic rebuild on standard IDE drives at absolutely no additional cost. And the *software* will also manage hot-swap as well, although most people do not have hot-swappable
IDE drives and bays...
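A sketch of the hot-spare arrangement, assuming a two-disk mirror with a third drive standing by (device names invented):

  # mirror hda1/hdc1, keep hde1 as a hot-spare
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
        /dev/hda1 /dev/hdc1 /dev/hde1
  # if a member dies, md kicks it out and rebuilds onto the spare by itself;
  # you can also fail, remove and re-add members by hand:
  mdadm /dev/md0 --fail /dev/hda1 --remove /dev/hda1
  mdadm /dev/md0 --add /dev/hda1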
One thing software RAID will allow is the use of dissimilar drives, and
the slicing of these drives to suit.
For instance, you could have a 30 gig drive and a 40 gig drive, with the
40 gig drive sliced into a 10 gig partition and a 30 gig partition. You
can use the 10 gig partition to boot the machine and get it running, and then RAID the other (30 gig) partition with the separate 30 gig drive.
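A sketch of that layout, with the 40 gig drive as hda and the 30 gig drive as hdc (names assumed; carve the partitions with fdisk or cfdisk first):

  # hda1 = ~10 gig boot/system, hda2 = ~30 gig; hdc1 = the whole 30 gig drive
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda2 /dev/hdc1
  mkfs.ext3 /dev/md0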
Ok, this may be questionable, but it is *doable* and low-budget installations may require questionable practices like this. I don't know
a hardware RAID solution that allows you to mix dissimilar drives, at
least not without reducing the capacity of all drives to that of the smallest.....
And here is another good thing about software RAID: You can set up installations for testing that simply can't be set up without buying
extra hardware. Example:
Suppose you were thinking of setting up a rank of four drives in RAID-5 configuration. You want to try it out to get a feel for it. You can cut four 1-gig slices off the SAME drive and RAID them with software. OK, you get no performance *OR* redundancy benefits, but for the week that you
will be using this setup for testing, and since no production data will be committed to the test rank, you can proceed without spending a cent.
Hell, if you have some unsliced space on the disk, you can proceed without even *rebooting* the machine.
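One way to do the same trick without even touching the partition table is with loopback files -- strictly a throwaway sketch:

  # four 1-gig files standing in for four drives
  for i in 0 1 2 3; do dd if=/dev/zero of=/tmp/fake$i bs=1M count=1024; done
  for i in 0 1 2 3; do losetup /dev/loop$i /tmp/fake$i; done
  mdadm --create /dev/md1 --level=5 --raid-devices=4 \
        /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
  cat /proc/mdstat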
No, there *are* reasons for using hardware RAID solutions, but those are largely reasons of performance. And the cost of the RAID hardware could
be applied to a faster CPU to maintain reasonable performance levels with software RAID. And given the flexibility of software RAID, and the
benefit of not being held to ransom by a particular brand of RAID card, I think that *most* of the time, software RAID is a good choice for the guy building a RAID box at home or for a small business.
One thing which I don't know, and maybe you can tell me is this: When you use a hardware RAID solution, can you access the drives with SMART monitoring software to be on the lookout for potential drive degradation which might lead to a failure? In the software RAID solutions I've implemented for small (and not-so-small) business, I've had the RAID
drives continually monitored, with any departure from established norms logged, e-mailed to the SysAdmin, announced verbally in a loop via a home-brewed "digi-talker" on the machine, and sent as an SMS to my celly.
I don't know of anyone else who was able to do that with their hardware
RAID installations, but perhaps the capability exists and they just didn't bother?
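The monitoring itself is nothing exotic, by the way -- smartmontools does most of the legwork. Roughly (the address and alert script are placeholders, not the real setup):

  # /etc/smartd.conf -- watch both RAID members, mail on trouble
  /dev/hda -a -m sysadmin@example.com -M exec /usr/local/bin/raid-alert
  /dev/hdc -a -m sysadmin@example.com
  # one-off check from the command line
  smartctl -a /dev/hda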
Generally hardware raid is generic, and the OS layer is separate from how the virtual drives function to the OS... whereas with software raid, it's another potential place for bugs, that are non-generic in nature... I know hardware is subject to bugs/failures as well, but I find more comfort in a hardware solution.
Software raid gives you additional options (such as RAID 5) at no additional cost.
You still have to buy the drives, and hardware raid controllers that support raid-5 are not too bad on the SATA side, though increasingly expensive for SCSI.
The thing is, if you have a RAID card fail on you, and you can't locate a replacement, what do you do? The data on the disks are going to be in a proprietary format, so you will HAVE to find a replacement card or lose the data.
True, but depending on how a drive fails, you could wind up with corrupt data in any case; this is where a good backup plan is necessary..
I've also seen windows software raid eat itself when one of the drives
crashed as well...
Windows software? What a surprise!
LOL, note above I've seen it in linux too.. ;)
A very expensive RAID unit failed and cost nearly $100,000 to
rejuvenate, whereas I've never experienced a problem with software
RAID.
Don't know about this, as said, generally do cross system backups in addition to raid, so rarely lose anything (sometimes on my desktop I'm less fortunate though, had about 4 HD's fail on my desktop in the last 7 years)
SATA drives make life much easier, of course. But I'm sure you know that Linux software RAID supports hot-spares with automatic rebuild on standard IDE drives at absolutely no additional cost. And the *software* will also manage hot-swap as well, although most people do not have hot-swappable IDE drives and bays...
Dunno, afaik most IDE/PATA controllers don't support hotswap anyway... SATA is becoming more standard.
Many hardware raid solutions will allow dissimilar drives, but you lose the extra on the bigger drive, I will give you that.. on the flip side, the varying drive speeds tend to bring down performance (it keeps coming back to that doesn't it.. ;) )
I think that *most* of the time, software RAID is a good choice for
the guy building a RAID box at home or for a small business.
Possibly, but as I said before, a good backup plan is always a good idea. :) sometimes harder to implement than others.
One thing which I don't know, and maybe you can tell me is this: When you use a hardware RAID solution, can you access the drives with SMART monitoring software to be on the lookout for potential drive degradation which might lead to a failure?
Actually, yes, it varies by the controller.
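With the 3ware cards, for instance, smartmontools can usually reach the drives behind the controller -- roughly like this (controller device and port number depend on the card and driver):

  # SMART data from the first drive behind a 3ware card
  smartctl -a -d 3ware,0 /dev/twe0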
We had two Seagate Barracudas go in a big, external RAID unit that cost something like $30K. Our DDS2 backups were faulty, so we *had* to rejuvenate the rank, no matter what it cost. Fortunately for me, I'd
memo'd The Pointy-haired Idiot only eight days before, reminding him that I'd been telling him we had a problem with the tape drive for the last two years. So I was able to dodge that particular bullet.....
Dunno, afaik most IDE/PATA controllers don't support hotswap anyway... SATA is becoming more standard.
I believe they *are* available, but pricy. Anyone with that sort of cash probably is not building a rank for domestic use.
Many hardware raid solutions will allow dissimilar drives, but you lose the extra on the bigger drive, I will give you that.. on the flip side, the varying drive speeds tend to bring down performance (it keeps coming back to that doesn't it.. ;) )
Yes, and I said right off, I'll give you that hardware RAID solutions will run faster than software RAID solutions. But the original poster IIRC had just built a Debian box and wanted some redundancy for storage of digital media. If performance was important enough, a faster CPU chip could probably neutralize the difference.
I think that *most* of the time, software RAID is a good choice for
the guy building a RAID box at home or for a small business.
Possibly, but as I said before, a good backup plan is always a good idea. :) sometimes harder to implement than others.
The biggest mistake you can make when setting up *any* RAID solution is thinking it relieves you of the need to back up.
One thing which I don't know, and maybe you can tell me is this: When you use a hardware RAID solution, can you access the drives with SMART monitoring software to be on the lookout for potential drive degradation which might lead to a failure?
Actually, yes, it varies by the controller.
Okay. I know the high-end RAID solutions can do that stuff, but I wasn't sure about the <$100 RAID card bought over the counter.
Angus McLeod wrote:
We had two Seagate Barracudas go in a big, external RAID unit that cost something like $30K. Our DDS2 backups were faulty, so we *had* to rejuvenate the rank, no matter what it cost.
Yeah, it's funny when people don't consider that.. I back up most stuff to another system, so that I can recover quicker in the short term.. main webserver down, set up the backup to serve until the main is back up, same for db etc... not a live redundancy, but enough to cut downtime a bit...
Yeah, I can't believe anyone would spend *THAT* much for PATA technology compared to scsi, and more recently sata.. it simply crosses the line, though PATA drives are typically available in much bigger sizes, so that may have something to do with it.
Angus McLeod wrote to Tracker1 <=-
Re: Re: ATA Raid under Linux?
By: Tracker1 to Angus McLeod on Mon Jul 25 2005 00:14:00
Angus McLeod wrote:
We had two Seagate Barracudas go in a big, external RAID unit that cost something like $30K. Our DDS2 backups were faulty, so we *had* to rejuvenate the rank, no matter what it cost.
Yeah, it's funny when people don't consider that.. I back up most stuff to another system, so that I can recover quicker in the short term.. main webserver down, set up the backup to serve until the main is back up, same for db etc... not a live redundancy, but enough to cut downtime a bit...
Well, after that particular incident, I was given the go-ahead to build
a machine specifically for doing backups. We had our production
databases each in a slice of the RAID cabinet. I duplicated these
slices on the new backup box, and periodically did a "cold backup" of
the entire slice onto the other machine. Then the backed up slices
were tar'd and gzip'd, so I had twelve days worth of backups available, and the latest one not archived. In the event of a database loss the
idea was to mount the backed-up slice via NFS and be running again
ASAP.
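Something along these lines would do that rotation -- paths and retention here are just an illustration, not the actual script:

  #!/bin/sh
  # cold-copy the database slice, archive it, keep the newest 12 tarballs
  DAY=`date +%Y%m%d`
  rsync -a /mnt/dbslice/ /backup/current/
  tar czf /backup/archive/dbslice-$DAY.tar.gz -C /backup current
  ls -1t /backup/archive/dbslice-*.tar.gz | tail -n +13 | xargs -r rm -f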
Yeah, I can't believe anyone would spend *THAT* much for PATA technology compared to scsi, and more recently sata.. it simply crosses the line, though PATA drives are typically available in much bigger sizes, so that may have something to do with it.
Again, depends WHO and WHAT is doing it. I'd not buy into a big SCSI array for home use. I'd buy two IDE disks and go with a simple mirror (using software RAID). My cost would be only the drives themselves,
which would be low buck-per-bit in comparison. But the Linux Software RAID implementation *will* support hot-swap if you feel like spending
the cash for the appropriate IDE or SCSI units. And the Hot-Spare
option is perfectly viable with low-cost IDE.
SATA does make the whole problem moot, though, don't ya think? :-)