June 15th, 2007, 04:29 AM | #1 |
Regular Crew
Join Date: Feb 2007
Location: Munich, Germany
Posts: 135
RAID5 configuration
Hi, I have a workstation with one 150 GB Raptor for the OS and two SATA II Western Digital 250 GB (16 MB cache) drives for scratch disks. I'm starting to work with HDV now and want to optimize my system. From my research, RAID5 seems to be the best disk array (speed + fault tolerance).
I need suggestions. Since my mobo (Tyan S2895) only accepts 4 SATA disks, what would be wiser (cost/efficiency minded): 1. buy a big disk (750 GB) and build a 3-disk array with the 2 I already have (is that possible, or do I need 3 disks of the same capacity to build a RAID array?), or 2. buy 3 new 500 GB disks (110 euros each in Europe)? Thanks in advance!
June 15th, 2007, 07:10 AM | #2 |
Trustee
Join Date: Sep 2003
Posts: 1,435
Why not get 4 disks and set up a RAID10?
You will get the best of both worlds - redundancy and speed. Put your OS on that RAID, too.
June 15th, 2007, 01:59 PM | #3 |
Major Player
Join Date: Oct 2003
Location: Portland OR
Posts: 227
I'll echo that. RAID5 on four discs is great for commercial applications (databases etc.) where reads outnumber writes, but for fast write performance the same four discs write twice as fast when configured as RAID10.
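A rough way to see where that factor of two comes from is the classic "write penalty" rule of thumb: every small RAID5 write turns into a read-modify-write of data plus parity (about 4 I/Os), while RAID10 only has to mirror the write (2 I/Os). A minimal sketch of that model; the disk count and per-disk IOPS figure are purely illustrative guesses, not measurements from this thread:

```python
# Classic "write penalty" rule of thumb (a model, not a benchmark):
# RAID0 writes once, RAID10 writes data + mirror (2 I/Os),
# RAID5 reads data, reads parity, writes data, writes parity (4 I/Os).

def effective_write_iops(disks: int, iops_per_disk: float, level: str) -> float:
    """Estimate random-write IOPS for a small array under the rule of thumb."""
    penalty = {"raid0": 1, "raid10": 2, "raid5": 4}[level]
    return disks * iops_per_disk / penalty

if __name__ == "__main__":
    for level in ("raid0", "raid10", "raid5"):
        # 4 disks at ~80 random IOPS each: an illustrative guess for 7200 rpm SATA
        print(level, effective_write_iops(4, 80, level))
```

With those numbers the same four disks land at roughly 160 write IOPS in RAID10 versus 80 in RAID5, which is the factor of two mentioned above.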
June 16th, 2007, 02:52 AM | #4 |
Trustee
Join Date: Nov 2005
Location: Honolulu, HI
Posts: 1,961
RAID5 performance depends on the speed of the controller's processor. It takes some serious computing for the controller to work out how to write the data and parity across multiple drives in RAID5. The interesting part is that any one disk can fail and the data can be reconstructed, but there is a penalty for write performance.
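The parity itself is just an XOR across the data disks' stripes, which is also what makes single-disk reconstruction possible. A minimal illustration (my own sketch of the idea, not how any particular controller implements it):

```python
from functools import reduce

def xor_stripe(stripes):
    """XOR corresponding bytes of each stripe. With data stripes as input this
    yields the parity stripe; with the survivors plus parity as input it
    rebuilds the missing stripe."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*stripes))

# Three data disks hold one stripe each; the controller stores their XOR as parity.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_stripe([d0, d1, d2])

# If the disk holding d1 dies, its contents come back from what survives.
rebuilt = xor_stripe([d0, d2, parity])
assert rebuilt == d1
```

Doing that XOR (and the read-modify-write it implies) on every write is the computing load the controller's processor has to keep up with.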
June 16th, 2007, 04:46 AM | #5 |
Trustee
Join Date: Aug 2006
Location: Rotterdam, Netherlands
Posts: 1,832
Quote:
RAID10: Disadvantages
- Very expensive / high overhead
- All drives must move in parallel to the proper track, lowering sustained performance
- Very limited scalability at a very high inherent cost
Recommended applications:
- Database server requiring high performance and fault tolerance

RAID3: Characteristics & Advantages
- Very high read data transfer rate
- Very high write data transfer rate
- Disk failure has an insignificant impact on throughput
- Low ratio of ECC (parity) disks to data disks means high efficiency
Recommended applications:
- Video production and live streaming
- Image editing
- Video editing
- Prepress applications
- Any application requiring high throughput

So IMO RAID3 is much more attractive than RAID10.
June 16th, 2007, 05:09 AM | #6 |
Trustee
Join Date: Aug 2006
Location: Rotterdam, Netherlands
Posts: 1,832
Hernan,
The 4 SATA connectors on your mobo are no limitation, since you will need to get a SATA RAID controller anyway, and the controller has all the connectors on the card. Look, for instance, at the Areca ARC-1231ML card with 12 SATA connectors.
June 18th, 2007, 04:37 PM | #7 |
Regular Crew
Join Date: Feb 2007
Location: Munich, Germany
Posts: 135
Harm,
My mobo (Tyan Thunder S2895) includes a controller for RAID0, 1, 1+0 and 5. So:
1. Would you still recommend RAID3 (and thus buying an Areca controller of nearly 900 euros)?
2. If not, would the best scenario be RAID5 on my mobo's controller?
3. Are 3 disks enough? (I would keep the first one apart for the OS.)
4. Can they be disks of different capacity (the 2 of 250 GB that I have plus 1 of 750 GB that I would buy)?
5. Or, as some people suggested, should I also put the OS disk in the RAID and have 4 disks arrayed?
Thanks!
June 18th, 2007, 04:55 PM | #8 |
Regular Crew
Join Date: Feb 2007
Location: Munich, Germany
Posts: 135
Harm, and anyone interested in helping: could you please describe which RAID setup you are using for HDV and what your experience with it has been? Thanks!
June 18th, 2007, 05:19 PM | #9 |
Major Player
Join Date: Oct 2003
Location: Portland OR
Posts: 227
I had four 7200 rpm SATA discs on the Intel ICH7R controller on my mobo (Abit AW8-max) and write performance was a bit worse than a single 7200 rpm disc. I could still do 3 or so streams on PPro2+CineformAccessHD, but output renders never exceeded 70% CPU usage on my dual core 840EE chip (2x3.24GHz), indicating an I/O bottleneck. [Remember, Cineform makes bigger intermediate .avi files than native HDV, making it harder on the disc array and easier on the CPU.] I then added a pair of similar drives to the second RAID controller (Silicon Image w/ 2 ports) and striped them RAID0. Renders sped up, CPU usage hit 100%, and I had to kick my fan speeds up to keep temps in check!
June 18th, 2007, 07:06 PM | #10 |
Major Player
Join Date: Feb 2006
Location: Perth, Western Australia
Posts: 414
I'm only a baby when it comes to the post-production world, but there are a few things one should always build into the process. Always, always have a separate OS drive from the working drive. A good video pro buddy of mine has 3 computers working at any one time: one captures, one renders, and one he edits on.
I mention that last point not because it's what I'm suggesting, but just to give an idea of the different processes involved and their overheads. The other thing this friend does is keep a separate drive to save the project files to. This is what I'll be doing, because the last thing you want is everything on a paid job going kaput at the same time.
These external 2 TB RAID5 boxes are only about $2000 Australian, which means you'd probably only pay about $1000 in the States. This is a much better idea, as it means they have their own built-in processor to do the RAID calculations, and they have their own cooling and monitoring. If you put all those HDDs in a standard ATX box, you're going to have little airflow and a lot of heat!
Anyway, the dollar always helps to form our decisions. Does anybody know if JBOD is a better solution as well?
Good luck,
Adam
June 19th, 2007, 03:13 AM | #11 |
Trustee
Join Date: Aug 2006
Location: Rotterdam, Netherlands
Posts: 1,832
Hernan,
My experience with hard disks is that they either fail in the first three months of their life or after several years. If they survive the first 2 or 3 months, they are good and will likely last 4 or 5 years before starting to give trouble.
On one machine I have 6 disks: 1 for the OS, 1 as a duffelbag, 2 in RAID0 for media, 1 for audio and previews and 1 for projects. This is used for DV. On another machine I also have 6 disks: 1 for the OS and 5 in RAID5 on a CERC controller. The machine I am contemplating consists of 1 disk for the OS and 8 hot-swappable disks on the Areca ARC-1231ML (which leaves some room for future expansion) with 2 GB cache and BBM: 2 disks in RAID0 and 6 disks in RAID5, all in a full-blown server chassis with 6 fans and a good air duct.
The one thing I do not know is whether the claimed advantages of RAID3 will prove to be beneficial. I have asked some tweaker sites to include test results for RAID3/5/6 configurations on 4, 6, 8 and 12 disk arrays, so it will be easier to determine the best configuration.
RAID5 has helped me recover from failing disks, and that is the light in which you have to see my opening statement. I had intermittent problems with 2 disks after around 6 weeks in the RAID5 array, so I first exchanged 1 disk, rebuilt, exchanged the second disk, rebuilt again, and never lost any data.
Hope this helps.
June 19th, 2007, 06:29 AM | #12 |
Regular Crew
Join Date: Feb 2007
Location: Munich, Germany
Posts: 135
Harm,
Thanks again! OK, so you use both RAID0 and RAID5. Another question: is it mandatory that the disks in the RAID all have the same capacity, or can I buy a bigger disk than the 2 I already have for editing and make the biggest RAID possible with 3 disks (2 of 250 GB plus 1 new of 750 GB)?
June 19th, 2007, 11:04 AM | #13 |
Trustee
Join Date: Aug 2006
Location: Rotterdam, Netherlands
Posts: 1,832
Hernan,
The best thing is to use similar disks, or better yet identical disks, in a RAID. So in your case, if you want to use the two 250s you already have, add two more 250s for a 4-disk array.
A very rough guesstimate of the achievable transfer rate goes as follows, assuming RAID5, where you effectively lose 1 disk to the parity bits. If one disk has a sustained transfer rate of 60 MB/s, 4 disks would yield 4 x 60 = 240 MB/s. Deduct from that the effective loss of one disk for parity info and you have 3 x 60 = 180 MB/s. Deduct from that the overhead for the controller to handle the parity bits, say 20%, and you end up - again very roughly speaking - with sustained transfer rates of around 180 - 20% = 144 MB/s. Nearly 2.5 times faster than a single disk, with triple the storage capacity.
There is no sense in using two 250s and two 750s in the same RAID, because the extra room on the larger disks cannot be used, unless you go for a JBOD approach, but then you also lose the redundancy. My suggestion is to start out with four 250s, preferably in hot-swappable bays; then you can later get four 500s or even four 1000s, swap them, and you will have sufficient storage room, I think.
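That guesstimate written out as a tiny calculation. The 60 MB/s per disk and the 20% controller overhead are just the rough assumptions from the post above, not measured figures:

```python
def raid5_sustained_mbps(disks, mbps_per_disk, controller_overhead=0.20):
    """Very rough RAID5 throughput guess: lose one disk's bandwidth to parity,
    then shave off the controller's overhead for handling the parity bits."""
    usable = (disks - 1) * mbps_per_disk       # 3 x 60 = 180 MB/s
    return usable * (1 - controller_overhead)  # 180 - 20% = 144 MB/s

estimate = raid5_sustained_mbps(disks=4, mbps_per_disk=60)
print(estimate, estimate / 60)  # ~144 MB/s, about 2.4x a single 60 MB/s disk
```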