Comments:

From: edm 2007-01-28 12:10 am (UTC)
RAID5
If you want to performance-test configurations, why not define, eg, a 2GB or 10GB partition at the start of each disk, then build those into a RAID set? The rebuild will be much faster, and you'll get to do your RAID-set layout tests much more quickly. (Of course, if you want to fill the RAID set with 400GB of data this won't help -- but filling a disk with 400GB of data takes Some Time (tm) too. And of course the start of the disk, on the outer tracks, is faster to access than the end of it -- but if you care about this, try partitions defined at different points on the disk.)
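A rough sketch of that approach, assuming six disks with small (say 10GB) test partitions already created as sda1..sdf1 -- the device names here are just placeholders:

```
# Build a throwaway RAID-5 set out of the small test partitions;
# the initial resync of 6 x 10GB finishes in minutes rather than hours.
mdadm --create /dev/md0 --level=5 --raid-devices=6 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# Watch the (much shorter) resync progress
cat /proc/mdstat
```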
For RAID-5 I'm not sure that you can just zero the disks and do --assume-clean, because I'm not certain that the parity disk ends up with all-zeros on its blocks (I don't remember the parity algorithm used off the top of my head). Doing --assume-clean and then zeroing the whole RAID set should, in theory, work, but I can't see it being a whole lot faster than letting MD resync, as you're still doing RAID-5 parity calculations and writing to 6 disks at 45MB/s. The limiting factor here is the 45MB/s to each individual disk platter.
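The second variant would look roughly like this (device names made up; as above, this is the in-theory version):

```
# Create the array without the initial resync...
mdadm --create /dev/md0 --level=5 --raid-devices=6 --assume-clean \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# ...then zero the whole md device so the parity really is consistent.
# This still ends up writing to all six spindles, so it isn't much
# quicker than just letting md do its own resync.
dd if=/dev/zero of=/dev/md0 bs=1M
```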
Ewen
PS: I normally make my software RAID sets on, eg, 32GB or 64GB partitions, and then use something like LVM to join them together again. I do this precisely to keep the resync time for any given RAID set down to, eg, 1 hour.
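Very roughly, that layout is something like this (partition, array and VG names are just placeholders):

```
# Several smaller RAID-5 sets, one per group of ~64GB partitions
mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sd[a-f]1
mdadm --create /dev/md2 --level=5 --raid-devices=6 /dev/sd[a-f]2

# Glue them back together into one big volume with LVM
pvcreate /dev/md1 /dev/md2
vgcreate bigvg /dev/md1 /dev/md2
lvcreate -l 100%FREE -n data bigvg
```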
From: edm 2007-01-28 12:20 am (UTC)
Re: RAID5
Incidentally, ((460GB * 1024) / 45MBps) / 3600 = just under 3 hours. So the absolute best case for writing out an entire disk is about 3 hours. Thus 6 hours to write it all seems a bit long, but not unbelievable. As I said, there's a reason I do my RAID sets in smaller chunks than "whole disk".
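(ie, roughly:)

```
# 460GB at a sustained 45MB/s, converted to hours
echo 'scale=2; (460*1024/45)/3600' | bc    # ~2.9
```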
Ewen
From: brad 2007-01-28 01:40 am (UTC)
Re: RAID5
Yeah, I'd done my math wrong. My original 100 MB/s number was from a 4-stripe LV, so I'm probably not getting anywhere near 45 MB/s per disk -- more like 25 MB/s instead.
6 hours starts to make sense. :)
From: edm 2007-01-28 02:08 am (UTC)
Re: RAID5
I was basing the 45MB/s figure on the Blk_read and Blk_write figures in your output (about 45,000 per second per disk). And 45MB/s is definitely the right order of magnitude for a modern disk platter, which was part of why I took that figure without much extra consideration.
However the iostat man page suggests that the blocks reported are actually sectors in Linux 2.4 kernels and later, and thus are 512 bytes each. That works out to 22.5MB/s, which translates pretty directly into 6 hours to resync using the same calculation as before.
Although I'd be wondering why you're getting only 22.5MB/s off your disks; that seems a bit low for a modern disk that is SCSI- or even SATA-connected.
Ewen
From: brad 2007-01-28 02:20 am (UTC)
Re: RAID5
They're SATA with NCQ enabled. But the Linux rebuild code uses only "idle disk bandwidth", so I wonder if it's only submitting one I/O at a time to be nice. And small I/Os at that: 64kB stripe size. So lots of little 64kB I/Os, one at a time, isn't as good as submitting either huge ones or lots of small ones in parallel?
*shrug*
I'll do tests later on the raw devices.
From: edm 2007-01-28 02:40 am (UTC)
Re: RAID5
The "idle" bandwidth code will use up to all the disk bandwidth -- or the bandwidth specified in the dev.raid.speed_limit_max sysctl if it's lower (default maximum seems to be 200MB/s) -- assuming there is no other I/O going on. But any other I/O will take precedence. OTOH, as you say 64kB is somewhat suboptimal in terms of a read/write chunk on a modern disk, and the resync code may be only reading one stripe in at a time and writing it out limiting the throughput (and, eg, not getting any advantage of NCQ).
These days it's probably worth specifying a larger chunk size than the default 64kB; I suspect 128kB or 256kB matches modern disks' ability to stream into their on-disk cache a bit better. (Alas, this can only be specified when you create the array.)
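Roughly (array and device names made up; the sysctls are the standard md ones, with values in KB/s):

```
# Raise the md resync speed caps if resync is being throttled
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000

# Chunk size can only be set at creation time, eg 256kB:
mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=256 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
```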
Ewen
Recent SATA drives have 8 or 16MB caches. If the disks are otherwise idle, 64k transactions are more than enough to eat up all of the drive's bandwidth, as long as the latency between them is low and the write cache is enabled. If you have battery backup, or don't care about your data integrity, you can force the write cache to be enabled.
I just bought a SATA RAID card for my box. It must store in its configuration which blocks contain valid data, because it takes no time to initialize the array. For an extra $80 I bought a battery backup for the RAID card, and it has 128MB of write cache. For many datasets you're limited by PCI-X bandwidth rather than disk throughput. :)
read 64m:
10# dd if=/dev/ad0s1b of=/dev/null bs=64k count=1024
67108864 bytes transferred in 1.688534 secs (39743863 bytes/sec)

write 32m:
10# dd if=/dev/zero of=/dev/ad0s1b bs=64k count=512
33554432 bytes transferred in 0.662472 secs (50650339 bytes/sec)

write 64m:
10# dd if=/dev/zero of=/dev/ad0s1b bs=64k count=1024
67108864 bytes transferred in 1.498063 secs (44797095 bytes/sec)

write 128m:
10# dd if=/dev/zero of=/dev/ad0s1b bs=64k count=2048
134217728 bytes transferred in 3.607651 secs (37203634 bytes/sec)
These are the stats for the PATA drive in my laptop. It's 7200rpm and has the write cache enabled. This is doing real 64k transactions to the drive; I was writing at an offset about 512MB from the start of the disk on a mostly idle system.
Notice that as the size increases on writes, the throughput decreases. This is because the first writes are absorbed by the drive cache, and eventually we get down to the real write speed of the drive, which is apparently about 37MB/s. Not bad for a laptop.
The fastest rate I was able to get was about 66MB/s, which is about 50% of the theoretical bandwidth of a 32-bit 33MHz PCI bus; in practice such a bus probably really maxes out at about 70% of the theoretical figure. That's really not too bad, although I'd be curious to know why we didn't get that last 20%.
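(The bus arithmetic, for what it's worth:)

```
# 32-bit x 33MHz PCI theoretical peak, in MB/s
echo '32/8*33' | bc    # 132 (usually quoted as ~133MB/s)
```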
I'll have to check out my RAID array when I get home, although the performance from dd will be misleading since it sends a single I/O at a time to the device. For reads you'll see one drive's performance, but for writes we might be able to measure the bus speed.
From: ydna 2007-01-28 02:04 am (UTC)
Re: RAID5
I like your idea of keeping 32GB or 64GB partitions for the sake of faster resync times. But if you blow a 400+GB disk with six or seven 64GB partitions on it, don't you have to rebuild all the RAID sets that intersect with that drive anyway?
From: edm 2007-01-28 02:16 am (UTC)
Re: RAID5
Sure, you need to rebuild everything that touches the disk. But you can do it in sections (eg, if you shut down or reboot the box in the middle of the whole-disk rebuild you don't have to start from scratch), and if you have a failure on a group of sectors there's a good chance it's isolated to one of the RAID-set chunks, so the rest still run at full performance rather than degraded performance.
I was also encouraged to do this by one client system with a partly-supported Promise SATA controller which would crash from time to time, particularly under the load of resyncing. With this strategy, plus limiting the bandwidth used to resync, it usually survives a resync -- and where it doesn't, the resync can usually be done in stages.
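(The bandwidth limiting is just the standard md sysctl, eg:)

```
# Cap the resync rate (value in KB/s) so the flaky controller
# isn't driven flat out during a rebuild
sysctl -w dev.raid.speed_limit_max=10000
```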
Ewen
From: ydna 2007-01-28 02:21 am (UTC)
Re: RAID5
Ah, okay. I like the idea even more. Thanks, Ewen.
Yeah, Promise cards... the only promise I got from them was to corrupt my arrays. Heh.
From: loganb 2007-01-28 04:56 am (UTC)
Re: RAID5
When you say "you can do it in sections...", do you mean "you, the operator" or "you, the intelligent kernel"? By default will it try to rebuild all the RAID sets at once, or will it recognize that multiple ones intersect the same physical drive and only rebuild one at a time?
From: edm 2007-01-28 05:16 am (UTC)
Re: RAID5
It's automatic.
The kernel recognises which RAID sets use overlapping resources (eg, drives), and avoids rebuilding ones which require the same resource at the same time. However, it'll rebuild RAID sets using non-overlapping sets of resources in parallel. So, eg, if you have one RAID-1 set on sda1 and sdb1, and another on sdc1 and sdd1, they'll both be rebuilt in parallel. But if you have one RAID-5 set on sda5, sdb5, sdc5, and sdd5, and another on sda6, sdb6, sdc6, and sdd6, then only one of them will be rebuilt at a time (the later one waits until the earlier one on the same drives has finished).
The particularly useful thing from the point of view of rebooting is that if you notice it's finished two of the RAID sets and you need to reboot again (eg, for another power outage, or a hardware swap, or something), you can do so, and when the box comes back up it'll start on the RAID sets which remain rather than doing the first two again. If it's all in one gigantic RAID set, it starts again from the beginning when you reboot, which is rather painful if it takes, eg, 6 hours to resync it all with no other I/O on the system...
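(You can watch which sets are waiting with something like this -- /dev/md0 is just a placeholder:)

```
# Overall progress plus per-array state; arrays whose resync is
# waiting on a shared member show up as "resync=DELAYED"
cat /proc/mdstat

# Per-array detail, including resync status
mdadm --detail /dev/md0
```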
Ewen
Don't use RAID 5. A disk will fail and while it rebuilds a second disk will fail, too.
If you have more than 4-5 disks, use RAID6. If you have just 4 disks, stick with RAID10.
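(Roughly, with mdadm and made-up device names:)

```
# RAID-6 across six disks (survives any two failures)
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[a-f]1

# or RAID-10 across four disks
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[a-d]1
```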
I also second the suggestion of splitting the disks up into smaller chunks. Beware, though, that the Linux SATA (or SCSI, I forget) layer only likes ~15-16 partitions per disk. I've started making each of my RAID chunks 80-100GB; that way, when disks are 1TB+, I won't have to merge the old partitions.
- ask