Brad Fitzpatrick (brad) wrote,

RAID-5 misc

I never use RAID-5, so I'd never noticed this before:
   -f, --force
        Insist  that  mdadm  accept  the  geometry and layout
        specified without question.  Normally mdadm will  not
        allow  creation of an array with only one device, and
        will try to create a raid5  array  with  one  missing
        drive (as this makes the initial resync work faster).
        With --force, mdadm will not try to be so clever.
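
For reference, the create command was along these lines (reconstructed from the --detail output below, so treat the exact options as approximate):

# mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sd[d-h]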

And indeed, when I created the array with 5 disks, it marked one as a spare:
# mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.03
  Creation Time : Sat Jan 27 13:30:36 2007
     Raid Level : raid5
     Array Size : 1953545984 (1863.05 GiB 2000.43 GB)
    Device Size : 488386496 (465.76 GiB 500.11 GB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sat Jan 27 13:52:08 2007
          State : clean, degraded, recovering
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 27% complete

           UUID : 5ad3ba82:30b256f3:c70f55c8:1f40abbd
         Events : 0.194

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       3       8       96        3      active sync   /dev/sdg
       5       8      112        4      spare rebuilding   /dev/sdh

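The rebuild progress also shows up in /proc/mdstat, if you'd rather watch it tick:

# watch cat /proc/mdstat
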
And you can see that 4 disks are reading, and 1 is writing:
Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
...
sdd             198.00     45824.00         0.00      45824          0
sde             198.00     45824.00         0.00      45824          0
sdf             200.00     45824.00         0.00      45824          0
sdg             203.00     46080.00         0.00      46080          0
sdh             203.00         0.00     46080.00          0      46080
...

Neat!
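
For the record, that's iostat from sysstat, sampled while the resync churns, something like:

# iostat -d 5

The Blk_* columns are 512-byte blocks, so ~46000 blocks/s works out to about 23 MB/s per disk.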

It makes sense why it's done this way: 4 disks doing pure sequential reads and 1 doing pure sequential writes keeps every drive streaming, which is faster than all 5 disks alternating between reads and writes the way a straight resync would.

But really? 6 hours?
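
Actually, the math checks out against those iostat numbers: 23 MB/s and change per disk means a 488386496 KiB member takes about 21,000 seconds to cover, which is just under 6 hours. So the resync is going as fast as the observed per-disk rate allows; the question is why that rate is only 23 MB/s (worth a look at /proc/sys/dev/raid/speed_limit_max, which caps resync speed).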

I'd prefer an option where all the disks are zeroed first and the initial resync is skipped entirely. Yes, the array wouldn't be immediately usable the way it is with the kernel doing the background sync for me, but I think I could zero a 460 GB disk quicker than 6 hours (based on the 100 MB/s filesystem writes I saw, and assuming I can do even better to a raw block device, it should be about an hour). But I can't see how... --assume-clean may be what I'm looking for? Do I just zero all the devices myself first, then re-create the array with that flag?
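
If that works, it would look something like this (untested sketch; the reason it would be safe is that all-zero data blocks produce all-zero parity, so a fully zeroed array really is consistent):

# wipe every member disk in parallel -- destroys everything on them;
# each dd exits with "no space left" when it hits the end of its disk
for d in /dev/sd[d-h]; do
    dd if=/dev/zero of=$d bs=1M &
done
wait

# then create the array, telling mdadm the disks are already in sync
mdadm --create /dev/md1 --level=5 --raid-devices=5 --assume-clean /dev/sd[d-h]

The parallel dd should finish in roughly the hour estimated above, since each disk just gets one pure sequential write stream.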

I wouldn't normally mind, but I want to performance-test several configurations, and 6-hour waits seriously kill my flow. :)
Tags: linux, tech