SW RAID Quick reference

Create a RAID 1 volume from two drives.

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sde
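
Once created, /dev/md0 behaves like any other block device; a minimal sketch, assuming you want ext4 on it:

mkfs.ext4 /dev/md0
mount /dev/md0 /mnt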

Add RAID device 'md0' to /etc/mdadm.conf so that it is recognized on the next boot.

mdadm -Es | grep md0 >> /etc/mdadm.conf
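
On Debian-family systems the file lives at /etc/mdadm/mdadm.conf instead; an equivalent that emits ARRAY lines for all running arrays (append only the lines you want):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf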

View the status of a multi-disk array.

mdadm --detail /dev/md0

View the status of all multi-disk arrays.

cat /proc/mdstat

Note: add raid1 to /etc/initramfs-tools/modules and rebuild the initrd.
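
A minimal sketch of that, assuming a Debian-like system:

echo raid1 >> /etc/initramfs-tools/modules
update-initramfs -u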

Grow SW RAID

Situation: RAID 5 with 6 disks

Final: RAID 5 with 8 disks

  # mdadm --add /dev/md127 /dev/sdh1 
  mdadm: added /dev/sdh1
  # mdadm --add /dev/md127 /dev/sdj1 
  mdadm: added /dev/sdj1
  # mdadm --grow --raid-devices=8 /dev/md127 
  mdadm: Need to backup 1344K of critical section..
  mdadm: ... critical section passed.
  #
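
The reshape runs in the background; watch it in /proc/mdstat and, once it finishes, grow the filesystem to use the new space. A minimal sketch, assuming ext3/ext4 on /dev/md127:

  cat /proc/mdstat
  resize2fs /dev/md127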

Sometimes this comes in handy:

tags: 3ware tw_cli replace drive

echo "0 0 0" > /sys/bus/scsi/devices/0\:0\:4\:0/rescan

Frozen spare

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Feb  2 06:51:55 2013
     Raid Level : raid6
     Array Size : 11717889024 (11175.05 GiB 11999.12 GB)
  Used Dev Size : 1952981504 (1862.51 GiB 1999.85 GB)
   Raid Devices : 8
  Total Devices : 8
    Persistence : Superblock is persistent

    Update Time : Mon Jul 15 20:30:57 2013
          State : active, degraded 
 Active Devices : 7
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : tukan2:0  (local to host tukan2)
           UUID : bf36da8d:5009d151:4f3ee49d:92059128
         Events : 1535927

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       97        4      active sync   /dev/sdg1
       5       0        0        5      removed
       6       8      129        6      active sync   /dev/sdi1
       7       8      145        7      active sync   /dev/sdj1

       8       8      113        -      spare   /dev/sdh1
 # cat /sys/block/md0/md/sync_action 
 frozen
 echo repair >/sys/block/md0/md/sync_action
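
The repair kicks the spare into a rebuild; progress also shows up in /proc/mdstat:

 watch -n 5 cat /proc/mdstat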

Result:

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Feb  2 06:51:55 2013
     Raid Level : raid6
     Array Size : 11717889024 (11175.05 GiB 11999.12 GB)
  Used Dev Size : 1952981504 (1862.51 GiB 1999.85 GB)
   Raid Devices : 8
  Total Devices : 8
    Persistence : Superblock is persistent

    Update Time : Mon Jul 15 20:38:59 2013
          State : active, degraded, recovering 
 Active Devices : 7
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 0% complete

           Name : tukan2:0  (local to host tukan2)
           UUID : bf36da8d:5009d151:4f3ee49d:92059128
         Events : 1536481

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       97        4      active sync   /dev/sdg1
       8       8      113        5      spare rebuilding   /dev/sdh1
       6       8      129        6      active sync   /dev/sdi1
       7       8      145        7      active sync   /dev/sdj1

Speed up resync/rebuild

 # bigger stripe cache for RAID5/6 (RAM cost: entries * 4 KiB * number of disks)
 echo   6144 > /sys/block/md3/md/stripe_cache_size
 # per-device resync speed limits, in KB/s
 echo  40000 > /proc/sys/dev/raid/speed_limit_min
 echo 256000 > /proc/sys/dev/raid/speed_limit_max
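
To keep the speed limits across reboots, the same knobs are exposed as sysctls (standard names; stripe_cache_size has no sysctl equivalent and has to be reapplied at boot, e.g. from rc.local):

 echo "dev.raid.speed_limit_min = 40000"  >> /etc/sysctl.conf
 echo "dev.raid.speed_limit_max = 256000" >> /etc/sysctl.conf
 sysctl -p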

3ware Utility

3ware SMART

smartctl -d 3ware,1 -a /dev/twa0
smartctl -d 3ware,8 -a /dev/twa0 -T permissive
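
The number after "3ware," is the controller port the disk sits on; a small sketch to loop over ports 0-7 and print just the health verdict of each (the port range is an assumption):

for port in 0 1 2 3 4 5 6 7; do
  smartctl -d 3ware,$port -H /dev/twa0
done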

IBM ServeRAID Utility

ibm_utl_aacraid_9.10_linux_32-64.zip

HP Smart Array

HP DL360 G6 P410i: http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareIndex.jsp?lang=en&cc=us&prodNameId=3902575&prodTypeId=329290&prodSeriesId=3902574&swLang=8&taskId=135&swEnvOID=4004

On Debian you need to

 apt-get install ia32-libs

and install hpacucli-8.75-12.0.noarch.rpm. Example (commands run inside hpacucli):

  controller all show config
  ctrl slot=0 create type=ld drives=1I:1:3 raid=0
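
A few more hpacucli commands that tend to be useful (slot 0 is an assumption; list controllers with 'controller all show'):

  ctrl slot=0 show status
  ctrl slot=0 pd all show
  ctrl slot=0 ld all show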
 

RAID10 performance

  mdadm -v --create /dev/md0 --level=raid10 --layout=f2 --raid-devices=4 ...

The trick is "layout=f2". As the man page says: the layout options for RAID10 are one of 'n', 'o' or 'f' followed by a small number; the default is 'n2'. The supported options are:

  * 'n' signals 'near' copies. Multiple copies of one data block are at similar offsets in different devices.
  * 'o' signals 'offset' copies. Rather than the chunks being duplicated within a stripe, whole stripes are duplicated but are rotated by one device so duplicate blocks are on different devices. Thus subsequent copies of a block are in the next drive, and are one chunk further down.
  * 'f' signals 'far' copies (multiple copies have very different offsets). See md(4) for more detail about 'near', 'offset', and 'far'.

The number is the number of copies of each data block. 2 is normal, 3 can be useful. This number can be at most equal to the number of devices in the array. It does not need to divide evenly into that number (e.g. it is perfectly legal to have an 'n2' layout for an array with an odd number of devices).
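
After creation you can confirm which layout the array actually uses:

mdadm --detail /dev/md0 | grep -i layout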

Setting up RAID10 with tw_cli

Creating the RAID10 unit with disks in ports 8 to 17:

./tw_cli /c0 add type=raid10 disk=8-17 noautoverify

Removing a disk in port 14 via software:

./tw_cli maint remove c0 p14

Checking what it has done:

./tw_cli /c0/u0 show

"Adding" a disk back (really just making the controller detect it again):

./tw_cli maint rescan c0

Starting the verification process (if the unit was not previously initialized, it will be):

./tw_cli /c0/u0 start verify

If you need to check which disk is connected to which port, use this command; it lights the drive's identification LED:

./tw_cli /c0/p16 set identify=on

Add a disk back to the RAID:

./tw_cli /c0/u0 start rebuild disk=16
 

If the disk ended up in another unit (e.g. u1), just delete that unit:

./tw_cli /c0/u1 del

Problem: after physically removing a disk and reinstalling it, the RAID started re-initializing on its own without notifying anyone. Watch out for this.

 