**Note: add raid1 to /etc/initramfs-tools/modules and rebuild initrd**
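A minimal sketch of doing this on a Debian-style system (adjust to your initramfs tooling):
<code>
echo raid1 >> /etc/initramfs-tools/modules
update-initramfs -u
</code>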
  
====== Grow sw raid ======
Situation: RAID 5 with 6 disks

Goal: RAID 5 with 8 disks

<code>
  # mdadm --add /dev/md127 /dev/sdh1
  mdadm: added /dev/sdh1
  # mdadm --add /dev/md127 /dev/sdj1
  mdadm: added /dev/sdj1
  # mdadm --grow --raid-devices=8 /dev/md127
  mdadm: Need to backup 1344K of critical section..
  mdadm: ... critical section passed.
  #
</code>
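The reshape then runs in the background, and the new space only becomes usable once the filesystem is grown as well. A minimal sketch, assuming an ext3/ext4 filesystem sits directly on /dev/md127:
<code>
cat /proc/mdstat       # watch the reshape progress
resize2fs /dev/md127   # after the reshape finishes, grow the filesystem
</code>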

**Sometimes comes in handy:**

tags: 3ware tw_cli replace drive

<code>
echo "0 0 0" > /sys/bus/scsi/devices/0\:0\:4\:0/rescan
</code>
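After the rescan it is worth checking that both the kernel and the controller see the drive again; for example (assuming the controller is /c0):
<code>
lsscsi              # the replaced drive should be listed again
./tw_cli /c0 show   # port and unit status as reported by the 3ware controller
</code>
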
===== Links =====
  * [[http://www.debian-administration.org/articles/238|Migrating To RAID1 Mirror on Sarge]]
  * [[http://lucasmanual.com/mywiki/DebianRAID|Debian RAID]]

====== Frozen spare ======
<code>
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Feb  2 06:51:55 2013
     Raid Level : raid6
     Array Size : 11717889024 (11175.05 GiB 11999.12 GB)
  Used Dev Size : 1952981504 (1862.51 GiB 1999.85 GB)
   Raid Devices : 8
  Total Devices : 8
    Persistence : Superblock is persistent

    Update Time : Mon Jul 15 20:30:57 2013
          State : active, degraded
 Active Devices : 7
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : tukan2:0  (local to host tukan2)
           UUID : bf36da8d:5009d151:4f3ee49d:92059128
         Events : 1535927

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       97        4      active sync   /dev/sdg1
       5       0        0        5      removed
       6       8      129        6      active sync   /dev/sdi1
       7       8      145        7      active sync   /dev/sdj1

       8       8      113        -      spare   /dev/sdh1
</code>
   # cat /sys/block/md0/md/sync_action
   frozen

   echo repair > /sys/block/md0/md/sync_action

**Result:**
<code>
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Feb  2 06:51:55 2013
     Raid Level : raid6
     Array Size : 11717889024 (11175.05 GiB 11999.12 GB)
  Used Dev Size : 1952981504 (1862.51 GiB 1999.85 GB)
   Raid Devices : 8
  Total Devices : 8
    Persistence : Superblock is persistent

    Update Time : Mon Jul 15 20:38:59 2013
          State : active, degraded, recovering
 Active Devices : 7
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 0% complete

           Name : tukan2:0  (local to host tukan2)
           UUID : bf36da8d:5009d151:4f3ee49d:92059128
         Events : 1536481

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       97        4      active sync   /dev/sdg1
       8       8      113        5      spare rebuilding   /dev/sdh1
       6       8      129        6      active sync   /dev/sdi1
       7       8      145        7      active sync   /dev/sdj1
</code>
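The rebuild can then be followed from /proc/mdstat, for example:
<code>
watch -n 30 cat /proc/mdstat
cat /sys/block/md0/md/sync_action   # should no longer report "frozen"
</code>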

====== Speed Up ======
<code>
echo   6144 > /sys/block/md3/md/stripe_cache_size
echo  40000 > /proc/sys/dev/raid/speed_limit_min
echo 256000 > /proc/sys/dev/raid/speed_limit_max
</code>
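These values do not survive a reboot. The speed limits can be persisted with sysctl; ''stripe_cache_size'' is per-array and sysfs-only, so it has to be reapplied at boot (e.g. from rc.local). A sketch, assuming /etc/sysctl.conf is used:
<code>
# /etc/sysctl.conf
dev.raid.speed_limit_min = 40000
dev.raid.speed_limit_max = 256000
</code>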
  
====== 3ware Utility ======
<code>
  ctrl slot=0 create type=ld drives=1I:1:3 raid=0
</code>

====== RAID10 performance ======
    mdadm -v --create /dev/md0 --level=raid10 --layout=f2 --raid-devices=4 ...
The trick is ''layout=f2''; as the man page says:
<code>
Finally, the layout options for RAID10 are one of 'n', 'o' or 'f' followed by a small number.  The default is 'n2'.  The supported options are:

'n' signals 'near' copies.  Multiple copies of one data block are at similar offsets in different devices.

'o' signals 'offset' copies.  Rather than the chunks being duplicated within a stripe, whole stripes are duplicated but are rotated by one device so duplicate blocks are on different devices.  Thus subsequent copies of a block are in the next drive, and are one chunk further down.

'f' signals 'far' copies (multiple copies have very different offsets).  See md(4) for more detail about 'near', 'offset', and 'far'.

The number is the number of copies of each datablock.  2 is normal, 3 can be useful.  This number can be at most equal to the number of devices in the array.  It does not need to divide evenly into that number (e.g. it is perfectly legal to have an 'n2' layout for an array with an odd number of devices).
</code>
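After creating the array, the chosen layout can be double-checked and a rough sequential read figure taken:
<code>
mdadm --detail /dev/md0 | grep Layout   # should report far=2
hdparm -t /dev/md0                      # quick sequential read test
</code>
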
====== Setting up RAID10 with tw_cli ======
Creating the RAID10 array from the disks in ports 8 through 17:
  ./tw_cli /c0 add type=raid10 disk=8-17 noautoverify
Removing the disk in port 14 in software:
  ./tw_cli maint remove c0 p14
Checking what that did:
  ./tw_cli /c0/u0 show
"Adding" a disk back (really just making the controller detect it again):
  ./tw_cli maint rescan c0
Starting verification (if the unit was not previously initialized, this also initializes it):
  ./tw_cli /c0/u0 start verify
To check which physical disk is connected to which port, this command lights the drive's identify LED:
  ./tw_cli /c0/p16 set identify=on
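The LED can presumably be switched off the same way once the disk has been located:
  ./tw_cli /c0/p16 set identify=off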

Adding the disk back into the RAID:
   tw_cli /c0/u0 start rebuild disk=16

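Rebuild progress then shows up in the unit status (a quick check, assuming the unit is /c0/u0):
   ./tw_cli /c0/u0 show
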
If the disk ended up in another unit (e.g. u1), just delete that unit:
  tw_cli /c0/u1 del

Problem: after a disk was physically removed and reinstalled, the RAID started re-initializing on its own, without any notification. Watch out for this.

 