Raid1 to Raid5

My new HDD has just arrived! 1 more terabyte to play with 😀
Now the fun part begins: here’s what I already have in my computer:
– 1 x 500GB for all my operating systems
– 2 x 1TB RAID1 for my documents
[cc lang="bash"]
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sdc1[0]
976759936 blocks [2/2] [UU]

unused devices: <none>
[/cc]

My documents partition is full, so I want to switch from 2 x 1TB RAID1 to 3 x 1TB RAID5.
I thought it would be easy, as I had read here and there that 4 commands could do the trick:

– stop the RAID1 /dev/md0 array: [cc lang="bash"]$ mdadm --stop /dev/md0[/cc]
– create the raid5 with only 2 disks in place of the old raid1: [cc lang="bash"]$ mdadm --create /dev/md0 -l5 -n2 /dev/sdb1 /dev/sdc1[/cc]
– add the third HDD: [cc lang="bash"]$ mdadm --add /dev/md0 /dev/sdd1[/cc]
– and use all the available space: [cc lang="bash"]$ mdadm --grow /dev/md0 -n3[/cc]

But! Because it wouldn’t have been fun if it were too easy, the first step didn’t work:
[cc lang="bash"]$ mdadm --stop /dev/md0
mdadm: fail to stop array /dev/md0: Device or resource busy[/cc]
Nothing I tried made it work (--force, a recovery root shell, runlevel 1, degrading the array, killing all my processes…).
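
In hindsight, a quick way to see what is actually holding the array busy (just a sketch, not commands from my original session):
[cc lang="bash"]
# anything stacked on top of md0 shows up as a "holder" in sysfs
ls /sys/block/md0/holders/

# the device-mapper view of the same stack
dmsetup ls --tree
[/cc]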

The reason, I guess (no, I’m sure), is that I created an LVM volume on that RAID array, and this is what prevents me from stopping it. So, considering this, I wanted to try something different; this was the plan:
1) degrade the raid1 array to keep the data on only 1 disk
2) create a raid5 array on the 2 other disks
3) move my lvm partition on that new array
4) delete the lvm physical volume from the old raid1
5) stop (finally) the old array
6) add the last disk to the raid5
7) grow everything to fit (physical volume, logical volume and the filesystem, which is reiserfs)
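
Before diving in, one precaution I would recommend (a sketch of the idea, not commands taken from my session): keep a copy of the current RAID and LVM metadata somewhere off these disks, just in case.
[cc lang="bash"]
# record the current mdadm superblocks and configuration
mdadm --examine /dev/sdb1 /dev/sdc1 > ~/md0-superblocks.txt
cp /etc/mdadm/mdadm.conf ~/mdadm.conf.bak

# save the LVM metadata of the vgdocs volume group
vgcfgbackup -f ~/vgdocs-metadata.bak vgdocs
[/cc]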

Let’s give it a try.
1) degrade the raid1 array to keep the data on only 1 disk
[cc lang="bash"]$ mdadm /dev/md0 --fail /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0
$ mdadm /dev/md0 -r /dev/sdc1
mdadm: hot removed /dev/sdc1
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1]
976759936 blocks [2/1] [_U]

unused devices: <none>
[/cc]
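
At this point the documents live on /dev/sdb1 alone, with no redundancy at all, so it is worth double-checking the degraded array before going any further (a quick sketch):
[cc lang="bash"]
# should report a clean but degraded RAID1 with /dev/sdb1 as its only active device
mdadm --detail /dev/md0
[/cc]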

2) create a raid5 array on the 2 other disks
[cc lang="bash"]$ mdadm --create /dev/md1 -l5 -n2 /dev/sdc1 /dev/sdd1
mdadm: /dev/sdc1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Fri May 8 20:34:01 2009
Continue creating array? y
mdadm: array /dev/md1 started.
[/cc]
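
mdadm normally starts syncing the new array right away; I would rather let it finish (or at least keep an eye on it) before piling a big pvmove on top of it. A sketch of how to watch it:
[cc lang="bash"]
# refresh the RAID status every 10 seconds until the resync is done
watch -n 10 cat /proc/mdstat
[/cc]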

3) move my lvm partition on that new array
The hard part!
3.1) initialize the partition on the raid5 array for use by LVM
[cc lang="bash"]$ pvcreate /dev/md1
Physical volume "/dev/md1" successfully created[/cc]
3.2) add the new physical volume to the docs volume group
[cc lang="bash"]$ vgextend vgdocs /dev/md1
Volume group "vgdocs" successfully extended
$ pvdisplay
--- Physical volume ---
PV Name /dev/md0
VG Name vgdocs
PV Size 931,51 GB / not usable 3,12 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 238466
Free PE 0
Allocated PE 238466
PV UUID rhBfur-zaKN-UBpx-C1lj-LZK0-6TRX-impxD2

--- Physical volume ---
PV Name /dev/md1
VG Name vgdocs
PV Size 931,51 GB / not usable 3,12 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 238466
Free PE 238466
Allocated PE 0
PV UUID rDkJvJ-xtnW-dAmQ-TtXR-311H-qayQ-7wxLEE[/cc]

3.3) Move the data from md0 to md1 (This step takes a long time.)
[cc lang="bash"]$ pvmove -v -i 10 /dev/md0 /dev/md1
Finding volume group "vgdocs"
Archiving volume group "vgdocs" metadata (seqno 5).
Creating logical volume pvmove0
Moving 238466 extents of logical volume vgdocs/lvdocs
Found volume group "vgdocs"
Updating volume group metadata
Creating volume group backup "/etc/lvm/backup/vgdocs" (seqno 6).
Found volume group "vgdocs"
Found volume group "vgdocs"
Suspending vgdocs-lvdocs (252:0) with device flush
Found volume group "vgdocs"
Creating vgdocs-pvmove0
Loading vgdocs-pvmove0 table
Resuming vgdocs-pvmove0 (252:1)
Found volume group "vgdocs"
Loading vgdocs-pvmove0 table
Suppressed vgdocs-pvmove0 identical table reload.
Loading vgdocs-lvdocs table
Resuming vgdocs-lvdocs (252:0)
Checking progress every 10 seconds
/dev/md0: Moved: 0,1%

/dev/md0: Moved: 100,0%
Found volume group "vgdocs"
Found volume group "vgdocs"
Loading vgdocs-lvdocs table
Suspending vgdocs-lvdocs (252:0) with device flush
Suspending vgdocs-pvmove0 (252:1) with device flush
Found volume group "vgdocs"
Found volume group "vgdocs"
Found volume group "vgdocs"
Resuming vgdocs-pvmove0 (252:1)
Found volume group "vgdocs"
Resuming vgdocs-lvdocs (252:0)
Found volume group "vgdocs"
Removing vgdocs-pvmove0 (252:1)
Found volume group "vgdocs"
Removing temporary pvmove LV
Writing out final volume group after pvmove
Creating volume group backup "/etc/lvm/backup/vgdocs" (seqno 8).

$ pvdisplay
--- Physical volume ---
PV Name /dev/md0
VG Name vgdocs
PV Size 931,51 GB / not usable 3,12 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 238466
Free PE 238466
Allocated PE 0
PV UUID rhBfur-zaKN-UBpx-C1lj-LZK0-6TRX-impxD2

--- Physical volume ---
PV Name /dev/md1
VG Name vgdocs
PV Size 931,51 GB / not usable 3,12 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 238466
Free PE 0
Allocated PE 238466
PV UUID rDkJvJ-xtnW-dAmQ-TtXR-311H-qayQ-7wxLEE

[/cc]
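
Good to know while this runs: if the move gets interrupted (reboot, Ctrl-C…), LVM remembers where it was. Running pvmove again with no argument resumes it, and the whole thing can also be rolled back. Just a sketch:
[cc lang="bash"]
# resume an interrupted move from where it left off
pvmove

# or abandon the move and put the extents back where they were
pvmove --abort
[/cc]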

3.4) Remove md0 physical volume from volume group
[cc lang="bash"]$ vgreduce vgdocs /dev/md0
Removed "/dev/md0" from volume group "vgdocs"
$ pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name vgdocs
PV Size 931,51 GB / not usable 3,12 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 238466
Free PE 0
Allocated PE 238466
PV UUID rDkJvJ-xtnW-dAmQ-TtXR-311H-qayQ-7wxLEE

"/dev/md0" is a new physical volume of "931,51 GB"
--- NEW Physical volume ---
PV Name /dev/md0
VG Name
PV Size 931,51 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID rhBfur-zaKN-UBpx-C1lj-LZK0-6TRX-impxD2
[/cc]

4) delete the lvm physical volume from the old raid1
[cc lang="bash"]$ pvremove /dev/md0
Labels on physical volume "/dev/md0" successfully wiped
[/cc]

5) stop (finally) the old array
[cc lang="bash"]$ mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[/cc]
And this is where I knew I was right from the beginning to do all this 😉
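
Before reusing the freed disk in the new array, it is probably wise to wipe its old RAID1 superblock so nothing ever tries to assemble the defunct md0 from it again (a sketch; mdadm would most likely overwrite it anyway when the disk is added):
[cc lang="bash"]
# erase the leftover RAID1 metadata on the disk freed from md0
mdadm --zero-superblock /dev/sdb1
[/cc]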

6) add the last disk to the raid5
[cc lang="bash"]$ mdadm --add /dev/md1 /dev/sdb1
mdadm: added /dev/sdb1
[/cc]
And don’t forget to tell mdadm to actually use it (and not keep it as a spare) =)
[cc lang="bash"]$ mdadm --grow /dev/md1 -n3
mdadm: Need to backup 128K of critical section..
mdadm: … critical section passed.

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sdd1[2] sdc1[1] sdb1[0]
976759936 blocks super 0.91 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
[>………………..] reshape = 0.1% (1151808/976759936) finish=1356.4min speed=11986K/sec

unused devices: <none>
[/cc]
This step takes a loooong time too.
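
If the estimated finish time looks depressing, the reshape speed can usually be nudged with the md speed limits in /proc (a sketch; the right values depend on how responsive you need the machine to stay in the meantime):
[cc lang="bash"]
# current limits, in KB/s per device
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max

# let the reshape use more bandwidth (values picked as an example)
echo 50000 > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max
[/cc]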

7) grow everything to fit (physical volume, logical volume and the filesystem, which is reiserfs)
physical volume:
[cc lang="bash"]
$ pvresize /dev/md1
Physical volume "/dev/md1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
$ pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name vgdocs
PV Size 1,82 TB / not usable 2,06 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 476933
Free PE 238467
Allocated PE 238466
PV UUID rDkJvJ-xtnW-dAmQ-TtXR-311H-qayQ-7wxLEE

$ vgdisplay
--- Volume group ---
VG Name vgdocs
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 10
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1,82 TB
PE Size 4,00 MB
Total PE 476933
Alloc PE / Size 238466 / 931,51 GB
Free PE / Size 238467 / 931,51 GB
VG UUID mcE5Hj-Y1Fh-fcsO-AbBu-XIa9-UGoT-LPA6o0
[/cc]

logical volume:
[cc lang="bash"]
$ lvresize -l 476933 /dev/vgdocs/lvdocs
Extending logical volume lvdocs to 1,82 TB
Logical volume lvdocs successfully resized

$ lvdisplay
--- Logical volume ---
LV Name /dev/vgdocs/lvdocs
VG Name vgdocs
LV UUID Go55Gv-50Rx-XCB1-7Yfx-CYVW-YTyu-duUHFU
LV Write Access read/write
LV Status available
# open 2
LV Size 1,82 TB
Current LE 476933
Segments 1
Allocation inherit
Read ahead sectors auto
– currently set to 256
Block device 252:0

[/cc]
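
Instead of copying the exact extent count out of vgdisplay, the relative syntax should do the same job and is harder to get wrong (a sketch, equivalent to the lvresize above):
[cc lang="bash"]
# grow the logical volume over all the remaining free extents of the volume group
lvresize -l +100%FREE /dev/vgdocs/lvdocs
[/cc]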

filesystem:
[cc lang="bash"]$ df -h /dev/mapper/vgdocs-lvdocs
/dev/mapper/vgdocs-lvdocs
932G 758G 175G 82% /media/docs

$ resize_reiserfs /dev/mapper/vgdocs-lvdocs
resize_reiserfs 3.6.19 (2003 www.namesys.com)

resize_reiserfs: On-line resizing finished successfully.

$ df -h /dev/mapper/vgdocs-lvdocs
/dev/mapper/vgdocs-lvdocs
1,9T 758G 1,1T 41% /media/docs

[/cc]
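
For the record, more recent LVM versions can grow the logical volume and the filesystem in one shot: lvresize -r hands the filesystem part over to fsadm, which knows about reiserfs. A sketch of what that would look like (not what I ran, and I have not checked how gracefully it handles the online case):
[cc lang="bash"]
# resize the LV and the filesystem sitting on it in a single step
lvresize -r -l +100%FREE /dev/vgdocs/lvdocs
[/cc]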

Done! 🙂
And everything was done online, on a mounted partition! 🙂

Oh, and I almost forgot! If I want my new array back the next time I reboot:
[cc lang="bash"]$ mdadm --detail --scan
ARRAY /dev/md1 level=raid5 num-devices=3 metadata=00.90 UUID=3ba98ff9:e832da41:ad30f554:42303922
[/cc]
I have to replace the old line in /etc/mdadm/mdadm.conf with this one!
[cc lang="bash"]
# old line : ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b4592a1b:290540de:ad30f554:42303922
# new line : ARRAY /dev/md1 level=raid5 num-devices=3 metadata=00.90 UUID=3ba98ff9:e832da41:ad30f554:42303922

$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
# ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b4592a1b:290540de:ad30f554:42303922
ARRAY /dev/md1 level=raid5 num-devices=3 metadata=00.90 UUID=3ba98ff9:e832da41:ad30f554:42303922
[/cc]
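
One more thing on a Debian-style setup like this one: the initramfs usually embeds a copy of mdadm.conf, so after editing the file I would also regenerate it, otherwise the array may not be assembled the way you expect at boot. A sketch:
[cc lang="bash"]
# rebuild the initramfs so it picks up the new ARRAY line
update-initramfs -u
[/cc]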
For convenience, I removed the “sudo” in front of most of the above commands, but of course you have to run them with root privileges.
