Thursday, March 27, 2014

Adding an HDD to an Active RAID 5

I've done this twice: first growing a 4-disk RAID 5 to 5 disks, then 5 to 6. The array was online and mounted the entire time. Here is my log from the 5-to-6 grow.

The mdadm --grow took about 10-12 hours, and the resize2fs took a bit under 30 minutes. While the array is reshaping, you can run watch -n3 cat /proc/mdstat to follow its progress, and while resize2fs is running, watch -n3 df shows the filesystem growing. I should also mention that while all this was happening, I downloaded a few gigabytes of data and was heavily reading from the RAID the entire time (seeding 500+ torrents with about 5-10 of them active at any given point, and watching shows/movies from it).
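
For reference, the two monitoring commands from above, as you'd run them in a second terminal:

# Follow the reshape progress, refreshing every 3 seconds
watch -n3 cat /proc/mdstat

# Follow the filesystem growth while resize2fs runs
watch -n3 df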

root@lanfear:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid5 sdd1[0] sde1[4] sdb1[3] sdc1[2] sda1[1]
1953535744 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md0 : active raid1 sdg1[0] sdh1[1]
243167744 blocks [2/2] [UU]

unused devices: <none>

root@lanfear:~# mdadm --add /dev/md1 /dev/sdf1
mdadm: added /dev/sdf1
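
Not shown in this log: /dev/sdf already had its partition created beforehand. If your new disk is still blank, one way to create a matching partition is to copy the table from an existing member. A minimal sketch, assuming /dev/sde is a current member and /dev/sdf is the new, empty disk (double-check the device names before running this):

# Dump the partition table of an existing member and write it to the new disk
sfdisk -d /dev/sde | sfdisk /dev/sdf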

root@lanfear:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid5 sdf1[5](S) sdd1[0] sde1[4] sdb1[3] sdc1[2] sda1[1]
1953535744 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md0 : active raid1 sdg1[0] sdh1[1]
243167744 blocks [2/2] [UU]

unused devices: <none>

root@lanfear:~# mdadm --detail /dev/md1
/dev/md1:
Version : 00.90
Creation Time : Fri Oct 31 23:10:19 2008
Raid Level : raid5
Array Size : 1953535744 (1863.04 GiB 2000.42 GB)
Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
Raid Devices : 5
Total Devices : 6
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Sat Nov 8 16:49:59 2008
State : clean
Active Devices : 5
Working Devices : 6
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 64K

UUID : f6e524b8:284a0034:0f20f41d:3c651377 (local to host lanfear)
Events : 0.324790

Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 1 1 active sync /dev/sda1
2 8 33 2 active sync /dev/sdc1
3 8 17 3 active sync /dev/sdb1
4 8 65 4 active sync /dev/sde1

5 8 81 - spare /dev/sdf1
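
Note that the new disk just sits there as a spare (the (S) flag in mdstat above); it won't hold any data until the grow command below tells the array to reshape onto it.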

root@lanfear:~# mdadm --grow /dev/md1 --raid-devices=6
mdadm: Need to backup 1280K of critical section..
mdadm: ... critical section passed.
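
A side note, assuming a reasonably recent mdadm: --grow accepts a --backup-file option that keeps a copy of the critical section on a separate device, so the reshape can survive a crash or power loss partway through. I didn't use one here, but it would look something like this (the backup path is just an example; it must NOT live on the array being reshaped):

# Same grow, but with the critical-section backup kept off-array
mdadm --grow /dev/md1 --raid-devices=6 --backup-file=/root/md1-grow.backup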

root@lanfear:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid5 sdf1[5] sdd1[0] sde1[4] sdb1[3] sdc1[2] sda1[1]
1953535744 blocks super 0.91 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] reshape = 0.0% (81280/488383936) finish=800.8min speed=10160K/sec

md0 : active raid1 sdg1[0] sdh1[1]
243167744 blocks [2/2] [UU]

unused devices: <none>

root@lanfear:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid5 sdf1[5] sdd1[0] sde1[4] sdb1[3] sdc1[2] sda1[1]
1953535744 blocks super 0.91 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
[===================>.] reshape = 99.9% (488369920/488383936) finish=0.0min speed=19503K/sec

md0 : active raid1 sdg1[0] sdh1[1]
243167744 blocks [2/2] [UU]

unused devices: <none>

root@lanfear:~# resize2fs /dev/md1
resize2fs 1.41.3 (12-Oct-2008)
Filesystem at /dev/md1 is mounted on /media/save; on-line resizing required
old desc_blocks = 117, new_desc_blocks = 146
Performing an on-line resize of /dev/md1 to 610479920 (4k) blocks.
The filesystem on /dev/md1 is now 610479920 blocks long.
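
One follow-up worth considering, assuming your distro keeps an mdadm config file (commonly /etc/mdadm/mdadm.conf or /etc/mdadm.conf): if that file pins num-devices for the array, it should be refreshed to match the new 6-disk layout. You can print the live array definition and compare it against the existing ARRAY entry:

# Print the current array definition to compare with mdadm.conf
mdadm --detail --scan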

root@lanfear:~# df | awk /md1/
/dev/md1 2403601996 1117790744 1285811252 47% /media/save
