Resize a software RAID through mdadm
On Linux/Unix machines we often use software RAID to manage disks; it gives us easily manageable RAID options without any interaction with a data-center engineer. Today I had a situation where I needed to increase the size of a RAID1 partition online, while users were working on the data, so I resized the software RAID through the mdadm command.
In the present situation I have two disks in a software RAID1, which mirrors the disks onto each other; the virtual RAID device is mounted on a directory for further use. If either disk becomes faulty, we always have room to replace it without any data loss, because the second mirror disk still holds the data.
Now we need to grow the mounted partition that sits on this virtual RAID device. But since both disks are already in use, and there is no mechanism to grow them in place, we have to approach it another way.
In such a scenario we can detach one disk at a time, replace it with a larger one, add it back into the array, and wait for it to sync with the remaining disk; then we repeat this for the other disk, until every member has the larger size. During this task we must be very clear that we have to wait for a complete sync between the newly added disk and the remaining disks. We should also understand that if the RAID has no fault tolerance left, or is not consistent, removing a disk could cause data loss, so we should be careful whenever we remove a disk from any RAID.
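The steps above can be sketched as a small shell helper. This is a minimal sketch, not a definitive tool: the device names used in the usage comment (/dev/md0, /dev/vdb, /dev/vdd) are examples from this post, and it assumes the larger disk is already attached to the machine.

```shell
# Sketch of the per-disk swap loop described above.
swap_member() {
  md=$1; old=$2; new=$3
  # Fail and remove the old member -- only safe while the array is clean.
  mdadm "$md" --fail "$old" --remove "$old"
  # Add the larger disk ...
  mdadm -a "$md" "$new"
  # ... and block until the mirror has fully resynced before moving on.
  mdadm --wait "$md"
}
# Usage (as root), once per member:
#   swap_member /dev/md0 /dev/vdb /dev/vdd
```

`mdadm --wait` is the important part: it returns only once recovery has finished, which enforces the "wait for complete sync" rule before the second disk is touched.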
Let’s create a scenario in which we can understand this and see how to work through such a case.
OS – CentOS

[root@srv ~]# cat /etc/redhat-release
CentOS release 6.10 (Final)
mdadm version

[root@srv ~]# mdadm --version
mdadm - v3.3.4 - 3rd August 2015
For this setup I am using a kernel virtual machine (KVM) with the above specification. For the disks I am using the qemu-img command to create disk images and attach them to the KVM guest.
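For reference, the disk-creation step can look like the sketch below. The guest name, image path, and target device are assumptions for illustration; adjust them for your own hypervisor.

```shell
# Hypothetical helper: create a qcow2 image and hot-attach it to a KVM guest.
make_disk() {
  guest=$1; img=$2; size=$3; target=$4
  # Create a qcow2 image of the requested size ...
  qemu-img create -f qcow2 "$img" "$size"
  # ... and attach it to the guest so it appears as e.g. /dev/vdd inside.
  virsh attach-disk "$guest" "$img" "$target" --subdriver qcow2 --persistent
}
# Usage (as root on the hypervisor):
#   make_disk srv14 /var/lib/libvirt/images/srv14-vdd.qcow2 20G vdd
```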
So at present we have a RAID1 of two 10GB disks, mounted on a directory, which we will try to grow during this post.
Let’s see the present status of the RAID1 device md0 on the machine:
vdb     252:16   0  10G  0 disk
└─md0     9:0    0  10G  0 raid1 /data1
vdc     252:32   0  10G  0 disk
└─md0     9:0    0  10G  0 raid1 /data1
Detail of the raid:
[root@srv14 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Dec  5 05:10:16 2020
     Raid Level : raid1
     Array Size : 10479424 (9.99 GiB 10.73 GB)
  Used Dev Size : 10479424 (9.99 GiB 10.73 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Dec  5 05:12:19 2020
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : srv.geekpills.com:0  (local to host srv.geekpills.com)
           UUID : d769a30c:f8d26ff5:7a6d238b:5be89bbb
         Events : 18

    Number   Major   Minor   RaidDevice State
       0     252       16        0      active sync   /dev/vdb
       1     252       32        1      active sync   /dev/vdc
With the above details we now know we have two 10GB disks forming /dev/md0, which is mounted on /data1.
Let’s copy some data onto this mount point and use it for a few days, so we can then grow it online while the data is in use by an application. Having used the mount point for a while, we are now going to grow it by resizing the RAID1 as well.
First we need to swap the disks in the RAID1: one by one, each disk is swapped with a new, larger disk that will be used from here on.
In the above RAID1 we have /dev/vdb and /dev/vdc, both 10GB; to increase the RAID size we have to swap both disks for larger ones.
To swap a disk, we first declare it faulty and then remove it. Before declaring it faulty and removing it, please be sure the array still has a proper redundant disk.
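A quick guard for this check, assuming a two-disk mirror: /proc/mdstat shows [UU] only when both members are in sync. The function reads its text on stdin so it can be exercised without a real array.

```shell
# Return success only when the mirror status line shows both members up.
raid_clean() {
  grep -q '\[UU\]'
}
# Usage:
#   raid_clean < /proc/mdstat && mdadm /dev/md0 --fail /dev/vdb --remove /dev/vdb
```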
[root@srv14 ~]# mdadm /dev/md0 --fail /dev/vdb --remove /dev/vdb
md/raid1:md0: Disk failure on vdb, disabling device.
md/raid1:md0: Operation continuing on 1 devices.
mdadm: set /dev/vdb faulty in /dev/md0
md: unbind<vdb>
md: export_rdev(vdb)
mdadm: hot removed /dev/vdb from /dev/md0
Now we have only one disk in the RAID /dev/md0; below is its status.
[root@srv14 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Dec  5 05:10:16 2020
     Raid Level : raid1
     Array Size : 10479424 (9.99 GiB 10.73 GB)
  Used Dev Size : 10479424 (9.99 GiB 10.73 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sat Dec  5 06:18:46 2020
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : srv.geekpills.com:0
           UUID : d769a30c:f8d26ff5:7a6d238b:5be89bbb
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1     252       32        1      active sync   /dev/vdc
Add the increased-size disk
Now we can add a 20GB disk to the same RAID, as below.
[root@srv14 ~]# mdadm -a /dev/md0 /dev/vdd
md: bind<vdd>
mdadm: added /dev/vdd
md: recovery of RAID array md0
md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 10479424k.
In this way we can add the larger disk to the RAID. We can also watch the sync progress in /proc/mdstat.
One of the disks has been swapped for a larger one:

vdc     252:32   0  10G  0 disk
└─md0     9:0    0  10G  0 raid1 /data1
vdd     252:48   0  20G  0 disk
└─md0     9:0    0  10G  0 raid1 /data1

The raid is still in sync status:

[root@srv14 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 vdd vdc
      10479424 blocks super 1.2 [2/1] [_U]
      [=>...................]  recovery =  5.5% (584320/10479424) finish=1.6min speed=97386K/sec
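To script the waiting instead of eyeballing /proc/mdstat, one option (a sketch, assuming the mdstat format shown above) is to poll until no recovery or resync line remains. The function reads its text on stdin so it can be tested without a real array; in practice `mdadm --wait /dev/md0` does the same blocking for you.

```shell
# Succeed once the mdstat text no longer shows a recovery/resync in progress.
sync_done() {
  ! grep -Eq 'recovery|resync'
}
# Usage:
#   until sync_done < /proc/mdstat; do sleep 10; done
# or simply:
#   mdadm --wait /dev/md0
```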
We have to wait till the sync is complete, then repeat the same steps with the other disk, so that both disks end up with the larger size, like below.
vdd     252:48   0  20G  0 disk
└─md0     9:0    0  10G  0 raid1 /data1
vde     252:64   0  20G  0 disk
└─md0     9:0    0  10G  0 raid1 /data1
Both disks have been swapped, but the RAID size is still the same; now we have to grow it.
[root@srv14 ~]# mdadm --detail /dev/md0 | grep Array
     Array Size : 10479424 (9.99 GiB 10.73 GB)
Increase the raid size
We have already swapped the disks for larger ones, but the extra capacity has yet to take effect on the RAID itself. For that we do the following.
[root@srv14 ~]# mdadm --detail /dev/md0 | grep Array
     Array Size : 10479424 (9.99 GiB 10.73 GB)
[root@srv14 ~]# mdadm --grow /dev/md0 -z max
md0: detected capacity change from 10730930176 to 21469986816
VFS: busy inodes on changed media or resized disk md0
mdadm: component size of /dev/md0 has been set to 20966784K
md: resync of RAID array md0
md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
md: using 128k window, over a total of 20966784k.
md: resuming resync of md0 from checkpoint.
[root@srv14 ~]# mdadm --detail /dev/md0 | grep Array
     Array Size : 20966784 (20.00 GiB 21.47 GB)
Increase the file-system size
As above, we have grown the RAID size, but the mount point size is still the same. To make the change take effect on the in-use mount point, which is already mounted on a directory, we have to resize the file system as well, like below.
[root@srv14 ~]# df -hTP /data1
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       ext4  9.8G  1.5G  7.8G  17% /data1
[root@srv14 ~]# resize2fs /dev/md0
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/md0 is mounted on /data1; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/md0 to 5241696 (4k) blocks.
The filesystem on /dev/md0 is now 5241696 blocks long.
[root@srv14 ~]# df -hTP /data1
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       ext4   20G  1.5G   18G   8% /data1
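Note that resize2fs is specific to ext2/3/4; other file systems have their own grow tools. A small dispatch sketch (the function name `grow_fs` is mine, not part of any tool):

```shell
# Grow an already-mounted filesystem after the underlying device has grown.
grow_fs() {
  dev=$1; mnt=$2; fstype=$3
  case "$fstype" in
    ext2|ext3|ext4) resize2fs "$dev" ;;    # ext* resizes by device node
    xfs)            xfs_growfs "$mnt" ;;   # xfs resizes by mount point
    *) echo "grow_fs: unsupported filesystem: $fstype" >&2; return 1 ;;
  esac
}
# Usage for this post's setup:
#   grow_fs /dev/md0 /data1 ext4
```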
In the above output we can see the grown file system mounted on the directory, so users can now use the new, larger capacity of the md0 RAID.
With this post, we now know that we can grow a software RAID online while users keep working on it.