In this post we will see how to increase a disk partition, a RAID array, and an LV in Linux. We will use a RAID 1 array on the latest CentOS 6 (as of January 2019) to walk through the procedure.

Setup

Software versions used for this setup:

[root@fileserver1 ~]# cat /etc/redhat-release 
CentOS release 6.10 (Final)

[root@fileserver1 ~]# uname -r
2.6.32-754.9.1.el6.x86_64

[root@fileserver1 ~]# mdadm --version
mdadm - v3.3.4 - 3rd August 2015

[root@fileserver1 data]# lvs --version
  LVM version:     2.02.143(2)-RHEL6 (2016-12-13)
  Library version: 1.02.117-RHEL6 (2016-12-13)
  Driver version:  4.33.1

[root@fileserver1 ~]# fdisk -v
fdisk (util-linux-ng 2.17.2)

For this setup we are using two disks on a virtual machine, laid out as below.

Disk 1 -- vdb \
               >  md0 (RAID1)  ->  PV (md0)  ->  VG (vg1)  ->  LV (lv1)  ->  Mount point (/data)
Disk 2 -- vdc /

Each disk is 10 GB, and they are used in the following way:

  • 1. Two 10 GB disks (vdb and vdc)
  • 2. Create a 5 GB partition on each disk: vdb1 and vdc1
  • 3. Use only those 5 GB partitions to create a RAID 1 array named md0
  • 4. Create the PV, VG, LV and mount point on top of that same 5 GB RAID
  • 5. Grow everything to 10 GB; the process (a command summary follows this list) is:
    • Unmount the mount point
    • Deactivate the VG
    • Stop the RAID and enlarge the partitions
    • Grow the RAID array
    • pvresize the PV
    • Activate the VG
    • lvextend the LV
    • Mount the LV and resize the filesystem
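
For quick reference, here is a condensed sketch of the commands used through the rest of this post (device, VG, LV and mount-point names are the ones from this setup):

umount /data
vgchange -a n vg1
mdadm --stop /dev/md0
fdisk -cu /dev/vdb                   # delete vdb1, recreate it at full size; repeat for /dev/vdc
mdadm --grow /dev/md0 --size=max
pvresize /dev/md0
vgchange -a y vg1
lvextend -l +100%FREE /dev/vg1/lv1
mount -a
resize2fs /dev/vg1/lv1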

So, as described, we have two disks with a 5 GB partition each, used by an LV on top of RAID 1; see the lsblk output below.

[root@fileserver1 ~]# lsblk 
NAME                        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
vda                         252:0    0   20G  0 disk  
├─vda1                      252:1    0  500M  0 part  /boot
└─vda2                      252:2    0 19.5G  0 part  
  ├─VolGroup-lv_root (dm-0) 253:0    0 17.6G  0 lvm   /
  └─VolGroup-lv_swap (dm-1) 253:1    0    2G  0 lvm   [SWAP]
vdb                         252:16   0   10G  0 disk  
└─vdb1                      252:17   0    5G  0 part  
  └─md0                       9:0    0    5G  0 raid1 
    └─vg1-lv1 (dm-2)        253:2    0    5G  0 lvm   /data
vdc                         252:32   0   10G  0 disk  
└─vdc1                      252:33   0    5G  0 part  
  └─md0                       9:0    0    5G  0 raid1 
    └─vg1-lv1 (dm-2)        253:2    0    5G  0 lvm   /data

So we have 10 GB disks, but we have only used 5 GB of each and built everything on top of that. Now we need to grow everything to the maximum space available on the disks.

Just to create some data that we can verify after finishing, I filled the mount point with 1 MB files using a while loop.

count=1;while true; do dd if=/dev/zero of=$count bs=1M count=1; count=$((count+1)); done

This small loop keeps writing 1 MB files into the current directory until the filesystem is full (interrupt it with Ctrl+C once dd starts failing).

[root@fileserver1 data]# du -sh .
4.8G	.
[root@fileserver1 data]# ls -lhtr | tail -5
-rw-r--r-- 1 root root 1.0M Jan  4 00:22 4874
-rw-r--r-- 1 root root 1.0M Jan  4 00:22 4875
-rw-r--r-- 1 root root 1.0M Jan  4 00:22 4876
-rw-r--r-- 1 root root 1.0M Jan  4 00:22 4877
-rw-r--r-- 1 root root 568K Jan  4 00:22 4878
[root@fileserver1 data]# ls -l | wc -l
4880

Now we can start the activity. First we need to unmount the filesystem from its mount point; afterwards the lsblk output looks like below.
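
The unmount command itself is not captured in the output; a minimal sketch of that step, using the /data mount point from this setup, would be:

fuser -vm /data     # optional check: list any processes still using /data
umount /data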

[root@fileserver1 ~]# lsblk | tail -8
vdb                         252:16   0   10G  0 disk  
└─vdb1                      252:17   0    5G  0 part  
  └─md0                       9:0    0    5G  0 raid1 
    └─vg1-lv1 (dm-2)        253:2    0    5G  0 lvm   
vdc                         252:32   0   10G  0 disk  
└─vdc1                      252:33   0    5G  0 part  
  └─md0                       9:0    0    5G  0 raid1 
    └─vg1-lv1 (dm-2)        253:2    0    5G  0 lvm   

Now we need to deactivate the VG with the vgchange command; this changes the lsblk output as shown below.

[root@fileserver1 ~]# vgchange -a n vg1
  0 logical volume(s) in volume group "vg1" now active
[root@fileserver1 ~]# lsblk | tail -6
vdb                         252:16   0   10G  0 disk  
└─vdb1                      252:17   0    5G  0 part  
  └─md0                       9:0    0    5G  0 raid1 
vdc                         252:32   0   10G  0 disk  
└─vdc1                      252:33   0    5G  0 part  
  └─md0                       9:0    0    5G  0 raid1

Now we need to stop the RAID array with the mdadm command, like below.

[root@fileserver1 ~]# mdadm --stop /dev/md0
md0: detected capacity change from 5365235712 to 0
md: md0 stopped.
md: unbind<vdc1>
md: export_rdev(vdc1)
md: unbind<vdb1>
md: export_rdev(vdb1)
mdadm: stopped /dev/md0

[root@fileserver1 ~]# lsblk | tail -4
vdb                         252:16   0   10G  0 disk 
└─vdb1                      252:17   0    5G  0 part 
vdc                         252:32   0   10G  0 disk 
└─vdc1                      252:33   0    5G  0 part 

The partitions are the next thing to work on: we need to enlarge the partitions on both disks with the fdisk command, like below.

[root@fileserver1 ~]# fdisk -cu /dev/vdb

Command (m for help): p

Disk /dev/vdb: 10.7 GB, 10737418240 bytes
16 heads, 63 sectors/track, 20805 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x86b048ec

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1              63    10487231     5243584+  83  Linux

Command (m for help): d
Selected partition 1

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

We need to delete the partitions on both disks (vdb1 and vdc1) and then create new, larger 10 GB partitions in their place, like below.

[root@fileserver1 ~]# fdisk -cu /dev/vdb

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): 
Using default value 20971519

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

I noticed that once the partitions are recreated, the RAID array is assembled automatically and the disks are allocated back to it. After this, the lsblk output is:

[root@fileserver1 ~]# lsblk | tail -6
vdb                         252:16   0   10G  0 disk  
└─vdb1                      252:17   0   10G  0 part  
  └─md0                       9:0    0    5G  0 raid1 
vdc                         252:32   0   10G  0 disk  
└─vdc1                      252:33   0   10G  0 part  
  └─md0                       9:0    0    5G  0 raid1 
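
If the array had not come back on its own, re-reading the partition tables and reassembling it manually should give the same result; a sketch, using the device names from this setup:

partprobe /dev/vdb /dev/vdc
mdadm --assemble /dev/md0 /dev/vdb1 /dev/vdc1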

Now I can see the disk partitions have been enlarged, but the RAID array size is still the same:

[root@fileserver1 ~]# mdadm --detail /dev/md0| grep "Array Size"
     Array Size : 5239488 (5.00 GiB 5.37 GB)

[root@fileserver1 ~]# pvs /dev/md0
  PV         VG   Fmt  Attr PSize PFree
  /dev/md0   vg1  lvm2 a--u 4.99g    0 

So now we have to grow the RAID array; that change will then also need to be reflected on the PV.

[root@fileserver1 ~]# mdadm --grow /dev/md0 --size=max
md0: detected capacity change from 5365235712 to 10733150720
mdadm: component size of /dev/md0 has been set to 10481592K


[root@fileserver1 ~]# mdadm --detail /dev/md0| grep "Array Size"
     Array Size : 10481592 (10.00 GiB 10.73 GB)
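
Growing a RAID 1 array triggers a resync of the newly added space; its progress can be watched with:

cat /proc/mdstat

The new space is fully redundant only once that resync has finished, so it is worth keeping an eye on it.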

So the RAID size has increased, but the PV size is still the same:

[root@fileserver1 ~]# pvs /dev/md0
  PV         VG   Fmt  Attr PSize PFree
  /dev/md0   vg1  lvm2 a--u 4.99g    0 

To increase the PV size, we use the pvresize command, like below:

[root@fileserver1 ~]# pvresize /dev/md0
  Physical volume "/dev/md0" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

[root@fileserver1 ~]# pvs /dev/md0
  PV         VG   Fmt  Attr PSize PFree
  /dev/md0   vg1  lvm2 a--u 9.99g 5.00g

Now we can activate the VG as shown below. One more thing to note: we do not need to extend the VG, because it automatically reflects the new size of the PV inside it. So we only need to activate it.

[root@fileserver1 ~]# lvs /dev/vg1/lv1
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1  vg1  -wi------- 4.99g                                                    

[root@fileserver1 ~]# vgs vg1
  VG   #PV #LV #SN Attr   VSize VFree
  vg1    1   1   0 wz--n- 9.99g 5.00g

[root@fileserver1 ~]# vgchange -a y vg1
  1 logical volume(s) in volume group "vg1" now active

[root@fileserver1 ~]# vgs vg1
  VG   #PV #LV #SN Attr   VSize VFree
  vg1    1   1   0 wz--n- 9.99g 5.00g

[root@fileserver1 ~]# lvs /dev/vg1/lv1
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1  vg1  -wi-a----- 4.99g                                                    

Now the LV needs to be extended. We can use the lvextend command for that, like below.

[root@fileserver1 ~]# lvextend -l +100%FREE /dev/vg1/lv1
  Size of logical volume vg1/lv1 changed from 4.99 GiB (1278 extents) to 9.99 GiB (2558 extents).
  Logical volume lv1 successfully resized.

[root@fileserver1 ~]# lvs /dev/vg1/lv1
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1  vg1  -wi-a----- 9.99g                                                    
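
As an aside, lvextend can also grow the filesystem in the same step through its -r/--resizefs option; a sketch of that variant (not used in this walkthrough):

lvextend -r -l +100%FREE /dev/vg1/lv1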

Now we can mount the filesystem again.
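
mount -a picks up the entry from /etc/fstab, so this assumes a line roughly like the following already exists (a hypothetical example matching this setup):

/dev/vg1/lv1    /data    ext4    defaults    0 0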

[root@fileserver1 ~]# mount -a
EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: 
[root@fileserver1 ~]# cd /data 
[root@fileserver1 data]# du -sh .
4.8G	.
[root@fileserver1 data]# ls -ltrh | tail -5
-rw-r--r-- 1 root root 1.0M Jan  4 00:22 4874
-rw-r--r-- 1 root root 1.0M Jan  4 00:22 4875
-rw-r--r-- 1 root root 1.0M Jan  4 00:22 4876
-rw-r--r-- 1 root root 1.0M Jan  4 00:22 4877
-rw-r--r-- 1 root root 568K Jan  4 00:22 4878
[root@fileserver1 data]# df -hTP .
Filesystem          Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg1-lv1 ext4  4.8G  4.8G     0 100% /data

If you compare with before, all of our data is back on the mount point. But note that the filesystem still shows only the old ~5 GB size, so the extra space is not usable yet. This is where the resize2fs command helps:

[root@fileserver1 data]# resize2fs /dev/vg1/lv1 
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg1/lv1 is mounted on /data; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/vg1/lv1 to 2619392 (4k) blocks.
The filesystem on /dev/vg1/lv1 is now 2619392 blocks long.

[root@fileserver1 data]# df -hTP .
Filesystem          Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg1-lv1 ext4  9.8G  4.8G  4.5G  52% /data

Now we have all our data back, with the RAID array, PV, VG and LV all enlarged, and the mount point also reflects the increased space.