Introduction


In Linux we can create a disk stripe across multiple drives with distributed parity. Striping with parity means the data is striped across multiple drives along with parity information, which provides data redundancy. RAID 5 requires a minimum of three disks.
Linux provides the mdadm command to configure and manage RAID. A system administrator can use this utility to combine individual storage devices into a RAID array with better performance and redundancy.

In this post we will only look at how mdadm can be used to configure RAID 5. Creating other RAID levels with the same utility differs only slightly.

Pros and cons of RAID 5

1. Better read performance
2. Parity provides fault tolerance
3. A hot spare disk can start the recovery process automatically
4. No data loss if a single disk fails
5. Writes are slower because parity must be calculated
6. Rebuilding a large array takes a long time

What is Parity

Parity is a method of checking whether data has been lost or overwritten when it moves from one place to another in storage. In RAID 5 the parity is distributed across every disk of the array: with three disks, each disk holds part of the parity information. If any one disk fails, we can still reconstruct our data from the parity stored on the other two disks. As soon as the faulty disk is replaced, its data is recovered from that parity.
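The parity itself is essentially an XOR of the data blocks. As a minimal illustration only (not the exact on-disk layout mdadm uses), the shell arithmetic below treats two hex values as data blocks, computes their parity, and then recovers one block from the other block plus the parity:

# Illustrative only: RAID 5 parity behaves like XOR of the data blocks
A=0xA5; B=0x3C
P=$(( A ^ B ))                                   # parity of the two "blocks"
printf 'parity=0x%X recovered B=0x%X\n' "$P" "$(( A ^ P ))"   # A XOR parity reproduces B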

Prerequisites

Since we are covering software RAID 5 in Linux in this post, the mdadm utility must be installed on the Linux machine. For RAID 5 we also need three disks on the machine on which to configure the array.
In this post we are using CentOS 6, so we need the rpm below installed:

# rpm -qf /sbin/mdadm
mdadm-3.2.6-7.el6.x86_64

For this you should have access to the root account, or sudo access to run commands that only root can run. We also need three or more disks on which to configure RAID 5.

Server Setup

For this post we are working with the following setup:

Operating System : CentOS release 6.5 (Final)
Hostname	 : srv6
DISK 1		 : /dev/sda
DISK 2		 : /dev/sdb
DISK 3		 : /dev/sdc

Installing mdadm and verifying drives

As noted above, we need the mdadm package for software RAID. In most Linux distributions it is already installed, but if it is not, install it as shown below:

yum install mdadm     		#For RedHat, CentOS and Fedora Systems
apt-get install mdadm		#For Debian Systems
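To confirm the utility is available after installation, you can simply ask it for its version (the exact output will vary by distribution):

# Quick sanity check that mdadm is installed and on the PATH
mdadm --version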

Now we check these disks for any existing RAID superblocks with the command below.

[root@srv6 ~]# mdadm --examine /dev/sda /dev/sdb /dev/sdc
mdadm: No md superblock detected on /dev/sda.
mdadm: No md superblock detected on /dev/sdb.
mdadm: No md superblock detected on /dev/sdc.

As we can see above, no md superblocks are present on these disks.

Partitioning Disks

Now we create a partition on each disk and set its type to Linux RAID, although RAID can also be created directly on the raw devices.
In the output below, we partition the sda disk and set its partition type to Linux raid autodetect.

[root@srv6 ~]# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1009, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1009, default 1009): 
Using default value 1009

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x05592581

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1009     2095662   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
 sda: sda1
Syncing disks.
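If you would rather not repeat the interactive fdisk dialogue for the other disks, one possible shortcut is to drive parted non-interactively. This is only a sketch, assuming each remaining disk should get a single full-size partition flagged for RAID:

# Assumption: sdb and sdc each get one full-size partition of type fd (Linux raid autodetect)
for disk in /dev/sdb /dev/sdc; do
    parted -s "$disk" mklabel msdos mkpart primary 1MiB 100% set 1 raid on
done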

Either way, after partitioning the remaining two disks (sdb and sdc) in the same manner, all three disks look like this:

[root@srv6 ~]# fdisk -l /dev/sd[a-c]| grep -B 1 "Linux raid autodetect"
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1009     2095662   fd  Linux raid autodetect
--
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1009     2095662   fd  Linux raid autodetect
--
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        1009     2095662   fd  Linux raid autodetect

Creating RAID md device md0

Now we create the RAID md device md0 on these partitions, as shown below.

[root@srv6 ~]# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
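If a fourth disk is available, a hot spare can be added at creation time so that recovery starts automatically when a member fails. The example below is only a sketch and assumes a hypothetical extra partition /dev/sdd1 prepared in the same way as the others:

# Assumes a hypothetical fourth partition /dev/sdd1; it stays idle as a hot spare until a member disk fails
mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1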

The initial build can also be monitored through the /proc/mdstat file.

[root@srv6 ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sdc1[3] sdb1[1] sda1[0]
      4189184 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [=========>...........]  recovery = 46.7% (980864/2094592) finish=0.1min speed=122608K/sec
      
unused devices: <none>
[root@srv6 ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sdc1[3] sdb1[1] sda1[0]
      4189184 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [============>........]  recovery = 63.6% (1334016/2094592) finish=0.1min speed=121274K/sec
      
unused devices: <none>
[root@srv6 ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sdc1[3] sdb1[1] sda1[0]
      4189184 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [===================>.]  recovery = 95.0% (1991168/2094592) finish=0.0min speed=124448K/sec
      
unused devices: <none>
[root@srv6 ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sdc1[3] sdb1[1] sda1[0]
      4189184 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      
unused devices: <none>

If you have very large disks, you can monitor the rebuild continuously with the watch command, as shown below. This updates the output every 2 seconds.

watch "cat /proc/mdstat"

We can also check the md device details with the mdadm command.

[root@srv6 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Apr 19 01:47:10 2017
     Raid Level : raid5
     Array Size : 4189184 (4.00 GiB 4.29 GB)
  Used Dev Size : 2094592 (2045.84 MiB 2144.86 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed Apr 19 01:47:28 2017
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : srv6:0  (local to host srv6)
           UUID : 4e7c1751:cd467d3f:8e86a6a1:3c88f6a4
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       3       8       33        2      active sync   /dev/sdc1

After creating the md device, we can examine the disks again and see the change in their metadata with the command below.

mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1

Now we need to create a filesystem on the md device so that we can mount and use it, as shown below.

[root@srv6 ~]# mkfs.ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047296 blocks
52364 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Now we can mount and use this md device, which is built on our three disks.

[root@srv6 ~]# mkdir /raid5_disk

[root@srv6 ~]# mount /dev/md0 /raid5_disk

[root@srv6 ~]# df -hTP /raid5_disk/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       ext4  4.0G   72M  3.7G   2% /raid5_disk

We should add a mount entry to /etc/fstab so the filesystem is mounted automatically after a reboot as well. For that, add the following line to /etc/fstab:

[root@srv6 ~]# grep md0 /etc/fstab 
/dev/md0		/raid5_disk		ext4	defaults	0 0

[root@srv6 ~]# mount -av
mount: UUID=227e066b-f522-4b69-9d33-55469231d16d already mounted on /boot
mount: tmpfs already mounted on /dev/shm
mount: devpts already mounted on /dev/pts
mount: sysfs already mounted on /sys
mount: proc already mounted on /proc
mount: /dev/md0 already mounted on /raid5_disk
nothing was mounted

[root@srv6 ~]# ls -l /raid5_disk/
total 16
drwx------ 2 root root 16384 Apr 19 02:14 lost+found
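Finally, it is common practice (though not strictly required for the steps above) to record the array in /etc/mdadm.conf so it is assembled under a stable device name at boot; a minimal sketch:

# Append the array definition so it is assembled consistently at boot
mdadm --detail --scan >> /etc/mdadm.conf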

Related posts

How to Remove RAID in Linux

How to Increase Existing Software RAID 5 Storage Capacity in Linux