RAID stands for Redundant Array of Independent Disks. It was developed to let you combine many disks (such as HDD, SSD and NVMe) into an array in order to achieve redundancy. Even though the array is made up of multiple disks, the computer "sees" it as a single logical storage unit or drive. A single huge disk cannot give you redundancy: if it fails in a disaster, recovering the data is nearly impossible, whereas an array can survive the loss of member disks depending on the RAID level used.
Definition of terms
- Hot Spare: – A disk that is not actively used in the RAID array but remains on standby in case an active disk fails. When that happens, data from the faulty disk is rebuilt onto the spare disk automatically.
- Mirroring: – As the name suggests, mirroring keeps an identical copy of the same data on another disk. This is what makes it possible to recover your data when one disk fails.
- Striping: – A feature that splits data into chunks and writes them across all available disks in turn. The data is shared between all the disks, so they fill up equally.
- Parity: – A technique of regenerating lost data from parity information calculated and stored across the array (see the short illustration after this list).
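To illustrate parity with made-up values: if disk 1 holds the bits 1010 and disk 2 holds 0110, the parity disk stores 1010 XOR 0110 = 1100. If disk 2 later fails, its contents can be regenerated as 1010 XOR 1100 = 0110.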
Using techniques such as disk striping (RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Levels 4, 5 and 6), RAID can provide redundancy, lower latency, increased bandwidth, and a much better ability to recover from hard disk crashes.
Primary reasons you should consider deploying RAID in your projects include the following:
- Better speeds for read and write operations
- Increased storage capacity presented as a single virtual disk
- Reduced data loss from disk failure. Depending on your RAID type, you will be able to achieve redundancy that will later save your data if a disk fails.
RAID technology comes in three flavors: Firmware RAID, Hardware RAID and Software RAID. Hardware RAID handles its arrays independently of the host and still presents the host with a single disk per RAID array. It uses a hardware RAID controller card that handles the RAID tasks transparently to the operating system. Software RAID, on the other hand, implements the various RAID levels in the kernel's block device code and offers the cheapest possible solution, since expensive disk controller cards or hot-swap chassis are not required. With today's faster CPUs, Software RAID also generally outperforms Hardware RAID.
Cardinal features of Software RAID (source: access.redhat.com):
- Portability of arrays between Linux machines without reconstruction
- Backgrounded array reconstruction using idle system resources
- Hot-swappable drive support
- Automatic CPU detection to take advantage of certain CPU features such as streaming SIMD support
- Automatic correction of bad sectors on disks in an array
- Regular consistency checks of RAID data to ensure the health of the array
- Proactive monitoring of arrays with email alerts sent to a designated email address on important events (see the example after this list)
- Write-intent bitmaps which drastically increase the speed of resync events by allowing the kernel to know precisely which portions of a disk need to be resynced instead of having to resync the entire array
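As an example of the monitoring feature above, you can add a MAILADDR line to /etc/mdadm.conf and enable the monitoring service. This is a minimal sketch for CentOS 8/RHEL 8, and root@localhost is only a placeholder address to replace with your own:
echo "MAILADDR root@localhost" | sudo tee -a /etc/mdadm.conf
sudo systemctl enable --now mdmonitor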
Setting up RAID on CentOS 8/RHEL 8
With that brief introduction, let us get into the crux of the matter and set up the various RAID levels on CentOS 8/RHEL 8. Before we proceed, we need the mdadm tool, which will help in configuring the various RAID levels.
sudo dnf -y update
sudo dnf -y install mdadm
Configuring RAID Level 0 on CentOS 8/RHEL 8
As mentioned earlier, RAID 0 provides striping without parity and requires at least two hard disks. It scores best on speed compared to the other levels because it stores no parity data and performs read and write operations on all disks simultaneously.
Let us view the disks that we have on our server:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 128G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 127G 0 part
├─cl_centos8-root 253:0 0 50G 0 lvm /
├─cl_centos8-swap 253:1 0 2G 0 lvm [SWAP]
└─cl_centos8-home 253:2 0 75G 0 lvm /home
sdb 8:16 0 1G 0 disk
sdc 8:32 0 1G 0 disk
sdd 8:48 0 1G 0 disk
As shown above, the server has three raw disks (sdb, sdc and sdd) attached. We shall start by clearing the disks, then partition them before creating the RAID array on top of them.
for i in sdb sdc sdd; do
sudo wipefs -a /dev/$i
sudo mdadm --zero-superblock /dev/$i
done
Create one partition on each disk and set the RAID flag.
for i in sdb sdc sdd; do
sudo parted --script /dev/$i "mklabel gpt"
sudo parted --script /dev/$i "mkpart primary 0% 100%"
sudo parted --script /dev/$i "set 1 raid on"
done
You should see the new partitions (sdb1, sdc1, sdd1) created:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 128G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 127G 0 part
├─cl_centos8-root 253:0 0 50G 0 lvm /
├─cl_centos8-swap 253:1 0 2G 0 lvm [SWAP]
└─cl_centos8-home 253:2 0 75G 0 lvm /home
sdb 8:16 0 1G 0 disk
└─sdb1 8:17 0 1022M 0 part
sdc 8:32 0 1G 0 disk
└─sdc1 8:33 0 1022M 0 part
sdd 8:48 0 1G 0 disk
└─sdd1 8:49 0 1022M 0 part
After the partitions are ready, proceed to create the RAID 0 device. The level name stripe is equivalent to RAID 0, since this level only offers striping of data.
sudo mdadm --create /dev/md0 --level=stripe --raid-devices=3 /dev/sd[b-d]1
Check your RAID device status using either of the commands below:
$ cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdd1[2] sdc1[1] sdb1[0]
3133440 blocks super 1.2 512k chunks
unused devices: <none>
Or
$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Aug 26 21:20:57 2020
Raid Level : raid0
Array Size : 3133440 (2.99 GiB 3.21 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Wed Aug 26 21:20:57 2020
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : -unknown-
Chunk Size : 512K
Consistency Policy : none
Name : centos8.localdomain:0 (local to host centos8.localdomain)
UUID : 2824d400:1967473c:dfa0938f:fbb383ae
Events : 0
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
If everything looks good, create a file system of your choice on the new RAID device.
sudo mkfs.ext4 /dev/md0
Next, we need to mount the new device so that it can start holding files and directories. Create a new mount point:
sudo mkdir /mnt/raid0
Mount the filesystem by typing:
sudo mount /dev/md0 /mnt/raid0
Save the Array
Adjust /etc/mdadm.conf to make sure that the array is reassembled automatically at boot. You can scan the active array and append its details to the file by doing the following:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
Later, update the initial RAM file system (initramfs) so that the array will be available during the early boot process. On CentOS 8/RHEL 8 this is done with dracut:
sudo dracut --force
Configure mounting in /etc/fstab:
$ sudo vi /etc/fstab
/dev/md0 /mnt/raid0 ext4 defaults 0 0
If you are unsure of the file system type, issue the command below and replace ext4 with the TYPE shown in the output.
$ sudo blkid /dev/md0
/dev/md0: UUID="e6fe86e5-d241-4208-ab94-3ca79e59c8b6" TYPE="ext4"
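Alternatively, you can mount the array by UUID rather than by device name, which is more robust if the md device gets renumbered (for example to /dev/md127) after a reboot. Using the UUID from the blkid output above, the fstab entry would look like this:
UUID=e6fe86e5-d241-4208-ab94-3ca79e59c8b6 /mnt/raid0 ext4 defaults 0 0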
Confirm it can be mounted correctly:
$ sudo mount -a
$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 865M 0 865M 0% /dev
tmpfs tmpfs 882M 0 882M 0% /dev/shm
tmpfs tmpfs 882M 17M 865M 2% /run
tmpfs tmpfs 882M 0 882M 0% /sys/fs/cgroup
/dev/mapper/cl_centos8-root xfs 50G 2.1G 48G 5% /
/dev/sda1 ext4 976M 139M 770M 16% /boot
/dev/mapper/cl_centos8-home xfs 75G 568M 75G 1% /home
tmpfs tmpfs 177M 0 177M 0% /run/user/1000
/dev/md0 ext4 2.9G 9.0M 2.8G 1% /mnt/raid0 ##Our New Device.
Configuring RAID Level 1 on CentOS 8/RHEL 8
RAID 1 provides disk mirroring without striping or parity. It simply writes the same data to two disks, so if one disk fails or is ejected, all data is still available on the other disk. Because everything is written twice, RAID 1 requires twice the raw disk capacity: to get the usable capacity of 2 disks, you have to install 4 disks.
Before proceeding with the RAID configuration, let us wipe all the disks to ensure we are starting from clean disks.
for i in sdb sdc sdd; do
sudo wipefs -a /dev/$i
sudo mdadm --zero-superblock /dev/$i
done
Create one partition on each disk and set the RAID flag.
for i in sdb sdc sdd; do
sudo parted --script /dev/$i "mklabel gpt"
sudo parted --script /dev/$i "mkpart primary 0% 100%"
sudo parted --script /dev/$i "set 1 raid on"
done
Create RAID 1 device:
sudo mdadm --create /dev/md1 --level=raid1 --raid-devices=2 --spare-devices=1 /dev/sd[b-c]1 /dev/sdd1
Check the status of the new array:
$ sudo mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Wed Aug 26 21:32:52 2020
Raid Level : raid1
Array Size : 1045504 (1021.00 MiB 1070.60 MB)
Used Dev Size : 1045504 (1021.00 MiB 1070.60 MB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Wed Aug 26 21:33:02 2020
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Consistency Policy : resync
Name : centos8.localdomain:1 (local to host centos8.localdomain)
UUID : 9ca1da1d:a0c0a26b:6dd27959:a84dec0e
Events : 17
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 - spare /dev/sdd1
Once the RAID device is ready, it cannot be used until it has a file system on it. Create a file system that suits your needs; the example below sets up xfs.
sudo mkfs.xfs /dev/md1
After that, create a mount point for the device:
sudo mkdir /mnt/raid1
Mount the filesystem by typing the following on the terminal:
sudo mount /dev/md1 /mnt/raid1
Save the Array
Adjust /etc/mdadm.conf to make sure that the array is reassembled automatically at boot. You can scan the active array and append its details to the file by doing the following:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
Later, update the initial RAM file system (initramfs) with dracut so that the array will be available during the early boot process:
sudo dracut --force
Again, configure mounting in /etc/fstab:
$ sudo vi /etc/fstab
/dev/md1 /mnt/raid1 xfs defaults 0 0
Confirm it can be mounted correctly:
$ sudo mount -a
$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 865M 0 865M 0% /dev
tmpfs tmpfs 882M 0 882M 0% /dev/shm
tmpfs tmpfs 882M 17M 865M 2% /run
tmpfs tmpfs 882M 0 882M 0% /sys/fs/cgroup
/dev/mapper/cl_centos8-root xfs 50G 2.1G 48G 5% /
/dev/sda1 ext4 976M 139M 770M 16% /boot
/dev/mapper/cl_centos8-home xfs 75G 568M 75G 1% /home
tmpfs tmpfs 177M 0 177M 0% /run/user/1000
/dev/md1 xfs 1016M 40M 977M 4% /mnt/raid1
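Before moving on, you can optionally test the hot spare by marking one of the mirrored partitions as failed; mdadm should immediately promote /dev/sdd1 and start rebuilding onto it. This is only a sketch for a scratch array, so do not run it on a device holding real data:
sudo mdadm /dev/md1 --fail /dev/sdb1
sudo mdadm --detail /dev/md1
Once the rebuild finishes, the failed partition can be removed from the array and added back as the new spare:
sudo mdadm /dev/md1 --remove /dev/sdb1
sudo mdadm /dev/md1 --add /dev/sdb1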
Configuring RAID Level 10 on CentOS 8/RHEL 8
RAID 10 combines disk mirroring (duplicating your data) to protect it and disk striping (dividing data into blocks and spreading them across the disks) to increase throughput. With a minimum requirement of 4 disks, RAID 10 stripes data across mirrored pairs. With this configuration, data can be retrieved as long as one disk in each mirrored pair is functional.
As with the previous RAID levels, start by clearing all of the raw disks (this level needs a fourth disk, sde).
for i in sdb sdc sdd sde; do
sudo wipefs -a /dev/$i
sudo mdadm --zero-superblock /dev/$i
done
Create one partition on each disk and set the RAID flag.
for i in sdb sdc sdd sde; do
sudo parted --script /dev/$i "mklabel gpt"
sudo parted --script /dev/$i "mkpart primary 0% 100%"
sudo parted --script /dev/$i "set 1 raid on"
done
Then go ahead and create a RAID 10 device and check its status:
sudo mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sd[b-e]1
sudo mdadm --query --detail /dev/md10
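Note that mirrored levels such as RAID 1 and RAID 10 perform an initial resync after creation; you can watch its progress before continuing with:
watch cat /proc/mdstat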
Once the RAID device is set up, create a file system that suits your specific needs. The example below sets up xfs.
sudo mkfs.xfs /dev/md10
After that, create a mount point for the device:
sudo mkdir /mnt/raid10
Mount the filesystem by typing:
sudo mount /dev/md10 /mnt/raid10
Save the Array
Adjust /etc/mdadm.conf to make sure that the array is reassembled automatically at boot. You can scan the active array and append its details to the file by doing the following:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
As usual, update the initial RAM file system (initramfs) with dracut so that the array will be available during the early boot process:
sudo dracut --force
Configure mounting in /etc/fstab:
$ sudo vi /etc/fstab
/dev/md10 /mnt/raid10 xfs defaults 0 0
Confirm it can be mounted correctly:
$ sudo mount -a
$ df -hT
Stop and remove a RAID array
If you wish to remove a RAID device from your system, simply unmount it, stop it, and remove it with the commands below. Remember to substitute /mnt/raid0 with your mount point and /dev/md0 with your RAID device.
sudo umount /mnt/raid0
sudo mdadm --stop /dev/md0
sudo mdadm --remove /dev/md0
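If you intend to reuse the member disks, also wipe the RAID superblock from each partition and remove the corresponding entries from /etc/fstab and /etc/mdadm.conf, so that the system does not try to mount or assemble the array at the next boot. Adjust the device names below to match your own setup:
sudo mdadm --zero-superblock /dev/sd[b-d]1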
Celebratory end note
Software RAID is wonderful due to its versatility and ease of setup. As you have witnessed, configuring RAID takes only a few commands, and if a disk fails your array can be back to healthy in no time. Depending on your business needs, you can achieve a level of redundancy that will save your data in case of a disaster.