How to Resize a Raid Array
A tutorial on how to resize a software- or hardware-defined raid array
It’s been a while since I last wrote a tutorial. The summer is usually a good time for me to catch up on my writing and to explore projects that have been on the back burner.
I have been working a lot with raid arrays recently, mostly raid0 arrays but also some raid6 implementations. I have also run into the issue of a raid array filling up, which forced me to come up with a plan for resizing arrays.
After doing some research, I have found a relatively simple way to resize raid arrays by adding an extra disk (logical or physical).
In this tutorial, I will be creating a Harvester VM in my homelab. The VM will have a raid0 array. I will write some data to the array and I will resize the array by adding an extra disk.
Let’s get started.
But first, why would you want a raid0 array?
Raid0, also known as striping, is a method of storing data across multiple disks, where data is divided into blocks and each block is stored on a different drive. It's called raid0 because it doesn't actually provide redundancy or fault tolerance, the attributes typically associated with other RAID levels. Instead, raid0 focuses solely on performance and storage capacity. The most significant advantage of raid0 is performance: by splitting data across multiple disks, raid0 can read and write to all of them in parallel, making it ideal for tasks that demand high disk throughput.
The increased performance comes with a cost though. The lack of redundancy means that if one disk fails, all data on the array is lost. Therefore, raid0 is best suited for non-critical data or scenarios where data is either backed up regularly or transient and not necessary to preserve.
I can’t recommend raid0 for production workloads UNLESS the data is not important.
Now that the disclaimer is done, I should tell you to back up all your stuff before entertaining the idea. I TOTALLY have everything backed up in three different locations and I am sure that every homelabber does as well. JK, I am being sarcastic. I don't have anything super critical on my servers so I won't be backing up. Plus, I like to live on the edge a little.
Let’s set up the VM as follows:
4 vCPU
16 GB RAM
50 GB boot volume
4x 10 GB data volumes
jmcglock@substack-raid-demo:~$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0     7:0    0 63.9M  1 loop /snap/core20/2318
loop1     7:1    0   87M  1 loop /snap/lxd/28373
loop2     7:2    0 38.8M  1 loop /snap/snapd/21759
vda     252:0    0   50G  0 disk
├─vda1  252:1    0 49.9G  0 part /
├─vda14 252:14   0    4M  0 part
└─vda15 252:15   0  106M  0 part /boot/efi
vdb     252:16   0   10G  0 disk
vdc     252:32   0   10G  0 disk
vdd     252:48   0   10G  0 disk
vde     252:64   0   10G  0 disk
vdf     252:80   0    1M  0 disk
I will make a raid0 array with 3 of the 4 data drives (we will save one for adding later).
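I won't walk through every keystroke of the creation step, but roughly speaking it looks like this (a minimal sketch; the device names match the lsblk output above, and ext4 is the filesystem, as the resize2fs step later on shows):
sudo mdadm --create /dev/md127 --level=0 --raid-devices=3 /dev/vdb /dev/vdc /dev/vdd
sudo mkfs.ext4 /dev/md127
sudo mkdir /data
sudo mount /dev/md127 /data
After creating, formatting, and mounting the array, lsblk shows the new layout: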
jmcglock@substack-raid-demo:~$ lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
loop0       7:0    0 63.9M  1 loop  /snap/core20/2318
loop1       7:1    0   87M  1 loop  /snap/lxd/28373
loop2       7:2    0 38.8M  1 loop  /snap/snapd/21759
vda       252:0    0   50G  0 disk
├─vda1    252:1    0 49.9G  0 part  /
├─vda14   252:14   0    4M  0 part
└─vda15   252:15   0  106M  0 part  /boot/efi
vdb       252:16   0   10G  0 disk
└─md127     9:127  0   30G  0 raid0 /data
vdc       252:32   0   10G  0 disk
└─md127     9:127  0   30G  0 raid0 /data
vdd       252:48   0   10G  0 disk
└─md127     9:127  0   30G  0 raid0 /data
vde       252:64   0   10G  0 disk
vdf       252:80   0    1M  0 disk
The raid0 device name is md127 and the array is mounted at /data. Let's make a few files in /data to demonstrate that they are kept when we expand the array with an extra disk.
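Something along these lines does the trick (the chown is needed because root owns the fresh mount, and I'm assuming the ISO is sitting in the home directory):
sudo chown $USER:$USER /data
cd /data
touch thisisatest.yml thisisatest2.yml subscribetojmcglock.yml
cp ~/ubuntu-24.04-live-server-amd64.iso .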
Now if we ls in /data:
jmcglock@substack-raid-demo:/data$ ls -l
total 2690432
drwx------ 2 jmcglock jmcglock      16384 Jul  6 13:29 lost+found
-rw-rw-r-- 1 jmcglock jmcglock          0 Jul  6 13:35 subscribetojmcglock.yml
-rw-rw-r-- 1 jmcglock jmcglock          0 Jul  6 13:35 thisisatest.yml
-rw-rw-r-- 1 jmcglock jmcglock          0 Jul  6 13:35 thisisatest2.yml
-rw-rw-r-- 1 jmcglock jmcglock 2754981888 Apr 23 12:46 ubuntu-24.04-live-server-amd64.iso
We have some yml files and an ISO file.
Let’s expand the array now. We are going from 30 GB to 40 GB.
We will be adding disk vde. First, let's look at the current state of the array:
sudo mdadm --detail /dev/md127
And the output:
/dev/md127:
           Version : 1.2
     Creation Time : Sat Jul  6 13:29:53 2024
        Raid Level : raid0
        Array Size : 31429632 (29.97 GiB 32.18 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sat Jul  6 13:29:53 2024
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : -unknown-
        Chunk Size : 512K

Consistency Policy : none

              Name : substack-raid-demo:127  (local to host substack-raid-demo)
              UUID : e7364d82:d40abb5d:3673e83b:b9ea22b5
            Events : 0

    Number   Major   Minor   RaidDevice State
       0     252       16        0      active sync   /dev/vdb
       1     252       32        1      active sync   /dev/vdc
       2     252       48        2      active sync   /dev/vdd
Here are some general details about our raid0 array. Notice that the number of active devices is 3. Now let's add the fourth disk:
sudo mdadm --grow /dev/md127 --raid-devices=4 --add /dev/vde
And the output:
mdadm: level of /dev/md127 changed to raid4
mdadm: added /dev/vde
Notice that the raid level was changed to raid4. The raid0 personality can't reshape data on its own, so when you grow a raid0 array by adding another disk, mdadm needs an intermediate configuration to restripe the data across the new set of drives. It temporarily converts the array to raid4, which shares raid0's striping layout and runs here in a degraded state without an actual parity disk, performs the reshape, and switches the array back to raid0 once the restriping across all disks is done.
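If you catch it mid-reshape, you can confirm the temporary level (the grep filter is just for brevity):
sudo mdadm --detail /dev/md127 | grep 'Raid Level'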
We can check the progress with the following command:
cat /proc/mdstat
And the output:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid0 vde[4] vdd[2] vdc[1] vdb[0]
      41906176 blocks super 1.2 512k chunks

unused devices: <none>
The disk has been added successfully and the raid array has been switched back to raid0 by mdadm.
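On bigger arrays the reshape takes a while; to monitor it live, you can wrap the status check in watch (the 5-second interval here is arbitrary):
watch -n 5 cat /proc/mdstat
Once it's done, lsblk shows the grown array: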
jmcglock@substack-raid-demo:/data$ lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
loop0       7:0    0 63.9M  1 loop  /snap/core20/2318
loop1       7:1    0   87M  1 loop  /snap/lxd/28373
loop2       7:2    0 38.8M  1 loop  /snap/snapd/21759
vda       252:0    0   50G  0 disk
├─vda1    252:1    0 49.9G  0 part  /
├─vda14   252:14   0    4M  0 part
└─vda15   252:15   0  106M  0 part  /boot/efi
vdb       252:16   0   10G  0 disk
└─md127     9:127  0   40G  0 raid0 /data
vdc       252:32   0   10G  0 disk
└─md127     9:127  0   40G  0 raid0 /data
vdd       252:48   0   10G  0 disk
└─md127     9:127  0   40G  0 raid0 /data
vde       252:64   0   10G  0 disk
└─md127     9:127  0   40G  0 raid0 /data
vdf       252:80   0    1M  0 disk
Last but not least, we need to resize the filesystem:
sudo resize2fs /dev/md127
And the output:
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/md127 is mounted on /data; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 5
The filesystem on /dev/md127 is now 10476544 (4k) blocks long.
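Note that resize2fs only works for the ext2/ext3/ext4 family, which is what this array uses. If you had formatted the array with XFS instead, the equivalent step would be to grow it by its mount point:
sudo xfs_growfs /data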
Running a df -h:
jmcglock@substack-raid-demo:/data$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.6G  1.2M  1.6G   1% /run
/dev/vda1        49G  2.0G   47G   4% /
tmpfs           7.6G     0  7.6G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/vda15      105M  6.1M   99M   6% /boot/efi
tmpfs           1.6G  4.0K  1.6G   1% /run/user/1000
/dev/md127       40G  2.6G   35G   7% /data
Amazing. We are now working with 40 GB. If we ls in /data:
jmcglock@substack-raid-demo:/data$ ls -l
total 2690432
drwx------ 2 jmcglock jmcglock      16384 Jul  6 13:29 lost+found
-rw-rw-r-- 1 jmcglock jmcglock          0 Jul  6 13:35 subscribetojmcglock.yml
-rw-rw-r-- 1 jmcglock jmcglock          0 Jul  6 13:35 thisisatest.yml
-rw-rw-r-- 1 jmcglock jmcglock          0 Jul  6 13:35 thisisatest2.yml
-rw-rw-r-- 1 jmcglock jmcglock 2754981888 Apr 23 12:46 ubuntu-24.04-live-server-amd64.iso
All of our stuff is still there.
TL;DR:
Add a disk that is the same size as the other disks (otherwise the array will only use the smallest disk's capacity on each member).
Run the mdadm grow command and define a new number of devices and the disk to be added.
Watch the progress.
Resize the filesystem.
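In commands, with the device names from this demo, the whole process boils down to:
sudo mdadm --grow /dev/md127 --raid-devices=4 --add /dev/vde
cat /proc/mdstat
sudo resize2fs /dev/md127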
It really is pretty simple. The same general process, grow the array and then grow the filesystem, applies to hardware raid arrays too; you would just use your controller's own tooling instead of mdadm for the grow step.
I AM NOT RESPONSIBLE FOR YOUR DATA LOSS IF YOU MAKE A MISTAKE! BACK UP YOUR STUFF!
Cheers,
Joe