

So let's assume we have md127 with 3 disks in a raid1. Here they're just partitions of one disk, but it doesn't matter. You can check the current state of the array with cat /proc/mdstat; in this example, that's where the status output comes from.

md127 : active raid1 vdb3 vdb2 vdb1

We need to offline one of the disks before we can remove it:

$ sudo mdadm --manage /dev/md127 --fail /dev/vdb2
mdadm: set /dev/vdb2 faulty in /dev/md127

And the status now shows it's bad:

md127 : active raid1 vdb3 vdb2(F) vdb1

We can now remove this disk:

$ sudo mdadm --manage /dev/md127 --remove /dev/vdb2
mdadm: hot removed /dev/vdb2 from /dev/md127

And now resize:

$ sudo mdadm --grow /dev/md127 --raid-devices=2

At this point we have successfully reduced the array down to 2 disks:

md127 : active raid1 vdb3 vdb1
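As a quick sanity check (not part of the original walkthrough), you could confirm the new geometry with mdadm's detail view; this is just a sketch using the same array and device names as above.

$ sudo mdadm --detail /dev/md127   # "Raid Devices : 2" and a clean state confirm the shrink
$ cat /proc/mdstat                 # vdb2 should no longer be listed as a member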

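Before re-adding the removed partition, you may want to inspect, or deliberately wipe, the stale md metadata it still carries. This is an optional step the article doesn't mention; --examine and --zero-superblock are standard mdadm modes, and /dev/vdb2 is the partition removed above.

$ sudo mdadm --examine /dev/vdb2          # shows the old superblock left on the removed partition
$ sudo mdadm --zero-superblock /dev/vdb2  # optional: wipe it so the device comes back as a fresh spare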
So now the new disk can be re-added as a hotspare:

$ sudo mdadm -a /dev/md127 /dev/vdb2

We can verify this works as expected by failing an existing disk and noticing a rebuild takes place on the spare:

$ sudo mdadm --manage /dev/md127 --fail /dev/vdb1
mdadm: set /dev/vdb1 faulty in /dev/md127

vdb2 is no longer marked (S) because it is no longer a hotspare; the rebuild has turned it into an active member of the array.
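To actually watch the rebuild land on the spare, something like the following works; this isn't from the original article, just a common way to follow recovery progress.

$ watch -n1 cat /proc/mdstat      # the recovery line shows resync progress onto vdb2
$ sudo mdadm --detail /dev/md127  # rebuild status and the device roles tell the same story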

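The test leaves vdb1 marked as failed. A plausible cleanup, assuming you want it back in service as the new hotspare, is to remove it and add it again; these are ordinary mdadm manage-mode operations, not something shown in the original.

$ sudo mdadm --manage /dev/md127 --remove /dev/vdb1  # drop the device we failed for the test
$ sudo mdadm --manage /dev/md127 --add /dev/vdb1     # added back, it becomes the new spare (S)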