Adding drives to a RAID 10 Array
Hire the world's top talent on demand or become one of them at Toptal: https://topt.al/25cXVn
and get a $2,000 discount on your first invoice
--------------------------------------------------
Music by Eric Matyas
https://www.soundimage.org
Track title: Melt
--
Chapters
00:00 Adding Drives To A Raid 10 Array
00:23 Accepted Answer Score 6
00:55 Answer 2 Score 6
01:30 Answer 3 Score 10
03:02 Answer 4 Score 10
03:38 Thank you
--
Full question
https://superuser.com/questions/311570/a...
--
Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...
--
Tags
#raid #mdadm #raid10
#avk47
ANSWER 1
Score 10
I realize this is over a year old, but someone might find this helpful...
You can expand a raid 10 array, but not in the way you are hoping. You would have to nest multiple levels of raid. This can be done with mdadm on 2 drives in raid 10, which gives quite nice performance depending on the layout, but you would have to make multiple 2-disk raid 10 arrays and then attach them to a logical node. Then, to expand, add a few more and stripe across that. If that is your use case (needing to expand a lot), then you would be wise to use a parity array, which can be grown.
These are the limitations you get with raid 10 while maintaining better read/write performance overall. And a clarification: raid 5/6 absolutely does not, "in general, provide better write performance...". Raid 5/6 has its own respective pros/cons, just as raid 10 does, but write performance is not a pro for raid 5/6.
Also, you didn't specify the size of your drives, but beware of raid 5 on new, large drives. Though you can recover from an unrecoverable read error if you are careful, you risk downtime and the possibility of not being able to recover at all.
--edit to add info-- Use tools like hdparm (hdparm -i) and lshw to get the serial numbers along with the device name (/dev/sda) when you have a failure. This will ensure you remove the correct device when replacing. Up-vote Travis' comment, as it is very correct and a nice layout, but as usual, weigh the pros and cons of every solution.
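For instance (device name hypothetical), something like this prints the serial so you can match it to the physical label on the failed drive:

    # Serial number as reported by the drive itself:
    hdparm -i /dev/sda | grep -i serialno

    # Or list all disks with their logical names and serials:
    lshw -class disk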
ANSWER 2
Score 10
Some great news from the release announcement for mdadm 3.3:
This is a major new release so don't be too surprised if there are a few issues...
Some highlights are:
...
- RAID10 arrays can be reshaped to change the number of devices, change the chunk size, or change the layout between 'near' and 'offset'. This will always change data_offset, and will fail if there is no room for data_offset to be moved.
...
According to this answer on U&L, you will need at least Linux 3.5 as well.
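Assuming mdadm 3.3+ and Linux 3.5+, a reshape might then look like this sketch (array and device names are hypothetical):

    # Add the new disks as spares, then reshape the raid10 to use them:
    mdadm /dev/md0 --add /dev/sde /dev/sdf
    mdadm --grow /dev/md0 --raid-devices=6

    # Watch the reshape progress:
    cat /proc/mdstat

    # Afterwards, grow the filesystem to use the new space, e.g. for ext4:
    resize2fs /dev/md0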
ACCEPTED ANSWER
Score 6
Last time I checked, mdadm won't let you --grow raid10. I glanced over mdadm's manpage now and it still says: Currently supported growth options include changing the active size of component devices and changing the number of active devices in RAID levels 1/4/5/6, changing the RAID level between 1, 5, and 6, changing the chunk size and layout for RAID5 and RAID6, as well as adding or removing a write-intent bitmap.
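Since what --grow supports depends on your mdadm and kernel versions, it is worth checking locally before planning around it; for example:

    # Check the installed mdadm version and what its manpage says:
    mdadm --version
    man mdadm | grep -A8 'GROW MODE'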
ANSWER 4
Score 6
I know it's more work and could get confusing, but you can always stripe multiple mirrors.
For example, say I just set up a 4-drive raid 10 array and later want to add another 4-drive raid 10 array. Just use mdadm to create a new raid 10 array on the new drives. You could then create another raid 0 array using the two existing raid devices. However, I would use the features of lvm to create the stripe, thus keeping the mdadm configs and /dev/md devices in an easy-to-understand state. Either method would work, and there are probably more, but that's what I could come up with off the top of my head. A sketch of the lvm route follows.
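A hedged sketch of that lvm approach, assuming /dev/md0 already holds the first raid10; all device and volume names here are hypothetical:

    # New 4-drive raid10 on the added disks:
    mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[f-i]

    # Put both md devices under lvm and stripe a volume across them:
    pvcreate /dev/md0 /dev/md1
    vgcreate vg_data /dev/md0 /dev/md1
    lvcreate --stripes 2 --extents 100%FREE --name lv_data vg_data

This keeps each raid10 array a plain /dev/md device while lvm handles the striping, and future pairs of arrays can be added to the volume group the same way.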