Raidz with a striped disk?


Post by /dev/null » Mon Dec 24, 2012 10:23 am

What do you do if you find four disks in the cabinet with the following sizes?

disk1 = 1 TB
disk2 = 1 TB
disk3 = 2 TB
disk4 = 2.5 TB


In a normal raidz this would give a total of 3 TB ((N-1) * smallest disk = 3 * 1 TB) with a failure tolerance of one disk.

Is there a way to build a raidz with disk3, disk4, and (disk1 striped with disk2 = 2 TB)?
Because that would get me a total of 4 TB ((N-1) * smallest disk = 2 * 2 TB) and still a failure tolerance of one disk.
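The capacity arithmetic behind the question can be sketched in a few lines. This is a toy model (the helper `raidz1_usable` is hypothetical, not a ZFS API); real pools lose a bit more to metadata and padding.

```python
# Usable capacity of a raidz1 vdev: (N - 1) * smallest member.
# Toy numbers in TB; hypothetical helper, not a real ZFS API.
def raidz1_usable(members):
    return (len(members) - 1) * min(members)

# Plain raidz1 over the four disks as-is:
plain = raidz1_usable([1, 1, 2, 2.5])     # 3 * 1 TB = 3 TB

# Treat disk1+disk2 as one 2 TB stripe, then raidz1 over three members:
striped = raidz1_usable([1 + 1, 2, 2.5])  # 2 * 2 TB = 4 TB

print(plain, striped)  # → 3 4
```

Both layouts tolerate the loss of one raidz member; the striped variant just raises the size of the smallest member from 1 TB to 2 TB.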

Is there a way to do something like:
zpool create -f test raidz disk3 disk4 STRIPE disk1 disk2

Re: Raidz with a striped disk?

Post by ghaskins » Tue Dec 25, 2012 1:44 pm

/dev/null wrote:Is there a way to build a raidz with disk3, disk4, and (disk1 striped with disk2 = 2 TB)?
Because that would get me a total of 4 TB ((N-1) * smallest disk = 2 * 2 TB) and still a failure tolerance of one disk.

Is there a way to do something like:
zpool create -f test raidz disk3 disk4 STRIPE disk1 disk2


There isn't a way to do it with a literal "raidz", though in theory you could manually create a hybrid solution similar to how something like Synology Hybrid Raid (SHR) works. That is, create a 4-way raidz vdev across 1 TB partitions on each disk, and create a 2-way mirror on a second 1 TB partition on disks 3 and 4:

zpool create test raidz disk1 disk2 disk3s1 disk4s1
zpool add test mirror disk3s2 disk4s2


The extra space on the largest drive is wasted, of course, just as it would be on SHR/Drobo; there's not much you can do about that. But this should net you about 4 TB of redundant storage, at the expense of the vdevs sharing underlying HDDs (i.e., an IOPS penalty across vdevs).
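The accounting for that partition scheme works out as follows. A quick sketch (toy numbers in TB; the slice sizes are the assumed 1 TB partitions from the layout above):

```python
# Capacity accounting for the hybrid layout (toy numbers in TB).
# Assumed partitioning: a 1 TB slice on every disk for the raidz1 vdev,
# plus a second 1 TB slice on disk3 and disk4 for the mirror vdev.
disks = {"disk1": 1.0, "disk2": 1.0, "disk3": 2.0, "disk4": 2.5}

raidz_usable = (4 - 1) * 1.0    # 4-way raidz1 over 1 TB slices -> 3 TB
mirror_usable = 1.0             # 2-way mirror over 1 TB slices -> 1 TB
pool_usable = raidz_usable + mirror_usable   # 4 TB total

allocated = 4 * 1.0 + 2 * 1.0   # slices actually consumed across the disks
wasted = sum(disks.values()) - allocated     # 0.5 TB stranded on disk4

print(pool_usable, wasted)  # → 4.0 0.5
```

So the only stranded capacity is the last 0.5 TB of the 2.5 TB drive.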

Hope that helps,
-Greg

Re: Raidz with a striped disk?

Post by ghaskins » Tue Dec 25, 2012 8:55 pm

One thing to note about all this: ZFS isn't as flexible as some of the schemes I mentioned above. For instance, with SHR you could add a fifth disk at a later time, the 4-disk raid5 would be expandable to a 5-disk raid5, the 2-disk mirror would convert to a 3-disk raid5, etc. As far as I know, there isn't an equivalent operation in ZFS, since you can't add devices to a vdev (4 -> 5 raidz devices) nor remove/reconfigure a vdev (convert the 2-way mirror to a 3-way raidz). Just FYI as food for thought as you plan your storage architecture.

-Greg

Re: Raidz with a striped disk?

Post by /dev/null » Wed Dec 26, 2012 5:33 am

Since I don’t have a server rack in the basement to put an unlimited amount of additional hard disks in … yes, I would love the day when ZFS meets OCE/ORLM, too. Rebuilding the raid N times to swap each disk is a pain, and the waste of disk space when using different-sized disks is a pain, too. And my youngsters are some sort of diskspace-sucking vampires. In their world, disk space is - like energy, water and heat - a free and unlimited resource…

Thanks for your help!

Re: Raidz with a striped disk?

Post by emory » Sun Dec 30, 2012 10:56 pm

If I were in that situation I would buy different disks, unless my wife forbade me.

Would there be any benefit to building a pool with three vdevs comprised of:

disk1+disk2, disk3, disk4

Then if one of the disks in the stripe vdev of disk1+disk2 dies, you lose the whole vdev - but who cares, since it's raidz1 and you still have the 2 TB and 2.5 TB disks working? At that point you replace the two 1 TB disks with a single 3 TB disk, and when disk3 or disk4 dies you do the same, resilvering your way to greatness with your old lady none the wiser?

Re: Raidz with a striped disk?

Post by ghaskins » Mon Dec 31, 2012 2:29 pm

emory wrote:If I were in that situation I would buy different disks, unless my wife forbid me.

Would there be any benefit to building a pool with three vdevs comprised of:

disk1+disk2, disk3, disk4



You could possibly achieve this if you used something external to ZFS to create the stripe (à la Apple RAID 0). I'm not sure if Zevo will let you map an Apple RAID device, but assuming you could, you might be able to do something like:

zpool create -f tank raidz /dev/stripe /dev/disk3 /dev/disk4


(substituting the real device surfaced by the raid0 implementation for "stripe")

emory wrote:Then if one of the disks in the stripe vdev of disk1+disk2 dies, you lose the whole vdev, but who cares, since it's raidz1 and you have the 2TB and 2.5TB disks still working?


While I think what you are saying is more or less correct, it looks like you confused the ZFS terminology slightly. To be clear, all four drives in this configuration would form a single "vdev" (a raidz vdev, to be precise). But to your point: yes, disk1+disk2 would have the characteristics of a raid0 (parallel reads and writes, no redundancy, loss of one disk means loss of the entire raid0). And as you point out, when used in conjunction with ZFS, redundancy would be provided by the aggregate raidz vdev, where loss of the raid0 set is just considered the loss of one "disk", and the missing data can be reconstructed from parity across the other two members.
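The key point - that losing both stripe members still counts as only one raidz member failure - can be demonstrated with a toy XOR-parity model (raidz1 uses XOR-based parity; this is a simplified illustration, not the real on-disk format):

```python
# Toy model of the proposed layout: a raidz1 vdev whose three "members"
# are the disk1+disk2 stripe, disk3, and disk4. raidz1 keeps parity so
# that any single missing member can be rebuilt from the survivors.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data_a = b"\x01\x02\x03\x04"        # block on the disk1+disk2 stripe
data_b = b"\x10\x20\x30\x40"        # block on disk3
parity = xor_bytes(data_a, data_b)  # parity block on disk4

# Lose the entire stripe (disk1 AND disk2 fail together): raidz1 sees
# that as ONE failed member and reconstructs its data from the rest.
rebuilt = xor_bytes(data_b, parity)
assert rebuilt == data_a
```

A second failure among disk3/disk4 before the rebuild completes would still lose the pool, exactly as with any raidz1.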

I agree that if this works, it is probably superior to my suggestion. I am just not sure whether it works, or whether it has any ramifications (e.g. the logical block size being larger on the raid0 when compared to disk3 and disk4 might have some kind of negative performance impact).

Kind Regards,
-Greg
