Community Edition 16TB Limit

Community Edition 16TB Limit

Post by audiophil » Tue Sep 18, 2012 10:20 pm

The description of the storage space limit in the documentation is a little vague. Is the limit per pool, cumulative across pools, or per logical volume? Can I not have more than 16 TB worth of ZFS volumes imported at one time, or will only the first 16 TB of space be usable? Does it include the space on an L2ARC device?

I would like to put 8 x 2 TB drives in a case I have and possibly add an L2ARC, and that comes pretty close to the limit.

*EDIT*
dbrady wrote: The current 16TB limit is on the total pool storage size, not including spares, log, and cache devices. The pool's size property can be seen with a zpool list or a zpool get command.

Code:
$ zpool get size tank
NAME    PROPERTY  VALUE   SOURCE
tank    size      16Ti    -
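
dbrady also mentions zpool list; the SIZE column there reports the same pool-level figure. A minimal sketch, reusing the pool name "tank" from the example above (exact column layout may vary between ZFS builds):

Code:
$ zpool list tank    # the SIZE column reports the same pool-level size property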

Re: Community Edition 16TB Limit

Post by pooserville » Mon Sep 24, 2012 12:47 pm

I have the same question. I'm planning to build a RAIDZ2 with six 3 TB drives: is that counted as 12 TB or 18 TB?
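To spell out the math behind my question (the open issue is whether the limit counts raw or usable capacity), the numbers work out like this:

Code:
$ echo "raw: $((6 * 3)) TB, usable after RAIDZ2 parity: $(((6 - 2) * 3)) TB"
raw: 18 TB, usable after RAIDZ2 parity: 12 TB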

Re: Community Edition 16TB Limit

Post by si-ghan-bi » Mon Sep 24, 2012 1:41 pm

And what about someone who wants to go beyond the limit? Are there additional ZFS versions? I couldn't find any.

versions

Post by grahamperrin » Mon Sep 24, 2012 2:03 pm

si-ghan-bi wrote:… are there additional ZFS versions? …


If you mean additional versions of ZEVO:

>> … There will be more products down the line.

Re: Community Edition 16TB Limit

Post by audiophil » Sat Sep 29, 2012 7:43 pm

I guess I'll find out this weekend. I have a RAID 50-style pool of 2 TB x 3 + 2 TB x 3 (8 TB usable, 12 TB total in drives) and a RAID 10-style pool of four 1 TB drives, and I'm going to see what happens when I start adding additional vdevs to my system. I'll report back with the results, along with some performance numbers from using a JBOD-capable SAS card with an expander to connect all the storage.
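Growing a pool this way comes down to zpool add; roughly something like the following, where the pool name and device paths are just placeholders:

Code:
$ sudo zpool add tank mirror /dev/disk4 /dev/disk5   # stripe another mirror vdev into the pool
$ zpool list tank                                    # the reported SIZE grows accordingly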

Re: Community Edition 16TB Limit

Post by mkush » Sat Sep 29, 2012 9:47 pm

I posted the exact same question and got no answer. Someone must know. My case will be six 4 TB drives in RAIDZ2, so 24 TB raw but only 16 TB usable. Is that OK or not? I sure hope so, since I now have a (used) Mac Pro on the way to build this system. I already own the drives.

Re: Community Edition 16TB Limit

Post by grahamperrin » Sat Sep 29, 2012 11:48 pm

audiophil wrote: … a RAID 50-style pool of 2 TB x 3 + 2 TB x 3 (8 TB usable, 12 TB total in drives) and a RAID 10-style pool of four 1 TB drives, and I'm going to see what happens when I start adding additional vdevs to my system. I'll report back with the results, along with some performance numbers from using a JBOD-capable SAS card with an expander to connect all the storage.


Reading this topic alongside Kernel Panic on Scrub- *ISSUE RESOLVED

The resolution there suggests to me that in some cases, kernel panics may be associated with insufficient memory.

audiophil, please, how much memory do you have?

Re: Community Edition 16TB Limit

Post by audiophil » Sun Sep 30, 2012 12:14 am

grahamperrin wrote:
Reading this topic alongside Kernel Panic on Scrub- *ISSUE RESOLVED

The resolution there suggests to me that in some cases, kernel panics may be associated with insufficient memory.

audiophil, please, how much memory do you have?


24 GB. Now that I think about it, I haven't looked at the memory configuration options in this release. I'll post back tomorrow once I'm done juggling data around and have all this storage added.
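For anyone comparing configurations, installed memory on OS X can be read from sysctl; the output below is just what 24 GiB looks like in bytes:

Code:
$ sysctl hw.memsize
hw.memsize: 25769803776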

Re: Community Edition 16TB Limit

Post by audiophil » Sun Sep 30, 2012 7:28 pm

The software didn't prevent me from importing more than 16 TB worth of drives in total (three different pools exceeding 16 TB in combined disks). I don't have enough spare storage to check whether a single pool larger than 16 TB can be created.

I dropped in a RAIDZ2 test pool with six 2 TB Seagate 7200 rpm drives attached to a RocketRAID 4320 running in JBOD mode (cache off). Small-file read performance doesn't seem to peak past 50-80 MB/s, but with large files (1 GB) I get about 200 MB/s write and 200 MB/s read, each varying up and down by around 20 MB/s, and with 16 GB files I get higher reads and writes, up near the 250 MB/s range.

I ran this same array on the SAS card and on my integrated ICH10 SATA ports. I won't list all the numbers, but it 'feels' faster, and my large-file benchmarks trend a bit faster on the SAS JBOD setup (maybe 5-10%).

This is nowhere near the file-transfer performance of running a RAID 6 on the card's integrated IOP, where cached data reaches the 1500 MB/s range and this particular array nets 500 MB/s for batches of large or small files. Then again, that is to be expected with ZFS.
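For anyone who wants to reproduce the large-file numbers, a plain dd streaming test along these lines is one way to do it. The path is hypothetical (ZEVO mounts pools under /Volumes), compression should be off when writing from /dev/zero, and for an uncached read you need a fresh import or a file larger than RAM:

Code:
$ dd if=/dev/zero of=/Volumes/tank/test.bin bs=1m count=16384   # 16 GiB sequential write
$ dd if=/Volumes/tank/test.bin of=/dev/null bs=1m               # sequential read back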

Re: Community Edition 16TB Limit

Post by mkush » Sun Sep 30, 2012 9:44 pm

audiophil, is there any way you could test a single pool that has (1) more than 16 TB of physical storage but (2) less than 16 TB of usable storage due to RAIDZ/Z2 parity loss? That is the particular case I need to know about.
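If nobody has that many spare disks lying around, maybe a pool of that shape could be faked with sparse file vdevs, assuming ZEVO accepts file-backed vdevs the way other ZFS implementations do. Paths, sizes, and the pool name below are just placeholders, and this is for throwaway experiments only, never real data:

Code:
$ for i in 1 2 3 4 5 6; do mkfile -n 4096g /tmp/vdev$i; done    # six sparse ~4 TB files
$ sudo zpool create testpool raidz2 /tmp/vdev[1-6]              # ~24 TB raw, ~16 TB usable
$ zpool list testpool
$ sudo zpool destroy testpool && rm /tmp/vdev[1-6]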
