New to OpenZFS on OS X (or ZFS in general)? Ask your questions here!

Setting up ZFS: 24GB RAM enough for de-dupe 4TB ZFS?

Postby Sharko » Thu May 12, 2016 12:51 pm

So... I've been curious about ZFS for a while, and I bought a 2010 base model Mac Pro to play around with. The sweet spot for memory cost per GB seems to be 8GB sticks, so I figured I would put 3 of those in, for a total of 24 GB of RAM. I thought I would dedicate 3 of the SATA sleds to ZFS, and keep one as HFS+ (may eventually do a fusion drive with an SSD on a PCI card). I would like to be able to do snapshots of the other computers in the house as they back up to this machine with Carbon Copy Cloner, so it sounds like the de-duplication feature would really help keep the actual storage utilization down. I've seen it mentioned that one should budget 5GB RAM per TB stored when using de-duplication; if I were to put 2TB disks in each of the three ZFS sleds would I be getting up to 4TB useful storage, and would the 24GB RAM likely be enough?
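
For what it's worth, here is my back-of-envelope math, plus a way I've read one can estimate the dedup table without actually enabling it ("tank" is just a placeholder pool name):

    # Rule of thumb: 4 TB x 5 GB/TB = ~20 GB of RAM for the dedup
    # table (DDT), which would leave only ~4 GB of my 24 GB for
    # everything else.
    #
    # zdb can simulate dedup on an existing pool and print a DDT
    # histogram plus a projected dedup ratio; at the roughly 320
    # bytes per unique block I've seen quoted, that histogram also
    # gives an estimate of the table size:
    sudo zdb -S tank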

Any strong recommendations on whether o3x runs better on Yosemite or El Capitan?

Thanks for your advice,

Kurt
Sharko
 
Posts: 228
Joined: Thu May 12, 2016 12:19 pm

Re: Setting up ZFS: 24GB RAM enough for de-dupe 4TB ZFS?

Postby Brendon » Thu May 12, 2016 1:41 pm

"Friends don't let Friends dedup"

- Brendon
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm

Re: Setting up ZFS: 24GB RAM enough for de-dupe 4TB ZFS?

Postby Sharko » Thu May 12, 2016 6:21 pm

Hmm, that's an interesting response. I'm curious why you say that, though; specifically, does the system slow to a crawl, is it more likely to become corrupted and lose data, or ??? Obviously, it is a memory-hogging feature; does it just not play nice with OS X?

Kurt
Sharko
 
Posts: 228
Joined: Thu May 12, 2016 12:19 pm

Re: Setting up ZFS: 24GB RAM enough for de-dupe 4TB ZFS?

Postby lundman » Thu May 12, 2016 6:45 pm

It's a memory-hogging feature that only pays off with specific workloads where the data is very similar: VMs, that sort of thing. It is a memory-intensive design from an age when storage was expensive. Now, though, storage is cheap :)

Oh, and you can't just turn it off either: setting dedup=off only affects new writes, so blocks that were already deduplicated stay that way until they are rewritten. So you should be sure.

Yes, 24GB will be enough for 4TB.
lundman
 
Posts: 1334
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: Setting up ZFS: 24GB RAM enough for de-dupe 4TB ZFS?

Postby Sharko » Thu May 12, 2016 8:06 pm

Thank you for the clarification. Perhaps if I explain my intended use, it will be clear whether de-duplication is worth it or not; to me it looks like a lot of duplication, but maybe I'm missing something here.

What I was planning to do was back up each computer to a specific ZFS dataset as a clone operation using rsync or Carbon Copy Cloner. Then, after each clone operation has completed, I would take a snapshot of that dataset. My thinking was that, with deduplication turned on, each snapshot would really only be as big as the unique data that changed between clones. Since not much data changes week to week, each snapshot would be a 95-99% duplicate of the preceding week's data. In essence, this would be a kind of roll-your-own Time Machine using ZFS.
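
Roughly what I have in mind, sketched with made-up pool, dataset, and snapshot names:

    # one dataset per machine
    sudo zfs create tank/backups/imac

    # clone that machine's disk into it
    rsync -a --delete /Volumes/iMacHD/ /Volumes/tank/backups/imac/

    # then freeze that state with a dated snapshot
    sudo zfs snapshot tank/backups/imac@2016-05-12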

The gap in my knowledge is this: does a snapshot already consist only of the changed data? Section 20.4.5 at this link seems to suggest so:

http://www.allanjude.com/bsd/zfs-zfs.html

So, maybe deduplication isn't necessary to save space if you're just using it for snapshots?
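
If that's right, I suppose the snapshot listing would confirm it; something like this (hypothetical names again), where USED on each snapshot is only the space unique to it, not a full copy:

    zfs list -r -t snapshot -o name,used,refer tank/backups/imac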

Thank you for your kind help.

Kurt
Sharko
 
Posts: 228
Joined: Thu May 12, 2016 12:19 pm

Re: Setting up ZFS: 24GB RAM enough for de-dupe 4TB ZFS?

Postby lundman » Thu May 12, 2016 10:57 pm

If you do backups with rsync and then take snapshots, the snapshots will only contain the differences. You can also send snapshots incrementally, each based on the previous one, but I don't think you are using zfs send anywhere.
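
For the record, incremental send looks like this (dataset and snapshot names made up): a full send of the first snapshot, then incrementals of just the differences:

    zfs send tank/backups/imac@week1 | zfs receive backuppool/imac
    zfs send -i @week1 tank/backups/imac@week2 | zfs receive backuppool/imac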

You will probably get some dedup between each machine's initial (epoch) backup, but the incremental differences probably won't. I would probably just set compression=gzip-9 and not use dedup. But you could always try dedup the first week, then re-create everything the second week and see how they compare. (Since it's backups, you presumably have the option to start from scratch.)
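
A sketch of that comparison, with placeholder pool/dataset names:

    # week 1: dedup on
    sudo zfs set dedup=on tank/backups
    # ...run the backups, then see what dedup actually bought you:
    zpool list -o name,size,allocated,free,dedupratio tank

    # week 2: recreate the datasets with dedup off, gzip-9 instead
    sudo zfs set dedup=off tank/backups
    sudo zfs set compression=gzip-9 tank/backups
    zfs get compressratio tank/backups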
lundman
 
Posts: 1334
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: Setting up ZFS: 24GB RAM enough for de-dupe 4TB ZFS?

Postby s_mcleod » Fri Sep 23, 2016 10:49 pm

I really don't know much about ZFS, but I had dedupe enabled on a small 8TB RAID mirror on a machine with only 32GB of RAM, and I didn't notice any issues at the time.
s_mcleod
 
Posts: 7
Joined: Fri Sep 23, 2016 10:14 am

Re: Setting up ZFS: 24GB RAM enough for de-dupe 4TB ZFS?

Postby Brendon » Sat Sep 24, 2016 2:30 am

@s_mcleod

It is a well-known tenet of ZFS use that deduplication is a specialist configuration that should only be employed deliberately, by those who (a) know it will benefit them and (b) understand the consequences. It is certainly not just about RAM, although it is important to ensure that there is sufficient RAM to keep the DDT (the deduplication table) in main memory rather than on disk.
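
For example, on a pool that already has dedup enabled you can check how large the DDT actually is (pool name illustrative):

    # -D prints dedup table statistics, including entry counts and
    # in-core size; a DDT that does not fit in RAM spills to disk,
    # and then every write pays random reads to look entries up.
    zpool status -D tank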

- Brendon
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm

