Memory usage, number of files?

All your general support questions for OpenZFS on OS X.


Postby moriz » Mon Aug 17, 2015 2:16 am

I'm backing up about 100 Macs to a pool, using rsync 3.0.9 with --xattrs. The pool is attached to a Mac mini with 16GB RAM. The mini runs all the daily rsyncs, from all the Macs to its attached pool.
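For reference, a nightly run like the one described might look something like this (hostname, user, and paths are hypothetical; the only flag taken from the post is --xattrs):

```shell
# Pull one Mac's home directories into its own directory on the pool,
# preserving hard links (-H) and extended attributes (-X / --xattrs).
rsync -aHX --delete \
    backup@mac01.example.com:/Users/ \
    /Volumes/backuppool/mac01/Users/
```

With 100 Macs this would be wrapped in a loop or launchd jobs, one per machine.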

My impression is the mini copes a lot better since about 1.3.1. My question is, with about 6 million files, am I better off using HFS+ disk images? What's ZFS more sensitive to, the number of files and metadata (so wrapping things inside disk images would reduce that), or the sheer number of blocks used?

This may be moot now as on 1.3.1r2 the RAM usage seems to remain constant, and the mini doesn't freeze like it used to every three weeks or so.
But in the long run I could look at replacing the Mac mini with something which can hold more RAM if necessary, if that's recommended. Any suggestions welcome. Love ZFS.

Background and a couple other issues:

A pool spans 2 vdevs; each vdev is a mirror of 2 x 4TB drives. The hardware is a couple of WD Thunderbolt Duo enclosures containing WD Red drives, connected over Thunderbolt. I'm assuming I can extend the pool by adding more Duos/vdevs later.
Occasionally I do a zfs send to other pools/disks, to copy the backups to other locations.
Each Mac is backed up to its own filesystem in the pool, each filesystem snapshotted daily. About 6 million files in total at present (excluding earlier snapshots).
No deduplication used.
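A sketch of the layout described above, with hypothetical device and pool names (the post doesn't give the actual commands used):

```shell
# Two mirrored vdevs of 2 x 4TB each.
zpool create backuppool \
    mirror disk2 disk3 \
    mirror disk4 disk5

# Growing the pool later means adding another mirrored vdev:
#   zpool add backuppool mirror disk6 disk7

# One filesystem per Mac, snapshotted daily.
zfs create backuppool/mac01
zfs snapshot backuppool/mac01@2015-08-17

# Occasional replication of a snapshot to another pool/disk:
#   zfs send backuppool/mac01@2015-08-17 | zfs receive otherpool/mac01
```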

I was impressed when I destroyed snapshots, going from about 30,000 down to about 3,000, and it kept chugging. I had unfortunately let the pool fill to 90% (urgh). It now reports the pool's FRAG is 21% and CAP is 61%.
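The kind of cleanup and check described can be done like this (pool and filesystem names are hypothetical):

```shell
# Report capacity and fragmentation for the pool.
zpool list -o name,size,cap,frag backuppool

# Destroy a range of snapshots on one filesystem in a single command;
# the % syntax destroys everything from the first named snapshot
# through the second, inclusive.
zfs destroy backuppool/mac01@2015-01-01%2015-07-01
```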

The recently added clearing of the unlinked drain list on first import, however, caused the Mac mini to freeze every couple of hours, and this carried on for a few days until all the lists were down to zero. It scrubs without error. And now it takes about 20 minutes to import the 100 filesystems in the pool. That's not a problem, I'm just curious why that is?

Re: Memory usage, number of files?

Postby lundman » Tue Aug 18, 2015 6:59 pm

ZFS has relatively few limits, and very high ones, compared to HFS+; you will find HFS+ craps out first.

https://en.wikipedia.org/wiki/HFS_Plus
https://en.wikipedia.org/wiki/ZFS

However, ZFS can start to run slower once the pool is very close to full (actual numbers vary; let's say 96%), as it has to split large writes into lots of small ones. I.e., it has to behave the way HFS+ does when writing.

ZFS loves memory, but that memory is mostly used for reading. If you use the pool mainly to write backups to, adding more memory might not yield as big a difference as you hope. Putting the ZIL log onto an SSD might, though; rsync can fsync, after all.
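Moving the ZIL onto an SSD means adding a separate log (SLOG) device to the pool; a sketch with hypothetical device names:

```shell
# Add an SSD as a dedicated log device so synchronous writes
# (e.g. rsync's fsyncs) land on fast media instead of the data vdevs.
zpool add backuppool log disk8

# With two SSDs to spare, the log can be mirrored instead:
#   zpool add backuppool log mirror disk8 disk9
```

Note an SLOG only helps synchronous writes; asynchronous bulk writes still go straight to the data vdevs.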
