Encryption and APFS


Postby Haravikk » Sun Aug 04, 2019 4:25 am

So I've experimented with ZFS on Mac off and on, but I've never actually had a completely ideal storage setup for using it seriously. However, I'm now looking at upgrading to either one of the new Mac Minis or the new Mac Pro; obviously the Mac Pro is preferred for ECC, but I'm wary that Apple's going to punch me in the guts on price here in the UK, especially when entry-level Mac Pros have never been good value for money (not that the Mac Minis are great either; the non-upgradeable storage costs are insane!)

Anyway, now that I'm looking at ZFS seriously again, I've a few questions, and am hoping for some recommendations.

Encryption
First question: what is the current state of native encryption? I was very excited when I heard that this feature was getting close. While I'm not looking to upgrade for a few months yet, I'm curious how production-ready the feature is considered to be, as it would be a huge boon to eliminate the need to layer ZFS onto Core Storage, with all the headaches that entails!

Secondly, on encryption: what is the best way to automatically decrypt volumes? Since my system volume will be encrypted (and probably only bootable using my admin account), I'm happy for my ZFS devices to be unlocked automatically; does OpenZFS support doing this using the system keychain? If not, what are the best options for auto-mounting encrypted ZFS volumes on a Mac?

Setup
My intention is to use the new machine's fast internal storage for my system volume, which I'll probably just leave as APFS to take advantage of hardware encryption, and because bootable ZFS sounds like a lot of added complexity. I'm looking to get around a 1 TB internal drive, but may only allocate around half of that at most for the system volume (probably less), leaving the rest of the drive available as a cache for ZFS.

I will then have an external array for main storage, currently planned as four 4 TB HDDs, probably WD Reds or similar. I'll be moving all my main data onto this volume, which includes a variety of content including raw HD video (possibly 4K in future); while performance isn't strictly a concern (I'm not really a professional video editor, I just work with it sometimes), more is always better.

I'll also have a second external array for backup, likely also four disks for now but with higher capacity, performance being less of a concern (WD Greens or similar, maybe?). My intention is to create an HFS+-formatted zvol so that Time Machine can back up the system volume, and then I'll use snapshots and zfs send to back up my main storage (my plan is to use a script to ensure that the two don't run at the same time, but one after the other).


Does this seem reasonable/sensible?

I'm interested in recommendations for how I should set up the drives for each array. For example, I was thinking of having my main storage array be four drives in two mirrored pairs, since this should keep performance high while giving me a minimum of one disk's worth of redundancy. I'm a little wary about this, though, as I'm given to understand that with drive capacities being what they are, it may not be possible to guarantee that an unrecoverable read error doesn't occur while replacing a failed drive. Should I be aiming for more redundancy, and in what configuration? Would five drives be significantly better?

For the backup array I've been considering RAID-Z2: while this will still cut the storage in half, it should give a full two disks of redundancy. Is this worth doing? For my backup volume I'm not so concerned about performance. My intention is to set up an HFS+-formatted zvol for Time Machine to back up my system volume, and then use snapshots and zfs send to back up my main storage array. On the off chance that Time Machine is updated to run on APFS, how is this likely to impact a zvol (i.e. is APFS on a zvol counter-productive, since both support snapshots and copy-on-write)?

In terms of upgrading, can I just swap drives out for larger ones (i.e. swap one, wait for the array to resilver, then swap another, and so on until all are the same)? I'm not really looking to add disks to the arrays; they'll probably be four or five bays, six max.
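From what I've read, the per-drive procedure would be something like the following; this is untested on my part, and 'tank' and the disk names are just placeholders:

# Allow the pool to grow once every device in a vdev has been upgraded
sudo zpool set autoexpand=on tank

# For each drive in turn: replace it, then wait for the resilver to
# complete before touching the next one
sudo zpool replace tank disk2 disk6
sudo zpool status tank   # shows resilver progress

# If autoexpand was off during the swaps, expand each new device manually
sudo zpool online -e tank disk6

Is that roughly right?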

Lastly, I'm interested in recommendations on how to set up caching; like I say, I'm aiming for 1 TB for the internal SSD, and shouldn't need loads of space for the system volume (I think 256 GB should be plenty; as a rule I tend not to install apps into /Applications, but into per-user ~/Applications folders instead, wherever possible). Also, I'm curious what the best way to handle caching will be; for example, if my ZFS devices are encrypted, is it safe to just use an internal, unencrypted partition, or could/should I use APFS volumes somehow for this? Is it worth using write caching (a SLOG device, I think it's called) for either of my ZFS devices? On the off chance that Apple doesn't price the Mac Pro completely outside my budget, I'll likely get it with two internal SSDs (assuming Apple enables bootable APFS RAID-1); how would this change how I should configure caching?


Sorry if some of these questions seem obvious; there are also a lot of them, and some are very open-ended. I'm very, very far from an expert; I'd consider myself more of an enthusiast, in that I know various things out of interest and understand them in theory. But since a lot of these choices presumably won't be easy to change later, it's important I have as clear a plan as I can make :)
Haravikk
 
Posts: 75
Joined: Tue Mar 17, 2015 4:52 am

Re: Encryption and APFS

Postby Sharko » Tue Aug 06, 2019 11:38 am

Regarding native encryption, I've been using it for the past few months on all my external disks and disk arrays. It seems to work pretty flawlessly, once I understood that one sets up child datasets under an encrypted dataset. As far as I know, there aren't any really slick solutions to the problem of supplying the passphrase so that data can be unlocked automatically; you could embed it in a script that gets executed via launchd when the disk is detected by the system, but a lot of people would regard that as insecure. I just boot into an admin account and run 'sudo zpool import -a' and 'sudo zfs mount -l TANK/ENCRYPTED' to bring up my encrypted datasets.
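For illustration only, and with the same insecurity caveat, the launchd route might look something like this; the label, paths and passphrase here are all invented:

/Library/LaunchDaemons/org.example.zfs-unlock.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.example.zfs-unlock</string>
    <key>ProgramArguments</key>
    <array><string>/usr/local/sbin/zfs-unlock.sh</string></array>
    <!-- launchd runs this job every time a filesystem is mounted -->
    <key>StartOnMount</key>
    <true/>
</dict>
</plist>

/usr/local/sbin/zfs-unlock.sh (owned by root, mode 0700):

#!/bin/sh
# Import the pool if it isn't already present
zpool list TANK >/dev/null 2>&1 || zpool import TANK
# The insecure part: the passphrase is embedded right here
echo 'not-my-real-passphrase' | zfs load-key TANK/ENCRYPTED
zfs mount TANK/ENCRYPTED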

Conventional wisdom on array setup would dictate the 2x2 mirror for your main pool, for decent speed. I don't think there is a huge risk in only having one disk of redundancy, especially considering that you're going to be snapshotting and replicating to a backup pool in a second enclosure. Personally, I wouldn't give up the speed of the mirrors for the additional redundancy of RAID-Z in your main storage.

For the backup pool, yes, I would go RAID-Z2.
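For reference, the two layouts would be created along these lines (the disk identifiers are placeholders; check 'diskutil list' for the real ones):

# Main pool: two mirrored pairs, one disk of redundancy per pair
sudo zpool create tank mirror disk2 disk3 mirror disk4 disk5

# Backup pool: RAID-Z2, survives the loss of any two disks
sudo zpool create backup raidz2 disk6 disk7 disk8 disk9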

As far as caching goes, I've heard Allan Jude run through the numbers, and for most scenarios it doesn't make sense to bring L2ARC or SLOG devices into the mix, at least for generalized use. The reason is that every block cached on a secondary storage device has to have a pointer and some header data stored in RAM, and this displaces data that could otherwise just live in the ARC and be accessed at the highest speed. I think Allan's advice was just to spend your money on RAM and forget secondary caching.
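As a rough back-of-the-envelope illustration (the exact per-record overhead varies between ZFS versions, so treat these numbers as assumptions): if each L2ARC record costs around 70 bytes of ARC header and your average record is 16 KB, then a 500 GB L2ARC holds roughly 31 million records and permanently ties up about 2 GB of RAM just for headers, RAM that could otherwise have been caching hot data directly in the ARC.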
Sharko
 
Posts: 230
Joined: Thu May 12, 2016 12:19 pm

Re: Encryption and APFS

Postby Haravikk » Wed Aug 07, 2019 3:36 am

Sharko wrote:Regarding native encryption, I've been using it for the past few months on all my external disks and disk arrays. It seems to work pretty flawlessly, once I understood that one sets up child datasets under an encrypted dataset.

I need to brush up on my ZFS terminology, but do you mean here that the encryption is at the zpool (collection of physical devices) level, or that you need to create individual zvols with encryption, and sub-volumes within them if you want to share a key?

I should only need one volume for the main pool so hopefully that's easy enough.

For the backup pool, I'll need two volumes I think: one for sending snapshots to, and then I'm thinking of using another, formatted as HFS+, for Time Machine, as this way I can have a hard limit on how much space it will occupy (since it'll be backing up the system volume it's really just macOS plus system-wide apps anyway, so it doesn't need a tonne of version history). If I set up my backup volume with one encrypted zvol, then a child of that formatted for Time Machine, would that be suitable, do you think?

Sharko wrote:you could embed it in a script that gets executed via launchd when the disk is detected by the system, but a lot of people would regard that as insecure. I just boot into an admin account and run 'sudo zpool import -a' and 'sudo zfs mount -l TANK/ENCRYPTED' to bring up my encrypted datasets.

I suppose the question of security depends upon whether the password is embedded in the script or not? If the password were stored in a file accessible only to root, for example, then that'd surely be fine, since the system volume where it's stored is also encrypted? I suppose a better solution might be to make a simple command-line tool rather than a script, as this could then fetch the password from a keychain, in which case access is limited to the tool only (i.e. anything else that's able to run as root isn't automatically guaranteed access, unlike with the loading-from-a-file option).
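Something like this sketch is what I have in mind; completely untested, and the pool and keychain item names are invented:

#!/bin/sh
# Fetch the passphrase from the System keychain, where it would have been
# stored beforehand with something like:
#   sudo security add-generic-password -s zfs-tank -a root -w 'passphrase' \
#       /Library/Keychains/System.keychain
PASS=$(security find-generic-password -s zfs-tank -w /Library/Keychains/System.keychain)

# Import the pool if needed, then load the key and mount
zpool list tank >/dev/null 2>&1 || zpool import tank
echo "$PASS" | zfs load-key tank/encrypted
zfs mount tank/encrypted

That still leaves the key readable to anything running as root, so it's only as good as the rest of the system's security, but at least nothing is sitting in a plain-text file.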

It's solvable anyway; like I say it'll be a little while before I'm actually ready to upgrade, but I'll try to remember and share whatever I end up doing.

Sharko wrote:As far as caching goes, I've heard Allan Jude run through the numbers, and for most scenarios it doesn't make sense to bring L2ARC or SLOG devices into the mix, at least for generalized use. The reason is that every block cached on a secondary storage device has to have a pointer and some header data stored in RAM, and this displaces data that could otherwise just live in the ARC and be accessed at the highest speed. I think Allan's advice was just to spend your money on RAM and forget secondary caching.

Hmm, makes sense; and thinking about it, the bulk of my data isn't going to be commonly accessed anyway. In fact, the most commonly accessed files are probably caches, which are probably best moved off via symlinks since I don't really want to be backing those up; after that it's metadata databases (Spotlight, media indexes etc.), which should be fine in RAM.

In that case I might actually be able to save a bundle by just having less internal storage, especially at Apple's prices! Just taking a quick look at the Mac Mini: by getting 256 GB of internal storage rather than 1 TB, I'd save £360, for which I can easily get 64 GB of RAM, plus change, from a third party, compared to Apple's £900! I love their machines, but man, are their upgrade prices BS. No idea what the Mac Pro pricing will be like yet, but I'm not expecting to be pleasantly surprised ;)
Haravikk
 
Posts: 75
Joined: Tue Mar 17, 2015 4:52 am

Re: Encryption and APFS

Postby Sharko » Wed Aug 07, 2019 11:01 am

Haravikk wrote:I need to brush up on my ZFS terminology, but do you mean here that the encryption is at the zpool (collection of physical devices) level, or that you need to create individual zvols with encryption, and sub-volumes within them if you want to share a key?

I should only need one volume for the main pool so hopefully that's easy enough.

For the backup pool, I'll need two volumes I think: one for sending snapshots to, and then I'm thinking of using another, formatted as HFS+, for Time Machine, as this way I can have a hard limit on how much space it will occupy (since it'll be backing up the system volume it's really just macOS plus system-wide apps anyway, so it doesn't need a tonne of version history). If I set up my backup volume with one encrypted zvol, then a child of that formatted for Time Machine, would that be suitable, do you think?


Native encryption is not done at the pool level; it is done per dataset. A dataset can be thought of as a filesystem mount point, since it is usually accessible in the Finder as a separate external disk. It is sort of dual in nature, though: you can view the top-level pool, and the child datasets show up there as contents within the pool.

Zvols are not the same as datasets; they just provide a raw block storage device to the operating system, which must then format them as either HFS+ or APFS (I've never tried the latter myself) to make them usable. Zvols have a defined size at creation, so yes, one will provide a hard limit for Time Machine.
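For example, creating and formatting one might look like this (the size and the device node are invented; check 'diskutil list' for the real one):

# Create a 500 GB zvol; this is the hard ceiling Time Machine will see
sudo zfs create -V 500G TANK/TM_ZVOL

# The zvol shows up as an ordinary disk device, so format it as journaled HFS+
sudo diskutil eraseDisk JHFS+ TimeMachine /dev/disk5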

So your storage would look something like this:

TANK: your root-level zpool (not encrypted, nor encryptable with native encryption)
TANK/ENCRYPTED: your ZFS-native encrypted dataset that merely serves as a container for the child datasets below it
TANK/ENCRYPTED/BACKUP: the child dataset that receives snapshots from the source, encrypted by inheritance
TANK/TM_ZVOL: your zvol storage for Time Machine, defined size at creation, not ZFS-encrypted but encrypted by Time Machine

Or you could set up:
TANK/ENCRYPTED/TM_ZVOL as a child zvol, encrypted by ZFS inheritance, with no need to turn on encryption in Time Machine

I listed this latter possibility second because, while I believe it's theoretically possible to put a zvol under an encrypted dataset, I haven't actually done it myself.
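For completeness, the encrypted part of that layout would be created along these lines (untested in this exact combination, as I say):

# Encrypted container; you're prompted for the passphrase at creation
sudo zfs create -o encryption=aes-256-gcm -o keyformat=passphrase TANK/ENCRYPTED

# Children inherit the encryption settings and share the parent's key
sudo zfs create TANK/ENCRYPTED/BACKUP
sudo zfs create -V 500G TANK/ENCRYPTED/TM_ZVOL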
Sharko
 
Posts: 230
Joined: Thu May 12, 2016 12:19 pm

