The confusing part is that I've been using ZFS for a while now with similar settings. Some time ago I replaced my plain HFS+ Time Machine backup drive with an HFS+-formatted encrypted ZVOL with ZSTD compression, running on a two-disk mirror, and I've never noticed big CPU spikes like this during backups. Then again, maybe Time Machine is just too slow for the spikes to be noticeable?
The dataset where the spikes occur is also encrypted with ZSTD compression, running as a plain ZFS dataset (with com.apple.mimic=hfs), on two unmirrored disks (no redundancy yet), so I don't see anything different that should account for such a big spike in CPU use.
For reference, here are the non-default ZFS properties for the dataset in question:
NAME PROPERTY VALUE SOURCE
zdata/media/Queue recordsize 1M local
zdata/media/Queue mountpoint /Users/media/Downloads/Queue local
zdata/media/Queue compression zstd inherited from zdata
zdata/media/Queue readonly off inherited from zdata/media
zdata/media/Queue xattr sa inherited from zdata
zdata/media/Queue copies 1 local
zdata/media/Queue dnodesize auto inherited from zdata
zdata/media/Queue relatime on inherited from zdata
zdata/media/Queue com.apple.browse on inherited from zdata
zdata/media/Queue com.apple.ignoreowner off inherited from zdata
zdata/media/Queue com.apple.mimic hfs inherited from zdata
zdata/media/Queue com.apple.devdisk on inherited from zdata
Changing compression to LZ4 or turning it off entirely didn't seem to make any meaningful difference to CPU usage.
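For what it's worth, here's the kind of stand-in experiment I used to convince myself that compressor choice alone shouldn't explain the spike. zstd and lz4 aren't in the Python standard library, so this uses zlib at two effort levels purely to illustrate how compressor settings shift CPU time on the same data; the in-kernel ZFS numbers will of course differ:

```python
import os
import time
import zlib

# Stand-in experiment: zlib at two effort levels shows how compressor
# choice shifts CPU time for identical input. Half random bytes
# (incompressible) plus half zeros (highly compressible) roughly
# mimics mixed media data.
data = os.urandom(512 * 1024) + bytes(512 * 1024)  # 1 MiB total

for level in (1, 9):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(out)
    print(f"level {level}: {ratio:.2f}x smaller in {elapsed * 1000:.1f} ms")
```

Even the heavy setting only costs milliseconds per megabyte in userspace, which is why I didn't expect compression to be the bottleneck in the first place.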
In case it matters, I'm using rsync to perform the copy, copying all attributes (rsync -auishxAXNHP); however, the target dataset was empty when I started, so rsync shouldn't really be doing anything extra, i.e. there should be no checksumming of existing files or the like. Copying in the Finder also exhibits the same CPU spike when the files are big enough.
I can't fault the transfer speed, as it's hovering around 170-200 MB/s, and the two drives that make up the pool are rated for around 120 MB/s write each, so that's near enough to their theoretical maximum.
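As a rough sanity check on those numbers (assuming writes are striped evenly across both drives, which is how I understand a two-disk non-redundant pool behaves):

```python
# Rough throughput sanity check for a two-disk striped pool.
# Assumption: writes are spread evenly across both drives and each
# sustains ~120 MB/s sequential writes (vendor rating).
drive_write_mb_s = 120
num_drives = 2
theoretical_max = drive_write_mb_s * num_drives  # 240 MB/s aggregate

observed_peak = 200
utilisation = observed_peak / theoretical_max
print(f"Theoretical max: {theoretical_max} MB/s")
print(f"Observed peak utilisation: {utilisation:.0%}")  # 83%
```

So at 170-200 MB/s the pool is running at roughly 70-83% of its theoretical aggregate write speed, which seems healthy.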
Of course I don't anticipate so much write activity under normal use, but the spike in CPU usage seems extremely high; does OpenZFS on macOS support hardware encryption? What is the impact of having datasets with different encryption keys?
Update: increasing the record size to 1M actually seems to have helped; CPU utilisation for kernel_task is now around 500%. Could the issue be checksum generation? That still seems excessive, though, as surely checksums can be hardware accelerated as well?
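To get a feel for whether checksumming alone could plausibly eat that much CPU, I tried a crude single-threaded userspace benchmark. This is only a ballpark: ZFS's in-kernel checksums (fletcher4 by default, with SHA-based MACs on encrypted datasets) use SIMD-accelerated implementations, so the real per-core cost should be lower than this, not higher:

```python
import hashlib
import os
import time

# Crude single-core gauge of SHA-256 hashing cost in userspace.
# One 1 MiB buffer matches my recordsize=1M setting; hashing it
# repeatedly simulates checksumming a stream of records.
block = os.urandom(1 << 20)  # 1 MiB
n_blocks = 256               # 256 MiB of data in total

start = time.perf_counter()
for _ in range(n_blocks):
    hashlib.sha256(block).digest()
elapsed = time.perf_counter() - start
print(f"~{n_blocks / elapsed:.0f} MiB/s hashed on one core")
```

If one core can hash on the order of hundreds of MiB/s even in userspace Python, checksums by themselves shouldn't need 500% of kernel_task at a 200 MB/s write rate, which makes me suspect the encryption path instead.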