Hey
I've been wondering about this. Having used RAID arrays, mostly mdadm-style, for almost ten years before going ZFS, I "grew up" with the traditional UNIX separation of layers, where the filesystem knew nothing about the underlying block devices and didn't need to care.
Then along came ZFS, and a good ten years ago I created my first zpool, adhering to the rule of giving it whole drives rather than partitions. That made sense given the docs.
Drives did fail, as they do, and I discovered that not all drives are created equal: nominally same-size drives aren't actually the same size, and some are smaller. Also (relevant to my current concerns running ZFS on USB drives), some USB controllers steal sectors and report a smaller capacity to the OS. The lesson I learned: assuming a replacement drive will be at least as large as the drive it replaces is not a safe assumption.
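For what it's worth, you can check exactly how many bytes a controller exposes before trusting a drive in a pool. A quick sketch, with hypothetical device names:

    # Linux: exact size in bytes as reported to the OS
    blockdev --getsize64 /dev/sdb
    # macOS: "Disk Size" shows the byte count in parentheses
    diskutil info disk2 | grep -i "Disk Size"

Two "identical" drives behind different USB bridges can report different byte counts here, which is exactly the trap when a resilver needs every sector of the old drive's size.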
When I created my new zpool recently, I therefore partitioned the drives with gdisk and left a few hundred MB unused at the end, to allow for slightly smaller replacements and for controllers that steal space. So it's been running on partitions, and I've had a ton of problems, though I attribute most of them not to this setup but to it being a USB-driven pool.
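Concretely, the kind of layout I mean (a sketch using sgdisk, gdisk's scriptable sibling, with hypothetical device and pool names; the 256 MiB of headroom is an arbitrary figure):

    # wipe any old partition tables (destructive!)
    sgdisk --zap-all /dev/sdb
    # one partition from the default start offset, ending 256 MiB
    # before the end of the disk; BF01 is the Solaris/Mac ZFS type code
    sgdisk -n 1:0:-256M -t 1:BF01 /dev/sdb
    # build the pool on the partition, not the whole disk
    zpool create tank /dev/sdb1

When a replacement drive comes up a little short, that unused tail is what lets zpool replace succeed.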
Still, seeing my new strategy sometimes end with many files missing after a hard reboot, or an external pool of two drives in a USB bay still being written to minutes after I exported the pool, leaves me wondering how true the rule remains that you should give ZFS whole drives, not partitions, when you create a pool.
Especially on a Mac: I originally adopted ZFS via the FUSE variant on Linux, so my initial experience was of something of a hack, and in my mind I still kind of think of O3X as something of a hack too.