"ZFS assertion failed: zap->zap_u.zap_fat.zap_phys->zap_magic == 0x2F52AB2ABULL (0x0 == 0x2f52ab2ab)"
That's pretty clear. The ZAP is the ZFS Attribute Processor; it sits on top of the DMU (as the ZPL, the ZFS POSIX Layer, does) and manages objects that provide an efficient attribute/value data store. The assertion in question checks that the in-core structure it is looking at really is a fat ZAP object by looking for a magic number. Instead of the magic number it sees zero, so the assertion fails (causing a panic).
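If you want to look at the suspect object without risking another panic, zdb is a comparatively safe tool: it reads the pool structures from userland, so a failed assertion merely kills the zdb process instead of the kernel. A sketch, with tank/damaged standing in for your dataset name:

    # dump per-object metadata for the dataset; an intact fat ZAP object
    # is reported along with its block sizes and entry counts
    sudo zdb -dddd tank/damaged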
You can expect fat ZAPs to be in play when opening a dataset -- any dataset -- so this assertion failure is fairly commonly observed, on every ZFS platform, in the presence of on-disk structure corruption, usually after a violent disconnection while the dataset structures were in the process of being updated.
Note that the problem may not be a read error in the strict sense -- that is, what was written is being returned faithfully; it is what was written that is wrong. Scrubs will NEVER detect such a problem, EVER, since a scrub only verifies that blocks still match their checksums. An example case is a single-device pool where the ZIL is slow (slow bus, slow device, lots of traffic); it can also happen with a poor choice of SLOG, where the separate log device is unreliable at pool import time. You could look for ZIL errors in /var/log/system.log for evidence of this. There may also be a transaction group scheduling problem, where a bad assumption somewhere up the ZPL/ZAP stack expects critically related sets of DMU objects to be dealt with in the same TXG commit; those bugs may or may not be cross-platform.
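As for digging through system.log: something like the following will catch most of the obvious complaints. The exact wording of the messages varies between platforms and versions, so treat the pattern as a starting point only:

    # case-insensitive search for ZIL / log-device errors near the crash
    grep -iE 'zil|slog|log device' /var/log/system.log
    # and ask the pool itself whether the log vdev has reported errors
    zpool status -v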
It is highly unlikely that you will recover your damaged dataset; you should restore it from a backup.
However, *if* you want to poke at the dataset, or you have to (please note that 1 April is International Backup Day, and that you should have known-to-be-restorable backups on hand; zfs send/receive to a pool on a different system is a good approach), then, as I noted, an older snapshot (if one exists) may be usable in *read-only* mode.
It would be sensible to set the canmount=off property on the bad dataset, and to create (and mount read-only) clones of older snapshots. You should copy the data out of the dataset (/usr/bin/rsync -avhPEs from/ to/) to a new dataset the moment you have read-only access to it; you should not rely on a rollback or a clone to be free from ZAP problems that may bring down your system again.
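Concretely, that might look like the following; tank/damaged, the snapshot name, and the mountpoints are placeholders for whatever your pool actually contains:

    # stop the damaged dataset from being mounted (and panicking the box)
    sudo zfs set canmount=off tank/damaged
    # see which snapshots exist
    zfs list -t snapshot -r tank/damaged
    # clone the newest snapshot that predates the corruption, read-only
    sudo zfs clone -o readonly=on tank/damaged@pre-crash tank/rescue
    # copy everything out the moment the clone is readable
    /usr/bin/rsync -avhPEs /Volumes/rescue/ /Volumes/safe-copy/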
Finally, note that you can still archive the dataset in question, since zfs send and receive deal in DMU objects without considering their semantics in the upper layers (like the ZAP). The catch is that the archive contains the same bad fat ZAP object, which will trigger this assertion in any pool, on any ZFS platform, the moment it is processed.
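A minimal sketch, assuming an older snapshot exists and that another machine has a pool with room for it; the -u on the receive side keeps the received copy unmounted, so it cannot trip the very assertion you are trying to avoid:

    # stream the dataset (bad ZAP and all) to another system for safekeeping
    sudo zfs send tank/damaged@pre-crash | ssh root@otherhost zfs receive -u backup/damaged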
Since the moment in question is at or very near the mount, be careful. Panics are crashes, and crashes are themselves a form of unpredictable violent disconnection, so they pose some risk to data availability -- especially to any dataset that was mounted read-write, and to non-ZFS filesystems as well, such as your JHFS+ boot partition.