Release 2.0.0 send/recv b0rkage

I upgraded the machine hosting my main pools to 2.0.0 today (the host runs Catalina) and found that the scripts I use for backups via send/recv no longer work: they fail with "bad magic number". I rolled back to 1.9.4 and verified that the scripts do indeed work with the older version. 2.0.0 on Big Sur exhibits the same issue.
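For context, the scripts boil down to the usual recursive snapshot-and-replicate pattern; the names below are illustrative placeholders rather than my real pools:
Code:
# Illustrative shape of the failing backup step (pool/dataset names are placeholders):
SNAP="backup-$(date +%Y%m%d)"
sudo zfs snapshot -r "tank/data@${SNAP}"
sudo zfs send -R "tank/data@${SNAP}" | sudo zfs receive -Fduvs backup/data
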
Simple test case using one pool and send/recv between two different datasets.
#1 Setup and run with 1.9.4 on Catalina:
Code:
user@Lazarus ~ % sudo zpool create -o ashift=12 -O compression=lz4 -O casesensitivity=sensitive -O normalization=formD Empty /dev/disk2
user@Lazarus ~ % sudo zfs create Empty/Dataset1
user@Lazarus ~ % sudo zfs create Empty/Dataset2
user@Lazarus ~ % sudo zfs create Empty/Dataset1/Files
user@Lazarus ~ % sudo zfs create -V 100G Empty/Dataset1/ZVol
user@Lazarus ~ % sudo touch /Volumes/Empty/Dataset1/Foo
user@Lazarus ~ % sudo touch /Volumes/Empty/Dataset1/Files/Bar
user@Lazarus ~ % sudo zfs snapshot -r Empty/Dataset1@snapshot
user@Lazarus ~ % sudo zfs list -t all
NAME                            USED  AVAIL  REFER  MOUNTPOINT
Empty                           100G   799G   460K  /Volumes/Empty
Empty/Dataset1                  100G   799G   444K  /Volumes/Empty/Dataset1
Empty/Dataset1@snapshot            0      -   444K  /Volumes/Empty/Dataset1/.zfs/snapshot/snapshot
Empty/Dataset1/Files            444K   799G   444K  /Volumes/Empty/Dataset1/Files
Empty/Dataset1/Files@snapshot      0      -   444K  /Volumes/Empty/Dataset1/Files/.zfs/snapshot/snapshot
Empty/Dataset1/ZVol             100G   899G    56K  -
Empty/Dataset1/ZVol@snapshot       0      -    56K  /Volumes/Empty/Dataset1/ZVol/.zfs/snapshot/snapshot
Empty/Dataset2                  460K   799G   460K  /Volumes/Empty/Dataset2
user@Lazarus ~ % sudo zfs send -R Empty/Dataset1@snapshot | sudo zfs receive -Fduvs Empty/Dataset2
receiving full stream of Empty/Dataset1@snapshot into Empty/Dataset2/Dataset1@snapshot
received 1.05M stream in 1 seconds (1.05M/sec)
receiving full stream of Empty/Dataset1/ZVol@snapshot into Empty/Dataset2/Dataset1/ZVol@snapshot
received 4.16K stream in 1 seconds (4.16K/sec)
receiving full stream of Empty/Dataset1/Files@snapshot into Empty/Dataset2/Dataset1/Files@snapshot
received 1.05M stream in 1 seconds (1.05M/sec)
user@Lazarus ~ % sudo zpool export Empty
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/Empty/Dataset2'
Unmount successful for /Volumes/Empty/Dataset2
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/Empty/Dataset1/Files'
Unmount successful for /Volumes/Empty/Dataset1/Files
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/Empty/Dataset1'
Unmount successful for /Volumes/Empty/Dataset1
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/Empty'
Volume Empty on disk3s1 unmounted
Attempting to eject volume 'Empty/Dataset2/Dataset1/ZVol'
Running process: '/usr/sbin/diskutil' 'unmountDisk' '/dev/disk5'
Unmount of all volumes on disk5 was successful
Attempting to eject volume 'Empty/Dataset1/ZVol'
Running process: '/usr/sbin/diskutil' 'unmountDisk' '/dev/disk4'
Unmount of all volumes on disk4 was successful
Happy days!
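(A quick way to double-check that the receive completed before exporting is to list the snapshots on the destination; illustrative command, output omitted:)
Code:
# Confirm the three snapshots landed under Empty/Dataset2:
sudo zfs list -r -t snapshot Empty/Dataset2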

#2 Then on 2.0.0 on Big Sur (same result on Catalina with 2.0.0), just moving the same pool over to the other machine (I've also tested with a pool created fresh under 2.0.0 - same breakage):
Code:
user@apotheosis ~ % sudo zpool import Empty
do_mount calling mount with fstype zfs, rpath /Volumes/Empty, fspec /dev/disk3s1, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 1
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset2, fspec Empty/Dataset2, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset1, fspec Empty/Dataset1, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset1/Files, fspec Empty/Dataset1/Files, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset2/Dataset1, fspec Empty/Dataset2/Dataset1, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset2/Dataset1/Files, fspec Empty/Dataset2/Dataset1/Files, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
user@apotheosis ~ % sudo zfs list -t all
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
Empty                                    200G   699G   480K  /Volumes/Empty
Empty/Dataset1                           100G   699G   500K  /Volumes/Empty/Dataset1
Empty/Dataset1@snapshot                  192K      -   444K  -
Empty/Dataset1/Files                     632K   699G   464K  /Volumes/Empty/Dataset1/Files
Empty/Dataset1/Files@snapshot            168K      -   444K  -
Empty/Dataset1/ZVol                      100G   799G    56K  -
Empty/Dataset1/ZVol@snapshot               0B      -    56K  -
Empty/Dataset2                           100G   699G   468K  /Volumes/Empty/Dataset2
Empty/Dataset2/Dataset1                  100G   699G   484K  /Volumes/Empty/Dataset2/Dataset1
Empty/Dataset2/Dataset1@snapshot         180K      -   444K  -
Empty/Dataset2/Dataset1/Files            664K   699G   484K  /Volumes/Empty/Dataset2/Dataset1/Files
Empty/Dataset2/Dataset1/Files@snapshot   180K      -   444K  -
Empty/Dataset2/Dataset1/ZVol             100G   799G    56K  -
Empty/Dataset2/Dataset1/ZVol@snapshot      0B      -    56K  -
user@apotheosis ~ % sudo zfs destroy -r Empty/Dataset2/Dataset1
retry
Unmount successful for /Volumes/Empty/Dataset2/Dataset1/Files
retry
Unmount successful for /Volumes/Empty/Dataset2/Dataset1
retry
user@apotheosis ~ % sudo zfs list -t all
NAME                            USED  AVAIL  REFER  MOUNTPOINT
Empty                           100G   799G   480K  /Volumes/Empty
Empty/Dataset1                  100G   799G   500K  /Volumes/Empty/Dataset1
Empty/Dataset1@snapshot         192K      -   444K  -
Empty/Dataset1/Files            632K   799G   464K  /Volumes/Empty/Dataset1/Files
Empty/Dataset1/Files@snapshot   168K      -   444K  -
Empty/Dataset1/ZVol             100G   899G    56K  -
Empty/Dataset1/ZVol@snapshot      0B      -    56K  -
Empty/Dataset2                  468K   799G   468K  /Volumes/Empty/Dataset2
user@apotheosis ~ % sudo zfs send -R Empty/Dataset1@snapshot | sudo zfs receive -Fduvs Empty/Dataset2
libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 1
libzfs_macos_wrapfd: readfd 7 (0)
libzfs_macos_wrapfd: writefd 8 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(7) write(1)
receiving full stream of Empty/Dataset1@snapshot into Empty/Dataset2/Dataset1@snapshot
libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 0
libzfs_macos_wrapfd: readfd 6 (0)
libzfs_macos_wrapfd: writefd 7 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(0) write(7)
libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 8
libzfs_macos_wrapfd: readfd 7 (0)
libzfs_macos_wrapfd: writefd 9 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(7) write(8)
libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 9
libzfs_macos_wrapfd: readfd 7 (0)
libzfs_macos_wrapfd: writefd 10 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(7) write(9)
received 1.05M stream in 1 seconds (1.05M/sec)
cannot receive: invalid stream (bad magic number)
warning: cannot send 'Empty/Dataset1/Files@snapshot': signal received
user@apotheosis ~ % sudo zfs list -t all
NAME                               USED  AVAIL  REFER  MOUNTPOINT
Empty                              100G   799G   480K  /Volumes/Empty
Empty/Dataset1                     100G   799G   500K  /Volumes/Empty/Dataset1
Empty/Dataset1@snapshot            192K      -   444K  -
Empty/Dataset1/Files               632K   799G   464K  /Volumes/Empty/Dataset1/Files
Empty/Dataset1/Files@snapshot      168K      -   444K  -
Empty/Dataset1/ZVol                100G   899G    56K  -
Empty/Dataset1/ZVol@snapshot         0B      -    56K  -
Empty/Dataset2                     912K   799G   468K  /Volumes/Empty/Dataset2
Empty/Dataset2/Dataset1            444K   799G   444K  /Volumes/Empty/Dataset2/Dataset1
Empty/Dataset2/Dataset1@snapshot     0B      -   444K  -
Sad times!
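For anyone digging in: "bad magic number" means the receive side rejected the header of a substream, i.e. the DRR_BEGIN record that starts each stream in the -R package didn't carry the expected magic value. That fits what's shown above: the first substream is received fine, then the header of the next one is rejected. One way to check whether the send side alone produces a well-formed package is to run it through zstreamdump instead of zfs receive (a sketch, assuming the 2.0.0 port ships zstreamdump; I haven't captured its output here):
Code:
# Dump the record headers of the -R stream instead of receiving it:
sudo zfs send -R Empty/Dataset1@snapshot | zstreamdump -v | head -n 50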

However, could this be related to nested datasets (recursive send)? If I send just a deeper dataset/ZVol on its own, it "works" (even though it still reports "cannot receive: failed to read from stream"):
Code:
user@apotheosis ~ % sudo zfs send -R Empty/Dataset1/Files@snapshot | sudo zfs receive -Fduvs Empty/Dataset2
libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 1
libzfs_macos_wrapfd: readfd 7 (0)
libzfs_macos_wrapfd: writefd 8 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(7) write(1)
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset2/Dataset1, fspec Empty/Dataset2/Dataset1, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
receiving full stream of Empty/Dataset1/Files@snapshot into Empty/Dataset2/Dataset1/Files@snapshot
libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 0
libzfs_macos_wrapfd: readfd 6 (0)
libzfs_macos_wrapfd: writefd 7 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(0) write(7)
loop exit
pipe_io_relay: thread done: 1094032 bytes
loop exit
pipe_io_relay: thread done: 1094032 bytes
received 1.04M stream in 1 seconds (1.04M/sec)
cannot receive: failed to read from stream
user@apotheosis ~ % sudo zfs list -t all
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
Empty                                     100G   799G   480K  /Volumes/Empty
Empty/Dataset1                            100G   799G   500K  /Volumes/Empty/Dataset1
Empty/Dataset1@snapshot                   192K      -   444K  -
Empty/Dataset1/Files                      632K   799G   464K  /Volumes/Empty/Dataset1/Files
Empty/Dataset1/Files@snapshot             168K      -   444K  -
Empty/Dataset1/ZVol                       100G   899G    56K  -
Empty/Dataset1/ZVol@snapshot                0B      -    56K  -
Empty/Dataset2                           1020K   799G   468K  /Volumes/Empty/Dataset2
Empty/Dataset2/Dataset1                   552K   799G   108K  /Volumes/Empty/Dataset2/Dataset1
Empty/Dataset2/Dataset1/Files             444K   799G   444K  /Volumes/Empty/Dataset2/Dataset1/Files
Empty/Dataset2/Dataset1/Files@snapshot      0B      -   444K  -
user@apotheosis ~ % sudo zfs mount Empty/Dataset2/Dataset1/Files
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset2/Dataset1/Files, fspec Empty/Dataset2/Dataset1/Files, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
user@apotheosis ~ % ls -l /Volumes/Empty/Dataset2/Dataset1/Files
total 1
-rw-r--r-- 1 root wheel 0 3 Jan 13:57 Bar
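So, as a stopgap, one can script around -R by sending each dataset in the hierarchy individually. A rough, untested sketch (note this loses -R's handling of properties and clones, and assumes dataset names contain no whitespace):
Code:
# Untested workaround sketch: replicate each dataset under Empty/Dataset1 as its
# own stream instead of one -R package. receive -d rebuilds the Dataset1/...
# hierarchy under Empty/Dataset2, and zfs list -r emits parents before children.
for ds in $(sudo zfs list -H -o name -r Empty/Dataset1); do
  sudo zfs send "${ds}@snapshot" | sudo zfs receive -Fduvs Empty/Dataset2
done
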
Anyone else running up against this?

Cheers!
James