Release 2.0.0 send/recv b0rkage

Release 2.0.0 send/recv b0rkage

Postby Jimbo » Sat Jan 02, 2021 10:22 pm

I upgraded the machine hosting my main pools to 2.0.0 today (the host runs Catalina), and the scripts I use for backups via send/recv no longer work; they fail with "bad magic number". I rolled back to 1.9.4 and verified that the scripts do indeed work with the older version. 2.0.0 on Big Sur exhibits the same issue.
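
For context, the scripts boil down to roughly the following (a minimal sketch with placeholder pool and dataset names, not my real ones):

Code:
#!/bin/sh
# Minimal nightly replication sketch (hypothetical names throughout).
# Assumes an initial full send has already been received on the target.
SRC=Tank/Data            # source dataset (placeholder)
DST=Backup/Data          # target dataset (placeholder)

# The most recent existing snapshot on the source is the incremental base.
PREV=$(zfs list -H -t snapshot -o name -S creation "$SRC" | head -1)
NOW="$SRC@$(date +%Y%m%d-%H%M%S)"

sudo zfs snapshot -r "$NOW"
# Recursive replication stream, incremental from the previous snapshot.
sudo zfs send -R -I "$PREV" "$NOW" | sudo zfs receive -Fduv "$DST"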

Simple test case using one pool and send/recv between two different datasets.

#1 Setup and run on Catalina 1.9.4
Code:
user@Lazarus ~ % sudo zpool create -o ashift=12 -O compression=lz4 -O casesensitivity=sensitive -O normalization=formD Empty /dev/disk2
user@Lazarus ~ % sudo zfs create Empty/Dataset1
user@Lazarus ~ % sudo zfs create Empty/Dataset2
user@Lazarus ~ % sudo zfs create Empty/Dataset1/Files
user@Lazarus ~ % sudo zfs create -V 100G Empty/Dataset1/ZVol   
user@Lazarus ~ % sudo touch /Volumes/Empty/Dataset1/Foo
user@Lazarus ~ % sudo touch /Volumes/Empty/Dataset1/Files/Bar
user@Lazarus ~ % sudo zfs snapshot -r Empty/Dataset1@snapshot
user@Lazarus ~ % sudo zfs list -t all
NAME                            USED  AVAIL  REFER  MOUNTPOINT
Empty                           100G   799G   460K  /Volumes/Empty
Empty/Dataset1                  100G   799G   444K  /Volumes/Empty/Dataset1
Empty/Dataset1@snapshot            0      -   444K  /Volumes/Empty/Dataset1/.zfs/snapshot/snapshot
Empty/Dataset1/Files            444K   799G   444K  /Volumes/Empty/Dataset1/Files
Empty/Dataset1/Files@snapshot      0      -   444K  /Volumes/Empty/Dataset1/Files/.zfs/snapshot/snapshot
Empty/Dataset1/ZVol             100G   899G    56K  -
Empty/Dataset1/ZVol@snapshot       0      -    56K  /Volumes/Empty/Dataset1/ZVol/.zfs/snapshot/snapshot
Empty/Dataset2                  460K   799G   460K  /Volumes/Empty/Dataset2
user@Lazarus ~ % sudo zfs send -R Empty/Dataset1@snapshot | sudo zfs receive -Fduvs Empty/Dataset2
receiving full stream of Empty/Dataset1@snapshot into Empty/Dataset2/Dataset1@snapshot
received 1.05M stream in 1 seconds (1.05M/sec)
receiving full stream of Empty/Dataset1/ZVol@snapshot into Empty/Dataset2/Dataset1/ZVol@snapshot
received 4.16K stream in 1 seconds (4.16K/sec)
receiving full stream of Empty/Dataset1/Files@snapshot into Empty/Dataset2/Dataset1/Files@snapshot
received 1.05M stream in 1 seconds (1.05M/sec)
user@Lazarus ~ % sudo zpool export Empty
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/Empty/Dataset2'
Unmount successful for /Volumes/Empty/Dataset2
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/Empty/Dataset1/Files'
Unmount successful for /Volumes/Empty/Dataset1/Files
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/Empty/Dataset1'
Unmount successful for /Volumes/Empty/Dataset1
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/Empty'
Volume Empty on disk3s1 unmounted
Attempting to eject volume 'Empty/Dataset2/Dataset1/ZVol'
Running process: '/usr/sbin/diskutil' 'unmountDisk' '/dev/disk5'
Unmount of all volumes on disk5 was successful
Attempting to eject volume 'Empty/Dataset1/ZVol'
Running process: '/usr/sbin/diskutil' 'unmountDisk' '/dev/disk4'
Unmount of all volumes on disk4 was successful

Happy days! :)

#2 Then with 2.0.0 on Big Sur (same result on Catalina with 2.0.0), just moving the pool over to the other machine (I've also tested by creating a new pool on 2.0.0 - same breakage):
Code:
user@apotheosis ~ % sudo zpool import Empty                                                                                               
do_mount calling mount with fstype zfs, rpath /Volumes/Empty, fspec /dev/disk3s1, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 1
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset2, fspec Empty/Dataset2, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset1, fspec Empty/Dataset1, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset1/Files, fspec Empty/Dataset1/Files, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset2/Dataset1, fspec Empty/Dataset2/Dataset1, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset2/Dataset1/Files, fspec Empty/Dataset2/Dataset1/Files, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
user@apotheosis ~ % sudo zfs list -t all                                                                                                 
NAME                                     USED  AVAIL     REFER  MOUNTPOINT
Empty                                    200G   699G      480K  /Volumes/Empty
Empty/Dataset1                           100G   699G      500K  /Volumes/Empty/Dataset1
Empty/Dataset1@snapshot                  192K      -      444K  -
Empty/Dataset1/Files                     632K   699G      464K  /Volumes/Empty/Dataset1/Files
Empty/Dataset1/Files@snapshot            168K      -      444K  -
Empty/Dataset1/ZVol                      100G   799G       56K  -
Empty/Dataset1/ZVol@snapshot               0B      -       56K  -
Empty/Dataset2                           100G   699G      468K  /Volumes/Empty/Dataset2
Empty/Dataset2/Dataset1                  100G   699G      484K  /Volumes/Empty/Dataset2/Dataset1
Empty/Dataset2/Dataset1@snapshot         180K      -      444K  -
Empty/Dataset2/Dataset1/Files            664K   699G      484K  /Volumes/Empty/Dataset2/Dataset1/Files
Empty/Dataset2/Dataset1/Files@snapshot   180K      -      444K  -
Empty/Dataset2/Dataset1/ZVol             100G   799G       56K  -
Empty/Dataset2/Dataset1/ZVol@snapshot      0B      -       56K  -
user@apotheosis ~ % sudo zfs destroy -r Empty/Dataset2/Dataset1
retry
Unmount successful for /Volumes/Empty/Dataset2/Dataset1/Files
retry
Unmount successful for /Volumes/Empty/Dataset2/Dataset1
retry
user@apotheosis ~ % sudo zfs list -t all                       
NAME                            USED  AVAIL     REFER  MOUNTPOINT
Empty                           100G   799G      480K  /Volumes/Empty
Empty/Dataset1                  100G   799G      500K  /Volumes/Empty/Dataset1
Empty/Dataset1@snapshot         192K      -      444K  -
Empty/Dataset1/Files            632K   799G      464K  /Volumes/Empty/Dataset1/Files
Empty/Dataset1/Files@snapshot   168K      -      444K  -
Empty/Dataset1/ZVol             100G   899G       56K  -
Empty/Dataset1/ZVol@snapshot      0B      -       56K  -
Empty/Dataset2                  468K   799G      468K  /Volumes/Empty/Dataset2
user@apotheosis ~ % sudo zfs send -R Empty/Dataset1@snapshot | sudo zfs receive -Fduvs Empty/Dataset2
libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 1
libzfs_macos_wrapfd: readfd 7 (0)
libzfs_macos_wrapfd: writefd 8 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(7) write(1)
receiving full stream of Empty/Dataset1@snapshot into Empty/Dataset2/Dataset1@snapshot
libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 0
libzfs_macos_wrapfd: readfd 6 (0)
libzfs_macos_wrapfd: writefd 7 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(0) write(7)
libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 8
libzfs_macos_wrapfd: readfd 7 (0)
libzfs_macos_wrapfd: writefd 9 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(7) write(8)
libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 9
libzfs_macos_wrapfd: readfd 7 (0)
libzfs_macos_wrapfd: writefd 10 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(7) write(9)
received 1.05M stream in 1 seconds (1.05M/sec)
cannot receive: invalid stream (bad magic number)
warning: cannot send 'Empty/Dataset1/Files@snapshot': signal received
user@apotheosis ~ % sudo zfs list -t all                                                             
NAME                               USED  AVAIL     REFER  MOUNTPOINT
Empty                              100G   799G      480K  /Volumes/Empty
Empty/Dataset1                     100G   799G      500K  /Volumes/Empty/Dataset1
Empty/Dataset1@snapshot            192K      -      444K  -
Empty/Dataset1/Files               632K   799G      464K  /Volumes/Empty/Dataset1/Files
Empty/Dataset1/Files@snapshot      168K      -      444K  -
Empty/Dataset1/ZVol                100G   899G       56K  -
Empty/Dataset1/ZVol@snapshot         0B      -       56K  -
Empty/Dataset2                     912K   799G      468K  /Volumes/Empty/Dataset2
Empty/Dataset2/Dataset1            444K   799G      444K  /Volumes/Empty/Dataset2/Dataset1
Empty/Dataset2/Dataset1@snapshot     0B      -      444K  -

Sad times! :(

However, could this be related to nested datasets (recursive send)? If I target just a deeper dataset/ZVol, it "works" (even though it still reports "cannot receive: failed to read from stream"):
Code:
user@apotheosis ~ % sudo zfs send -R Empty/Dataset1/Files@snapshot | sudo zfs receive -Fduvs Empty/Dataset2
libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 1
libzfs_macos_wrapfd: readfd 7 (0)
libzfs_macos_wrapfd: writefd 8 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(7) write(1)
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset2/Dataset1, fspec Empty/Dataset2/Dataset1, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
receiving full stream of Empty/Dataset1/Files@snapshot into Empty/Dataset2/Dataset1/Files@snapshot
libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 0
libzfs_macos_wrapfd: readfd 6 (0)
libzfs_macos_wrapfd: writefd 7 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(0) write(7)
loop exit
pipe_io_relay: thread done: 1094032 bytes
loop exit
pipe_io_relay: thread done: 1094032 bytes
received 1.04M stream in 1 seconds (1.04M/sec)
cannot receive: failed to read from stream
user@apotheosis ~ % sudo zfs list -t all                                                                   
NAME                                     USED  AVAIL     REFER  MOUNTPOINT
Empty                                    100G   799G      480K  /Volumes/Empty
Empty/Dataset1                           100G   799G      500K  /Volumes/Empty/Dataset1
Empty/Dataset1@snapshot                  192K      -      444K  -
Empty/Dataset1/Files                     632K   799G      464K  /Volumes/Empty/Dataset1/Files
Empty/Dataset1/Files@snapshot            168K      -      444K  -
Empty/Dataset1/ZVol                      100G   899G       56K  -
Empty/Dataset1/ZVol@snapshot               0B      -       56K  -
Empty/Dataset2                          1020K   799G      468K  /Volumes/Empty/Dataset2
Empty/Dataset2/Dataset1                  552K   799G      108K  /Volumes/Empty/Dataset2/Dataset1
Empty/Dataset2/Dataset1/Files            444K   799G      444K  /Volumes/Empty/Dataset2/Dataset1/Files
Empty/Dataset2/Dataset1/Files@snapshot     0B      -      444K  -
user@apotheosis ~ % sudo zfs mount Empty/Dataset2/Dataset1/Files
do_mount calling mount with fstype zfs, rpath /Volumes/Empty/Dataset2/Dataset1/Files, fspec Empty/Dataset2/Dataset1/Files, mflag 0, optptr defaults,atime,strictatime,dev,exec,rw,setuid,nomand,zfsutil, optlen 60, devdisk 0, ispool 0
user@apotheosis ~ % ls -l /Volumes/Empty/Dataset2/Dataset1/Files
total 1
-rw-r--r--  1 root  wheel  0  3 Jan 13:57 Bar
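
A quick sanity check to confirm the received copy matches the source is to diff it against the source snapshot's contents via the .zfs directory (same paths as in the test above):

Code:
# Compare the received file system against the source snapshot's contents.
sudo diff -r /Volumes/Empty/Dataset1/Files/.zfs/snapshot/snapshot \
    /Volumes/Empty/Dataset2/Dataset1/Files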

Anyone else running up against this? :?:

Cheers!
James

Re: Release 2.0.0 send/recv b0rkage

Postby lundman » Wed Jan 06, 2021 4:36 pm

Hmm, alas, as you can see we had to change how zfs send/recv is handled, as Apple removed some functions we were using. So it looks like the first send goes through, but when multiple sends are stacked in a row, it goes wrong?
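
If anyone wants to rule the new pipe wrapping in or out, one thing to try is going via a file instead of a pipe, so send/recv's stdin/stdout isn't a pipe at all (only practical for small test streams, and just a diagnostic sketch):

Code:
# Writing the stream to a file and receiving from it should avoid the
# pipe-wrapping path entirely, since the fds are regular files here.
sudo zfs send -R Empty/Dataset1@snapshot > /tmp/stream.zfs
sudo zfs receive -Fduv Empty/Dataset2 < /tmp/stream.zfs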

Re: Release 2.0.0 send/recv b0rkage

Postby Jimbo » Wed Jan 06, 2021 5:10 pm

Yes, that does seem to be the case: the first part comes through, then busted.

Re: Release 2.0.0 send/recv b0rkage

Postby UnConundrum » Fri Feb 12, 2021 4:42 am

I am also having a problem with "bad magic number", but mine is on a newly created pool. I'm sending from Ubuntu 20.04 to Big Sur (zfs-2.0.0-1). The send over ssh results in:

libzfs_macos_wrapfd: checking if we need pipe wrap
libzfs_macos_wrapfd: is pipe: work on fd 0
libzfs_macos_wrapfd: readfd 6 (0)
libzfs_macos_wrapfd: writefd 7 (0)
libzfs_macos_wrapfd: spawning thread
pipe_io_relay: thread up: read(0) write(7)
cannot receive: invalid stream (bad magic number)
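
The command itself is the usual over-ssh shape, roughly like this (hostnames and dataset names are stand-ins, not my real ones):

Code:
# Pushing from the Ubuntu 20.04 box to the Mac over ssh (example names only).
zfs send -R tank/data@snap | ssh user@mac.local sudo zfs receive -Fduv Backup/data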

Re: Release 2.0.0 send/recv b0rkage

Postby lundman » Fri Mar 12, 2021 2:53 am

OK, I think I may have fixed the send/recv issue; please try today's build.

Re: Release 2.0.0 send/recv b0rkage

Postby Jimbo » Fri Mar 12, 2021 6:09 pm

Cool cool. My test case above now works. Though while stuffing around, my pool entirely disappeared at some point (as in, after exporting, there was no pool to import); it seemed to have "reverted" to an APFS drive. Whether or not this had something to do with me having created an APFS file system on the ZVOL and then sent/received it... unclear. Subsequent runs of the same test worked fine, so... I'll look a bit further if I can.
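
If it happens again I'll try to capture what the disk looks like at that point, along these lines (disk identifiers here are just examples):

Code:
# Does ZFS still see an importable pool on any attached disk?
sudo zpool import
# How does macOS think the disk and its partitions are formatted?
diskutil list disk2
diskutil info disk2s1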

Going to try some other tests now between disks.

Thanks Lundman!

Re: Release 2.0.0 send/recv b0rkage

Postby Jimbo » Fri Mar 12, 2021 9:27 pm

Looks pretty good. Sending nested datasets including ZVOLs between pools seems to work. I also tested the same setup from an unencrypted pool to an encrypted pool, and that seems happy too.
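
For anyone wanting to repeat the encrypted test, it was along these lines (pool and dataset names here are just examples):

Code:
# Encrypted destination dataset; prompts for a passphrase on creation.
sudo zfs create -o encryption=on -o keyformat=passphrase Secure/Backups
# A plain replication stream received under an encrypted parent ends up
# encrypted with the inherited key.
sudo zfs send -R Empty/Dataset1@snapshot | sudo zfs receive -Fduv Secure/Backups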

Happy days!

Re: Release 2.0.0 send/recv b0rkage

Postby lundman » Fri Mar 12, 2021 9:46 pm

OK, recv -s (resume) is broken; I have already fixed it for the next installer.
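
For anyone following along, the resume path is the one where an interrupted receive with -s leaves a token behind that the sender can pick up again, roughly like this (dataset names borrowed from the test case above, just as an example):

Code:
# An interrupted 'zfs receive -s' leaves a resume token on the target dataset.
TOKEN=$(sudo zfs get -H -o value receive_resume_token Empty/Dataset2/Dataset1)
# Feed that token back to zfs send to continue the stream where it stopped.
sudo zfs send -t "$TOKEN" | sudo zfs receive -s Empty/Dataset2/Dataset1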

Re: Release 2.0.0 send/recv b0rkage

Postby Jimbo » Sat Mar 13, 2021 12:56 am

Ah... yes, I hadn't tested that (resume). Good to know - thanks!

