Here There Be Monsters: Cross-Platform Adventures

All your general support questions for OpenZFS on OS X.

Postby Sharko » Wed Nov 13, 2019 8:33 am

So, I seem to have gotten myself wedged into a corner with a pool on an external disk that I was using to transfer data over to a FreeNAS box. The external disk is a Micron 960GB SSD in an OWC Elite enclosure that I've been using over its eSATA interface. The problem is that the pool appears to have some space unaccounted for, and when I try to destroy a certain snapshot under OpenZFSonOSX 1.9.2 the Mac kernel panics. The FreeNAS box is running version 11.2-U6, which is the latest release, last I checked. The short history is more or less as follows (complete with the mistakes made along the way):

I started by connecting the external disk via USB to a Linux system and using GParted to delete all partitions except the EFI partition.

Then, within FreeNAS, I created a pool ELITE_SINGLE (to distinguish it from my other dual-disk ELITE enclosure). I think I was able to set casesensitivity=insensitive on the pool in FreeNAS at creation time, and set atime=off.

Then, because FreeNAS won't let you set normalization at pool creation, I created a dataset ELITE_SINGLE/MACOS with normalization=formD.
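For reference, the CLI equivalent of those creation-time settings looks roughly like this. This is only a sketch: the pool itself was created through the FreeNAS GUI, and the device name da1 is a placeholder.

```shell
# casesensitivity and normalization can only be set at creation time,
# which is why the dataset has to carry the normalization property.
# Device name da1 is a placeholder - substitute the real one.
zpool create -O casesensitivity=insensitive -O atime=off ELITE_SINGLE da1
zfs create -o normalization=formD ELITE_SINGLE/MACOS
```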

At this point you have to shut down the FreeNAS box (an old HP MicroServer N40L with a built-in eSATA port), because it appears that FreeNAS doesn't handle hot plug / hot unplug of eSATA devices, even after a zpool export.

Moved the enclosure over to my Mac Pro 5,1 and transferred my oldest snapshot to ELITE_SINGLE/MACOS/SHOME_BACKUP. Exported the pool and unplugged the enclosure, since the Mac seems to tolerate that without incident.

Connected the enclosure to FreeNAS and booted up (several minutes, even with an SSD boot drive, grrr). Started a tmux session in the Shell and did a zfs send/receive to push the data over to the main FreeNAS RAIDZ pool. It took a couple of hours, but it worked! Shut down the FreeNAS box again.

Connected the enclosure to the Mac and imported the pool without incident. Made the enclosure pool read-only, to minimize macOS mucking around with it. Here is where I got greedy and made a mistake: I wanted to transfer as much data as possible via this enclosure method, so as to minimize what had to be transferred over the Ethernet wire once I started up a regular zxfer process. So I set up a 'zfs send -I SHOME@second_oldest_snapshot SHOME@recent_snapshot | zfs receive -F ELITE_SINGLE/MACOS/SHOME_BACKUP' command to run. Problem was, I tried to send too much data to the enclosure! It transferred a couple of snapshots and then barfed up a bunch of 'broken pipe' complaints, leaving 2.9GB available on the enclosure pool. I did notice that when I totaled up the 'used' column of the snapshots and added in the last 'referred' figure it only came to 712GB, so it seemed like maybe there were some blocks in limbo on the enclosure pool. Since it did seem to have transferred a couple of snapshots, I thought I might as well move those over to the FreeNAS, so I exported the enclosure pool.
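In hindsight, the overrun could have been caught with a dry run before sending: `zfs send -n` estimates the stream size without sending anything. A minimal sketch, assuming a recent ZFS where `zfs send -nvP` emits a parsable `size` line (the snapshot names are placeholders for the real ones):

```shell
# stream_size: pull the byte estimate out of `zfs send -nvP` output
# (the dry run prints a line like "size<TAB>123456" on its last line).
stream_size() { awk '$1 == "size" { print $2 }'; }

# Usage sketch - guarded so it is a no-op on machines without ZFS.
if command -v zfs >/dev/null 2>&1; then
    est=$(zfs send -nvPI SHOME@second_oldest SHOME@recent | stream_size)
    avail=$(zfs get -Hp -o value available ELITE_SINGLE/MACOS)
    if [ "$est" -lt "$avail" ]; then
        zfs send -I SHOME@second_oldest SHOME@recent | \
            zfs receive -F ELITE_SINGLE/MACOS/SHOME_BACKUP
    else
        echo "stream (~$est bytes) exceeds $avail bytes free" >&2
    fi
fi
```

Comparing against the dataset's `available` property rather than eyeballing the GUI would have flagged the too-large incremental before any blocks hit the disk.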

Connected the enclosure to the FreeNAS box, booted it up, opened up the Shell again, started tmux, and did a zfs send/receive on the existing snapshots. Again, to my surprise, it worked and put a half-dozen snapshots on the RAIDZ pool. So, I shut down the FreeNAS, moved the enclosure over to the Mac again, and imported the pool.

... and here is where the monsters came out. I needed to clear out all but the last snapshot so that I could put another set of later snapshots on the enclosure pool, so I deleted the oldest snapshot with a simple 'zfs destroy <oldest_snapshot>'. That completed normally, or at least returned to the command prompt. So I typed in 'zfs destroy <second_oldest_snapshot>', and BAM, kernel panic! I don't have keepsyms on yet, but clearly it is zfs related:

Code:
Anonymous UUID:       44F79C15-D3FC-74F8-04A1-7F6F7A246E32

Wed Nov 13 07:03:05 2019

*** Panic Report ***
panic(cpu 1 caller 0xffffff7fa080122e): zfs: allocating allocated segment(offset=387127205888 size=40960) of (offset=387127201792 size=45056)

Backtrace (CPU 1), Frame : Return Address
0xffffffa42050b1f0 : 0xffffff801fbaf57d
0xffffffa42050b240 : 0xffffff801fceb065
0xffffffa42050b280 : 0xffffff801fcdc79a
0xffffffa42050b2f0 : 0xffffff801fb5c9d0
0xffffffa42050b310 : 0xffffff801fbaef97
0xffffffa42050b430 : 0xffffff801fbaede3
0xffffffa42050b4a0 : 0xffffff7fa080122e
0xffffffa42050b5c0 : 0xffffff7fa1acd03a
0xffffffa42050b620 : 0xffffff7fa1a82955
0xffffffa42050b6c0 : 0xffffff7fa1a7f856
0xffffffa42050b710 : 0xffffff7fa1a80686
0xffffffa42050b750 : 0xffffff7fa1b012bd
0xffffffa42050b760 : 0xffffff7fa1afd503
0xffffffa42050b7c0 : 0xffffff7fa1a6e2ab
0xffffffa42050b800 : 0xffffff7fa1a3154f
0xffffffa42050b950 : 0xffffff7fa1a318d2
0xffffffa42050baa0 : 0xffffff7fa1a6b57d
0xffffffa42050bc90 : 0xffffff7fa1a90754
0xffffffa42050bed0 : 0xffffff7fa1a9c485
0xffffffa42050bfa0 : 0xffffff801fb5c0ce
      Kernel Extensions in backtrace:
         net.lundman.spl(1.9.2)[EAA28CC7-9F6A-3C7B-BB90-691EBDC3A258]@0xffffff7fa07ff000->0xffffff7fa19f3fff
         net.lundman.zfs(1.9.2)[4C34A112-866A-3499-B5EA-4DF7CDF1FFF4]@0xffffff7fa1a24000->0xffffff7fa1dddfff
            dependency: com.apple.iokit.IOStorageFamily(2.1)[C733FB96-7C76-3174-9B7F-C8BE0E9E65C2]@0xffffff7fa19f4000
            dependency: net.lundman.spl(1.9.2)[EAA28CC7-9F6A-3C7B-BB90-691EBDC3A258]@0xffffff7fa07ff000

BSD process name corresponding to current thread: kernel_task
Boot args: cwae=2

Mac OS version:
18G1012

Kernel version:
Darwin Kernel Version 18.7.0: Sat Oct 12 00:02:19 PDT 2019; root:xnu-4903.278.12~1/RELEASE_X86_64
Kernel UUID: DFB5D0E2-3B41-3647-A48B-D704AFCC06B4
Kernel slide:     0x000000001f800000
Kernel text base: 0xffffff801fa00000
__HIB  text base: 0xffffff801f900000
System model name: MacPro5,1 (Mac-F221BEC8)


After it rebooted I tried again to delete that second oldest snapshot, and got the same panic. So, is there any value to turning on keepsyms and collecting the panic report, or is this just so far out in left field with the data overrun and the cross-platform access that I should whip out GParted and start over?
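For anyone following along: symbolicated panic reports just need keepsyms=y added to the boot-args in NVRAM, followed by a reboot. Since cwae=2 is already set here, it should be preserved:

```shell
# Add keepsyms=y to boot-args so future panics are symbolicated;
# keep the existing cwae=2 flag. Takes effect after a reboot.
sudo nvram boot-args="keepsyms=y cwae=2"
```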
Sharko
 
Posts: 101
Joined: Thu May 12, 2016 12:19 pm

Re: Here There Be Monsters: Cross-Platform Adventures

Postby Sharko » Wed Nov 13, 2019 4:47 pm

Or is there a useful zdb command that I should run on the ELITE_SINGLE pool?

Re: Here There Be Monsters: Cross-Platform Adventures

Postby Sharko » Wed Nov 13, 2019 8:53 pm

Here is the crash report with symbols:

Code:
Anonymous UUID:       44F79C15-D3FC-74F8-04A1-7F6F7A246E32

Wed Nov 13 20:36:31 2019

*** Panic Report ***
panic(cpu 1 caller 0xffffff7f8900122e): zfs: allocating allocated segment(offset=387127205888 size=40960) of (offset=387127201792 size=45056)

Backtrace (CPU 1), Frame : Return Address
0xffffffa3f789b1f0 : 0xffffff80083af57d mach_kernel : _handle_debugger_trap + 0x47d
0xffffffa3f789b240 : 0xffffff80084eb065 mach_kernel : _kdp_i386_trap + 0x155
0xffffffa3f789b280 : 0xffffff80084dc79a mach_kernel : _kernel_trap + 0x50a
0xffffffa3f789b2f0 : 0xffffff800835c9d0 mach_kernel : _return_from_trap + 0xe0
0xffffffa3f789b310 : 0xffffff80083aef97 mach_kernel : _panic_trap_to_debugger + 0x197
0xffffffa3f789b430 : 0xffffff80083aede3 mach_kernel : _panic + 0x63
0xffffffa3f789b4a0 : 0xffffff7f8900122e net.lundman.spl : _vcmn_err + 0x8e
0xffffffa3f789b5c0 : 0xffffff7f8a2cd03a net.lundman.zfs : _zfs_panic_recover + 0x6a
0xffffffa3f789b620 : 0xffffff7f8a282955 net.lundman.zfs : _range_tree_add_impl + 0x1fc
0xffffffa3f789b6c0 : 0xffffff7f8a27f856 net.lundman.zfs : _metaslab_free_concrete + 0x193
0xffffffa3f789b710 : 0xffffff7f8a280686 net.lundman.zfs : _metaslab_free + 0x113
0xffffffa3f789b750 : 0xffffff7f8a3012bd net.lundman.zfs : _zio_dva_free + 0x23
0xffffffa3f789b760 : 0xffffff7f8a2fd503 net.lundman.zfs : _zio_nowait + 0x133
0xffffffa3f789b7c0 : 0xffffff7f8a26e2ab net.lundman.zfs : _dsl_scan_free_block_cb + 0x91
0xffffffa3f789b800 : 0xffffff7f8a23154f net.lundman.zfs : _bpobj_iterate_impl + 0xe6
0xffffffa3f789b950 : 0xffffff7f8a2318d2 net.lundman.zfs : _bpobj_iterate_impl + 0x469
0xffffffa3f789baa0 : 0xffffff7f8a26b57d net.lundman.zfs : _dsl_scan_sync + 0x290
0xffffffa3f789bc90 : 0xffffff7f8a290754 net.lundman.zfs : _spa_sync + 0xa5b
0xffffffa3f789bed0 : 0xffffff7f8a29c485 net.lundman.zfs : _txg_sync_thread + 0x273
0xffffffa3f789bfa0 : 0xffffff800835c0ce mach_kernel : _call_continuation + 0x2e
      Kernel Extensions in backtrace:
         net.lundman.spl(1.9.2)[EAA28CC7-9F6A-3C7B-BB90-691EBDC3A258]@0xffffff7f88fff000->0xffffff7f8a1f3fff
         net.lundman.zfs(1.9.2)[4C34A112-866A-3499-B5EA-4DF7CDF1FFF4]@0xffffff7f8a224000->0xffffff7f8a5ddfff
            dependency: com.apple.iokit.IOStorageFamily(2.1)[C733FB96-7C76-3174-9B7F-C8BE0E9E65C2]@0xffffff7f8a1f4000
            dependency: net.lundman.spl(1.9.2)[EAA28CC7-9F6A-3C7B-BB90-691EBDC3A258]@0xffffff7f88fff000

BSD process name corresponding to current thread: kernel_task
Boot args: keepsyms=y cwae=2

Mac OS version:
18G1012

Kernel version:
Darwin Kernel Version 18.7.0: Sat Oct 12 00:02:19 PDT 2019; root:xnu-4903.278.12~1/RELEASE_X86_64
Kernel UUID: DFB5D0E2-3B41-3647-A48B-D704AFCC06B4
Kernel slide:     0x0000000008000000
Kernel text base: 0xffffff8008200000
__HIB  text base: 0xffffff8008100000
System model name: MacPro5,1 (Mac-F221BEC8)

System uptime in nanoseconds: 783253877046
last loaded kext at 393339363112: com.apple.driver.AppleXsanScheme   3 (addr 0xffffff7f8e4ce000, size 32768)
last unloaded kext at 311121712919: com.apple.filesystems.msdosfs   1.10 (addr 0xffffff7f8e4ce000, size 61440)
loaded kexts:
org.virtualbox.kext.VBoxNetAdp   6.0.4
org.virtualbox.kext.VBoxNetFlt   6.0.4
org.virtualbox.kext.VBoxUSB   6.0.4
org.virtualbox.kext.VBoxDrv   6.0.4
com.Cycling74.driver.Soundflower   1.6.7
net.lundman.zfs   1.9.2
net.lundman.spl   1.9.2
at.obdev.nke.LittleSnitch   5430
com.apple.driver.AppleBluetoothMultitouch   96
com.apple.fileutil   20.036.15
com.apple.driver.AppleUpstreamUserClient   3.6.5
com.apple.driver.AppleMCCSControl   1.5.9
com.apple.kext.AMDFramebuffer   2.1.1
com.apple.filesystems.autofs   3.0
com.apple.driver.AGPM   110.25.11
com.apple.driver.AppleMikeyHIDDriver   131
com.apple.driver.AppleMikeyDriver   282.54
com.apple.kext.AMDRadeonX4000   2.1.1
com.apple.driver.AppleGraphicsDevicePolicy   3.50.14
com.apple.AGDCPluginDisplayMetrics   3.50.14
com.apple.driver.AppleHDA   282.54
com.apple.driver.AppleHV   1
com.apple.iokit.IOUserEthernet   1.0.1
com.apple.iokit.IOBluetoothSerialManager   6.0.14d3
com.apple.driver.pmtelemetry   1
com.apple.Dont_Steal_Mac_OS_X   7.0.0
com.apple.driver.AppleGFXHDA   100.1.414
com.apple.kext.AMD9500Controller   2.1.1
com.apple.driver.ACPI_SMC_PlatformPlugin   1.0.0
com.apple.driver.AppleIntelSlowAdaptiveClocking   4.0.0
com.apple.driver.AppleLPC   3.1
com.apple.driver.AppleOSXWatchdog   1
com.apple.driver.AudioAUUC   1.70
com.apple.filesystems.apfs   945.275.8
com.apple.driver.AppleUSBDisplays   380
com.apple.driver.AppleVirtIO   2.1.3
com.apple.filesystems.hfs.kext   407.200.4
com.apple.AppleFSCompression.AppleFSCompressionTypeDataless   1.0.0d1
com.apple.BootCache   40
com.apple.AppleFSCompression.AppleFSCompressionTypeZlib   1.0.0
com.apple.AppleSystemPolicy   1.0
com.apple.iokit.SCSITaskUserClient   408.250.3
com.apple.private.KextAudit   1.0
com.apple.driver.AirPort.Brcm4331   800.21.31
com.apple.driver.AppleFWOHCI   5.6.0
com.apple.driver.Intel82574LEthernet   2.7.2
com.apple.driver.AppleAHCIPort   329.260.5
com.apple.driver.AppleRTC   2.0
com.apple.driver.AppleHPET   1.8
com.apple.driver.AppleACPIButtons   6.1
com.apple.driver.AppleSMBIOS   2.1
com.apple.driver.AppleACPIEC   6.1
com.apple.driver.AppleAPIC   1.7
com.apple.driver.AppleIntelCPUPowerManagementClient   220.0.0
com.apple.nke.applicationfirewall   202
com.apple.security.TMSafetyNet   8
com.apple.driver.AppleIntelCPUPowerManagement   220.0.0
com.apple.driver.AppleXsanScheme   3
com.apple.driver.IOBluetoothHIDDriver   6.0.14d3
com.apple.driver.AppleMultitouchDriver   2450.1
com.apple.driver.AppleInputDeviceSupport   2440.2
com.apple.iokit.IOUSBUserClient   900.4.2
com.apple.kext.triggers   1.0
com.apple.kext.AMDRadeonX4000HWLibs   1.0
com.apple.iokit.IOAcceleratorFamily2   404.14
com.apple.kext.AMDRadeonX4000HWServices   2.1.1
com.apple.driver.AppleGraphicsControl   3.50.14
com.apple.driver.DspFuncLib   282.54
com.apple.kext.OSvKernDSPLib   528
com.apple.iokit.IOAVBFamily   760.6
com.apple.plugin.IOgPTPPlugin   740.2
com.apple.iokit.IOEthernetAVBController   1.1.0
com.apple.iokit.IOSkywalkFamily   1
com.apple.driver.AppleSSE   1.0
com.apple.iokit.IOSurface   255.6.1
com.apple.AppleGPUWrangler   3.50.14
com.apple.iokit.IONDRVSupport   530.51
com.apple.driver.AppleSMBusController   1.0.18d1
com.apple.driver.IOPlatformPluginLegacy   1.0.0
com.apple.iokit.IOSlowAdaptiveClockingFamily   1.0.0
com.apple.kext.AMDSupport   2.1.1
com.apple.AppleGraphicsDeviceControl   3.50.14
com.apple.driver.IOPlatformPluginFamily   6.0.0d8
com.apple.iokit.IOFireWireIP   2.3.0
com.apple.driver.AppleSMBusPCI   1.0.14d1
com.apple.driver.AppleHDAController   282.54
com.apple.iokit.IOHDAFamily   282.54
com.apple.iokit.IOGraphicsFamily   530.67
com.apple.driver.AppleHIDKeyboard   208
com.apple.iokit.BroadcomBluetoothHostControllerUSBTransport   6.0.14d3
com.apple.iokit.IOBluetoothHostControllerUSBTransport   6.0.14d3
com.apple.iokit.IOBluetoothHostControllerTransport   6.0.14d3
com.apple.iokit.IOBluetoothFamily   6.0.14d3
com.apple.driver.CoreStorage   546.50.1
com.apple.iokit.IOAHCIBlockStorage   301.270.1
com.apple.driver.AppleUSBAudio   315.6
com.apple.driver.usb.IOUSBHostHIDDevice   1.2
com.apple.iokit.IOAudioFamily   206.5
com.apple.vecLib.kext   1.2.0
com.apple.driver.usb.networking   5.0.0
com.apple.driver.usb.AppleUSBHostCompositeDevice   1.2
com.apple.driver.usb.AppleUSBHub   1.2
com.apple.iokit.IOSerialFamily   11
com.apple.filesystems.hfs.encodings.kext   1
com.apple.iokit.IOSCSIMultimediaCommandsDevice   408.250.3
com.apple.iokit.IOBDStorageFamily   1.8
com.apple.iokit.IODVDStorageFamily   1.8
com.apple.iokit.IOCDStorageFamily   1.8
com.apple.iokit.IOAHCISerialATAPI   267.50.1
com.apple.iokit.IO80211Family   1200.12.2
com.apple.driver.corecapture   1.0.4
com.apple.iokit.IOFireWireFamily   4.7.3
com.apple.driver.usb.AppleUSBEHCIPCI   1.2
com.apple.driver.usb.AppleUSBXHCIPCI   1.2
com.apple.driver.usb.AppleUSBXHCI   1.2
com.apple.driver.usb.AppleUSBUHCIPCI   1.2
com.apple.driver.usb.AppleUSBUHCI   1.2
com.apple.driver.usb.AppleUSBEHCI   1.2
com.apple.iokit.IOAHCIFamily   288
com.apple.driver.usb.AppleUSBHostPacketFilter   1.0
com.apple.iokit.IOUSBFamily   900.4.2
com.apple.driver.AppleUSBHostMergeProperties   1.2
com.apple.driver.AppleEFINVRAM   2.1
com.apple.driver.AppleEFIRuntime   2.1
com.apple.iokit.IOSMBusFamily   1.1
com.apple.iokit.IOHIDFamily   2.0.0
com.apple.security.quarantine   3
com.apple.security.sandbox   300.0
com.apple.kext.AppleMatch   1.0.0d1
com.apple.driver.DiskImages   493.0.0
com.apple.driver.AppleFDEKeyStore   28.30
com.apple.driver.AppleEffaceableStorage   1.0
com.apple.driver.AppleKeyStore   2
com.apple.driver.AppleUSBTDM   456.260.3
com.apple.driver.AppleMobileFileIntegrity   1.0.5
com.apple.iokit.IOUSBMassStorageDriver   145.200.2
com.apple.iokit.IOSCSIBlockCommandsDevice   408.250.3
com.apple.iokit.IOSCSIArchitectureModelFamily   408.250.3
com.apple.iokit.IOStorageFamily   2.1
com.apple.kext.CoreTrust   1
com.apple.driver.AppleCredentialManager   1.0
com.apple.driver.KernelRelayHost   1
com.apple.iokit.IOUSBHostFamily   1.2
com.apple.driver.usb.AppleUSBCommon   1.0
com.apple.driver.AppleBusPowerController   1.0
com.apple.driver.AppleSEPManager   1.0.1
com.apple.driver.IOSlaveProcessor   1
com.apple.iokit.IOReportFamily   47
com.apple.iokit.IOTimeSyncFamily   740.2
com.apple.iokit.IONetworkingFamily   3.4
com.apple.driver.AppleACPIPlatform   6.1
com.apple.driver.AppleSMC   3.1.9
com.apple.iokit.IOPCIFamily   2.9
com.apple.iokit.IOACPIFamily   1.4
com.apple.kec.pthread   1
com.apple.kec.Libm   1
com.apple.kec.corecrypto   1.0

EOF
Model: MacPro5,1, BootROM 144.0.0.0.0, 6 processors, 6-Core Intel Xeon, 3.46 GHz, 32 GB, SMC 1.39f11
Graphics: Radeon RX 560, Radeon RX 560, spdisplays_pcie_device, 4 GB
Memory Module: DIMM 1, 8 GB, DDR3 ECC, 1333 MHz, 0x85F7, 0x463732314755363547393333334700520000
Memory Module: DIMM 2, 8 GB, DDR3 ECC, 1333 MHz, 0x85F7, 0x463732314755363547393333334700520000
Memory Module: DIMM 3, 8 GB, DDR3 ECC, 1333 MHz, 0x85F7, 0x463732314755363547393333334700520000
Memory Module: DIMM 4, 8 GB, DDR3 ECC, 1333 MHz, 0x80CE, 0x4D33393342314B37304448302D5948392020
AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x8E), Broadcom BCM43xx 1.0 (5.106.98.102.31)
Bluetooth: Version 6.0.14d3, 3 services, 27 devices, 1 incoming serial ports
Network Service: Ethernet 1, Ethernet, en0
PCI Card: pci1b21,612, AHCI Controller, Slot-2
PCI Card: pci1002,aae0, Audio Device, Slot-1
PCI Card: Radeon RX 560, Display Controller, Slot-1
PCI Card: pci1b21,625, AHCI Controller, Slot-4
PCI Card: PXS3, USB eXtensible Host Controller, Slot-3
Serial ATA Device: HL-DT-ST DVD-RW GH61N
Serial ATA Device: SPCC Solid State Disk, 120.03 GB
Serial ATA Device: WDC WD20EVDS-63T3B0, 2 TB
Serial ATA Device: SanDisk SDSSDH32000G, 2 TB
USB Device: USB Bus
USB Device: USB Bus
USB Device: CP1500PFCLCD
USB Device: USB Bus
USB Device: USB Bus
USB Device: BRCM2046 Hub
USB Device: Bluetooth USB Host Controller
USB Device: USB Bus
USB Device: USB Bus
USB Device: USB 2.0 Bus
USB Device: Hub
USB Device: Hub
USB Device: Apple LED Cinema Display
USB Device: Display iSight
USB Device: Display Audio
USB Device: Hub
USB Device: Keyboard Hub
USB Device: Apple Keyboard
USB Device: Apple Cinema HD Display
USB Device: USB 2.0 Bus
USB Device: USB 3.0 Bus
FireWire Device: built-in_hub, Up to 800 Mb/sec
Thunderbolt Bus:


Here is some basic zdb output. EDIT: at first I thought the asize value looked wrong, but I had confused asize with ashift, which is normal.

Code:
sh-3.2# zdb -C ELITE_SINGLE

MOS Configuration:
        version: 5000
        name: 'ELITE_SINGLE'
        state: 0
        txg: 35903
        pool_guid: 15921136833983772115
        errata: 0
        hostid: 2035037572
        hostname: ''
        com.delphix:has_per_vdev_zaps
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 15921136833983772115
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 2462182297371340311
                path: '/private/var/run/disk/by-id/media-478C85AD-0445-11EA-A160-2C768AAD7246'
                whole_disk: 1
                metaslab_array: 38
                metaslab_shift: 33
                ashift: 12
                asize: 958044831744
                is_log: 0
                create_txg: 4
                com.delphix:vdev_zap_leaf: 36
                com.delphix:vdev_zap_top: 37
        features_for_read:
            com.delphix:hole_birth
            com.delphix:embedded_data
sh-3.2#


Digging into dataset info with zdb:
Code:
sh-3.2# zdb -d ELITE_SINGLE
Dataset mos [META], ID 0, cr_txg 4, 117M, 199 objects
Dataset ELITE_SINGLE/MACOS/SHOME_BACKUP@2016_09_user_data_final_Yosemite [ZPL], ID 651, cr_txg 21989, 388G, 543646 objects
Dataset ELITE_SINGLE/MACOS/SHOME_BACKUP@2014_10_user_data_SL [ZPL], ID 171, cr_txg 16492, 398G, 719331 objects
Dataset ELITE_SINGLE/MACOS/SHOME_BACKUP@2014_04_user_data_SL [ZPL], ID 167, cr_txg 14203, 321G, 401203 objects
Dataset ELITE_SINGLE/MACOS/SHOME_BACKUP@2015_02_user_data_SL [ZPL], ID 174, cr_txg 18256, 402G, 507980 objects
Dataset ELITE_SINGLE/MACOS/SHOME_BACKUP@2016_06_user_data_Yosemite [ZPL], ID 122, cr_txg 20708, 343G, 532878 objects
Dataset ELITE_SINGLE/MACOS/SHOME_BACKUP [ZPL], ID 65, cr_txg 192, 388G, 543646 objects
Dataset ELITE_SINGLE/MACOS [ZPL], ID 60, cr_txg 105, 508K, 104 objects
Dataset ELITE_SINGLE [ZPL], ID 21, cr_txg 1, 488K, 113 objects
Verified large_blocks feature refcount of 0 is correct
Verified sha512 feature refcount of 0 is correct
Verified skein feature refcount of 0 is correct
Verified device_removal feature refcount of 0 is correct
Verified indirect_refcount feature refcount of 0 is correct
sh-3.2#


One final run with zdb -b ELITE_SINGLE:
Code:
sh-3.2# zdb -b ELITE_SINGLE

Traversing all blocks to verify nothing leaked ...

loading concrete vdev 0, metaslab 110 of 111 ...
 748G completed (11276MB/s) estimated time remaining: 0hr 00min 00sec       
   No leaks (block sum matches space maps exactly)

   bp count:               8296002
   ganged count:                 0
   bp logical:        872588358656      avg: 105181
   bp physical:       801040553472      avg:  96557     compression:   1.09
   bp allocated:      803584585728      avg:  96864     compression:   1.09
   bp deduped:                   0    ref>1:      0   deduplication:   1.00
   Normal class:      803584585728     used: 84.28%

   additional, non-pointer bps of type 0:    1066581
   Dittoed blocks on same vdev: 301085
   Dittoed blocks in same metaslab: 676

sh-3.2#
Last edited by Sharko on Thu Nov 14, 2019 9:50 am, edited 1 time in total.

Re: Here There Be Monsters: Cross-Platform Adventures

Postby lundman » Thu Nov 14, 2019 12:16 am

I don't know for sure, but it looks like you have a bad snapshot - I only glanced at the details, my apologies, busy day. However, I see in the panic report that it calls zfs_panic_recover(); this is a function that, if you set recover=1 using sysctl, can be asked to ignore that error and keep going.

Could be worth trying. Something like "sysctl kstat | grep recover", then "sysctl thenameofthe.zfs.recover=1".
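Spelled out as a sketch - the tunable name in the second command is a guess for illustration only; use whatever name the grep actually turns up on your build:

```shell
# Find the recover tunable; its exact name varies, so grep for it first.
sysctl -a | grep -i recover
# Then set the name the grep found. The name below is hypothetical:
# sudo sysctl -w kstat.zfs.darwin.tunable.zfs_recover=1
```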
lundman
 
Posts: 655
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: Here There Be Monsters: Cross-Platform Adventures

Postby Sharko » Thu Nov 14, 2019 5:25 pm

Thank you, Mr. Lundman. I can hang onto this for a short while if you think there is anything to be learned from it. Otherwise, I know how to go forward: I'll just wipe the external pool disk and start in with the next snapshot in the source series that hasn't been transferred over.

Re: Here There Be Monsters: Cross-Platform Adventures

Postby lundman » Thu Nov 14, 2019 6:12 pm

Carry on, I'm swamped at the moment, running everything by myself :)

Re: Here There Be Monsters: Cross-Platform Adventures

Postby Sharko » Sun Nov 17, 2019 9:53 am

Just a warning to others who might go down this path of trying to make a pool interoperate with FreeNAS: there really are monsters (problems) in getting FreeNAS to play nice with OpenZFSonOSX. I wiped the external disk and started over, again creating the pool on FreeNAS. Then I moved it over to my Mojave box with ZFS 1.9.2 and added a set of snapshots. That went fine, and I didn't overrun the pool with snapshots this time. I connected it up to the FreeNAS box again and used the command line to transfer all the snapshots onto the FreeNAS internal pool. Everything went fine, up until the point I tried to delete the series of unneeded snapshots on the external pool using the FreeNAS GUI.

FreeNAS instantly kernel-panicked. With the enclosure still connected it couldn't even boot successfully - I left it for an hour, and it never got to the login screen, nor could I ssh into it. I was really quite worried that I might have totally borked it, but after a hard shutdown the FreeNAS box rebooted without incident, as long as the enclosure wasn't connected.

Just for fun, I connected the enclosure to my Mac again and tried to import the external pool... you guessed it, kernel panic! So, now you know: if you have a pool that has been touched by both FreeNAS and Mac, don't try to delete any snapshots. Just wipe it to HFS+ on the Mac, bring it over to the FreeNAS box and wipe it again, then re-initialize it as a pool with a Mac-compatible dataset. Then do what you need to do with it as far as data transfer goes. Rinse and repeat.
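The wipe-and-reinitialize cycle above, as a command sketch. The device names (disk3 on the Mac, da1 on FreeNAS) are placeholders - double-check yours before erasing anything:

```shell
# On the Mac: reformat the enclosure disk as HFS+ to destroy the bad pool.
# disk3 is a placeholder - verify with `diskutil list` first!
diskutil eraseDisk JHFS+ SCRATCH disk3

# On FreeNAS: clear the partition table, then recreate the pool and the
# normalization=formD dataset the same way as the first time around.
gpart destroy -F da1
```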

I know that FreeNAS, being based on FreeBSD, is on a somewhat different branch of ZFS from what we have (which is based on ZFS on Linux, as I understand it). The FreeBSD branch of ZFS development had fallen behind on a number of features and mutated a bit in specialized ways to suit FreeBSD; that divergence is probably the source of the bug. I know that there is a movement afoot to bring all the OpenZFS projects back into alignment, and that will likely (eventually) fix whatever the incompatibility is between the two projects. Any ideas about the timing of that process?

Re: Here There Be Monsters: Cross-Platform Adventures

Postby lundman » Sun Nov 17, 2019 4:16 pm

That is quite interesting - deleting snapshots has never been an area with bugs, nor with new features.

The ZoF (ZFS on FreeBSD) project is progressing well; I believe they hope to have it merged before the end of the year, which will bring FreeBSD in line with ZoL in one go.

