I started by connecting the external disk to a Linux system via USB, then used GParted to delete all partitions except the EFI partition.
Then, within FreeNAS, I created a pool ELITE_SINGLE (to distinguish it from my other dual-disk ELITE enclosure). I think I was able to set casesensitivity=insensitive on the pool in FreeNAS at creation time, and I set atime off.
Then, because FreeNAS won't let you set normalization at pool creation, I created a dataset ELITE_SINGLE/MACOS with normalization=formD.
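For anyone following along, I believe the equivalent commands from a shell would look roughly like this (the device name /dev/da1 is just a placeholder; check the FreeNAS GUI or camcontrol devlist for yours):

```shell
# casesensitivity and normalization can only be set at creation time;
# -O applies the properties to the pool's root dataset as it is created.
zpool create -O casesensitivity=insensitive -O atime=off ELITE_SINGLE /dev/da1

# normalization can't be set from the FreeNAS pool-creation UI,
# hence the separate dataset:
zfs create -o normalization=formD ELITE_SINGLE/MACOS
```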
At this point you have to shut down the FreeNAS box (which is an old HP Microserver N40L with a built-in eSATA port) because it appears that FreeNAS doesn't handle hot plug / hot unplug of eSATA devices, even after zpool export.
Moved the enclosure over to my Mac Pro 5,1 and transferred my oldest snapshot to ELITE_SINGLE/MACOS/SHOME_BACKUP. Then I exported the pool and unplugged the enclosure, since the Mac seems to tolerate that without incident.
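That initial full transfer was just a plain send/receive, something along these lines (the snapshot name is a placeholder, not my actual one):

```shell
# Full (non-incremental) send of the oldest snapshot to the enclosure pool.
zfs send SHOME@oldest_snapshot | zfs receive ELITE_SINGLE/MACOS/SHOME_BACKUP
```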
Connected the enclosure to FreeNAS and booted it up (several minutes, even with an SSD boot drive, grrr). Started a tmux session in the Shell, and did a zfs send/receive to push the data over to the main FreeNAS RAIDZ pool. It took a couple of hours, but it worked! Shut down the FreeNAS box again.
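For the record, this step looked something like the following (TANK stands in for my actual RAIDZ pool name, and the snapshot name is again a placeholder):

```shell
# Run inside tmux so a multi-hour transfer survives a dropped session.
tmux new -s xfer

# Push the backup from the enclosure pool into the main RAIDZ pool.
zfs send ELITE_SINGLE/MACOS/SHOME_BACKUP@oldest_snapshot | zfs receive TANK/SHOME_BACKUP
```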
Connected the enclosure to the Mac and imported the pool without incident. I made the enclosure pool read-only, to minimize macOS mucking around with it.

Here is where I got greedy and made a mistake: I wanted to transfer as much data as possible via this enclosure method, so as to minimize what had to be transferred over the ethernet wire once I started up a regular zxfer process. So I set up a 'zfs send -I SHOME@second_oldest_snapshot SHOME@recent_snapshot | zfs receive -F ELITE_SINGLE/MACOS/SHOME_BACKUP' command to run. The problem was, I tried to send too much data to the enclosure! It transferred a couple of snapshots and then barfed a bunch of 'broken pipe' complaints, leaving 2.9GB available on the enclosure pool. I did notice that when I totaled up the 'used' column space of the snapshots and added in the last 'referred' space, it only came to 712GB, so it seemed like maybe there were some blocks in limbo on the enclosure pool. Since it seemed to have transferred a couple of snapshots, I figured I might as well move those over to the FreeNAS, so I exported the enclosure pool.
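In case it's useful context, this is roughly how I check the accounting when things look off (pool/dataset names as above):

```shell
# One way to bring the pool in read-only from the start:
zpool import -o readonly=on ELITE_SINGLE

# Tally what the received snapshots account for, to compare against
# what zpool says is actually allocated (a mismatch suggests blocks in limbo):
zfs list -r -t snapshot -o name,used,referenced ELITE_SINGLE/MACOS/SHOME_BACKUP
zpool list ELITE_SINGLE
```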
Connected the enclosure to the FreeNAS box, booted it up, opened up the Shell again, started tmux, and did a zfs send/receive on the existing snapshots. Again, to my surprise, it worked and put a half-dozen snapshots on the RAIDZ pool. So I shut down the FreeNAS, moved the enclosure over to the Mac again, and imported the pool.
... and here is where the monsters came out. I needed to clear out all but the last snapshot so that I could put another set of later snapshots on the enclosure pool, so I deleted the oldest snapshot using a simple 'zfs destroy <oldest_snapshot>'. That completed normally, or at least returned to the command prompt. Then I typed 'zfs destroy <second_oldest_snapshot>', and BAM, kernel panic! I don't have keepsyms on yet, but it is clearly zfs-related:
Code:
Anonymous UUID: 44F79C15-D3FC-74F8-04A1-7F6F7A246E32
Wed Nov 13 07:03:05 2019
*** Panic Report ***
panic(cpu 1 caller 0xffffff7fa080122e): zfs: allocating allocated segment(offset=387127205888 size=40960) of (offset=387127201792 size=45056)
Backtrace (CPU 1), Frame : Return Address
0xffffffa42050b1f0 : 0xffffff801fbaf57d
0xffffffa42050b240 : 0xffffff801fceb065
0xffffffa42050b280 : 0xffffff801fcdc79a
0xffffffa42050b2f0 : 0xffffff801fb5c9d0
0xffffffa42050b310 : 0xffffff801fbaef97
0xffffffa42050b430 : 0xffffff801fbaede3
0xffffffa42050b4a0 : 0xffffff7fa080122e
0xffffffa42050b5c0 : 0xffffff7fa1acd03a
0xffffffa42050b620 : 0xffffff7fa1a82955
0xffffffa42050b6c0 : 0xffffff7fa1a7f856
0xffffffa42050b710 : 0xffffff7fa1a80686
0xffffffa42050b750 : 0xffffff7fa1b012bd
0xffffffa42050b760 : 0xffffff7fa1afd503
0xffffffa42050b7c0 : 0xffffff7fa1a6e2ab
0xffffffa42050b800 : 0xffffff7fa1a3154f
0xffffffa42050b950 : 0xffffff7fa1a318d2
0xffffffa42050baa0 : 0xffffff7fa1a6b57d
0xffffffa42050bc90 : 0xffffff7fa1a90754
0xffffffa42050bed0 : 0xffffff7fa1a9c485
0xffffffa42050bfa0 : 0xffffff801fb5c0ce
Kernel Extensions in backtrace:
net.lundman.spl(1.9.2)[EAA28CC7-9F6A-3C7B-BB90-691EBDC3A258]@0xffffff7fa07ff000->0xffffff7fa19f3fff
net.lundman.zfs(1.9.2)[4C34A112-866A-3499-B5EA-4DF7CDF1FFF4]@0xffffff7fa1a24000->0xffffff7fa1dddfff
dependency: com.apple.iokit.IOStorageFamily(2.1)[C733FB96-7C76-3174-9B7F-C8BE0E9E65C2]@0xffffff7fa19f4000
dependency: net.lundman.spl(1.9.2)[EAA28CC7-9F6A-3C7B-BB90-691EBDC3A258]@0xffffff7fa07ff000
BSD process name corresponding to current thread: kernel_task
Boot args: cwae=2
Mac OS version:
18G1012
Kernel version:
Darwin Kernel Version 18.7.0: Sat Oct 12 00:02:19 PDT 2019; root:xnu-4903.278.12~1/RELEASE_X86_64
Kernel UUID: DFB5D0E2-3B41-3647-A48B-D704AFCC06B4
Kernel slide: 0x000000001f800000
Kernel text base: 0xffffff801fa00000
__HIB text base: 0xffffff801f900000
System model name: MacPro5,1 (Mac-F221BEC8)
After it rebooted, I tried again to delete that second-oldest snapshot and got the same panic. So, is there any value in turning on keepsyms and collecting the panic report, or is this just so far out in left field with the data overrun and the cross-platform access that I should whip out GParted and start over?
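If it matters, my understanding is that keepsyms is enabled via boot-args, preserving the existing ones (cwae=2 in my case), e.g.:

```shell
# Keep kernel symbols so panic backtraces are symbolicated; takes effect on reboot.
sudo nvram boot-args="cwae=2 keepsyms=1"
```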