Panics have become more common lately; not panics exactly, but hard system resets.
With the last build I was never able to successfully send a 2.4 TB snapshot to a file on another disk. It always panicked, sometimes at 400 GB, sometimes at 900 GB...
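For reference, the send was of this form (the dataset, snapshot, and target names here are placeholders, not my exact ones):

sudo zfs snapshot tank/media@backup
sudo zfs send tank/media@backup > /Volumes/scratch/media-backup.zfs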
With this build from the master branch, the following happened:
11.08.15 01:49:34,000 kernel[0]: ZFS: Loaded module v1.3.1-362_g36c9576, ZFS pool version 5000, ZFS filesystem version 5
11.08.15 01:49:34,000 kernel[0]: SPL: Loaded module v1.3.1-26_g363ee27, (ncpu 16, memsize 103079215104, pages 25165824)
Sunrise:OpenZFS on OS X 1.3.1-r2 michael$ kextstat | grep lundman
112 1 0xffffff7f80d21000 0x308 0x308 net.lundman.kernel.dependencies.1.3.1 (12.5.0)
113 1 0xffffff7f80d22000 0x37000 0x37000 net.lundman.spl (1.3.1) <112 7 5 4 3 1>
114 0 0xffffff7f80d59000 0x23b000 0x23b000 net.lundman.zfs (1.3.1) <113 16 7 5 4 3 1>
Today I tried to export a pool, and BOOM, system reset.
After the reboot I cleanly exported both pools, fast and tank, and tried to reimport them, but my cache device was UNAVAIL (even though I had used -t 16000)...
So I imported with -d /var/run/disk/by-path.
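Roughly the command (the pool argument here is my best recollection):

sudo zpool import -d /var/run/disk/by-path tank

And the following happened: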
Tank got imported, but its name was now fast. See:
Sunrise:~ michael$ zpool status
  pool: fast
 state: ONLINE
  scan: scrub repaired 0 in 7h10m with 0 errors on Wed Jun 24 17:30:27 2015
config:

        NAME                                STATE     READ WRITE CKSUM
        fast                                ONLINE       0     0     0
          raidz1-0                          ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@0:1  ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@1:1  ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@2:1  ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@3:1  ONLINE       0     0     0
        logs
          PCI0@0-NPE7@3-Areca@0-disk@4:2    ONLINE       0     0     0
        cache
          PCI0@0-NPE7@3-Areca@0-disk@4:3    ONLINE       0     0     0

errors: No known data errors
Sunrise:~ michael$ zpool status fast
  pool: fast
 state: ONLINE
  scan: scrub repaired 0 in 7h10m with 0 errors on Wed Jun 24 17:30:27 2015
config:

        NAME                                STATE     READ WRITE CKSUM
        fast                                ONLINE       0     0     0
          raidz1-0                          ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@0:1  ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@1:1  ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@2:1  ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@3:1  ONLINE       0     0     0
        logs
          PCI0@0-NPE7@3-Areca@0-disk@4:2    ONLINE       0     0     0
        cache
          PCI0@0-NPE7@3-Areca@0-disk@4:3    ONLINE       0     0     0

errors: No known data errors
Sunrise:~ michael$ zpool status tank
cannot open 'tank': no such pool
Sunrise:~ michael$ sudo zpool export tank
cannot open 'tank': no such pool
Sunrise:~ michael$ sudo zpool export fast
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/tank/projects'
Unmount successful for /Volumes/tank/projects
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/tank/michael'
Unmount successful for /Volumes/tank/michael
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/tank/media'
Unmount successful for /Volumes/tank/media
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/tank'
Unmount successful for /Volumes/tank
Sunrise:~ michael$ diskutil list
Is there a general problem behind these kinds of hard resets? Is there something deeply unstable in the code that we should know about?
I was not able to use 1.3.2rc2 on Yosemite successfully; it was inherently unstable, so I tried sticking with the master branch. But that does not seem to be the best solution either.
What should I do? At a minimum I need a version that will not reset when a pool gets exported/imported or when I send a snapshot to a file.
Should I try reducing the ARC? I upgraded main memory to 96 GB and set the ARC tunables to:
kstat.zfs.darwin.tunable.zfs_arc_max=21474836480
kstat.zfs.darwin.tunable.zfs_arc_meta_limit=16106127360
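(Those values are in bytes: 21474836480 is 20 GiB for zfs_arc_max and 16106127360 is 15 GiB for zfs_arc_meta_limit. I apply them like this, and if I recall correctly the same lines can go into /etc/sysctl.conf to persist across reboots:)

sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_max=21474836480        # 20 GiB
sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_meta_limit=16106127360 # 15 GiB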
Could this be the problem?
And what should I do with the wrongly-named pool? The other pool vanished: I cannot import tank as tank, only as fast, and the fast pool does not exist anymore...
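I know a pool can normally be renamed at import time by giving a new name as the second argument, so a sketch of what I could try would be:

sudo zpool export fast
sudo zpool import -d /var/run/disk/by-path fast tank   # import the pool currently called 'fast' under the name 'tank'

But I am hesitant to try that before someone confirms it is safe while the pool identities are confused like this.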
Thanks for any help,
Michael