Two pools exchange name after panic reset


Two pools exchange name after panic reset

Postby mike0810 » Mon Aug 10, 2015 4:07 pm

Panics have become more common lately; not kernel panics exactly, but hard system resets.

With the last build I was not able to successfully send a 2.4 TB snapshot to a file on another disk. It always panicked, sometimes at 400 GB, sometimes at 900 GB...
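For reference, the send-to-file looked roughly like this (the snapshot name and target path below are placeholders, not my actual ones):

# create a snapshot, then stream it into a file on the other disk
sudo zfs snapshot tank@backup
sudo zfs send -v tank@backup > /Volumes/scratch/tank-backup.zfs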

With this build from the master branch, the following happened:
11.08.15 01:49:34,000 kernel[0]: ZFS: Loaded module v1.3.1-362_g36c9576, ZFS pool version 5000, ZFS filesystem version 5
11.08.15 01:49:34,000 kernel[0]: SPL: Loaded module v1.3.1-26_g363ee27, (ncpu 16, memsize 103079215104, pages 25165824)
Sunrise:OpenZFS on OS X 1.3.1-r2 michael$ kextstat | grep lundman
112 1 0xffffff7f80d21000 0x308 0x308 net.lundman.kernel.dependencies.1.3.1 (12.5.0)
113 1 0xffffff7f80d22000 0x37000 0x37000 net.lundman.spl (1.3.1) <112 7 5 4 3 1>
114 0 0xffffff7f80d59000 0x23b000 0x23b000 net.lundman.zfs (1.3.1) <113 16 7 5 4 3 1>


Today I tried to export a pool; BOOM, system reset.

After I rebooted, I cleanly exported both pools, fast and tank, and tried to reimport them, because my cache device was UNAVAIL (despite my having used -t 16000)...

So I imported with -d /var/run/disk/by-path, and the following happened:

Tank got imported, but its name was now fast. See:

Sunrise:~ michael$ zpool status
  pool: fast
 state: ONLINE
  scan: scrub repaired 0 in 7h10m with 0 errors on Wed Jun 24 17:30:27 2015
config:

        NAME                                STATE     READ WRITE CKSUM
        fast                                ONLINE       0     0     0
          raidz1-0                          ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@0:1  ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@1:1  ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@2:1  ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@3:1  ONLINE       0     0     0
        logs
          PCI0@0-NPE7@3-Areca@0-disk@4:2    ONLINE       0     0     0
        cache
          PCI0@0-NPE7@3-Areca@0-disk@4:3    ONLINE       0     0     0

errors: No known data errors
Sunrise:~ michael$ zpool status fast
  pool: fast
 state: ONLINE
  scan: scrub repaired 0 in 7h10m with 0 errors on Wed Jun 24 17:30:27 2015
config:

        NAME                                STATE     READ WRITE CKSUM
        fast                                ONLINE       0     0     0
          raidz1-0                          ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@0:1  ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@1:1  ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@2:1  ONLINE       0     0     0
            PCI0@0-NPE7@3-Areca@0-disk@3:1  ONLINE       0     0     0
        logs
          PCI0@0-NPE7@3-Areca@0-disk@4:2    ONLINE       0     0     0
        cache
          PCI0@0-NPE7@3-Areca@0-disk@4:3    ONLINE       0     0     0

errors: No known data errors
Sunrise:~ michael$ zpool status tank
cannot open 'tank': no such pool
Sunrise:~ michael$ sudo zpool export tank
cannot open 'tank': no such pool
Sunrise:~ michael$ sudo zpool export fast
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/tank/projects'
Unmount successful for /Volumes/tank/projects
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/tank/michael'
Unmount successful for /Volumes/tank/michael
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/tank/media'
Unmount successful for /Volumes/tank/media
Running process: '/usr/sbin/diskutil' 'unmount' '/Volumes/tank'
Unmount successful for /Volumes/tank
Sunrise:~ michael$ diskutil list

Is there a general problem with this kind of hard reset? Is there something deeply unstable in the code that we should know about?

I was not able to use 1.3.2rc2 successfully on Yosemite; it was inherently unstable, so I tried to stick with the master branch. But that does not seem to be the best solution either.
What should I do? At a minimum I need a version that will not reset when a pool is exported/imported or when I send a snapshot to a file.

Should I try to reduce the ARC? I upgraded main memory to 96 GB and set the ARC tunables to:

kstat.zfs.darwin.tunable.zfs_arc_max=21474836480
kstat.zfs.darwin.tunable.zfs_arc_meta_limit=16106127360
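For reference, I apply them at runtime with sysctl; persisting them in /etc/zfs/zsysctl.conf is, as far as I know, what O3X reads at module load (treat that path as my assumption):

# apply at runtime, takes effect immediately
sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_max=21474836480
sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_meta_limit=16106127360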

Could this be the problem?

What should I do with the wrongly named pool? The other pool vanished; I cannot import tank as tank, only as fast. And the fast pool does not exist anymore...
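If I understand zpool import correctly, a pool's name is simply whatever it is imported as, and an optional second argument renames the pool on import. So I suppose the fix would be something like this, assuming all the devices are healthy:

sudo zpool export fast
# reimport the pool currently named "fast" under its proper name "tank"
sudo zpool import -d /var/run/disk/by-path fast tank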

Thanks for the help
Michael

Re: Two pools exchange name after panic reset

Postby mike0810 » Mon Aug 17, 2015 2:49 am

Can anyone give a hint on how to restore the second pool and rename both back to their proper names?

Thanks

Re: Two pools exchange name after panic reset

Postby Brendon » Mon Aug 17, 2015 5:53 pm

@mike0810

I'm sorry, it's not at all clear what you have done or how you achieved it.

Perhaps you could drop into the IRC channel for some support?

Cheers
Brendon

Re: Two pools exchange name after panic reset

Postby mike0810 » Tue Aug 18, 2015 1:04 pm

Thanks for the reply, Brendon,

I checked my bash history, and this is what I did wrong:
sudo zpool import -d /var/run/disk/by-path tank fast
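The trailing "fast" was taken as a new pool name, so that command imported the pool tank under the name fast. What I should have run is one import per pool, with no rename (roughly):

sudo zpool import -d /var/run/disk/by-path tank
sudo zpool import -d /var/run/disk/by-path fast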

Sorry for the hassle.

I reimported by ID and everything is OK now.
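For anyone hitting the same thing: zpool import with no pool argument just lists the importable pools together with their numeric IDs, and you can then import by ID and state the proper name explicitly (the ID below is a placeholder):

# list importable pools and their numeric IDs
sudo zpool import -d /var/run/disk/by-path
# import by ID, giving the pool its proper name back
sudo zpool import -d /var/run/disk/by-path 1234567890 tank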

I have now installed the newest master branch; hopefully stability will improve. Still suffering from intermittent system resets. Hard to track...
Loaded module v1.3.1-374_g3cf7efc, ZFS pool version 5000, ZFS filesystem version 5
SPL: Loaded module v1.3.1-29_g0ea00ce, (ncpu 16, memsize 103079215104, pages 25165824)

Michael

