Abort trap 6 on zpool status

All your general support questions for OpenZFS on OS X.

Re: Abort trap 6 on zpool status

Postby tangent » Thu Jul 19, 2018 12:37 am

lundman wrote: Ah, the script "cmd.sh" in the zfs tree lets you run commands from the build tree:

Code: Select all
# ./cmd.sh zpool status


It does the LD binding (DYLD) for you.

Thank you!

lundman wrote: I think I also have a debug.sh that runs "lldb" before the exec $cmd. Not sure it is part of the repo though.

It isn't, but it should be easy to create as a fork of cmd.sh. I could do the same to the libtool wrapper, but it gets overwritten on each build.
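For the archives, such a fork might look like the sketch below. This is an assumption-laden reconstruction, not the actual cmd.sh: the topdir variable, the defaulting logic, and the cmd/$cmd/.libs/$cmd layout are inferred from cmd.sh's description and the lldb backtrace later in this thread.

```shell
#!/bin/sh
# Hypothetical debug.sh, forked from cmd.sh: run the in-tree binary
# under lldb instead of exec'ing it directly.
topdir="${topdir:-$(pwd)}"
cmd="${1:-zpool}"
[ $# -gt 0 ] && shift
bin="$topdir/cmd/$cmd/.libs/$cmd"
if [ -x "$bin" ]; then
    # Load the target and launch it with any remaining arguments.
    exec lldb -o "target create $bin" -o "process launch $*"
else
    echo "debug.sh: no built binary at $bin" >&2
fi
```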

So, any more ideas for things to try?
tangent
 
Posts: 34
Joined: Tue Nov 11, 2014 6:58 pm

Re: Abort trap 6 on zpool status

Postby tangent » Fri Sep 14, 2018 10:45 pm

So, any more ideas for things to try?


I've just upgraded to the 1.7.4 beta, and the symptom is unchanged. So, I repeat my question: any more ideas for things to try?

Re: Abort trap 6 on zpool status

Postby lundman » Mon Sep 17, 2018 10:26 pm

What happens when you run it with lldb? Where does it break?

It might be faster if you come to IRC when I'm around; this problem had slipped my mind :)
lundman
 
Posts: 486
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: Abort trap 6 on zpool status

Postby tangent » Wed Sep 19, 2018 2:15 pm

lundman wrote:What happened when you run it with lldb? Where does it break?


First, for the archives, you can run a program under lldb via cmd.sh by changing the last line to:

Code: Select all
# $* (not $@) so the remaining arguments stay inside this one -o string
exec lldb -o "target create ${topdir}/cmd/$cmd/.libs/$cmd" -o "process launch $*"


Here's the tail end of the output you get:

Code: Select all
  pool: tank
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
   still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
   the pool may no longer be accessible by software that does not support
   the features. See zpool-features(5) for details.
  scan: scrub repaired 0 in 36h26m with 0 errors on Mon Sep 17 12:26:47 2018
Process 26295 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
    frame #0: 0x00007fff56112b66 libsystem_kernel.dylib`__pthread_kill + 10
libsystem_kernel.dylib`__pthread_kill:
->  0x7fff56112b66 <+10>: jae    0x7fff56112b70            ; <+20>
    0x7fff56112b68 <+12>: movq   %rax, %rdi
    0x7fff56112b6b <+15>: jmp    0x7fff56109ae9            ; cerror_nocancel
    0x7fff56112b70 <+20>: retq   
Target 0: (zpool) stopped.

Process 26295 launched: '/usr/local/src/o3x/zfs/cmd/zpool/.libs/zpool' (x86_64)
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
  * frame #0: 0x00007fff56112b66 libsystem_kernel.dylib`__pthread_kill + 10
    frame #1: 0x00007fff562dd080 libsystem_pthread.dylib`pthread_kill + 333
    frame #2: 0x00007fff5606e1ae libsystem_c.dylib`abort + 127
    frame #3: 0x00000001005f8299 libzfs.2.dylib`devid_str_decode(devidstr="ata-WDC_WD4000F9YZ-09N20L1_WD-WCC5D3ZRFVX7-part1", retdevid=0x00007ffeefbfde4c, retminor_name=0x00007ffeefbfde40) at devid.h:46
    frame #4: 0x00000001005f5a45 libzfs.2.dylib`devid_to_path(devid_str="ata-WDC_WD4000F9YZ-09N20L1_WD-WCC5D3ZRFVX7-part1") at libzfs_pool.c:3642
    frame #5: 0x00000001005f20c9 libzfs.2.dylib`zpool_vdev_name(hdl=0x0000000101800000, zhp=0x00000001009035e0, nv=0x0000000100908b10, name_flags=8) at libzfs_pool.c:3811
    frame #6: 0x0000000100003cf7 zpool`max_width(zhp=0x00000001009035e0, nv=0x0000000100908b10, depth=4, max=10, name_flags=0) at zpool_main.c:1523
    frame #7: 0x0000000100003eaa zpool`max_width(zhp=0x00000001009035e0, nv=0x0000000100904bb8, depth=2, max=10, name_flags=0) at zpool_main.c:1548
    frame #8: 0x0000000100003eaa zpool`max_width(zhp=0x00000001009035e0, nv=0x00000001009039a0, depth=0, max=6, name_flags=0) at zpool_main.c:1548
    frame #9: 0x000000010000569c zpool`status_callback(zhp=0x00000001009035e0, data=0x00007ffeefbfe3d8) at zpool_main.c:6631
    frame #10: 0x0000000100001242 zpool`pool_list_iter(zlp=0x0000000100902590, unavail=1, func=(zpool`status_callback at zpool_main.c:6332), data=0x00007ffeefbfe3d8) at zpool_iter.c:172
    frame #11: 0x000000010000146a zpool`for_each_pool(argc=0, argv=0x00007ffeefbff6a8, unavail=1, proplist=0x0000000000000000, func=(zpool`status_callback at zpool_main.c:6332), data=0x00007ffeefbfe3d8) at zpool_iter.c:246
    frame #12: 0x000000010000b56e zpool`zpool_do_status(argc=0, argv=0x00007ffeefbff6a8) at zpool_main.c:6766
    frame #13: 0x0000000100007cb7 zpool`main(argc=2, argv=0x00007ffeefbff698) at zpool_main.c:8108
    frame #14: 0x00007fff55fc2015 libdyld.dylib`start + 1
    frame #15: 0x00007fff55fc2015 libdyld.dylib`start + 1


That's against the tip of master, updated just minutes ago, built with --enable-debug. The only diff against master is that one-line change to cmd.sh.

Let me know if you need me to print any variables or add any debug messages.

Re: Abort trap 6 on zpool status

Postby lundman » Wed Sep 19, 2018 3:52 pm

Oh right, I remember now. None of the illumos-only devid_str_decode functions have any code in them. Let me put some code in and you can try it.

Re: Abort trap 6 on zpool status

Postby lundman » Wed Sep 19, 2018 4:12 pm

Can you try master (https://github.com/openzfsonosx/zfs/com ... 58dc366630) and see how it behaves now? If you need a binary, let me know.

Re: Abort trap 6 on zpool status

Postby tangent » Thu Sep 20, 2018 12:59 am

That finally fixed it, thank you! This is the first time I've seen zpool status output in months. It's quite a relief to see that no disks have gone bad in the meantime.

Now one last thing: how do I configure the build tree so that on "make install" it overlays the binary package's files exactly? Everything I tried ended up putting the zpool command and its friends in .../sbin. I moved those to .../bin, but now I worry that I have a mixture of packaged and from-source files on my system.

Re: Abort trap 6 on zpool status

Postby lundman » Thu Sep 20, 2018 4:29 pm

You should use --prefix=/usr/local (which is already the default), plus --sbindir, I believe. The installer uses: --prefix=/usr/local --sbindir=/usr/local/bin
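Putting that together, a configure invocation matching the installer's layout would be something like the following. Note this is only a sketch: any additional flags the installer passes are not shown in this thread.

```shell
# Match the binary installer's layout so "make install" overlays
# the packaged files instead of scattering them under .../sbin.
./configure --prefix=/usr/local --sbindir=/usr/local/bin
```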

Re: Abort trap 6 on zpool status

Postby tangent » Mon Sep 24, 2018 11:30 pm

lundman wrote: You should use --prefix=/usr/local (which is already the default), plus --sbindir, I believe. The installer uses: --prefix=/usr/local --sbindir=/usr/local/bin


There's something wrong with the build system's dependencies: reconfiguring doesn't cause etc/launchd/daemons/*.in to be reprocessed.

The consequence is complicated. Reconfiguring as you've specified puts the InvariantDisks daemon in $prefix/bin instead of $prefix/sbin, but since the launchd plist files weren't regenerated, they still point at sbin, which no longer contains those files because I cleaned up after the prior install. The daemon therefore fails to load on the next reboot, which makes any pool based on /var/run/disk paths appear unimportable, because ZFS can't find any of its disks: "sudo zpool import" says "no pools available to import" or similar!

I ended up fixing it with a "make distclean", but that shouldn't be necessary. Reconfiguring should just rebuild all output files from *.in.

It appears you're using some kind of recursive build system, so that reconfiguring at the top level doesn't reprocess all of the subordinate *.in files.
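The recovery sequence described above, as a sketch (standard autotools workflow; the configure flags beyond --prefix and --sbindir are assumptions, and sudo may or may not be needed depending on how the tree is owned):

```shell
# Clean rebuild so every generated file, including the outputs of
# etc/launchd/daemons/*.in, is reprocessed with the new paths.
make distclean
./autogen.sh
./configure --prefix=/usr/local --sbindir=/usr/local/bin
make
sudo make install
```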

Re: Abort trap 6 on zpool status

Postby lundman » Wed Sep 26, 2018 7:50 pm

You need to run ./autogen.sh if you change --prefix, afaik.
