
possible bug in zfs when running virtualbox machines

Postby mauricev » Mon Oct 14, 2019 5:34 pm

I have two similarly configured Mac Pros, each with a zfs-based external pool, and I run VirtualBox on both. On the one where the VirtualBox machines are stored on the built-in APFS volume, everything runs fine. But on the other, where I store the virtual machines on the zfs pool itself, I repeatedly see kernel panics while VirtualBox is running. I initially thought these panics were due to a bug introduced in a recent version of VirtualBox, but going back to older versions doesn't prevent them. It now looks more likely that the bug originates in zfs and may have been introduced in a recent version of it. (I think I'm running 0.90.)
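For what it's worth, one way to confirm exactly which zfs kext version is loaded (rather than guessing) is to check the loaded kext list; this assumes the O3X kext bundle id contains "zfs", as it does for net.lundman.zfs:

Code: Select all
# list loaded kexts and show the ZFS entry; the version appears in parentheses
kextstat | grep -i zfs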

This panic occurs frequently enough that it is testable.
mauricev
 
Posts: 27
Joined: Mon Oct 27, 2014 9:57 pm

Re: possible bug in zfs when running virtualbox machines

Postby lundman » Tue Oct 15, 2019 4:13 pm

Set keepsyms=1 and share the paniclog so we can see where it crashes.
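For anyone following along, keepsyms is a boot-arg; a minimal sketch of setting it with nvram (this replaces any existing boot-args, so check the current value first, and on some SIP-protected setups it may need to be done from Recovery):

Code: Select all
# show current boot-args so existing flags can be preserved
nvram boot-args
# keep kernel symbols in panic reports (plus verbose boot); reboot afterwards
sudo nvram boot-args="-v keepsyms=1"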
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: possible bug in zfs when running virtualbox machines

Postby mauricev » Tue Oct 15, 2019 8:26 pm

I already have a panic report from before, attached. Will adding keepsyms provide more info?
Attachments
panic.txt
(10.87 KiB) Downloaded 609 times
mauricev
 
Posts: 27
Joined: Mon Oct 27, 2014 9:57 pm

Re: possible bug in zfs when running virtualbox machines

Postby lundman » Tue Oct 15, 2019 10:19 pm

Yep, without keepsyms we just get naked addresses, which are hard to correlate.
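As a rough workaround for unresolved frames, one can check whether an address falls inside a particular kext by comparing it against the Address and Size columns that kextstat reports; a sketch, assuming the same kexts are still loaded on the machine that panicked:

Code: Select all
# print the header plus the ZFS and VirtualBox kext rows;
# an address belongs to a kext if it lies in [Address, Address + Size)
kextstat | awk 'NR==1 || /lundman|virtualbox/'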
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: possible bug in zfs when running virtualbox machines

Postby mauricev » Wed Nov 06, 2019 2:52 pm

Here's the output with keepsyms=1

Code: Select all
Anonymous UUID:       5253B016-28A5-7723-9B26-8A33595B262A

Wed Nov  6 17:32:18 2019

*** Panic Report ***
panic(cpu 0 caller 0xffffff801d6db92d): Kernel trap at 0xffffffa653851f56, type 14=page fault, registers:
CR0: 0x0000000080010033, CR2: 0x00007fb4782ae000, CR3: 0x00000001c2699063, CR4: 0x00000000001626e0
RAX: 0x0000000000000000, RBX: 0x00007fb4b8418944, RCX: 0x0000000000000200, RDX: 0x000000000000003c
RSP: 0xffffffa41f800c60, RBP: 0xffffffa41f800d00, RSI: 0x00007fb4b83e8000, RDI: 0x00007fb4782ae000
R8:  0x00007fb4b8418918, R9:  0x00007fb4b8418978, R10: 0x0000000000000000, R11: 0x00007fb4b8418942
R12: 0x00007fb4b8418940, R13: 0x0000000000020800, R14: 0x00007fb4b8418938, R15: 0x00007fb4b8418918
RFL: 0x0000000000010246, RIP: 0xffffffa653851f56, CS:  0x0000000000000008, SS:  0x0000000000000010
Fault CR2: 0x00007fb4782ae000, Error code: 0x0000000000000002, Fault CPU: 0x0, PL: 1, VF: 5

Backtrace (CPU 0), Frame : Return Address
0xffffffa41f800730 : 0xffffff801d5aea2d mach_kernel : _handle_debugger_trap + 0x47d
0xffffffa41f800780 : 0xffffff801d6e9e95 mach_kernel : _kdp_i386_trap + 0x155
0xffffffa41f8007c0 : 0xffffff801d6db70a mach_kernel : _kernel_trap + 0x50a
0xffffffa41f800830 : 0xffffff801d55bb40 mach_kernel : _return_from_trap + 0xe0
0xffffffa41f800850 : 0xffffff801d5ae447 mach_kernel : _panic_trap_to_debugger + 0x197
0xffffffa41f800970 : 0xffffff801d5ae293 mach_kernel : _panic + 0x63
0xffffffa41f8009e0 : 0xffffff801d6db92d mach_kernel : _kernel_trap + 0x72d
0xffffffa41f800b50 : 0xffffff801d55bb40 mach_kernel : _return_from_trap + 0xe0
0xffffffa41f800b70 : 0xffffffa653851f56
0xffffffa41f800d00 : 0xffffffa65383d735
0xffffffa41f800dc0 : 0xffffffa653784da9
0xffffffa41f800df0 : 0xffffffa653779e55
0xffffffa41f800e30 : 0xffffffa65376eed5
0xffffffa41f800fb0 : 0xffffffa653859168
0xffffffa44b18b8e0 : 0xffffffa653786942
0xffffffa44b18ba70 : 0xffffff7fa2f974cf org.virtualbox.kext.VBoxDrv : _supdrvIOCtlFast + 0x5f
0xffffffa44b18ba80 : 0xffffff7fa2faa20b org.virtualbox.kext.VBoxDrv : __ZL18VBoxDrvDarwinIOCtlimPciP4proc + 0x11b
0xffffffa44b18bb40 : 0xffffff801d83a753 mach_kernel : _spec_ioctl + 0xb3
0xffffffa44b18bb80 : 0xffffff801d82eecf mach_kernel : _VNOP_IOCTL + 0xbf
0xffffffa44b18bc00 : 0xffffff801d820b34 mach_kernel : _utf8_normalizestr + 0xd34
0xffffffa44b18be00 : 0xffffff801da9bffb mach_kernel : _fo_ioctl + 0x7b
0xffffffa44b18be30 : 0xffffff801daf2259 mach_kernel : _ioctl + 0x529
0xffffffa44b18bf40 : 0xffffff801dbb9cbd mach_kernel : _unix_syscall64 + 0x27d
0xffffffa44b18bfa0 : 0xffffff801d55c306 mach_kernel : _hndl_unix_scall64 + 0x16
      Kernel Extensions in backtrace:
         org.virtualbox.kext.VBoxDrv(6.0.14)[53BAF3B2-AB27-321B-9DF8-7A1395C58923]@0xffffff7fa2f95000->0xffffff7fa3084fff

BSD process name corresponding to current thread: VirtualBoxVM
Boot args: -v keepsyms=1


It doesn't look any different from before, and it still seems to point to VirtualBox as the culprit, even though this happens on only one of my Macs.
mauricev
 
Posts: 27
Joined: Mon Oct 27, 2014 9:57 pm

Re: possible bug in zfs when running virtualbox machines

Postby lundman » Thu Nov 07, 2019 4:49 pm

No real evidence that ZFS was involved in that panic - do the stacks all look like that each time it panics?
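If it helps to collect them for comparison, macOS saves panic reports under /Library/Logs/DiagnosticReports (the usual location on recent releases; reading them may require sudo):

Code: Select all
# gather all saved panic reports into one folder for comparison
mkdir -p ~/Desktop/paniclogs
sudo cp /Library/Logs/DiagnosticReports/*.panic ~/Desktop/paniclogs/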
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: possible bug in zfs when running virtualbox machines

Postby mauricev » Thu Nov 07, 2019 7:46 pm

Yes.
mauricev
 
Posts: 27
Joined: Mon Oct 27, 2014 9:57 pm

