2.1.0 Release Stack Overflow

All your general support questions for OpenZFS on OS X.

Postby jawbroken » Tue Apr 05, 2022 3:32 am

I'm getting kernel panics like the one below on an M1 Ultra Mac Studio (macOS Monterey 12.3.1) running the OpenZFS 2.1.0 release. I notice the changelog mentions a few fixes for stack usage during scrubs, but I wasn't scrubbing the pool at the time (the commands after the panic log show how I checked).
Code:
panic(cpu 4 caller 0xfffffe001d1da1b0): stack_alloc: kernel_memory_allocate(size: , mask: 0x7fff, flags: 0x1134) failed with 3 @stack.c:184
Debugger message: panic
Memory ID: 0x6
OS release type: User
OS version: 21E258
Kernel version: Darwin Kernel Version 21.4.0: Fri Mar 18 00:46:32 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T6000
Fileset Kernelcache UUID: 0631AF68D2B8D6FEA30E36D7895D4DB4
Kernel UUID: C342869F-FFB9-3CCE-A5A3-EA711C1E87F6
iBoot version: iBoot-7459.101.3
secure boot?: YES
Paniclog version: 13
KernelCache slide: 0x00000000158ac000
KernelCache base:  0xfffffe001c8b0000
Kernel slide:      0x000000001605c000
Kernel text base:  0xfffffe001d060000
Kernel text exec slide: 0x0000000016144000
Kernel text exec base:  0xfffffe001d148000
mach_absolute_time: 0x4c978bb4608
Epoch Time:        sec       usec
  Boot    : 0x6248a9e4 0x00097580
  Sleep   : 0x00000000 0x00000000
  Wake    : 0x00000000 0x00000000
  Calendar: 0x624c0289 0x00047351

Zone info:
  Foreign : 0xfffffe0024ba0000 - 0xfffffe0024bb0000
  Native  : 0xfffffe100069c000 - 0xfffffe300069c000
  Readonly: 0xfffffe14cd368000 - 0xfffffe1666d00000
  Metadata: 0xfffffe8bf8968000 - 0xfffffe8c048d8000
  Bitmaps : 0xfffffe8c048d8000 - 0xfffffe8c248d8000

CORE 0 PVH locks held: None
CORE 1 PVH locks held: None
CORE 2 PVH locks held: None
CORE 3 PVH locks held: None
CORE 4 PVH locks held: None
CORE 5 PVH locks held: None
CORE 6 PVH locks held: None
CORE 7 PVH locks held: None
CORE 8 PVH locks held: None
CORE 9 PVH locks held: None
CORE 10 PVH locks held: None
CORE 11 PVH locks held: None
CORE 12 PVH locks held: None
CORE 13 PVH locks held: None
CORE 14 PVH locks held: None
CORE 15 PVH locks held: None
CORE 16 PVH locks held: None
CORE 17 PVH locks held: None
CORE 18 PVH locks held: None
CORE 19 PVH locks held: None
CORE 0: PC=0xfffffe001d2dae10, LR=0xfffffe001d2dae0c, FP=0xfffffe6193b03e90
CORE 1: PC=0xfffffe001d1d021c, LR=0xfffffe001d1d021c, FP=0xfffffe817a8738a0
CORE 2: PC=0xfffffe001d25e210, LR=0xfffffe001d25e078, FP=0xfffffe60d4571d50
CORE 3: PC=0xfffffe001d1d741c, LR=0xfffffe001d1d7418, FP=0xfffffe60f4ff3f00
CORE 4 is the one that panicked. Check the full backtrace for details.
CORE 5: PC=0xfffffe001d1d7418, LR=0xfffffe001d1d7418, FP=0xfffffe619f873f00
CORE 6: PC=0xfffffe001d1d741c, LR=0xfffffe001d1d7418, FP=0xfffffe6193a33f00
CORE 7: PC=0xfffffe001d1d741c, LR=0xfffffe001d1d7418, FP=0xfffffe60d4b63f00
CORE 8: PC=0xfffffe001d1d7418, LR=0xfffffe001d1d7418, FP=0xfffffe817b0a3f00
CORE 9: PC=0xfffffe001d1d741c, LR=0xfffffe001d1d7418, FP=0xfffffe619e6c3f00
CORE 10: PC=0xfffffe001d1d7418, LR=0xfffffe001d1d7418, FP=0xfffffe817a823f00
CORE 11: PC=0xfffffe001d1d7418, LR=0xfffffe001d1d7418, FP=0xfffffe817a813f00
CORE 12: PC=0xfffffe001d1d7418, LR=0xfffffe001d1d7418, FP=0xfffffe6065f0bf00
CORE 13: PC=0xfffffe001d1d7418, LR=0xfffffe001d1d7418, FP=0xfffffe6191aebf00
CORE 14: PC=0xfffffe001d1d741c, LR=0xfffffe001d1d7418, FP=0xfffffe817afa3f00
CORE 15: PC=0xfffffe001d1d7418, LR=0xfffffe001d1d7418, FP=0xfffffe6193be3f00
CORE 16: PC=0xfffffe001d1d7418, LR=0xfffffe001d1d7418, FP=0xfffffe817a863f00
CORE 17: PC=0xfffffe001d1d741c, LR=0xfffffe001d1d7418, FP=0xfffffe817ae03f00
CORE 18: PC=0xfffffe001d1d7418, LR=0xfffffe001d1d7418, FP=0xfffffe619ece3f00
CORE 19: PC=0xfffffe001d1d741c, LR=0xfffffe001d1d7418, FP=0xfffffe817b0b3f00
Compressor Info: 0% of compressed pages limit (OK) and 0% of segments limit (OK) with 0 swapfiles and OK swap space
Panicked task 0xfffffe24cd21c678: 0 pages, 1359 threads: pid 0: kernel_task
Panicked thread: 0xfffffe1b339168c8, backtrace: 0xfffffe60d4c83770, tid: 105
        lr: 0xfffffe001d1a1560  fp: 0xfffffe60d4c837e0
        lr: 0xfffffe001d1a1228  fp: 0xfffffe60d4c83850
        lr: 0xfffffe001d2e5ecc  fp: 0xfffffe60d4c83870
        lr: 0xfffffe001d2d805c  fp: 0xfffffe60d4c838e0
        lr: 0xfffffe001d2d5a98  fp: 0xfffffe60d4c839a0
        lr: 0xfffffe001d14f7f8  fp: 0xfffffe60d4c839b0
        lr: 0xfffffe001d1a0eac  fp: 0xfffffe60d4c83d50
        lr: 0xfffffe001d1a0eac  fp: 0xfffffe60d4c83dc0
        lr: 0xfffffe001d9caacc  fp: 0xfffffe60d4c83de0
        lr: 0xfffffe001d1da1b0  fp: 0xfffffe60d4c83e60
        lr: 0xfffffe001d1f09d4  fp: 0xfffffe60d4c83e90
        lr: 0xfffffe001d1bf968  fp: 0xfffffe60d4c83f00
        lr: 0xfffffe001d1bf898  fp: 0xfffffe60d4c83f20
        lr: 0xfffffe001d158e78  fp: 0x0000000000000000
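
If I'm reading mach/kern_return.h right, the "failed with 3" above is KERN_NO_SPACE, i.e. the kernel couldn't find address space for another thread stack; note the panicked kernel_task has 1359 threads. For reference, this is roughly how I confirmed no scrub was running and which build is actually loaded (the pool name is a placeholder):
Code:
# "tank" is a placeholder pool name; the "scan:" line reports any scrub in progress
zpool status tank

# userland and kernel module versions, to confirm the 2.1.0 release is what's running
zfs version

# which zfs kext build the kernel has loaded (kextstat is deprecated but still works on Monterey)
kextstat | grep -i zfs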

Are there new instructions somewhere for preserving the kernel symbols in the backtrace on an M1 Mac? I'm not sure the old method still works, or maybe I applied it incorrectly.
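
In the meantime, the xnu frames can apparently be resolved offline with atos against the matching Kernel Debug Kit, using the "Kernel slide" value from the log. A sketch, untested on this machine; the KDK path is my guess from the 21E258 build in the log:
Code:
# -s applies the slide, so the logged addresses can be passed as-is.
# Assumes the Kernel Debug Kit for macOS 12.3.1 (build 21E258) is installed.
atos -o /Library/Developer/KDKs/KDK_12.3.1_21E258.kdk/System/Library/Kernels/kernel.release.t6000 \
    -arch arm64e -s 0x1605c000 \
    0xfffffe001d1a1560 0xfffffe001d1da1b0

That only covers xnu, though; for zfs kext frames I believe the old keepsyms=1 boot-arg is still required, and on Apple Silicon setting boot-args means lowering boot security from recoveryOS first, hence the question.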

Bug report here: https://github.com/openzfsonosx/zfs/issues/796.