Kernel
Debugging with GDB
Dealing with panics.
Apple's documentation: https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptDebugger/debug_tutorial.html
Boot target VM with
$ sudo nvram boot-args="-v keepsyms=y debug=0x144"
Make it panic.
On your development machine, you will need the Kernel Debug Kit. Download it from Apple at https://developer.apple.com/downloads/index.action?q=Kernel%20Debug%20Kit.
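Once downloaded, mount the kit's disk image so that it appears under /Volumes/KernelDebugKit, as the commands below assume (a sketch; the exact dmg filename depends on the release you downloaded):

$ hdiutil attach ~/Downloads/kernel_debug_kit.dmg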
$ gdb /Volumes/KernelDebugKit/mach_kernel
(gdb) source /Volumes/KernelDebugKit/kgmacros
(gdb) target remote-kdp
(gdb) kdp-reattach 192.168.30.133   # obviously use the IP of your target / crashed VM
(gdb) showallkmods
Find the addresses for ZFS and SPL modules.
Use ^Z to suspend gdb, or use another terminal:
^Z
$ sudo kextutil -s /tmp -n \
    -k /Volumes/KernelDebugKit/mach_kernel \
    -e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ \
    ../spl/module/spl/spl.kext/
$ fg   # resume gdb, or go back to gdb terminal

(gdb) set kext-symbol-file-path /tmp
(gdb) add-kext /tmp/spl.kext
(gdb) add-kext /tmp/zfs.kext
(gdb) bt
Debugging with LLDB
$ echo "settings set target.load-script-from-symbol-file true" >> ~/.lldbinit $ lldb /Volumes/KernelDebugKit/mach_kernel (lldb) kdp-remote 192.168.30.146 (lldb) showallkmods (lldb) addkext -F /tmp/spl.kext/Contents/MacOS/spl 0xffffff7f8ebb0000 (Address from showallkmods) (lldb) addkext -F /tmp/zfs.kext/Contents/MacOS/zfs 0xffffff7f8ebbf000
Then follow the guide for GDB above.
Non-panic
If you prefer to work in GDB, you can always panic a kernel with
$ sudo dtrace -w -n "BEGIN{ panic();}"
But the following stackshot approach proved revealing:
$ sudo /usr/libexec/stackshot -i -f /tmp/stackshot.log
$ sudo symstacks.rb -f /tmp/stackshot.log -s -w /tmp/trace.txt
$ less /tmp/trace.txt
Note that my hang is here:
PID: 156
Process: zpool
Thread ID: 0x4e2
Thread state: 0x9 == TH_WAIT | TH_UNINT
Thread wait_event: 0xffffff8006608a6c
Kernel stack:
    machine_switch_context (in mach_kernel) + 366 (0xffffff80002b3d3e)
    0xffffff800022e711 (in mach_kernel) + 1281 (0xffffff800022e711)
    thread_block_reason (in mach_kernel) + 300 (0xffffff800022d9dc)
    lck_mtx_sleep (in mach_kernel) + 78 (0xffffff80002265ce)
    0xffffff8000569ef6 (in mach_kernel) + 246 (0xffffff8000569ef6)
    msleep (in mach_kernel) + 116 (0xffffff800056a2e4)
    0xffffff7f80e52a76 (0xffffff7f80e52a76)
    0xffffff7f80e53fae (0xffffff7f80e53fae)
    0xffffff7f80e54173 (0xffffff7f80e54173)
    0xffffff7f80f1a870 (0xffffff7f80f1a870)
    0xffffff7f80f2bb4e (0xffffff7f80f2bb4e)
    0xffffff7f80f1a9b7 (0xffffff7f80f1a9b7)
    0xffffff7f80f1b65f (0xffffff7f80f1b65f)
    0xffffff7f80f042ee (0xffffff7f80f042ee)
    0xffffff7f80f45c5b (0xffffff7f80f45c5b)
    0xffffff7f80f4ce92 (0xffffff7f80f4ce92)
    spec_ioctl (in mach_kernel) + 157 (0xffffff8000320bfd)
    VNOP_IOCTL (in mach_kernel) + 244 (0xffffff8000311e84)
It is a shame that it only shows the kernel symbols, and not those inside SPL and ZFS, but we can ask it to load another sym file. (Alas, it cannot handle multiple symbol files. Fix this, Apple.)
$ sudo kextstat   # grab the addresses of SPL and ZFS again
$ sudo kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel \
    -e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/

$ sudo symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.spl.sym
    0xffffff800056a2e4 (0xffffff800056a2e4)
    spl_cv_wait (in net.lundman.spl.sym) + 54 (0xffffff7f80e52a76)
    taskq_wait (in net.lundman.spl.sym) + 78 (0xffffff7f80e53fae)
    taskq_destroy (in net.lundman.spl.sym) + 35 (0xffffff7f80e54173)
    0xffffff7f80f1a870 (0xffffff7f80f1a870)

$ sudo symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.zfs.sym
    0xffffff7f80e54173 (0xffffff7f80e54173)
    vdev_open_children (in net.lundman.zfs.sym) + 336 (0xffffff7f80f1a870)
    vdev_root_open (in net.lundman.zfs.sym) + 94 (0xffffff7f80f2bb4e)
    vdev_open (in net.lundman.zfs.sym) + 311 (0xffffff7f80f1a9b7)
    vdev_create (in net.lundman.zfs.sym) + 31 (0xffffff7f80f1b65f)
    spa_create (in net.lundman.zfs.sym) + 878 (0xffffff7f80f042ee)
Voilà!
Memory leaks
In some cases, you may suspect memory issues, for instance if you saw the following panic:
panic(cpu 1 caller 0xffffff80002438d8): "zalloc: \"kalloc.1024\" (100535 elements) retry fail 3, kfree_nop_count: 0"@/SourceCache/xnu/xnu-2050.7.9/osfmk/kern/zalloc.c:1826
To debug this, you can attach GDB and use the zprint command:
(gdb) zprint
ZONE                 COUNT    TOT_SZ    MAX_SZ    ELT_SZ  ALLOC_SZ  TOT_ALLOC  TOT_FREE   NAME
0xffffff8002a89250   1620133  18c1000   22a3599   16      1000      125203838  123583705  kalloc.16 CX
0xffffff8006306c50   110335   35f000    4ce300    32      1000      13634985   13524650   kalloc.32 CX
0xffffff8006306a00   133584   82a000    e6a900    64      1000      26510120   26376536   kalloc.64 CX
0xffffff80063067b0   610090   4a84000   614f4c0   128     1000      50524515   49914425   kalloc.128 CX
0xffffff8006306560   1070398  121a2000  1b5e4d60  256     1000      72534632   71464234   kalloc.256 CX
0xffffff8006306310   399302   d423000   daf26b0   512     1000      39231204   38831902   kalloc.512 CX
0xffffff80063060c0   100404   6231000   c29e980   1024    1000      22949693   22849289   kalloc.1024 CX
0xffffff8006305e70   292      9a000     200000    2048    1000      77633725   77633433   kalloc.2048 CX
In this case, kalloc.256 is suspect.
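Zone logging for that zone can be enabled by appending zlog=kalloc.256 to the boot arguments; a sketch, extending the boot-args used earlier:

$ sudo nvram boot-args="-v keepsyms=y debug=0x144 zlog=kalloc.256"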
After rebooting with zlog=kalloc.256 in effect, we can use
(gdb) findoldest
oldest record is at log index 393:
--------------- ALLOC 0xffffff803276ec00 : index 393 : ztime 21643824 -------------
0xffffff800024352e <zalloc_canblock+78>:   mov %eax,-0xcc(%rbp)
0xffffff80002245bd <get_zone_search+23>:   jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35>
0xffffff8000224c39 <OSMalloc+89>:          mov %rax,-0x18(%rbp)
0xffffff7f80e847df <zfs_kmem_alloc+15>:    mov %rax,%r15
0xffffff7f80e90649 <arc_buf_alloc+41>:     mov %rax,-0x28(%rbp)

And indeed, list any index:

(gdb) zstack 394
--------------- ALLOC 0xffffff8032d60700 : index 394 : ztime 21648810 -------------
0xffffff800024352e <zalloc_canblock+78>:   mov %eax,-0xcc(%rbp)
0xffffff80002245bd <get_zone_search+23>:   jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35>
0xffffff8000224c39 <OSMalloc+89>:          mov %rax,-0x18(%rbp)
0xffffff7f80e847df <zfs_kmem_alloc+15>:    mov %rax,%r15
0xffffff7f80e90649 <arc_buf_alloc+41>:     mov %rax,-0x28(%rbp)

How many times was zfs_kmem_alloc involved in the leaked allocs?

(gdb) countpcs 0xffffff7f80e847df
occurred 3999 times in log (100% of records)
At least we know it is our fault.
How many times is it arc_buf_alloc?
(gdb) countpcs 0xffffff7f80e90649
occurred 2390 times in log (59% of records)
Flamegraphs
Huge thanks to Brendan Gregg for so much of the dtrace magic.
dtrace the kernel while running a command:
dtrace -x stackframes=100 -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks
It will run for 60 seconds.
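Run the workload of interest in another terminal while the probe is active; for example, the rsync shown in the graph below (a sketch, using the pool mounted at /BOOM):

$ rsync -a /usr/ /BOOM/deletea/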
Convert it to a flamegraph:
./stackcollapse.pl out.stacks > out.folded
./flamegraph.pl out.folded > out.svg
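The stackcollapse.pl and flamegraph.pl scripts used above come from Brendan Gregg's FlameGraph tools; a sketch of fetching them (repository URL assumed):

$ git clone https://github.com/brendangregg/FlameGraph
$ cd FlameGraph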
This is rsync -a /usr/ /BOOM/deletea/ running:
Or running Bonnie++ in various stages:
- Create files in sequential order
- Stat files in sequential order
- Delete files in sequential order
ZVOL block size
At the moment, we can only handle block sizes of 512 and 4096 in ZFS, and 512 is handled poorly. To write a single 512-byte block, the IOKit layer will read in 8 blocks (to make up a PAGE_SIZE read), modify the buffer, then write 8 blocks. This makes ZFS think we wrote 8 blocks, and all stats are updated as such. This is undesirable, since the compression ratio and other statistics cannot be reported correctly.
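For reference, a zvol with a 4096-byte block size can be created like this (a sketch; the pool and volume names are placeholders):

$ sudo zfs create -V 1G -o volblocksize=4096 BOOM/vol1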
This limitation is in specfs, which is applied to any block device created in /dev. For usage with Apple and the GUI, there is not much we can do. But we are planning to create secondary blk/chr nodes (maybe in /var/run/zfs/dsk/$POOL/$name or similar, for compatibility) which will have our implementation attached as vnops. This will let us handle any block size required.
vnode_create thread
Currently, we have to protect the call to vnode_create() because it can potentially call several vnops (fsync, pageout, reclaim), and so we have a "reclaim thread" to deal with this. One issue is that reclaim can be called both as a separate thread (periodic reclaims) and as the "calling thread" of vnode_create. This makes locking tricky.

One idea is to create a "vnode_create thread" (with each dataset). Then in zfs_zget and zfs_znode_alloc, which call vnode_create, we simply place the newly allocated zp on the vnode_create thread's "request list" and resume execution. Once we have passed the "unlock" part of those functions, we wait for the vnode_create thread to complete the request, so we do not resume execution without the vp attached.

In the vnode_create thread, we pop items off the list, call vnode_create (guaranteed to be a separate thread now) and, once completed, mark the node done and signal the process which might be waiting.

In theory this should let us handle reclaim, fsync, and pageout in the same manner as upstream ZFS, with no special cases required. This should alleviate the current situation where the reclaim_list grows to a very large number (230,000 nodes observed).

It might mean we need to be careful in any function which could end up in zfs_znode_alloc, to make sure we have a vp attached before we resume. For example, zfs_lookup and zfs_create.
vnode_thread branch
The branch vnode_thread implements just this idea. It creates a vnode_create_thread per dataset, and when we need to call vnode_create(), it simply adds the zp to the list of requests, then signals the thread. The thread will call vnode_create() and, upon completion, set zp->z_vnode, then signal back. The requester for zp will sit in zfs_znode_wait_vnode() waiting for the signal back.

This means the ZFS code base is littered with calls to zfs_znode_wait_vnode() (46 to be exact) placed at the correct location (i.e., after all the locks are released and zil_commit() has been called). It is possible that this number could be decreased, as the calls to zfs_zget() appear not to suffer the zil_commit() issue, and can probably just block at the end of zfs_zget(). However, the calls to zfs_mknode() are what cause the issue.

sysctl zfs.vnode_create_list tracks the number of zp nodes in the list waiting for vnode_create() to complete. Typically it is 0 or 1, rarely higher.

It appears to deadlock from time to time.
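To keep an eye on the queue depth while reproducing the deadlock, the sysctl can simply be polled from a shell (a sketch; the exact output format depends on the build):

$ while true; do sysctl zfs.vnode_create_list; sleep 1; done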
vnode_threadX branch
The second branch, vnode_threadX, takes a slightly different approach. Instead of a permanent vnode_create_thread, it simply spawns a thread when zfs_znode_getvnode() is called. This new thread calls _zfs_znode_getvnode(), which functions as above: call vnode_create(), then signal back. The same zfs_znode_wait_vnode() blockers exist.

Here, sysctl zfs.vnode_create_list tracks the number of vnode_create threads we have started. Interestingly, this also remains 0 or 1, rarely higher.

It has not yet deadlocked.
Conclusions
- It is undesirable that we have zfs_znode_wait_vnode() littered all over the source, and that we must pay special attention to each one. Nonetheless, it does not hurt to call it in excess, given that no wait will occur if zp->z_vnode is already set.
- It is unknown if it is all right to resume ZFS execution while z_vnode is still NULL, and only to block (to wait for it to be filled in) once we are close to leaving the VNOP.
- However, the fact that vnop_reclaim is direct and can be cleaned up immediately is very desirable. We no longer need to check for the "zp without vp" case in zfs_zget().
- We no longer need to lock protect vnop_fsync or vnop_pageout in case they are called from vnode_create().
- We do not have to throttle the reclaim thread due to the list growing massive. (Populating the list is much faster than cleaning up a zp node; up to 250,000 nodes in the list have been observed.)
Iozone
A quick peek at how HFS+ and ZFS compare, just to get an idea of how much we should improve.
HFS+ and ZFS were created on the same virtual disk in VMware. Of course, these are not ideal testing conditions, but they should serve as an indicator.
The pool was created with
$ sudo zpool create -f -o ashift=12 \
    -O atime=off \
    -O casesensitivity=insensitive \
    -O normalization=formD \
    BOOM /dev/disk1
and the HFS+ file system was created with the standard OS X Disk Utility.app, with everything default (journaled, case-insensitive).
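For reference, an equivalent journaled, case-insensitive HFS+ volume can also be created from the command line (a sketch; the disk identifier and volume name are placeholders):

$ diskutil eraseVolume JHFS+ HFSTEST /dev/disk2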
Iozone was run in its standard auto mode:
$ sudo iozone -a -b outfile.xls
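To exercise a given file system, run it from a directory on that mount; a sketch, assuming the pool is mounted at /BOOM:

$ cd /BOOM && sudo iozone -a -b outfile.xls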
As a guess, writes need to double, and reads need to triple.