
Forced restarts during large file copies

PostPosted: Wed Dec 30, 2015 10:38 pm
by joconnor
I have OpenZFS on OS X 1.4.5 with OS X 10.10.5 on a Mac mini with 16 GB of RAM.

sudo zpool create -f -o ashift=12 -O compression=lz4 -O casesensitivity=insensitive -O atime=off -O normalization=formD pool02 mirror /dev/disk1 /dev/disk2 mirror /dev/disk3 /dev/disk4

When I do large file copies (hundreds of GB), my kernel memory fills up, wired memory goes through the roof, and I get a forced reboot, but no kernel panic log.


That isn't QUITE right. I tried to copy about a terabyte of files to the pool described above and got the forced restarts. I suspect kernel memory is filling up, but I can't prove it yet. I can get further if I break the Finder copies into pieces, but since each piece takes 30 minutes, it is tough to experiment enough to get solid data.

It just happened again, but the kernel/wired memory wasn't out of bounds at the point of the forced reboot -- it was in the 3 GB range, which is lower than I've seen before, so it isn't an accumulation of leaks...

Does anyone have similar problems?

Re: Kernel memory fills up

PostPosted: Wed Dec 30, 2015 11:08 pm
by ilovezfs
This has been addressed in HEAD. You can try this build to verify that:

148-1010.pkg (5.3 MiB)

zfs@2d50c3f8a8e72bf49e44dcb1cb2cf57df47ec40e
spl@fa83804bdc878599eacc07010bfde88b3e5944a6

Re: Forced restarts during large file copies

PostPosted: Wed Dec 30, 2015 11:17 pm
by joconnor
Thanks for the pointer. I'll give it a try.

Re: Forced restarts during large file copies

PostPosted: Wed Dec 30, 2015 11:51 pm
by joconnor
My kernel task memory is up to 9.72G and wired is about 12.14G out of a total of 16G.
This is on a copy of two compressed DMGs totaling almost 200G with 20G left to go.
But I haven't crashed yet.

Re: Forced restarts during large file copies

PostPosted: Wed Dec 30, 2015 11:55 pm
by ilovezfs
Sounds correct. HEAD currently defaults arc max to 75%, but it enforces the caps more aggressively and starts enforcing once 80% of memory is in use (by zfs or otherwise), regardless of ARC's size.
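
As a rough sanity check, taking the 16 GB machine described above as the example: 0.75 x 16 GB = 12 GB for the default ARC cap, and 0.80 x 16 GB = 12.8 GB of memory in use before enforcement starts, which is roughly in line with the ~12 GB of wired memory reported earlier in the thread.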

Re: Forced restarts during large file copies

PostPosted: Wed Dec 30, 2015 11:58 pm
by joconnor
Using the build you dropped me, the memory doesn't get freed back up after the copy completes. I'm still sitting at 9.75 GB of kernel task memory and 12.17 GB wired. But I was able to move a couple of big files without a forced restart.

Thanks for the help. If there is anything I can do to help you figure out what is still amiss please let me know.

Re: Forced restarts during large file copies

PostPosted: Wed Dec 30, 2015 11:59 pm
by joconnor
ilovezfs wrote: Sounds correct. HEAD currently defaults arc max to 75%, but it enforces the caps more aggressively and starts enforcing once 80% of memory is in use (by zfs or otherwise), regardless of ARC's size.

Okay, thanks. Will the memory eventually be released?

Re: Forced restarts during large file copies

PostPosted: Thu Dec 31, 2015 12:07 am
by ilovezfs
Nothing is amiss. It's a cache; you shouldn't expect it to drop down just because the copy finishes.

Yes, the memory will eventually be released, if you hit memory pressure or export the pools.

You can also retest with a lower arc_max by creating /etc/zfs/zsysctl.conf and putting a value in it:
https://openzfsonosx.org/wiki/Memory_utilization
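
For reference, a minimal zsysctl.conf might look something like the sketch below; the tunable name and the 4 GiB figure are only illustrative, so confirm them against the wiki page and the sysctl output below before relying on them.

# /etc/zfs/zsysctl.conf
# Cap the ARC at roughly 4 GiB (4294967296 bytes); adjust to taste.
kstat.zfs.darwin.tunable.zfs_arc_max=4294967296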

Get the current value:
sysctl -a | grep c_max

Reboot, then verify it's now using your custom value:
sysctl -a | grep c_max

Re-run your test.

Re: Forced restarts during large file copies

PostPosted: Thu Dec 31, 2015 12:35 am
by rottegift
joconnor wrote: Okay, thanks. Will the memory eventually be released?

It gets released if there is memory pressure from anywhere else on the system, including but not limited to the buffer cache for other filesystems like HFS+ and applications allocating memory. The current implementation is pretty gentle and should jump out of the way of anything else that wants memory.

Generally, if you do much with zfs, this is exactly what you want: when your ARC is warm (i.e., big) you issue far fewer read operations to your underlying storage, and you get much better aggregation of write operations.

There are a tiny handful of applications that *can* try to allocate many gigabytes of RAM in one allocation call to the operating system, and fail if that one call does not succeed. There are three options for dealing with that.

Firstly, you can write to the application's author and say, "hey, you should allocate in reasonable chunks and grow to the target allocation size rather than allocating the whole thing at once, or at least fall back to that".

Secondly, you can run "sudo sysctl -w kstat.spl.misc.spl_misc.spl_spl_free_fast_pressure=1 kstat.spl.misc.spl_misc.spl_spl_free_manual_pressure=numberofmegabytes", and ARC will immediately shrink back by that number of megabytes, or down to ARC's minimum (a concrete example appears after this list). Then try the bad application again.

Thirdly, you can run an application which smoothly allocates and dirties the desired amount of memory in smaller chunks and then exits, which will push ARC out of the way. That mimics the behaviour of smarter applications: Parallels, for instance, grabs its required wired memory in chunks; VMware Fusion does not.
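
To make the second option concrete, asking ARC to give back about 4 GiB right away would look like this (4096 is only an illustrative figure; substitute however many megabytes you want freed):

sudo sysctl -w kstat.spl.misc.spl_misc.spl_spl_free_fast_pressure=1 kstat.spl.misc.spl_misc.spl_spl_free_manual_pressure=4096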

I personally have not yet run into ANY other application that does what VMware Fusion does on a Mac (failing to grab memory because ARC is big). If you run into one, please let everyone know.

Re: Forced restarts during large file copies

PostPosted: Thu Dec 31, 2015 8:52 am
by joconnor
I started a 3TB copy before I went to bed, and it is still going now with 1TB left to go.
No intervening crashes.
Thanks for the help.