Forced restarts during large file copies


Postby joconnor » Wed Dec 30, 2015 10:38 pm

I have OpenZFS on OSX 1.4.5 with 10.10.5 on a Mini with 16GB RAM

sudo zpool create -f -o ashift=12 -O compression=lz4 -O casesensitivity=insensitive -O atime=off -O normalization=formD pool02 mirror /dev/disk1 /dev/disk2 mirror /dev/disk3 /dev/disk4
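As an aside (my note, not from the original post): ashift is the base-2 log of the sector size ZFS assumes for the vdevs, so ashift=12 corresponds to 4096-byte sectors, a sensible choice for modern "advanced format" drives. A quick sanity check:

```shell
# ashift = log2(sector size); ashift=12 => 4 KiB sectors
sector_size=$((1 << 12))
echo "$sector_size"   # 4096
```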

Doing large file copies (100s of GB) my kernel memory fills up/wired memory goes through the roof and I get a forced reboot, but not a kernel panic log.


That isn't QUITE right. I tried to copy about a terabyte of files to the pool described above and got the forced restarts. I suspect kernel memory is filling up, but I can't prove it yet. I can get further if I break the Finder copies into pieces, but since each piece takes 30 minutes, it is tough to experiment enough to get solid data.

Just happened again, but the kernel/wired memory isn't out of bounds at the point of the forced reboot -- in the 3G range, which is lower than I've seen it before, so it isn't an accumulation of leaks...

Does anyone have similar problems?
Last edited by joconnor on Wed Dec 30, 2015 11:19 pm, edited 2 times in total.
joconnor
 
Posts: 13
Joined: Wed Dec 30, 2015 10:29 pm

Re: Kernel memory fills up

Postby ilovezfs » Wed Dec 30, 2015 11:08 pm

This has been addressed in HEAD. You can try this build to verify that:

Attachment: 148-1010.pkg (5.3 MiB)

zfs@2d50c3f8a8e72bf49e44dcb1cb2cf57df47ec40e
spl@fa83804bdc878599eacc07010bfde88b3e5944a6
ilovezfs
 
Posts: 232
Joined: Thu Mar 06, 2014 7:58 am

Re: Forced restarts during large file copies

Postby joconnor » Wed Dec 30, 2015 11:17 pm

Thanks for the pointer. I'll give it a try.

Re: Forced restarts during large file copies

Postby joconnor » Wed Dec 30, 2015 11:51 pm

My kernel task memory is up to 9.72G and wired is about 12.14G out of a total of 16G.
This is on a copy of two compressed DMGs totaling almost 200G with 20G left to go.
But I haven't crashed yet.

Re: Forced restarts during large file copies

Postby ilovezfs » Wed Dec 30, 2015 11:55 pm

Sounds correct. HEAD currently has arc max default at 75% but enforces the caps more aggressively and starts enforcing if at 80% memory-in-use (zfs or otherwise) regardless of arc's size.
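As a rough sketch of what those percentages mean on the 16 GB Mini described above (my arithmetic, not from the post):

```shell
# Default ARC cap is 75% of RAM; pressure enforcement begins once
# 80% of memory is in use (zfs or otherwise). On a 16 GiB machine:
ram_bytes=$((16 * 1024 * 1024 * 1024))
arc_cap=$((ram_bytes * 75 / 100))        # ~12 GiB ARC cap
pressure_floor=$((ram_bytes * 80 / 100)) # ~12.8 GiB in-use threshold
echo "$arc_cap $pressure_floor"
```

That ~12 GiB cap lines up with the ~12.1 GB wired figure reported elsewhere in the thread.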

Re: Forced restarts during large file copies

Postby joconnor » Wed Dec 30, 2015 11:58 pm

Using the build you dropped me, the memory doesn't get freed back up after the copy completes. I'm still sitting at 9.75GB kernel thread memory and 12.17GB wired memory. But I was able to move a couple big files without a forced restart.

Thanks for the help. If there is anything I can do to help you figure out what is still amiss please let me know.

Re: Forced restarts during large file copies

Postby joconnor » Wed Dec 30, 2015 11:59 pm

ilovezfs wrote:Sounds correct. HEAD currently has arc max default at 75% but enforces the caps more aggressively and starts enforcing if at 80% memory-in-use (zfs or otherwise) regardless of arc's size.



Okay, thanks. Will the memory eventually be released?

Re: Forced restarts during large file copies

Postby ilovezfs » Thu Dec 31, 2015 12:07 am

Nothing is amiss. It's a cache. You shouldn't expect it to drop down just because the copy finishes.

Yes, if you hit pressure or export the pools.

You can also retest with a lower arc_max by creating /etc/zfs/zsysctl.conf and putting in a value:
https://openzfsonosx.org/wiki/Memory_utilization

Get the current value:
sysctl -a | grep c_max

Reboot, verify it's now using your custom value:
sysctl -a | grep c_max

Re-run your test.
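For reference, a minimal /etc/zfs/zsysctl.conf capping ARC at 4 GiB might look like this (tunable name and sysctl.conf-style syntax as described on the wiki page above; the exact value is just an example):

```
# /etc/zfs/zsysctl.conf -- read when the zfs kexts load
# Cap the ARC at 4 GiB (value is in bytes)
kstat.zfs.darwin.tunable.zfs_arc_max=4294967296
```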

Re: Forced restarts during large file copies

Postby rottegift » Thu Dec 31, 2015 12:35 am

joconnor wrote:Okay, thanks. Will the memory eventually be released?


It gets released if there is memory pressure from anything else on the system, including but not limited to the buffer cache for other filesystems such as HFS+ and applications allocating memory. The current implementation is pretty gentle and should jump out of the way of anything else that wants memory.

Generally if you do much with zfs, this is exactly what you want; when your ARC is warm (i.e., big) you issue far fewer read operations to your underlying storage, and you get much better aggregation of write operations.

There are a tiny handful of applications that *can* try to allocate many gigabytes of RAM in one allocation call to the operating system, and fail if that one call does not succeed. There are three options to deal with that.

Firstly, write to the application's developer and say, "hey, you should allocate in reasonable chunks and grow to the target allocation size rather than allocating the whole thing at once, or at least fall back to that."

Secondly, you can run "sudo sysctl -w kstat.spl.misc.spl_misc.spl_spl_free_fast_pressure=1 kstat.spl.misc.spl_misc.spl_spl_free_manual_pressure=numberofmegabytes", and ARC will immediately shrink back by that number of megabytes, or down to the ARC's minimum. Then try the bad application again.
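Note that the manual-pressure tunable takes a value in megabytes, so convert first; a tiny hypothetical helper (the 6 GiB figure is just an example):

```shell
gib=6                # amount of ARC to release, in GiB (example value)
mb=$((gib * 1024))   # the tunable expects megabytes
echo "sudo sysctl -w kstat.spl.misc.spl_misc.spl_spl_free_fast_pressure=1 kstat.spl.misc.spl_misc.spl_spl_free_manual_pressure=$mb"
```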

Thirdly, you can run an application which smoothly allocates and dirties the desired amount of memory in smaller chunks and then exits, which will push ARC out of the way. That mimics the behaviour of smarter applications: Parallels, for instance, grabs its required wired memory in chunks, whereas VMWare Fusion does not.

I personally have not yet run into ANY other application that does what VMWare does on a Mac (failing to grab memory because ARC is big). If you run into one, please let everyone know.
rottegift
 
Posts: 26
Joined: Fri Apr 25, 2014 12:00 am

Re: Forced restarts during large file copies

Postby joconnor » Thu Dec 31, 2015 8:52 am

I started a 3TB copy before I went to bed, and it is still going now with 1TB left to go.
No intervening crashes.
Thanks for the help.
