
ZFS memory usage?

Posted: Mon Oct 07, 2019 3:37 pm
by photonclock
v1.9.2 / OS 10.13.6

When I work with a pool (lots of reads/writes), ZFS eventually allocates almost all of my memory (24 GB in this case).

I modified my /etc/zfs/zsysctl.conf as below to experiment with limiting memory usage, but ZFS somehow still ends up allocating all memory. Why?

Is there a way to make ZFS purge cached files from memory?


# Uses the standard SYSCTL.CONF(5) format.
# Comments are denoted by a "#" at the beginning of a line.

# It is highly recommended to put a date and justification as comments
# alongside each tuning.

# The zfs_arc_max parameter is in bytes and accepts decimal or
# hexadecimal values. The following text shows how to set this parameter
# to 11 GB, as an example:

# 10 Mar 2015; ilovezfs
# Cap the ARC to 11 GB reserving 5 GB for applications.
# 11 * 2^30 = 11,811,160,064
# 12 * 2^30 = 12,884,901,888

# Python 3: convert size in GiB to bytes, decimal and hex
# size = 12
# arc_max = size * 2**30
# print(arc_max, hex(arc_max))    # 12884901888 0x300000000
# arc_meta = arc_max * 3 // 4
# print(arc_meta, hex(arc_meta))  # 9663676416 0x240000000

# changed 2019-10-05 per https://openzfsonosx.org/wiki/Memory_utilization
kstat.zfs.darwin.tunable.zfs_arc_max=0x300000000

# As another example, let's raise the zfs_arc_meta_limit:
# 10 Mar 2015; ilovezfs
# Raise zfs_arc_meta_limit to 3/4 (instead of 1/4) of zfs_arc_max.
# 3/4 * (11 * 2^30) = 8,858,370,048
# But let's use hexadecimal this time.
# 8,858,370,048 = 0x210000000

# changed 2019-10-05
kstat.zfs.darwin.tunable.zfs_arc_meta_limit=0x240000000
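
To confirm after a reboot that these tunables actually took effect, a quick check along these lines should work (a sketch only; it assumes the kstat.zfs.darwin.tunable.* names above are readable with sysctl -n):

#!/usr/bin/env python3
# Sketch: verify the zsysctl.conf tunables were applied.
# Assumes the kstat.zfs.darwin.tunable.* sysctls are readable.
import subprocess

expected = {
    "kstat.zfs.darwin.tunable.zfs_arc_max": 0x300000000,
    "kstat.zfs.darwin.tunable.zfs_arc_meta_limit": 0x240000000,
}
for name, want in expected.items():
    got = int(subprocess.check_output(["sysctl", "-n", name]).strip())
    status = "ok" if got == want else f"MISMATCH (want {want:#x})"
    print(f"{name} = {got:#x} {status}")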

Re: ZFS memory usage?

Posted: Tue Oct 08, 2019 12:21 am
by roemer
I second this request; I have the same problem with O3X version 1.9.2 on macOS Mojave 10.14.6.

I tried to limit arc_max to 8 GB via /etc/zfs/zsysctl.conf, but after some heavy use (in my case: working with Photos and letting photolibraryd analyse and upload a large number of new images), the system ends up with over 15.5 GB of memory 'wired'. This renders my machine basically unusable, with frequent freezes, as it has only 16 GB of RAM. The only thing that works is to reboot.

Interestingly, arcstat.pl claims that the ARC target size (size) is only 2714M, while kstat.spl.misc.spl_misc.os_mem_alloc is shown as 14892662784 (roughly 13.9 GiB).
If this is not a memory-pressure situation, I don't know what is...
But ZFS does not seem to free any of its allocated memory. Why?

Looks like a serious memory leak to me...
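
For reference, a small script along these lines can watch the ARC size against the total SPL allocation over time (a sketch only; it assumes the kstats quoted above, plus kstat.zfs.misc.arcstats.size, are exposed through sysctl, which is what arcstat.pl reads):

#!/usr/bin/env python3
# Sketch: compare ARC size with total SPL kernel memory on O3X.
# Assumes kstat.zfs.misc.arcstats.size and
# kstat.spl.misc.spl_misc.os_mem_alloc are readable via sysctl -n.
import subprocess
import time

def kstat(name):
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

while True:
    arc = kstat("kstat.zfs.misc.arcstats.size")
    spl = kstat("kstat.spl.misc.spl_misc.os_mem_alloc")
    print(f"ARC {arc / 2**30:6.2f} GiB | SPL {spl / 2**30:6.2f} GiB | "
          f"gap {(spl - arc) / 2**30:6.2f} GiB")
    time.sleep(5)

A gap that keeps growing while the ARC stays flat would point at allocations outside the ARC, i.e. in the SPL layer, rather than ordinary caching.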

Re: ZFS memory usage?

Posted: Tue Oct 08, 2019 3:51 pm
by lundman
Looking over the data you provided, it looks like 32768 could be leaking; the trick will be to find where.

Re: ZFS memory usage?

Posted: Wed Oct 09, 2019 2:00 am
by roemer
Thanks for the feedback - though I'm not sure what (bucket?) '32768' stands for...

Anyway, I got a tip yesterday on IRC to leave the metadata limit at the default of 1/4 of arc_max - and while this indeed seems to have reduced the memory footprint, I also just got kernel_task hanging at 100%... I made a spindump which I will try to pass on via IRC - perhaps it gives a hint on where the leak happens?