ZFS memory usage?


Postby photonclock » Mon Oct 07, 2019 3:37 pm

v1.9.2 / OS 10.13.6

When I work with a pool (lots of reads/writes), ZFS eventually allocates almost all memory (24 GB in this case).

I modified my /etc/zfs/zsysctl.conf as below to experiment with limiting memory usage, but ZFS somehow still ends up allocating all memory. Why?

Is there a way to make ZFS purge cached files from memory?


# Uses the standard SYSCTL.CONF(5) format.
# Comments are denoted by a "#" at the beginning of a line.

# It is highly recommended to put a date and justification as comments
# alongside each tuning.

# The zfs_arc_max parameter is in bytes and accepts decimal or
# hexadecimal values. The following text shows how to set this parameter
# to 11 GB, as an example:

# 10 Mar 2015; ilovezfs
# Cap the ARC to 11 GB reserving 5 GB for applications.
# 11 * 2^30 = 11,811,160,064
# 12 * 2^30 = 12_884_901_888 # 12884901888

# Python 3: calculate the sizes in decimal and hex
# size = 12
# arc_max = (size * (2**30))
# arc_max_hex = hex(arc_max)
# print(arc_max)
# print(arc_max_hex)
# arc_meta = int(3/4 * arc_max)
# arc_meta_hex = hex(arc_meta)
# print(arc_meta)
# print(arc_meta_hex)

# changed 2019-10-05 per https://openzfsonosx.org/wiki/Memory_utilization
kstat.zfs.darwin.tunable.zfs_arc_max=0x300000000

# As another example, let's raise the zfs_arc_meta_limit:
# 10 Mar 2015; ilovezfs
# Raise zfs_arc_meta_limit to 3/4 (instead of 1/4) of zfs_arc_max.
# 3/4 * (11 * 2^30) = 8,858,370,048
# But let's use hexadecimal this time.
# 8,858,370,048 = 0x210000000

# changed 2019-10-05
kstat.zfs.darwin.tunable.zfs_arc_meta_limit=0x240000000
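
The commented-out calculation above can also be run as a standalone Python 3 script; it reproduces both tunable values in this file (12 GiB cap, metadata limit at 3/4 of that):

```python
# Compute zsysctl.conf values for a given ARC cap in GiB.
def arc_tunables(size_gib):
    arc_max = size_gib * 2**30       # cap in bytes
    arc_meta = 3 * arc_max // 4      # 3/4 of arc_max, integer bytes
    return arc_max, arc_meta

arc_max, arc_meta = arc_tunables(12)
print(f"kstat.zfs.darwin.tunable.zfs_arc_max={arc_max:#x}")         # 0x300000000
print(f"kstat.zfs.darwin.tunable.zfs_arc_meta_limit={arc_meta:#x}") # 0x240000000
```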

Re: ZFS memory usage?

Postby roemer » Tue Oct 08, 2019 12:21 am

I second this request; I have the same problem with O3X version 1.9.2 on macOS Mojave 10.14.6.

I tried to limit arc_max to 8 GB via /etc/zfs/zsysctl.conf, but after some heavy use (in my case: working with Photos and having photolibraryd analyse and upload a larger number of new images), the system ends up with over 15.5 GB of memory 'wired'. This renders my machine basically useless, with frequent freezing, as it has only 16 GB of RAM. The only thing that works is to reboot.

Interestingly, arcstat.pl claims that the ARC target size (size) is only 2714M, while kstat.spl.misc.spl_misc.os_mem_alloc is shown as 14892662784.
If this is not a memory-pressure situation, I don't know what is...
But ZFS seems not to free any of its memory allocations. Why?

Looks like a serious memory leak to me...
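
For what it's worth, the gap between the two kstats quoted above can be quantified in a few lines of Python (the byte counts are the ones reported in this post; arcstat.pl's "M" column is taken to mean MiB):

```python
# Compare total SPL allocation against the reported ARC target size.
GIB = 2**30
MIB = 2**20

os_mem_alloc = 14892662784   # kstat.spl.misc.spl_misc.os_mem_alloc, bytes
arc_target   = 2714 * MIB    # arcstat.pl "size" column: 2714M

print(f"SPL allocated: {os_mem_alloc / GIB:.1f} GiB")   # 13.9 GiB
print(f"ARC target:    {arc_target / GIB:.1f} GiB")     # 2.7 GiB
print(f"Unaccounted:   {(os_mem_alloc - arc_target) / GIB:.1f} GiB")
```

So roughly 11 GiB of wired memory sits outside the ARC target, which is what makes this look like a leak rather than a cache that is slow to shrink.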

Re: ZFS memory usage?

Postby lundman » Tue Oct 08, 2019 3:51 pm

Looking over the data you provided, it looks like the 32768-byte allocations could be leaking; the trick will be to find where.

Re: ZFS memory usage?

Postby roemer » Wed Oct 09, 2019 2:00 am

Thanks for the feedback - not sure though what (bucket?) '32768' stands for...

Anyway, I got a tip yesterday on IRC to leave the metadata limit at the default of ¼ of arc_max - and while this indeed apparently reduced the memory footprint, I also just got a hanging 100% kernel_task... I made a spindump which I will try to pass on via IRC - perhaps this gives a hint on where the leak happens?
