ZFS memory usage?
Posted: Mon Oct 07, 2019 3:37 pm
v1.9.2 / OS 10.13.6
When I work with a pool (lots of reads and writes), ZFS eventually allocates almost all of my memory (24 GB in this case).
I modified my /etc/zfs/zsysctl.conf as shown below to experiment with limiting memory usage, but ZFS somehow still ends up allocating all of it. Why?
Is there a way to make ZFS purge cached files from memory?
# Uses the standard SYSCTL.CONF(5) format.
# Comments are denoted by a "#" at the beginning of a line.
# It is highly recommended to put a date and justification as comments
# alongside each tuning.
# The zfs_arc_max parameter is in bytes and accepts decimal or
# hexadecimal values. The following text shows how to set this parameter
# to 11 GB, as an example:
# 10 Mar 2015; ilovezfs
# Cap the ARC to 11 GB reserving 5 GB for applications.
# 11 * 2^30 = 11,811,160,064
# 12 * 2^30 = 12,884,901,888 = 0x300000000
# Python calc for the size in decimal and hex:
# size = 12
# arc_max = size * (2**30)
# arc_max_hex = hex(arc_max)
# print(arc_max)
# print(arc_max_hex)
# arc_meta = int(3/4 * arc_max)
# arc_meta_hex = hex(arc_meta)
# print(arc_meta)
# print(arc_meta_hex)
# changed 2019-10-05 per https://openzfsonosx.org/wiki/Memory_utilization
kstat.zfs.darwin.tunable.zfs_arc_max=0x300000000
# As another example, let's raise the zfs_arc_meta_limit:
# 10 Mar 2015; ilovezfs
# Raise zfs_arc_meta_limit to 3/4 (instead of 1/4) of zfs_arc_max.
# 3/4 * (11 * 2^30) = 8,858,370,048
# But let's use hexadecimal this time.
# 8,858,370,048 = 0x210000000
# changed 2019-10-05: 3/4 * (12 * 2^30) = 9,663,676,416 = 0x240000000
kstat.zfs.darwin.tunable.zfs_arc_meta_limit=0x240000000
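For reference, here is the quick Python sketch I use to double-check the byte values above and read the live sysctls back. The two tunable names are the ones from my zsysctl.conf; kstat.zfs.misc.arcstats.size is just my guess for the current ARC size kstat and may be named differently on other O3X builds.

#!/usr/bin/env python3
# Recompute the ARC sizing values used in zsysctl.conf above.
import subprocess

GIB = 2 ** 30
arc_max = 12 * GIB                 # 12,884,901,888 = 0x300000000
arc_meta_limit = arc_max * 3 // 4  # 9,663,676,416  = 0x240000000
print("zfs_arc_max        =", arc_max, hex(arc_max))
print("zfs_arc_meta_limit =", arc_meta_limit, hex(arc_meta_limit))

def read_sysctl(name):
    # Return an integer sysctl value, or None if the key is missing.
    try:
        out = subprocess.run(["sysctl", "-n", name],
                             capture_output=True, text=True, check=True)
        return int(out.stdout.strip())
    except (subprocess.CalledProcessError, ValueError):
        return None

for key in ("kstat.zfs.darwin.tunable.zfs_arc_max",
            "kstat.zfs.darwin.tunable.zfs_arc_meta_limit",
            "kstat.zfs.misc.arcstats.size"):   # arcstats path is a guess
    print(key, "=", read_sysctl(key))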