ZFS not observing desired memory limits?


ZFS not observing desired memory limits?

Postby Sharko » Tue Jul 05, 2016 8:52 pm

Hi, I followed the instructions on this page:

https://openzfsonosx.org/wiki/Memory_utilization

to create a file "zsysctl.conf" that I hoped would limit ZFS's RAM usage; my intent was to reserve 8GB (out of 24GB total RAM) for the operating system and applications at all times. However, I'm not seeing that behavior. If I do something that causes ZFS to cache a lot of file data (like ditto an HFS directory onto a ZFS dataset, or run Carbon Copy Cloner to populate a ZFS dataset from a backup), wired memory crawls up until wired + app memory consumes the whole 24GB. And then if I try to launch a large app like LibreOffice, it takes 30+ seconds instead of 1 second.

Here is my zsysctl.conf file:
# Uses the standard SYSCTL.CONF(5) format.
# Comments are denoted by a "#" at the beginning of a line.

# It is highly recommended to put a date and justification as comments
# alongside each tuning.

# The zfs_arc_max parameter is in bytes and accepts decimal or
# hexadecimal values. The following text shows how to set this parameter
# to 16 GB, as an example:

# 27 May 2016: adminkurt
# Cap the ARC to 16 GB reserving 8 GB for applications (24 GB RAM in box)
kstat.zfs.darwin.tunable.zfs_arc_max=17179869184

# 27 May 2016: adminkurt
# Raise zfs_arc_meta_limit to 10 GB (instead of 1/4 of zfs_arc_max).
kstat.zfs.darwin.tunable.zfs_arc_meta_limit=10737418240
# end of file

Do you think ZFS is or isn't observing the limit I've tried to put in place? Or are the actions I'm doing (copying a large group of files in the Terminal, say) causing OS X to also use wired memory, and together they are grabbing all the available memory? How might I distinguish between those two possibilities?
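
One rough way to tell the two apart is to compare what the ZFS/SPL side reports as allocated against the machine's total wired memory; this is only a sketch, assuming the kstat sysctl names that O3X exports (they appear later in this thread):

Code: Select all
#!/bin/bash
# Rough check: how much of the wired memory belongs to ZFS?
# Assumes a 4096-byte page size, as vm_stat reports on this machine.
spl_bytes=$(sysctl -n kstat.spl.misc.spl_misc.os_mem_alloc)   # total bytes the SPL has allocated
arc_bytes=$(sysctl -n kstat.zfs.misc.arcstats.size)           # current ARC size within that
wired_pages=$(vm_stat | awk '/Pages wired down/ {gsub(/\./,""); print $4}')
echo "ZFS/SPL allocated: $((spl_bytes / 1048576)) MB (ARC: $((arc_bytes / 1048576)) MB)"
echo "Total wired:       $((wired_pages * 4096 / 1048576)) MB"
# If total wired vastly exceeds the ZFS/SPL figure, something other than ZFS owns it.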

Some notes on my setup:
I have four 2TB drives in the Mac Pro, set up as a zpool with two mirrored 2TB vdevs. No deduplication is in use on this pool; I have a backup drive that I don't keep plugged in that does have dedup turned on, just for testing, but in the cases I'm describing the backup drive was not connected and is not a factor.

Thanks for your help and suggestions.
Sharko
 
Posts: 228
Joined: Thu May 12, 2016 12:19 pm

Re: ZFS not observing desired memory limits?

Postby Brendon » Tue Jul 05, 2016 11:27 pm

The ARC limit is not a hard limit on ZFS memory use; this is a common misconception.

Typically the memory used will be 30-40% larger than the ARC limit, due in part to allocator overheads.

This should not be a problem: ZFS will release memory as required when the machine is experiencing memory pressure. In most cases, that is.
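
To put rough numbers on that for the 16 GB cap above (a back-of-the-envelope sketch using the 30-40% figure):

Code: Select all
# 16 GB ARC cap plus 30-40% allocator overhead ~= 20.8 - 22.4 GiB wired at
# full load, which on a 24 GB machine leaves little headroom for applications:
echo $((17179869184 * 13 / 10)) $((17179869184 * 14 / 10))
# 22333829939 24051816857   <- ~20.8 GiB and ~22.4 GiB in bytes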

HTH
- Brendon
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm

Re: ZFS not observing desired memory limits?

Postby Sharko » Thu Jul 07, 2016 8:08 am

Good to know. I guess I will try dialing the ZFS memory setting down further, to, say, 8GB, and see how that does. In the instances I observed, the memory pressure graph stayed green even though launching LibreOffice took forever. So perhaps whatever OpenZFS uses to sense when to release memory isn't quite optimal? Thanks for your advice.
Sharko
 
Posts: 228
Joined: Thu May 12, 2016 12:19 pm

Re: ZFS not observing desired memory limits?

Postby s_mcleod » Fri Sep 23, 2016 1:19 pm

Hi,

I'm pretty new to ZFS, as I mainly use Btrfs or ext4 on my Linux servers, but I've been trying out OpenZFS on my iMacs, and this issue is what's holding me back from considering it usable.

I find that the stock configuration leaks/eats memory like there's no tomorrow, and editing `/etc/zfs/zsysctl.conf` has no effect at all.

My zsysctl.conf currently consists of just this:

Code: Select all
kstat.zfs.darwin.tunable.zfs_arc_max=2147483648
kstat.zfs.darwin.tunable.zfs_arc_meta_limit=536870912


(plus some #-commented lines for my own reference)
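
One way to verify whether the file is being applied at all is to read the tunables back after boot - a sketch using the same kstat names:

Code: Select all
# Did zsysctl.conf take effect? Read the tunable and the resulting ARC target back:
sysctl kstat.zfs.darwin.tunable.zfs_arc_max   # 0 means "default", i.e. the file never applied
sysctl -a | grep arcstats.c                   # c_max should reflect the configured cap
# Setting the value by hand isolates the tunable itself from the config mechanism:
sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_max=$((2*1024**3))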


My machine:
[screenshot of system specs not preserved]

----
s_mcleod
 
Posts: 7
Joined: Fri Sep 23, 2016 10:14 am

Re: ZFS not observing desired memory limits?

Postby Brendon » Fri Sep 23, 2016 4:23 pm

Who knows, you're running on Sierra! One can only assume that zsyscontrold is not functioning correctly on that OS.

In general the maximum settings for ARC constrain the size of, well, the ARC. However, the ARC has to get its memory from somewhere - the allocator, which in our case resides in the SPL. All allocators have overhead, in our case probably around 30% +/-, depending on what you are doing. Therefore the amount of wired memory used by ZFS will be ARC max + ~30% at full load.

Now it's true to say that the ARC in general will respond to memory pressure and shrink, but this is one of our weaknesses - it's difficult to achieve optimally. The v1.5.2 release got a little sub-optimal in that regard, and is in fact a bit of a dog. The code in the repo uses the earlier strategy and works a little better. We are of course working on solving this completely, and are getting closer. Having said that, in-kernel memory management on OS X is fairly hostile to our kind of use.

Best strategy: for biggish machines, constrain the ARC to around 25% of physical memory, or no more than about 8GB, for now.
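
That rule of thumb - min(25% of RAM, 8 GB) - could be computed with the stock hw.memsize sysctl; a sketch, not an official recommendation:

Code: Select all
#!/bin/bash
# Derive zfs_arc_max = min(25% of physical RAM, 8 GB), per the advice above.
ram=$(sysctl -n hw.memsize)          # physical memory in bytes
cap=$((8 * 1024**3))                 # 8 GB ceiling
arc=$((ram / 4))
[ "$arc" -gt "$cap" ] && arc=$cap
echo "kstat.zfs.darwin.tunable.zfs_arc_max=$arc"   # append this line to /etc/zfs/zsysctl.conf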

- Brendon
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm

Re: ZFS not observing desired memory limits?

Postby s_mcleod » Fri Sep 23, 2016 10:47 pm

Sierra is the current stable version of macOS.

Its betas, then RCs/GMs, were out for quite a while before it was released, and there weren't many changes around the daemon management subsystem along the way, so I don't think it's related to that.

To test this, I've also been setting the kernel sysctl params by hand, and ZFS still gobbled up all 32GB of RAM very quickly.

It looks more to me like a rounding issue, or perhaps ZFS now takes KB instead of bytes or something, because I just figured out that if you set it a _lot_ smaller, it behaves how you'd expect it to:

Code: Select all
samm at samm-imac in ~ cat /etc/zfs/zsysctl.conf

kstat.zfs.darwin.tunable.zfs_arc_max=2147483648
kstat.zfs.darwin.tunable.zfs_arc_meta_limit=536870912


Code: Select all
samm-imac# vm_stat
Mach Virtual Memory Statistics: (page size of 4096 bytes)
Pages free:                             4963901.
Pages active:                            695956.
Pages inactive:                          954164.
Pages speculative:                        67828.
Pages throttled:                              0.
Pages wired down:                       1588814.
Pages purgeable:                          16059.
"Translation faults":                  48432339.
Pages copy-on-write:                    1879848.
Pages zero filled:                     21621966.
Pages reactivated:                       127080.
Pages purged:                            178166.
File-backed pages:                       790760.
Anonymous pages:                         927188.
Pages stored in compressor:              253959.
Pages occupied by compressor:            115088.
Decompressions:                          153163.
Compressions:                           2279687.
Pageins:                                4019290.
Pageouts:                                  3475.
Swapins:                                   2677.
Swapouts:                                  2677.


samm-imac# memory_pressure
The system has 2147483648 (524288 pages with a page size of 4096).

Stats:
Pages free: 4965610
Pages purgeable: 16059
Pages purged: 178166

Swap I/O:
Swapins: 2677
Swapouts: 2677

Page Q counts:
Pages active: 698193
Pages inactive: 954215
Pages speculative: 65710
Pages throttled: 0
Pages wired down: 1588462

Compressor Stats:
Pages used by compressor: 115088
Pages decompressed: 153161
Pages compressed: 2279687

File I/O:
Pageins: 4019222
Pageouts: 3475


Now that it doesn't seem to be crashing my system, I'll let my rsync finish; then I can chuck two SSDs in front of it in a mirror for the cache and log - I believe that's supposed to help a lot with performance, as right now I'm maxing out the spinning rust at around 120MB/s, which is shocking when you're used to between 1500-2500MB/s normally, heh.
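
The commands for attaching those SSDs would look roughly like the sketch below; the pool name "tank" and device names are hypothetical, and note that log (SLOG) vdevs can be mirrored but cache (L2ARC) devices cannot:

Code: Select all
# Hypothetical pool and device names - check "diskutil list" for the real ones.
sudo zpool add tank log mirror disk2 disk3   # mirrored SLOG (ZFS intent log)
sudo zpool add tank cache disk4              # L2ARC read cache (mirroring not supported)
sudo zpool status tank                       # confirm the new vdevs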
s_mcleod
 
Posts: 7
Joined: Fri Sep 23, 2016 10:14 am

Re: ZFS not observing desired memory limits?

Postby Brendon » Sat Sep 24, 2016 2:27 am

@s_mcleod

I have just done some basic checks around the functionality you speak of on Sierra, and it seems to be working pretty much fine:

Default values on kext load (this is a 2GB VM):

Code: Select all
sysctl -a | grep arcstats.c
kstat.zfs.misc.arcstats.c: 1288488960    <----- this is the target amount of ARC to utilise
kstat.zfs.misc.arcstats.c_min: 161061120
kstat.zfs.misc.arcstats.c_max: 1288488960


Set the limits:

Code: Select all
sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_max=$((512*1024*1024))
kstat.zfs.darwin.tunable.zfs_arc_max: 0 -> 536870912
big-vm-mac-imac:~ zfs-tests$ sysctl -a | grep arcstats.c
kstat.zfs.misc.arcstats.c: 536870912


Pressure the VM (an rsync):

Code: Select all
big-vm-mac-imac:~ zfs-tests$ arcstat.pl
    Time   read   miss  miss%   dmis  dm%  pmis  pm%   mmis  mm%   size  tsize 
03:05:52    10K    10K     99    10K   99     0    0    10K   99   192M   536M 
03:05:53    241    235     97    235   97     0    0    235   97   227M   536M 
03:05:54    196    194     98    194   98     0    0    194  100   234M   536M 
03:05:55   1513   1104     72   1104   72     0    0   1104   84   243M   536M 
03:05:56    489    489    100    489  100     0    0    489  100   255M   536M 
03:05:57    718    718    100    718  100     0    0    718  100   261M   536M 
03:05:58    547    547    100    548  100     0    0    548  100   264M   536M 
03:05:59    802    794     99    794   99     0    0    794   99   160M   161M       <---- this point is where O3X released memory due to OS pressure
03:06:00    796    796    100    795  100     0    0    795  100   167M   161M 
03:06:01    716    716    100    716  100     0    0    716  100   164M   161M


And the resulting memory in use:

Code: Select all
sysctl -a | grep os_mem_alloc
kstat.spl.misc.spl_misc.os_mem_alloc: 391118848 <---- this is the exact amount of memory O3X has allocated
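
A trivial polling loop is a handy way to watch that counter while a workload runs (a sketch using the same sysctl):

Code: Select all
# Poll O3X's total allocation every 5 seconds while a copy or rsync runs:
while true; do
    echo "$(date '+%H:%M:%S')  $(sysctl -n kstat.spl.misc.spl_misc.os_mem_alloc) bytes"
    sleep 5
done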


Regarding your comments about leaks and memory growth: by default O3X will consume most of a machine's memory as ARC and other structures, until such time as the machine experiences memory pressure, at which point memory is released. This mechanism generally works fairly well, but not 100% for everyone. Version 1.5.2 has a performance problem when allocating significant portions of a large-memory machine's RAM; it is necessary to restrict the ARC to around 8GB to avoid these problems. This high-memory-use strategy is identical for OpenZFS on all platforms it runs on - FreeBSD, Linux, Illumos and OS X. Maximising use of the ARC maximises filesystem performance. Users are of course free to tune O3X to taste.

@sharko - memory management inside the OS X kernel is anything but trivial. The performance problem is not about releasing memory; it's more about O3X smashing the low-level page allocator and causing interactive glitches and other performance problems. The code in the repo is better - back to 1.4.x performance - and we are currently working on a more permanent fix.

- Brendon
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm

Re: ZFS not observing desired memory limits?

Postby Sharko » Sat Sep 24, 2016 8:48 am

I will chime in to say that since I set the memory limit to 8GB I have not experienced any UI lag issues; generally, things are working well.

Kurt
Sharko
 
Posts: 228
Joined: Thu May 12, 2016 12:19 pm

Re: ZFS not observing desired memory limits?

Postby znubee » Mon Jan 09, 2017 4:10 pm

So I've got this same problem on a new machine, and I've been having some trouble with performance and memory consumption with ZFS. I have 128 GB of RAM on this system, but I'd like to limit ZFS quite a bit (to as little as possible while keeping solid performance on a 12 TB RAID-Z).

Right now I have /etc/zfs/zsysctl.sh loading and it contains (found here):
Code: Select all
#!/bin/bash

export PATH=/usr/bin:/bin:/sbin:/usr/sbin

sysctl -w kstat.zfs.darwin.tunable.zfs_arc_min=$((512*1024**2+1*1024**3))       # 1.5 GiB
sysctl -w kstat.zfs.darwin.tunable.zfs_arc_meta_min=$((256*1024**2+1*1024**3))  # 1.25 GiB
sysctl -w kstat.zfs.darwin.tunable.zfs_arc_meta_limit=$((5*1000**3))            # 5 GB
sysctl -w kstat.zfs.darwin.tunable.zfs_arc_max=$((11*1024**3))                  # 11 GiB
sysctl -w kstat.zfs.darwin.tunable.zfs_dirty_data_max=$((512*1024**2))          # 512 MiB


My zsysctl.conf contains:
Code: Select all
kstat.zfs.darwin.tunable.zfs_arc_max=8589934592


What should I actually be using here? I've been copying and pasting pieces from things I've seen, but performance just isn't improving, and the constant reboots are a real pain.

Is there a solid guide to tune this? Can you give me some suggestions? Right now kernel_task is chewing up around 40GB.
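
Note that the script above sets zfs_arc_max to 11 GB while zsysctl.conf sets it to 8 GB, so the two are fighting over the same tunable. A minimal configuration following the 8 GB guidance given earlier in this thread might look like this (a sketch, not a tested recommendation):

Code: Select all
# Minimal /etc/zfs/zsysctl.conf per the ~8 GB advice earlier in the thread;
# remove zsysctl.sh so two mechanisms don't set zfs_arc_max differently.
kstat.zfs.darwin.tunable.zfs_arc_max=8589934592         # 8 GB
kstat.zfs.darwin.tunable.zfs_arc_meta_limit=2147483648  # 2 GB (1/4 of arc_max, the default ratio)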
znubee
 
Posts: 13
Joined: Mon Jul 18, 2016 7:33 am

Re: ZFS not observing desired memory limits?

Postby Brendon » Wed Jan 11, 2017 2:24 am

If you have the ability/desire to build your own binaries, now is a good time to try the "knight" ZFS and SPL branches.

These are approaching release candidate status.

- Brendon
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm
