Are these speeds out of the norm?


Postby tim.rohrer » Sun Sep 23, 2018 8:49 pm

In my work with zfs so far, I'm seeing it really slow down my 2013 MacPro (quad-core, 16GB), and the performance of my pools seems far worse than what's been noted in other threads in this forum.

A couple of examples.

ps aux | grep <string> can take several seconds to produce results. On other (less powerful) computers, the same command returns results almost instantly.

I had about 200G of data in a folder on a pool that I deleted with rm -rf. Although I had been simultaneously trying to rsync files to that pool (a different dataset), there was really no other load on the MacPro. The time to complete the deletion was:

real 1964m19.449s
user 0m5.309s
sys 4m19.075s

That deletion took 32 hours.
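
For reference, that timing came from wrapping the delete in time, roughly like this (the path here is only illustrative, not the exact one I used):

Code: Select all
time rm -rf /Volumes/tank0/ArchivesLocal/Users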

So, these issues seem to go beyond the problems previously noted.

Thoughts?
tim.rohrer
 
Posts: 29
Joined: Tue Jul 24, 2018 6:49 pm

Re: Are these speeds out of the norm?

Postby tim.rohrer » Mon Sep 24, 2018 6:23 am

For comparison, I have a 2014 Mac mini that I have repurposed with Ubuntu 18.04. The box has 4GB of RAM and has not been tweaked.

I did an rm of 185G of the same type of data as above:

real 10m34.355s
user 0m2.426s
sys 1m10.171s

Back on the MacPro, I had 357G of sparsebundle-type data (an old network Time Machine backup) on an external USB3 drive formatted with HFS. Deletion time:

real 0m7.341s
user 0m0.002s
sys 0m2.719s
tim.rohrer
 
Posts: 29
Joined: Tue Jul 24, 2018 6:49 pm

Re: Are these speeds out of the norm?

Postby leeb » Mon Sep 24, 2018 6:27 am

Definitely sounds like you hit some pathological edge case (I assume you checked SMART and the like and there are no hardware issues); I haven't seen anything quite like that with my pools, despite O3X clearly still being a ways from the optimization and well-tuned performance defaults it'll hopefully reach eventually. One thing that would help is to share more specifics: OS/O3X version, how you've got the pool and FS set up (vdevs, properties used), what specific hardware you're using it with if any (DAS?), and more details on the exact test case. I.e., "200 GB of data" isn't very specific; there is a difference between one 200 GB image and 10 million files of a few kilobytes each in thousands of nested folders, to pick two possible extreme usage cases (a VM image vs. a file-based DB, say). All that would make it easier to try to replicate if possible.
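
The output of something along these lines would cover most of it (substitute your own pool name):

Code: Select all
sw_vers                                      # macOS version
sysctl spl.kext_version zfs.kext_version     # installed O3X version
zpool status -v                              # pool layout and health
zfs get all yourpool                         # pool/dataset properties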
leeb
 
Posts: 43
Joined: Thu May 15, 2014 12:10 pm

Re: Are these speeds out of the norm?

Postby tim.rohrer » Mon Sep 24, 2018 12:05 pm

Thanks leeb. Most of the information was in other posts I've recently started, but you're right that I should have included it here too.

I have done quick smartctl tests against each of the disks and they all pass. I have not done long tests, but can and will.
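In case it helps, the quick checks were along these lines (the disk identifier is just an example, and USB enclosures sometimes also need -d sat):

Code: Select all
sudo smartctl -t short /dev/disk2    # quick self-test (example identifier)
sudo smartctl -H -a /dev/disk2       # overall health plus attributes

# the long test I still need to run
sudo smartctl -t long /dev/disk2

Installed versions: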

Code: Select all
spl.kext_version: 1.7.2-1
zfs.kext_version: 1.7.2-1


Here is my ctl file:

Code: Select all
# 5SEP2018 Adding meta at 3/4 to try and find a better balance
kstat.zfs.darwin.tunable.zfs_arc_max=4294967296
kstat.zfs.darwin.tunable.zfs_arc_meta_limit=3221225472

# 21SEP2018: tim.rohrer per lundman
#kstat.zfs.darwin.tunable.ignore_negatives: 1
#kstat.zfs.darwin.tunable.ignore_positives: 1
#kstat.zfs.darwin.tunable.create_negatives: 0

# 13SEP2018: tim.rohrer
# Disabled prefetch to try and solve the rsync problem. Based on thread: https://openzfsonosx.org/forum/viewtopic.php?f=26&t=2779&p=6902&hilit=rsync+arc#p6902
kstat.zfs.darwin.tunable.zfs_prefetch_disable=1
kstat.zfs.darwin.tunable.arc_lotsfree_percent=30


Although those lines are commented out in the file, the ignore and create tunables are in fact set to the values shown above.
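
If I understand the O3X docs correctly, these tunables can also be poked at runtime with sysctl rather than only through the file, e.g.:

Code: Select all
sudo sysctl -w kstat.zfs.darwin.tunable.zfs_prefetch_disable=1
sudo sysctl -w kstat.zfs.darwin.tunable.arc_lotsfree_percent=30
sysctl kstat.zfs.darwin.tunable.zfs_arc_max    # read back a current value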

macOS High Sierra (10.13.6).

tank0 (from where I did the large delete that took 32 hours) is a RAIDZ1 pool using a 4-bay enclosure (OWC Mercury Rack Pro) connected via USB3. The disks, 4x3TB, are Hitachi Deskstar 7K3000s. Default settings were used in setting up the pool; some datasets have quotas, but that is about all.
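
I don't have the exact command in my notes, but the create would have been something close to this (the disk identifiers are from memory and may not match):

Code: Select all
sudo zpool create tank0 raidz1 disk2 disk3 disk4 disk5

Here are the full pool properties: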

Code: Select all
NAME   PROPERTY               VALUE                  SOURCE
tank0  type                   filesystem             -
tank0  creation               Sat Aug 11 15:42 2018  -
tank0  used                   5.35T                  -
tank0  available              2.53T                  -
tank0  referenced             5.76G                  -
tank0  compressratio          1.00x                  -
tank0  mounted                yes                    -
tank0  quota                  none                   default
tank0  reservation            none                   default
tank0  recordsize             128K                   default
tank0  mountpoint             /Volumes/tank0         default
tank0  sharenfs               off                    default
tank0  checksum               on                     default
tank0  compression            off                    default
tank0  atime                  on                     default
tank0  devices                on                     default
tank0  exec                   on                     default
tank0  setuid                 on                     default
tank0  readonly               off                    default
tank0  zoned                  off                    default
tank0  snapdir                hidden                 default
tank0  aclmode                passthrough            default
tank0  aclinherit             restricted             default
tank0  canmount               on                     default
tank0  xattr                  on                     default
tank0  copies                 1                      default
tank0  version                5                      -
tank0  utf8only               off                    -
tank0  normalization          none                   -
tank0  casesensitivity        sensitive              -
tank0  vscan                  off                    default
tank0  nbmand                 off                    default
tank0  sharesmb               off                    default
tank0  refquota               none                   default
tank0  refreservation         none                   default
tank0  primarycache           all                    default
tank0  secondarycache         all                    default
tank0  usedbysnapshots        0                      -
tank0  usedbydataset          5.76G                  -
tank0  usedbychildren         5.34T                  -
tank0  usedbyrefreservation   0                      -
tank0  logbias                latency                default
tank0  dedup                  off                    default
tank0  mlslabel               none                   default
tank0  sync                   standard               default
tank0  refcompressratio       1.00x                  -
tank0  written                5.76G                  -
tank0  logicalused            2.65T                  -
tank0  logicalreferenced      5.76G                  -
tank0  filesystem_limit       none                   default
tank0  snapshot_limit         none                   default
tank0  filesystem_count       none                   default
tank0  snapshot_count         none                   default
tank0  snapdev                hidden                 default
tank0  com.apple.browse       on                     default
tank0  com.apple.ignoreowner  off                    default
tank0  com.apple.mimic_hfs    off                    default
tank0  shareafp               off                    default
tank0  redundant_metadata     all                    default
tank0  overlay                off                    default
tank0  encryption             off                    default
tank0  keylocation            none                   default
tank0  keyformat              none                   default
tank0  pbkdf2iters            0                      default


Likewise with the dataset:

Code: Select all
NAME                 PROPERTY               VALUE                         SOURCE
tank0/ArchivesLocal  type                   filesystem                    -
tank0/ArchivesLocal  creation               Sat Sep 22  7:19 2018         -
tank0/ArchivesLocal  used                   242G                          -
tank0/ArchivesLocal  available              2.26T                         -
tank0/ArchivesLocal  referenced             1.25M                         -
tank0/ArchivesLocal  compressratio          1.00x                         -
tank0/ArchivesLocal  mounted                yes                           -
tank0/ArchivesLocal  quota                  2.50T                         local
tank0/ArchivesLocal  reservation            none                          default
tank0/ArchivesLocal  recordsize             128K                          default
tank0/ArchivesLocal  mountpoint             /Volumes/tank0/ArchivesLocal  default
tank0/ArchivesLocal  sharenfs               off                           default
tank0/ArchivesLocal  checksum               on                            default
tank0/ArchivesLocal  compression            off                           default
tank0/ArchivesLocal  atime                  on                            default
tank0/ArchivesLocal  devices                on                            default
tank0/ArchivesLocal  exec                   on                            default
tank0/ArchivesLocal  setuid                 on                            default
tank0/ArchivesLocal  readonly               off                           default
tank0/ArchivesLocal  zoned                  off                           default
tank0/ArchivesLocal  snapdir                hidden                        default
tank0/ArchivesLocal  aclmode                passthrough                   default
tank0/ArchivesLocal  aclinherit             restricted                    default
tank0/ArchivesLocal  canmount               on                            default
tank0/ArchivesLocal  xattr                  on                            default
tank0/ArchivesLocal  copies                 1                             default
tank0/ArchivesLocal  version                5                             -
tank0/ArchivesLocal  utf8only               off                           -
tank0/ArchivesLocal  normalization          none                          -
tank0/ArchivesLocal  casesensitivity        sensitive                     -
tank0/ArchivesLocal  vscan                  off                           default
tank0/ArchivesLocal  nbmand                 off                           default
tank0/ArchivesLocal  sharesmb               off                           default
tank0/ArchivesLocal  refquota               none                          default
tank0/ArchivesLocal  refreservation         none                          default
tank0/ArchivesLocal  primarycache           all                           default
tank0/ArchivesLocal  secondarycache         all                           default
tank0/ArchivesLocal  usedbysnapshots        0                             -
tank0/ArchivesLocal  usedbydataset          1.25M                         -
tank0/ArchivesLocal  usedbychildren         242G                          -
tank0/ArchivesLocal  usedbyrefreservation   0                             -
tank0/ArchivesLocal  logbias                latency                       default
tank0/ArchivesLocal  dedup                  off                           default
tank0/ArchivesLocal  mlslabel               none                          default
tank0/ArchivesLocal  sync                   standard                      default
tank0/ArchivesLocal  refcompressratio       1.00x                         -
tank0/ArchivesLocal  written                1.25M                         -
tank0/ArchivesLocal  logicalused            241G                          -
tank0/ArchivesLocal  logicalreferenced      1.20M                         -
tank0/ArchivesLocal  filesystem_limit       none                          default
tank0/ArchivesLocal  snapshot_limit         none                          default
tank0/ArchivesLocal  filesystem_count       none                          default
tank0/ArchivesLocal  snapshot_count         none                          default
tank0/ArchivesLocal  snapdev                hidden                        default
tank0/ArchivesLocal  com.apple.browse       on                            default
tank0/ArchivesLocal  com.apple.ignoreowner  off                           default
tank0/ArchivesLocal  com.apple.mimic_hfs    off                           default
tank0/ArchivesLocal  shareafp               off                           default
tank0/ArchivesLocal  redundant_metadata     all                           default
tank0/ArchivesLocal  overlay                off                           default
tank0/ArchivesLocal  encryption             off                           default
tank0/ArchivesLocal  keylocation            none                          default
tank0/ArchivesLocal  keyformat              none                          default
tank0/ArchivesLocal  pbkdf2iters            0                             default


The pool also has a 3.5T ZVOL formatted with HFS for receiving Time Machine backups. Although I didn't capture specifics, I had at one time deleted a rather large amount of data (sparsebundles, I believe) and that seemed to process quite quickly.
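
I don't have my notes for that one, but it would have been created and formatted along these lines (names and the disk identifier are placeholders, so treat this as approximate):

Code: Select all
sudo zfs create -V 3.5T tank0/timemachine
sudo diskutil eraseDisk JHFS+ TimeMachine /dev/diskN    # diskN = whatever device the zvol appears as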

The one other top-level dataset I have receives snapshots from my tank1 pool.
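
That replication is just plain snapshot send/receive, along the lines of (dataset and snapshot names here are placeholders):

Code: Select all
sudo zfs snapshot -r tank1@backup
sudo zfs send -R tank1@backup | sudo zfs receive -F tank0/tank1-backup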

The data I deleted that took so long were the user folders I'd been trying to rsync (you may have seen my other posts), although I skip ~/Library/Caches in my copying. So, some user documents, a lot of email messages, A LOT of photos in the Photos library, quite a bit of music, and some movies. Unfortunately, I didn't analyze exactly what was there before I started the process, but that should give you an idea.
tim.rohrer
 
Posts: 29
Joined: Tue Jul 24, 2018 6:49 pm

Re: Are these speeds out of the norm?

Postby leeb » Mon Sep 24, 2018 5:08 pm

tim.rohrer wrote:Thanks leeb.

Thanks for the details as well!
Code: Select all
spl.kext_version: 1.7.2-1
zfs.kext_version: 1.7.2-1

I probably sound like a broken record given I just said it in another thread, but seriously, start by upgrading to 1.7.4. There have been significant changes in the 6 months since 1.7.2 was released, and you're on 10.13 anyway.

Here is my ctl file:

Nothing radical looking, though I actually reverted to pure vanilla during the last few updates, despite having 64 GiB of memory, to see how the newer memory balancers worked. I don't have enough uptime yet, though, to really say how 1.7.4 works there.

tank0 (from where I did the large delete that took 32 hours) is a RAIDZ1 pool using a 4-bay enclosure (OWC Mercury Rack Pro) connected via USB3. The disks, 4x3TB, are Hitachi Deskstar 7K3000s. Default settings were used in setting up the pool; some datasets have quotas, but that is about all.

I assume you're using it in dock mode, so none of its RAID5 hardware is involved? I assume it's reliable enough there, but I have no experience with that one in particular.
Code: Select all
NAME   PROPERTY               VALUE                  SOURCE
[...]

So looking at this overall, it's obviously a vanilla create. I assume the Deskstars do nothing funky that would trick autodetection, so the ashift value should be fine. Having said that, a few things to consider about the defaults FWIW, some of them performance-related, though I don't think they'd account for your observed behavior. I've chopped out the irrelevant stuff:
Code: Select all
[...]
tank0  compression            off                    default
tank0  atime                  on                     default
[...]
tank0  utf8only               off                    -
tank0  normalization          none                   -
tank0  casesensitivity        sensitive              -

So, in order:
Compression: with OpenZFS and the integration of LZ4, you should basically always enable compression outside of specific tuning situations where you know you have a good reason not to. LZ4 is extremely fast and also bails out quickly on incompressible data. Old advice about enabling/disabling per filesystem is niche or obsolete now; just enable LZ4 all the time. Compression can increase performance by trading a small amount of CPU (cheap and abundant) for disk/subsystem latency/bandwidth (way more restricted); worst case it should do no harm. And of course even a 10 or 20% space savings isn't nothing over enough terabytes, particularly since it's generally a good idea to keep a pool from getting more than 75%-85% full.
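
Enabling it is a one-liner and is inherited by the child datasets; note it only affects newly written blocks, so existing data stays uncompressed until rewritten:

Code: Select all
sudo zfs set compression=lz4 tank0
zfs get -r compression tank0    # confirm the datasets inherit it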

atime: This can impose a significant write performance penalty and serves only a few unusual and niche old unix apps these days (are you running a mail server, maybe?). If relatime were implemented under O3X it might be worth keeping that, but between on and off the normal recommendation is off.
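
Also a one-liner if you decide to turn it off:

Code: Select all
sudo zfs set atime=off tank0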

utf8/normalization: Mac OS X under HFS always traditionally used normalization formD, and I found subtle but significant errors in the past running ZFS without it (way back in the ZEVO days, actually). APFS mixed that up, but it is still pure UTF-8. I don't know how strictly it's required now, but I've stuck to just running formD myself for native usage (utf8only will turn on automatically if normalization is used). Not relevant to a zvol.

casesensitivity: though it's always been possible to format case-sensitive under OS X/macOS, the default is insensitive, and some apps have screwy behavior under case sensitivity, if only because it's non-default (Adobe is particularly scummy/infamous in this regard). Since ZFS supports this per-FS anyway, under the Mac I usually just match the default and then enable case sensitivity specifically for code repos and the like. You may have your own reasons there though! Obviously also not relevant to a zvol; whatever FS is formatted on top will be dealing with that.
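
One caveat: unlike compression and atime, normalization, utf8only and casesensitivity can only be set at creation time, so they'd apply to new datasets rather than to tank0 retroactively, e.g. (the dataset name is just an example):

Code: Select all
sudo zfs create -o normalization=formD -o casesensitivity=insensitive tank0/newdata
zfs get normalization,utf8only,casesensitivity tank0/newdata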

The data I deleted that took so long were the user folders I'd been trying to rsync (you may have seen my other posts), although I skip ~/Library/Caches in my copying. So, some user documents, a lot of email messages, A LOT of photos in the Photos library, quite a bit of music, and some movies. Unfortunately, I didn't analyze exactly what was there before I started the process, but that should give you an idea.

It was a while ago and under an older version of O3X, but when I used rsync to bring over all my data from my old system I do remember that my Library folder specifically took an inordinately long time vs everything else, though it was a one-time issue. Even ignoring caches there are tons and tons of tiny little files in there (Mail stores everything that way for example), and perhaps older versions at least (maybe improved? haven't stress tested that aspect yet under the most recent ones) have trouble in that area. Unfortunately I don't have hard numbers for you, but I will say yeah, what you describe does in fact ring a bell :).

I'm actually running my main account with the home folder native under a somewhat complex multipool setup. I haven't experimented in a year with comparing it to a formatted zvol or the like, I'm sorry to say. However, I'll see if I can't find some time to experiment with this a bit; I've been meaning to set back up some ongoing rsync jobs anyway, since I'm running encryption and won't be able to do any send/receives until either O3X 1.8 or I get my other system upgraded to something later than 10.12 as well.
leeb
 
Posts: 43
Joined: Thu May 15, 2014 12:10 pm

Re: Are these speeds out of the norm?

Postby tim.rohrer » Mon Sep 24, 2018 6:13 pm

Great information!

The server is now only a file server and a host for VMs (these are in another pool using SSDs).

I'll start by upgrading to 1.7.4. I'm searching for directions, as this will be my first upgrade. I'm not sure if this is simply a reinstall using the new version or if I need to do anything special with the pools.

Then I'll go through the rest of your recommendations, make the appropriate changes, and run some more tests.

To answer your question about the mode for the enclosure, you're correct: it is in bare-disk/dock mode, no RAID involved.

As far as the ashift goes, I had confirmed it earlier. Yes, it appears zfs autodetected correctly, as the disks have 512-byte sectors.
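
(For anyone else wanting to double-check, the detected value can be read back with zdb, e.g.:)

Code: Select all
sudo zdb -C tank0 | grep ashift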

Thanks!
tim.rohrer
 
Posts: 29
Joined: Tue Jul 24, 2018 6:49 pm

Re: Are these speeds out of the norm?

Postby leeb » Tue Sep 25, 2018 5:22 am

tim.rohrer wrote:I'll start by upgrading to 1.7.4. I'm searching for directions, as this will be my first upgrade. I'm not sure if this is simply a reinstall using the new version or if I need to do anything special with the pools.

Should require nothing special, at least nothing that it won't warn you about. The main requirement is that you must zpool export all pools before an upgrade; however, the installer has a preflight script to check that and will refuse to proceed if you forget, so no footgun to worry about there.
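
In other words, before running the 1.7.4 installer it's roughly just:

Code: Select all
sudo zpool export -a    # export every imported pool
# ...run the O3X 1.7.4 installer...
sudo zpool import -a    # bring the pools back afterwards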

After you upgrade and reimport, a zpool status would let you know that upgrading your pools is required to use any new feature flags, but that's optional and only if you want them. The main tradeoff to know about is that many feature flags permanently change the on-disk format and make for irreversible upgrades: once you start using them you can't import the pool with a version of OpenZFS that doesn't support that flag; the pool would have to be destroyed and recreated. This is primarily an issue if you're using pools across multiple platforms/implementations (illumos/FreeBSD/ZoL/O3X), which of course can be out of step sometimes (it's not going to mess with just using something over NFS or the like). Use extra caution if you use ZoL. Ultimately, though, the various features are documented in zpool-features(5), so you can just read about them there, and the various platforms document what they've implemented. The main OpenZFS wiki appeared to be out of date last I checked, so I guess go to the first party if it's a concern for you.
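
If you do decide you want the new flags at some point, it looks like this (the per-pool upgrade is the irreversible step):

Code: Select all
zpool status tank0        # notes if newer feature flags are available
zpool upgrade             # list pools not using all supported features
sudo zpool upgrade tank0  # enable all supported feature flags on tank0 (one-way)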
leeb
 
Posts: 43
Joined: Thu May 15, 2014 12:10 pm

Re: Are these speeds out of the norm?

Postby jdwhite » Sun Oct 07, 2018 11:14 am

I'm also seeing similar behavior when unlink(2)ing files. When removing a tree of files with rm -rfv {dir}, the output will scroll 20-30 filenames by in the first second but then slow to 3-4 per second. Sometimes there will be a sub-second burst of multiple lines of output and then it's back to 3-4 lines per second. I assume that some buffer is being filled, drained, filled, etc., but what I don't understand is why I can only unlink 3-4 files per second.
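
(One easy way to put a number on that rate is to pipe the verbose output through pv in line mode; pv comes from Homebrew rather than stock macOS:)

Code: Select all
rm -rfv pkgsrc | pv -l -i 1 -r > /dev/null    # reports deleted files (lines) per second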

Mac mini late 2014, 16GB RAM, macOS Mojave 10.14.0, O3X 1.7.4
Disks are six WD Red WD30EFRX (4K physical sectors) in two OWC 4-bay 3.5" Thunderbolt 2 enclosures chained from the mini.
All disks passed the SMART health check.

My test directory is from the NetBSD pkgsrc distribution (https://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc.tar.xz). It consists of approximately 166,000 files, most of which are well under 100K in size.
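
The test tree can be recreated with roughly the following (assuming your tar can read xz, which recent stock bsdtar does):

Code: Select all
curl -LO https://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc.tar.xz
tar -xJf pkgsrc.tar.xz -C /scratch    # extracts ~166,000 small files into the test dataset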

Pool details:
History for 'spool':
2015-02-04.20:05:42 zpool create -o ashift=12 -m none spool raidz2 disk2 disk3 disk4 disk5 disk6 disk7

I'm using a separate dataset, spool/scratch, so I can tweak some per-dataset parameters. Currently they are:
Code: Select all
NAME           PROPERTY               VALUE                  SOURCE
spool/scratch  type                   filesystem             -
spool/scratch  creation               Sat Apr 16 12:33 2016  -
spool/scratch  used                   1.26G                  -
spool/scratch  available              2.81T                  -
spool/scratch  referenced             1.26G                  -
spool/scratch  compressratio          1.75x                  -
spool/scratch  mounted                yes                    -
spool/scratch  quota                  none                   default
spool/scratch  reservation            none                   default
spool/scratch  recordsize             128K                   default
spool/scratch  mountpoint             /scratch               local
spool/scratch  sharenfs               off                    default
spool/scratch  checksum               on                     default
spool/scratch  compression            lz4                    local
spool/scratch  atime                  off                    local
spool/scratch  devices                on                     default
spool/scratch  exec                   on                     default
spool/scratch  setuid                 on                     default
spool/scratch  readonly               off                    default
spool/scratch  zoned                  off                    default
spool/scratch  snapdir                hidden                 default
spool/scratch  aclmode                passthrough            default
spool/scratch  aclinherit             restricted             default
spool/scratch  canmount               on                     default
spool/scratch  xattr                  on                     default
spool/scratch  copies                 1                      default
spool/scratch  version                5                      -
spool/scratch  utf8only               off                    -
spool/scratch  normalization          none                   -
spool/scratch  casesensitivity        sensitive              -
spool/scratch  vscan                  off                    default
spool/scratch  nbmand                 off                    default
spool/scratch  sharesmb               off                    default
spool/scratch  refquota               none                   default
spool/scratch  refreservation         none                   default
spool/scratch  primarycache           all                    default
spool/scratch  secondarycache         all                    default
spool/scratch  usedbysnapshots        0                      -
spool/scratch  usedbydataset          1.26G                  -
spool/scratch  usedbychildren         0                      -
spool/scratch  usedbyrefreservation   0                      -
spool/scratch  logbias                latency                default
spool/scratch  dedup                  off                    default
spool/scratch  mlslabel               none                   default
spool/scratch  sync                   standard               default
spool/scratch  refcompressratio       1.75x                  -
spool/scratch  written                1.26G                  -
spool/scratch  logicalused            404M                   -
spool/scratch  logicalreferenced      404M                   -
spool/scratch  filesystem_limit       none                   default
spool/scratch  snapshot_limit         none                   default
spool/scratch  filesystem_count       none                   default
spool/scratch  snapshot_count         none                   default
spool/scratch  snapdev                hidden                 default
spool/scratch  com.apple.browse       on                     default
spool/scratch  com.apple.ignoreowner  off                    default
spool/scratch  com.apple.mimic_hfs    off                    default
spool/scratch  com.apple.devdisk      poolonly               default
spool/scratch  shareafp               off                    default
spool/scratch  redundant_metadata     all                    default
spool/scratch  overlay                off                    default
spool/scratch  encryption             off                    default
spool/scratch  keylocation            none                   default
spool/scratch  keyformat              none                   default
spool/scratch  pbkdf2iters            0                      default


Kernel sysctls:

Code: Select all
$ sysctl -a | egrep '(zfs|spl)'
vfs.generic.nfs.client.access_dotzfs: 1
spl.kext_version: 1.7.4-1
kstat.vmem.vmem.spl_default_arena_parent.mem_inuse: 0
kstat.vmem.vmem.spl_default_arena_parent.mem_import: 0
kstat.vmem.vmem.spl_default_arena_parent.mem_total: 0
kstat.vmem.vmem.spl_default_arena_parent.vmem_source: 0
kstat.vmem.vmem.spl_default_arena_parent.alloc: 0
kstat.vmem.vmem.spl_default_arena_parent.free: 0
kstat.vmem.vmem.spl_default_arena_parent.wait: 0
kstat.vmem.vmem.spl_default_arena_parent.fail: 0
kstat.vmem.vmem.spl_default_arena_parent.lookup: 0
kstat.vmem.vmem.spl_default_arena_parent.search: 0
kstat.vmem.vmem.spl_default_arena_parent.populate_fail: 0
kstat.vmem.vmem.spl_default_arena_parent.contains: 0
kstat.vmem.vmem.spl_default_arena_parent.contains_search: 0
kstat.vmem.vmem.spl_default_arena_parent.parent_alloc: 0
kstat.vmem.vmem.spl_default_arena_parent.parent_free: 0
kstat.vmem.vmem.spl_default_arena_parent.threads_waiting: 0
kstat.vmem.vmem.spl_default_arena_parent.excess: 0
kstat.vmem.vmem.spl_default_arena.mem_inuse: 1334706839
kstat.vmem.vmem.spl_default_arena.mem_import: 1325400064
kstat.vmem.vmem.spl_default_arena.mem_total: 1342177280
kstat.vmem.vmem.spl_default_arena.vmem_source: 1
kstat.vmem.vmem.spl_default_arena.alloc: 7682
kstat.vmem.vmem.spl_default_arena.free: 38
kstat.vmem.vmem.spl_default_arena.wait: 0
kstat.vmem.vmem.spl_default_arena.fail: 0
kstat.vmem.vmem.spl_default_arena.lookup: 35
kstat.vmem.vmem.spl_default_arena.search: 0
kstat.vmem.vmem.spl_default_arena.populate_fail: 0
kstat.vmem.vmem.spl_default_arena.contains: 0
kstat.vmem.vmem.spl_default_arena.contains_search: 0
kstat.vmem.vmem.spl_default_arena.parent_alloc: 16
kstat.vmem.vmem.spl_default_arena.parent_free: 0
kstat.vmem.vmem.spl_default_arena.threads_waiting: 0
kstat.vmem.vmem.spl_default_arena.excess: 0
kstat.vmem.vmem.zfs_qcache.mem_inuse: 806572032
kstat.vmem.vmem.zfs_qcache.mem_import: 806572032
kstat.vmem.vmem.zfs_qcache.mem_total: 806572032
kstat.vmem.vmem.zfs_qcache.vmem_source: 16
kstat.vmem.vmem.zfs_qcache.alloc: 254605
kstat.vmem.vmem.zfs_qcache.free: 243994
kstat.vmem.vmem.zfs_qcache.wait: 0
kstat.vmem.vmem.zfs_qcache.fail: 0
kstat.vmem.vmem.zfs_qcache.lookup: 41729
kstat.vmem.vmem.zfs_qcache.search: 0
kstat.vmem.vmem.zfs_qcache.populate_fail: 0
kstat.vmem.vmem.zfs_qcache.contains: 0
kstat.vmem.vmem.zfs_qcache.contains_search: 0
kstat.vmem.vmem.zfs_qcache.parent_alloc: 442883
kstat.vmem.vmem.zfs_qcache.parent_free: 243994
kstat.vmem.vmem.zfs_qcache.threads_waiting: 0
kstat.vmem.vmem.zfs_qcache.excess: 0
kstat.vmem.vmem.zfs_file_data.mem_inuse: 166739968
kstat.vmem.vmem.zfs_file_data.mem_import: 166739968
kstat.vmem.vmem.zfs_file_data.mem_total: 166739968
kstat.vmem.vmem.zfs_file_data.vmem_source: 31
kstat.vmem.vmem.zfs_file_data.alloc: 406413
kstat.vmem.vmem.zfs_file_data.free: 404032
kstat.vmem.vmem.zfs_file_data.wait: 0
kstat.vmem.vmem.zfs_file_data.fail: 0
kstat.vmem.vmem.zfs_file_data.lookup: 106974
kstat.vmem.vmem.zfs_file_data.search: 0
kstat.vmem.vmem.zfs_file_data.populate_fail: 0
kstat.vmem.vmem.zfs_file_data.contains: 0
kstat.vmem.vmem.zfs_file_data.contains_search: 0
kstat.vmem.vmem.zfs_file_data.parent_alloc: 494715
kstat.vmem.vmem.zfs_file_data.parent_free: 404032
kstat.vmem.vmem.zfs_file_data.threads_waiting: 0
kstat.vmem.vmem.zfs_file_data.excess: 0
kstat.vmem.vmem.zfs_metadata.mem_inuse: 560508928
kstat.vmem.vmem.zfs_metadata.mem_import: 560508928
kstat.vmem.vmem.zfs_metadata.mem_total: 560508928
kstat.vmem.vmem.zfs_metadata.vmem_source: 31
kstat.vmem.vmem.zfs_metadata.alloc: 951414
kstat.vmem.vmem.zfs_metadata.free: 892891
kstat.vmem.vmem.zfs_metadata.wait: 0
kstat.vmem.vmem.zfs_metadata.fail: 0
kstat.vmem.vmem.zfs_metadata.lookup: 346527
kstat.vmem.vmem.zfs_metadata.search: 0
kstat.vmem.vmem.zfs_metadata.populate_fail: 0
kstat.vmem.vmem.zfs_metadata.contains: 0
kstat.vmem.vmem.zfs_metadata.contains_search: 0
kstat.vmem.vmem.zfs_metadata.parent_alloc: 1051390
kstat.vmem.vmem.zfs_metadata.parent_free: 892891
kstat.vmem.vmem.zfs_metadata.threads_waiting: 0
kstat.vmem.vmem.zfs_metadata.excess: 0
kstat.unix.kmem_cache.zfs_qcache_4096.buf_size: 4096
kstat.unix.kmem_cache.zfs_qcache_4096.align: 4096
kstat.unix.kmem_cache.zfs_qcache_4096.chunk_size: 4096
kstat.unix.kmem_cache.zfs_qcache_4096.slab_size: 65536
kstat.unix.kmem_cache.zfs_qcache_4096.alloc: 954043
kstat.unix.kmem_cache.zfs_qcache_4096.alloc_fail: 0
kstat.unix.kmem_cache.zfs_qcache_4096.free: 1557093
kstat.unix.kmem_cache.zfs_qcache_4096.depot_alloc: 223719
kstat.unix.kmem_cache.zfs_qcache_4096.depot_free: 230846
kstat.unix.kmem_cache.zfs_qcache_4096.depot_contention: 21
kstat.unix.kmem_cache.zfs_qcache_4096.slab_alloc: 680517
kstat.unix.kmem_cache.zfs_qcache_4096.slab_free: 638220
kstat.unix.kmem_cache.zfs_qcache_4096.buf_constructed: 0
kstat.unix.kmem_cache.zfs_qcache_4096.buf_avail: 18167
kstat.unix.kmem_cache.zfs_qcache_4096.buf_inuse: 42297
kstat.unix.kmem_cache.zfs_qcache_4096.buf_total: 60464
kstat.unix.kmem_cache.zfs_qcache_4096.buf_max: 221632
kstat.unix.kmem_cache.zfs_qcache_4096.slab_create: 17193
kstat.unix.kmem_cache.zfs_qcache_4096.slab_destroy: 13414
kstat.unix.kmem_cache.zfs_qcache_4096.vmem_source: 31
kstat.unix.kmem_cache.zfs_qcache_4096.hash_size: 131072
kstat.unix.kmem_cache.zfs_qcache_4096.hash_lookup_depth: 132784
kstat.unix.kmem_cache.zfs_qcache_4096.hash_rescale: 3
kstat.unix.kmem_cache.zfs_qcache_4096.full_magazines: 0
kstat.unix.kmem_cache.zfs_qcache_4096.empty_magazines: 0
kstat.unix.kmem_cache.zfs_qcache_4096.magazine_size: 3
kstat.unix.kmem_cache.zfs_qcache_4096.reap: 177
kstat.unix.kmem_cache.zfs_qcache_4096.defrag: 0
kstat.unix.kmem_cache.zfs_qcache_4096.scan: 0
kstat.unix.kmem_cache.zfs_qcache_4096.move_callbacks: 0
kstat.unix.kmem_cache.zfs_qcache_4096.move_yes: 0
kstat.unix.kmem_cache.zfs_qcache_4096.move_no: 0
kstat.unix.kmem_cache.zfs_qcache_4096.move_later: 0
kstat.unix.kmem_cache.zfs_qcache_4096.move_dont_need: 0
kstat.unix.kmem_cache.zfs_qcache_4096.move_dont_know: 0
kstat.unix.kmem_cache.zfs_qcache_4096.move_hunt_found: 0
kstat.unix.kmem_cache.zfs_qcache_4096.move_slabs_freed: 0
kstat.unix.kmem_cache.zfs_qcache_4096.move_reclaimable: 0
kstat.unix.kmem_cache.zfs_qcache_4096.no_vba_success: 0
kstat.unix.kmem_cache.zfs_qcache_4096.no_vba_fail: 0
kstat.unix.kmem_cache.zfs_qcache_4096.arc_no_grow_set: 0
kstat.unix.kmem_cache.zfs_qcache_4096.arc_no_grow: 0
kstat.unix.kmem_cache.zfs_qcache_8192.buf_size: 8192
kstat.unix.kmem_cache.zfs_qcache_8192.align: 4096
kstat.unix.kmem_cache.zfs_qcache_8192.chunk_size: 8192
kstat.unix.kmem_cache.zfs_qcache_8192.slab_size: 65536
kstat.unix.kmem_cache.zfs_qcache_8192.alloc: 130057
kstat.unix.kmem_cache.zfs_qcache_8192.alloc_fail: 0
kstat.unix.kmem_cache.zfs_qcache_8192.free: 192363
kstat.unix.kmem_cache.zfs_qcache_8192.depot_alloc: 62398
kstat.unix.kmem_cache.zfs_qcache_8192.depot_free: 66523
kstat.unix.kmem_cache.zfs_qcache_8192.depot_contention: 6
kstat.unix.kmem_cache.zfs_qcache_8192.slab_alloc: 58351
kstat.unix.kmem_cache.zfs_qcache_8192.slab_free: 58266
kstat.unix.kmem_cache.zfs_qcache_8192.buf_constructed: 0
kstat.unix.kmem_cache.zfs_qcache_8192.buf_avail: 115
kstat.unix.kmem_cache.zfs_qcache_8192.buf_inuse: 85
kstat.unix.kmem_cache.zfs_qcache_8192.buf_total: 200
kstat.unix.kmem_cache.zfs_qcache_8192.buf_max: 2640
kstat.unix.kmem_cache.zfs_qcache_8192.slab_create: 2752
kstat.unix.kmem_cache.zfs_qcache_8192.slab_destroy: 2727
kstat.unix.kmem_cache.zfs_qcache_8192.vmem_source: 31
kstat.unix.kmem_cache.zfs_qcache_8192.hash_size: 256
kstat.unix.kmem_cache.zfs_qcache_8192.hash_lookup_depth: 19020
kstat.unix.kmem_cache.zfs_qcache_8192.hash_rescale: 10
kstat.unix.kmem_cache.zfs_qcache_8192.full_magazines: 0
kstat.unix.kmem_cache.zfs_qcache_8192.empty_magazines: 0
kstat.unix.kmem_cache.zfs_qcache_8192.magazine_size: 1
kstat.unix.kmem_cache.zfs_qcache_8192.reap: 177
kstat.unix.kmem_cache.zfs_qcache_8192.defrag: 0
kstat.unix.kmem_cache.zfs_qcache_8192.scan: 0
kstat.unix.kmem_cache.zfs_qcache_8192.move_callbacks: 0
kstat.unix.kmem_cache.zfs_qcache_8192.move_yes: 0
kstat.unix.kmem_cache.zfs_qcache_8192.move_no: 0
kstat.unix.kmem_cache.zfs_qcache_8192.move_later: 0
kstat.unix.kmem_cache.zfs_qcache_8192.move_dont_need: 0
kstat.unix.kmem_cache.zfs_qcache_8192.move_dont_know: 0
kstat.unix.kmem_cache.zfs_qcache_8192.move_hunt_found: 0
kstat.unix.kmem_cache.zfs_qcache_8192.move_slabs_freed: 0
kstat.unix.kmem_cache.zfs_qcache_8192.move_reclaimable: 0
kstat.unix.kmem_cache.zfs_qcache_8192.no_vba_success: 0
kstat.unix.kmem_cache.zfs_qcache_8192.no_vba_fail: 0
kstat.unix.kmem_cache.zfs_qcache_8192.arc_no_grow_set: 0
kstat.unix.kmem_cache.zfs_qcache_8192.arc_no_grow: 0
kstat.unix.kmem_cache.zfs_qcache_12288.buf_size: 12288
kstat.unix.kmem_cache.zfs_qcache_12288.align: 4096
kstat.unix.kmem_cache.zfs_qcache_12288.chunk_size: 12288
kstat.unix.kmem_cache.zfs_qcache_12288.slab_size: 65536
kstat.unix.kmem_cache.zfs_qcache_12288.alloc: 37664
kstat.unix.kmem_cache.zfs_qcache_12288.alloc_fail: 0
kstat.unix.kmem_cache.zfs_qcache_12288.free: 55799
kstat.unix.kmem_cache.zfs_qcache_12288.depot_alloc: 18296
kstat.unix.kmem_cache.zfs_qcache_12288.depot_free: 19357
kstat.unix.kmem_cache.zfs_qcache_12288.depot_contention: 1
kstat.unix.kmem_cache.zfs_qcache_12288.slab_alloc: 17404
kstat.unix.kmem_cache.zfs_qcache_12288.slab_free: 17239
kstat.unix.kmem_cache.zfs_qcache_12288.buf_constructed: 0
kstat.unix.kmem_cache.zfs_qcache_12288.buf_avail: 290
kstat.unix.kmem_cache.zfs_qcache_12288.buf_inuse: 165
kstat.unix.kmem_cache.zfs_qcache_12288.buf_total: 455
kstat.unix.kmem_cache.zfs_qcache_12288.buf_max: 1910
kstat.unix.kmem_cache.zfs_qcache_12288.slab_create: 2300
kstat.unix.kmem_cache.zfs_qcache_12288.slab_destroy: 2209
kstat.unix.kmem_cache.zfs_qcache_12288.vmem_source: 31
kstat.unix.kmem_cache.zfs_qcache_12288.hash_size: 512
kstat.unix.kmem_cache.zfs_qcache_12288.hash_lookup_depth: 8358
kstat.unix.kmem_cache.zfs_qcache_12288.hash_rescale: 16
kstat.unix.kmem_cache.zfs_qcache_12288.full_magazines: 0
kstat.unix.kmem_cache.zfs_qcache_12288.empty_magazines: 0
kstat.unix.kmem_cache.zfs_qcache_12288.magazine_size: 1
kstat.unix.kmem_cache.zfs_qcache_12288.reap: 177
kstat.unix.kmem_cache.zfs_qcache_12288.defrag: 0
kstat.unix.kmem_cache.zfs_qcache_12288.scan: 0
kstat.unix.kmem_cache.zfs_qcache_12288.move_callbacks: 0
kstat.unix.kmem_cache.zfs_qcache_12288.move_yes: 0
kstat.unix.kmem_cache.zfs_qcache_12288.move_no: 0
kstat.unix.kmem_cache.zfs_qcache_12288.move_later: 0
kstat.unix.kmem_cache.zfs_qcache_12288.move_dont_need: 0
kstat.unix.kmem_cache.zfs_qcache_12288.move_dont_know: 0
kstat.unix.kmem_cache.zfs_qcache_12288.move_hunt_found: 0
kstat.unix.kmem_cache.zfs_qcache_12288.move_slabs_freed: 0
kstat.unix.kmem_cache.zfs_qcache_12288.move_reclaimable: 0
kstat.unix.kmem_cache.zfs_qcache_12288.no_vba_success: 0
kstat.unix.kmem_cache.zfs_qcache_12288.no_vba_fail: 0
kstat.unix.kmem_cache.zfs_qcache_12288.arc_no_grow_set: 0
kstat.unix.kmem_cache.zfs_qcache_12288.arc_no_grow: 0
kstat.unix.kmem_cache.zfs_qcache_16384.buf_size: 16384
kstat.unix.kmem_cache.zfs_qcache_16384.align: 4096
kstat.unix.kmem_cache.zfs_qcache_16384.chunk_size: 16384
kstat.unix.kmem_cache.zfs_qcache_16384.slab_size: 65536
kstat.unix.kmem_cache.zfs_qcache_16384.alloc: 517572
kstat.unix.kmem_cache.zfs_qcache_16384.alloc_fail: 0
kstat.unix.kmem_cache.zfs_qcache_16384.free: 834254
kstat.unix.kmem_cache.zfs_qcache_16384.depot_alloc: 167127
kstat.unix.kmem_cache.zfs_qcache_16384.depot_free: 168948
kstat.unix.kmem_cache.zfs_qcache_16384.depot_contention: 12
kstat.unix.kmem_cache.zfs_qcache_16384.slab_alloc: 345905
kstat.unix.kmem_cache.zfs_qcache_16384.slab_free: 330383
kstat.unix.kmem_cache.zfs_qcache_16384.buf_constructed: 0
kstat.unix.kmem_cache.zfs_qcache_16384.buf_avail: 2
kstat.unix.kmem_cache.zfs_qcache_16384.buf_inuse: 15522
kstat.unix.kmem_cache.zfs_qcache_16384.buf_total: 15524
kstat.unix.kmem_cache.zfs_qcache_16384.buf_max: 134324
kstat.unix.kmem_cache.zfs_qcache_16384.slab_create: 42329
kstat.unix.kmem_cache.zfs_qcache_16384.slab_destroy: 38448
kstat.unix.kmem_cache.zfs_qcache_16384.vmem_source: 31
kstat.unix.kmem_cache.zfs_qcache_16384.hash_size: 8192
kstat.unix.kmem_cache.zfs_qcache_16384.hash_lookup_depth: 92060
kstat.unix.kmem_cache.zfs_qcache_16384.hash_rescale: 9
kstat.unix.kmem_cache.zfs_qcache_16384.full_magazines: 0
kstat.unix.kmem_cache.zfs_qcache_16384.empty_magazines: 0
kstat.unix.kmem_cache.zfs_qcache_16384.magazine_size: 3
kstat.unix.kmem_cache.zfs_qcache_16384.reap: 177
kstat.unix.kmem_cache.zfs_qcache_16384.defrag: 0
kstat.unix.kmem_cache.zfs_qcache_16384.scan: 0
kstat.unix.kmem_cache.zfs_qcache_16384.move_callbacks: 0
kstat.unix.kmem_cache.zfs_qcache_16384.move_yes: 0
kstat.unix.kmem_cache.zfs_qcache_16384.move_no: 0
kstat.unix.kmem_cache.zfs_qcache_16384.move_later: 0
kstat.unix.kmem_cache.zfs_qcache_16384.move_dont_need: 0
kstat.unix.kmem_cache.zfs_qcache_16384.move_dont_know: 0
kstat.unix.kmem_cache.zfs_qcache_16384.move_hunt_found: 0
kstat.unix.kmem_cache.zfs_qcache_16384.move_slabs_freed: 0
kstat.unix.kmem_cache.zfs_qcache_16384.move_reclaimable: 0
kstat.unix.kmem_cache.zfs_qcache_16384.no_vba_success: 0
kstat.unix.kmem_cache.zfs_qcache_16384.no_vba_fail: 0
kstat.unix.kmem_cache.zfs_qcache_16384.arc_no_grow_set: 0
kstat.unix.kmem_cache.zfs_qcache_16384.arc_no_grow: 0
kstat.unix.kmem_cache.zfs_znode_cache.buf_size: 1520
kstat.unix.kmem_cache.zfs_znode_cache.align: 8
kstat.unix.kmem_cache.zfs_znode_cache.chunk_size: 1520
kstat.unix.kmem_cache.zfs_znode_cache.slab_size: 12288
kstat.unix.kmem_cache.zfs_znode_cache.alloc: 13988730
kstat.unix.kmem_cache.zfs_znode_cache.alloc_fail: 0
kstat.unix.kmem_cache.zfs_znode_cache.free: 15192516
kstat.unix.kmem_cache.zfs_znode_cache.depot_alloc: 167548
kstat.unix.kmem_cache.zfs_znode_cache.depot_free: 196274
kstat.unix.kmem_cache.zfs_znode_cache.depot_contention: 34
kstat.unix.kmem_cache.zfs_znode_cache.slab_alloc: 1415077
kstat.unix.kmem_cache.zfs_znode_cache.slab_free: 1217371
kstat.unix.kmem_cache.zfs_znode_cache.buf_constructed: 155410
kstat.unix.kmem_cache.zfs_znode_cache.buf_avail: 164456
kstat.unix.kmem_cache.zfs_znode_cache.buf_inuse: 42296
kstat.unix.kmem_cache.zfs_znode_cache.buf_total: 206752
kstat.unix.kmem_cache.zfs_znode_cache.buf_max: 256232
kstat.unix.kmem_cache.zfs_znode_cache.slab_create: 141140
kstat.unix.kmem_cache.zfs_znode_cache.slab_destroy: 115296
kstat.unix.kmem_cache.zfs_znode_cache.vmem_source: 29
kstat.unix.kmem_cache.zfs_znode_cache.hash_size: 262144
kstat.unix.kmem_cache.zfs_znode_cache.hash_lookup_depth: 440815
kstat.unix.kmem_cache.zfs_znode_cache.hash_rescale: 10
kstat.unix.kmem_cache.zfs_znode_cache.full_magazines: 10354
kstat.unix.kmem_cache.zfs_znode_cache.empty_magazines: 0
kstat.unix.kmem_cache.zfs_znode_cache.magazine_size: 15
kstat.unix.kmem_cache.zfs_znode_cache.reap: 177
kstat.unix.kmem_cache.zfs_znode_cache.defrag: 0
kstat.unix.kmem_cache.zfs_znode_cache.scan: 0
kstat.unix.kmem_cache.zfs_znode_cache.move_callbacks: 0
kstat.unix.kmem_cache.zfs_znode_cache.move_yes: 0
kstat.unix.kmem_cache.zfs_znode_cache.move_no: 0
kstat.unix.kmem_cache.zfs_znode_cache.move_later: 0
kstat.unix.kmem_cache.zfs_znode_cache.move_dont_need: 0
kstat.unix.kmem_cache.zfs_znode_cache.move_dont_know: 0
kstat.unix.kmem_cache.zfs_znode_cache.move_hunt_found: 0
kstat.unix.kmem_cache.zfs_znode_cache.move_slabs_freed: 0
kstat.unix.kmem_cache.zfs_znode_cache.move_reclaimable: 0
kstat.unix.kmem_cache.zfs_znode_cache.no_vba_success: 0
kstat.unix.kmem_cache.zfs_znode_cache.no_vba_fail: 0
kstat.unix.kmem_cache.zfs_znode_cache.arc_no_grow_set: 0
kstat.unix.kmem_cache.zfs_znode_cache.arc_no_grow: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.priority: 81
kstat.unix.taskq_d.zfs_vn_rele_taskq.btasks: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.bexecuted: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.bmaxtasks: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.bnalloc: 32
kstat.unix.taskq_d.zfs_vn_rele_taskq.bnactive: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.btotaltime: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.hits: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.misses: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.overflows: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.tcreates: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.tdeaths: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.maxthreads: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.nomem: 4
kstat.unix.taskq_d.zfs_vn_rele_taskq.disptcreates: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.totaltime: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.nalloc: 0
kstat.unix.taskq_d.zfs_vn_rele_taskq.nfree: 0
kstat.spl.misc.spl_misc.os_mem_alloc: 11574181888
kstat.spl.misc.spl_misc.active_threads: 343
kstat.spl.misc.spl_misc.active_mutex: 2587318
kstat.spl.misc.spl_misc.active_rwlock: 2489166
kstat.spl.misc.spl_misc.active_tsd: 1582
kstat.spl.misc.spl_misc.spl_free_wake_count: 29730680
kstat.spl.misc.spl_misc.spl_spl_free: 793686016
kstat.spl.misc.spl_misc.spl_spl_free_manual_pressure: 0
kstat.spl.misc.spl_misc.spl_spl_free_fast_pressure: 0
kstat.spl.misc.spl_misc.spl_spl_free_delta_ema: 24576
kstat.spl.misc.spl_misc.spl_spl_free_negative_count: 164776
kstat.spl.misc.spl_misc.spl_osif_malloc_success: 61260
kstat.spl.misc.spl_misc.spl_osif_malloc_bytes: 19916128256
kstat.spl.misc.spl_misc.spl_osif_free: 24330
kstat.spl.misc.spl_misc.spl_osif_free_bytes: 8341946368
kstat.spl.misc.spl_misc.spl_bucket_non_pow2_allocs: 144377
kstat.spl.misc.spl_misc.vmem_unconditional_allocs: 253
kstat.spl.misc.spl_misc.vmem_unconditional_alloc_bytes: 127664128
kstat.spl.misc.spl_misc.vmem_conditional_allocs: 60991
kstat.spl.misc.spl_misc.vmem_conditional_alloc_bytes: 18463064064
kstat.spl.misc.spl_misc.vmem_conditional_alloc_deny: 8327
kstat.spl.misc.spl_misc.vmem_conditional_alloc_deny_bytes: 2244214784
kstat.spl.misc.spl_misc.spl_xat_success: 60985
kstat.spl.misc.spl_misc.spl_xat_late_success: 3
kstat.spl.misc.spl_misc.spl_xat_late_success_nosleep: 0
kstat.spl.misc.spl_misc.spl_xat_pressured: 0
kstat.spl.misc.spl_misc.spl_xat_bailed: 256
kstat.spl.misc.spl_misc.spl_xat_bailed_contended: 42
kstat.spl.misc.spl_misc.spl_xat_lastalloc: 296403017695635
kstat.spl.misc.spl_misc.spl_xat_lastfree: 313655173903713
kstat.spl.misc.spl_misc.spl_xat_forced: 253
kstat.spl.misc.spl_misc.spl_xat_sleep: 10937
kstat.spl.misc.spl_misc.spl_xat_late_deny: 0
kstat.spl.misc.spl_misc.spl_xat_no_waiters: 259
kstat.spl.misc.spl_misc.spl_xft_wait: 32
kstat.spl.misc.spl_misc.spl_vba_parent_memory_appeared: 20827
kstat.spl.misc.spl_misc.spl_vba_parent_memory_blocked: 14050
kstat.spl.misc.spl_misc.spl_vba_hiprio_blocked: 7808
kstat.spl.misc.spl_misc.spl_vba_cv_timeout: 2114
kstat.spl.misc.spl_misc.spl_vba_loop_timeout: 89
kstat.spl.misc.spl_misc.spl_vba_cv_timeout_blocked: 12149
kstat.spl.misc.spl_misc.spl_vba_loop_timeout_blocked: 2306
kstat.spl.misc.spl_misc.spl_vba_sleep: 149152
kstat.spl.misc.spl_misc.spl_vba_loop_entries: 20077
kstat.spl.misc.spl_misc.spl_tunable_large_span: 1048576
kstat.spl.misc.spl_misc.spl_tunable_small_span: 262144
kstat.spl.misc.spl_misc.spl_buckets_mem_free: 2124009472
kstat.spl.misc.spl_misc.spl_arc_no_grow_bits: 32
kstat.spl.misc.spl_misc.spl_arc_no_grow_count: 27850
kstat.spl.misc.spl_misc.spl_vmem_frag_max_walk: 1000
kstat.spl.misc.spl_misc.spl_vmem_frag_walked_out: 467287
kstat.spl.misc.spl_misc.spl_vmem_frag_walk_cnt: 652640437
kstat.spl.misc.spl_misc.spl_arc_reclaim_avoided: 1005
kstat.spl.misc.spl_misc.kmem_free_to_slab_when_fragmented: 0
kstat.zfs.darwin.ldi.handle_count: 6
kstat.zfs.darwin.ldi.handle_count_iokit: 6
kstat.zfs.darwin.ldi.handle_count_vnode: 0
kstat.zfs.darwin.ldi.handle_refs: 6
kstat.zfs.darwin.ldi.handle_open_rw: 6
kstat.zfs.darwin.ldi.handle_open_ro: 0
kstat.zfs.darwin.tunable.spa_version: 5000
kstat.zfs.darwin.tunable.zpl_version: 5
kstat.zfs.darwin.tunable.active_vnodes: 42296
kstat.zfs.darwin.tunable.vnop_debug: 0
kstat.zfs.darwin.tunable.reclaim_nodes: 13778885
kstat.zfs.darwin.tunable.ignore_negatives: 0
kstat.zfs.darwin.tunable.ignore_positives: 0
kstat.zfs.darwin.tunable.create_negatives: 1
kstat.zfs.darwin.tunable.force_formd_normalized: 0
kstat.zfs.darwin.tunable.skip_unlinked_drain: 0
kstat.zfs.darwin.tunable.zfs_arc_max: 0
kstat.zfs.darwin.tunable.zfs_arc_min: 0
kstat.zfs.darwin.tunable.zfs_arc_meta_limit: 0
kstat.zfs.darwin.tunable.zfs_arc_meta_min: 0
kstat.zfs.darwin.tunable.zfs_arc_grow_retry: 60
kstat.zfs.darwin.tunable.zfs_arc_shrink_shift: 0
kstat.zfs.darwin.tunable.zfs_arc_p_min_shift: 0
kstat.zfs.darwin.tunable.zfs_arc_average_blocksize: 8192
kstat.zfs.darwin.tunable.l2arc_write_max: 8388608
kstat.zfs.darwin.tunable.l2arc_write_boost: 8388608
kstat.zfs.darwin.tunable.l2arc_headroom: 2
kstat.zfs.darwin.tunable.l2arc_headroom_boost: 200
kstat.zfs.darwin.tunable.l2arc_max_block_size: 16777216
kstat.zfs.darwin.tunable.l2arc_feed_secs: 1
kstat.zfs.darwin.tunable.l2arc_feed_min_ms: 200
kstat.zfs.darwin.tunable.max_active: 1000
kstat.zfs.darwin.tunable.sync_read_min_active: 10
kstat.zfs.darwin.tunable.sync_read_max_active: 10
kstat.zfs.darwin.tunable.sync_write_min_active: 10
kstat.zfs.darwin.tunable.sync_write_max_active: 10
kstat.zfs.darwin.tunable.async_read_min_active: 1
kstat.zfs.darwin.tunable.async_read_max_active: 3
kstat.zfs.darwin.tunable.async_write_min_active: 1
kstat.zfs.darwin.tunable.async_write_max_active: 10
kstat.zfs.darwin.tunable.scrub_min_active: 1
kstat.zfs.darwin.tunable.scrub_max_active: 2
kstat.zfs.darwin.tunable.async_write_min_dirty_pct: 30
kstat.zfs.darwin.tunable.async_write_max_dirty_pct: 60
kstat.zfs.darwin.tunable.aggregation_limit: 131072
kstat.zfs.darwin.tunable.read_gap_limit: 32768
kstat.zfs.darwin.tunable.write_gap_limit: 4096
kstat.zfs.darwin.tunable.arc_reduce_dnlc_percent: 3
kstat.zfs.darwin.tunable.arc_lotsfree_percent: 10
kstat.zfs.darwin.tunable.zfs_dirty_data_max: 858993459
kstat.zfs.darwin.tunable.zfs_dirty_data_sync: 67108864
kstat.zfs.darwin.tunable.zfs_delay_max_ns: 100000000
kstat.zfs.darwin.tunable.zfs_delay_min_dirty_percent: 60
kstat.zfs.darwin.tunable.zfs_delay_scale: 500000
kstat.zfs.darwin.tunable.spa_asize_inflation: 6
kstat.zfs.darwin.tunable.zfs_mdcomp_disable: 0
kstat.zfs.darwin.tunable.zfs_prefetch_disable: 0
kstat.zfs.darwin.tunable.zfetch_max_streams: 8
kstat.zfs.darwin.tunable.zfetch_min_sec_reap: 2
kstat.zfs.darwin.tunable.zfetch_array_rd_sz: 1048576
kstat.zfs.darwin.tunable.zfs_default_bs: 9
kstat.zfs.darwin.tunable.zfs_default_ibs: 17
kstat.zfs.darwin.tunable.metaslab_aliquot: 524288
kstat.zfs.darwin.tunable.spa_max_replication_override: 3
kstat.zfs.darwin.tunable.spa_mode_global: 3
kstat.zfs.darwin.tunable.zfs_flags: 0
kstat.zfs.darwin.tunable.zfs_txg_timeout: 5
kstat.zfs.darwin.tunable.zfs_vdev_cache_max: 16384
kstat.zfs.darwin.tunable.zfs_vdev_cache_size: 0
kstat.zfs.darwin.tunable.zfs_vdev_cache_bshift: 0
kstat.zfs.darwin.tunable.vdev_mirror_shift: 0
kstat.zfs.darwin.tunable.zfs_scrub_limit: 0
kstat.zfs.darwin.tunable.zfs_no_scrub_io: 0
kstat.zfs.darwin.tunable.zfs_no_scrub_prefetch: 0
kstat.zfs.darwin.tunable.fzap_default_block_shift: 14
kstat.zfs.darwin.tunable.zfs_immediate_write_sz: 32768
kstat.zfs.darwin.tunable.zfs_read_chunk_size: 1048576
kstat.zfs.darwin.tunable.zfs_nocacheflush: 0
kstat.zfs.darwin.tunable.zil_replay_disable: 0
kstat.zfs.darwin.tunable.metaslab_df_alloc_threshold: 16777216
kstat.zfs.darwin.tunable.metaslab_df_free_pct: 4
kstat.zfs.darwin.tunable.zio_injection_enabled: 0
kstat.zfs.darwin.tunable.zvol_immediate_write_sz: 32768
kstat.zfs.darwin.tunable.l2arc_noprefetch: 1
kstat.zfs.darwin.tunable.l2arc_feed_again: 1
kstat.zfs.darwin.tunable.l2arc_norw: 1
kstat.zfs.darwin.tunable.zfs_top_maxinflight: 32
kstat.zfs.darwin.tunable.zfs_resilver_delay: 2
kstat.zfs.darwin.tunable.zfs_scrub_delay: 4
kstat.zfs.darwin.tunable.zfs_scan_idle: 50
kstat.zfs.darwin.tunable.zfs_recover: 0
kstat.zfs.darwin.tunable.zfs_free_bpobj_enabled: 1
kstat.zfs.darwin.tunable.zfs_send_corrupt_data: 0
kstat.zfs.darwin.tunable.zfs_send_queue_length: 16777216
kstat.zfs.darwin.tunable.zfs_recv_queue_length: 16777216
kstat.zfs.darwin.tunable.zvol_inhibit_dev: 0
kstat.zfs.darwin.tunable.zfs_send_set_freerecords_bit: 1
kstat.zfs.darwin.tunable.zfs_write_implies_delete_child: 1
kstat.zfs.darwin.tunable.zfs_send_holes_without_birth_time: 1
kstat.zfs.darwin.tunable.dbuf_cache_max_bytes: 234881024
kstat.zfs.darwin.tunable.zfs_vdev_queue_depth_pct: 1000
kstat.zfs.darwin.tunable.zio_dva_throttle_enabled: 1
kstat.zfs.darwin.tunable.zfs_vdev_file_size_mismatch_cnt: 0
kstat.zfs.misc.fm.erpt-dropped: 0
kstat.zfs.misc.fm.erpt-set-failed: 0
kstat.zfs.misc.fm.fmri-set-failed: 0
kstat.zfs.misc.fm.payload-set-failed: 0
kstat.zfs.misc.metaslab_trace_stats.metaslab_trace_over_limit: 0
kstat.zfs.misc.abdstats.struct_size: 63695848
kstat.zfs.misc.abdstats.scatter_cnt: 247321
kstat.zfs.misc.abdstats.scatter_data_size: 6076854272
kstat.zfs.misc.abdstats.scatter_chunk_waste: 50160640
kstat.zfs.misc.abdstats.linear_cnt: 0
kstat.zfs.misc.abdstats.linear_data_size: 0
kstat.zfs.misc.abdstats.is_file_data_scattered: 5829285888
kstat.zfs.misc.abdstats.is_metadata_scattered: 247568384
kstat.zfs.misc.abdstats.is_file_data_linear: 0
kstat.zfs.misc.abdstats.is_metadata_linear: 0
kstat.zfs.misc.abdstats.small_scatter_cnt: 80302
kstat.zfs.misc.abdstats.metadata_scattered_buffers: 61931
kstat.zfs.misc.abdstats.filedata_scattered_buffers: 185390
kstat.zfs.misc.abdstats.borrowed_bufs: 0
kstat.zfs.misc.abdstats.move_refcount_nonzero: 0
kstat.zfs.misc.abdstats.moved_linear: 0
kstat.zfs.misc.abdstats.moved_scattered_filedata: 461909
kstat.zfs.misc.abdstats.moved_scattered_metadata: 696313
kstat.zfs.misc.abdstats.move_to_buf_flag_fail: 99306227
kstat.zfs.misc.xuio_stats.onloan_read_buf: 0
kstat.zfs.misc.xuio_stats.onloan_write_buf: 0
kstat.zfs.misc.xuio_stats.read_buf_copied: 0
kstat.zfs.misc.xuio_stats.read_buf_nocopy: 0
kstat.zfs.misc.xuio_stats.write_buf_copied: 0
kstat.zfs.misc.xuio_stats.write_buf_nocopy: 482964
kstat.zfs.misc.zfetchstats.hits: 5064712
kstat.zfs.misc.zfetchstats.misses: 50605373
kstat.zfs.misc.zfetchstats.max_streams: 50277016
kstat.zfs.misc.dmu_tx.dmu_tx_assigned: 19584384
kstat.zfs.misc.dmu_tx.dmu_tx_delay: 0
kstat.zfs.misc.dmu_tx.dmu_tx_error: 0
kstat.zfs.misc.dmu_tx.dmu_tx_suspended: 0
kstat.zfs.misc.dmu_tx.dmu_tx_group: 8
kstat.zfs.misc.dmu_tx.dmu_tx_memory_reserve: 0
kstat.zfs.misc.dmu_tx.dmu_tx_memory_reclaim: 0
kstat.zfs.misc.dmu_tx.dmu_tx_dirty_throttle: 0
kstat.zfs.misc.dmu_tx.dmu_tx_dirty_delay: 0
kstat.zfs.misc.dmu_tx.dmu_tx_dirty_over_max: 0
kstat.zfs.misc.dmu_tx.dmu_tx_quota: 0
kstat.zfs.misc.arcstats.hits: 3077531
kstat.zfs.misc.arcstats.misses: 5040694
kstat.zfs.misc.arcstats.demand_data_hits: 405462
kstat.zfs.misc.arcstats.demand_data_misses: 231507
kstat.zfs.misc.arcstats.demand_metadata_hits: 1339042
kstat.zfs.misc.arcstats.demand_metadata_misses: 3975735
kstat.zfs.misc.arcstats.prefetch_data_hits: 121778
kstat.zfs.misc.arcstats.prefetch_data_misses: 120095
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 1211249
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 713357
kstat.zfs.misc.arcstats.mru_hits: 728804
kstat.zfs.misc.arcstats.mru_ghost_hits: 130249
kstat.zfs.misc.arcstats.mfu_hits: 2025889
kstat.zfs.misc.arcstats.mfu_ghost_hits: 372585
kstat.zfs.misc.arcstats.deleted: 1413573
kstat.zfs.misc.arcstats.mutex_miss: 22234
kstat.zfs.misc.arcstats.evict_skip: 13459108
kstat.zfs.misc.arcstats.evict_not_enough: 1210704
kstat.zfs.misc.arcstats.evict_l2_cached: 0
kstat.zfs.misc.arcstats.evict_l2_eligible: 101569713664
kstat.zfs.misc.arcstats.evict_l2_ineligible: 8263962624
kstat.zfs.misc.arcstats.evict_l2_skip: 0
kstat.zfs.misc.arcstats.hash_elements: 326045
kstat.zfs.misc.arcstats.hash_elements_max: 326047
kstat.zfs.misc.arcstats.hash_collisions: 1855391
kstat.zfs.misc.arcstats.hash_chains: 41573
kstat.zfs.misc.arcstats.hash_chain_max: 5
kstat.zfs.misc.arcstats.p: 5312491923
kstat.zfs.misc.arcstats.c: 6970038027
kstat.zfs.misc.arcstats.c_min: 939524096
kstat.zfs.misc.arcstats.c_max: 7516192768
kstat.zfs.misc.arcstats.size: 6909579776
kstat.zfs.misc.arcstats.compressed_size: 6076854272
kstat.zfs.misc.arcstats.uncompressed_size: 7955185664
kstat.zfs.misc.arcstats.overhead_size: 426586624
kstat.zfs.misc.arcstats.hdr_size: 77958944
kstat.zfs.misc.arcstats.data_size: 5932531200
kstat.zfs.misc.arcstats.metadata_size: 570909696
kstat.zfs.misc.arcstats.other_size: 328179936
kstat.zfs.misc.arcstats.anon_size: 5398528
kstat.zfs.misc.arcstats.anon_evictable_data: 0
kstat.zfs.misc.arcstats.anon_evictable_metadata: 0
kstat.zfs.misc.arcstats.mru_size: 5274651648
kstat.zfs.misc.arcstats.mru_evictable_data: 4778288128
kstat.zfs.misc.arcstats.mru_evictable_metadata: 114551296
kstat.zfs.misc.arcstats.mru_ghost_size: 1417127424
kstat.zfs.misc.arcstats.mru_ghost_evictable_data: 1051964416
kstat.zfs.misc.arcstats.mru_ghost_evictable_metadata: 365163008
kstat.zfs.misc.arcstats.mfu_size: 1223390720
kstat.zfs.misc.arcstats.mfu_evictable_data: 953356800
kstat.zfs.misc.arcstats.mfu_evictable_metadata: 59849728
kstat.zfs.misc.arcstats.mfu_ghost_size: 1830138880
kstat.zfs.misc.arcstats.mfu_ghost_evictable_data: 1107024896
kstat.zfs.misc.arcstats.mfu_ghost_evictable_metadata: 723113984
kstat.zfs.misc.arcstats.l2_hits: 0
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_feeds: 0
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_read_bytes: 0
kstat.zfs.misc.arcstats.l2_write_bytes: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 0
kstat.zfs.misc.arcstats.l2_writes_done: 0
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_lock_retry: 0
kstat.zfs.misc.arcstats.l2_writes_skip_toobig: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_evict_l1cached: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_asize: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.arcstats.arc_meta_used: 977048576
kstat.zfs.misc.arcstats.arc_meta_limit: 1879048192
kstat.zfs.misc.arcstats.arc_meta_max: 5790178832
kstat.zfs.misc.arcstats.arc_meta_min: 117440512
kstat.zfs.misc.arcstats.sync_wait_for_async: 5602
kstat.zfs.misc.arcstats.demand_hit_predictive_prefetch: 104876
kstat.zfs.misc.arcstats.tempreserve: 0
kstat.zfs.misc.arcstats.loaned_bytes: 0
kstat.zfs.misc.arcstats.dbuf_redirtied: 66729276
kstat.zfs.misc.arcstats.arc_no_grow: 0
kstat.zfs.misc.arcstats.arc_move_try: 105337257
kstat.zfs.misc.arcstats.arc_move_no_small_qcache: 4852509
kstat.zfs.misc.arcstats.arc_move_skip_young_abd: 20188
kstat.zfs.misc.arcstats.arc_move_buf_too_young: 111
kstat.zfs.misc.arcstats.arc_move_buf_busy: 0
kstat.zfs.misc.arcstats.arc_move_no_linear: 0
kstat.zfs.misc.arcstats.abd_scan_passes: 32140
kstat.zfs.misc.arcstats.abd_scan_not_one_pass: 0
kstat.zfs.misc.arcstats.abd_scan_not_mutex_skip: 3892
kstat.zfs.misc.arcstats.abd_scan_completed_list: 84269
kstat.zfs.misc.arcstats.abd_scan_list_timeout: 39750
kstat.zfs.misc.arcstats.abd_scan_big_arc: 34175
kstat.zfs.misc.arcstats.abd_scan_full_walk: 5765
kstat.zfs.misc.arcstats.abd_scan_skip_young: 39442722
kstat.zfs.misc.arcstats.abd_scan_skip_nothing: 6
kstat.zfs.misc.arcstats.abd_move_no_shared: 0
kstat.zfs.misc.arcstats.arc_reclaim_waiters_cnt: 0
kstat.zfs.misc.arcstats.arc_reclaim_waiters_cur: 0
kstat.zfs.misc.arcstats.arc_reclaim_waiters_sig: 0
kstat.zfs.misc.arcstats.arc_reclaim_waiters_bcst: 52522
kstat.zfs.misc.arcstats.arc_reclaim_waiters_tout: 0
kstat.zfs.misc.zil.zil_commit_count: 0
kstat.zfs.misc.zil.zil_commit_writer_count: 0
kstat.zfs.misc.zil.zil_itx_count: 0
kstat.zfs.misc.zil.zil_itx_indirect_count: 0
kstat.zfs.misc.zil.zil_itx_indirect_bytes: 0
kstat.zfs.misc.zil.zil_itx_copied_count: 0
kstat.zfs.misc.zil.zil_itx_copied_bytes: 0
kstat.zfs.misc.zil.zil_itx_needcopy_count: 0
kstat.zfs.misc.zil.zil_itx_needcopy_bytes: 0
kstat.zfs.misc.zil.zil_itx_metaslab_normal_count: 0
kstat.zfs.misc.zil.zil_itx_metaslab_normal_bytes: 0
kstat.zfs.misc.zil.zil_itx_metaslab_slog_count: 0
kstat.zfs.misc.zil.zil_itx_metaslab_slog_bytes: 0
kstat.zfs.misc.vdev_cache_stats.delegations: 0
kstat.zfs.misc.vdev_cache_stats.hits: 0
kstat.zfs.misc.vdev_cache_stats.misses: 0
kstat.zfs/spool.misc.dmu_tx_assign.1 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.2 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.4 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.8 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.16 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.32 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.64 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.128 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.256 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.512 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.1024 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.2048 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.4096 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.8192 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.16384 ns: 3
kstat.zfs/spool.misc.dmu_tx_assign.32768 ns: 2
kstat.zfs/spool.misc.dmu_tx_assign.65536 ns: 2
kstat.zfs/spool.misc.dmu_tx_assign.131072 ns: 1
kstat.zfs/spool.misc.dmu_tx_assign.262144 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.524288 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.1048576 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.2097152 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.4194304 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.8388608 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.16777216 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.33554432 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.67108864 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.134217728 ns: 5
kstat.zfs/spool.misc.dmu_tx_assign.268435456 ns: 2
kstat.zfs/spool.misc.dmu_tx_assign.536870912 ns: 5
kstat.zfs/spool.misc.dmu_tx_assign.1073741824 ns: 7
kstat.zfs/spool.misc.dmu_tx_assign.2147483648 ns: 18
kstat.zfs/spool.misc.dmu_tx_assign.4294967296 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.8589934592 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.17179869184 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.34359738368 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.68719476736 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.137438953472 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.274877906944 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.549755813888 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.1099511627776 ns: 0
kstat.zfs/spool.misc.dmu_tx_assign.2199023255552 ns: 0
zfs.kext_version: 1.7.4-1
jdwhite
 

Posts: 11
Joined: Sat May 10, 2014 6:04 pm

Re: Are these speeds out of the norm?

Postby tangent » Wed Oct 10, 2018 7:20 am

Have you read the wiki page on memory tuning?

The default advice is to have about 1 gig of RAM per terabyte of disk space, but the actual value depends on usage patterns. On a heavily-used pool, you'll need more RAM than for a pool that mostly just sits there idle, with the occasional read.

With "only" 16 gigs of RAM, there's an excellent chance that you're running your system into swapping because the ZFS ARC is sucking up all the RAM for itself. You might find that limiting the maximum ARC size per the wiki article's advice greatly improves overall system performance.

TANSTAAFL: doing so means ZFS will have to go back to disk more often than it would otherwise prefer to, but unless this system is only used as a file server, you'll want to reserve some RAM for non-ZFS tasks.
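
As a rough sketch (the 8 GiB figure below is only an illustration for a 16 GB machine, not a tested recommendation for your workload), the cap can be applied at runtime with sysctl and then read back to confirm it took effect:

Code: Select all
# cap the ARC at 8 GiB (8 * 1024^3 = 8589934592 bytes); pick a value that suits your workload
sudo sysctl kstat.zfs.darwin.tunable.zfs_arc_max=8589934592
# read back the current ceiling to confirm the change
sysctl kstat.zfs.darwin.tunable.zfs_arc_max

To keep the cap across reboots you'd put the same kstat.zfs.darwin.tunable.zfs_arc_max=8589934592 line into whatever mechanism you already use to apply ZFS tunables at boot.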
tangent
 
Posts: 47
Joined: Tue Nov 11, 2014 6:58 pm

Re: Are these speeds out of the norm?

Postby leeb » Wed Oct 10, 2018 4:15 pm

tangent wrote: Have you read the wiki page on memory tuning?

Be careful with the wiki: a lot of the information there is badly out of date. That specific page, for example, hasn't been edited since February 2015, well over three years ago, and O3X has seen major improvements to how it handles and releases memory since then.

The "1GB per TB" is also not correct, it's a piece of "wisdom" that's become entrenched at this point but it's based off a misunderstanding of an old Sun post aimed at a different focus then most end users have (back then of course there was no such thing). ZFS will work fine with far less memory, though it'll use more if available of course to increase speed. O3X at this point should be pretty decent compared to the old days at releasing if memory pressure grows, though of course there might still be a few edge cases or bugs there.
leeb
 
Posts: 43
Joined: Thu May 15, 2014 12:10 pm
