slow write speeds still :cry:

All your general support questions for OpenZFS on OS X.

Re: slow write speeds still :cry:

Postby lundman » Thu Jul 12, 2018 9:19 pm

I think they consider the slow-512-writes to be a feature, in that you can run several of those writes at once and they will all perform the same, without interfering with other transfers.

But I don't think that is the issue we're talking about; your slowdown is something that happens after the system has been running for a while, which might be a problem unique to OS X that we would need to solve.

We do have some changes coming up, though. It is possible that the taskq fixes rottegift is working on will help a little; hopefully he'll make a PR for that soon.

Re: slow write speeds still :cry:

Postby e8wwv » Tue Jul 17, 2018 9:57 pm

I have to do a reboot before running a tape backup to get full speed. Even something as simple as opening iTunes or viewing a few Finder windows causes the slowdown: instead of 125 MB/s I'll get 80-90 MB/s. With torrents it's a 50% reduction, and it never maintains a constant speed until a reboot.
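
For a rough before-and-after number, a simple sequential write test can quantify the drop; this is only a sketch and assumes /Volumes/tank is the pool's mountpoint (zeros compress away if compression is enabled, so copy a real file in that case):

Code: Select all
# rough sequential write test -- /Volumes/tank is a placeholder mountpoint
dd if=/dev/zero of=/Volumes/tank/ddtest bs=1m count=2048
rm /Volumes/tank/ddtest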

Re: slow write speeds still :cry:

Postby mkush » Wed Jul 18, 2018 5:41 am

That sounds exactly like what I experienced before I set the ARC to 1 GB: fast just after a reboot, slow thereafter. Dumb question: is your ARC setting actually taking effect? A syntax error or something?

Re: slow write speeds still :cry:

Postby e8wwv » Fri Jul 20, 2018 5:27 pm

My /etc/zfs/zsysctl.conf has 1 line: kstat.zfs.darwin.tunable.zfs_arc_max=1073741824

Code: Select all
sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_max=1073741824
kstat.zfs.darwin.tunable.zfs_arc_max: 1073741824 -> 1073741824


The 1 GB ARC setting appears to be applied correctly.

But after 4 days of uptime, kstat.spl.misc.spl_misc.os_mem_alloc reports 5152604160 bytes (roughly 4.8 GiB)...
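
For a one-shot comparison of the cap against what is actually in use, something along these lines works; the tunable and the SPL counter are the ones already quoted in this thread, while the arcstats path is an assumption carried over from other OpenZFS platforms and may differ on this build:

Code: Select all
# configured ARC cap (set in /etc/zfs/zsysctl.conf)
sysctl kstat.zfs.darwin.tunable.zfs_arc_max
# current ARC size -- assumes arcstats are exported under this kstat path
sysctl kstat.zfs.misc.arcstats.size
# total memory the SPL has allocated from the kernel
sysctl kstat.spl.misc.spl_misc.os_mem_alloc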

There is this thread: viewtopic.php?f=11&t=2838

and this quote:
Brendon wrote:The arc limit is not a hard limit on zfs memory use, this is a common misconception.

Typically the memory used will be 30-40% larger, due in part to overheads.

This should not be a problem, zfs will release memory as required when the machine is experiencing memory pressure. In most cases, that is

HTH
- Brendon


So I am confused: that is nearly five times the 1 GiB limit, not 30-40% larger. Is there a memory leak?

Re: slow write speeds still :cry:

Postby Brendon » Sun Jul 22, 2018 1:53 pm

Briefly -

No memory allocator is 100% space-efficient; there is always a compromise between performance and space efficiency. Ours is, to all intents and purposes, identical to the Illumos allocator - it's a best-of-breed design.

As Lundman has pointed out to you in the past, there are no memory leaks.

Cheers
Brendon

Re: slow write speeds still :cry:

Postby e8wwv » Sun Jul 22, 2018 10:49 pm

Okay, but your comment about 30-40% larger doesn't fit: mine is now up to 5960073216 bytes (about 5.6 GiB). I gradually run out of memory over 10-14 days. 7-day chart: [memory usage graph]
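
To capture that growth with timestamps rather than a screenshot, a simple sampling loop can be left running and graphed later; this is only a sketch, and the arcstats name is an assumption (the SPL counter is the one quoted earlier):

Code: Select all
# append a timestamped sample every 10 minutes
while true; do
    ts=$(date '+%Y-%m-%d %H:%M:%S')
    spl=$(sysctl -n kstat.spl.misc.spl_misc.os_mem_alloc)
    arc=$(sysctl -n kstat.zfs.misc.arcstats.size)   # assumed kstat path
    echo "$ts $spl $arc" >> /var/tmp/zfs-mem.log
    sleep 600
done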

Re: slow write speeds still :cry:

Postby lundman » Mon Jul 23, 2018 5:02 pm

Neat - something is certainly building up forever, but it is something we do clean up and free on unload. It should be part of the reaping when the ARC needs to deflate. I wonder what it could be.

Re: slow write speeds still :cry:

Postby Brendon » Mon Jul 23, 2018 8:02 pm

Rottegift's fragmentation scenario, maybe? I had similar issues with long uptimes.

B.

Re: slow write speeds still :cry:

Postby lundman » Wed Jul 25, 2018 4:09 pm

I'm also interested in his taskq work, since he did say the threads in master will sink into oblivion. But getting a PR out of him is the hard part :)

Re: slow write speeds still :cry:

Postby lundman » Wed Jul 25, 2018 9:40 pm

Actually, unloading SPL with all the bells and whistles set to stun:
Code: Select all
    0    kernel: (spl) SPL: stop spl_free_thread
    0    kernel: (spl) SPL: stop spl_free_thread, lock acquired, setting exit variable and waiting
    0    kernel: (spl) SPL: spl_free_thread_exit set to FALSE and exiting: cv_broadcasting
    0    kernel: (spl) SPL: spl_free_thread thread_exit
    0    kernel: (spl) SPL: spl_free_thread stop: while loop ended, dropping mutex
    0    kernel: (spl) SPL: spl_free_thread stop: destroying cv and mutex
    0    kernel: (spl) SPL: bsd_untimeout
    0    kernel: (spl) SPL: wait for taskqs to empty
    0    kernel: (spl) SPL: destroy taskq
    0    kernel: (spl) SPL: tsd unloading 0
    0    kernel: (spl) SPL: Warning: kmem_cache_destroy: 'kmem_alloc_160' (0xffffff80bc557870) not empty
    0    kernel: (spl) SPL: Warning: kmem_cache_destroy: 'kmem_va_4096' (0xffffff80bc553070) not empty
    0    kernel: (spl) SPL: Warning: kmem_cache_destroy: 'kmem_bufctl_cache' (0xffffff80bc552870) not empty
    0    kernel: (spl) SPL: Warning: kmem_cache_destroy: 'kmem_slab_cache' (0xffffff80bc552470) not empty
    0    kernel: (spl) SPL: Released 44 slabs
    0    kernel: (spl) SPL: vmem_destroy('kmem_default'): leaked 4096 bytes
    0    kernel: (spl) SPL: vmem_destroy('kmem_va'): leaked 32768 bytes
    0    kernel: (spl) SPL: vmem_destroy('kmem_msb'): leaked 40960 bytes
    0    kernel: (spl) SPL: vmem_destroy('kmem_metadata'): leaked 40960 bytes
    0    kernel: (spl) SPL: vmem_fini: stopped vmem_update.  Creating list and walking arenas.
    0    kernel: (spl)

    0    kernel: (spl) SPL: vmem_destroy('heap'): leaked 294912 bytes
    0    kernel: (spl) SPL: vmem_fini: walking spl_heap_arena, aka bucket_heap (pass 1)
    0    kernel: (spl) SPL: vmem_fini: calling vmem_xfree(spl_default_arena, ptr, 268435456);
    0    kernel: (spl) SPL: vmem_fini: walking spl_heap_arena, aka bucket_heap (pass 2)
    0    kernel: (spl) SPL: vmem_fini: walking bucket arenas...
    0    kernel: (spl) SPL: vmem_fini: walking vmem metadata-related arenas...
    0    kernel: (spl) SPL: vmem_fini walking the root arena (spl_default_arena)...
    0    kernel: (spl) SPL: vmem_fini destroying bucket heap
    0    kernel: (spl) SPL: vmem_destroy('bucket_heap'): leaked 294912 bytes
    0    kernel: (spl) SPL: vmem_fini destroying spl_bucket_arenas...
    0    kernel: (spl)  4096
    0    kernel: (spl)  8192
    0    kernel: (spl)  16384
    0    kernel: (spl)  32768
    0    kernel: (spl)  65536
    0    kernel: (spl)  131072
    0    kernel: (spl)  262144
    0    kernel: (spl)  524288
    0    kernel: (spl)  1048576
    0    kernel: (spl)  2097152
    0    kernel: (spl)  4194304
    0    kernel: (spl)  8388608
    0    kernel: (spl)  16777216
    0    kernel: (spl)
    0    kernel: (spl) SPL: vmem_fini destroying vmem_vmem_arena
    0    kernel: (spl) SPL: vmem_destroy('vmem_vmem'): leaked 26352 bytes
    0    kernel: (spl) SPL: vmem_destroy('vmem_vmem'): STILL 26352 bytes at kstat_delete() time
    0    kernel: (spl) SPL: vmem_fini destroying vmem_seg_arena
    0    kernel: (spl) SPL: vmem_destroy('vmem_seg'): leaked 180224 bytes
    0    kernel: (spl) SPL: vmem_destroy('vmem_seg'): STILL 180224 bytes at kstat_delete() time
    0    kernel: (spl) SPL: vmem_fini destroying vmem_hash_arena
    0    kernel: (spl) SPL: vmem_fini destroying vmem_metadata_arena
    0    kernel: (spl) SPL: vmem_destroy('vmem_metadata'): leaked 180224 bytes
    0    kernel: (spl) SPL: vmem_destroy('vmem_metadata'): STILL 180224 bytes at kstat_delete() time
    0    kernel: (spl)

    0    kernel: (spl) SPL: vmem_destroy('spl_default_arena'): leaked 197271 bytes
    0    kernel: (spl) SPL: vmem_destroy('spl_default_arena'): STILL 197271 bytes at kstat_delete() time
    0    kernel: (spl) SPL: vmem_fini destroying spl_default_arena_parent
    0    kernel: (spl) SPL: arenas removed, now try destroying mutexes...
    0    kernel: (spl) vmem_xnu_alloc_lock
    0    kernel: (spl) vmem_panic_lock
    0    kernel: (spl) vmem_pushpage_lock
    0    kernel: (spl) vmem_nosleep_lock
    0    kernel: (spl) vmem_sleep_lock
    0    kernel: (spl) vmem_segfree_lock
    0    kernel: (spl) vmem_list_lock
    0    kernel: (spl)

    0    kernel: (spl) SPL: WOULD HAVE released 1273271 bytes (82 spans) from arenas
    0    kernel: (spl) SPL: vmem_fini: Brief delay for readability...
    0    kernel: (spl) SPL: vmem_fini: done!
    0    kernel: (spl) Dumping leaked mutex allocations...
    0    kernel: (spl)   mutex <private> : ../icp/core/kcf_mech_tabs.c kcf_init_mech_tabs 161 : # leaks: 1
    0    kernel: (spl)   mutex <private> : ../icp/core/kcf_mech_tabs.c kcf_init_mech_tabs 242 : # leaks: 276
    0    kernel: (spl)   mutex <private> : ../icp/core/kcf_prov_tabs.c kcf_prov_tab_init 82 : # leaks: 1
    0    kernel: (spl)   mutex <private> : ../icp/core/kcf_sched.c kcf_sched_init 1124 : # leaks: 1
    0    kernel: (spl)   mutex <private> : ../icp/core/kcf_sched.c kcf_sched_init 1134 : # leaks: 16
    0    kernel: (spl)   mutex <private> : ../icp/core/kcf_sched.c kcfpool_alloc 1288 : # leaks: 1
    0    kernel: (spl)   mutex <private> : ../icp/core/kcf_sched.c kcfpool_alloc 1291 : # leaks: 1
    0    kernel: (spl)   mutex <private> : ../icp/core/kcf_sched.c kcf_sched_init 1142 : # leaks: 1
    0    kernel: (spl)   mutex <private> : ../icp/core/kcf_prov_tabs.c kcf_alloc_provider_desc 350 : # leaks: 3
    0    kernel: (spl)   mutex <private> : aggsum.c aggsum_init 87 : # leaks: 7
    0    kernel: (spl)   mutex <private> : aggsum.c aggsum_init 93 : # leaks: 14
    0    kernel: (spl) Dumped 322 leaked allocations. Wait for watchdog to exit..
    0    kernel: (spl) SPL: watchdog thread exit
    0    kernel: (spl) SPL: Unloaded module v1.7.3-1-gf93f1008 (os_mem_alloc: 0)


Nothing nuclear in there, but we should clean that up - these are all bite-sized if anyone wants to have a stab at them.
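
For anyone who wants to reproduce a report like the one above on a test machine, the rough sequence is to export the pools, unload the kexts, and pull the (spl) lines from the unified log; the bundle identifiers below are the usual O3X ones but should be treated as an assumption (check with kextstat), and the extra verbosity above requires the debug printing enabled in the build:

Code: Select all
# reproduce the unload-time report on a test machine, not in production
sudo zpool export -a
sudo kextunload -b net.lundman.zfs   # assumed O3X bundle IDs -- verify with kextstat
sudo kextunload -b net.lundman.spl
log show --last 10m --predicate 'eventMessage CONTAINS "(spl)"'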
