erratic performance with new drives

New to OpenZFS on OS X (Or ZFS in general)? Ask your questions here!

Re: erratic performance with new drives

Postby e8vww » Tue Feb 27, 2018 11:55 am

lundman wrote:I do appreciate the effort put into trying to figure out the bottleneck. I have not yet reached the milestone of looking at speed improvements and general optimisations, but perhaps it is ported enough now that we could.

One thing that would be interesting to see, when it is running slower than expected, is a flamegraph, so we can see where it is spending most of its time.

This usually entails running:
Code:
sudo dtrace -x stackframes=100 -n 'profile-997 /arg0/ {
    @[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks


while the slowdown is happening. It will run for 60 seconds and produce a hefty text file. This can then be converted to an SVG flamegraph, which I can do if I have access to the text file.


OK, will do. It's unpredictable, so I will take a baseline and then another when it bottlenecks.

I want to make sure I am up to date first. I followed the instructions at https://openzfsonosx.org/wiki/Install#Upgrading_a_source_install, but my kext versions do not update. Any idea what could be causing this?
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm

Re: erratic performance with new drives

Postby lundman » Wed Feb 28, 2018 4:17 am

Perhaps the kext unloading step fails? Use kextstat to check that they are really unloaded.
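
For example, something along these lines will show whether they are still loaded and try to unload them by hand (the net.lundman bundle identifiers are the usual O3X ones, but they may differ by release):
Code:
# List any loaded O3X kexts; no output means they are already unloaded
kextstat | grep -E 'lundman|zfs|spl'
# Try unloading manually; this fails if a pool is still imported, so export pools first
sudo kextunload -b net.lundman.zfs
sudo kextunload -b net.lundman.spl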
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: erratic performance with new drives

Postby Brendon » Thu Mar 01, 2018 4:36 am

It's probably worth removing all kext and ZFS executable files manually when switching between installer and self-compiled configurations.

I'm not 100% sure if it's still the case, but the installer used to place some files in different locations than "make install", leading to some uncomfortable configuration mismatches.

At a minimum, removing zfs.kext and spl.kext and then rebooting should clear the way for a clean-out if you're stuck.
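
As a rough sketch of that clean-out (the paths are assumptions based on a typical layout and may differ between releases, so check with kextstat and with which zfs / which zpool before deleting anything):
Code:
# Remove the kexts; depending on how they were installed they may live in
# /Library/Extensions or /System/Library/Extensions
sudo rm -rf /Library/Extensions/zfs.kext /Library/Extensions/spl.kext
# Optionally remove the userland tools as well (locations assumed)
# sudo rm -f /usr/local/bin/zfs /usr/local/bin/zpool
# Reboot before reinstalling so nothing stale stays loaded
sudo reboot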
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm

Re: erratic performance with new drives

Postby e8vww » Thu Mar 01, 2018 8:07 am

lundman wrote:Perhaps the kext unloading step fails? Use kextstat to check that they are really unloaded.


Thanks, they would not unload.

Brendon wrote:It's probably worth removing all kext and ZFS executable files manually when switching between installer and self-compiled configurations.
At a minimum, removing zfs.kext and spl.kext and then rebooting should clear the way for a clean-out if you're stuck.


Yes, this did it. Thanks! The 1 GB arc_max setting was retained from the previous install.
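
(As an aside, for anyone checking the same thing: the ARC limit should be visible via sysctl. The kstat names below are what I'd expect on O3X, but they may differ between releases.)
Code:
# Configured ARC maximum and current ARC size (kstat names assumed)
sysctl kstat.zfs.darwin.tunable.zfs_arc_max
sysctl kstat.zfs.misc.arcstats.size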

lundman wrote:
Code:
sudo dtrace -x stackframes=100 -n 'profile-997 /arg0/ {
    @[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks


Where do I find the output from this? Is there software I can get to make the graph for you? I can report anecdotally that everything seems much faster overall under e364899 (is there a page mapping commit hashes to release versions?), still with inexplicable slow periods. Will continue testing and report back.

Update: overall, the spiking has been much reduced. The remaining problem is that activity on one pool slows down the other pools. Under HFS I was limited only by the speed of the drives; multiple operations between volumes did not slow down the remaining volumes the way they do under O3X. Should I run dtrace with single-pool vs. multiple-pool activity? I have 6 mirrored pairs.
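
In the meantime, one way to correlate the slow periods with per-pool activity is to leave zpool iostat running while the copies are going (pool names below are placeholders):
Code:
# Per-vdev I/O statistics for two pools, refreshed every 5 seconds
zpool iostat -v pool1 pool2 5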
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm

Re: erratic performance with new drives

Postby lundman » Thu Mar 01, 2018 11:08 am

-o out.stacks

would create a text file named out.stacks in your current working directory.
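
To turn that file into an SVG yourself, the usual route is Brendan Gregg's FlameGraph scripts, roughly like this:
Code:
git clone https://github.com/brendangregg/FlameGraph
cd FlameGraph
# Collapse the DTrace stacks, then render the flamegraph
./stackcollapse.pl ../out.stacks > out.folded
./flamegraph.pl out.folded > out.svg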

If you have slowdowns and are still curious what it is doing, feel free to do a spindump. If master is performing acceptably for you at the moment, keep an eye on it for now.
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: erratic performance with new drives

Postby e8vww » Fri Mar 02, 2018 3:18 am

lundman wrote:-o out.stacks

would create a text file named out.stacks in your current working directory.

If you have slowdowns and are still curious what it is doing, feel free to do a spindump. If master is performing acceptably for you at the moment, keep an eye on it for now.


OK, thanks, will do. I have no knowledge of OpenZFS internals, but it seems as though it can only handle a single pool with acceptable performance; each added pool degrades overall performance. I read on your site that you have a single chassis for testing, so it wouldn't show this problem. I am no programmer; can there be a ZFS process running for each pool? Do you think increasing its priority would make a difference?
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm

Re: erratic performance with new drives

Postby Brendon » Fri Mar 02, 2018 4:55 am

Actually, we develop primarily in a virtualised environment. The project has little to no hardware to speak of except for the VM host itself. Various people test and/or use it on hardware specific to their use, and we attempt, usually successfully, to incorporate fixes or remedies for the issues raised.

I will create some sort of virtualised setup resembling yours in a VM and see if I can see similar behaviour.

I'm only aware of one other user, fortunately a developer, who may have sufficient hardware to replicate your scenario.

As to your process question: no, there are no per-pool processes in the way that you describe. I don't think there is anything user-tunable in terms of the priority we assign to the threads in the kernel.

I guess the other thought I have is that we have no sense of what the theoretical maximum throughput of ZFS is on a Mac. It is unlikely to be as fast as HFS, and in my experience I've never seen more than 500 MB/sec going to 4- or 5-disk pools, one raidz2 and one of striped mirrors (on vastly different host machines). I'll ask the guy with more hardware what he's seen when I next see him online.
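
For a very rough ceiling on a single pool, a plain sequential write is usually enough to compare against HFS numbers; a sketch (the mount point is a placeholder, and writing zeros to a compressed dataset will overstate throughput, so use a real data file if compression is on):
Code:
# Rough sequential write test to one pool (mount point is a placeholder)
dd if=/dev/zero of=/Volumes/tank/ddtest bs=1m count=4096
rm /Volumes/tank/ddtest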

One thing we do know is that ZFS drives the memory allocator very hard (this manifests as glitchy, pausy behaviour and low ZFS throughput). We ported the Solaris allocator along with ZFS, and it made a huge difference, but ultimately it's backed by macOS internals, and they are quite slow under load.
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm

Re: erratic performance with new drives

Postby e8vww » Sat Mar 03, 2018 5:19 am

Brendon wrote:I will create some sort of virtualised setup resembling yours in a VM and see if I can see similar behaviour.

I'm only aware of one other user, fortunately a developer, who may have sufficient hardware to replicate your scenario.

As to your process question: no, there are no per-pool processes in the way that you describe. I don't think there is anything user-tunable in terms of the priority we assign to the threads in the kernel.

I guess the other thought I have is that we have no sense of what the theoretical maximum throughput of ZFS is on a Mac. It is unlikely to be as fast as HFS, and in my experience I've never seen more than 500 MB/sec going to 4- or 5-disk pools, one raidz2 and one of striped mirrors (on vastly different host machines). I'll ask the guy with more hardware what he's seen when I next see him online.

One thing we do know is that ZFS drives the memory allocator very hard (this manifests as glitchy, pausy behaviour and low ZFS throughput). We ported the Solaris allocator along with ZFS, and it made a huge difference, but ultimately it's backed by macOS internals, and they are quite slow under load.


All good points I hadn't considered, thanks. I have acceptable performance only because the six pools are mostly idle, two concurrent users max. Torrents really kill overall I/O, even with a separate recordsize=16k pool for that purpose. I am using tape for backup, and it only runs at about half speed backing up a mirrored ZFS pair, whereas under HFS it would hit full speed routinely. As mentioned, the spiking is reduced, but I still notice that activity on pools #1 and #2 slows down a download to #3, even though the download speed is 1/40 of the drive's maximum.

There was mention of drive and software cache size affecting speed. HFS does not use a cache like this; is there a way to disable caching in O3X? It seems like it creates an unnecessary step and extra I/O, and for large video files a cache offers no benefit.
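
(For what it's worth, ZFS does expose a per-dataset knob that limits caching to metadata only, which is the usual answer for large sequential video files; the dataset name below is a placeholder.)
Code:
# Cache only metadata, not file data, in the ARC for this dataset
sudo zfs set primarycache=metadata tank/video
zfs get primarycache tank/video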

What are people going to switch to now that Solaris is no longer being actively developed? macz mentioned the Oracle storage simulator; does that offer higher performance? It seems a bit clunky to have to run VMware in order to access your drives.
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm

Re: erratic performance with new drives

Postby Brendon » Sat Mar 03, 2018 10:08 am

Well, now that you're all upgraded, back to @lundman's benchmarking and flamegraph suggestion.

Have you described your hardware and OS configuration anywhere?

Cheers
Brendon
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm

Re: erratic performance with new drives

Postby e8vww » Sun Mar 04, 2018 3:42 am

Brendon wrote:Well, now that you're all upgraded, back to @lundman's benchmarking and flamegraph suggestion.

Have you described your hardware and OS configuration anywhere?

Cheers
Brendon


How much detail do you want? Non-Thunderbolt Mac Pro, 10.12.6, 36 GB RAM; (3) 8 TB ZFS mirrors, (2) 3 TB ZFS mirrors, (1) 4 TB ZFS mirror, (1) 2 TB HFS boot mirror. The ZFS mirrors are split 1-1 between two enclosures attached to the same eSATA controller, except for the 4 TB mirror, which is attached to the motherboard.

I generated a flamegraph during a period of poor performance, copying data off a recordsize=16k pool that was also receiving torrent data, onto a default recordsize mirror.

[Flamegraph image]
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm
