Help! slow reads… 8(((


Help! slow reads… 8(((

Postby tangles » Sun Jan 14, 2018 10:44 pm

My writes are almost 3x greater than my reads… :shock:

Please help me get my reads up...

-------------Client-------------
MacPro 2008 (3,1)
Chelsio 10Gbit
Xeon dual quad core @ 3.0GHz
32GB RAM
Boot is macOS 10.13.2 on an Apple SSD SM0128G (pulled from a Mac mini Fusion Drive) mounted in a PCIe adapter

-------------Server-------------
HacPro (Gigabyte X58)
Chelsio 10Gbit
Xeon single quad core @ 3.0GHz
24GB RAM
Boot is macOS 10.13.2 on a Samsung 840 EVO 250GB

The ZFS pool uses Seagate ST4000DM000 4TB disks.

Samba 4.7.3 (compiled from source)
Code: Select all
$ sudo smbstatus

Samba version 4.7.3
PID     Username     Group        Machine                                   Protocol Version  Encryption           Signing             
----------------------------------------------------------------------------------------------------------------------------------------
1502    madmin       staff        10.10.10.2 (ipv4:10.10.10.2:49518)        SMB3_02           -                    partial(AES-128-CMAC)

Service      pid     Machine       Connected at                     Encryption   Signing     
---------------------------------------------------------------------------------------------
Files        1502    10.10.10.2    Mon Jan 15 16:10:33 2018 AEDT    -            -           


The test file is ~30GB to ensure the ARC stays out of the way.
The machines are directly connected via a 2m Cat6A cable to eliminate all other network traffic.
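
For reference, the per-vdev numbers below were presumably gathered with zpool iostat in verbose mode, something like this (a 1-second refresh interval is assumed):
Code: Select all
zpool iostat -v ztank 1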

A 30GB Finder copy between the MacPro and the HacPro, with zpool iostat used to gather the pool's speeds on ztank/Files. First, the read side of the copy:

Code: Select all
                                                 capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
ztank                                           10.4T  4.11T    228      0   228M      0
  mirror                                        2.60T  1.03T     59      0  58.7M      0
    media-99FA76E2-ED5F-494D-97DE-F97A80287033      -      -     14      0  14.9M      0
    media-92B591C1-3EBE-D34E-B5F0-57A746C5FBF3      -      -     44      0  43.8M      0
  mirror                                        2.60T  1.03T     57      0  57.6M      0
    media-6FD8C2D0-7B50-074D-9070-3C7F482B8F27      -      -     18      0  18.9M      0
    media-9B215566-639F-3C43-995F-6BC4DC308963      -      -     38      0  38.8M      0
  mirror                                        2.60T  1.03T     54      0  54.7M      0
    media-7249EB5E-9EF9-8747-B17A-88A0AFD23260      -      -     18      0  18.9M      0
    media-99D6CA9C-DCED-2A47-BA10-0382851DEAAD      -      -     35      0  35.8M      0
  mirror                                        2.60T  1.03T     56      0  56.6M      0
    media-8E377A23-7832-F24E-9B97-1512C3BC1C89      -      -     30      0  30.8M      0
    media-4EF4837C-9E04-E844-A6F7-52C7D309B0F9      -      -     25      0  25.8M      0
----------------------------------------------  -----  -----  -----  -----  -----  -----


Now look at write speeds:
Code: Select all
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
ztank                                           10.4T  4.13T      0    729      0   729M
  mirror                                        2.59T  1.03T      0    194      0   195M
    media-99FA76E2-ED5F-494D-97DE-F97A80287033      -      -      0     97      0  97.3M
    media-92B591C1-3EBE-D34E-B5F0-57A746C5FBF3      -      -      0     97      0  97.3M
  mirror                                        2.60T  1.03T      0    195      0   195M
    media-6FD8C2D0-7B50-074D-9070-3C7F482B8F27      -      -      0     97      0  97.3M
    media-9B215566-639F-3C43-995F-6BC4DC308963      -      -      0     98      0  98.2M
  mirror                                        2.59T  1.03T      0    195      0   195M
    media-7249EB5E-9EF9-8747-B17A-88A0AFD23260      -      -      0     99      0  99.2M
    media-99D6CA9C-DCED-2A47-BA10-0382851DEAAD      -      -      0     96      0  96.3M
  mirror                                        2.59T  1.03T      0    143      0   144M
    media-8E377A23-7832-F24E-9B97-1512C3BC1C89      -      -      0    101      0   101M
    media-4EF4837C-9E04-E844-A6F7-52C7D309B0F9      -      -      0     42      0  42.8M
----------------------------------------------  -----  -----  -----  -----  -----  -----


INSANE difference!

ztank history:
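For reference, this listing comes from the pool's built-in command history, which can be pulled at any time with:
Code: Select all
sudo zpool history ztank
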
Code: Select all
zpool create -f -o ashift=12 -O compression=lz4 -O checksum=skein -O casesensitivity=insensitive -O atime=off -O normalization=formD ztank mirror disk2 disk3 mirror disk4 disk5 mirror disk6 disk7 mirror disk21 disk22
2017-10-21.15:32:35 zfs set reservation=1m ztank
2017-10-21.15:32:41 zfs set com.apple.mimic_hfs=on ztank
2017-10-21.15:44:45 zfs set recordsize=1m ztank
2017-10-21.15:56:11 zfs set xattr=sa ztank
2017-10-21.15:57:10 zfs set redundant_metadata=most ztank
2017-10-21.16:04:02 zfs create ztank/Comedy
2017-10-21.16:04:07 zfs create ztank/Docos
2017-10-21.16:04:11 zfs create ztank/Files
2017-10-21.16:04:16 zfs create ztank/Movies
2017-10-21.16:04:21 zfs create ztank/Music
2017-10-21.16:04:31 zfs create ztank/Pictures
2017-10-21.16:04:35 zfs create ztank/Sport
2017-10-21.16:04:44 zfs create ztank/TVShows
2017-10-21.16:04:48 zfs create ztank/Video


All files on this pool are big, fat, juicy video files, which is why I set the recordsize to 1M.
I also tried 512K and 1024K, which made no difference.
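
Note that recordsize only applies to blocks written after the change; files already on disk keep whatever record size they were written with. A quick way to confirm what a dataset is actually using (dataset names taken from the history above):
Code: Select all
zfs get recordsize ztank ztank/Files ztank/Movies
# a changed value only affects newly written blocks
sudo zfs set recordsize=1m ztank/Files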

server tunables:
cat /etc/zfs/zsysctl.conf
Code: Select all
# 10 Mar 2015; ilovezfs
# Cap the ARC to 11 GB reserving 5 GB for applications.
# 11 * 2^30 = 11,811,160,064
kstat.zfs.darwin.tunable.zfs_arc_max=11811160064
# As another example, let's raise the zfs_arc_meta_limit:
# 10 Mar 2015; ilovezfs
# Raise zfs_arc_meta_limit to 3/4 (instead of 1/4) of zfs_arc_max.
# 3/4 * (11 * 2^30) = 8,858,370,048
# But let's use hexadecimal this time.
# 8,858,370,048 = 0x210000000
kstat.zfs.darwin.tunable.zfs_arc_meta_limit=0x210000000
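
These keys are ordinary sysctl names, so (assuming the build exposes its tunables under the same kstat.zfs.darwin.tunable.* names at runtime) the values that actually took effect can be verified on the running system, for example:
Code: Select all
sysctl kstat.zfs.darwin.tunable.zfs_arc_max
sysctl kstat.zfs.darwin.tunable.zfs_arc_meta_limit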


cat sysctl.conf
Code: Select all
kern.ipc.maxsockbuf=8388608
kern.ipc.somaxconn=2048
kern.ipc.nmbclusters=65536
net.inet.tcp.win_scale_factor=4
net.inet.tcp.sendspace=1042560
net.inet.tcp.recvspace=1042560
net.inet.tcp.mssdflt=1448
net.inet.tcp.v6mssdflt=1428
net.inet.tcp.msl=15000
net.inet.tcp.always_keepalive=0
net.inet.tcp.delayed_ack=3
net.inet.tcp.slowstart_flightsize=20
net.inet.tcp.local_slowstart_flightsize=20
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.icmp.icmplim=50
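
Individual values like these can also be applied to a running system without a reboot (they won't persist unless they are also in /etc/sysctl.conf), e.g.:
Code: Select all
sudo sysctl -w net.inet.tcp.delayed_ack=3
sudo sysctl -w net.inet.tcp.recvspace=1042560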


The client's sysctl tunables are identical to the server's /etc/sysctl.conf above.


Anyone able to hint at why the pool is so slow for reads? (Reading locally from the pool, i.e. without Samba, gives the same poor read speeds.)

Re: Help! slow reads… 8(((

Postby lundman » Mon Jan 15, 2018 11:54 pm

Yes, that does seem quite slow. Is this master? We did some recent performance fixes there.

Re: Help! slow reads… 8(((

Postby lundman » Tue Jan 16, 2018 3:45 am

Hmm, actually: each HDD can do about ~40MB/s, and its mirror partner about half that, so a total read of 228MB/s is decent. Yes, writes are much higher, but are you sure that isn't just you filling the ARC? Have you watched arcstat while the copy runs and checked the write speed you get once it hits capacity?

Re: Help! slow reads… 8(((

Postby tangles » Tue Jan 16, 2018 5:32 am

Here's arcstat while reading a 30GB file off my main pool onto another pool; both pools use the same disks (4TB Seagates).
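For reference, output in this format comes from the bundled arcstat script run with a one-second interval, roughly as below (the exact script name and location can vary between installs; it may be just arcstat):
Code: Select all
arcstat.pl 1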
Code: Select all
    Time   read   miss  miss%   dmis  dm%  pmis  pm%   mmis  mm%   size  tsize 
00:03:09    339    201     59     18   11   184  100     14   73  2465M  2147M 
00:03:10     51     41     80     17   62    23  100     16   88  2207M  2147M 
00:03:12    354    209     59     15    9   194  100     13   61  2020M  2147M 
00:03:13    341    196     57      4    2   192  100      2  100  2191M  2147M 
00:03:14    305    172     56     14    9   158  100     12   85  2545M  2147M 
00:03:15    126     69     54     12   17    57  100     11   84  2070M  2147M 
00:03:16    474    266     56     15    6   251  100     12   80  2232M  2147M 
00:03:17    345    191     55     14    8   177   93     11   40  2147M  2147M 
00:03:18    130     84     64     16   25    69  100     15  100  2228M  2147M 
00:03:19    275    156     56     13    9   142  100     11   68  2155M  2147M


and here's zpool iostat at the same time:
Code: Select all
----------------------------------------------  -----  -----  -----  -----  -----  -----
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
BT                                              1.27T  2.35T      0  2.48K      0   317M
  mirror                                        1.27T  2.35T      0  2.48K      0   317M
    media-7F0EF0A8-4949-7049-AC43-A21B4E07795F      -      -      0  1.23K      0   158M
    media-D9741B61-CAB9-E146-BA5B-095A497E0CA6      -      -      0  1.24K      0   159M
----------------------------------------------  -----  -----  -----  -----  -----  -----
ztank                                           10.4T  4.13T    182      0   182M      0
  mirror                                        2.59T  1.03T     45      0  45.8M      0
    media-99FA76E2-ED5F-494D-97DE-F97A80287033      -      -     35      0  35.1M      0
    media-92B591C1-3EBE-D34E-B5F0-57A746C5FBF3      -      -     10      0  10.7M      0
  mirror                                        2.59T  1.03T     44      0  44.8M      0
    media-6FD8C2D0-7B50-074D-9070-3C7F482B8F27      -      -     43      0  43.9M      0
    media-9B215566-639F-3C43-995F-6BC4DC308963      -      -      0      0   998K      0
  mirror                                        2.59T  1.03T     45      0  45.8M      0
    media-7249EB5E-9EF9-8747-B17A-88A0AFD23260      -      -     45      0  45.8M      0
    media-99D6CA9C-DCED-2A47-BA10-0382851DEAAD      -      -      0      0      0      0
  mirror                                        2.59T  1.03T     45      0  45.8M      0
    media-8E377A23-7832-F24E-9B97-1512C3BC1C89      -      -     44      0  44.8M      0
    media-4EF4837C-9E04-E844-A6F7-52C7D309B0F9      -      -      0      0   998K      0
cache                                               -      -      -      -      -      -
  media-D2A1589A-338D-954C-935E-42333CCEFE8D     907M  55.0G      0      0      0      0
  media-696E44EB-0F9E-D64A-99F6-C7FDED422500     881M  55.0G      0      0      0      0
  media-5698E694-1763-3A41-BF43-B650437929C3     891M  55.0G      0     15      0  15.6M
  media-C8E67841-6861-3440-928F-555EE75DE17A     908M  55.0G      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----


I captured the above when the transfer was at 15 of 30GB.

I've since added 4 x 60GB SSDs as cache devices (commands shown below) to see if that helps… (as suspected, nope).
After testing with a 12GB ARC, the above test was run with the ARC reduced to just 2GB to see whether any change is visible, but I'm not seeing any sequential read improvement.
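
For reference, cache devices are attached and detached with plain zpool commands along these lines (the disk identifiers here are placeholders; use whatever zpool status reports for your SSDs):
Code: Select all
sudo zpool add ztank cache disk10 disk11 disk12 disk13
sudo zpool remove ztank disk10 disk11 disk12 disk13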

I forgot to mention that all the drives are connected via a RocketRAID 2744 SAS card.

I don't think I have any dodgy disks, as all of them exhibit around the same read speed in the pool.

Under HFS, these disks do roughly 165MB/sec sequential writes and 170MB/sec sequential reads…
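
For comparison, a single drive's raw sequential read rate can be sanity-checked straight off the device, bypassing any filesystem. The disk identifier below is just a placeholder; confirm it with diskutil list first, and only ever read from the device:
Code: Select all
sudo dd if=/dev/rdisk3 of=/dev/null bs=1m count=8192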

I did think about master, so I'll pull that down tomorrow night and see if it makes a difference.

ta

Re: Help! slow reads… 8(((

Postby tangles » Tue Jan 23, 2018 3:33 am

So, with 1.7.0 I set the ARC to just 2GB as well, but there's no difference on the read side of things.

I then manually exported the pool, deleted all the related ZFS files and installed master…

No change in read speeds, whether I have a 2GB ARC or my normal 12GB.

I did read from ztank while writing to /dev/null and managed to see speeds in zpool iostat jump to over 500MB/sec, but it wasn't all that consistent.
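
For reference, that kind of test is just a streaming read into /dev/null, something along these lines (the file path is only an example; datasets normally mount under /Volumes on O3X):
Code: Select all
dd if=/Volumes/ztank/Files/testfile_30GB.mkv of=/dev/null bs=1m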

It's like the pool can do it… it just somehow gets told to hold back… 8((

I added 4 x 60GB cache devices (just for testing; since removed), which didn't help, given L2ARC can only help when the file's blocks have been requested before.

I'll wait and see if any updates make an improvement.

cheers.

Re: Help! slow reads… 8(((

Postby e8vww » Tue Jan 30, 2018 3:42 am

lundman wrote:Yes, that does seem quite slow. Is this master? We did some recent performance fixes there.


How do I get this running on 10.12? I can't use ZFS for iTunes; there is a beachball on every track change. Brand-new enterprise drives used exclusively for iTunes :(

Re: Help! slow reads… 8(((

Postby tangles » Sun Feb 11, 2018 3:58 pm

Updated to master last week.

Still no change when reading off my main pool (i.e. ~250MB/sec).

So I thought I'd kick off a scrub:
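For reference, the scrub is kicked off and checked with the standard commands:
Code: Select all
sudo zpool scrub ztank
zpool status ztank

Here's zpool iostat while the scrub was running: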

Code: Select all
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
ztank                                           10.6T  3.94T    575      0   503M      0
  mirror                                        2.64T  1010G    138      0   125M      0
    media-99FA76E2-ED5F-494D-97DE-F97A80287033      -      -     69      0  63.0M      0
    media-92B591C1-3EBE-D34E-B5F0-57A746C5FBF3      -      -     68      0  62.1M      0
  mirror                                        2.64T  1007G    146      0   126M      0
    media-6FD8C2D0-7B50-074D-9070-3C7F482B8F27      -      -     72      0  61.7M      0
    media-9B215566-639F-3C43-995F-6BC4DC308963      -      -     73      0  64.5M      0
  mirror                                        2.64T  1008G    146      0   129M      0
    media-7249EB5E-9EF9-8747-B17A-88A0AFD23260      -      -     72      0  64.3M      0
    media-99D6CA9C-DCED-2A47-BA10-0382851DEAAD      -      -     73      0  64.3M      0
  mirror                                        2.64T  1008G    141      0   121M      0
    media-8E377A23-7832-F24E-9B97-1512C3BC1C89      -      -     71      0  60.0M      0
    media-4EF4837C-9E04-E844-A6F7-52C7D309B0F9      -      -     69      0  61.1M      0
----------------------------------------------  -----  -----  -----  -----  -----  -----


So the drives can clearly combine to push >500MB/sec read speeds… (which I already knew, given that writes are up around that speed).

So something in the ZFS read pipeline is holding reads back…

Short of blowing the pool away and re-creating it, has anyone got any suggestions?

Current tunables:
Code: Select all
cat /etc/sysctl.conf
kern.ipc.maxsockbuf=8388608
kern.ipc.somaxconn=2048
kern.ipc.nmbclusters=65536
net.inet.tcp.win_scale_factor=4
net.inet.tcp.mssdflt=1448
net.inet.tcp.v6mssdflt=1428
net.inet.tcp.msl=15000
net.inet.tcp.always_keepalive=0
net.inet.tcp.slowstart_flightsize=20
net.inet.tcp.local_slowstart_flightsize=9
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.icmp.icmplim=50
#begin_atto_network_settings
net.inet.tcp.sendspace=1048576
net.inet.tcp.recvspace=1048576
net.inet.tcp.delayed_ack=0
net.inet.tcp.rfc1323=1
#end_atto_network_settings

Re: Help! slow reads… 8(((

Postby e8vww » Mon Feb 12, 2018 8:42 am

lundman wrote:Hmm, actually: each HDD can do about ~40MB/s, and its mirror partner about half that, so a total read of 228MB/s is decent. Yes, writes are much higher, but are you sure that isn't just you filling the ARC? Have you watched arcstat while the copy runs and checked the write speed you get once it hits capacity?


My writes seem really slow: 8TB SATA x 2 in a mirror does 35MB/s, and 3TB SATA x 2 in a mirror does 15MB/s, compared to 107MB/s on a 2TB x 2 HFS mirror. Is this normal, and is there any way to speed it up without adding drives?

