Please help me get my reads up...
-------------Client-------------
MacPro 2008 (3,1)
Chelsio 10Gbit
Xeon dual quad core @ 3.0GHz
32GB RAM
Boot is macOS 10.13.2 on an Apple SSD SM0128G installed in a PCIe adapter (yanked from a Mac mini Fusion Drive)
-------------Server-------------
HacPro (Gigabyte X58)
Chelsio 10Gbit
Xeon single quad core @ 3.0GHz
24GB RAM
Boot is macOS 10.13.2 on a Samsung 840 EVO 250GB
The ZFS pool uses Seagate ST4000DM000 disks.
Samba 4.7.3 (compiled from src)
Code:
$ sudo smbstatus
Samba version 4.7.3
PID     Username     Group     Machine                               Protocol Version   Encryption   Signing
----------------------------------------------------------------------------------------------------------------
1502    madmin       staff     10.10.10.2 (ipv4:10.10.10.2:49518)    SMB3_02            -            partial(AES-128-CMAC)

Service      pid     Machine        Connected at                     Encryption   Signing
---------------------------------------------------------------------------------------------
Files        1502    10.10.10.2     Mon Jan 15 16:10:33 2018 AEDT    -            -
The test file is ~30GB to ensure the ARC stays out of the way.
The machines are directly connected via a 2m Cat6A cable to eliminate all other network traffic.
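For anyone wanting to reproduce this, an incompressible test file of that size can be generated and the ARC size checked with something along these lines (the path and the sysctl name are examples for an O3X setup, adjust as needed):
Code:
# Example only: build a ~30GB incompressible test file so lz4 and the ARC
# can't flatter the numbers (path assumes the default /Volumes mountpoint).
dd if=/dev/urandom of=/Volumes/ztank/Files/testfile.bin bs=1m count=30720

# Current ARC size in bytes (O3X exposes arcstats via sysctl).
sysctl kstat.zfs.misc.arcstats.size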
Finder copy of the 30GB file between the MacPro and the HacPro, using zpool iostat to gather the pool's read and write speeds on ztank/Files. Read speeds first:
Code:
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
ztank                                           10.4T  4.11T    228      0   228M      0
  mirror                                        2.60T  1.03T     59      0  58.7M      0
    media-99FA76E2-ED5F-494D-97DE-F97A80287033      -      -     14      0  14.9M      0
    media-92B591C1-3EBE-D34E-B5F0-57A746C5FBF3      -      -     44      0  43.8M      0
  mirror                                        2.60T  1.03T     57      0  57.6M      0
    media-6FD8C2D0-7B50-074D-9070-3C7F482B8F27      -      -     18      0  18.9M      0
    media-9B215566-639F-3C43-995F-6BC4DC308963      -      -     38      0  38.8M      0
  mirror                                        2.60T  1.03T     54      0  54.7M      0
    media-7249EB5E-9EF9-8747-B17A-88A0AFD23260      -      -     18      0  18.9M      0
    media-99D6CA9C-DCED-2A47-BA10-0382851DEAAD      -      -     35      0  35.8M      0
  mirror                                        2.60T  1.03T     56      0  56.6M      0
    media-8E377A23-7832-F24E-9B97-1512C3BC1C89      -      -     30      0  30.8M      0
    media-4EF4837C-9E04-E844-A6F7-52C7D309B0F9      -      -     25      0  25.8M      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
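The per-vdev numbers above and below come from watching the pool with zpool iostat while the copy runs, along these lines (the 5-second interval is just an example):
Code:
# Per-vdev throughput, refreshed every 5 seconds.
sudo zpool iostat -v ztank 5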
Now look at write speeds:
Code:
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
ztank                                           10.4T  4.13T      0    729      0   729M
  mirror                                        2.59T  1.03T      0    194      0   195M
    media-99FA76E2-ED5F-494D-97DE-F97A80287033      -      -      0     97      0  97.3M
    media-92B591C1-3EBE-D34E-B5F0-57A746C5FBF3      -      -      0     97      0  97.3M
  mirror                                        2.60T  1.03T      0    195      0   195M
    media-6FD8C2D0-7B50-074D-9070-3C7F482B8F27      -      -      0     97      0  97.3M
    media-9B215566-639F-3C43-995F-6BC4DC308963      -      -      0     98      0  98.2M
  mirror                                        2.59T  1.03T      0    195      0   195M
    media-7249EB5E-9EF9-8747-B17A-88A0AFD23260      -      -      0     99      0  99.2M
    media-99D6CA9C-DCED-2A47-BA10-0382851DEAAD      -      -      0     96      0  96.3M
  mirror                                        2.59T  1.03T      0    143      0   144M
    media-8E377A23-7832-F24E-9B97-1512C3BC1C89      -      -      0    101      0   101M
    media-4EF4837C-9E04-E844-A6F7-52C7D309B0F9      -      -      0     42      0  42.8M
----------------------------------------------  -----  -----  -----  -----  -----  -----
INSANE difference!
ztank history:
Code:
zpool create -f -o ashift=12 -O compression=lz4 -O checksum=skein -O casesensitivity=insensitive -O atime=off -O normalization=formD ztank mirror disk2 disk3 mirror disk4 disk5 mirror disk6 disk7 mirror disk21 disk22
2017-10-21.15:32:35 zfs set reservation=1m ztank
2017-10-21.15:32:41 zfs set com.apple.mimic_hfs=on ztank
2017-10-21.15:44:45 zfs set recordsize=1m ztank
2017-10-21.15:56:11 zfs set xattr=sa ztank
2017-10-21.15:57:10 zfs set redundant_metadata=most ztank
2017-10-21.16:04:02 zfs create ztank/Comedy
2017-10-21.16:04:07 zfs create ztank/Docos
2017-10-21.16:04:11 zfs create ztank/Files
2017-10-21.16:04:16 zfs create ztank/Movies
2017-10-21.16:04:21 zfs create ztank/Music
2017-10-21.16:04:31 zfs create ztank/Pictures
2017-10-21.16:04:35 zfs create ztank/Sport
2017-10-21.16:04:44 zfs create ztank/TVShows
2017-10-21.16:04:48 zfs create ztank/Video
All files on this pool are big, fat and juicy video files, which is why I set the recordsize to 1M.
I have also tried 512K and 1024K, which made no difference.
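Worth keeping in mind: recordsize only applies to blocks written after the change, so re-reading files that were already on the pool will not reflect a new value. Something like this shows and changes it (existing files keep their original block size until rewritten):
Code:
# Current recordsize on the dataset being tested.
zfs get recordsize ztank/Files

# Only affects newly written files; existing data keeps its old block size.
sudo zfs set recordsize=1m ztank/Files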
server tunables:
cat /etc/zfs/zsysctl.conf
Code:
# 10 Mar 2015; ilovezfs
# Cap the ARC to 11 GB reserving 5 GB for applications.
# 11 * 2^30 = 11,811,160,064
kstat.zfs.darwin.tunable.zfs_arc_max=11811160064
# As another example, let's raise the zfs_arc_meta_limit:
# 10 Mar 2015; ilovezfs
# Raise zfs_arc_meta_limit to 3/4 (instead of 1/4) of zfs_arc_max.
# 3/4 * (11 * 2^30) = 8,858,370,048
# But let's use hexadecimal this time.
# 8,858,370,048 = 0x210000000
kstat.zfs.darwin.tunable.zfs_arc_meta_limit=0x210000000
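To confirm the two ARC tunables actually took effect after boot, they can be read back directly (same names as in the file above):
Code:
# Live values should match zsysctl.conf.
sysctl kstat.zfs.darwin.tunable.zfs_arc_max
sysctl kstat.zfs.darwin.tunable.zfs_arc_meta_limit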
cat sysctl.conf
Code:
kern.ipc.maxsockbuf=8388608
kern.ipc.somaxconn=2048
kern.ipc.nmbclusters=65536
net.inet.tcp.win_scale_factor=4
net.inet.tcp.sendspace=1042560
net.inet.tcp.recvspace=1042560
net.inet.tcp.mssdflt=1448
net.inet.tcp.v6mssdflt=1428
net.inet.tcp.msl=15000
net.inet.tcp.always_keepalive=0
net.inet.tcp.delayed_ack=3
net.inet.tcp.slowstart_flightsize=20
net.inet.tcp.local_slowstart_flightsize=20
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.icmp.icmplim=50
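Assuming these are picked up from sysctl.conf at boot, individual values can also be checked or changed on a running system with sysctl itself, e.g.:
Code:
# Read a value back to confirm it is active.
sysctl net.inet.tcp.sendspace

# Or set one on the fly for testing (root required).
sudo sysctl -w net.inet.tcp.sendspace=1042560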
client sysctl tunables are the same:
cat sysctl.conf
Code:
kern.ipc.maxsockbuf=8388608
kern.ipc.somaxconn=2048
kern.ipc.nmbclusters=65536
net.inet.tcp.win_scale_factor=4
net.inet.tcp.sendspace=1042560
net.inet.tcp.recvspace=1042560
net.inet.tcp.mssdflt=1448
net.inet.tcp.v6mssdflt=1428
net.inet.tcp.msl=15000
net.inet.tcp.always_keepalive=0
net.inet.tcp.delayed_ack=3
net.inet.tcp.slowstart_flightsize=20
net.inet.tcp.local_slowstart_flightsize=20
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.icmp.icmplim=50
Can anyone hint at why the pool is so slow for reads? (Reading locally from the pool, i.e. without Samba, gives the same poor read speeds.)
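For reference, the kind of local read test meant here is simply reading the same file straight off the pool, bypassing Samba, something along these lines (path assumes the default /Volumes mountpoint):
Code:
# Local sequential read of the test file, no network or Samba involved.
dd if=/Volumes/ztank/Files/testfile.bin of=/dev/null bs=1m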