
Re: slow write speeds still :cry:

PostPosted: Thu Mar 29, 2018 12:07 am
by tangles
Here's the same block size vs. data size test on FreeNAS:

Code:
root@FreeNAS:/mnt/ztank # uname -imr
11.1-STABLE amd64 FREENAS64
root@FreeNAS:/mnt/ztank # zpool status ztank
  pool: ztank
 state: ONLINE
  scan: scrub repaired 0 in 0 days 13:09:46 with 0 errors on Tue Feb 13 11:55:49 2018
config:

   NAME                                            STATE     READ WRITE CKSUM
   ztank                                           ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       gptid/99fa76e2-ed5f-494d-97de-f97a80287033  ONLINE       0     0     0
       gptid/92b591c1-3ebe-d34e-b5f0-57a746c5fbf3  ONLINE       0     0     0
     mirror-1                                      ONLINE       0     0     0
       gptid/6fd8c2d0-7b50-074d-9070-3c7f482b8f27  ONLINE       0     0     0
       gptid/9b215566-639f-3c43-995f-6bc4dc308963  ONLINE       0     0     0
     mirror-2                                      ONLINE       0     0     0
       gptid/7249eb5e-9ef9-8747-b17a-88a0afd23260  ONLINE       0     0     0
       gptid/99d6ca9c-dced-2a47-ba10-0382851deaad  ONLINE       0     0     0
     mirror-3                                      ONLINE       0     0     0
       gptid/8e377a23-7832-f24e-9b97-1512c3bc1c89  ONLINE       0     0     0
       gptid/4ef4837c-9e04-e844-a6f7-52c7d309b0f9  ONLINE       0     0     0

errors: No known data errors

Code:
root@FreeNAS:/mnt/ztank #  time dd if=/dev/zero of=/mnt/ztank/speedtest bs=131072 count=4096
4096+0 records in
4096+0 records out
536870912 bytes transferred in 0.481297 secs (1115466481 bytes/sec)
0.000u 0.392s 0:00.48 81.2%   30+174k 0+4096io 0pf+0w
root@FreeNAS:/mnt/ztank # time dd if=/dev/zero of=/mnt/ztank/speedtest bs=512 count=1048576
1048576+0 records in
1048576+0 records out
536870912 bytes transferred in 6.059399 secs (88601352 bytes/sec)
0.133u 5.883s 0:06.06 99.1%   30+173k 0+1048576io 0pf+0w
root@FreeNAS:/mnt/ztank #


Half a second and 6 seconds!! Obviously this is staying in ARC… so what's ZFS on macOS doing then?
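
To see how much of that is just write buffering, a rough variant of the same test is to time dd together with a sync, so the clock only stops once the dirty data has actually been pushed out to the vdevs. A sketch along the lines of the commands above (not re-run on this box):

Code:
# Same 512 MiB written with 128 KiB and 512-byte blocks, but the timing
# now includes flushing dirty data, so the result reflects the disks
# rather than what is still sitting in RAM.
cd /mnt/ztank
time sh -c 'dd if=/dev/zero of=speedtest bs=131072 count=4096 && sync'
time sh -c 'dd if=/dev/zero of=speedtest bs=512 count=1048576 && sync'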

This test is on an old X58 mobo with a 3GHz QC Xeon and 24GB RAM… (the pool uses 4TB Seagates connected to a RR2744)

This is the same hardware that was running macOS 10.13.2 a week ago… :o but I'm getting > 700MB/sec over my network again...

Re: slow write speeds still :cry:

PostPosted: Thu Mar 29, 2018 4:13 pm
by lundman
OK, so we are going to ignore the rate limiting of small txgs. It seems to work as intended; all the platforms have it to some degree.

And both "cp" and "Finder" uses the f_iosize of the filesystem to do the copy, ie, 131072 by default. So it is in "fast" mode.

But clearly something else is going wrong, and it is slower for us.
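
Two quick things worth checking on the macOS side (the path and pool name below are just examples): what optimal I/O size the filesystem actually advertises for a file, and whether the pool sees steady or bursty writes while a copy runs. stat's %k prints st_blksize, which on these systems should normally mirror the f_iosize returned by statfs(), though that mapping is an assumption worth confirming:

Code:
# Optimal I/O block size advertised for a file on the dataset;
# for ZFS this is expected to show 131072 by default.
stat -f %k /Volumes/ztank/testfile

# Watch per-second pool throughput during a Finder/cp copy to see
# whether writes are steady or arrive in txg-sized bursts.
zpool iostat -v ztank 1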

Re: slow write speeds still :cry:

PostPosted: Sat Mar 31, 2018 2:59 am
by tangles
Thank you for looking into this Lundman.

Happy to donate another couple of pineapples your way to get to the bottom of this...

Cheers,

Re: slow write speeds still :cry:

PostPosted: Sat Mar 31, 2018 1:12 pm
by e8vww
Just to re-summarize: thank you, tangles, for doing some benchmarking, as there is none out there that I could find. I figured that with each new commit there would be a benchmark run to see the result. Can you confirm which version/commit you were on when you did this?

tangles wrote:Got my hands on a 5,1 MacPro, QC@3.3GHz 6GB RAM.
testfile is a compressed video file = 10.02GB
Summary of macOS Results
SSD 1TB HFS → 0m36.707s, ~280MB/sec
SSD 1TB ZFS → 1m7.716s, ~152MB/sec Δ 46% ↓ compared to HFS
Rot 1TB HFS → 0m56.543s, ~181MB/sec
Rot 1TB ZFS → 1m47.609s, ~95MB/sec (ashift=12) Δ 48% ↓ compared to HFS
Rot 1TB ZFS → 3m10.194s, ~54MB/sec (ashift=9) Δ 70% ↓ compared to HFS
Summary of FreeBSD Results
SSD 1TB ZFS → 0m36.707s, ~280MB/sec == compared to HFS
Rot 1TB ZFS → 0m55.34s, ~185MB/sec Δ 3% ↑ compared to HFS
Summary of Fedora 27 Server Results
SSD 1TB ZFS → 0m37.619s, ~273MB/sec Δ 2% ↓ compared to HFS
Rot 1TB ZFS → 0m58.155s, ~176MB/sec Δ 3% ↓ compared to HFS


I came to the same general conclusion: there is a roughly 50% drop compared to HFS. I just updated to the latest master, and my backup software, which was reading off a 2-drive 8TB rotational mirror at ~95MB/sec (in line with your results), has now dropped to ~60MB/sec on average. My LTO6 tapes were taking 8h to write out and now take 11h. How do I revert to a specific commit, and will that cause a problem for a pool with the latest features enabled?

By what means are you running FreeBSD and Fedora on the Mac Pro?
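
On the ashift comparison in the quoted summary above: if you want to confirm what a given pool was actually created with, or force 4K alignment on a new pool, something like the following should work (pool and disk names are placeholders; on FreeBSD the usual knob is the sysctl rather than a create-time property):

Code:
# Read back the ashift an existing pool was created with.
zdb -C ztank | grep ashift

# O3X / ZoL: force 4K alignment at creation time.
zpool create -o ashift=12 tank mirror disk0 disk1

# FreeBSD/FreeNAS: raise the minimum auto-detected ashift before creating the pool.
sysctl vfs.zfs.min_auto_ashift=12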

Re: slow write speeds still :cry:

PostPosted: Sat Mar 31, 2018 2:39 pm
by tangles
Hi e8vww,

I used a cMacPro with a PCIe HBA that can mount Apple flash storage sticks.

I have a few 128GB sticks and a 500GB one that I used to install all the OSes onto via a USB installer.

I then used the SATA2 interface to hook up the SSD and rotational disks for the test.

Re: slow write speeds still :cry:

PostPosted: Mon Apr 16, 2018 4:28 pm
by lundman
There is an illumos PR looking at something that appears quite similar to this issue. If you guys can, try the "pr616" branch here and see how it changes the IO.

Re: slow write speeds still :cry:

PostPosted: Tue Apr 17, 2018 12:12 am
by tangles
I'm a 6-hour drive away from the test rig atm, but I'll put it through its paces when I get back in a few days...

Re: slow write speeds still :cry:

PostPosted: Sat Apr 21, 2018 3:40 am
by e8wwv
lundman wrote: There is an illumos PR looking at something that appears quite similar to this issue. If you guys can, try the "pr616" branch here and see how it changes the IO.


Thanks, will try it. How do I install the branch?

Re: slow write speeds still :cry:

PostPosted: Sun Apr 22, 2018 3:46 pm
by lundman
Compile the sources from git - instructions are in the wiki, but issue a "git checkout pr616" first. There might be a switch to zfsadm if you prefer to compile that way.
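
Roughly, and assuming the usual O3X source layout (the wiki is the authoritative reference, and the companion spl repo may need building first on this release), the steps look something like:

Code:
# Grab the sources, switch to the pr616 branch, then build and install.
# Repo URL and the autotools steps are assumptions based on the wiki.
git clone https://github.com/openzfsonosx/zfs.git
cd zfs
git checkout pr616
./autogen.sh && ./configure && make
sudo make install    # reload the kexts (or reboot) afterwards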

Re: slow write speeds still :cry:

PostPosted: Sun Apr 22, 2018 5:40 pm
by e8wwv
lundman wrote: Compile the sources from git - instructions are in the wiki, but issue a "git checkout pr616" first. There might be a switch to zfsadm if you prefer to compile that way.


apr 17 master 337f65a 20180421 16:43:01|7463|root|[L182] wrote 1219090432 blocks (2438180864 KBytes) on volume [1], 11:25:41, 59264 KB/sec
pr616 branch 5ae3d33 20180422 15:33:33|39793|root|[L182] wrote 1219090432 blocks (2438180864 KBytes) on volume [1], 11:09:13, 60722 KB/sec

No significant difference copying a large set.