slow write speeds still :cry:

Postby tangles » Sat Mar 17, 2018 7:12 am

Got my hands on a MacPro 5,1: quad-core @ 3.3GHz, 6GB RAM.
The test file is a compressed video file, 10.02GB (10,020,288,910 bytes).

The test is a cp of the test file from Apple PCIe flash storage (>700MB/sec) to the target.
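For reference, every macOS run below follows the same pattern, and the throughput figures are roughly the file size divided by the "real" time (a sketch only; <target> stands for whichever volume is under test):
Code: Select all
time cp /Users/madmin/Desktop/testfile.dat /Volumes/<target>
# MB/sec ≈ bytes copied / "real" seconds / 1,000,000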

Update: I'll post more results as I test with more operating systems:

Summary of macOS Results
HFS (Baseline)
SSD 1TB HFS → 0m36.707s, ~280MB/sec
Rot 1TB HFS → 0m56.543s, ~181MB/sec

ZFS
SSD 1TB ZFS → 1m7.716s, ~152MB/sec Δ 46% ↓ compared to HFS

Rot 1TB ZFS → 1m47.609s, ~95MB/sec (ashift=12) Δ 48% ↓ compared to HFS
Rot 1TB ZFS → 3m10.194s, ~54MB/sec (ashift=9) Δ 70% ↓ compared to HFS

Summary of FreeBSD Results
SSD 1TB ZFS → 0m35.58s, ~280MB/sec == compared to HFS
Rot 1TB ZFS → 0m55.34s, ~185MB/sec Δ 3% ↑ compared to HFS

Summary of Fedora 27 Server Results
SSD 1TB ZFS → 0m37.619s, ~273MB/sec Δ 2% ↓ compared to HFS
Rot 1TB ZFS → 0m58.155s, ~176MB/sec Δ 3% ↓ compared to HFS


Background:


Downloaded the latest FreeBSD installer to a USB stick and installed it onto 500GB PCIe flash storage that can sustain >700MB/sec read/write.
Code: Select all
FreeBSD cMPro 11.1-RELEASE-p8 FreeBSD 11.1-RELEASE-p8 #0: Tue Mar 13 17:07:05 UTC 2018     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64

I updated FreeBSD and created a pool with:
Code: Select all
zpool create -O atime=off stripe2 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4

I copied the 10GB test file via scp onto /usr/home/madmin on zroot.
Code: Select all
$ zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
stripe2             1.58G  7.02T  1.58G  /stripe
zroot               3.92G   103G    88K  /zroot
zroot/ROOT          3.27G   103G    88K  none
zroot/ROOT/default  3.27G   103G  3.27G  /
zroot/tmp             88K   103G    88K  /tmp
zroot/usr            655M   103G    88K  /usr
zroot/usr/home       132K   103G   132K  /usr/home
zroot/usr/ports      655M   103G   655M  /usr/ports
zroot/usr/src         88K   103G    88K  /usr/src
zroot/var            680K   103G    88K  /var
zroot/var/audit       88K   103G    88K  /var/audit
zroot/var/crash       88K   103G    88K  /var/crash
zroot/var/log        216K   103G   216K  /var/log
zroot/var/mail       112K   103G   112K  /var/mail
zroot/var/tmp         88K   103G    88K  /var/tmp

Now to test!
I wrote the 10GB file from zroot to stripe2 via cp, watching both pools with zpool iostat:
Code: Select all
               capacity     operations    bandwidth
pool       alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
stripe2     9.99G  7.24T      3  3.41K  15.7K   423M
  ada1      2.50G  1.81T      0    910      0   109M
  ada2      2.50G  1.81T      0    875  3.92K   105M
  ada3      2.49G  1.81T      0    845  3.92K   101M
  ada4      2.50G  1.81T      1    857  7.84K   107M
----------  -----  -----  -----  -----  -----  -----
zroot       13.3G  96.7G  3.10K      0   396M      0
  ada0p4    13.3G  96.7G  3.10K      0   396M      0
----------  -----  -----  -----  -----  -----  -----


Each disk is doing >100MB/sec… nice!

I shut down the cMPro, removed the 500GB PCIe FreeBSD flash adapter and replaced it with a 128GB PCIe flash adapter.

I installed a fresh macOS 10.13.3 from a USB stick.
Code: Select all
Darwin cMPro.local 17.4.0 Darwin Kernel Version 17.4.0: Sun Dec 17 09:19:54 PST 2017; root:xnu-4570.41.2~1/RELEASE_X86_64 x86_64


Installed O3X 1.7.1 and set the ARC max to 1000000000 (~1GB, as this is the best value for me to get maximum I/O from the disks when the ARC is either full or bypassed).
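(For anyone following along, this is roughly how I cap it. The kstat.zfs.darwin.tunable prefix is what my O3X 1.7.x install exposes, so check `sysctl kstat.zfs.darwin.tunable` for the exact name on your build; I believe the same line can also go in /etc/zfs/zsysctl.conf to persist it across reboots.)
Code: Select all
# rough sketch: cap the ARC at ~1GB for the current boot
sudo sysctl kstat.zfs.darwin.tunable.zfs_arc_max=1000000000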
I imported the stripe2 pool that was created using FreeBSD.

Code: Select all
cMPro:~ madmin$ zpool status
  pool: stripe2
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
   still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
   the pool may no longer be accessible by software that does not support
   the features. See zpool-features(5) for details.
  scan: none requested
config:

   NAME                                  STATE     READ WRITE CKSUM
   stripe2                               ONLINE       0     0     0
     PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0  ONLINE       0     0     0
     PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0  ONLINE       0     0     0
     PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0  ONLINE       0     0     0
     PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0  ONLINE       0     0     0

errors: No known data errors
cMPro:~ madmin$


And I performed the same test using cp via Terminal.app (HFS PCIe adapter to ZFS pool):
Code: Select all
                                       capacity     operations     bandwidth
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
stripe2                               21.3G  7.23T      0  2.51K      0   294M
  PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0  6.51G  1.81T      0    653      0  74.6M
  PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0  5.36G  1.81T      0    670      0  78.2M
  PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0  4.98G  1.81T      0    652      0  74.3M
  PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0  4.48G  1.81T      0    596      0  67.4M
------------------------------------  -----  -----  -----  -----  -----  -----

Hmm… not so nice.
That's roughly 25-35MB/sec slower on each disk…
Upgrading the pool didn't help either; it actually reduced speed further… :oops:
Code: Select all
cMPro:~ madmin$ sudo zpool upgrade
Password:
This system supports ZFS pool feature flags.

All pools are formatted using feature flags.


Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(5) for details.

POOL  FEATURE
---------------
stripe2
      edonr
      encryption

Code: Select all
                                        capacity     operations     bandwidth
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
stripe2                               28.7G  7.22T      0  1.16K      0   127M
  PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0  8.37G  1.80T      0    300      0  30.6M
  PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0  7.19G  1.81T      0    313      0  32.9M
  PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0  6.81G  1.81T      0    295      0  33.4M
  PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0  6.30G  1.81T      0    278      0  29.8M
------------------------------------  -----  -----  -----  -----  -----  -----
                                        capacity     operations     bandwidth
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
stripe2                               28.8G  7.22T      0  1.20K      0   128M
  PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0  8.39G  1.80T      0    326      0  33.3M
  PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0  7.22G  1.81T      0    268      0  32.0M
  PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0  6.84G  1.81T      0    306      0  29.7M
  PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0  6.33G  1.81T      0    330      0  33.0M
------------------------------------  -----  -----  -----  -----  -----  -----
                                        capacity     operations     bandwidth
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
stripe2                               29.0G  7.22T      0  1.36K      0   146M
  PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0  8.44G  1.80T      0    320      0  35.1M
  PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0  7.26G  1.81T      0    351      0  36.5M
  PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0  6.88G  1.81T      0    365      0  37.5M
  PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0  6.37G  1.81T      0    351      0  36.4M
------------------------------------  -----  -----  -----  -----  -----  -----
                                        capacity     operations     bandwidth
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
stripe2                               29.1G  7.22T      0  1.35K      0   145M
  PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0  8.47G  1.80T      0    300      0  35.7M
  PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0  7.30G  1.81T      0    365      0  36.0M
  PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0  6.92G  1.81T      0    359      0  36.3M
  PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0  6.41G  1.81T      0    361      0  37.5M
------------------------------------  -----  -----  -----  -----  -----  -----
                                        capacity     operations     bandwidth
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
stripe2                               29.2G  7.22T      0  1.35K      0   145M
  PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0  8.50G  1.80T      0    324      0  36.2M
  PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0  7.32G  1.81T      0    336      0  35.3M
  PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0  6.94G  1.81T      0    358      0  37.0M
  PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0  6.44G  1.81T      0    359      0  36.5M
------------------------------------  -----  -----  -----  -----  -----  -----
                                        capacity     operations     bandwidth
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
stripe2                               29.3G  7.22T      0  1.35K      0   145M
  PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0  8.53G  1.80T      0    355      0  36.9M
  PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0  7.36G  1.81T      0    299      0  36.4M
  PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0  6.98G  1.81T      0    367      0  36.5M
  PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0  6.47G  1.81T      0    356      0  35.6M
------------------------------------  -----  -----  -----  -----  -----  -----

If I pull one of the four disks out of the pool and format it as HFS, that single disk will outperform the remaining three disks striped with ZFS for large sustained transfers...
I really want to stay with ZFS on macOS, but this is really hurting now…
I'll test again tomorrow using Linux. I really hope the Linux and macOS numbers are similar.

bed time...
Last edited by tangles on Fri Apr 06, 2018 12:55 am, edited 7 times in total.
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am

Re: slow write speeds still :cry:

Postby tangles » Sat Mar 17, 2018 3:20 pm

Here's some more testing comparing HFS and ZFS using a 1TB SSD against a 1TB WD enterprise rotational drive.
Hardware is the same MacPro 5,1 as above running macOS 10.13.3, using the built-in SATA ports.
Testfile is: 10,020,288,910 bytes (10.02 GB on disk)
ARC max set to 1000000000 (~1GB)

SSD HFS speed test
Code: Select all
cMPro:~ madmin$ diskutil info disk1
   Device Identifier:        disk1
   Device Node:              /dev/disk1
   Whole:                    Yes
   Part of Whole:            disk1
   Device / Media Name:      Samsung SSD 840 EVO 1TB

   Volume Name:              Not applicable (no file system)
   Mounted:                  Not applicable (no file system)
   File System:              None

   Content (IOContent):      GUID_partition_scheme
   OS Can Be Installed:      No
   Media Type:               Generic
   Protocol:                 SATA
   SMART Status:             Verified

   Disk Size:                1.0 TB (1000204886016 Bytes) (exactly 1953525168 512-Byte-Units)
   Device Block Size:        512 Bytes

   Read-Only Media:          No
   Read-Only Volume:         Not applicable (no file system)

   Device Location:          Internal
   Removable Media:          Fixed

   Solid State:              Yes
   Virtual:                  No
   OS 9 Drivers:             No
   Low Level Format:         Not supported
   Device Location:          "Upper"

cMPro:~ madmin$ sudo diskutil partitiondisk /dev/disk1 GPTFormat "Free Space" "Free Space" 100%;
Started partitioning on disk1
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Finished partitioning on disk1
/dev/disk1 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk1
   1:                        EFI EFI                     209.7 MB   disk1s1
cMPro:~ madmin$ sudo diskutil eraseDisk JHFS+ Samsung-SSD-840-EVO-1TB /dev/disk1
Started erase on disk1
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Formatting disk1s2 as Mac OS Extended (Journaled) with name Samsung-SSD-840-EVO-1TB
Initialized /dev/rdisk1s2 as a 931 GB case-insensitive HFS Plus volume with a 81920k journal
Mounting disk
Finished erase on disk1
cMPro:~ madmin$ time cp /Users/madmin/Desktop/testfile.dat /Volumes/Samsung-SSD-840-EVO-1TB

real   0m36.707s
user   0m0.024s
sys   0m8.481s
cMPro:~ madmin$

SSD ZFS speed test
Code: Select all
cMPro:~ madmin$ diskutil unmountDisk disk1
Unmount of all volumes on disk1 was successful
cMPro:~ madmin$ sudo diskutil partitiondisk /dev/disk1 GPTFormat "Free Space" "Free Space" 100%;
Password:
Started partitioning on disk1
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Finished partitioning on disk1
/dev/disk1 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk1
   1:                        EFI EFI                     209.7 MB   disk1s1
cMPro:~ madmin$ sudo zpool create -f -o ashift=12 -O compression=lz4 -O checksum=skein -O casesensitivity=insensitive -O atime=off -O normalization=formD Samsung-SSD-840-EVO-1TB-ZFS /dev/disk1
cMPro:~ madmin$ sudo chown -Rf unknown:staff /Volumes/Samsung-SSD-840-EVO-1TB-ZFS
cMPro:~ madmin$ zpool status
  pool: Samsung-SSD-840-EVO-1TB-ZFS
 state: ONLINE
  scan: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   Samsung-SSD-840-EVO-1TB-ZFS  ONLINE       0     0     0
     disk1     ONLINE       0     0     0

errors: No known data errors
cMPro:~ madmin$ time cp /Users/madmin/Desktop/testfile.dat /Volumes/Samsung-SSD-840-EVO-1TB-ZFS

real   1m7.716s
user   0m0.020s
sys   0m9.580s
cMPro:~ madmin$

WD Enterprise 1TB rotational HFS test
Code: Select all
cMPro:~ madmin$ diskutil info disk0
   Device Identifier:        disk0
   Device Node:              /dev/disk0
   Whole:                    Yes
   Part of Whole:            disk0
   Device / Media Name:      WDC WD1002F9YZ-09H1JL1

   Volume Name:              Not applicable (no file system)
   Mounted:                  Not applicable (no file system)
   File System:              None

   Content (IOContent):      GUID_partition_scheme
   OS Can Be Installed:      No
   Media Type:               Generic
   Protocol:                 SATA
   SMART Status:             Verified

   Disk Size:                1.0 TB (1000204886016 Bytes) (exactly 1953525168 512-Byte-Units)
   Device Block Size:        512 Bytes

   Read-Only Media:          No
   Read-Only Volume:         Not applicable (no file system)

   Device Location:          Internal
   Removable Media:          Fixed

   Solid State:              No
   Virtual:                  No
   OS 9 Drivers:             No
   Low Level Format:         Not supported
   Device Location:          "Lower"

cMPro:~ madmin$ sudo diskutil partitiondisk /dev/disk0 GPTFormat "Free Space" "Free Space" 100%;
Started partitioning on disk0
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Finished partitioning on disk0
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
cMPro:~ madmin$ sudo diskutil eraseDisk JHFS+ WD-WD1002F9Y-ENT-1TB /dev/disk0
Started erase on disk0
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Formatting disk0s2 as Mac OS Extended (Journaled) with name WD-WD1002F9Y-ENT-1TB
Initialized /dev/rdisk0s2 as a 931 GB case-insensitive HFS Plus volume with a 81920k journal
Mounting disk
Finished erase on disk0
cMPro:~ madmin$ time cp /Users/madmin/Desktop/testfile.dat /Volumes/WD-WD1002F9Y-ENT-1TB

real   0m56.543s
user   0m0.024s
sys   0m8.603s
cMPro:~ madmin$

WD Enterprise 1TB rotational ZFS test using ashift=12
Code: Select all
cMPro:~ madmin$ diskutil unmountDisk disk0
Unmount of all volumes on disk0 was successful
cMPro:~ madmin$ sudo diskutil partitiondisk /dev/disk0 GPTFormat "Free Space" "Free Space" 100%;
Started partitioning on disk0
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Finished partitioning on disk0
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk0
   1:                        EFI                         209.7 MB   disk0s1
cMPro:~ madmin$ sudo zpool create -f -o ashift=12 -O compression=lz4 -O checksum=skein -O casesensitivity=insensitive -O atime=off -O normalization=formD WD-Enterprise-1TB-rotational-ZFS /dev/disk0
cMPro:~ madmin$ zpool status
  pool: WD-Enterprise-1TB-rotational-ZFS
 state: ONLINE
  scan: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   WD-Enterprise-1TB-rotational-ZFS  ONLINE       0     0     0
     disk0     ONLINE       0     0     0

errors: No known data errors
cMPro:~ madmin$ sudo chown -Rf unknown:staff /Volumes/WD-Enterprise-1TB-rotational-ZFS
cMPro:~ madmin$ time cp /Users/madmin/Desktop/testfile.dat /Volumes/WD-Enterprise-1TB-rotational-ZFS

real   1m47.609s
user   0m0.020s
sys   0m9.707s
cMPro:~ madmin$ clear

WD Enterprise 1TB rotational ZFS test using ashift=9
Code: Select all
cMPro:~ madmin$ sudo diskutil partitiondisk /dev/disk0 GPTFormat "Free Space" "Free Space" 100%;
Started partitioning on disk0
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Finished partitioning on disk0
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk0
   1:                        EFI                         209.7 MB   disk0s1
cMPro:~ madmin$ sudo zpool create -f -o ashift=9 -O compression=lz4 -O checksum=skein -O casesensitivity=insensitive -O atime=off -O normalization=formD WD-Enterprise-1TB-rotational-ZFS /dev/disk0
cMPro:~ madmin$ zpool status
  pool: WD-Enterprise-1TB-rotational-ZFS
 state: ONLINE
  scan: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   WD-Enterprise-1TB-rotational-ZFS  ONLINE       0     0     0
     disk0     ONLINE       0     0     0

errors: No known data errors
cMPro:~ madmin$ sudo chown -Rf unknown:staff /Volumes/WD-Enterprise-1TB-rotational-ZFS
cMPro:~ madmin$ time cp /Users/madmin/Desktop/testfile.dat /Volumes/WD-Enterprise-1TB-rotational-ZFS

real   3m10.194s
user   0m0.020s
sys   0m9.763s
cMPro:~ madmin$

Summary of macOS Results
SSD 1TB HFS → 0m36.707s, ~280MB/sec
SSD 1TB ZFS → 1m7.716s, ~152MB/sec Δ 46% ↓

Rot 1TB HFS → 0m56.543s, ~181MB/sec
Rot 1TB ZFS → 1m47.609s, ~95MB/sec (ashift=12) Δ 48% ↓
Rot 1TB ZFS → 3m10.194s, ~54MB/sec (ashift=9) Δ 70% ↓

Conclusion
ZFS on macOS is slow at the moment: roughly 50% slower than HFS for large sustained transfers… :cry:
I accept that ZFS will never get all the way to HFS speeds, considering all the other juicy stuff it does for me, but 50%…
Last edited by tangles on Sat Mar 17, 2018 5:19 pm, edited 1 time in total.
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am

Re: slow write speeds still :cry:

Postby tangles » Sat Mar 17, 2018 5:18 pm

I carried out the same tests with FreeBSD.
SSD ZFS test
Code: Select all
uname -a
FreeBSD cMPro 11.1-RELEASE-p8 FreeBSD 11.1-RELEASE-p8 #0: Tue Mar 13 17:07:05 UTC 2018     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
$ geom disk list
Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 121332826112 (113G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e3
   descr: APPLE SSD SM0128G
   lunid: 5002538900000000
   ident: S29BNYAH514711
   rotationrate: 0
   fwsectors: 63
   fwheads: 16

Geom name: ada1
Providers:
1. Name: ada1
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   descr: Samsung SSD 840 EVO 1TB
   lunid: 50025388a06db5fe
   ident: S1D9NSAF920337K
   rotationrate: 0
   fwsectors: 63
   fwheads: 16

Geom name: ada2
Providers:
1. Name: ada2
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   descr: WDC WD1002F9YZ-09H1JL1
   lunid: 50014ee00412644e
   ident: WD-WMC5K0D9AZ54
   rotationrate: 7200
   fwsectors: 63
   fwheads: 16

$ zpool status
  pool: freebsd-zfs-Samsung-EVO-1TB
 state: ONLINE
  scan: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   freebsd-zfs-Samsung-EVO-1TB  ONLINE       0     0     0
     ada1      ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   zroot       ONLINE       0     0     0
     ada0p4    ONLINE       0     0     0

errors: No known data errors
$ time cp testfile.dat /freebsd-zfs-Samsung-EVO-1TB/
       35.58 real         0.02 user         6.40 sys
$

WD Enterprise 1TB rotational ZFS
Code: Select all
$ zpool status
  pool: freebsd-zfs-WD-ENT-1TB
 state: ONLINE
  scan: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   freebsd-zfs-WD-ENT-1TB  ONLINE       0     0     0
     ada2      ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   zroot       ONLINE       0     0     0
     ada0p4    ONLINE       0     0     0

errors: No known data errors
$ time cp testfile.dat /freebsd-zfs-WD-ENT-1TB/
       55.34 real         0.01 user         6.33 sys
$

Summary of FreeBSD Results
SSD 1TB ZFS → 0m35.58s, ~280MB/sec == compared to HFS
Rot 1TB ZFS → 0m55.34s, ~185MB/sec Δ 3% ↑ compared to HFS
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am

Re: slow write speeds still :cry:

Postby lundman » Sun Mar 18, 2018 4:13 pm

Yep, not too surprising; I for one have not yet looked at any performance code at all. Thanks for the thorough tests. There are quite a few tunables (sysctl kstats) to play with as well, and they are all at default IllumOS values, which might not be what we need. Then there is dataset sync, and the ZIL, but in theory neither should kick in with a standard "cp" test (one hopes).
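If anyone wants to poke at them, they are all visible through sysctl (a rough sketch; the exact names and defaults depend on the build):
Code: Select all
# dump everything O3X exports (same prefix as the zfs_arc_max setting above)
sysctl kstat.zfs.darwin.tunable
# any of them can be changed on the fly for a test run:
sudo sysctl kstat.zfs.darwin.tunable.<name>=<value>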
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: slow write speeds still :cry:

Postby tangles » Thu Mar 22, 2018 2:07 am

Fedora Testing…

Code: Select all
[root@cmpro ~]# uname -a                                                                                                                 
Linux cmpro.localdomain 4.15.10-300.fc27.x86_64 #1 SMP Thu Mar 15 17:13:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux                       
[root@cmpro ~]# zpool create -O atime=off ssdpool /dev/sda                                                                               
[root@cmpro ~]# zpool create -O atime=off entpool /dev/sdb                                                                               
[root@cmpro ~]# zpool status ; zpool list                                                                                               
  pool: entpool                                                                                                                         
 state: ONLINE                                                                                                                           
  scan: none requested                                                                                                                   
config:                                                                                                                                 
                                                                                                                                         
        NAME        STATE     READ WRITE CKSUM                                                                                           
        entpool     ONLINE       0     0     0                                                                                           
          sdb       ONLINE       0     0     0                                                                                           
                                                                                                                                         
errors: No known data errors                                                                                                             
                                                                                                                                         
  pool: ssdpool                                                                                                                         
 state: ONLINE                                                                                                                           
  scan: none requested                                                                                                                   
config:                                                                                                                                 
                                                                                                                                         
        NAME        STATE     READ WRITE CKSUM                                                                                           
        ssdpool     ONLINE       0     0     0                                                                                           
          sda       ONLINE       0     0     0                                                                                           
                                                                                                                                         
errors: No known data errors                                                                                                             
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT                                                             
entpool   928G   432K   928G         -     0%     0%  1.00x  ONLINE  -                                                                   
ssdpool   928G   864K   928G         -     0%     0%  1.00x  ONLINE  -                                                                   
[root@cmpro ~]# time cp ~/testfile.dat /ssdpool/                                                                                         
real    0m37.619s                                                                                                                       
user    0m0.183s                                                                                                                         
sys     0m10.824s                                                                                                                       
[root@cmpro ~]# time cp ~/testfile.dat /entpool/                                                                                         
real    0m58.155s                                                                                                                       
user    0m0.172s                                                                                                                         
sys     0m11.230s                                                                                                                       
[root@cmpro ~]#
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am

Re: slow write speeds still :cry:

Postby lundman » Tue Mar 27, 2018 12:58 am

I have been trying to figure out where we go wrong:

cmd: rm /Volumes/BOOM/speedtest ; time mkfile 512m /Volumes/BOOM/speedtest

hfs: real 0m5.564s

zfs: real 0m21.247s

So I wrote a little test function to call the write path from the kernel:

https://gist.github.com/lundman/3d7d952 ... d69ef42b26

Which I then trigger with:

sysctl kstat.zfs.darwin.tunable.vnop_debug=1234
(and wait up to 30s for finder thread to wake up)

Code: Select all
2018-03-27 08:44:20.695647+0000 0x966 Default 0x0 0 <zfs`write_test (zfs_vnops_osx.c:208)> write_test start 16315
2018-03-27 08:44:20.695963+0000 0x966 Default 0x0 0 <zfs`write_test (zfs_vnops_osx.c:217)> vn_open(): 0
2018-03-27 08:44:22.591186+0000 0x966 Default 0x0 0 <zfs`write_test (zfs_vnops_osx.c:229)> write_test done 16504
2018-03-27 08:44:22.591188+0000 0x966 Default 0x0 0 <zfs`write_test (zfs_vnops_osx.c:228)> write_test delta 189

-rw-r--r-- 1 root wheel 536870912 Mar 27 17:49 /Volumes/BOOM/speedtest

So the kernel takes 3s to write that file. It almost makes me think that VNOP_WRITE (and our zfs_vnop_write) is being throttled above us somewhere, but no proof yet.
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: slow write speeds still :cry:

Postby lundman » Wed Mar 28, 2018 12:23 am

Yeah, no: it seems to be the number of txgs we go through to do a transfer, best demonstrated by writing the same 512MB with two different block sizes:

Code: Select all
# time dd if=/dev/zero of=/Volumes/BOOM/speedtest bs=131072 count=4096
real   0m2.930s

# time dd if=/dev/zero of=/Volumes/BOOM/speedtest bs=512 count=1048576
real   0m24.469s


The peculiar thing is that the same thing happens under IllumOS.
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: slow write speeds still :cry:

Postby tangles » Wed Mar 28, 2018 4:08 am

Changing the recordsize has no measurable impact either…
Code: Select all
cMPro:~ madmin$ kextstat | grep lundman
   73    1 0xffffff7f80ee6000 0x498      0x498      net.lundman.kernel.dependencies.31 (12.5.0)
  163    1 0xffffff7f85791000 0x11f5000  0x11f5000  net.lundman.spl (1.7.2)
  164    1 0xffffff7f86986000 0x2d1000   0x2d1000   net.lundman.zfs (1.7.2)
cMPro:~ madmin$ zfs get recordsize Sammy
NAME   PROPERTY    VALUE    SOURCE
Sammy  recordsize  128K     local
cMPro:~ madmin$
cMPro:~ madmin$ time dd if=/dev/zero of=/Volumes/Sammy/speetest bs=131072 count=4096
4096+0 records in
4096+0 records out
536870912 bytes transferred in 1.666583 secs (322138753 bytes/sec)

real   0m1.670s
user   0m0.007s
sys   0m0.232s
cMPro:~ madmin$ time dd if=/dev/zero of=/Volumes/Sammy/speetest bs=512 count=1048576
1048576+0 records in
1048576+0 records out
536870912 bytes transferred in 10.447463 secs (51387682 bytes/sec)

real   0m10.451s
user   0m0.869s
sys   0m9.511s
cMPro:~ madmin$ sudo zfs set recordsize=1024k Sammy
cMPro:~ madmin$ time dd if=/dev/zero of=/Volumes/Sammy/speetest bs=131072 count=4096
4096+0 records in
4096+0 records out
536870912 bytes transferred in 1.230936 secs (436148584 bytes/sec)

real   0m1.234s
user   0m0.007s
sys   0m0.239s
cMPro:~ madmin$ time dd if=/dev/zero of=/Volumes/Sammy/speetest bs=512 count=1048576
1048576+0 records in
1048576+0 records out
536870912 bytes transferred in 10.128764 secs (53004583 bytes/sec)

real   0m10.132s
user   0m0.847s
sys   0m9.231s
cMPro:~ madmin$

Looking at the graph on our wiki page under Performance, we were rocking back in April 2014, and then something obviously changed that hurt performance and pushed us back to O3X 1.2 days…
I "think" I started to notice this from perhaps v1.5.2 or 1.6 onwards...
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am

Re: slow write speeds still :cry:

Postby macz » Wed Mar 28, 2018 5:05 am

Possible tie-in with the observations of poor performance with large ARC allocations?

I have noticed (subjectively) some of the same performance-related behavior on OmniOS as well… could it be a memory/ARC-related issue that has spread down from upstream?
macz
 
Posts: 53
Joined: Wed Feb 03, 2016 4:54 am

Re: slow write speeds still :cry:

Postby lundman » Thu Mar 29, 2018 12:01 am

As it has to do with the number of txgs coming through (not the recordsize), and with throttling those down, I would look at the ZFS write throttle rewrite commit.
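The knobs that rewrite introduced are the dirty-data/delay ones; these are what I would compare against the IllumOS/ZOL defaults (upstream names, so a sketch only; check the kstat.zfs.darwin.tunable listing for the exact spelling on O3X):
Code: Select all
# write-throttle tunables from the rewrite (upstream OpenZFS names):
#   zfs_dirty_data_max           - cap on dirty (unwritten) data per pool
#   zfs_dirty_data_sync          - dirty bytes that push out a txg early
#   zfs_delay_min_dirty_percent  - dirty % at which write delay kicks in
#   zfs_delay_scale              - how steeply the delay ramps up
#   zfs_vdev_async_write_active_min_dirty_percent / _max_dirty_percent
#                                - async write queue depth ramp
sysctl kstat.zfs.darwin.tunable | grep -E 'dirty_data|delay|async_write'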

Note that you can run a slow dd and a fast dd at the same time and they both finish as expected, so it is purposely slowing down the high-IOPS one.

The same thing happens on IllumOS, but on OSX it is more noticeable because a lot of the tools seem to use 512-byte blocks: cp, Finder, etc. So let's try to find out what it is they do.
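One quick way to see exactly what sizes an app hands to write(2) is a dtrace one-liner while a copy is running (a sketch; on 10.13 dtrace needs SIP relaxed):
Code: Select all
# aggregate the write(2) request sizes cp issues; Ctrl-C to print the counts
sudo dtrace -n 'syscall::write:entry /execname == "cp"/ { @sizes[arg2] = count(); }'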


ZOL also has it, but on their side the difference is 0.9s vs 3s… so you hardly notice it.
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan
