testfile is a compressed video file, 10.02GB in size.
The test is a cp of testfile from Apple PCIe flash storage (>700MB/sec) to the target filesystem.
Update: I'll post more results as I test with more operating systems:
Summary of macOS Results
HFS (Baseline)
SSD 1TB HFS → 0m36.707s, ~280MB/sec
Rot 1TB HFS → 0m56.543s, ~181MB/sec
ZFS
SSD 1TB ZFS → 1m7.716s, ~152MB/sec Δ 46% ↓ compared to HFS
Rot 1TB ZFS → 1m47.609s, ~95MB/sec (ashift=12) Δ 48% ↓ compared to HFS
Rot 1TB ZFS → 3m10.194s, ~54MB/sec (ashift=9) Δ 70% ↓ compared to HFS
Summary of FreeBSD Results
SSD 1TB ZFS → 0m36.707s, ~280MB/sec == compared to HFS
Rot 1TB ZFS → 0m55.34s, ~185MB/sec Δ 3% ↑ compared to HFS
Summary of Fedora 27 Server Results
SSD 1TB ZFS → 0m37.619s, ~273MB/sec Δ 2% ↓ compared to HFS
Rot 1TB ZFS → 0m58.155s, ~176MB/sec Δ 3% ↓ compared to HFS
Background:
Downloaded the latest FreeBSD installer to a USB stick and installed onto PCIe 500GB flash storage which can sustain >700MB/sec read/write.
- Code:
FreeBSD cMPro 11.1-RELEASE-p8 FreeBSD 11.1-RELEASE-p8 #0: Tue Mar 13 17:07:05 UTC 2018 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
I updated FreeBSD and created a pool with:
- Code:
zpool create -O atime=off stripe2 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4
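Note that this create doesn't pin the sector-size exponent, so FreeBSD picks the ashift itself; as the macOS ashift=12 vs ashift=9 numbers above show, that choice matters a lot on these drives. On FreeBSD 11 the knob is a sysctl rather than a zpool option (a sketch, with the same pool and device names as above):

```shell
# Force at least 4K-aligned allocation (2^12 = 4096-byte sectors) before
# creating the pool; ashift is fixed per vdev once the pool exists.
sysctl vfs.zfs.min_auto_ashift=12
zpool create -O atime=off stripe2 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4
# (ZFS on Linux and O3X accept `zpool create -o ashift=12 ...` instead.)
```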
I copied the 10GB test file via scp onto /usr/home/madmin on zroot.
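(The copy onto zroot was an ordinary scp from the Mac; the user and host names here are just illustrative:)

```shell
scp testfile madmin@cMPro:/usr/home/madmin/
```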
- Code:
$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
stripe2 1.58G 7.02T 1.58G /stripe
zroot 3.92G 103G 88K /zroot
zroot/ROOT 3.27G 103G 88K none
zroot/ROOT/default 3.27G 103G 3.27G /
zroot/tmp 88K 103G 88K /tmp
zroot/usr 655M 103G 88K /usr
zroot/usr/home 132K 103G 132K /usr/home
zroot/usr/ports 655M 103G 655M /usr/ports
zroot/usr/src 88K 103G 88K /usr/src
zroot/var 680K 103G 88K /var
zroot/var/audit 88K 103G 88K /var/audit
zroot/var/crash 88K 103G 88K /var/crash
zroot/var/log 216K 103G 216K /var/log
zroot/var/mail 112K 103G 112K /var/mail
zroot/var/tmp 88K 103G 88K /var/tmp
Now to test!
I wrote the 10GB file from zroot to stripe2 via the cp command:
- Code:
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
stripe2 9.99G 7.24T 3 3.41K 15.7K 423M
ada1 2.50G 1.81T 0 910 0 109M
ada2 2.50G 1.81T 0 875 3.92K 105M
ada3 2.49G 1.81T 0 845 3.92K 101M
ada4 2.50G 1.81T 1 857 7.84K 107M
---------- ----- ----- ----- ----- ----- -----
zroot 13.3G 96.7G 3.10K 0 396M 0
ada0p4 13.3G 96.7G 3.10K 0 396M 0
---------- ----- ----- ----- ----- ----- -----
each disk is > 100MB/sec… nice!
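The per-vdev tables here (and in the macOS runs below) are zpool's own counters; something like the following prints one such table per second while the copy runs (a sketch, not necessarily the exact invocation used):

```shell
# -v breaks bandwidth down per device; the trailing 1 is the interval in seconds.
zpool iostat -v stripe2 1
```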
I shut down the cMPro, removed the PCIe 500GB FreeBSD flash adapter and replaced it with a 128GB PCIe flash adapter.
I installed a fresh macOS 10.13.3 from a USB stick.
- Code:
Darwin cMPro.local 17.4.0 Darwin Kernel Version 17.4.0: Sun Dec 17 09:19:54 PST 2017; root:xnu-4570.41.2~1/RELEASE_X86_64 x86_64
Installed ZFS 1.7.1 and set the ARC maximum to 1000000000 bytes (~1GB, as this is the best value for me to get maximum I/O from the disks when the ARC is either full or bypassed).
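Assuming O3X's usual tunable name, the ARC cap would be set via sysctl, e.g.:

```shell
# Cap the ARC at ~1GB; putting the same setting in /etc/zfs/zsysctl.conf
# makes it persist across reboots.
sudo sysctl kstat.zfs.darwin.tunable.zfs_arc_max=1000000000
```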
I imported the stripe2 pool that was created using FreeBSD.
- Code:
cMPro:~ madmin$ zpool status
pool: stripe2
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: none requested
config:
NAME STATE READ WRITE CKSUM
stripe2 ONLINE 0 0 0
PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0 ONLINE 0 0 0
PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0 ONLINE 0 0 0
PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0 ONLINE 0 0 0
PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0 ONLINE 0 0 0
errors: No known data errors
cMPro:~ madmin$
I then performed the same test using cp via Terminal.app (HFS PCIe adapter to ZFS pool):
- Code:
capacity operations bandwidth
pool alloc free read write read write
------------------------------------ ----- ----- ----- ----- ----- -----
stripe2 21.3G 7.23T 0 2.51K 0 294M
PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0 6.51G 1.81T 0 653 0 74.6M
PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0 5.36G 1.81T 0 670 0 78.2M
PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0 4.98G 1.81T 0 652 0 74.3M
PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0 4.48G 1.81T 0 596 0 67.4M
------------------------------------ ----- ----- ----- ----- ----- -----
mmm… not so nice…
That's roughly 25MB/sec slower on each disk…
Upgrading the pool didn't help either; it actually reduced the speed…
- Code:
cMPro:~ madmin$ sudo zpool upgrade
Password:
This system supports ZFS pool feature flags.
All pools are formatted using feature flags.
Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(5) for details.
POOL FEATURE
---------------
stripe2
edonr
encryption
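(Running zpool upgrade with no arguments, as above, only lists the features that could be enabled; actually enabling them is per-pool and one-way:)

```shell
# Enables all supported features on stripe2. Older software, e.g. the
# FreeBSD install that created this pool, may then refuse to import it.
sudo zpool upgrade stripe2
```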
- Code:
capacity operations bandwidth
pool alloc free read write read write
------------------------------------ ----- ----- ----- ----- ----- -----
stripe2 28.7G 7.22T 0 1.16K 0 127M
PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0 8.37G 1.80T 0 300 0 30.6M
PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0 7.19G 1.81T 0 313 0 32.9M
PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0 6.81G 1.81T 0 295 0 33.4M
PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0 6.30G 1.81T 0 278 0 29.8M
------------------------------------ ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
------------------------------------ ----- ----- ----- ----- ----- -----
stripe2 28.8G 7.22T 0 1.20K 0 128M
PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0 8.39G 1.80T 0 326 0 33.3M
PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0 7.22G 1.81T 0 268 0 32.0M
PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0 6.84G 1.81T 0 306 0 29.7M
PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0 6.33G 1.81T 0 330 0 33.0M
------------------------------------ ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
------------------------------------ ----- ----- ----- ----- ----- -----
stripe2 29.0G 7.22T 0 1.36K 0 146M
PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0 8.44G 1.80T 0 320 0 35.1M
PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0 7.26G 1.81T 0 351 0 36.5M
PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0 6.88G 1.81T 0 365 0 37.5M
PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0 6.37G 1.81T 0 351 0 36.4M
------------------------------------ ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
------------------------------------ ----- ----- ----- ----- ----- -----
stripe2 29.1G 7.22T 0 1.35K 0 145M
PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0 8.47G 1.80T 0 300 0 35.7M
PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0 7.30G 1.81T 0 365 0 36.0M
PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0 6.92G 1.81T 0 359 0 36.3M
PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0 6.41G 1.81T 0 361 0 37.5M
------------------------------------ ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
------------------------------------ ----- ----- ----- ----- ----- -----
stripe2 29.2G 7.22T 0 1.35K 0 145M
PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0 8.50G 1.80T 0 324 0 36.2M
PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0 7.32G 1.81T 0 336 0 35.3M
PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0 6.94G 1.81T 0 358 0 37.0M
PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0 6.44G 1.81T 0 359 0 36.5M
------------------------------------ ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
------------------------------------ ----- ----- ----- ----- ----- -----
stripe2 29.3G 7.22T 0 1.35K 0 145M
PCI0@0-SATA@1F,2-PRT2@2-PMP@0-@0:0 8.53G 1.80T 0 355 0 36.9M
PCI0@0-SATA@1F,2-PRT3@3-PMP@0-@0:0 7.36G 1.81T 0 299 0 36.4M
PCI0@0-SATA@1F,2-PRT4@4-PMP@0-@0:0 6.98G 1.81T 0 367 0 36.5M
PCI0@0-SATA@1F,2-PRT5@5-PMP@0-@0:0 6.47G 1.81T 0 356 0 35.6M
------------------------------------ ----- ----- ----- ----- ----- -----
If I pull 1 of the 4 disks out of the pool and format it as HFS, it will outperform the remaining 3 disks striped with ZFS for large sustained transfers...
I really want to stay with ZFS on macOS but this is really hurting now…
I'll test again tomorrow using Linux. Really hope Linux/macOS values are similar.
bed time...