Hi all. I was hoping to tap into some expertise here on performance tuning my ZFS array. Lately I've noticed that under heavy write load the system goes unresponsive for stretches. Tuning the ARC limit helped a bit with that. I've also noticed that total write bandwidth is bad, surprisingly so. I've done some basic benchmarking and wanted to see whether this is typical or whether there's anything I can do about it.
Here's some background info. The pool is two three-disk raidz1 vdevs; all six disks are spread across two external USB 3.0 enclosures. The machine has 16GB of RAM, with kstat.zfs.darwin.tunable.zfs_arc_max=8589934592 (8 GiB). I used bonnie++ for my benchmarking (bonnie++ -r 16384).
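For anyone who wants to reproduce this, here's roughly how I applied the tunable and ran the benchmark. The sysctl name is the darwin one above; the mount point in -d is just an example, substitute your own pool's path:

```shell
# Cap the ARC at 8 GiB (8 * 1024^3 = 8589934592 bytes); takes
# effect immediately but does not persist across reboots
sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_max=8589934592

# -r is physical RAM in MB, so bonnie++ sizes its test files
# large enough to defeat caching; -d points at the filesystem
# under test (path here is just an example)
bonnie++ -r 16384 -d /Volumes/tank
```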
Here are three different test runs for comparison.
Internal SSD:
Sequential Input, Block - 462872 K/sec, 26 %CPU
Sequential Output, Block - 232967 K/sec, 35 %CPU
Single HFS+-formatted drive in the same external enclosure as half of the ZFS array:
Sequential Input, Block - 130552 K/sec, 7 %CPU
Sequential Output, Block - 127390 K/sec, 16 %CPU
2x raidz1 in external enclosures:
Sequential Input, Block - 224166 K/sec, 25 %CPU
Sequential Output, Block - 5401 K/sec, 0 %CPU
My interpretation of the results: obviously the internal SSD is very fast. The external single (spinning) disk is okay, at 28% of the SSD's read speed and 55% of its write speed. Comparing the ZFS array against the single external drive is where it gets interesting. The array's read speed is about 170% of the single drive's, I'm guessing because reads can be parallelized across more than one disk. The write speed is abysmal though, at 4% of the single external drive. Naively I'd expect some penalty for computing and writing parity (each three-disk raidz1 vdev devotes roughly a third of every stripe to parity), but with two vdevs to stripe writes across I'd still expect something in the same ballpark as a single drive; 4% seems pathological.
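For clarity, the percentages I'm quoting are just ratios of the bonnie++ block numbers above; a quick sketch of the arithmetic:

```python
# bonnie++ sequential block rates in K/sec, copied from the runs above
ssd_read, ssd_write = 462872, 232967   # internal SSD
ext_read, ext_write = 130552, 127390   # single external HFS+ drive
zfs_read, zfs_write = 224166, 5401     # 2x raidz1 pool

def pct(a, b):
    """Rate a as a rounded percentage of rate b."""
    return round(100 * a / b)

print(pct(ext_read, ssd_read))    # external read vs SSD read: 28
print(pct(ext_write, ssd_write))  # external write vs SSD write: 55
print(pct(zfs_read, ext_read))    # array read vs single drive: 172
print(pct(zfs_write, ext_write))  # array write vs single drive: 4
```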
Any experience like this? Any advice to give?
Thanks for any help.
-Andy