by Sharko » Thu Oct 18, 2018 9:25 pm
A couple of thoughts:
There is not, I believe, any way to compress stored ZFS data after the fact. It either gets compressed when first written, or you have lost the opportunity. You'll have to transfer the data off to another medium, and then back again into a ZFS dataset (this time with compression turned on) to get that data compressed.
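One way to do that rewrite without a second machine is a local send/receive into a new dataset on the same pool (pool and dataset names below are hypothetical; adjust to taste). A plain send does not carry the source's properties, so the received dataset inherits compression from its parent:

```shell
# Hypothetical names: pool 'tank', old dataset 'tank/data'.
# Enable compression at the parent so the new dataset inherits it;
# only blocks written after this point get compressed.
zfs set compression=lz4 tank

# Snapshot the old dataset and copy it. The receive rewrites every
# block, so the copy lands on disk compressed.
zfs snapshot tank/data@move
zfs send tank/data@move | zfs receive tank/data_lz4

# Sanity-check the result before destroying the original.
zfs get compressratio tank/data_lz4
```

Once you are happy with the copy, you can destroy the old dataset and rename the new one into its place.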
Secondly, one thing that is often overlooked with L2ARC: do you have enough RAM to run it? Every block stored in L2ARC needs a header in RAM to keep track of it, and that bookkeeping reduces the RAM available to the primary ARC (and RAM caching is much faster than L2ARC). So if your RAM is marginal to start with (say 4 GB, or maybe even 8 GB), a big L2ARC is actually going to hurt performance. ZFS likes lots of RAM, and I would recommend you forgo the L2ARC in favor of upgrading your RAM.
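As a rough back-of-the-envelope sketch (the ~70 bytes per header and the 8 KiB average block size are assumptions; the real figures vary by ZFS version and workload), you can estimate the RAM eaten by L2ARC headers like this:

```shell
# Rough L2ARC header overhead estimate. All figures are assumptions:
# ~70 bytes of RAM per cached block, 8 KiB average block size.
L2ARC_BYTES=$((200 * 1024 * 1024 * 1024))   # a 200 GiB L2ARC device
BLOCK_BYTES=8192                            # smaller blocks = more headers
HEADER_BYTES=70
echo $(( L2ARC_BYTES / BLOCK_BYTES * HEADER_BYTES ))  # RAM consumed, in bytes
# 1835008000  (about 1.7 GiB)
```

On a box with only 8 GB of RAM, losing nearly 2 GB of it to L2ARC bookkeeping is a real bite out of the primary ARC.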
There are two parameters you might consider changing to see if performance improves. The first is sector size. Many large (>2 TB) disks lie and report 512-byte sectors for compatibility reasons. If the disk actually has 4096-byte physical sectors, performance drops: with ashift=9, ZFS issues 512-byte writes, and each one forces the drive to read the whole 4096-byte sector, change the 512-byte chunk, and write the sector back, so a single 4096-byte write can turn into eight of those read-modify-write cycles. Going the other direction (assuming 4096-byte sectors when the disk is really 512) is not quite as catastrophic: the 4096-byte write is held up until all 4096 bytes are ready, and then the low-level disk controller squirts the 8 chunks out to disk, likely in a single revolution. So, check your pool: if it uses 512-byte sectors (ashift=9) on 4K drives, you can likely pick up performance by moving to ashift=12. Note, though, that ashift is fixed when a vdev is created, so changing it means rebuilding the pool.
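To see what your pool is currently using (the pool name 'tank' is a placeholder), zdb will print the ashift recorded for each vdev:

```shell
# Show the ashift recorded for each vdev of the pool 'tank' (placeholder name):
zdb -C tank | grep ashift
# ashift: 9   -> ZFS is assuming 512-byte sectors
# ashift: 12  -> ZFS is assuming 4096-byte sectors
```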
You could also try playing around with the record size; the default is 128K, but some people like to run a 1 MiB record size. Larger record sizes are more efficient, both in sequential read/write performance and in metadata overhead. The only time you get burned by a big record size is when re-writing portions in the middle of a record, so it is important to understand what your data wants to do. For instance, you don't want a large record size on a database file with small records, since re-writing a single small record means reading in a large block of data, changing a small portion of it, and then writing it all back out again.
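Tuning it is a one-line change per dataset (the dataset names below are hypothetical). Keep in mind that recordsize only affects files written after the change; existing files keep their old record size until rewritten:

```shell
# Hypothetical dataset names; recordsize applies only to newly written files.
zfs set recordsize=1M tank/media        # large sequential files: bigger is better
zfs set recordsize=16K tank/postgres    # small random rewrites: match the DB page size
zfs get recordsize tank/media tank/postgres
```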
A persistent L2ARC (one that survives a reboot) is something I've heard rumors about coming in from upstream at some point, but I don't know how close it is.