ARC is for reads, not writes.
ZFS will send more write transactions to the vdevs that are the least full (i.e. the ones with the fewest allocated blocks).
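You can see this for yourself with zpool list -v ("tank" below is just a placeholder pool name):

  # per-vdev capacity: the ALLOC/FREE columns show which vdevs have the most free space,
  # and those are the vdevs new writes will favour
  zpool list -v tank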
ZIL is for writes, but with my pool I see no advantage in putting the ZIL onto a fast/dedicated device for large/huge files.
It's a different story for small files/writes, because a dedicated ZIL device can soak up all those small (random) sync writes and let them be flushed to the pool later as nice long sequential writes.
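For completeness, if your workload really is lots of small sync writes, adding a dedicated log device (and an L2ARC cache device for reads) is a one-liner each; the pool name and device paths below are just placeholders:

  # mirrored SLOG to absorb small sync writes (placeholder device paths)
  zpool add tank log mirror /dev/disk/by-id/nvme-ssd1 /dev/disk/by-id/nvme-ssd2
  # optional L2ARC cache device to extend the (read) ARC
  zpool add tank cache /dev/disk/by-id/nvme-ssd3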
ARC/ZIL aside, I guess you could say a striped pool would only be as fast as its slowest device for both reads and writes, as all disks receive the same quantity of blocks when being written to.
You'd probably also notice a big drop in I/O if you added a new vdev of 2 x 4200 rpm disks to a mirror-configured pool that has nothing but 15000 rpm disks already in it. Ouch!
I'm running 10 x 4TB disks (mirror config) here at home now; the pool was originally created with 6 x 4TB disks.
When the pool was getting to 85% full, I added a pair of 4TB disks to the pool. Using zpool iostat -v 1 1000, you could see that new writes were predominantly being sent to the new vdev containing the 2 newest 4TB disks. ZFS is still protecting my data because each block is still mirrored.
Putting ZIL & ARC aside for now, this effectively means the pool's write speed is limited to the I/O capability of the newest vdev(s), because the existing vdevs in the pool aren't really being pushed to their maximum anymore until the pool's distribution of blocks normalises across all disks again.
I help the normalisation along by copying a ZFS dataset onto another/different pool and then deleting that dataset from my main pool (think backup pool here, so it's no hassle to do this).
This frees up space on the existing/older vdevs within my main pool.
I then copy the same ZFS dataset back onto the main pool so that its blocks get redistributed across all the vdevs rather than mostly landing on the newest one.
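Roughly, the steps look like this (pool and dataset names are placeholders, and zfs send needs a snapshot to work from):

  zfs snapshot main/media@rebalance
  zfs send main/media@rebalance | zfs recv backup/media    # copy onto the backup pool
  zfs destroy -r main/media                                 # free the space on the older vdevs
  zfs send backup/media@rebalance | zfs recv main/media     # copy back; blocks now spread across all vdevs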
The more often I do this, the more even the distribution of blocks across my pool's vdevs becomes, and so the faster my writes get.
In fact, I can run this task at any time; it doesn't depend on new disks having just been added. It's actually more effective when a new disk/vdev hasn't recently been added.
Because most of the files are large, I'm not seeing any fragmentation slowing down write speeds afterwards either.
The net result for my setup is faster transfers over the network because all my vdevs are writing at similar speeds.
Ideally, you would delete all data off your main pool and copy it back on again every time you added a new vdev. I have done this in the past to compare the write speeds of a non-fragmented pool versus a fragmented pool; I/O was not affected that I could see.
This is why I'm looking forward to vdev removal: I won't have to zfs send → delete dataset → zfs recv anymore… I'll just script my two PCIe flash devices to be removed and re-added each night.
As long as my flash-based vdev has more free space than my spindle vdevs, I should see faster writes than I would without the PCIe flash at all.
I can see myself purchasing 2 or 4 x M.2 cards and putting them onto a 2- or 4-slot PCIe adapter…
and scripting something like zpool remove pool_name vdev-n; verify identity of flash media; zpool add pool_name mirror flash1 flash2 (and including some serious device path/size checking to ensure I add the correct devices!).
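Something along these lines is what I have in mind; a rough sketch only, with a placeholder pool name, vdev name and device paths, and a real version would also check device sizes/serials before doing anything:

  #!/bin/sh
  POOL=tank
  FLASH1=/dev/disk/by-id/nvme-flash-serial-1    # placeholder device paths
  FLASH2=/dev/disk/by-id/nvme-flash-serial-2

  # make sure both flash devices are actually present before touching the pool
  for dev in "$FLASH1" "$FLASH2"; do
      [ -e "$dev" ] || { echo "missing $dev - aborting" >&2; exit 1; }
  done

  # evacuate the flash mirror back onto the spindle vdevs
  # (mirror-5 is whatever the flash vdev is called in zpool status)
  zpool remove "$POOL" mirror-5

  # wait for the evacuation to finish (zpool wait exists in recent OpenZFS releases)
  zpool wait -t remove "$POOL"

  # re-add the now-empty flash devices as a fresh mirror vdev
  zpool add "$POOL" mirror "$FLASH1" "$FLASH2"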