Can someone explain why I see significant performance degradation when I copy a 40GB swath of a non-ZFS SSD filesystem, over a direct Thunderbolt-to-Thunderbolt connection, onto a newly created ZFS filesystem? After a certain amount of data has been transferred within a single process (e.g. scp), the copy slows down by roughly three orders of magnitude. If I break the copy up by directory (i.e. loop through and copy each top-level entry individually), the speed seems to reset, until I hit a large directory that degrades again. Thoughts?
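To be concrete, the workaround is essentially one copy process per top-level entry instead of one big recursive copy. Here is a minimal, self-contained sketch of that loop (the paths and `cp` are illustrative; in my case each iteration is an `scp -r` of one entry):

```shell
# Stand-in source and destination trees (illustrative).
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/a" "$src/b"
echo hello > "$src/a/file.txt"

# One copy process per top-level entry; each new process seems to
# start at full speed again. Substitute scp/rsync for cp as needed.
for entry in "$src"/*; do
  cp -R "$entry" "$dst"/
done
```

Each iteration finishes and exits before the next starts, which is what appears to reset whatever state is causing the slowdown.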
I'm using a MacPro with three 8TB USB 3.0 drives in one raidz pool. I create a filesystem with lz4 compression, and I disable Spotlight indexing on the pool via mdutil.
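For reference, the setup is roughly the following (pool name, dataset name, and disk identifiers are illustrative, not my actual ones):

```shell
# Three-disk raidz pool; device names are placeholders.
sudo zpool create tank raidz disk2 disk3 disk4

# lz4 compression on the pool, plus a dataset for the copy target.
sudo zfs set compression=lz4 tank
sudo zfs create tank/backup

# Disable Spotlight indexing on the mounted pool.
sudo mdutil -i off /Volumes/tank
```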