lundman wrote: We can't just add different compression algorithms ourselves, it has to be universally done by OpenZFS, or pools will be incompatible. Well, I mean, we could, but...
I'd never suggest that, of course. Zlib-ng is still zlib: even when you don't build it in ABI-compatible mode (i.e. as a *drop-in* replacement for "stock" zlib), it still generates standard zlib streams. Different implementations may not achieve exactly the same compression ratio, but they can still de- and re-compress each other's files. So the point here is not to introduce a feature upstream doesn't have.
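The interoperability claim is easy to sanity-check with CPython's zlib module (which binds to whatever zlib the interpreter was built against, stock or zlib-ng): conforming implementations and levels may produce different compressed bytes and sizes, but any valid zlib (RFC 1950) stream decompresses back to the identical input. A minimal sketch, not tied to any particular zlib build:

```python
import zlib

data = b"OpenZFS test payload " * 2048

# Different levels (and different zlib implementations) may emit
# different compressed bytes, but every conforming implementation
# must decompress any valid zlib stream to the identical original.
for level in (1, 6, 9):
    blob = zlib.compress(data, level)
    assert zlib.decompress(blob) == data
    print(f"level {level}: {len(blob)} compressed bytes")
```

The same holds when the compressor and decompressor are different zlib builds entirely, which is why swapping the implementation can't make pools unreadable.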
I have only scanned through the code once and very quickly, so I'm not even certain whether ZFS ships its own copy of the zlib source, let alone whether that copy is modified.
TBH, re: introducing new features: the thought did cross my mind that it wouldn't hurt if ZFS, like HFS, could run zlib compression "offline" on existing files. A shortcut for:
- set dataset compression to gzip-N (I like using level 8)
- rewrite all files under the given directory/ies
- set dataset compression back to what it was
but the only step that could really benefit from a low-level addition is the "rewrite this file" one. (With HFS you have to do that yourself too, and to the resource fork at that.)
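The workflow above can be sketched from userland today; the `zfs set compression=...` steps are plain property changes, and only the rewrite step needs code. A hedged sketch of that step, where the helper name and the copy-then-replace strategy are my own invention, not any ZFS API:

```python
import os
import shutil
import tempfile

# Surrounding steps (run as shell commands, not shown here):
#   1. zfs set compression=gzip-8 pool/dataset
#   2. rewrite each file under the target directories (below)
#   3. zfs set compression=<previous value> pool/dataset

def rewrite_in_place(path: str) -> None:
    """Rewrite a file so its blocks are re-stored under the dataset's
    current compression setting (hypothetical helper, not a ZFS API)."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    os.close(fd)
    try:
        shutil.copy2(path, tmp)   # copy data and metadata to a temp file
        os.replace(tmp, path)     # atomically swap in the rewritten copy
    except BaseException:
        os.unlink(tmp)
        raise
```

Note the caveats that make a proper low-level "rewrite this file" primitive attractive: the copy breaks hard links, churns the inode number, and briefly doubles the file's space usage.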
lundman wrote: On the assembler thing, it can definitely be done, but not in the 30s I gave it when I tested the commit
I hear you.
It took me just a bit more than that to test my idea of compiling it for Mach-O with clang under Linux. That doesn't work, at least not for the unrelated example I tried, which contains gas/Linux-specific assembler directives. If you tell clang to generate Mach-O object code, it really behaves as if it were running on a Mac. Guess that makes sense.