Problem with user data on ZFS pool

Postby TooDizzy » Thu Oct 18, 2018 1:07 am

Hi Guys,

I am running my laptop with an SSD for the OS and an HDD mounted through ZFS.
The purpose was, and still is, to use part of the SSD as an L2ARC cache for the HDD, to get faster access to the data stored on it.
And to play with ZFS :)
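
For reference, the pool was set up roughly like this (pool and device names here are just illustrative, not the exact ones I used):

    # single-disk pool on the HDD
    sudo zpool create -o ashift=12 tank disk1

    # attach a partition of the SSD as an L2ARC cache device
    sudo zpool add tank cache disk0s4

    # confirm the layout
    zpool status tank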

I am having a very slow login process; the Finder hangs for a couple of minutes before it shows the desktop and whatever files are stored there.
It seems that searching the file tree is really slow. Once a file has been located it seems to work fine (maybe it is then loaded from the cache).
I tried scrubbing the pool, but to little avail. Still the same slow behaviour.
I just recently turned on compression. Is there any way to force compression of already stored data? Or does that not make sense?

The cache seems to be populated but doesn't seem to survive a reboot.

I am running 1.7.2 on Sierra and have upgraded the pool to the latest version.
The pool consists of a single HDD with a small SSD partition as a cache device.

What can I do to increase the speed of my disk access (or whatever is causing this)?

Regards Tue

Re: Problem with user data on ZFS pool

Postby Sharko » Thu Oct 18, 2018 9:25 pm

Couple thoughts:

There is not, I believe, any way to compress stored ZFS data after the fact. It either gets compressed when first written, or you have lost the opportunity. You'll have to transfer the data off to another medium, and then back again into a ZFS dataset (this time with compression turned on) to get that data compressed.
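
In other words: turn compression on, then rewrite the data into a dataset that has it enabled. Something along these lines should work (pool/dataset names and paths are just placeholders):

    # compress everything written from now on (lz4 is the usual choice)
    sudo zfs set compression=lz4 tank/data

    # data already on disk stays uncompressed until rewritten, so copy it
    # into a freshly created, compressed dataset and swap them afterwards
    sudo zfs create -o compression=lz4 tank/data_new
    sudo cp -Rp /Volumes/tank/data/ /Volumes/tank/data_new/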

Secondly, one thing that is often overlooked with L2ARC is whether you have enough RAM to run one in the first place. Every block stored in L2ARC needs a header kept in RAM to track it, and that overhead reduces the RAM available for the ARC itself (and caching in RAM is much faster). So if your RAM is marginal to start with (say 4 GB, or maybe even 8 GB), a big L2ARC can actually hurt performance. ZFS likes lots of RAM, and I would recommend you forgo the L2ARC in favor of upgrading your RAM.
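
If you do decide to drop the L2ARC, the cache vdev can be removed from a live pool without touching your data, roughly like this (pool and device names are made up):

    # find which device is listed under "cache"
    zpool status tank

    # cache vdevs can be removed at any time without data loss
    sudo zpool remove tank disk0s4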

There are two parameters that you might consider changing to see if performance improves. The first is sector size. Many large (>2 TB) disks lie and report 512 byte sectors for compatibility reasons. If the physical sectors are actually 4096 bytes, performance drops, because ZFS issues 512-byte-aligned I/O: each 512 byte change forces the drive to read the whole 4096 byte physical sector, modify the 512 byte chunk, and write the sector back, so a single 4096 byte write can turn into 8 of those read-modify-write cycles. Going the other direction (assuming 4096 byte sectors when the disk really uses 512) is not nearly as costly: a 4096 byte write is held until all 4096 bytes are ready, and the disk controller then writes the 8 chunks out, likely in a single revolution. So check your pool: if it was created with 512 byte sectors (ashift=9), you can likely pick up performance by moving to ashift=12. Note that ashift is fixed when a vdev is created, so changing it means recreating the pool and restoring the data.
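
To check what you actually have, something like the following should do it (pool and device names are placeholders):

    # what the drive reports to macOS - look at "Device Block Size"
    diskutil info disk1 | grep "Block Size"

    # what the pool was created with: ashift=9 means 512 B, ashift=12 means 4 KiB
    sudo zdb -C tank | grep ashift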

You could also try playing around with the record size; the default is 128K, but some people like to run a 1 MB record size. Larger records are more efficient, both for sequential read/write performance and for metadata overhead. The only time a big record size hurts is when you rewrite small portions in the middle of a record, so it is important to understand what your data access pattern looks like. For instance, you don't want a large record size on a database file with small records, since rewriting a single small record means reading in a whole large block, changing a small portion of it, and writing it all out again.
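
Record size can be changed per dataset on the fly, but like compression it only affects newly written data; roughly (dataset name is a placeholder):

    # check the current value (128K is the default)
    zfs get recordsize tank/data

    # larger records for big, mostly-sequential files; 1M needs the
    # large_blocks pool feature, and existing files keep their old record size
    sudo zfs set recordsize=1M tank/data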

Persistent cache is something I've heard rumors about coming in from upstream at some point, but I don't know how close it is.

Re: Problem with user data on ZFS pool

Postby TooDizzy » Fri Oct 19, 2018 12:16 am

Hi Sharko,

Thanks a lot for your detailed feedback!

I am running with 16 GB of installed memory.
I have set a maximum of 8 GB for the ARC.

The pool is already set up with ashift=12, but I will make sure I have the correct sector size before doing anything else in this area.

As suggested, I have now disabled the L2ARC, and next I will try increasing the record size. It seems worth a try.

Thanks a lot!

/Tue

Re: Problem with user data on ZFS pool

Postby TooDizzy » Sat Oct 20, 2018 9:28 pm

A little update.

Managed to get a lot better performance out of the system now.
I found that the SATA cables were in very bad shape; replacing them did a lot for the performance.

I have also recreated most of my user files (documents, library, etc.) and cleaned up the Downloads folder and the desktop, which also helped a lot.
So I now have a system that is workable. Which is great :-)

The next problem is that I am running out of battery almost instantly; it drains about as fast as it can.
This is a behaviour change that started after I upgraded to 1.7.2.
Maybe it isn't spinning the HDD down when idle? I also noticed that mds is very active at the moment - I will let it do its thing and report back on the battery problem.
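
If mds turns out to be the culprit, I will probably just turn Spotlight indexing off for the ZFS volume; as far as I understand that would be something like this (the mountpoint is just an example):

    # see whether Spotlight is indexing the ZFS mount
    mdutil -s /Volumes/tank

    # disable indexing for that volume only
    sudo mdutil -i off /Volumes/tank

    # check the disk sleep setting while I'm at it
    pmset -g | grep disksleep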

/Tue

