ZIL problems - what will happen


Postby haer22 » Mon Jul 07, 2014 6:43 am

My log is just a single SSD, not a mirror. What happens if it breaks down?

If writing a new entry to the ZIL fails, will it offline the ZIL and continue without it?

When reading from the ZIL and there is an error, what happens? A panic? Lost data? Can I lose a "structure block" and hence lose a lot of data?

Re: ZIL problems - what will happen

Postby lundman » Wed Jul 09, 2014 12:29 am

The ZIL usually lives in the pool, but it can be pointed somewhere faster, like an SSD. If your separate ZIL device dies, you need to use "zpool import -m" to import without the log. Then you can replace the log device, or run without one. You should not have issues with a ZIL on SSD.
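
A minimal sketch of that recovery, assuming the pool is named "tank" and using hypothetical OS X disk names:

  # import even though the log device is missing
  sudo zpool import -m tank
  # drop the dead log device from the pool configuration
  sudo zpool remove tank disk3
  # optionally attach a new SSD as the log
  sudo zpool add tank log disk4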

Re: ZIL problems - what will happen

Postby haer22 » Wed Jul 09, 2014 3:31 am

My ZIL will not be mirrored. So if shit happens I may lose the "last" data, right? I think I read somewhere that every 10 s the ZIL gets cycled and moved to the platters. Is that correct?

When it happens, will that pool just stop working until I do something?

Re: ZIL problems - what will happen

Postby lundman » Wed Jul 09, 2014 3:44 pm

If the ZIL device dies, it will go back to using the ZIL in the pool, or at least try to. I suppose if you have heavy writes, the ZIL dies, and you also crash, all simultaneously, you will lose whatever transaction you were writing (but the pool will be OK).
If you don't crash, the pool will keep going.

Re: ZIL problems - what will happen

Postby rottegift » Thu Jul 10, 2014 5:06 am

Log vdevs are only read at pool import time; otherwise (if they exist) they are write-only.

What gets written to the log vdev is the "intent" to commit a given block of synchronous data to the correct part of a pool. Synchronous data is generated by the fsync(2) system call or its equivalents, by some processes involved in serving filesystems to network clients, and by some activities involving directory manipulation. Except on file servers that receive many synchronous write requests over the network (for example, all NFS writes are synchronous), synchronous writes are typically sporadic and rare. On spinning disks, individual writes may take a significant amount of time to commit to a pool, and the in-pool intent log (i.e., writing the log into the primary storage vdevs) is usually faster because it avoids some of the work involved in committing writes while still allowing a synchronous write call to return.
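
As a rough illustration (the dataset name "tank/data" is hypothetical), the per-dataset sync property controls how these synchronous requests are honored:

  # show how synchronous requests are currently handled
  zfs get sync tank/data
  # treat every write as synchronous (exercises the ZIL heavily)
  sudo zfs set sync=always tank/data
  # back to the default: honor fsync(2) and friends as requested
  sudo zfs set sync=standard tank/data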

Logging into the primary storage vdevs can still be slow, especially on busy pools, because of underlying latencies, notably the time to move a disk's read-write head from one track to another. A separated log ("slog") that is doing nothing other than receiving intent writes is likely to allow synchronous write calls to return much faster, and will also reduce IOPS pressure on the primary storage devices. Because they only take sequential writes and are read only at import time, all sorts of devices make suitable slogs. A standalone fast spinning disk can in some cases be a better choice for a slog than a slow solid state device. (In practice a slog will short-stroke a dedicated spinning disk).
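
Attaching a slog is a one-liner; a sketch with hypothetical device names (a mirrored pair avoids depending on a single log device):

  # a single SSD as a separate log device
  sudo zpool add tank log disk4
  # or a mirrored slog, so one device failure does not lose the log
  sudo zpool add tank log mirror disk4 disk5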

When a log vdev becomes faulty while the pool is imported, intents are logged into the primary pool storage instead.

If a log vdev is configured and is entirely unavailable (missing, faulty, etc.) at import time then zpool import will fail with a helpful error message.

If a log vdev is available but simply degraded (e.g. only one device in a mirrored log vdev is unavailable) the pool will import in DEGRADED state, and zpool status -vx will give a helpful error message.
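
In the degraded case the fix is the usual one; a sketch with hypothetical device names:

  # identify the failed half of the mirrored log
  zpool status -vx tank
  # swap in a replacement device
  sudo zpool replace tank disk4 disk6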

If you choose to use the "-m" flag to import a pool with an entirely unavailable log vdev, in the worst case you will lose whatever synchronous writes were written to the log vdev but not yet committed to the pool's primary storage. You will still have a fully consistent pool, but userland tools that rely upon synchronous write semantics may have problems; database inconsistency is a common issue and may require manual intervention. The best case is that there was no synchronous data in the log vdev to be lost, and that is fairly common (e.g. it is always the case if the pool was cleanly exported, and many workloads only rarely make synchronous writes). Unfortunately ZFS cannot tell that a wholly unavailable log vdev contains no data. The upper bound on data loss is proportional to the size of the log vdev and the length of time between transaction group commits.

Re: ZIL problems - what will happen

Postby haer22 » Thu Jul 10, 2014 5:44 am

Ah, I forgot that the ZIL is "just" there to quickly tell the client application that the data is stored. "Write-only" is the keyword.

So in short: I will only lose data
IF the SSD dies
AND the system panics before everything that was on the SSD has been written to the platters.

Thanks for the responses.

