splitting and importing a pool

splitting and importing a pool

Post by kgreene » Sat Sep 09, 2017 9:49 am

I did a zpool split on a 3-disk mirror. (See my previous post for details: best way to back up a mirror with a 3rd disk.)

Attaching the 3rd disk went fine. Splitting gave me an error when I specified a disk (cannot label 'disk2': cannot label '/dev/disk2': unable to open device: 16), but it worked when I let it choose one. (I think?) For background, this is an old MacZFS pool that has not been upgraded (so pool version 8?).

Code:
$ zpool status
  pool: gallifrey
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
   still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
   pool will no longer be accessible on software that does not support
   feature flags.
  scan: resilvered 3.52T in 15h46m with 0 errors on Sat Sep  9 09:59:33 2017
config:

   NAME                                            STATE     READ WRITE CKSUM
   gallifrey                                       ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       media-40391AD7-AD27-4E62-87D3-AA8B3A757180  ONLINE       0     0     0
       media-E681D2AD-475C-4A2C-BAE6-6CBAFD2CDF30  ONLINE       0     0     0
       disk2                                       ONLINE       0     0     0

$ sudo zpool split gallifrey gallifrey2 disk2
Password:
cannot label 'disk2': cannot label '/dev/disk2': unable to open device: 16


$ sudo zpool split gallifrey gallifrey2
192:~ kevin$ zpool status
  pool: gallifrey
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
   still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
   pool will no longer be accessible on software that does not support
   feature flags.
  scan: resilvered 3.52T in 15h46m with 0 errors on Sat Sep  9 09:59:33 2017
config:

   NAME                                            STATE     READ WRITE CKSUM
   gallifrey                                       ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       media-40391AD7-AD27-4E62-87D3-AA8B3A757180  ONLINE       0     0     0
       media-E681D2AD-475C-4A2C-BAE6-6CBAFD2CDF30  ONLINE       0     0     0

errors: No known data errors

$ zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
gallifrey  3.52T  6.63G  3.52T  /Volumes/gallifrey

$ sudo zpool import gallifrey2



At this point it took several minutes before the prompt came back. At first I thought it had hung. When I checked the console, I saw several minutes' worth of these:

Code:
default   10:28:57.007431 -0700   kernel   SPL: arc_reclaim_thread: post-reap <private> post-evict 167492736 adjusted 4096 pre-adjust <private> to-free 52232064 pressure 634880
default   10:28:57.019433 -0700   kernel   SPL: arc_reclaim_thread: post-reap <private> post-evict <private> adjusted 12800 pre-adjust <private> to-free 52182912 pressure 0
default   10:28:57.021968 -0700   kernel   SPL: arc_reclaim_thread: post-reap <private> post-evict 339968 adjusted 19456 pre-adjust <private> to-free 51842944 pressure 0
default   10:28:57.040709 -0700   kernel   SPL: arc_reclaim_thread: post-reap <private> post-evict 0 adjusted 141824 pre-adjust <private> to-free 50315136 pressure 0
default   10:28:57.051836 -0700   kernel   SPL: arc_reclaim_thread: post-reap <private> post-evict 0 adjusted 0 pre-adjust <private> to-free 50315136 pressure 0
.
.
.
default   10:29:12.484390 -0700   kernel   SPL: arc_reclaim_thread: post-reap <private> post-evict 0 adjusted 0 pre-adjust <private> to-free 49661824 pressure 0
default   10:29:34.205982 -0700   kernel   ZFS: unlinked drain progress (110000)
default   10:29:37.578175 -0700   kernel   SPL: arc_reclaim_thread: post-reap <private> post-evict 100663296 adjusted 0 pre-adjust <private> to-free 122908672 pressure 16777216
.
.
.
default   10:31:24.603493 -0700   kernel   SPL: arc_reclaim_thread: post-reap <private> post-evict 0 adjusted 0 pre-adjust <private> to-free 35230336 pressure 0
default   10:31:42.721933 -0700   kernel   ZFS: unlinked drain completed (133228).
default   10:35:29.543769 -0700   kernel   SPL: arc_reclaim_thread: post-reap <private> post-evict 0 adjusted 0 pre-adjust <private> to-free 122654720 pressure 16777216
default   10:36:04.644030 -0700   kernel   ZFS: arc_reclaim_thread, (old_)to_free has returned to zero from 25186304



So I guess it was probably just doing the unlinked drain thing? (I know a little about this but haven't researched it again... why would this happen on the new pool's import and not the old one's? Or does it happen every time? I thought it was a one-time thing.)

Anyway, here's the status now:

Code:
$ zpool status -v
  pool: gallifrey
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
   still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
   pool will no longer be accessible on software that does not support
   feature flags.
  scan: resilvered 3.52T in 15h46m with 0 errors on Sat Sep  9 09:59:33 2017
config:

   NAME                                            STATE     READ WRITE CKSUM
   gallifrey                                       ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       media-40391AD7-AD27-4E62-87D3-AA8B3A757180  ONLINE       0     0     0
       media-E681D2AD-475C-4A2C-BAE6-6CBAFD2CDF30  ONLINE       0     0     0

errors: No known data errors

  pool: gallifrey2
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
   still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
   pool will no longer be accessible on software that does not support
   feature flags.
  scan: resilvered 3.52T in 15h46m with 0 errors on Sat Sep  9 09:59:33 2017
config:

   NAME                                          STATE     READ WRITE CKSUM
   gallifrey2                                    ONLINE       0     0     0
     media-1D9D2C62-3B90-2F4C-96EE-4F3F88DB6E29  ONLINE       0     0     0

errors: No known data errors
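
For reference, the rough sequence was: attach the 3rd disk, wait for the resilver to finish, split it off into a new pool, then import that pool. A sketch only; the attach line is from memory and the existing-device name is just an example, so check your own zpool status for the right identifiers:

Code:
# attach disk2 as a 3rd side of the existing mirror (existing-device name is an example)
sudo zpool attach gallifrey media-40391AD7-AD27-4E62-87D3-AA8B3A757180 disk2
zpool status gallifrey                   # wait until the resilver completes
sudo zpool split gallifrey gallifrey2    # let it pick which device to split off
sudo zpool import gallifrey2             # bring the split-off copy online as its own pool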

Re: splitting and importing a pool

Post by kgreene » Sat Sep 09, 2017 12:18 pm

One update: I was having trouble exporting the pool before doing the update on my original pool (just in case). It kept saying diskutil couldn't unmount it. It turned out to be mdworker, so I just killed that process and it exported fine. (Thanks, mdworker!) It took me a bit to figure out what was causing it until I looked in the console. I really wish the actual error messages would propagate up:

default 12:22:03.835391 -0700 diskutil Unmount of unknown blocked by dissenter PID=2018 (/System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/Metadata.framework/Versions/A/Support/mdworker) status=0x0000c010 (Resource busy)
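
A possible way to avoid this next time (untested here, just an idea): turn off Spotlight indexing on the pool's mountpoint before exporting, so mdworker doesn't hold it open. The mountpoint below is only an example; use whatever zfs list shows.

Code:
sudo mdutil -i off /Volumes/gallifrey2   # disable Spotlight indexing on that volume
sudo zpool export gallifrey2
# it can be re-enabled later with: sudo mdutil -i on /Volumes/gallifrey2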

Re: splitting and importing a pool

Post by lundman » Sun Sep 10, 2017 4:28 pm

cannot label 'disk2': cannot label '/dev/disk2': unable to open device: 16


This would be because the disk is in use by the pool (error 16 is EBUSY); perhaps if it were offlined first, it would be OK. It is curious that the command ran fine without specifying the device; perhaps it internally releases the device first.
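
Something like this is what I mean by offlining first (untested; just a guess as to why the explicit-device form hit the busy error):

Code:
sudo zpool offline gallifrey disk2            # take the device out of active use first
sudo zpool split gallifrey gallifrey2 disk2   # then retry the explicit-device split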

mdworker/spotlight getting in the way is a constant hassle.

Unmount of unknown blocked by dissenter


Yes, this is rather frustrating; the actual cause is very well hidden. It might get better once eject from the GUI works.

Re: splitting and importing a pool

Post by kgreene » Mon Sep 11, 2017 8:08 am

Any idea why the split-off pool processed the unlinked drain list? The original pool has existed for a long time, so wouldn't it have processed it already? Or is there some side effect of split, or maybe the list doesn't get processed during auto-import? I'm not sure I've manually imported the original pool recently.

Re: splitting and importing a pool

Post by lundman » Mon Sep 11, 2017 4:29 pm

It should always have been processing the unlinked-drain list, on every import. Perhaps you only noticed it this time? Each import should have a small amount to process (depending on how many deleted items had XATTRs on them).
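
If you want to check what happened on a given import, something like this should pull those kernel messages out of the unified log (assuming they are logged the way the snippets earlier in the thread suggest):

Code:
# show unlinked-drain messages from the last 30 minutes
log show --last 30m --predicate 'eventMessage CONTAINS "unlinked drain"'
# or watch live while an import is running:
log stream --predicate 'eventMessage CONTAINS "unlinked drain"'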

Re: splitting and importing a pool

Post by kgreene » Mon Sep 11, 2017 4:36 pm

Well, it auto-imports on boot, so I don't usually look at the console to see what it's doing. I only noticed this time because it took a few minutes to import and I thought something bad was happening. So the list would have been generated since my last reboot, or my last export/import? I don't think I deleted anything manually before adding the 3rd disk and splitting. It's possible I'm just forgetting, or that it was something else using the disk (temp files or something).

Is the number in parentheses the number of items processed?
default 10:31:42.721933 -0700 kernel ZFS: unlinked drain completed (133228).

Re: splitting and importing a pool

Post by lundman » Tue Sep 12, 2017 4:40 pm

The number is the number of items in the list, yes.

But don't forget this includes all the junk files that mds/Spotlight keeps in the .fsevents and .Spotlight directories, and they are pretty much constantly doing "their thing". If the number is always the same, it could indicate a stuck item on the list.

