Missing zVols on 2.0.0 Big Sur rc5 (4th)

Re: Missing zVols on 2.0.0 Big Sur rc5 (4th)

Postby lundman » Wed Nov 18, 2020 3:59 pm

Hmm, the lack of GUID_partition_scheme is interesting, since we explicitly attach it:

Code: Select all
        // Build the property table handed to IOPartitionScheme::init(),
        // forcing IOClass and Content Mask so the media is published as GPT.
        OSDictionary *newProps = NULL;
        if (properties) newProps = OSDictionary::withDictionary(properties);
        if (!newProps) newProps = OSDictionary::withCapacity(2);
        OSString *str;
        str = OSString::withCString("IOGUIDPartitionScheme");
        newProps->setObject("IOClass", str);
        OSSafeReleaseNULL(str);
        str = OSString::withCString("GUID_partition_scheme");
        newProps->setObject("Content Mask", str);
        OSSafeReleaseNULL(str);

        if (IOPartitionScheme::init(newProps) == false) {
                dprintf("IOPartitionScheme init failed");
                OSSafeReleaseNULL(newProps);
                OSSafeReleaseNULL(_datasets);
                OSSafeReleaseNULL(_holes);
                return (false);
        }
        OSSafeReleaseNULL(newProps);


So I guess the next things to look at are:

* "ioreg -al" - should confirm there is no GUID partition scheme attached
* Is ZFSDatasetScheme::init not being called at all, or is it being called and failing? The debug output should tell us.

To see the debug output, you can run either "sudo dmesg" or "sysctl kstat.zfs.misc.dbgmsg.dbgmsg".

Meanwhile, I'm taking a look at it under Catalina.

Re: Missing zVols on 2.0.0 Big Sur rc5 (4th)

Postby lundman » Wed Nov 18, 2020 4:05 pm

Hmm, no we don't... that code path is only for ZFSDatasetScheme, i.e., datasets.

Ah, but I do get "zvolRegisterDevice couldn't get matching service" in the logs; that could be something.

Re: Missing zVols on 2.0.0 Big Sur rc5 (4th)

Postby lundman » Wed Nov 18, 2020 5:18 pm

OK, we appear to have a lock collision that sometimes stops the service from matching:

CORRECTION: there are three threads in play.

Code: Select all
THREAD 1 : WANTs         rw_enter(&zvol_state_lock, RW_WRITER);  -> holds nothing

                                   *1000  spa_import + 1749 (spa.c:6119,2 in zfs + 613973) [0xffffff7f830c5e55]
                                     *1000  zvol_create_minors_recursive + 207 (zvol.c:1146,11 in zfs + 1281919) [0xffffff7f83168f7f]
                                       *1000  zvol_os_create_minor + 310 (zvol_os.c:631,3 in zfs + 1291894) [0xffffff7f8316b676]
                                         *1000  rw_enter + 66 (spl-rwlock.c:178,3 in zfs + 2777074) [0xffffff7f832d5ff2]

THREAD 2 : WANTs                 mutex_enter(&zv->zv_state_lock); holds zvol_state_lock

                   *1000  net_lundman_zfs_zvol_device::handleOpen(IOService*, unsigned int, void*) + 110 (zvolIO.cpp:511,6 in zfs + 1296494) [0xffffff7f8316c86e]
                     *1000  zvol_os_open_zv + 45 (zvol_os.c:711,6 in zfs + 1288173) [0xffffff7f8316a7ed]
                       *1000  zvol_os_verify_and_lock + 103 (zvol_os.c:132,3 in zfs + 1288471) [0xffffff7f8316a917]
                         *1000  spl_mutex_enter + 100 (spl-mutex.c:327,5 in zfs arc_os.c:454

THREAD 3 : WANTs IOMedia to show "zvolRegisterDevice couldn't get matching service" -> holds zv->zv_state_lock

     *1000  zvol_os_spawn_cb + 16 (zvol_os.c:74,2 in zfs + 1292880) [0xffffff7f8316ba50]
       *1000  zvol_os_register_device_cb + 26 (zvol_os.c:157,2 in zfs + 1292938) [0xffffff7f8316ba8a]
         *1000  zvolRegisterDevice + 338 (zvolIO.cpp:987,16 in zfs + 1300514) [0xffffff7f8316d822]
           *1000  IOService::waitForMatchingService(OSDictionary*, unsigned long long) + 217 (kernel.development + 10123401) [0xffffff8000ba7889]
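
To make the cycle concrete, here is a minimal userland model of the three waits above, with std::thread and a condition variable standing in for the kernel threads and IOService::waitForMatchingService(). All names are illustrative, not the actual kernel code, and run as-is it deadlocks by design, which is the point: each thread waits on something the next one holds.

Code: Select all
// Userland model of the three-way wait cycle (illustrative names only).
// Build: clang++ -std=c++17 -pthread cycle.cpp
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <shared_mutex>
#include <thread>

std::shared_mutex zvol_state_lock;      // rwlock in the real code
std::mutex        zv_state_lock;        // per-zvol state lock
std::mutex        match_mtx;
std::condition_variable matched;
bool iomedia_registered = false;        // never set: the cycle prevents it

// THREAD 3: zvol_os_register_device_cb() - holds zv_state_lock, then blocks
// in the stand-in for IOService::waitForMatchingService().
void register_device_cb() {
    std::lock_guard<std::mutex> zv(zv_state_lock);
    std::unique_lock<std::mutex> m(match_mtx);
    matched.wait(m, [] { return iomedia_registered; });      // waits forever
}

// THREAD 2: handleOpen()/zvol_os_open_zv() - takes zvol_state_lock, then
// wants zv_state_lock, which THREAD 3 holds. Matching would only complete
// after this open does, so the cycle never unwinds.
void handle_open() {
    std::shared_lock<std::shared_mutex> st(zvol_state_lock);
    std::lock_guard<std::mutex> zv(zv_state_lock);           // blocks on T3
}

// THREAD 1: zvol_os_create_minor() - wants zvol_state_lock as writer,
// which THREAD 2 holds as reader.
void create_minor() {
    std::unique_lock<std::shared_mutex> st(zvol_state_lock); // blocks on T2
}

int main() {
    std::thread t3(register_device_cb);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::thread t2(handle_open);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::thread t1(create_minor);
    std::puts("all three threads launched; now mutually stuck");
    t1.join(); t2.join(); t3.join();                         // never returns
    return 0;
}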


Re: Missing zVols on 2.0.0 Big Sur rc5 (4th)

Postby dereed999 » Wed Nov 18, 2020 5:50 pm

I had the following message written and was previewing it when your lock collision post came in. I'll post it anyway, but the lock collision explanation without a doubt makes sense: the threads are racing each other, and I can see that behavior in my examples below. If I create volumes "slowly", all works well. It's only when volumes are created too fast via zfs create, or when a pool is imported, that things go wrong.

Original post was:
On Big Sur, doing an import of the 10G pool backed by the test file gives me the following sysctl kstat.zfs.misc.dbgmsg.dbgmsg output:

Code: Select all
1605746859   spa.c:6157:spa_tryimport(): spa_tryimport: importing testPool
1605746859   spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): LOADING
1605746859   vdev.c:129:vdev_dbgmsg(): file vdev '/Users/david/Desktop/testFile': best uberblock found for spa $import. txg 220
1605746859   spa_misc.c:411:spa_load_note(): spa_load($import, config untrusted): using uberblock with txg=220
1605746859   spa.c:8207:spa_async_request(): spa=$import async request task=2048
1605746859   spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): LOADED
1605746859   spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): UNLOADING
1605746859   spa.c:6005:spa_import(): spa_import: importing testPool
1605746859   spa_misc.c:411:spa_load_note(): spa_load(testPool, config trusted): LOADING
1605746859   vdev.c:129:vdev_dbgmsg(): file vdev '/Users/david/Desktop/testFile': best uberblock found for spa testPool. txg 220
1605746859   spa_misc.c:411:spa_load_note(): spa_load(testPool, config untrusted): using uberblock with txg=220
1605746859   spa_misc.c:411:spa_load_note(): spa_load(testPool, config trusted): read 0 log space maps (0 total blocks - blksz = 131072 bytes) in 0 ms
1605746859   mmp.c:241:mmp_thread_start(): MMP thread started pool 'testPool' gethrtime 28761154305876
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 221, spa testPool, vdev_id 0, ms_id 8, smp_length 1328, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 28761154 ms, loading_time 0 ms, ms_max_size 536870912, max size error 536870912, old_weight 740000000000001, new_weight 740000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 221, spa testPool, vdev_id 0, ms_id 9, smp_length 600, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 28761154 ms, loading_time 0 ms, ms_max_size 536870912, max size error 536870912, old_weight 740000000000001, new_weight 740000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 221, spa testPool, vdev_id 0, ms_id 10, smp_length 1000, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 28761155 ms, loading_time 0 ms, ms_max_size 536870912, max size error 536870912, old_weight 740000000000001, new_weight 740000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 221, spa testPool, vdev_id 0, ms_id 11, smp_length 1000, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 28761155 ms, loading_time 0 ms, ms_max_size 536870912, max size error 536870912, old_weight 740000000000001, new_weight 740000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 221, spa testPool, vdev_id 0, ms_id 0, smp_length 6336, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 28761155 ms, loading_time 0 ms, ms_max_size 535205376, max size error 535205376, old_weight 700000000000001, new_weight 700000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 221, spa testPool, vdev_id 0, ms_id 1, smp_length 6464, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 28761155 ms, loading_time 0 ms, ms_max_size 482712064, max size error 482712064, old_weight 700000000000001, new_weight 700000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 221, spa testPool, vdev_id 0, ms_id 2, smp_length 4656, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 28761155 ms, loading_time 0 ms, ms_max_size 536675840, max size error 536675840, old_weight 700000000000001, new_weight 700000000000001
1605746859   spa.c:8207:spa_async_request(): spa=testPool async request task=1
1605746859   spa.c:8207:spa_async_request(): spa=testPool async request task=2048
1605746859   spa_misc.c:411:spa_load_note(): spa_load(testPool, config trusted): LOADED
1605746859   spa_history.c:309:spa_history_log_sync(): txg 222 open pool version 5000; software version zfs-2.0.0-rc1-250-g4cf230eb65-dirty; uts Davids-MacBook-Pro-2019.local 20.1.0 Darwin Kernel Version 20.1.0: Sat Oct 31 00:07:11 PDT 2020; root:xnu-7195.50.7~2/RELEASE_X86_64
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 222, spa testPool, vdev_id 0, ms_id 5, smp_length 3488, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 512, unloaded time 28761173 ms, loading_time 0 ms, ms_max_size 536602112, max size error 536602112, old_weight 700000000000001, new_weight 700000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 222, spa testPool, vdev_id 0, ms_id 13, smp_length 1984, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 6656 + 4608, unloaded time 28761173 ms, loading_time 0 ms, ms_max_size 533750784, max size error 533750784, old_weight 700000000000001, new_weight 700000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 222, spa testPool, vdev_id 0, ms_id 15, smp_length 1864, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 6656 + 4608, unloaded time 28761173 ms, loading_time 0 ms, ms_max_size 536765952, max size error 536765952, old_weight 700000000000001, new_weight 700000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 222, spa testPool, vdev_id 0, ms_id 14, smp_length 896, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 28761173 ms, loading_time 0 ms, ms_max_size 533125632, max size error 533125632, old_weight 700000000000001, new_weight 700000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 222, spa testPool, vdev_id 0, ms_id 12, smp_length 928, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 28761173 ms, loading_time 0 ms, ms_max_size 534445568, max size error 534445568, old_weight 700000000000001, new_weight 700000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 222, spa testPool, vdev_id 0, ms_id 7, smp_length 2512, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 28761173 ms, loading_time 0 ms, ms_max_size 536822784, max size error 536822784, old_weight 700000000000001, new_weight 700000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 222, spa testPool, vdev_id 0, ms_id 4, smp_length 4360, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 512, unloaded time 28761173 ms, loading_time 0 ms, ms_max_size 529208832, max size error 529208832, old_weight 700000000000001, new_weight 700000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 222, spa testPool, vdev_id 0, ms_id 3, smp_length 5344, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 6144 + 8192, unloaded time 28761173 ms, loading_time 0 ms, ms_max_size 536177152, max size error 536177152, old_weight 700000000000001, new_weight 700000000000001
1605746859   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 222, spa testPool, vdev_id 0, ms_id 6, smp_length 5024, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 23552 + 22528, unloaded time 28761173 ms, loading_time 0 ms, ms_max_size 432152576, max size error 432152576, old_weight 700000000000001, new_weight 700000000000001
1605746859   spa.c:8207:spa_async_request(): spa=testPool async request task=32
1605746859   spa_history.c:309:spa_history_log_sync(): txg 224 import pool version 5000; software version zfs-2.0.0-rc1-250-g4cf230eb65-dirty; uts Davids-MacBook-Pro-2019.local 20.1.0 Darwin Kernel Version 20.1.0: Sat Oct 31 00:07:11 PDT 2020; root:xnu-7195.50.7~2/RELEASE_X86_64
1605746874   spa_history.c:296:spa_history_log_sync(): command: zpool import -d /Users/david/Desktop testPool
1605746874   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 228, spa testPool, vdev_id 0, ms_id 16, smp_length 1064, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 28776106 ms, loading_time 0 ms, ms_max_size 532796416, max size error 532796416, old_weight 700000000000001, new_weight 700000000000001
1605746874   metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 228, spa testPool, vdev_id 0, ms_id 17, smp_length 928, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 28776106 ms, loading_time 0 ms, ms_max_size 536758272, max size error 536758272, old_weight 700000000000001, new_weight 700000000000001


In ioreg, the one disk showing up without a partition table has a blank "Content" key and no child entries for partitions, vs. the disk that does show the partition table, which has "GUID_partition_scheme" and a chunk of content related to disk4s1.

Okay, here's what's really weird. On Big Sur, if I create the 10G file and pool backed by that file from scratch, then create one volume, wait several seconds for it to appear in diskutil's output, and format it, all is well. I can repeat that create/wait/format cycle and everything works. So I end up with:
Code: Select all
/dev/disk2 (internal, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +10.2 GB    disk2
   1:                ZFS Dataset ⁨testPool-ALT2⁩           10.2 GB    disk2s1

/dev/disk3 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +1.1 GB     disk3
   1:                  Apple_HFS ⁨Alt-Vol1⁩                1.1 GB     disk3s1

/dev/disk4 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +1.1 GB     disk4
   1:                  Apple_HFS ⁨Alt-Vol2⁩                1.1 GB     disk4s1

/dev/disk5 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +1.1 GB     disk5
   1:                  Apple_HFS ⁨Alt-Vol3⁩                1.1 GB     disk5s1


Great. But now for what's really odd: if I export the pool and re-import it, I get:
Code: Select all
/dev/disk2 (internal, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +10.2 GB    disk2
   1:                ZFS Dataset ⁨testPool-ALT2⁩           10.2 GB    disk2s1

/dev/disk3 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   +1.1 GB     disk3

/dev/disk4 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +1.1 GB     disk4
   1:                  Apple_HFS ⁨Alt-Vol1⁩                1.1 GB     disk4s1


And even stranger... if I wipe the file and start again, but this time "quickly" create the 3 volumes in a row (zfs create -V 1G testPool-ALT2/vol1, hit enter, up arrow, replace the 1 with a 2, hit enter, and the same for vol3 - basically creating them as fast as the command line lets me), the 3rd one never shows up. It exists in zfs list, but never attaches as a /dev/disk entry.
Code: Select all
bash-3.2# diskutil list
/dev/disk2 (internal, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +10.2 GB    disk2
   1:                ZFS Dataset ⁨testPool-ALT2⁩           10.2 GB    disk2s1

/dev/disk3 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   +1.1 GB     disk3

/dev/disk4 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   +1.1 GB     disk4


I can do a "diskutil partitionDisk disk4 GPT jhfs+ Alt-VolB 0" on the two disks that do show up, and it works great for those two disks (volumes).
Code: Select all
bash-3.2# diskutil list
/dev/disk2 (internal, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +10.2 GB    disk2
   1:                ZFS Dataset ⁨testPool-ALT2⁩           10.2 GB    disk2s1

/dev/disk3 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +1.1 GB     disk3
   1:                  Apple_HFS ⁨Alt-Vol3⁩                1.1 GB     disk3s1

/dev/disk4 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +1.1 GB     disk4
   1:                  Apple_HFS ⁨Alt-Vol3⁩                1.1 GB     disk4s1


But it all falls apart when I export and re-import... I get *all three* disks (volumes) showing, but none has a partition table.
Code: Select all
/dev/disk2 (internal, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +10.2 GB    disk2
   1:                ZFS Dataset ⁨testPool-ALT2⁩           10.2 GB    disk2s1

/dev/disk3 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   +1.1 GB     disk3

/dev/disk4 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   +1.1 GB     disk4

/dev/disk5 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   +1.1 GB     disk5


Two other interesting tidbits:
  • When I'm creating the zfs volumes "as fast as I can", the first "zfs create" returns instantly... the subsequent ones all take a few seconds before the command prompt comes back. (See the sketch after the listing below.)
  • If I pause and wait a minute after quickly creating 3 volumes, I can create a 4th volume... and that 4th one will show in the diskutil list output. (I know it's the 4th because I changed the volume size from 1G to 1.2G.)
Note how one volume is still missing (the last of the "quickly created" ones):
Code: Select all
bash-3.2# diskutil list
/dev/disk2 (internal, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +10.2 GB    disk2
   1:                ZFS Dataset ⁨testPool-ALT2⁩           10.2 GB    disk2s1

/dev/disk3 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   +1.1 GB     disk3

/dev/disk4 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   +1.1 GB     disk4

/dev/disk5 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   +1.3 GB     disk5
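
Regarding the first tidbit: below is a minimal userland model of why back-to-back creates serialize, assuming - as the zvol_os_spawn_cb frames in the earlier trace suggest - that device registration is dispatched asynchronously and can end up waiting (e.g. in waitForMatchingService) while a lock the next create needs is still held. All names and timings are illustrative, not the actual kernel code.

Code: Select all
// Why the first create returns instantly but later ones stall (a model).
// Build: clang++ -std=c++17 -pthread serialize.cpp
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>

std::mutex zvol_state_lock;     // global lock taken by every create

// Stand-in for zvol_os_spawn_cb() -> zvolRegisterDevice(): registration runs
// asynchronously and, in this model, holds the lock across the wait
// (an assumption, standing in for the lock interaction in the real code).
void register_device_async(int id) {
    std::thread([id] {
        std::lock_guard<std::mutex> g(zvol_state_lock);
        std::this_thread::sleep_for(std::chrono::seconds(3)); // "waitForMatchingService"
        std::printf("vol%d registered\n", id);
    }).detach();
}

void create_minor(int id) {
    {
        std::lock_guard<std::mutex> g(zvol_state_lock);       // blocks behind
        std::printf("vol%d minor created\n", id);             // earlier worker
    }
    register_device_async(id);  // returns at once -> first create feels instant
}

int main() {
    create_minor(1);            // returns immediately
    std::this_thread::sleep_for(std::chrono::milliseconds(50)); // "typing speed"
    create_minor(2);            // stalls ~3s behind vol1's registration
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    create_minor(3);            // stalls further still
    std::this_thread::sleep_for(std::chrono::seconds(10));    // let workers finish
    return 0;
}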

Re: Missing zVols on 2.0.0 Big Sur rc5 (4th)

Postby lundman » Wed Nov 18, 2020 6:29 pm

OK, new build coming up..

OK, new Catalina build is up.

OK, new BigSur build is up.

Re: Missing zVols on 2.0.0 Big Sur rc5 (4th)

Postby dereed999 » Thu Nov 19, 2020 5:21 am

On Big Sur, I've installed the latest build, but I'm still seeing the same behavior. I've tried removing the extension and re-installing, and I've verified via the md5 checksum that I've actually got the new build.

What's really odd, and I can't repeat it: the first time I imported the pool, one of the volumes (disks) did come up with a partition scheme - but it was an fdisk scheme. Dumping the first blocks with dd showed they actually contained a GPT partition table. I exported, re-imported, and the fdisk scheme was gone. I've not been able to reproduce that odd behavior.
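
For reference, a quick userland way to make the same distinction the dd check did: a GPT disk still carries a protective MBR with the 0x55AA signature at the end of LBA 0, but it additionally has the "EFI PART" signature at the start of LBA 1, so a disk showing up as fdisk-only can be checked directly. A small sketch (the device path is an example, and 512-byte sectors are assumed, matching the device block size diskutil reports for these zvols):

Code: Select all
// Distinguish an MBR-only (fdisk) label from a GPT by reading LBA 0 and 1.
// Usage: sudo ./gptcheck /dev/disk3   (path is an example)
#include <cstdio>
#include <cstring>

int main(int argc, char **argv) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s /dev/diskN\n", argv[0]);
        return 1;
    }
    std::FILE *f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("open"); return 1; }

    unsigned char lba0[512], lba1[512];
    if (std::fread(lba0, 1, sizeof lba0, f) != sizeof lba0 ||
        std::fread(lba1, 1, sizeof lba1, f) != sizeof lba1) {
        std::fprintf(stderr, "short read\n");
        std::fclose(f);
        return 1;
    }
    std::fclose(f);

    bool mbr = (lba0[510] == 0x55 && lba0[511] == 0xAA);   // MBR boot signature
    bool gpt = (std::memcmp(lba1, "EFI PART", 8) == 0);    // GPT header signature
    std::printf("MBR signature: %s, GPT header: %s -> %s\n",
        mbr ? "yes" : "no", gpt ? "yes" : "no",
        gpt ? "GPT (an fdisk-only scheme would be a misread)" :
        mbr ? "MBR/fdisk only" : "no partition table");
    return 0;
}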

Regardless, on a 10G pool backed by a plain file I still get the following - one volume still missing as a disk, and the partition tables missing too:
Code: Select all
bash-3.2# kextstat |grep zfs
  168    1 0xffffff7fa1689000 0x32f000   0x32f000   net.lundman.zfs (2.0.0) F70BD5F0-5812-376F-95EC-B4C49B9C03F8 <30 8 6 5 3 1>

bash-3.2# zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
testPool-ALT2       4.34G  4.86G      901K  /Volumes/testPool-ALT2
testPool-ALT2/vol1  1.03G  5.86G     33.1M  -
testPool-ALT2/vol2  1.03G  5.86G     33.1M  -
testPool-ALT2/vol3  1.03G  5.90G       12K  -
testPool-ALT2/vol4  1.24G  6.08G     23.1M  -

bash-3.2# diskutil list
/dev/disk2 (internal, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +10.2 GB    disk2
   1:                ZFS Dataset ⁨testPool-ALT2⁩           10.2 GB    disk2s1

/dev/disk3 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   +1.1 GB     disk3

/dev/disk4 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +1.1 GB     disk4
   1:                  Apple_HFS ⁨Alt-Vol3⁩                1.1 GB     disk4s1

/dev/disk5 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   +1.1 GB     disk5

Re: Missing zVols on 2.0.0 Big Sur rc5 (4th)

Postby dereed999 » Thu Nov 19, 2020 5:33 am

Okay, on Big Sur it still seems like there is a race condition. I'll look at Catalina this afternoon when I have access to that system again.

If I repeat the export/import steps I sometimes, but not always, get partial partitioning (this time using a different 10G file with only 3 volumes). In this particular case it was accompanied by an OS X popup that "The disk you attached was not readable by this computer" with the standard eject/ignore/initialize options. BUT... all 3 of the volumes are already formatted.
Code: Select all
bash-3.2# zfs list
NAME            USED  AVAIL     REFER  MOUNTPOINT
testPool       3.01G  6.19G     1.08M  /Volumes/testPool
testPool/vol1  1.00G  7.16G     33.9M  -
testPool/vol2  1.00G  7.16G     33.8M  -
testPool/vol3  1.00G  7.16G     33.8M  -

bash-3.2# diskutil list
/dev/disk2 (internal, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +10.2 GB    disk2
   1:                ZFS Dataset ⁨testPool⁩                10.2 GB    disk2s1

/dev/disk3 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +1.1 GB     disk3
   1:                  Apple_HFS ⁨⁩                        1.1 GB     disk3s1

/dev/disk4 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +1.1 GB     disk4
   1:                  Apple_HFS ⁨Vol 3⁩                   1.1 GB     disk4s1


It happens to be /dev/disk3s1 that it's complaining about, even though diskutil says it's formatted. Running diskutil info on both disk3s1 and disk4s1 gives:
Code: Select all
bash-3.2# diskutil info disk3s1
   Device Identifier:         disk3s1
   Device Node:               /dev/disk3s1
   Whole:                     No
   Part of Whole:             disk3

   Volume Name:               
   Mounted:                   No

   Partition Type:            Apple_HFS
   File System Personality:   Journaled HFS+
   Type (Bundle):             hfs
   Name (User Visible):       Mac OS Extended (Journaled)
   Journal:                   Unknown (not mounted)
   Owners:                    Disabled

   OS Can Be Installed:       No
   Media Type:                Generic
   Protocol:                  Disk Image
   SMART Status:              Not Supported
   Disk / Partition UUID:     CD47C456-FE0D-4BD7-B36C-F281A89B3CC6
   Partition Offset:          20480 Bytes (40 512-Byte-Device-Blocks)

   Disk Size:                 1.1 GB (1073700864 Bytes) (exactly 2097072 512-Byte-Units)
   Device Block Size:         512 Bytes

   Volume Total Space:        0 B (0 Bytes) (exactly 0 512-Byte-Units)
   Volume Free Space:         0 B (0 Bytes) (exactly 0 512-Byte-Units)

   Media OS Use Only:         No
   Media Read-Only:           No
   Volume Read-Only:          Not applicable (not mounted)

   Device Location:           External
   Removable Media:           Fixed

   Solid State:               Yes

bash-3.2# diskutil info disk4s1
   Device Identifier:         disk4s1
   Device Node:               /dev/disk4s1
   Whole:                     No
   Part of Whole:             disk4

   Volume Name:               Vol 3
   Mounted:                   Yes
   Mount Point:               /Volumes/Vol 3

   Partition Type:            Apple_HFS
   File System Personality:   Journaled HFS+
   Type (Bundle):             hfs
   Name (User Visible):       Mac OS Extended (Journaled)
   Journal:                   Journal size 8192 KB at offset 0xa000
   Owners:                    Disabled

   OS Can Be Installed:       No
   Media Type:                Generic
   Protocol:                  Disk Image
   SMART Status:              Not Supported
   Volume UUID:               6D75BD34-9DFB-34AB-AA3F-C3EBDE0E4908
   Disk / Partition UUID:     40F2BA9B-D22F-4FFF-81A2-6FE63F358B2A
   Partition Offset:          20480 Bytes (40 512-Byte-Device-Blocks)

   Disk Size:                 1.1 GB (1073700864 Bytes) (exactly 2097072 512-Byte-Units)
   Device Block Size:         512 Bytes

   Volume Total Space:        1.1 GB (1073700864 Bytes) (exactly 2097072 512-Byte-Units)
   Volume Used Space:         34.4 MB (34406400 Bytes) (exactly 67200 512-Byte-Units) (3.2%)
   Volume Free Space:         1.0 GB (1039294464 Bytes) (exactly 2029872 512-Byte-Units) (96.8%)
   Allocation Block Size:     4096 Bytes

   Media OS Use Only:         No
   Media Read-Only:           No
   Volume Read-Only:          No

   Device Location:           External
   Removable Media:           Fixed

   Solid State:               Yes


Yet, despite the pop-up warning about not being readable, I can mount that partition by hand:
Code: Select all
bash-3.2# diskutil mount /dev/disk3s1
Volume Vol 2 on /dev/disk3s1 mounted

bash-3.2# diskutil info disk3s1
   Device Identifier:         disk3s1
   Device Node:               /dev/disk3s1
   Whole:                     No
   Part of Whole:             disk3

   Volume Name:               Vol 2
   Mounted:                   Yes
   Mount Point:               /Volumes/Vol 2

   Partition Type:            Apple_HFS
   File System Personality:   Journaled HFS+
   Type (Bundle):             hfs
   Name (User Visible):       Mac OS Extended (Journaled)
   Journal:                   Journal size 8192 KB at offset 0xa000
   Owners:                    Disabled

   OS Can Be Installed:       No
   Media Type:                Generic
   Protocol:                  Disk Image
   SMART Status:              Not Supported
   Volume UUID:               46017DCC-8C7C-3BDA-8781-D7B6C496D9F6
   Disk / Partition UUID:     CD47C456-FE0D-4BD7-B36C-F281A89B3CC6
   Partition Offset:          20480 Bytes (40 512-Byte-Device-Blocks)

   Disk Size:                 1.1 GB (1073700864 Bytes) (exactly 2097072 512-Byte-Units)
   Device Block Size:         512 Bytes

   Volume Total Space:        1.1 GB (1073700864 Bytes) (exactly 2097072 512-Byte-Units)
   Volume Used Space:         34.4 MB (34414592 Bytes) (exactly 67216 512-Byte-Units) (3.2%)
   Volume Free Space:         1.0 GB (1039286272 Bytes) (exactly 2029856 512-Byte-Units) (96.8%)
   Allocation Block Size:     4096 Bytes

   Media OS Use Only:         No
   Media Read-Only:           No
   Volume Read-Only:          No

   Device Location:           External
   Removable Media:           Fixed

   Solid State:               Yes


So I'm still not seeing all the volumes exposed as /dev/disk entries, and the partition table sometimes shows up and sometimes doesn't. When it does, it's not always interpreted properly.

Exporting and re-importing is kinda random in which volumes show up as disks. Sometimes it's the 3rd volume, sometimes the 2nd, rarely the 1st.

Re: Missing zVols on 2.0.0 Big Sur rc5 (4th)

Postby dereed999 » Fri Nov 20, 2020 11:52 am

So the problem still exists and gives the same inconsistent behavior under both Catalina and Big Sur (using the most recent builds - 6th for Big Sur, 5th for Catalina). I'm not sure whether the threading(?) issue is on the OS X side or the ZFS side, but it's clearly a case where "create from scratch, and slowly create volumes" works until reboot. Then the attempt to add all the /dev/disk entries at once seems to trigger a threading issue in either OS X or the ZFS extension.

Re: Missing zVols on 2.0.0 Big Sur rc5 (4th)

Postby dereed999 » Tue Dec 01, 2020 10:41 am

The 2.0 build for Catalina (4b762570beba1179ecfe4d6e9c866df; OpenZFSonOsX-2.0.0-Catalina-10.15.pkg) indeed fixes the volume mounting problem.

So far, no issues observed on Catalina with 2.0.0. Scrub time on an 87.2T pool seems as good as or better than under 1.9.4 (I'll know for sure when it's done).
