zfs 1.6.0rc1 and Photos


Postby roemer » Fri Jan 27, 2017 4:47 pm

Since Friday, I am running O3X 1.6.0rc1 on macOS Sierra 10.12.3.

Under normal load, 1.6.0rc1 seems stable so far.
But when I switched to Sierra, I also finally converted my iPhoto archive, which resides on a ZFS dataset, to Apple Photos, and there seems to be a problem.

The conversion worked and I can start and use Photos, but when things get a bit more intense on my ZFS photo dataset, sooner or later I end up with kernel_task deadlocked at 100% CPU in an apparent busy loop.
I cannot yet give you an exact sequence to reproduce the error. But say you are working in Photos (which means it reads a lot from disk) while it is still scanning the images for faces in the background (photolibraryd and photosanalysisd also churning away on the same dataset), and you still have the photo library selected in a Finder window (which means Finder also scans that directory to determine the size of the library); then there is a good chance of getting to this stage. There is also a good chance that Photos or Finder become unresponsive, so that I need to kill them. The 100% kernel CPU remains, however, even after killing Photos and Finder. At that point I can only reboot with a hard reset via the power button; a soft reboot will hang.

At this very moment, my kernel is at 100% CPU after I quit Photos successfully and photolibraryd+photosanalysisd kicked in, with no problems in Finder at all. The system is still responsive and I can access the ZFS datasets, but kernel_task keeps churning at around 100% CPU or more. Memory usage seems maxed out: according to Activity Monitor, 15.36 GB of 16 GB are used, with kernel_task at 12.39 GB. Yes, my photo library is definitely larger than 16 GB :)
I get lots of entries in system.log of the following form:
default 11:43:15.993628 +1100 kernel SPL: arc_reclaim_thread: post-reap -207515648 post-evict 0 adjusted 0 pre-adjust -207515648 to-free 214855680 pressure 0

Any ideas?
roemer
 
Posts: 73
Joined: Sat Mar 15, 2014 2:32 pm

Re: zfs 1.6.0rc1 and Photos

Postby tangles » Sat Jan 28, 2017 9:23 pm

I've been using Photos.app with its library sitting on a ZFS volume for years and years now.

The only difference is that I typically use SMB when importing/using it.

So I decided to create a new library with ~1000 pics (90% being RAW) on a local ZFS filesystem:

Code:
MacOS Sierra
10.12.3 (16D32)
MacPro 2008 (Early 2008)
2 x 3 GHz Quad-Core Intel Xeon
32 GB 800 MHz DDR2 FB-DIMM
NVIDIA GeForce GTX TITAN 6143 MB

mirror pool (3 x APPLE SSD SM0128G Media in PCIe risers)

| => zpool status
  pool: Triple
 state: ONLINE
  scan: none requested
config:

   NAME                                            STATE     READ WRITE CKSUM
   Triple                                          ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       media-B99F121A-54CB-5349-B4AE-425484B45EA8  ONLINE       0     0     0
       media-2B860E9E-AA2F-2A4D-8AAF-B197F36BEB7E  ONLINE       0     0     0
       media-C4E7CB16-0E21-9442-B46C-4BA84466EA7B  ONLINE       0     0     0

errors: No known data errors

Memory:
MemRegions: 61981 total, 20G resident, 216M private, 1812M shared. PhysMem: 29G used (17G wired), 3448M unused. VM: 913G vsize, 623M framework vsize, 0(0) swapins, 0(0) swapouts.

Processors:
Load Avg: 5.29, 4.79, 4.17  CPU usage: 15.94% user, 3.71% sys, 80.33% idle   SharedLibs: 344M resident, 58M data, 73M linkedit


The fans are kicking in both during the import and after it, but there's not much I/O going on now:
Code:
----------------------------------------------  -----  -----  -----  -----  -----  -----
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
Triple                                          32.7G  79.3G      0     32      0  2.14M
  mirror                                        32.7G  79.3G      0     32      0  2.14M
    media-B99F121A-54CB-5349-B4AE-425484B45EA8      -      -      0     10      0   729K
    media-2B860E9E-AA2F-2A4D-8AAF-B197F36BEB7E      -      -      0     10      0   730K
    media-C4E7CB16-0E21-9442-B46C-4BA84466EA7B      -      -      0     10      0   730K
----------------------------------------------  -----  -----  -----  -----  -----  -----
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
Triple                                          32.7G  79.3G      0    382      0  24.8M
  mirror                                        32.7G  79.3G      0    382      0  24.8M
    media-B99F121A-54CB-5349-B4AE-425484B45EA8      -      -      0    130      0  8.28M
    media-2B860E9E-AA2F-2A4D-8AAF-B197F36BEB7E      -      -      0    126      0  8.28M
    media-C4E7CB16-0E21-9442-B46C-4BA84466EA7B      -      -      0    124      0  8.28M
----------------------------------------------  -----  -----  -----  -----  -----  -----
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
Triple                                          32.7G  79.3G     83    110  10.4M  9.43M
  mirror                                        32.7G  79.3G     83    110  10.4M  9.43M
    media-B99F121A-54CB-5349-B4AE-425484B45EA8      -      -     23     36  2.98M  3.14M
    media-2B860E9E-AA2F-2A4D-8AAF-B197F36BEB7E      -      -     31     36  3.97M  3.14M
    media-C4E7CB16-0E21-9442-B46C-4BA84466EA7B      -      -     27     36  3.48M  3.14M
----------------------------------------------  -----  -----  -----  -----  -----  -----


Free RAM is currently just shy of 7 GB; I'll now open up Photoshop.

Free RAM increased to 7278 MB after opening a 24 MB RAW image and saving it out as a 59 MB PSD onto the ZFS filesystem.

I've been doing this now for about 10 minutes and am not seeing anything detrimental as a result. I'll be greedy and scrub!

Okay, scrubbing the pool is a bit different! The screen is not updating while I type this message!!!
Nothing is responding...
Oh... hehe, the keyboard just spat out its buffer and spewed the above few lines...


I managed to stop the scrub, but I was having to wait up to 3 seconds before either screen would update.
CPUs went mental @ ~99%.
ZFS I/O was insane though!
Code:
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
Triple                                          32.8G  79.2G  7.57K     44   943M  1.28M
  mirror                                        32.8G  79.2G  7.57K     44   943M  1.28M
    media-B99F121A-54CB-5349-B4AE-425484B45EA8      -      -  2.53K     14   314M   437K
    media-2B860E9E-AA2F-2A4D-8AAF-B197F36BEB7E      -      -  2.52K     15   314M   437K
    media-C4E7CB16-0E21-9442-B46C-4BA84466EA7B      -      -  2.52K     14   314M   437K
----------------------------------------------  -----  -----  -----  -----  -----  -----
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
Triple                                          32.8G  79.2G  7.57K     17   946M   210K
  mirror                                        32.8G  79.2G  7.57K     17   946M   210K
    media-B99F121A-54CB-5349-B4AE-425484B45EA8      -      -  2.53K      6   315M  69.9K
    media-2B860E9E-AA2F-2A4D-8AAF-B197F36BEB7E      -      -  2.52K      5   315M  69.9K
    media-C4E7CB16-0E21-9442-B46C-4BA84466EA7B      -      -  2.52K      5   315M  69.9K
----------------------------------------------  -----  -----  -----  -----  -----  -----
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
Triple                                          32.8G  79.2G  7.58K     12   947M   152K
  mirror                                        32.8G  79.2G  7.58K     13   947M   171K
    media-B99F121A-54CB-5349-B4AE-425484B45EA8      -      -  2.53K      4   316M  58.0K
    media-2B860E9E-AA2F-2A4D-8AAF-B197F36BEB7E      -      -  2.52K      4   316M  58.0K
    media-C4E7CB16-0E21-9442-B46C-4BA84466EA7B      -      -  2.52K      4   316M  64.3K
----------------------------------------------  -----  -----  -----  -----  -----  -----
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
Triple                                          32.8G  79.2G  7.59K     17   948M   267K
  mirror                                        32.8G  79.2G  7.59K     17   948M   248K
    media-B99F121A-54CB-5349-B4AE-425484B45EA8      -      -  2.54K      5   316M  81.7K
    media-2B860E9E-AA2F-2A4D-8AAF-B197F36BEB7E      -      -  2.53K      5   316M  81.7K
    media-C4E7CB16-0E21-9442-B46C-4BA84466EA7B      -      -  2.53K      5   316M  75.4K
----------------------------------------------  -----  -----  -----  -----  -----  -----


Nothing crashed though, and free RAM is still around ~6.8 GB.

Quitting Photoshop.

righto, CPUs/Fans have settled back down and are hovering around the 20% mark.

I'll save this post now so that I don't lose any of the above and kick off another scrub and see what happens.
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am

Re: zfs 1.6.0rc1 and Photos

Postby tangles » Sat Jan 28, 2017 9:33 pm

well...

there's something not quite right when scrubbing.

The CPUs are locked at 99-100% usage and the GUI is completely unresponsive.

This scrub flew (as expected), but it certainly ended my attempts to do anything else; to get the zpool status I was typing blind!
Code:
| => zpool status
  pool: Triple
 state: ONLINE
  scan: scrub in progress since Sun Jan 29 16:23:57 2017
    27.9G scanned out of 32.9G at 295M/s, 0h0m to go
    0 repaired, 85.04% done
config:

   NAME                                            STATE     READ WRITE CKSUM
   Triple                                          ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       media-B99F121A-54CB-5349-B4AE-425484B45EA8  ONLINE       0     0     0
       media-2B860E9E-AA2F-2A4D-8AAF-B197F36BEB7E  ONLINE       0     0     0
       media-C4E7CB16-0E21-9442-B46C-4BA84466EA7B  ONLINE       0     0     0

errors: No known data errors
| ~ @ MacPro (madmin)
| => zpool status
  pool: Triple
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Sun Jan 29 16:25:45 2017
config:

   NAME                                            STATE     READ WRITE CKSUM
   Triple                                          ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       media-B99F121A-54CB-5349-B4AE-425484B45EA8  ONLINE       0     0     0
       media-2B860E9E-AA2F-2A4D-8AAF-B197F36BEB7E  ONLINE       0     0     0
       media-C4E7CB16-0E21-9442-B46C-4BA84466EA7B  ONLINE       0     0     0

errors: No known data errors
| ~ @ MacPro (madmin)
| => zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
Triple  32.9G  75.6G  32.9G  /Volumes/Triple
| ~ @ MacPro (madmin)
| => zfs get all Triple
NAME    PROPERTY               VALUE                  SOURCE
Triple  type                   filesystem             -
Triple  creation               Wed Dec  7 17:19 2016  -
Triple  used                   32.9G                  -
Triple  available              75.6G                  -
Triple  referenced             32.9G                  -
Triple  compressratio          1.00x                  -
Triple  mounted                yes                    -
Triple  quota                  none                   default
Triple  reservation            1M                     local
Triple  recordsize             128K                   default
Triple  mountpoint             /Volumes/Triple        default
Triple  sharenfs               off                    default
Triple  checksum               skein                  local
Triple  compression            off                    default
Triple  atime                  off                    local
Triple  devices                on                     default
Triple  exec                   on                     default
Triple  setuid                 on                     default
Triple  readonly               off                    default
Triple  zoned                  off                    default
Triple  snapdir                hidden                 default
Triple  aclmode                passthrough            default
Triple  aclinherit             restricted             default
Triple  canmount               on                     default
Triple  xattr                  on                     default
Triple  copies                 1                      default
Triple  version                5                      -
Triple  utf8only               on                     -
Triple  normalization          formD                  -
Triple  casesensitivity        insensitive            -
Triple  vscan                  off                    default
Triple  nbmand                 off                    default
Triple  sharesmb               off                    default
Triple  refquota               none                   default
Triple  refreservation         none                   default
Triple  primarycache           all                    default
Triple  secondarycache         all                    default
Triple  usedbysnapshots        0                      -
Triple  usedbydataset          32.9G                  -
Triple  usedbychildren         4.71M                  -
Triple  usedbyrefreservation   0                      -
Triple  logbias                latency                default
Triple  dedup                  off                    default
Triple  mlslabel               none                   default
Triple  sync                   standard               default
Triple  refcompressratio       1.00x                  -
Triple  written                32.9G                  -
Triple  logicalused            32.8G                  -
Triple  logicalreferenced      32.8G                  -
Triple  filesystem_limit       none                   default
Triple  snapshot_limit         none                   default
Triple  filesystem_count       none                   default
Triple  snapshot_count         none                   default
Triple  snapdev                hidden                 default
Triple  com.apple.browse       on                     default
Triple  com.apple.ignoreowner  off                    default
Triple  com.apple.mimic_hfs    on                     local
Triple  shareafp               off                    default
Triple  redundant_metadata     all                    default
Triple  overlay                off                    default
| ~ @ MacPro (madmin)
| =>


It only took 1 minute to scrub 32 GB, but during that time I was completely unable to use the Mac.

Anyone else have similar experience?
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am

Re: zfs 1.6.0rc1 and Photos

Postby FadingIntoBlue » Sun Jan 29, 2017 1:02 am

Just running a scrub on my backup pool: a 2 TB HD mirror on a MacMini Server 1.4 with 4 GB RAM(!), Sierra 10.12.3. CPU at about 60%.

  pool: ClearPool
 state: ONLINE
  scan: scrub in progress since Sun Jan 29 19:50:47 2017
    11.7G scanned out of 1.10T at 76.4M/s, 4h9m to go
    0 repaired, 1.04% done
config:

   NAME                                            STATE     READ WRITE CKSUM
   ClearPool                                       ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       media-40EABA2A-1EDA-D541-B7D6-98A4DB7943C9  ONLINE       0     0     0
       media-B8FA084D-FB57-2D4B-825A-5A8749489F08  ONLINE       0     0     0

No problems with loss of responsiveness.
FadingIntoBlue
 
Posts: 106
Joined: Tue May 27, 2014 12:25 am

Re: zfs 1.6.0rc1 and Photos

Postby lundman » Sun Jan 29, 2017 2:23 am

After the cpu goes busy, but you are otherwise not using ZFS heavily any more, can you please run "spindump" and post the output? It will tell us what it is doing inside the kernel.
lundman
 
Posts: 1337
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: zfs 1.6.0rc1 and Photos

Postby rottegift » Sun Jan 29, 2017 3:56 am

Heh, edit: I just realized I'm replying to two separate people. roemer, the spotlight stuff is for you. tangles, the scrub slowdown stuff is for you.

"ZFS I/O was insane though!"

I think that's the root of the problem, and the I/O is most likely spotlight indexing.

The arc_reclaim_thread log lines strongly suggest lots of {read,write}-only-once traffic churning through; the line you pasted shows ~200 MB being tossed out of the ARC.

Your pool Triple can do an enormous number of IOPS, and while that's not what your zpool iostat -v shows (I'll guess that's because of averaging; zpool iostat -v 1 100 would show much larger IOPS and bandwidth), the interrupt and CPU-cache load on your system will be pretty high.

(I frankly wonder if your 2008 Mac Pro can actually cope with the full read IOPS of Triple at a hardware level. During your scrub you could be doing about 2 million I/O interrupts per second! What does the OS think the block size of the SSDs is: 512 bytes or 4 KiB?)

My bet (I have a couple of SSD-based pools too, but none as crunchy as yours) is that it's mds/mdworker. You can test this by running fs_usage -w -f filesys and seeing what's generating filesystem I/O. Or you can watch the CPU use of mds/mdworker. Or you can send them SIGSTOP, or turn off indexing (mdutil -i off <path>, or even put a .metadata_never_index file in the root of a ZFS filesystem).
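Concretely, those checks might look like the following sketch (/Volumes/Triple is just an example path; most of these need root):

```shell
# Watch which processes are generating filesystem I/O:
sudo fs_usage -w -f filesys

# Check Spotlight indexing status for a mount, then disable it:
mdutil -s /Volumes/Triple
sudo mdutil -i off /Volumes/Triple

# Or keep Spotlight away by marking the filesystem root:
touch /Volumes/Triple/.metadata_never_index

# Or pause the indexers temporarily (resume with -CONT):
sudo killall -STOP mds mds_stores
sudo killall -CONT mds mds_stores
```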

Unfortunately it seems that mds/mdworker don't throttle themselves at all, and the only check on their resource use is whatever system bottleneck there is. That will almost always be read latency (IOPS, in other words), and on a spinny-disk system or a single slow SSD (HFS+ or even a pool) the system impact won't be too high. But if you feed them -- and on your fast pool, in parallel (with multiple datasets indexing simultaneously), you sure can -- they can get in the way of everything else.

If you could preclude mds/mdworker as the problem, great. I'll have other questions if it's definitely not spotlight.

If it's mds/mdworker, and you can live without indexing for some data (especially data with lots of small files, like in ~/Library for example), then that'll help. Or you can write a launchd.plist that turns off indexing at boot and then serially turns it back on dataset-by-dataset when the load average is low, for example.
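A minimal sketch of that launchd-driven idea, assuming hypothetical mountpoints and an arbitrary load threshold (a real script would enumerate your own datasets and run as root):

```shell
#!/bin/sh
# Hypothetical sketch: re-enable Spotlight indexing one dataset at a time,
# waiting until the 1-minute load average drops below a threshold.
THRESHOLD=2
for vol in /Volumes/Triple /Volumes/ztank; do   # example mountpoints
    while :; do
        # sysctl -n vm.loadavg prints e.g. "{ 1.23 1.10 0.98 }";
        # field 2 is the 1-minute load average, truncated to an integer here.
        load=$(sysctl -n vm.loadavg | awk '{print int($2)}')
        [ "$load" -lt "$THRESHOLD" ] && break
        sleep 60
    done
    mdutil -i on "$vol"   # needs root
done
```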

Lastly, "Load Avg: 5.29, 4.79, 4.17 CPU usage: 15.94% user, 3.71% sys, 80.33% idle". Was that during the worst of it? That's not exactly a CPU meltdown, especially for your dual quad-core Xeon. I don't remember enough about the memory layout of 2008-era Mac Pros; can you paste "sysctl -h hw|grep cache"?

Also, if you could run either "sysctl -h kstat.spl" or srcdir/spl/scripts/splstat and leave the output somewhere handy, that'd be interesting too.

ETA: the scrub meltdown is not terribly surprising. That's a lot of small random IOPS to start with too. But you can decrease kstat.zfs.darwin.tunable.scrub_max_active and increase kstat.zfs.darwin.tunable.zfs_scrub_delay dynamically with sysctl; both will throttle scrub activity. [Along these lines, if you really HAVE to let Spotlight do its thing and it's not really throttleable in userland, we can think about dynamically adjusting the new throttle variables (kstat.zfs.darwin.tunable.{sync,async}_{read,write}.*), although they are not pool-specific, so there'd be a trade-off between tuning for your fast pool at its peak and slowing down any spinny-disk pools on the same system.]
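For example, something along these lines (the tunable names are those given above; the values are arbitrary starting points, not recommendations - check the defaults first):

```shell
# Read the current scrub throttle settings:
sysctl kstat.zfs.darwin.tunable.scrub_max_active
sysctl kstat.zfs.darwin.tunable.zfs_scrub_delay

# Fewer concurrent scrub I/Os, and a longer delay between them (root required):
sudo sysctl -w kstat.zfs.darwin.tunable.scrub_max_active=1
sudo sysctl -w kstat.zfs.darwin.tunable.zfs_scrub_delay=8
```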
rottegift
 
Posts: 26
Joined: Fri Apr 25, 2014 12:00 am

Re: zfs 1.6.0rc1 and Photos

Postby tangles » Sun Jan 29, 2017 11:54 pm

~ 8K IOPS

I purposefully didn't want to adjust any config settings, to reflect how most people would typically install/use ZFS on macOS...

Code:
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
Triple                                          32.9G  79.1G  7.84K     13   947M   117K
  mirror                                        32.9G  79.1G  7.84K     10   947M   103K
    media-B99F121A-54CB-5349-B4AE-425484B45EA8      -      -  2.65K      3   315M  34.1K
    media-2B860E9E-AA2F-2A4D-8AAF-B197F36BEB7E      -      -  2.59K      3   316M  26.2K
    media-C4E7CB16-0E21-9442-B46C-4BA84466EA7B      -      -  2.60K      3   316M  25.4K
----------------------------------------------  -----  -----  -----  -----  -----  -----


spindumps:
spindump during scrub: https://cloudstor.aarnet.edu.au/plus/in ... Xi0qxO68QK
spindump after scrub with idle CPUs: https://cloudstor.aarnet.edu.au/plus/in ... RA8iluOAMj
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am

Re: zfs 1.6.0rc1 and Photos

Postby lundman » Mon Jan 30, 2017 1:12 am

Not entirely sure what we are discussing now. Is there a problem of ZFS using lots of CPU when idle? If so, we need to fix it.

Or is it an issue of ZFS using lots of CPU when in use, for example when scrubbing? If so, tweak the scrub tunables - nothing to fix.
lundman
 
Posts: 1337
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: zfs 1.6.0rc1 and Photos

Postby rottegift » Mon Jan 30, 2017 1:23 am

Ignoring averaging effects (only a certain number of scrub reads are issued per TXG, each TXG barring other traffic lasts 5 seconds, and both userland and the kernel do some aggregation of iostat statistics), that's also ~8K records read per second, and a record can be up to 128K in size. So 8K * (128 * 1024 / 512) for a 512-byte device block size gives roughly 2 million LBAs read per second, each of which will be processed by the operating system before being handed to ZFS. Again, it's a bit worse than that, because the scrub reads are issued as fast as possible and only then is the issuing throttled (the implementation is in dsl_scan_scrub_cb() in srcs/zfs/module/zfs/dsl_scan.c).
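As a back-of-envelope check, the arithmetic above can be reproduced directly (the ~8K records/s figure and the 512-byte block size are the assumptions stated in this post):

```shell
# Reproduce the LBA estimate from the figures quoted above:
records_per_sec=8192            # ~8K records read per second during the scrub
record_bytes=$((128 * 1024))    # maximum recordsize of 128K
lba_bytes=512                   # assumed 512-byte device block size
echo $((records_per_sec * record_bytes / lba_bytes))   # prints 2097152, ~2 million LBAs/s
```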

Some I/O activity will be gentler than others depending on the particulars of the bus and the bus-driving hardware in the host.

Additionally, 2008-vintage Xeons are not as zippy at context switching as anything since late 2012, and the operating system and ZFS are both hugely concurrent (ZFS more so in 1.6). If you expect to scrub your pool Triple often while sitting at the machine doing interactive things, you'll want to think about defensive tuning, unfortunately. Note there is very little we can do about context switching within xnu itself, but the ZFS subsystem does try to minimize the number of I/O calls to xnu.

Finally, for your particular system, the CPU use during scrubbing ought to have been worse in 1.5, since delay() was implemented using a spin wait (essentially a rep; nop loop) rather than a thread suspension and wakeup, so turning up kstat.zfs.darwin.tunable.zfs_resilver_delay reduced scrub I/O but did not reduce CPU use.
rottegift
 
Posts: 26
Joined: Fri Apr 25, 2014 12:00 am

Re: zfs 1.6.0rc1 and Photos

Postby roemer » Mon Jan 30, 2017 5:35 am

lundman wrote:After the cpu goes busy, but you are otherwise not using ZFS heavily any more, can you please run "spindump" and post the output? It will tell us what it is doing inside the kernel.


Sorry, I was away over the weekend and just got around to reproducing my original issue (cf. first post).
Macmini5,1 after a fresh reboot; just logging in and working away in Photos until it got stuck.
The system is still responsive, but Photos is stuck and kernel_task is at around 130% CPU.
Memory usage is at 12.94 GB of 16 GB, 7.94 GB of that for kernel_task.
This time I cannot see the SPL messages in the console.

Code:
top:
Processes: 459 total, 3 running, 7 stuck, 449 sleeping, 1890 threads                                                                     00:41:26
Load Avg: 3.70, 6.45, 8.21  CPU usage: 12.32% user, 62.9% sys, 25.58% idle   SharedLibs: 181M resident, 45M data, 44M linkedit.
MemRegions: 80490 total, 10G resident, 243M private, 1205M shared. PhysMem: 15G used (8944M wired), 1250M unused.
VM: 1181G vsize, 623M framework vsize, 0(0) swapins, 0(0) swapouts. Networks: packets: 2699750/1662M in, 2673018/2055M out.
Disks: 479906/8526M read, 227726/8169M written.

bash-3.2$ zpool status
  pool: ztank
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
   still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
   the pool may no longer be accessible by software that does not support
   the features. See zpool-features(5) for details.
  scan: scrub repaired 0 in 17h16m with 0 errors on Tue Nov  1 18:32:21 2016
config:

   NAME                                            STATE     READ WRITE CKSUM
   ztank                                           ONLINE       0     0     0
     raidz2-0                                      ONLINE       0     0     0
       media-CBA9F7FB-3767-2B48-BDB7-38D107DF96E5  ONLINE       0     0     0
       media-5DF749B8-6E58-1241-B618-6A248F2F5621  ONLINE       0     0     0
       media-69A9EB7E-97D8-3E40-96C5-089E09C6142A  ONLINE       0     0     0
       media-9D6A15C8-29DA-8D49-9969-79D6A4B8AE31  ONLINE       0     0     0
   logs
     media-B07E3B9A-6ED6-4439-9951-B4FEE9FC508E    ONLINE       0     0     0
   cache
     media-4A353723-4D0F-4008-B064-114BD53DC931    ONLINE       0     0     0
errors: No known data errors

bash-3.2$ sudo zpool iostat -v
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
ztank                                           7.24T  3.63T     46     31   944K   877K
  raidz2                                        7.24T  3.63T     46     20   943K   653K
    media-CBA9F7FB-3767-2B48-BDB7-38D107DF96E5      -      -     11      5   236K   163K
    media-5DF749B8-6E58-1241-B618-6A248F2F5621      -      -     11      5   236K   163K
    media-69A9EB7E-97D8-3E40-96C5-089E09C6142A      -      -     11      5   236K   163K
    media-9D6A15C8-29DA-8D49-9969-79D6A4B8AE31      -      -     11      5   235K   163K
logs                                                -      -      -      -      -      -
  media-B07E3B9A-6ED6-4439-9951-B4FEE9FC508E     160K  3.87G      0     10    442   224K
cache                                               -      -      -      -      -      -
  media-4A353723-4D0F-4008-B064-114BD53DC931    2.23G  40.1G     10      7   143K   795K
----------------------------------------------  -----  -----  -----  -----  -----  -----


The spindump outputs are 4.3 MB and 5.6 MB (10 minutes later) - available here (pw: zfsspindump):
https://cloudstor.aarnet.edu.au/plus/in ... 4kQIih4Hal
https://cloudstor.aarnet.edu.au/plus/in ... rasVZPAbw6

Code:
Date/Time:       2017-01-31 00:18:30 +1100
OS Version:      Mac OS X 10.12.3 (Build 16D32)
Architecture:    x86_64
Hardware model:  Macmini5,1
Active cpus:     4
Fan speed:       3845 rpm
...


Update 1:
Interestingly, after ca. 1.5 hours, Photos started reacting again and the kernel_task load normalised...

Update 2:
After briefly continuing to work in Photos, it sure enough locked up again with kernel_task at >100% CPU.
A third spindump for this second lockup:
https://cloudstor.aarnet.edu.au/plus/in ... kDpUToPs77
roemer
 
Posts: 73
Joined: Sat Mar 15, 2014 2:32 pm
