Performance Observation

holistic

Post by grahamperrin » Sat Nov 10, 2012 12:14 am

TomUnderhill wrote:… the system delivers a much snappier response …


+1
for a holistic view of a system.

IMHO no benchmarking software can match the ability of a human to perceive the overall performance of a computer. If it doesn't feel right, no amount of 'good numbers' will shake that feeling.

better than expected performance from an extremely full pool

Post by grahamperrin » Sun Nov 11, 2012 8:30 am

Today, copied from someone's posts to IRC for MacZFS –

> copying data feels slow, but it doesn't seem like it's hanging on every access

– and:

Code:
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
Media2                 1.82T   1.79T   30.0G    98%  ONLINE     -


That 98% is way beyond the eighty per cent that I'd recommend; the user plans to add a disk to the pool.
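
For anyone in the same situation, a rough sketch of the two commands involved, assuming a hypothetical new device at /dev/disk3 (adding a disk creates a new top-level vdev, which can't be removed later, so double-check the device name first):

Code:
# confirm current capacity before and after
zpool list Media2

# add the new disk as an additional top-level vdev
sudo zpool add Media2 /dev/disk3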

----

Yesterday, I was pleasantly surprised by better-than-expected performance from an extremely full disk (a 2 TB drive, usually between 96% and 99% full at the pool level, much like the one discussed in IRC). In the screenshots there, the 15.98 MB/s peak is way below what I'd expect from a sanely used drive of its class (Seagate GoFlex Desk), but for many datasets in this pool, at this stage in the development of ZEVO, I'm happy to combine alarmingly little free space with the rarely recommended compression=gzip-9; use of the pool for anything other than backup is so rare that I don't care about performance.
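
For context, a sketch of how that kind of dataset might be set up and checked, assuming a hypothetical pool/backups dataset (compressratio shows what gzip-9 is actually saving):

Code:
# rarely recommended, but reasonable for write-once backup data
zfs set compression=gzip-9 pool/backups

# confirm the setting and the achieved ratio
zfs get compression,compressratio pool/backups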

To the point of this post: the peak and averages during yesterday's backup session were dramatically greater than what I had learnt to expect from use of the pool in recent weeks. Averages were probably more than one hundred times greater.

Without giving it too much thought, I couldn't find an explanation for yesterday's greater performance. I wondered whether the data being backed up lent itself more easily to gzip-9, but glancing at the log (two backups and a review of holds on a snapshot.txt), the stream sizes were typical, so I assume that stream content was typical.
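
For anyone wanting to sanity-check stream sizes the same way, a crude sketch, assuming a hypothetical snapshot name (the stream is generated in full, so this takes as long as a real send):

Code:
# byte count of a full send stream, without writing it anywhere
zfs send pool/backups@2012-11-10 | wc -c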

cross reference

Post by grahamperrin » Wed Dec 05, 2012 11:07 pm

Related: Performance issues (2012-12-06)

Spun off

Post by grahamperrin » Mon Apr 15, 2013 10:09 pm

Amongst the performance-related observations under Does ZEVO support TRIM?, at viewtopic.php?p=4666#p4666:

emory wrote:
raattgift wrote:
jollyjinx wrote:A ZFS home does not feel fast compared to HFS+ anyways.


Really? On an identical partition on identical hardware connected identically?


Anecdotally, yes. That's why I went down the road of jHFS+ Fusion Drives for home directories (and symlinking Documents/Music onto a FreeNAS share). I don't have that exact case documented, but my FreeNAS over gigabit ethernet (raidz 3x3TB 7200RPM) is faster for sequential read/write than a local ZEVO mirror (mirror 2x1TB 7200RPM).

I have a Google Docs spreadsheet (https://docs.google.com/a/hellyeah.com/spreadsheet/ccc?key=0Av2d4b91SLePdE1CdjVDSldMaUM5eUxCSFV1MEtfbFE#gid=2) available, though like I said it's anecdotal.
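
For what it's worth, a crude way to reproduce that kind of sequential comparison, assuming a hypothetical mount point at /Volumes/somepool; note that an all-zero source file flatters any compressed dataset, so a file of real data is a fairer read test:

Code:
# sequential write, 4 GiB of zeroes
dd if=/dev/zero of=/Volumes/somepool/ddtest bs=1m count=4096

# sequential read of the same file
dd if=/Volumes/somepool/ddtest of=/dev/null bs=1m

rm /Volumes/somepool/ddtest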

Re: Performance Observation

Post by emory » Tue May 21, 2013 2:55 pm

I've been doing some shuffling lately, and was getting frustrated with how long it was taking for rsync jobs to start up.

All disks are SATA3, Seagate brand.

    First example: The filesystem on /bananastand/ is NFS mounted from a FreeNAS raidz, 3x3TB 7200 disks, no ZIL, no cache. 8GB RAM, CPU core2quad 9650, FreeNAS 8

    Second example: The filesystem on lindsay (zpool: teamocil) is a local ZEVO mirror, 2x1TB, with a 7GB ZIL mirror across two SSDs, plus 2x8GB L2ARCs on one SSD and a 1x8GB L2ARC on the other SSD (the boot device). lindsay has 16GB of RAM, CPU Core i7 2600K

Code:
emory at lindsay in ~                                                                                                                               
$ time du -hs /bananastand/people/emory/Pictures/Photos\ from\ 1977-2011.aplibrary                                                                       
 29G    /bananastand/people/emory/Pictures/Photos from 1977-2011.aplibrary                                                                             
0.35s user 13.23s system 11% cpu 2:01.36 total       
                                                                                 
emory at lindsay in ~                                                                                                                               
$ time du -hs /Volumes/teamocil/emory/Pictures/Photos\ from\ 1977-2011.aplibrary                                             
 30G    /Volumes/teamocil/emory/Pictures/Photos from 1977-2011.aplibrary                                                                     
0.21s user 11.26s system 32% cpu 34.797 total


Capacity/utilization: bananastand is using 3,383 GB of ~6TB; teamocil is using 613GB of 1TB (just under 70% utilization).
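
For reference, a sketch of how those figures can be read directly, assuming the usual listing options are available on both systems:

Code:
# pool-level size, used and capacity, as in the earlier zpool output
zpool list teamocil

# per-dataset space accounting, including snapshot usage
zfs list -o space teamocil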

lindsay zstat:

Code:
 v2012.09.23    159 threads        4 mounts       35717 vnodes     14:49:04
____________________________________________________________________________
             KALLOC      KERNEL/MAPS        TOTAL         EQUITY
  WIRED      44 MiB    1183 MiB/1192         1228 MiB      7.50%
  PEAK      123 MiB    2013 MiB              2136 MiB
  VMPAGE      29116 (IN)       4243 (OUT)       4119 (SYNC)         44 (MDS)
____________________________________________________________________________
                     HITS                  MISSES
  ARC overall:        89% (19488730)          11% (2347989)
  ARC demand data:    94% (7364279)            6% (404330)
  ARC demand meta:    95% (10437437)           5% (479469)
  ARC prefetch data:  11% (71822)             89% (534994)
  ARC prefetch meta:  63% (1615192)           37% (929196)
  DMU zfetch:         70% (37916412)          30% (16173539)
____________________________________________________________________________
     SIZE     SLAB    AVAIL    INUSE    TOTAL     PEAK  KMEM CACHE NAME
       72     4096    23786    18014    41800    45210  kmem_slab_cache
       24     4096   212089   217101   429190   443051  kmem_bufctl_cache
       88     4096      195     1020     1215     4050  taskq_ent_cache
      360     4096        1       21       22       33  taskq_cache
      824     8192        8        1        9    11286  zio_cache
       80     4096      133    35717    35850   115200  sa_cache
      840     8192      570    48543    49113   158571  dnode_t
      216     4096    33315    55839    89154   187722  dmu_buf_impl_t
      200     4096    14952   369668   384620   486600  arc_buf_hdr_t
      104     4096    18184    17498    35682    69692  arc_buf_t
      192     4096       56        4       60     1780  zil_lwb_cache
      400     4096      123    35717    35840   115180  znode_t
      512     8192    15422    46370    61792   176048  zio_buf_512
     1024     8192       18      126      144     8936  zio_buf_1024
     1536    12288        6      122      128     2512  zio_buf_1536
     2048     8192        8      104      112     2772  zio_buf_2048
     2560    20480       15       49       64    10016  zio_buf_2560
     3072    12288        2       22       24     3924  zio_buf_3072
     3584   114688       22       10       32     2624  zio_buf_3584
     4096     8192      391     4997     5388    29044  zio_buf_4096
     5120    20480        5       27       32      828  zio_buf_5120
     6144    12288        3       17       20      506  zio_buf_6144
     7168   114688       22       26       48      400  zio_buf_7168
     8192     8192        0       16       16      450  zio_buf_8192
    10240    20480        2       20       22     1586  zio_buf_10240
    12288    12288        0       11       11      701  zio_buf_12288
    14336   114688       10       14       24      328  zio_buf_14336
    16384    65536     2384    10160    12544    41724  zio_buf_16384
    20480    20480        0       31       31      996  zio_buf_20480
    24576    98304        6       14       20     1036  zio_buf_24576
    28672   114688        5       15       20      892  zio_buf_28672


I have atime=off on both pools, all the way down the chain.
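
A quick sketch of verifying that, for anyone checking their own pools (-r walks every dataset, and the SOURCE column shows whether the value is inherited):

Code:
# confirm atime for every dataset in the pool
zfs get -r atime teamocil

# set it at the top so children inherit it
sudo zfs set atime=off teamocil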

FreeNAS has no zstat AFAICT.

Re: Performance Observation

Post by emory » Thu May 23, 2013 8:05 pm

Point of order for other users who use rsync(1) to move data around: the vendor-supplied version is old. One of the many benefits of moving to rsync from Homebrew (or presumably MacPorts) is that it starts up faster, beginning the transfer before it has enumerated all the files, and it can also retain OS X metadata such as extended attributes. I haven't rigorously evaluated whether all metadata makes it over, but it will at least handle, ahem, Resource Forks.

An example rsync that will show you progress and statistics in addition to shoveling data would be:

Code:
/usr/local/bin/rsync -avPE ~/Pictures/ /Volumes/myzpool/Pictures
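
If you want to try the newer rsync, a sketch assuming Homebrew's default /usr/local prefix; note that the metadata flags differ between Apple's 2.6.9 and rsync 3.x, so check the man page of whichever build you end up with:

Code:
# install a current rsync alongside the vendor copy
brew install rsync

# confirm which builds are on the system
/usr/local/bin/rsync --version
/usr/bin/rsync --version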