sparsebundles w/ JHFS+ on top of ZFS

Re: sparsebundles w/ JHFS+ on top of ZFS

Post by ghaskins » Wed Dec 26, 2012 10:28 pm

Hi Thomas,

thkunze wrote:I'm actually having the same serious performance problems. "ls -laR" takes forever. Under those circumstances, ZEVO is practically unusable for me.


If you would be so kind, could you post some more details about your setup? Perhaps post the same details (sw_vers, etc.), and any other relevant information you can think of: the type of hardware you are on, how much data and how many files are in your pool, and any benchmark data you might have from the hardware both outside of and on top of ZEVO.

I can't imagine the vast majority of people are seeing this problem, as I agree that it's virtually unusable in that state. My feeling is that some issue is plaguing a small number of us.
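
For reference, a command set along these lines should gather most of those details in one go (the pool name 'tank' is a placeholder; substitute your own):

Code: Select all
sw_vers                                            # OS version and build
system_profiler SPHardwareDataType                 # machine model, CPU and memory
kextstat | grep -i zfs                             # confirm the ZEVO kexts and their versions
sudo zpool status -v                               # pool layout and health
zpool list                                         # size, used, free and capacity
zfs get compression,recordsize,atime,copies tank   # a few key dataset properties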

Kind Regards,
-Greg

Caches

Post by grahamperrin » Thu Dec 27, 2012 6:10 am

thkunze wrote:… "ls -laR" takes forever …


Here:

Code: Select all
ls -ahlR /Volumes/tall/Users


First run: eleven seconds.

Second run: less than a second. Probably thanks to caching.

I very rarely use ls in that way, so the time taken for the first run is not a problem. YMMV.
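
To compare a cold run with a warm one, something like this is enough, assuming the purge utility is available on your system; whether purge also empties ZEVO's own caches, rather than just the unified buffer cache, is an assumption on my part:

Code: Select all
sudo purge                                      # flush cached file data, to approximate a cold start
time ls -ahlR /Volumes/tall/Users > /dev/null   # first, mostly uncached run
time ls -ahlR /Volumes/tall/Users > /dev/null   # second run, served largely from cache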

Notes

MacBookPro5,2 with 8 GB memory.

The pool named 'tall' is a single disk on USB, a Seagate GoFlex Desk (0x50a5). It is often around ninety-nine percent full, so I don't expect great performance.

Code: Select all
macbookpro08-centrim:~ gjp22$ du -hs /Volumes/tall/Users
440G   /Volumes/tall/Users


Code: Select all
macbookpro08-centrim:~ gjp22$ zfs get all tall
NAME  PROPERTY              VALUE                  SOURCE
tall  type                  filesystem             -
tall  creation              Fri May  4 21:48 2012  -
tall  used                  1.78Ti                 -
tall  available             11.7Gi                 -
tall  referenced            440Gi                  -
tall  compressratio         1.10x                  -
tall  mounted               yes                    -
tall  quota                 none                   default
tall  reservation           none                   default
tall  recordsize            128Ki                  default
tall  mountpoint            /Volumes/tall          default
tall  checksum              on                     default
tall  compression           on                     local
tall  atime                 off                    local
tall  devices               on                     default
tall  exec                  on                     default
tall  setuid                on                     default
tall  readonly              off                    default
tall  snapdir               visible                local
tall  canmount              on                     default
tall  copies                1                      local
tall  version               5                      -
tall  utf8only              on                     -
tall  normalization         formD                  -
tall  casesensitivity       insensitive            -
tall  refquota              none                   default
tall  refreservation        none                   default
tall  primarycache          all                    default
tall  secondarycache        all                    default
tall  usedbysnapshots       477Gi                  -
tall  usedbydataset         440Gi                  -
tall  usedbychildren        904Gi                  -
tall  usedbyrefreservation  0                      -
tall  logbias               latency                default
tall  sync                  standard               default
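
For the "ninety-nine percent full" figure, zpool list shows capacity at a glance:

Code: Select all
zpool list tall        # SIZE, USED, AVAIL and CAP (percentage of pool space used)
zfs list -r tall       # per-dataset used, available and referenced space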

Re: Caches

Post by ghaskins » Thu Dec 27, 2012 5:31 pm

grahamperrin wrote:I very rarely use ls in that way, so the time taken for the first run is not a problem. YMMV.


Note that, at least in my case, it wasn't just ls with perhaps more esoteric options. It was _any_ ls, find, and so on: basically any operation that walks the inodes in the filesystem. Super painful. Most commands were taking at least 30 seconds to respond, and some operations were taking hours and hours.
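
For what it's worth, a repeatable way to put a number on that kind of metadata walk (the path is just an example):

Code: Select all
time find /Volumes/tank -type f | wc -l      # walks every directory entry; prints the file count and the elapsed time
time ls -lR /Volumes/tank > /dev/null        # plain recursive ls, timed, with the listing discarded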

For the time being, I have set up a RAID 10 with JHFS+ so I can use my system in a reasonable way, but I will keep an eye out for ZEVO releases that resolve some of the issues I was having. I'm still very grateful that this community is pushing the ZFS ball forward on OS X, and I look forward to when I can use it in production.
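
For anyone curious about the interim setup: RAID 10 on stock OS X can be built with diskutil by striping across mirrored pairs. A rough sketch with made-up disk identifiers, not my exact commands; the final step expects the virtual disk identifiers that diskutil reports for the two mirror sets:

Code: Select all
# mirror two pairs of disks (identifiers here are examples only)
diskutil appleRAID create mirror MirrorA JHFS+ disk2 disk3
diskutil appleRAID create mirror MirrorB JHFS+ disk4 disk5
# find the virtual disks created for MirrorA and MirrorB, then stripe across them
diskutil appleRAID list
diskutil appleRAID create stripe Raid10 JHFS+ disk6 disk7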

Keep up the great work,
-Greg

Link

Post by grahamperrin » Fri Dec 28, 2012 2:50 am

ghaskins wrote:… some operations were taking hours …


Noted and agreed.

The opening subject of this general discussion might not gain attention from developers, so I'm moving this to the troubleshooting area: some actions take longer than expected to complete.

Re: sparsebundles w/ JHFS+ on top of ZFS

Post by thkunze » Fri Dec 28, 2012 9:01 pm

Hi Greg,

ghaskins wrote:Hi Thomas,

If you would be so kind, could you post some more details about your setup? Perhaps post the same details (sw_vers, etc.), and any other relevant information you can think of: the type of hardware you are on, how much data and how many files are in your pool, and any benchmark data you might have from the hardware both outside of and on top of ZEVO.

I can't imagine the vast majority of people are seeing this problem, as I agree that it's virtually unusable in that state. My feeling is that some issue is plaguing a small number of us.

Kind Regards,
-Greg


My setup in short:
A raidz1 pool of five Samsung HD204UI 2 TB drives, connected via three USB ports to a MacBook Pro (2.4 GHz Core 2 Duo, 4 GB RAM) running OS X 10.8.2.
It's basically a testing setup.

Some more details:

Code: Select all
sw_vers
ProductName:    Mac OS X
ProductVersion: 10.8.2
BuildVersion:   12C60


Code: Select all
kextstat | grep zfs
   89    1 0xffffff7f807a5000 0x19b000   0x19b000   com.getgreenbytes.filesystem.zfs (2012.09.23) <13 7 5 4 3 1>
   90    0 0xffffff7f80942000 0x6000     0x6000     com.getgreenbytes.driver.zfs (2012.09.14) <89 13 7 5 4 3 1>


Code: Select all
sudo zpool status -vx -T d
all pools are healthy


Code: Select all
sudo zpool status -v
  pool: tank
 state: ONLINE
  scan: resilvered 1,18Gi in 0h10m with 0 errors on Wed Dec 26 17:49:44 2012
config:

    NAME                                           STATE     READ WRITE CKSUM
    tank                                           ONLINE       0     0     0
      raidz1-0                                     ONLINE       0     0     0
        GPTE_15BCC97B-4CC0-4DC2-B70D-7016FAA08D02  ONLINE       0     0     0  at disk1s2
        GPTE_1C52F159-AF27-41F5-B28B-62A90CA66E7A  ONLINE       0     0     0  at disk2s2
        GPTE_82D6E091-56EE-4F43-A353-7ED513AA7364  ONLINE       0     0     0  at disk3s2
        GPTE_451667CA-97EE-428A-8820-5AEF24DC0E37  ONLINE       0     0     0  at disk5s2
        GPTE_8273F074-57A6-4EF9-8A5F-383FF167CFFB  ONLINE       0     0     0  at disk6s2


Code: Select all
diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *120.0 GB   disk0
   1:                        EFI                         209.7 MB   disk0s1
   2:                  Apple_HFS MacOS                   119.2 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk1
   1:                        EFI                         209.7 MB   disk1s1
   2:                        ZFS                         2.0 TB     disk1s2
/dev/disk2
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk2
   1:                        EFI                         209.7 MB   disk2s1
   2:                        ZFS                         2.0 TB     disk2s2
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk3
   1:                        EFI                         209.7 MB   disk3s1
   2:                        ZFS                         2.0 TB     disk3s2
/dev/disk5
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk5
   1:                        EFI                         209.7 MB   disk5s1
   2:                        ZFS                         2.0 TB     disk5s2
/dev/disk6
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk6
   1:                        EFI                         209.7 MB   disk6s1
   2:                        ZFS                         2.0 TB     disk6s2


Best Regards,

Thomas

Re: sparsebundles w/ JHFS+ on top of ZFS

Post by grahamperrin » Sat Dec 29, 2012 1:21 am

Thomas, thanks.

In the pool, approximately what percentage of space is free?

Code: Select all
zpool list tank


Approximately how many files are in the file system(s) of the pool?
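
If a command is easier than a guess, a rough count can be had with find (slow on an affected pool, of course; the path is an example):

Code: Select all
find /Volumes/tank -xdev | wc -l    # counts every file and directory under the mount point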

Hint: when replying – or when editing a previous post – use the full editor. Formatting relatively small blocks of text as Code can make some things a little easier to digest.

Re: sparsebundles w/ JHFS+ on top of ZFS

Post by thkunze » Sat Dec 29, 2012 4:34 am

Hi Graham,

25% of the pool space is free. It contains 7 datasets and approximately 200,000 to 400,000 files.

Best Regards,

Thomas

Rewrites of blocks of bands of sparse bundle disk images

Post by grahamperrin » Sat Apr 06, 2013 3:10 am

Recommended reading: viewtopic.php?p=4582#p4582 under performance degradation over time, in particular:

raattgift wrote:… for an 8MB band file, when you rewrite a 4096-byte dmg block contained in it, you reduce locality on disk compared to the other blocks backed in that band file. Time machine does this quite a bit, particularly in the blocks holding JHFS+ metadata. So when you are scrubbing away, it's not the compression that slows you down but rather the previous rewriting …


Side note: for Mountain Lion Time Machine writing to ZFS with ZEVO Community Edition 1.1.1, if the .sparsebundle was created normally in the background (with a GUI to Time Machine in front), then the file system inside the image defaults to case sensitive:

  • JHFS+X

For most use cases, this shouldn't alter the nature of the discussion.
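
A quick way to confirm both points for a given backup image, with an example path and volume name: the band size is recorded in the bundle's Info.plist, and once the image is attached, diskutil reports the file system personality (JHFS+X appears as Case-sensitive Journaled HFS+):

Code: Select all
grep -A 1 band-size /Volumes/backups/MyMac.sparsebundle/Info.plist    # band size, in bytes
diskutil info "/Volumes/Time Machine Backups" | grep -i personality   # file system personality of the attached image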

See also: hard disk optimisation for performance purposes