sparsebundles w/ JHFS+ on top of ZFS


sparsebundles w/ JHFS+ on top of ZFS

Post by ghaskins » Thu Dec 20, 2012 8:00 am

I am having a ton of usability issues with using native ZFS as my home directory, such as:

  • TimeMachine doesn't see the drive
  • FCPX doesn't support non-HFS volumes for media storage
  • Performance is terrible overall

The last one is the most serious. I have approximately 2.9TB of data in about 2.5 million files. Operations that walk inodes, such as "ls", "find", Spotlight, and DaisyDisk, are multiple orders of magnitude slower on ZEVO than on HFS. For instance, DaisyDisk takes a minute or two to scan my volume on HFS; the exact same dataset on ZFS takes over 8 hours. Spotlight runs for 12+ hours, versus a few minutes on HFS. "ls" takes a long time to return, "find" runs visibly slowly, and so on. It was too painful to use, really.

Before giving up on ZFS (because I really don't want to), I decided to experiment with an HFS sparsebundle mounted on top of ZFS, effectively using ZFS as an LVM/RAID solution for HFS. So far, it's working pretty well, and performance is back to roughly where I would like it to be. The bundle presents ZFS with about 300K files in the form of 8MB bands, which presumably is easier for it to digest.
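
For anyone wanting to try the same approach, a bundle like this can be created with hdiutil along these lines (the size and options shown are illustrative rather than exactly what I used; sparse-band-size is specified in 512-byte sectors, so 16384 gives the 8MB bands mentioned above):

Code: Select all
# Create a journaled HFS+ sparsebundle on the ZFS dataset mounted at /Bundles.
# sparse-band-size is in 512-byte sectors: 16384 x 512 bytes = 8 MiB per band.
hdiutil create -size 16t -type SPARSEBUNDLE -fs HFS+J \
    -volname Users -imagekey sparse-band-size=16384 \
    /Bundles/Users.sparsebundle
# Attach it; the JHFS+ volume mounts under /Volumes by default.
hdiutil attach /Bundles/Users.sparsebundle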

Any comments about this approach? I hope that ZFS is preserving sync barriers all the way through (HFS->sparsebundle->ZFS->cache->disks), but I have yet to confirm this.
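
One quick check, assuming ZEVO exposes the standard "sync" dataset property, is whether synchronous writes are still honoured on the dataset that hosts the bundle:

Code: Select all
# sync=standard (or always) means the dataset honours synchronous writes;
# sync=disabled would break the barrier chain described above.
zfs get sync Tank/Bundles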

-Greg

for the problem with performance: more details, please

Post by grahamperrin » Thu Dec 20, 2012 10:04 am

ghaskins wrote:FCPX doesn't support non-HFS volumes for media storage


Final Cut X is on the shortlist – apps, services etc. that may be imperfect with ZEVO

I guess that Final Cut Pro X has similar peculiarities.

Performance is terrible overall


Outputs please from the following commands:

Code: Select all
sw_vers
kextstat | grep zfs
mount
ls -l /Volumes
ls -l /dev/dsk
ls -l /var/zfs/dsk
diskutil list
diskutil coreStorage list
sudo zpool status -vx -T d
sudo zpool status -v


Please paste each output as a separate block of code. Thanks.
Last edited by grahamperrin on Thu Dec 20, 2012 9:29 pm, edited 1 time in total.

Re: for the problem with performance: more details, please

Post by ghaskins » Thu Dec 20, 2012 10:31 am

grahamperrin wrote:Outputs please from the following commands:

Code: Select all
sw_vers
kextstat | grep zfs
mount
ls -l /Volumes
ls -l /dev/dsk
ls -l /var/zfs/dsk
diskutil list
diskutil coreStorage list
sudo zpool status -vx -T d
sudo zpool status -v




Here are the results:

Code: Select all
greg:~ ghaskins$ sw_vers
ProductName:    Mac OS X
ProductVersion: 10.7.5
BuildVersion:   11G63


Code: Select all
greg:~ ghaskins$ kextstat | grep zfs
  117    1 0xffffff7f80791000 0x19a000   0x19a000   com.getgreenbytes.filesystem.zfs (2012.09.23) <13 7 5 4 3 1>
  118    0 0xffffff7f8092b000 0x6000     0x6000     com.getgreenbytes.driver.zfs (2012.09.14) <117 13 7 5 4 3 1>


Code: Select all
greg:~ ghaskins$ mount
/dev/disk0s2 on / (hfs, local, journaled)
devfs on /dev (devfs, local, nobrowse)
map -hosts on /net (autofs, nosuid, automounted, nobrowse)
map auto_home on /home (autofs, automounted, nobrowse)
/dev/disk2s1 on /Bundles (zfs, local, automounted, journaled, noatime)
/dev/disk8s2 on /Users (hfs, local, nodev, nosuid, journaled)


Code: Select all
greg:~ ghaskins$ ls -l /Volumes
total 8
lrwxr-xr-x  1 root  admin  1 Dec 19 21:29 MacintoshSSD -> /


Code: Select all
greg:~ ghaskins$ ls -l /dev/dsk
lrwxr-xr-x  1 root  wheel  0 Dec 19 21:29 /dev/dsk -> /var/zfs/dsk


Code: Select all
greg:~ ghaskins$ ls -l /var/zfs/dsk
total 48
lrwxr-xr-x  1 root  wheel  12 Dec 19 21:30 GPTE_57A9AB72-098A-43FB-B61E-5D96ABB104ED -> /dev/disk4s2
lrwxr-xr-x  1 root  wheel  12 Dec 19 21:30 GPTE_6564DE62-D915-42FA-A807-C9B10A2CD0B7 -> /dev/disk1s2
lrwxr-xr-x  1 root  wheel  12 Dec 19 21:30 GPTE_76505CB5-3E6F-4CD2-BB56-177FB19F3BBE -> /dev/disk5s2
lrwxr-xr-x  1 root  wheel  12 Dec 19 21:30 GPTE_9DCF9F3D-0F7B-4E2E-8DD9-9EE0E6FD5EDE -> /dev/disk6s2
lrwxr-xr-x  1 root  wheel  12 Dec 19 21:30 GPTE_B8CC1AD8-A621-42E0-BE42-1955F0024510 -> /dev/disk3s2
lrwxr-xr-x  1 root  wheel  12 Dec 19 21:30 GPTE_F813FC3C-A75B-4B2D-86B4-8773B73C4F05 -> /dev/disk7s2


Code: Select all
greg:~ ghaskins$ diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *256.1 GB   disk0
   1:                        EFI                         209.7 MB   disk0s1
   2:                  Apple_HFS MacintoshSSD            255.2 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk1
   1:                        EFI                         209.7 MB   disk1s1
   2:                        ZFS                         2.0 TB     disk1s2
/dev/disk2
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:             zfs_pool_proxy Tank                   *12.0 TB    disk2
   1:       zfs_filesystem_proxy Bundles                 9.8 TB     disk2s1
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk3
   1:                        EFI                         209.7 MB   disk3s1
   2:                        ZFS                         2.0 TB     disk3s2
/dev/disk4
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk4
   1:                        EFI                         209.7 MB   disk4s1
   2:                        ZFS                         2.0 TB     disk4s2
/dev/disk5
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk5
   1:                        EFI                         209.7 MB   disk5s1
   2:                        ZFS                         2.0 TB     disk5s2
/dev/disk6
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk6
   1:                        EFI                         209.7 MB   disk6s1
   2:                        ZFS                         2.0 TB     disk6s2
/dev/disk7
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk7
   1:                        EFI                         209.7 MB   disk7s1
   2:                        ZFS                         2.0 TB     disk7s2
/dev/disk8
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     Apple_partition_scheme                        *16.4 TB    disk8
   1:        Apple_partition_map                         258.0 KB   disk8s1
   2:                  Apple_HFS Users                   16.4 TB    disk8s2
/dev/disk9
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.2 TB     disk9
   1:                        EFI                         209.7 MB   disk9s1
   2:                  Apple_HFS Data                    3.2 TB     disk9s2


Code: Select all
greg:~ ghaskins$ diskutil coreStorage list
No CoreStorage logical volume groups found


Code: Select all
greg:~ ghaskins$ sudo zpool status -vx -T d
Thu Dec 20 10:30:23 2012
all pools are healthy


Code: Select all
greg:~ ghaskins$ sudo zpool status -v
  pool: Tank
 state: ONLINE
 scan: scrub canceled on Wed Dec 19 11:22:37 2012
config:

        NAME                                           STATE     READ WRITE CKSUM
        Tank                                           ONLINE       0     0     0
          raidz1-0                                     ONLINE       0     0     0
            GPTE_57A9AB72-098A-43FB-B61E-5D96ABB104ED  ONLINE       0     0     0  at disk4s2
            GPTE_F813FC3C-A75B-4B2D-86B4-8773B73C4F05  ONLINE       0     0     0  at disk7s2
            GPTE_76505CB5-3E6F-4CD2-BB56-177FB19F3BBE  ONLINE       0     0     0  at disk5s2
            GPTE_6564DE62-D915-42FA-A807-C9B10A2CD0B7  ONLINE       0     0     0  at disk1s2
            GPTE_B8CC1AD8-A621-42E0-BE42-1955F0024510  ONLINE       0     0     0  at disk3s2
            GPTE_9DCF9F3D-0F7B-4E2E-8DD9-9EE0E6FD5EDE  ONLINE       0     0     0  at disk6s2

errors: No known data errors
Last edited by ghaskins on Fri Dec 21, 2012 10:25 am, edited 1 time in total.

Re: sparsebundles w/ JHFS+ on top of ZFS

Post by ghaskins » Thu Dec 20, 2012 10:41 am

Since I no longer have my dataset in native ZFS, I can no longer run experiments against that data. However, I can reproduce the problems just by looking at the sparsebundles themselves, which now live in "Tank/Bundles" mounted at /Bundles.

For instance:

Code: Select all
greg:bands ghaskins$ pwd
/Bundles/Users.sparsebundle/bands
greg:bands ghaskins$ time ls
<snip>

real    0m25.884s
user    0m4.268s
sys     0m0.924s

greg:bands ghaskins$ ls | wc -l
  306352

Re: sparsebundles w/ JHFS+ on top of ZFS

Post by ghaskins » Thu Dec 20, 2012 10:43 am

I should also mention, as a related data point, that I had two copies of my ~2.3TB sparsebundle yesterday, and it took ZFS close to 5-6 hours to delete one of them.

Re: sparsebundles w/ JHFS+ on top of ZFS

Post by ghaskins » Thu Dec 20, 2012 10:51 am

Here are some details about the hardware:

  • "Early 2009" MacPro Quad-core (Nahelem Xeon), 2.93Ghz, 16GB 1066Ghz DDR
  • LSI 9207-8e, connected in PCIe2 x16 slot, using Astek A3DRV-HBA driver
  • Sans Digital TR8X+ 8-bay SAS enclosure (twin 8088 miniSAS connections to the 9207 HBA)
  • 6x 2TB WD "RE" WD2000FYYZ 7.2krpm, 64MB cache, 6Gbps SATA III drives, populating slots 3-8 in the enclosure (I think the "RE" means RAID Edition in WD nomenclature, which roughly translates to RAID-friendly TLER handling, IIUC)

Time Machine cannot currently be used to backup ZEVO volumes

Post by grahamperrin » Thu Dec 20, 2012 9:28 pm

ghaskins wrote:… I think we have the problem inverted. …


Yes – sorry. Focusing too much on the subject line, I misread the content of your opening post. I have edited my first reply so that newcomers to this topic are not confused by it.

Greg, you're correct. From ZEVO QuickStart Guide.pdf with Community Edition 1.1:

> Time Machine cannot currently be used to backup ZEVO volumes.

(I'll be interested in your reasons for not using zfs send and zfs receive to back up ZFS, but I shouldn't hijack this topic.)

HFS->sparsebundle->ZFS->cache->disks

Post by grahamperrin » Thu Dec 20, 2012 10:15 pm

ghaskins wrote:… I hope that ZFS is preserving sync barriers all the way through (HFS->sparsebundle->ZFS->cache->disks) …


disk image approaches to JHFS+ on ZFS: Consistency

Re: Time Machine cannot currently be used to backup ZEVO vol

Post by ghaskins » Fri Dec 21, 2012 10:32 am

grahamperrin wrote:Yes – sorry. Focusing too much on the subject line, I misread the content of your opening post. I have edited my first reply so that newcomers to this topic are not confused by it.


No problem, I just edited mine in a similar manner to reduce confusion/distraction.

grahamperrin wrote:(I'll be interested in your reasons for not using zfs send and zfs receive to backup ZFS, but I shouldn't hijack this topic.)


Actually, that is the long-term goal. The reasons I am not looking in that direction in the short term are:
  • I don't have a second ZFS box to back up to, but I do have a Synology with TM/rsyncd capability
  • I don't yet fully trust a full ZFS environment with my data, given my inexperience with it. Therefore, I will continue to back up my data with something external to the ZFS world for some time. And even once I migrate to a zfs send/recv type scheme, I will conservatively maintain a parallel solution until I am confident the backups are sound.

And the TM feature isn't a huge deal per se. I was already planning on using ChronoSync/rsync for my ZFS home directory before I ran into the other problems. As an aside, I have already discovered a flaw in my approach to using sparsebundles: it doesn't look like Time Machine will back up the contents of a sparsebundle these days. So even if I ultimately land on using HFS+ sparsebundles on top of ZFS, it looks like I will still need to rely on rsync or zfs send if I want backups.
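
For reference, the sort of snapshot-based send/receive cycle I have in mind for later would look roughly like this (assuming a second pool, here called "Backup", ever becomes available; the dataset and snapshot names are illustrative):

Code: Select all
# Initial full replication of the dataset that holds the bundle.
sudo zfs snapshot Tank/Bundles@base
sudo zfs send Tank/Bundles@base | sudo zfs receive Backup/Bundles
# Later runs send only the blocks changed since the previous snapshot.
sudo zfs snapshot Tank/Bundles@next
sudo zfs send -i Tank/Bundles@base Tank/Bundles@next | sudo zfs receive Backup/Bundles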

Re: sparsebundles w/ JHFS+ on top of ZFS

Post by thkunze » Wed Dec 26, 2012 8:07 pm

ghaskins wrote:… Performance is terrible overall …


Hi all,

I'm actually having the same grave performance problems: "ls -laR" takes forever. Under these circumstances, ZEVO is practically unusable for me.
I'm surprised that there are relatively few posts reporting such problems. Is there a solution out there that I simply don't know about yet?

Best Regards,
Thomas
