Need Help with a Backup Solution


First impressions of Amazon with Arq

Post by grahamperrin » Sun Apr 07, 2013 11:02 am

grahamperrin wrote:… Amazon Web Services Simple Storage Service, but I can't sign up – maybe because I have a debit card but, by choice, no credit card.

So for me, Arq is a non-starter.


Now:


… I was quickly confused about how much I should pay for a comprehensive backup – one including much more than my home directory, and so on.

I decided to:

  • not start the Arq agent automatically
  • remove the sole item (my Core Storage encrypted ZEVO ZFS home directory gjp22) from the sole backup set.

Screenshots

At a later date I might put my mind to cost effectiveness, availability and other aspects, but for now I'll prefer something simpler.

Re: Need Help with a Backup Solution

Post by ilovezfs » Wed Apr 10, 2013 4:44 am

Be careful about how you are estimating Glacier's cost. You will be charged for data retrieval based not only on how many retrieval requests you make, but also on how quickly you retrieve the data. See the question "How will I be charged when retrieving large amounts of data from Amazon Glacier?" in the FAQ:

http://aws.amazon.com/glacier/faqs/#How ... on_Glacier
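
To see why the retrieval rate matters, here is a rough back-of-envelope sketch. The formula is my reading of the 2013-era FAQ (fee roughly equals peak hourly retrieval rate in GB/hr × $0.01 × hours in the month, ignoring the free allowance), so treat the numbers as illustrative only:

Code:
# Assumed pricing model: fee = peak GB/hr x $0.01 x ~720 hours/month.
# Retrieving 300 GB in 4 hours (peak rate 75 GB/hr):
awk 'BEGIN { print (300 / 4) * 0.01 * 720 }'     # => 540 (dollars)
# The same 300 GB spread over thirty 4-hour sessions (2.5 GB/hr):
awk 'BEGIN { print (300 / 120) * 0.01 * 720 }'   # => 18 (dollars)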

However, there is an escape hatch here if you need your data quickly. You can actually mail them a physical hard drive, and they will load it up with your data, and mail it back to you for much less money than a fast retrieval over the Internet.

Re: Need Help with a Backup Solution

Post by ilovezfs » Wed Apr 10, 2013 5:01 am

Has anyone tried out CrashPlan with ZEVO yet?

Re: Need Help with a Backup Solution

Post by ilovezfs » Wed Apr 10, 2013 6:13 am

Just tried out CrashPlan + ZEVO on top of CoreStorage, and CrashPlan + ZEVO without CoreStorage, and it seems to work well. I was able to restore data on another machine with permissions and xattrs correctly reproduced.

Also, it seems to work well with AFP network volumes at no extra cost.

Another thing that looks promising is that you can choose to use a custom 448-bit key for the encryption, which you are responsible for not losing/forgetting. You can import the key from a local file or use a passphrase. Another choice is to use encryption + password, in which case they will escrow your key for you. All encryption/decryption seems to work locally only, unless you use their website-based recovery feature (no thanks).

By contrast, Backblaze's security looks troublesome. If I understand how it works correctly, in order to restore your data their system moves your data from their data servers to a recovery server, decrypts it on the recovery server, and then zips it up and sends it to you. I don't like the sound of that at all.

Hopefully someone who has been using CrashPlan for a while with ZEVO can chime in and let us know of any pitfalls.

Backblaze burns up metadata

Post by ilovezfs » Sat Apr 13, 2013 12:23 pm

Unless I'm missing something, Backblaze does not like metadata.

For example:

  • xattrs: kiss them goodbye
  • file creation time: sorry, can't help you there

It seems like it's throwing virtually all metadata away. CrashPlan seems a bit more respectful of metadata.

I suppose the obvious way around this is to sock everything away in sparsebundles, but that makes restores painful unless you limit the size of your sparsebundles to, say, less than 10G. Yuck.
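
If you do go the sparsebundle route, hdiutil makes the size cap easy to script; just a sketch, with hypothetical names and paths:

Code:
# Create a journaled HFS+ sparse bundle that can never grow past 10 GB:
hdiutil create -size 10g -type SPARSEBUNDLE -fs HFS+J \
    -volname ArchiveChunk01 ~/Archives/chunk01.sparsebundle
hdiutil attach ~/Archives/chunk01.sparsebundle
# ...copy the metadata-sensitive files onto /Volumes/ArchiveChunk01...
hdiutil detach /Volumes/ArchiveChunk01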

I'm hoping I'm wrong, and that there's some way to get your metadata back if you use Backblaze, but I'm not getting my hopes up.

Backblaze #48209 extended attributes are not restored

Post by grahamperrin » Sat Apr 13, 2013 12:43 pm

Probably not an issue with ZEVO … a copy of my request to Backblaze:

#48209 extended attributes are not restored

Comparing an original (on the desktop) with a restoration from Backblaze (in a folder named ooh on the desktop):

Code:
macbookpro08-centrim:~ gjp22$ xattr -l Desktop/Sophos.txt
com.apple.FinderInfo:
00000000 54 45 58 54 21 52 63 68 00 00 00 00 00 00 00 00 |TEXT!Rch........|
00000010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000020
com.apple.TextEncoding: UTF-8;134217984
macbookpro08-centrim:~ gjp22$ xattr -l Desktop/ooh/Sophos.txt


Is this an issue with restoration?

Or does Backblaze not back up extended attributes?

Re: Need Help with a Backup Solution

Post by ilovezfs » Sat Apr 13, 2013 1:15 pm

I don't think they're backing them up. If you double-click a zip file in Finder, Finder should properly expand it and restore the xattrs if they are in the zip file.

To see what would need to be in the zip file, try the following:

In Finder, right-click on any file you have with xattrs, then click "Compress." Next, go to Terminal and run /usr/bin/unzip on the zip file that Finder just created. You'll see your original file and a nasty old __MACOSX directory. Then run ls -al __MACOSX/ and you will see the hidden dotbar file containing the metadata. If Backblaze's zip files don't have __MACOSX directories, then the metadata is just not there. Alternatively, run /usr/bin/unzip -l on the zip file to see the entire list of files in the archive, including the __MACOSX entries.
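
Condensed into commands, the check looks something like this (file name hypothetical):

Code:
# After right-clicking somefile.txt in Finder and choosing "Compress":
/usr/bin/unzip -l somefile.txt.zip       # list entries; look for __MACOSX/
/usr/bin/unzip somefile.txt.zip -d /tmp/expanded
ls -al /tmp/expanded/__MACOSX/           # the hidden ._ "dotbar" file lives here
mdls /tmp/expanded/somefile.txt          # compare with mdls on the original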

Running unzip -l on the zip files they provide for restores shows a glorious lack of anything but the underlying file sans metadata.

Also try running mdls on your originals and your restores. Not pretty.

If we're really lucky, then server-side they are storing all of the metadata properly and this could be fixed later; or perhaps you might even get your metadata back if you have them mail you a drive instead of doing the zip file thing. My guess, though, is that they are just shoving everything you back up onto some dumb NFS v3 server, not storing any dotbar files, and calling it a day.

Re: Need Help with a Backup Solution

Post by raattgift » Sun Apr 14, 2013 7:03 am

File-based vdevs are useful for off-site backups. Create the files at whatever size you want, with whatever replication level you want, send in whatever data you want, export the pool, and back up the files. You should be able to import the pool again on any platform supporting at least the same zpool and zfs versions ("zpool upgrade"), and recover the datasets however you want.

The key here is that you can set a replication level and other pool and dataset properties, and that you can use zfs send/receive, including for incremental backups. You can of course also create whatever datasets you like and copy data in using tools like "/usr/bin/rsync -avhPE /Volumes/HFS-data/ /Volumes/filepool/HFS-data-archive/".

Here's an example, using a small amount of data. Imagine the mkfile being sized for DVD-Rs, for example, so that after export you could burn the vdev files to disc and send them away in a couple of envelopes, and at the destination copy the data back from the DVD-Rs into files. Lose an envelope? If the pool is raidz3, no problem: you can still import a degraded pool. Scratches on one or more DVDs leading to lost blocks? Also no problem: a pool with sufficient replication will likely self-repair.

The vdev files are also suitable for network transfers of whatever type you want; there's nothing special about the file vdevs in terms of metadata - it's only the data in the files themselves that's needed.

Code:
cla:ssdpool # mkfile 256m d1
cla:ssdpool # mkfile 256m d2
cla:ssdpool # mkfile 256m d3
cla:ssdpool # mkfile 256m d4

cla:ssdpool # zpool create -O checksum=sha256 filepool raidz2 /Volumes/ssdpool/d1 /Volumes/ssdpool/d2 /Volumes/ssdpool/d3 /Volumes/ssdpool/d4

cla:ssdpool # zpool status -v filepool
  pool: filepool
 state: ONLINE
 scan: none requested
config:

   NAME                     STATE     READ WRITE CKSUM
   filepool                 ONLINE       0     0     0
     raidz2-0               ONLINE       0     0     0
       /Volumes/ssdpool/d1  ONLINE       0     0     0
       /Volumes/ssdpool/d2  ONLINE       0     0     0
       /Volumes/ssdpool/d3  ONLINE       0     0     0
       /Volumes/ssdpool/d4  ONLINE       0     0     0

errors: No known data errors

cla:ssdpool # zpool iostat -v filepool
                             capacity       operations       bandwidth
pool                      alloc    free    read   write    read   write
-----------------------  ------  ------  ------  ------  ------  ------
filepool                 1.11Mi   999Mi       0      16     369   157Ki
  raidz2                 1.11Mi   999Mi       0      16     369   157Ki
    /Volumes/ssdpool/d1       -       -       0      11     529   109Ki
    /Volumes/ssdpool/d2       -       -       0      11     639   109Ki
    /Volumes/ssdpool/d3       -       -       0      11     569   109Ki
    /Volumes/ssdpool/d4       -       -       0      11     549   109Ki
-----------------------  ------  ------  ------  ------  ------  ------

cla:ssdpool # zfs send -v Donkey@2013-04-11-055900   | zfs receive -v -u  filepool/Donkey
sending from @ to Donkey@2013-04-11-055900
receiving full stream of Donkey@2013-04-11-055900 into filepool/Donkey@2013-04-11-055900
received 176KiB stream in 1 seconds (176KiB/sec)

cla:ssdpool # zfs list -o space -t all -r filepool
NAME                                AVAIL    USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
filepool                            465Mi   725Ki         0   432Ki              0      293Ki
filepool/Donkey                     465Mi   176Ki         0   176Ki              0          0
filepool/Donkey@2013-04-11-055900       -       0         -       -              -          -

cla:ssdpool # zpool list filepool
NAME        SIZE   ALLOC    FREE     CAP  HEALTH  ALTROOT
filepool  1000Mi  1.49Mi   999Mi      0%  ONLINE  -

cla:ssdpool # zpool export filepool

cla:ssdpool # ls -l d1 d2 d3 d4
-rw------T  1 root  wheel  268435456 14 Apr 12:46 d1
-rw------T  1 root  wheel  268435456 14 Apr 12:46 d2
-rw------T  1 root  wheel  268435456 14 Apr 12:46 d3
-rw------T  1 root  wheel  268435456 14 Apr 12:46 d4

cla:ssdpool # /usr/bin/rsync -avhPE d[1-4] some.backup.host:/Volumes/archive



Naturally you could keep the exported files in a directory that is backed up by your favourite "cloud" system. It would even be reasonably efficient if the "cloud" system archives only changed ranges within files, like rsync can.

Of course, retrieving *one* file would require a bit of work: get the files, zpool import, and copy from the datasets directly; or alternatively, zfs clone a snapshot first and copy from the clone.
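
The single-file dance might look something like this (restore paths hypothetical; pool and snapshot names follow the example above):

Code:
# Import the retrieved vdev files, clone the snapshot read-write,
# copy the one file out, then clean up:
zpool import -d /Volumes/retrieved filepool
zfs clone filepool/Donkey@2013-04-11-055900 filepool/restore
cp -p /Volumes/filepool/restore/path/to/file ~/Desktop/
zfs destroy filepool/restore
zpool export filepool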

There is thus value in using a file-based "cloud" backup system, where you can retrieve the one single file that you've just damaged or destroyed. File-based backups, however, have their deficiencies too, as you have been discussing in this thread. They are also horrendous for recovering large numbers of files.

Multiple, overlapping backup strategies are useful because you can turn to whatever backup set has the best fitting restoration/recovery scheme for different emergencies.

Re: Need Help with a Backup Solution

Post by ilovezfs » Sun Apr 14, 2013 7:52 am

raattgift, which cloud provider(s) do you use (if any) for your ZFS files and/or file-based vdevs?

For anyone relying on rsync at all, I recommend using a newer version than the one Apple supplies: /usr/bin/rsync is version 2.6.9, but the current version of rsync is 3.0.9. (MacPorts has it, or you can compile it yourself, of course.) The newer version does a better job of handling OS X metadata (e.g., crtimes).

The options have changed slightly. I use the following most of the time:
rsync -aXANvhP

Also take a look at --fileflags, --hfs-compression, and --protect-decmpfs.
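
Put together, a MacPorts rsync 3 run might look like this (the install path and the source/destination paths are assumptions):

Code:
# /opt/local/bin is the usual MacPorts prefix. -A acls, -X xattrs,
# -N crtimes (from the patches MacPorts applies); the long options
# preserve BSD file flags and HFS+ compression.
/opt/local/bin/rsync -aXANvhP \
    --fileflags --hfs-compression --protect-decmpfs \
    /Volumes/HFS-data/ /Volumes/backup/HFS-data/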

Link

Post by grahamperrin » Sun Apr 14, 2013 9:40 am

raattgift wrote:File-based vdevs are …


… supported but only for testing purposes.

File-based vdevs
