Local/Remote Backups TimeMachine style, automatic scrub

Local/Remote Backups TimeMachine style, automatic scrub

Post by jollyjinx » Tue Sep 18, 2012 2:49 am

If you are thinking about backing up your ZFS volumes: I've been using ZEVO for some time now, and I've created TimeMachine-style backup scripts for local and remote backups, as well as an automatic scrub script. You can get them on GitHub:

https://github.com/jollyjinx/ZFS-TimeMachine

I've been taking snapshots every five minutes and sending them to one local and several remote backup pools (Mac and FreeBSD) for months now, and it works fine for me.
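
One way to drive it, for example from root's crontab (the script path and pool names below are placeholders, not something from the repository):

Code: Select all
# Example only - script path and pool names are placeholders.
# Run the backup every five minutes, creating a snapshot on the source each time.
*/5 * * * * /usr/local/bin/zfstimemachinebackup.perl --sourcepool=puddle --destinationpool=tank/backups/puddle --snapshotstokeeponsource=100 --createsnapshotonsource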

Patrick aka Jolly

zfstimemachinebackup.perl with verbosity

Post by grahamperrin » Wed Sep 19, 2012 4:24 pm

Thanks Jolly!

I like a little verbosity for both send and receive, so in my local copy of your Perl script I added -v in three places. The result, for pool 'gjp22' (my home directory):

Code: Select all
sh-3.2$ cd ~/Documents/com/github/jollyjinx/ZFS-TimeMachine/current && sudo ./zfstimemachinebackup.perl --sourcepool=gjp22 --destinationpool=tall/backups/gjp22 --snapshotstokeeponsource=100 --createsnapshotonsource
Password:
Last common snapshot:   2012-09-18-210301
sending from @2012-09-18-210301 to gjp22@2012-09-18-215455
receiving incremental stream of gjp22@2012-09-18-215455 into tall/backups/gjp22@2012-09-18-215455
sending from @2012-09-18-215455 to gjp22@2012-09-19-073301
received 164MiB stream in 30 seconds (5.47MiB/sec)
receiving incremental stream of gjp22@2012-09-19-073301 into tall/backups/gjp22@2012-09-19-073301
sending from @2012-09-19-073301 to gjp22@2012-09-19-102342
received 236MiB stream in 42 seconds (5.63MiB/sec)
receiving incremental stream of gjp22@2012-09-19-102342 into tall/backups/gjp22@2012-09-19-102342
sending from @2012-09-19-102342 to gjp22@2012-09-19-112342
received 339MiB stream in 37 seconds (9.16MiB/sec)
receiving incremental stream of gjp22@2012-09-19-112342 into tall/backups/gjp22@2012-09-19-112342
sending from @2012-09-19-112342 to gjp22@2012-09-19-122342
received 241MiB stream in 36 seconds (6.69MiB/sec)
receiving incremental stream of gjp22@2012-09-19-122342 into tall/backups/gjp22@2012-09-19-122342
sending from @2012-09-19-122342 to gjp22@2012-09-19-132341
received 104MiB stream in 22 seconds (4.73MiB/sec)
receiving incremental stream of gjp22@2012-09-19-132341 into tall/backups/gjp22@2012-09-19-132341
sending from @2012-09-19-132341 to gjp22@2012-09-19-142341
received 85.5MiB stream in 17 seconds (5.03MiB/sec)
receiving incremental stream of gjp22@2012-09-19-142341 into tall/backups/gjp22@2012-09-19-142341
sending from @2012-09-19-142341 to gjp22@2012-09-19-152341
received 71.1MiB stream in 15 seconds (4.74MiB/sec)
receiving incremental stream of gjp22@2012-09-19-152341 into tall/backups/gjp22@2012-09-19-152341
sending from @2012-09-19-152341 to gjp22@2012-09-19-162340
received 109MiB stream in 20 seconds (5.46MiB/sec)
receiving incremental stream of gjp22@2012-09-19-162340 into tall/backups/gjp22@2012-09-19-162340
sending from @2012-09-19-162340 to gjp22@2012-09-19-172850
received 68.7MiB stream in 16 seconds (4.29MiB/sec)
receiving incremental stream of gjp22@2012-09-19-172850 into tall/backups/gjp22@2012-09-19-172850
sending from @2012-09-19-172850 to gjp22@2012-09-19-194203
received 162MiB stream in 27 seconds (6.00MiB/sec)
receiving incremental stream of gjp22@2012-09-19-194203 into tall/backups/gjp22@2012-09-19-194203
sending from @2012-09-19-194203 to gjp22@2012-09-19-204203
received 216MiB stream in 33 seconds (6.55MiB/sec)
receiving incremental stream of gjp22@2012-09-19-204203 into tall/backups/gjp22@2012-09-19-204203
sending from @2012-09-19-204203 to gjp22@2012-09-19-214203
received 414MiB stream in 44 seconds (9.42MiB/sec)
receiving incremental stream of gjp22@2012-09-19-214203 into tall/backups/gjp22@2012-09-19-214203
sending from @2012-09-19-214203 to gjp22@2012-09-19-221038
received 149MiB stream in 24 seconds (6.21MiB/sec)
receiving incremental stream of gjp22@2012-09-19-221038 into tall/backups/gjp22@2012-09-19-221038
received 71.1MiB stream in 14 seconds (5.08MiB/sec)
Will keep snapshot:  2012-09-18-210301=1347998581 Backup in bucket: $backupbucket{90000}=2012-09-18-210301
Will keep snapshot:  2012-09-14-182143=1347643303 Backup in bucket: $backupbucket{442800}=2012-09-14-182143
Will keep snapshot:  2012-09-10-201915=1347304755 Backup in bucket: $backupbucket{777600}=2012-09-10-201915
Will keep snapshot:  2012-09-05-195520=1346871320 Backup in bucket: $backupbucket{1209600}=2012-09-05-195520
Will keep snapshot:  2012-08-31-204330=1346442210 Backup in bucket: $backupbucket{1641600}=2012-08-31-204330
Will keep snapshot:  2012-08-30-221432=1346361272 Backup in bucket: $backupbucket{1728000}=2012-08-30-221432
Will remove snapshot:2012-08-30-171355=1346343235 Backup in bucket: $backupbucket{1728000}=2012-08-30-221432
Will remove snapshot:2012-08-30-064942=1346305782 Backup in bucket: $backupbucket{1728000}=2012-08-30-221432
Will keep snapshot:  2012-08-29-165742=1346255862 Backup in bucket: $backupbucket{1814400}=2012-08-29-165742
Will keep snapshot:  2012-08-22-185347=1345658027 Backup in bucket: $backupbucket{2419200}=2012-08-22-185347
Will keep snapshot:  2012-08-21-182308=1345569788 Backup in bucket: $backupbucket{2505600}=2012-08-21-182308
Will keep snapshot:  2012-08-19-110836=1345370916 Backup in bucket: $backupbucket{2678400}=2012-08-19-110836
Will keep snapshot:  2012-08-18-102525=1345281925 Backup in bucket: $backupbucket{2764800}=2012-08-18-102525
Will keep snapshot:  2012-08-14-200352=1344971032 Backup in bucket: $backupbucket{3110400}=2012-08-14-200352
Will keep snapshot:  2012-08-09-204012=1344541212 Backup in bucket: $backupbucket{3542400}=2012-08-09-204012
Will keep snapshot:  2012-08-06-115653=1344250613 Backup in bucket: $backupbucket{3801600}=2012-08-06-115653
Will keep snapshot:  2012-07-31-114008=1343731208 Backup in bucket: $backupbucket{4320000}=2012-07-31-114008
Will keep snapshot:  2012-07-26-211243=1343333563 Backup in bucket: $backupbucket{4752000}=2012-07-26-211243
Will remove snapshot:2012-07-26-033322=1343270002 Backup in bucket: $backupbucket{4752000}=2012-07-26-211243
Will keep snapshot:  2012-07-23-230532=1343081132 Backup in bucket: $backupbucket{4924800}=2012-07-23-230532
sh-3.2$
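
For reference, each of those pipelines boils down to something like the following - a sketch, not the script's exact code, with snapshot names taken from the log above:

Code: Select all
# Adding -v to both ends of the pipe produces the "sending from" and
# "receiving incremental stream" progress lines shown above.
zfs send -v -i gjp22@2012-09-18-210301 gjp22@2012-09-18-215455 \
    | zfs receive -v tall/backups/gjp22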

hourly snapshots – success

Post by grahamperrin » Wed Sep 19, 2012 4:31 pm

Side note

The hourly snapshots indicated above were created before I experimented with Jolly's scripts, so maybe I had some success with the zfs_autosnapshots-hourly.plist that is integral to version 1.1 of ZEVO Community Edition.

If/when I figure things out, I'll add something under automatic snapshots and scrubs.
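
In the meantime, a quick generic check for whether such a job is loaded under launchd (nothing ZEVO-specific about this):

Code: Select all
# List loaded launchd jobs whose label mentions zfs.
sudo launchctl list | grep -i zfs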

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by si-ghan-bi » Mon Oct 01, 2012 5:40 pm

jollyjinx wrote:I've been taking snapshots every five minutes and sending them to one local and several remote backup pools (Mac and FreeBSD) for months now, and it works fine for me.


According to the description on GitHub, you keep the last n snapshots locally and keep older ones according to specific rules on the remote server.
I would be interested not only in sending snapshots to a remote machine, but also in keeping old snapshots locally according to the rules you apply on the destination: one per hour, one per day, one per week, and so on.

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by jollyjinx » Tue Oct 02, 2012 3:12 am

The script only keeps the last n syncs on the source so that there is a point to restart from if things go wrong, or if you create snapshots manually. You could add TimeMachine-style removal of snapshots on the source as well by adding a couple of lines to the script - that's why I've put it on GitHub.
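
The idea fits in a few lines - roughly like this sketch (not the script's actual code; the pool name and bucket widths are placeholders, and it only prints the destroy commands instead of running them):

Code: Select all
# Keep the newest snapshot per bucket; the bucket width grows with age
# (hourly for the first day, daily for the first month, weekly beyond).
zfs list -H -p -t snapshot -o name,creation -S creation -r puddle \
| awk -v now="$(date +%s)" '{
    age = now - $2
    width = age < 86400 ? 3600 : (age < 2592000 ? 86400 : 604800)
    bucket = int(age / width) * width
    if (bucket in keep) print "zfs destroy " $1   # older snapshot in an occupied bucket
    else keep[bucket] = $1                        # newest snapshot claims the bucket
}'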

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by si-ghan-bi » Tue Oct 02, 2012 6:52 am

I can try, but the last time I briefly read the code I found it a bit obscure; that's why I asked you. Anyway, I will check the code again as soon as I have more time.

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by mk01 » Fri Oct 05, 2012 12:03 pm

Hi,

I modified this script: https://github.com/dajhorn/zfs-auto-snapshot . Besides a small "porting" to OS X, I added the ability to copy (or move) snapshots from the local pool to a different one. My pool resides on a two-disk stripe, and zfs-auto-snapshot does Time Machine-like backups - the backup pool resides inside a sparse bundle disk image kept on the Time Machine volume.
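
Roughly, the sparse bundle arrangement looks like this (a sketch only - the size, names and device node are placeholders, and the details vary):

Code: Select all
# Create a sparse bundle, attach it without mounting, then build a pool on it.
hdiutil create -size 500g -type SPARSEBUNDLE -layout NONE ~/backups.sparsebundle
hdiutil attach -nomount ~/backups.sparsebundle   # prints the device, e.g. /dev/disk5
sudo zpool create backups /dev/disk5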

If someone is interested, you can email me and I will share the modified version - I don't have a website. (matuskral at aol dot com)

br,
Matus

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by grahamperrin » Thu Oct 25, 2012 2:21 am

Thanks again to Patrick and others for scripting these things.

With automation in mind: would it be feasible to add options allowing the user to specify any one or more of the following? (A sketch of the kind of check I have in mind follows the list.)

  1. a percentage to be made free – at the pool level – before reception
  2. an amount, in MB, to be made free – at the pool level – before reception
  3. a percentage to be made free – at the file system level – before reception
  4. an amount, in MB, to be made free – at the file system level – before reception
  5. a percentage to be made free – at the pool level – after reception
  6. an amount, in MB, to be made free – at the pool level – after reception
  7. a percentage to be made free – at the file system level – after reception
  8. an amount, in MB, to be made free – at the file system level – after reception
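
For instance, option 4 might amount to a check like this before the receive - a sketch only, with the dataset name and threshold as placeholders:

Code: Select all
# Skip the receive unless the destination file system has at least 10 GiB free.
free=$(zfs get -H -p -o value available tank/backups)
if [ "$free" -lt $((10 * 1024 * 1024 * 1024)) ]; then
    echo "less than 10 GiB available on the destination; skipping receive" >&2
    exit 1
fi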

Background

A screenshot in a small collection of files on Wuala demonstrates what can happen if the user is, like me, careless about free space on the receiving file system.

Also: I don't have links handy, but I'm aware of recent/ongoing work by ZFS developers to more effectively estimate – before a send begins – the amount of data to be sent.
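
(For what it's worth, where an installed zfs supports a dry run, the estimate looks something like this - I can't say whether ZEVO's build accepts these flags:)

Code: Select all
# Dry run: -n sends nothing; with -v the estimated stream size is printed.
zfs send -n -v -i gjp22@2012-09-19-214203 gjp22@2012-09-19-221038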

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by shuman » Thu Oct 25, 2012 6:41 am

Can someone give me some guidance on how to install the snapshot script linked above (on GitHub)? I'm never quite sure what to do with code like that. Also, does anyone have direct experience with how well it works on the Mac with ZEVO?

- Chris
- Mac Mini (Late 2012), 10.8.5, 16GB memory, pool - 2 Mirrored 3TB USB 3.0 External Drives

experience with zfstimemachinebackup.perl

Post by grahamperrin » Fri Oct 26, 2012 2:11 am

shuman wrote:… how well it works on the Mac with ZEVO? …


I'm extremely pleased with the zfstimemachinebackup.perl script offered by Patrick.

Whilst backups do feature in two recent reports (bus issue after Blackmagic Disk Speed Test during backup; root file system not mounted, some children mounted), my current guess is that nothing in the script contributed to those issues.
