Local/Remote Backups TimeMachine style, automatic scrub

Moderators: jhartley, MSR734, nola

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by jollyjinx » Mon Oct 29, 2012 6:39 am

@grahamperrin, yes, those options could be added – and I will probably add something like that, as I will myself look into keeping the destination disk at a single clean state.

AFAIK ZEVO currently does not support the additions needed to find out beforehand how much space a send of a snapshot will need.
jollyjinx
Posts: 60
Joined: Sun Sep 16, 2012 12:40 pm
Location: Munich - Germany

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by grahamperrin » Mon Oct 29, 2012 2:27 pm

jollyjinx wrote:AFAIK ZEVO currently does not support the additions needed to find out beforehand how much space a send of a snapshot will need.


I vaguely recall running something, a few weeks ago, that gave an estimate. My memory is probably muddled, because now:

Code: Select all
macbookpro08-centrim:~ gjp22$ zfs list -t snapshot | grep 2012-10-29
gjp22@2012-10-29-054917   132Mi       -   337Gi  -
gjp22@2012-10-29-064917   123Mi       -   337Gi  -
gjp22@2012-10-29-074917   157Mi       -   337Gi  -
gjp22@2012-10-29-111852   151Mi       -   337Gi  -
gjp22@2012-10-29-145710   146Mi       -   337Gi  -
gjp22@2012-10-29-164655   163Mi       -   337Gi  -
gjp22@2012-10-29-183815   260Mi       -   337Gi  -
macbookpro08-centrim:~ gjp22$ sudo zfs send -nv gjp22@2012-10-29-164655 gjp22@2012-10-29-183815
Password:
invalid option 'n'
usage:
   send [-RDp] [-[iI] snapshot] <snapshot>

For the property list, run: zfs set|get
macbookpro08-centrim:~ gjp22$


– an experiment based on illumos gate Feature #1646, "zfs send" should estimate size of stream – illumos.org (resolved 2011-11-06).

Maybe what I recall seeing was something at the illumos ZFS Day.
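
Until a dry-run estimate is available here, the USED column of `zfs list -t snapshot` can give a rough lower bound – it counts only the space unique to each snapshot, not the full stream size, so treat it as a ballpark figure only. A sketch, parsing listing output like the one above:

```python
# Rough lower-bound estimate of incremental stream sizes from the
# USED column of `zfs list -t snapshot`, for platforms (like ZEVO
# here) whose `zfs send` lacks the -n dry-run estimate.
# NOTE: USED is space unique to a snapshot, not the stream size.

UNITS = {"Ki": 1 << 10, "Mi": 1 << 20, "Gi": 1 << 30}

def parse_size(text):
    """Convert e.g. '132Mi' to bytes."""
    for suffix, factor in UNITS.items():
        if text.endswith(suffix):
            return int(float(text[: -len(suffix)]) * factor)
    return int(text)

def estimate_from_listing(listing):
    """Sum the USED column over the listed snapshots."""
    total = 0
    for line in listing.strip().splitlines():
        fields = line.split()
        total += parse_size(fields[1])
    return total

listing = """
gjp22@2012-10-29-164655   163Mi       -   337Gi  -
gjp22@2012-10-29-183815   260Mi       -   337Gi  -
"""
print(estimate_from_listing(listing) // (1 << 20), "MiB, rough lower bound")
```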
grahamperrin
Posts: 1596
Joined: Fri Sep 14, 2012 10:21 pm
Location: Brighton and Hove, United Kingdom

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by grahamperrin » Sat Nov 03, 2012 5:39 pm

With verbosity added to the backup script, I'm curious about the order of output.

Always (or nearly always) at the beginning:

sending … 
receiving …
sending … 
received …
receiving …
sending …
received … 

– and so on, like there's an overlap of some sort.

Today, in the midst of a backup, something like:

* a queue of four sends
* then reception of something that preceded those four
* then the 'receiving … received' pair for the four in a row:

Code: Select all
sending from @2012-11-03-163121 to zhandy@2012-11-03-173120
received 1.01GiB stream in 222 seconds (4.67MiB/sec)
receiving incremental stream of zhandy@2012-11-03-173120 into tall/backups/zhandy@2012-11-03-173120
sending from @2012-11-03-173120 to zhandy@2012-11-03-183120
sending from @2012-11-03-183120 to zhandy@2012-11-03-193120
sending from @2012-11-03-193120 to zhandy@2012-11-03-203120
sending from @2012-11-03-203120 to zhandy@2012-11-03-203439
received 975KiB stream in 6 seconds (162KiB/sec)
receiving incremental stream of zhandy@2012-11-03-183120 into tall/backups/zhandy@2012-11-03-183120
received 312B stream in 10 seconds (31B/sec)
receiving incremental stream of zhandy@2012-11-03-193120 into tall/backups/zhandy@2012-11-03-193120
received 312B stream in 5 seconds (62B/sec)
receiving incremental stream of zhandy@2012-11-03-203120 into tall/backups/zhandy@2012-11-03-203120
received 312B stream in 8 seconds (39B/sec)
receiving incremental stream of zhandy@2012-11-03-203439 into tall/backups/zhandy@2012-11-03-203439
received 312B stream in 6 seconds (52B/sec)


In full:

Code: Select all
macbookpro08-centrim:~ gjp22$ cd ~/Documents/com/github/jollyjinx/ZFS-TimeMachine/current && sudo ./zfstimemachinebackup.perl --sourcepool=zhandy --destinationpool=tall/backups/zhandy --snapshotstokeeponsource=100 --createsnapshotonsource
Last common snapshot:   2012-10-30-212704
sending from @2012-10-30-212704 to zhandy@2012-11-02-060126
receiving incremental stream of zhandy@2012-11-02-060126 into tall/backups/zhandy@2012-11-02-060126
sending from @2012-11-02-060126 to zhandy@2012-11-02-072338
received 717MiB stream in 127 seconds (5.65MiB/sec)
receiving incremental stream of zhandy@2012-11-02-072338 into tall/backups/zhandy@2012-11-02-072338
sending from @2012-11-02-072338 to zhandy@2012-11-02-083135
received 6.10MiB stream in 14 seconds (446KiB/sec)
receiving incremental stream of zhandy@2012-11-02-083135 into tall/backups/zhandy@2012-11-02-083135
sending from @2012-11-02-083135 to zhandy@2012-11-02-093135
received 8.84MiB stream in 25 seconds (362KiB/sec)
receiving incremental stream of zhandy@2012-11-02-093135 into tall/backups/zhandy@2012-11-02-093135
sending from @2012-11-02-093135 to zhandy@2012-11-02-230734
received 1.14GiB stream in 210 seconds (5.58MiB/sec)
receiving incremental stream of zhandy@2012-11-02-230734 into tall/backups/zhandy@2012-11-02-230734
sending from @2012-11-02-230734 to zhandy@2012-11-03-044727
sending from @2012-11-03-044727 to zhandy@2012-11-03-054727
sending from @2012-11-03-054727 to zhandy@2012-11-03-064726
sending from @2012-11-03-064726 to zhandy@2012-11-03-074726
sending from @2012-11-03-074726 to zhandy@2012-11-03-084726
sending from @2012-11-03-084726 to zhandy@2012-11-03-100749
sending from @2012-11-03-100749 to zhandy@2012-11-03-110749
sending from @2012-11-03-110749 to zhandy@2012-11-03-120749
sending from @2012-11-03-120749 to zhandy@2012-11-03-133121
received 2.53GiB stream in 449 seconds (5.78MiB/sec)
receiving incremental stream of zhandy@2012-11-03-044727 into tall/backups/zhandy@2012-11-03-044727
received 312B stream in 14 seconds (22B/sec)
receiving incremental stream of zhandy@2012-11-03-054727 into tall/backups/zhandy@2012-11-03-054727
received 312B stream in 15 seconds (20B/sec)
receiving incremental stream of zhandy@2012-11-03-064726 into tall/backups/zhandy@2012-11-03-064726
received 312B stream in 15 seconds (20B/sec)
receiving incremental stream of zhandy@2012-11-03-074726 into tall/backups/zhandy@2012-11-03-074726
received 312B stream in 15 seconds (20B/sec)
receiving incremental stream of zhandy@2012-11-03-084726 into tall/backups/zhandy@2012-11-03-084726
received 312B stream in 14 seconds (22B/sec)
receiving incremental stream of zhandy@2012-11-03-100749 into tall/backups/zhandy@2012-11-03-100749
received 312B stream in 26 seconds (12B/sec)
receiving incremental stream of zhandy@2012-11-03-110749 into tall/backups/zhandy@2012-11-03-110749
received 312B stream in 30 seconds (10B/sec)
receiving incremental stream of zhandy@2012-11-03-120749 into tall/backups/zhandy@2012-11-03-120749
received 312B stream in 26 seconds (12B/sec)
receiving incremental stream of zhandy@2012-11-03-133121 into tall/backups/zhandy@2012-11-03-133121
sending from @2012-11-03-133121 to zhandy@2012-11-03-143121
received 572MiB stream in 117 seconds (4.89MiB/sec)
receiving incremental stream of zhandy@2012-11-03-143121 into tall/backups/zhandy@2012-11-03-143121
sending from @2012-11-03-143121 to zhandy@2012-11-03-153121
received 3.66GiB stream in 575 seconds (6.52MiB/sec)
receiving incremental stream of zhandy@2012-11-03-153121 into tall/backups/zhandy@2012-11-03-153121
sending from @2012-11-03-153121 to zhandy@2012-11-03-163121
received 1.27GiB stream in 206 seconds (6.29MiB/sec)
receiving incremental stream of zhandy@2012-11-03-163121 into tall/backups/zhandy@2012-11-03-163121
sending from @2012-11-03-163121 to zhandy@2012-11-03-173120
received 1.01GiB stream in 222 seconds (4.67MiB/sec)
receiving incremental stream of zhandy@2012-11-03-173120 into tall/backups/zhandy@2012-11-03-173120
sending from @2012-11-03-173120 to zhandy@2012-11-03-183120
sending from @2012-11-03-183120 to zhandy@2012-11-03-193120
sending from @2012-11-03-193120 to zhandy@2012-11-03-203120
sending from @2012-11-03-203120 to zhandy@2012-11-03-203439
received 975KiB stream in 6 seconds (162KiB/sec)
receiving incremental stream of zhandy@2012-11-03-183120 into tall/backups/zhandy@2012-11-03-183120
received 312B stream in 10 seconds (31B/sec)
receiving incremental stream of zhandy@2012-11-03-193120 into tall/backups/zhandy@2012-11-03-193120
received 312B stream in 5 seconds (62B/sec)
receiving incremental stream of zhandy@2012-11-03-203120 into tall/backups/zhandy@2012-11-03-203120
received 312B stream in 8 seconds (39B/sec)
receiving incremental stream of zhandy@2012-11-03-203439 into tall/backups/zhandy@2012-11-03-203439
received 312B stream in 6 seconds (52B/sec)
Snapshots to delete on source: ,2012-09-19-214719
Will keep snapshot:  2012-10-30-212704=1351632424 Backup in bucket: $backupbucket{342000}=2012-10-30-212704
Will keep snapshot:  2012-10-28-211212=1351458732 Backup in bucket: $backupbucket{514800}=2012-10-28-211212
Will keep snapshot:  2012-10-27-140913=1351343353 Backup in bucket: $backupbucket{604800}=2012-10-27-140913
Will keep snapshot:  2012-10-26-072610=1351232770 Backup in bucket: $backupbucket{691200}=2012-10-26-072610
Will keep snapshot:  2012-10-25-201900=1351192740 Backup in bucket: $backupbucket{777600}=2012-10-25-201900
Will remove snapshot:2012-10-25-053242=1351139562 Backup in bucket: $backupbucket{777600}=2012-10-25-201900
Will keep snapshot:  2012-10-23-200739=1351019259 Backup in bucket: $backupbucket{950400}=2012-10-23-200739
Will keep snapshot:  2012-10-22-215619=1350939379 Backup in bucket: $backupbucket{1036800}=2012-10-22-215619
Will keep snapshot:  2012-10-21-203838=1350848318 Backup in bucket: $backupbucket{1123200}=2012-10-21-203838
Will keep snapshot:  2012-10-20-220252=1350766972 Backup in bucket: $backupbucket{1209600}=2012-10-20-220252
Will keep snapshot:  2012-10-19-191515=1350670515 Backup in bucket: $backupbucket{1296000}=2012-10-19-191515
Will keep snapshot:  2012-10-18-200546=1350587146 Backup in bucket: $backupbucket{1382400}=2012-10-18-200546
Will keep snapshot:  2012-10-16-220446=1350421486 Backup in bucket: $backupbucket{1555200}=2012-10-16-220446
Will keep snapshot:  2012-10-15-075739=1350284259 Backup in bucket: $backupbucket{1641600}=2012-10-15-075739
Will keep snapshot:  2012-10-13-210616=1350158776 Backup in bucket: $backupbucket{1814400}=2012-10-13-210616
Will keep snapshot:  2012-10-12-082009=1350026409 Backup in bucket: $backupbucket{1900800}=2012-10-12-082009
Will keep snapshot:  2012-10-10-090408=1349856248 Backup in bucket: $backupbucket{2073600}=2012-10-10-090408
Will keep snapshot:  2012-10-08-192418=1349720658 Backup in bucket: $backupbucket{2246400}=2012-10-08-192418
Will keep snapshot:  2012-10-05-033536=1349404536 Backup in bucket: $backupbucket{2505600}=2012-10-05-033536
Will keep snapshot:  2012-10-01-210250=1349121770 Backup in bucket: $backupbucket{2851200}=2012-10-01-210250
Will keep snapshot:  2012-09-30-190307=1349028187 Backup in bucket: $backupbucket{2937600}=2012-09-30-190307
Will keep snapshot:  2012-09-28-092100=1348820460 Backup in bucket: $backupbucket{3110400}=2012-09-28-092100
Will keep snapshot:  2012-09-27-065807=1348725487 Backup in bucket: $backupbucket{3196800}=2012-09-27-065807
Will keep snapshot:  2012-09-22-195223=1348339943 Backup in bucket: $backupbucket{3628800}=2012-09-22-195223
Will keep snapshot:  2012-09-21-065142=1348206702 Backup in bucket: $backupbucket{3715200}=2012-09-21-065142
macbookpro08-centrim:current gjp22$


– there are nine sends in a row, before a reception.

Are these 'queues' normal? Or peculiar to ZFS on OS X?

Just curious.
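
One way to read the pattern: `zfs send | zfs receive` are two concurrent processes joined by a pipe, so the sender can run ahead by as much data as the pipe (plus any receive-side buffering) holds, and several "sending …" lines appear before the matching "received …" lines. A toy model of that effect, with a bounded buffer standing in for the pipe (capacity and snapshot names are made up for illustration):

```python
from collections import deque

# Toy model of `zfs send | zfs receive`: the sender queues streams
# into the pipe's buffer without waiting, and only blocks once the
# buffer is full; the receiver then drains it. This reproduces the
# "queue of sends, then a run of receives" seen in the script output.

PIPE_CAPACITY = 4               # how many streams the buffer holds
pending = [f"snap-{i}" for i in range(1, 6)]
buffer = deque()
log = []

while pending or buffer:
    if pending and len(buffer) < PIPE_CAPACITY:
        snap = pending.pop(0)
        log.append(f"sending {snap}")
        buffer.append(snap)     # sender runs ahead
    else:
        # buffer full (or nothing left to send): receiver catches up
        log.append(f"received {buffer.popleft()}")

print("\n".join(log))
```

The simulated log shows four sends in a row, then the reception of the first stream, one more send, and then the remaining receptions in a row – the same shape as the output above.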

cross reference

Post by grahamperrin » Tue Nov 06, 2012 2:28 pm

… something, a few weeks ago, that gave an estimate. …


Got it. zstreamdump for statistics … experiments with zstreamdump and zdb

cannot receive incremental stream: out of space

Post by grahamperrin » Sun Nov 11, 2012 9:44 am

For reference only, an observation of errors and other messages that may be expected when there's insufficient space at reception:

Code: Select all
cannot receive incremental stream: out of space
warning: cannot send 'gjp22@2012-11-11-101543': Broken pipe
Can't execute zfs receive -v -F "tall/backups/gjp22" at ./zfstimemachinebackup.perl line 187.
macbookpro08-centrim:current gjp22$ sending from @2012-11-11-101543 to gjp22@2012-11-11-111543
warning: cannot send 'gjp22@2012-11-11-111543': Broken pipe
sending from @2012-11-11-111543 to gjp22@2012-11-11-121543
warning: cannot send 'gjp22@2012-11-11-121543': Broken pipe
sending from @2012-11-11-121543 to gjp22@2012-11-11-131542
warning: cannot send 'gjp22@2012-11-11-131542': Broken pipe
sending from @2012-11-11-131542 to gjp22@2012-11-11-133503
warning: cannot send 'gjp22@2012-11-11-133503': Broken pipe
Can't execute zfs send -v -I "gjp22@2012-11-10-112403" "gjp22@2012-11-11-133503" at ./zfstimemachinebackup.perl line 179.
macbookpro08-centrim:current gjp22$ clear


The full text surrounding that excerpt is amongst files at http://www.wuala.com/zfs-reference-topi ... de=gallery

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by mk01 » Thu Jan 31, 2013 3:03 pm

If someone is still looking for this kind of utility: I have finished a rewrite of the original zfs-auto-snapshot from the SUNW package, with the ability to send to local and remote systems, and with RR (round-robin) or Hanoi backup-set rotation on the local and remote system (with different strategies). Filesystems can be excluded or included via ZFS properties. It can auto-create remote filesystems if they are missing and handles other dependencies automatically, and it can use replication or just send full or incremental streams.

It's a clean POSIX script and runs on Darwin and Linux: https://github.com/mk01/zfs-auto-snapshot.
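
For readers unfamiliar with Hanoi rotation: backup run n is assigned to a set numbered by the count of trailing zero bits of n, so set 0 is reused every other run, set 1 every fourth, and each higher set half as often – old backups in high sets survive exponentially longer. A sketch of the idea (not mk01's actual code):

```python
# Tower-of-Hanoi backup rotation: the set index for run n is the
# number of trailing zero bits of n. (n & -n) isolates the lowest
# set bit, so its bit_length minus one is that count.

def hanoi_set(n):
    """Backup set index for run n (n >= 1)."""
    return (n & -n).bit_length() - 1

schedule = [hanoi_set(n) for n in range(1, 9)]
print(schedule)  # the classic 0,1,0,2,0,1,0,3 "ruler" sequence
```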
mk01
Posts: 65
Joined: Mon Sep 17, 2012 1:16 am

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by shuman » Fri Feb 01, 2013 12:48 pm

Awesome! I'll give it a try. Are there any gotchas we should know about or any assumptions that might not be standard?
- Mac Mini (Late 2012), 10.8.5, 16GB memory, pool - 2 Mirrored 3TB USB 3.0 External Drives
shuman
Posts: 96
Joined: Mon Sep 17, 2012 8:15 am

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by mk01 » Sun Feb 03, 2013 10:41 am

shuman wrote:Awesome! I'll give it a try. Are there any gotchas we should know about or any assumptions that might not be standard?


There are always some... but currently I don't know of anything that could cause loss of data. Still, if you use options like force overwrite, destroy remote (if needed), and fallback (from incremental to full in case of a missing source snapshot), I would try it first on a different setup with a dry run and debug logging to the screen, just to be sure what would happen.

There are so many possible combinations of exceptions and requirements during send and receive, and of the actual state of filesystems and snapshots, that even the original SUNW package just logged the problems and skipped the affected operations, waiting for manual resolution.

Even the -R (replication) flag, which is native to zfs send, won't work fully automatically for all combinations. The script can handle them, but it needs you to be aware of what it is doing by using the various flags – you will see them in the help. So it won't do anything destructive (syncing state via destroy operations) without being told to through the usage options.

The script is an official Ubuntu package, although not yet merged into the main repository; it needs to be mature and well tested first.

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by shuman » Sun Feb 03, 2013 4:16 pm

I noticed it is recommended to install anacron on Darwin. I assume you mean Darwin proper, as opposed to OS X. Other documentation I have read indicates that anacron is not necessary with launchd, and it appears that anacron for OS X is no longer maintained. If the above is true, the only other requirement would be getopt.
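
For what it's worth, a launchd job can stand in for anacron for periodic runs; if the interval elapses while the machine sleeps, launchd fires one catch-up run on wake, which covers anacron's main role. A minimal sketch – the label and program path here are hypothetical, not from the package:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>net.example.zfs-auto-snapshot</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/sbin/zfs-auto-snapshot</string>
    </array>
    <!-- run hourly; launchd coalesces a missed run after sleep -->
    <key>StartInterval</key>
    <integer>3600</integer>
</dict>
</plist>
```

Loaded with `launchctl load` from LaunchDaemons (as root) or LaunchAgents, depending on where the script needs to run.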

Sorry for all the questions. I'm not a total newbie, but given the chance I WILL screw it up. :lol:
- Mac Mini (Late 2012), 10.8.5, 16GB memory, pool - 2 Mirrored 3TB USB 3.0 External Drives

Re: Local/Remote Backups TimeMachine style, automatic scrub

Post by amteza » Mon Feb 04, 2013 2:41 am

mk01 wrote:It's a clean POSIX script and runs on Darwin and Linux: https://github.com/mk01/zfs-auto-snapshot.

Wow! You made my day, thanks!
amteza
Posts: 22
Joined: Wed Oct 17, 2012 4:40 am
Location: Spain
