  #13  
05-30-2014, 11:22 AM
beporter
Registered User
 
Join Date: Nov 2008
Posts: 2
As my destination volume has approached capacity, I've started running into this as well. It's certainly annoying to have to go hunt down something on the destination disk large enough to delete, the goal being to let the clone complete without needlessly re-copying too much still-identical data just to replace the files I deleted by hand.

Given the thread's history, I should note that in spite of this inconvenience I am still an ardent SuperDuper! supporter. That said, if SuperDuper! could build a "pending-delete" queue of files as it copies, and fall back on that list whenever the next copy would run the destination out of room, that would be ideal. I realize that getting this right in parallel is no small task, but it would be my number 1 feature request--because everything else is darn near perfect already.
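To make the idea concrete, here's a rough sketch of what I mean by a pending-delete queue. This is purely hypothetical pseudocode on my part, not how SuperDuper! actually works internally; the file names, sizes, and the `clone` function are all invented for illustration:

```python
# Hypothetical sketch of the "pending-delete queue" idea.
# Not SuperDuper!'s actual implementation -- names and sizes are made up.
from collections import deque

def clone(source, dest, capacity):
    """Copy `source` into `dest` (both dicts of filename -> size), deferring
    deletion of stale destination files until space is actually needed."""
    # Files on the destination that no longer exist in the source are
    # candidates for deletion, but we hold off as long as possible.
    pending = deque(name for name in dest if name not in source)
    used = sum(dest.values())
    for name, size in source.items():
        if dest.get(name) == size:
            continue  # still-identical file on destination; no re-copy needed
        used -= dest.pop(name, 0)  # replacing an old version frees its space
        # Only when the next copy would overflow the destination do we start
        # consuming the pending-delete queue.
        while used + size > capacity and pending:
            stale = pending.popleft()
            used -= dest.pop(stale)
        if used + size > capacity:
            raise RuntimeError(f"destination full while copying {name}")
        dest[name] = size
        used += size
    for stale in pending:  # normal post-clone cleanup of anything left over
        del dest[stale]
    return dest
```

For example, with a 70 GB destination already holding a stale 50 GB file, copying a new 55 GB file would force exactly one deferred deletion, and the backup stays consistent right up until that moment.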

I sympathize with the argument that preemptively deleting leaves your backup in a temporarily inconsistent state, but I'd much prefer that state to be temporary (during the clone process) than permanent (because the entire cloning process has failed!). And it could be argued, if you wanted to go a little overboard, that really you should have a third copy of your data somewhere else already, so that a failure in the current backup process cannot result in data loss. (Say, for example, a second SuperDuper! destination disk in rotation with the first one, or a separate Time Machine volume.) The truly paranoid among us treat even "backing up" as a potentially destructive process.

Last edited by beporter; 10-24-2014 at 05:28 PM.