Shirt Pocket Discussions  
  #1  
Old 05-15-2004, 12:51 AM
wsphish420
Registered User
 
Join Date: May 2004
Posts: 1
Burn to DVD?

I am looking at getting some new backup software, and I am curious whether you can back up to DVD with this software. The programs I have tried won't let you copy a large volume (my laptop's drive) to a DVD because it is too large. I am looking for a program that will back up a large volume to DVD and automatically split it across multiple discs, so all you have to do is keep feeding DVDs to the computer. If anyone can tell me whether this program can do that, that would be great!

Thanks,

Nick
  #2  
Old 05-15-2004, 08:59 AM
dnanian
Administrator
 
Join Date: Apr 2001
Location: Weston, MA
Posts: 14,923
Unfortunately, SuperDuper! is designed to make backups to things like hard disks and images stored elsewhere, not to DVDs, so we're not the solution for you.

However, Retrospect -- while more complex -- will certainly meet your needs.

A simpler solution would be Apple's own Backup program, which comes with the .Mac service.

Hope that helps!
__________________
--Dave Nanian
  #3  
Old 06-18-2004, 09:54 PM
sjk
Registered User
 
Join Date: May 2004
Location: Eugene
Posts: 252
multi-disk CD/DVD support and backup strategy

Quote:
Originally Posted by dnanian
However, Retrospect -- while more complex -- will certainly meet your needs.
Impression can create multi-disk DVD backups, given enough scratch space (see the developer's comment).

The rest of this might be better as a separate post, but since I've already started composing it (and am prepending this comment now), I'll leave it here, as this isn't a particularly busy forum.

I'm in the process of designing a strategy for regular backups of my eMac and iBook using a combination of FireWire and CD/DVD media storage. I'd like to do monthly (or maybe bi-monthly) clone backups to FireWire, with some type of "incremental" backups in between. Certain directory hierarchies would be backed up to CD/DVD at different intervals, some for permanent archival.

I'm mostly familiar with traditional UNIX dump/restore utilities, which use different levels (0-9) to control what's saved relative to a previous backup level, with level 0 being a complete backup. An advantage of that approach is that full backups can be saved to one media destination and incrementals to others. In my case, full (clone?) eMac/iBook backups could each live on separate FireWire volumes, and "incrementals" for both could be written as file archives to another volume on the same drive. Fully automating this would be ideal.
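
In dump terms, and purely as an illustration (the device and archive paths below are placeholders, and dump only handles UFS, not HFS+, volumes), that would look roughly like:

# level 0 (full) dump of a UFS filesystem to one destination
dump -0u -f /Volumes/Fulls/emac-full.dump /dev/disk1s3
# later, a level 1 dump of only what changed since the last lower-level
# dump recorded in /etc/dumpdates, written to a different destination
dump -1u -f /Volumes/Incrementals/emac-level1.dump /dev/disk1s3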

My backups to CD/DVD can be distinct from the fulls/incrementals, with their own schedule. The second volume of my eMac and/or one on the FireWire drive can temporarily be used for image creation. For example, my local mailstore fits on a single CD, and it's trivial to generate a mountable disk image of it with a command like "hdiutil create -srcfolder Mail /Volumes/Space/Mail-20040618.dmg", then burn it at my convenience. Partly automating this would be ideal.
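
The whole round trip would be something like this (untested as written; the folder and image paths are just my own):

# image the mailstore onto scratch space
hdiutil create -srcfolder Mail /Volumes/Space/Mail-20040618.dmg
# later, burn the image to CD/DVD whenever convenient
hdiutil burn /Volumes/Space/Mail-20040618.dmg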

Lastly, there's miscellaneous multimedia data currently on the second volume of my eMac that I want backed up at irregular intervals depending on how it changes. That's the most uncertain part of all this because of the large data sizes involved. Copying some to the FireWire drive may work, while some might best be written to multiple DVDs. Some of this might be automated, some not.

It's still unclear to me which backup products for Apple's HFS+ offer that functionality, and I'm open to using a combination of them, within budget. For various reasons, Retrospect is not an option.

So, can SuperDuper! be folded into that proposed strategy? I'm also trying to wrap my mind around other ways to achieve a comfortable combination of disaster recovery, regular backups, and archival backups. During about ten years of using ufsdump/ufsrestore (comparable to dump/restore for UFS filesystems on OS X) on Sun Solaris systems at home, before migrating to OS X, I never had any irrecoverable files except for a few unimportant ones after a major disaster recovery. That level of data integrity seems elusive with OS X and HFS+ volumes. Actually, ditto (which Carbon Copy Cloner is a front end for) has proven to be the most reliable utility I've used so far, but now I'm exploring further options to support the strategy I just described.

Sometime later I may be interested in synchronization between the eMac and iBook. For that I'm curious about ChronoSync. It's nearly as highly rated on VersionTracker as SuperDuper! (exclamation point) and seems reasonably priced for its functionality. As a backup utility (not a synchronization tool) it doesn't seem to support multi-disk CD/DVD backups, but that may be irrelevant.

Enough, whew. That was sure more than I intended to write when I started.

Last edited by sjk; 06-18-2004 at 09:57 PM. Reason: typos
  #4  
Old 06-18-2004, 11:47 PM
dnanian
Administrator
 
Join Date: Apr 2001
Location: Weston, MA
Posts: 14,923
I'm not even sure where to start here, I have to say!

One of the problems with 'clone'-type backup utilities -- of which SuperDuper! is one -- is that it becomes awkward to develop a backup strategy that allows full rollback with incremental update storage. In general, doing that kind of thing requires a backup catalog and a non-simple-filesystem storage mechanism, and we've been trying to avoid that.

Yet, in my quest to figure out how to do this simply, I did stumble on some discussion (in the mount docs) of union mounts. It seems that a union mount of an image over another image might allow clone backups to be done while actually generating a storable delta in a separate image. I haven't done a full-fledged investigation into this, but it was an intriguing idea. You might want to check it out.
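
Conceptually -- and this is completely untested, with made-up image names and a placeholder device node -- it might look something like:

# attach the base (clone) image normally
hdiutil attach base.dmg -mountpoint /Volumes/Clone
# attach the delta image without mounting it, just to get its device node
hdiutil attach delta.dmg -nomount
# union-mount the delta's device over the clone (the "union" option is
# described in the mount man page), so new writes land in the delta image
mount -t hfs -o union /dev/diskNsM /Volumes/Clone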

SuperDuper! can certainly make and update images, and you can front-end this stuff with various hdiutil functions to mount, create, or whatever, but without doing this kind of trick you won't have incremental rollback.

Of course, you could have a number of sparse images stored on an external or network drive, named things like "monday", "tuesday", etc, and Smart Update them; you could roll back as many days as you have storage for.
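
For instance (the sizes and names here are arbitrary):

# one sparse image per day of rollback you want to keep
hdiutil create -size 80g -type SPARSE -fs HFS+ -volname Monday /Volumes/External/monday
hdiutil create -size 80g -type SPARSE -fs HFS+ -volname Tuesday /Volumes/External/tuesday
# mount whichever one is "today's" image and point a Smart Update at it
hdiutil attach /Volumes/External/monday.sparseimage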

Another option, if you're thinking dump: rsyncx...

Anyway, just throwing some disorganized, rambling, I'm-on-a-slow-GPRS-connection-and-can't-research-much ideas out there.
__________________
--Dave Nanian
  #5  
Old 06-20-2004, 02:30 AM
sjk
Registered User
 
Join Date: May 2004
Location: Eugene
Posts: 252
Quote:
Originally Posted by dnanian
I'm not even sure where to start here, I have to say!
Somewhere, anywhere, nowhere ...? Thanks for the ultra-quick response, which I've read you have a reputation for.

Quote:
One of the problems with 'clone'-type backup utilities -- of which SuperDuper! is one -- is that it becomes awkward to develop a backup strategy that allows full rollback with incremental update storage. In general, doing that kind of thing requires a backup catalog and a non-simple-filesystem storage mechanism, and we've been trying to avoid that.
Understood.

Hope you can clarify a few details with this simple procedure:

1) Use "Backup - all files" script to create a bootable clone of the system volume to a backup volume.

* Since it's a bootable clone it must require root authentication, but there's no mention of that in the manual.
* What's the advantage of using SuperDuper for this vs. the Restore capability of Disk Copy (on 10.3)?
* Are any cache files removed, similar to Carbon Copy Cloner?
* Are Finder comment fields preserved?

2) Use "Smart Update" option later to refresh copy of the system volume on a backup volume.

* I presume that's similar to using psync with Carbon Copy Cloner (which I've never done; I'm a bit suspicious of its integrity "under duress")?
* Can any combination of directory hierarchies be candidates for Smart Update?

And all backups are started manually; no automated scheduling (yet)?

Quote:
Yet, in my quest to figure out how to do this simply, I did stumble on some discussion (in the mount docs) of union mounts. It seems that a union mount of an image over another image might allow clone backups to be done while actually generating a storable delta in a separate image. I haven't done a full-fledged investigation into this, but it was an intriguing idea. You might want to check it out.
I'd noticed support for union mounts in the man pages but hadn't considered using them in this context -- cool idea. I played with union mounts a bit, overlaying local filesystems over NFS-mounted /usr/local hierarchies on pre-Solaris versions of SunOS, so I'm familiar with the concept. I'd be interested in what you discover, and I might do a bit of tinkering, too. I've been trying to get more familiar with creating disk images, ensuring that owners, groups, permissions, etc. are accurately preserved.

Quote:
SuperDuper! can certainly make and update images, and you can front-end this stuff with various hdiutil functions to mount, create, or whatever, but without doing this kind of trick you won't have incremental rollback.
Yep.

Seems that incremental (and differential) backups on OS X are intended more for heavy-duty (and pricier) utilities like Retrospect and BRU.

Quote:
Of course, you could have a number of sparse images stored on an external or network drive, named things like "monday", "tuesday", etc, and Smart Update them; you could roll back as many days as you have storage for.
Quote:
Another option, if you're thinking dump: rsyncx...
I don't see the correlation. Normally, when using dump for backups, the destination is a single archive file, whereas an rsync(x) destination is a directory hierarchy. A dump|restore pipeline to another filesystem would be more like rsync(x) -- and like cloning.
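
For instance (UFS only, and the device and destination paths are just placeholders):

# dump the source filesystem to stdout and restore it into another
# filesystem, which ends up looking much like a clone of the source
dump -0 -f - /dev/disk1s3 | (cd /Volumes/Backup && restore -rf -)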

Quote:
Anyway, just throwing some disorganized, rambling, I'm-on-a-slow-GPRS-connection-and-can't-research-much ideas out there.
I'm impressed.

Thanks again for the feedback and ideas.
  #6  
Old 06-20-2004, 10:44 AM
dnanian
Administrator
 
Join Date: Apr 2001
Location: Weston, MA
Posts: 14,923
Quote:
Originally Posted by sjk
Somewhere, anywhere, nowhere ...? Thanks for the ultra-quick response, which I've read you have a reputation for.
Hard to keep up my end when you reply at 2:45am my time!

Quote:
Originally Posted by sjk
Hope you can clarify a few details with this simple procedure:

1) Use "Backup - all files" script to create a bootable clone of the system volume to a backup volume.

* Since it's a bootable clone it must require root authentication, but there's no mention of that in the manual.
Yes, in the current version it will prompt for authentication when you select "Start copying".

Quote:
Originally Posted by sjk
* What's the advantage of using SuperDuper for this vs. the Restore capability of Disk Copy (on 10.3)?
Selectivity, scripts, support, UI, and other features like Smart Update, Copy Different, Copy Newer, etc.

Quote:
Originally Posted by sjk
* Are any cache files removed, similar to Carbon Copy Cloner?
You can check out the scripts to see exactly what we do. Cache files are not removed from the source; they're simply not copied, and they're specified in the script. We don't copy things that Apple specifically states shouldn't be copied. (Obviously, it's a bit silly to copy swap files.)

Quote:
Originally Posted by sjk
* Are Finder comment fields preserved?
They should be: we clone all Finder attributes and HFS+ metadata.
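
If you want to spot-check one after a copy, something like this should do it (the path is just an example):

osascript -e 'tell application "Finder" to get comment of (POSIX file "/Volumes/Backup/Some File.txt" as alias)'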

Quote:
Originally Posted by sjk
2) Use "Smart Update" option later to refresh copy of the system volume on a backup volume.

* I presume that's similar to using psync with Carbon Copy Cloner (which I've never done; I'm a bit suspicious of its integrity "under duress")?
Yes, it's similar, though significantly faster. I use it all the time and have never had any kind of problem -- it's quite well tested. No doubt you could break it by consciously trying to trick it, but in normal (or even abnormal) operation it should be fine.

Quote:
Originally Posted by sjk
* Can any combination of directory hierarchies be candidates for Smart Update?
Yes. I've changed a full Jaguar install into a Panther one with Smart Update, for example. Note, however, that we don't do an erase pass before the copy pass. This means there are cases where renaming extremely large directories may end up overflowing the disk, because the total of the two directories is larger than the drive. Again, that's rare... and the speed was worth it. We've only had one report of this in the field.

Quote:
Originally Posted by sjk
And all backups are started manually; no automated scheduling (yet)?
Correct. Yet.

Quote:
Originally Posted by sjk
I'd noticed support for union mounts in the man pages but hadn't considered using them in this context -- cool idea. I played with union mounts a bit, overlaying local filesystems over NFS-mounted /usr/local hierarchies on pre-Solaris versions of SunOS, so I'm familiar with the concept. I'd be interested in what you discover, and I might do a bit of tinkering, too. I've been trying to get more familiar with creating disk images, ensuring that owners, groups, permissions, etc. are accurately preserved.
I've got to find the time to explore it, but I thought it was an intriguing concept, too.

Quote:
Originally Posted by sjk
Seems that incremental (and differential) backups on OS X are intended more for heavy-duty (and pricier) utilities like Retrospect and BRU.
I think so, yes. But there may be others -- I honestly haven't done a survey of the various solutions. There are quite a few.

Quote:
Originally Posted by sjk
I don't see the correlation. Normally, when using dump for backups, the destination is a single archive file, whereas an rsync(x) destination is a directory hierarchy. A dump|restore pipeline to another filesystem would be more like rsync(x) -- and like cloning.
I thought I read somewhere that rsync would also output differential information that you could use. It's not dump, obviously (it doesn't work at the level of filesystem structures), but you might be able to cobble together a solution with it and some baling wire and string!
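
Something vaguely like this, maybe -- unverified, with placeholder paths, and note that plain rsync (unlike rsyncx) won't preserve HFS+ metadata:

# mirror the source into a "current" tree, shunting anything replaced or
# deleted into a dated directory -- a crude differential you could keep
rsync -a --delete --backup --backup-dir=/Volumes/Backup/deltas-20040620 \
    /Volumes/Source/ /Volumes/Backup/current/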

Quote:
Originally Posted by sjk
Thanks again for the feedback and ideas.
You're welcome. Thanks for your questions and interest.
__________________
--Dave Nanian