#1
Burn to DVD?
I am looking at getting some new backup software, and I am curious whether you can back up to DVD with this program. The programs I have tried won't let you copy a large volume (my laptop) to DVD because it is too large. I am looking for a program that will back up a large volume to DVD, automatically breaking it up across multiple discs so all you have to do is keep feeding DVDs to the computer. If anyone can tell me whether this program can do that, that would be great!
Thanks, Nick
#2
Unfortunately, SuperDuper! is designed to make backups to things like hard disks and images stored elsewhere, not to DVDs, so we're not the solution for you.
However, Retrospect -- while more complex -- will certainly meet your needs. A simpler solution would be Apple's own Backup program, which comes with the .Mac service. Hope that helps!
__________________
--Dave Nanian
#3
multi-disk CD/DVD support and backup strategy
Quote:
The rest of this might be better as a separate post, but since I've already started composing it (and prepending this comment now) I'll leave it here, since this isn't a particularly busy forum.

I'm in the process of designing a strategy for regular backups of my eMac and iBook using a combination of FireWire and CD/DVD media storage. I'd like to do monthly (or maybe bi-monthly) clone backups to FireWire, with some type of "incremental" backups in between. Certain directory hierarchies would be backed up to CD/DVD at different intervals, some for permanent archival.

I'm mostly familiar with traditional UNIX dump/restore utilities, which use levels (0-9) to control what's saved relative to a previous backup level, with level 0 being a complete backup. An advantage there is that full backups can be saved to one media destination and incrementals to others. In my case, full (clone?) eMac/iBook backups could each live on separate FireWire volumes, and "incrementals" for both could be written as file archives to another volume on the same drive. Fully automating this would be ideal.

My backups to CD/DVD can be distinct from fulls/incrementals, with their own schedule. The second volume of my eMac and/or one on the FireWire drive can be temporarily used for image creation. For example, my local mailstore fits on a single CD, and it's trivial to generate a mountable disk image of it using a command like "hdiutil create -srcfolder Mail /Volumes/Space/Mail-20040618.dmg", then burn it at my convenience. Partly automating this would be ideal (there's a rough sketch of that at the end of this post).

Lastly, there's miscellaneous multimedia data currently on the second volume of my eMac that I want backed up at irregular intervals, depending on how it changes. That's the most uncertain part of all this because of the large data sizes involved. Copying some of it to the FireWire drive may work, while some might best be written to multiple DVDs. Some of this might be automated, some not.

It's still unclear which Apple HFS+ backup products can offer that functionality, and I'm open to using a combination of them, within budget. For various reasons Retrospect is not an option.

So, can SuperDuper! be folded into that proposed strategy? I'm also trying to wrap my mind around other ways to achieve a comfortable combination of disaster recovery, regular backups, and archival backups. During about ten years of ufsdump/ufsrestore usage (comparable to dump/restore on OS X for UFS filesystems) on Sun Solaris systems at home, before migrating to OS X, I never had any irrecoverable files except a few unimportant ones after a major disaster recovery. That level of data integrity seems elusive with OS X and HFS+ volumes. Actually, ditto (which Carbon Copy Cloner is a front end for) has proven itself the most reliable utility I've used so far, but now I'm exploring further to support the strategy I just explained.

Sometime later I may be interested in synchronization between the eMac and iBook. For that I'm curious about ChronoSync. It's nearly as highly rated on VersionTracker as SuperDuper! (exclamation point) and seems reasonably priced for its functionality. As a backup utility (not synchronization) it doesn't seem to support multi-disc CD/DVD capability, but that may be irrelevant.

Enough, whew. That was sure more than I intended to write when I started.

Last edited by sjk; 06-18-2004 at 10:57 PM. Reason: typos
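A minimal sketch of the "partly automating" idea above, reusing the hdiutil command from the post; the wrapper script itself is hypothetical, and /Volumes/Space follows the example path:

Code:
#!/bin/sh
# Hypothetical wrapper around the hdiutil command quoted above: creates a
# date-stamped, mountable image of the local mailstore for burning later.
cd "$HOME/Library" || exit 1
stamp=$(date +%Y%m%d)
hdiutil create -srcfolder Mail "/Volumes/Space/Mail-$stamp.dmg"
# Burn at your convenience, e.g.:
#   hdiutil burn "/Volumes/Space/Mail-$stamp.dmg"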
#4
I'm not even sure where to start here, I have to say!
One of the problems with 'clone'-type backup utilities -- of which SuperDuper! is one -- is that it becomes awkward to develop a backup strategy that allows full rollback with incremental update storage. In general, doing that kind of thing requires a backup catalog and a non-simple-filesystem storage mechanism, and we've been trying to avoid that.

Yet, in my quest to figure out how to do this simply, I did stumble on some discussion (in the mount docs) of union mounts. It seems that a union mount of one image over another might allow clone backups to be done while actually generating a storable delta in a separate image. I haven't done a full-fledged investigation into this, but it was an intriguing idea; you might want to check it out.

SuperDuper! can certainly make and update images, and you can front-end this stuff with various hdiutil functions to mount, create, or whatever, but without this kind of trick you won't have incremental rollback.

Of course, you could have a number of sparse images stored on an external or network drive, named things like "monday", "tuesday", etc., and Smart Update them; you could roll back as many days as you have storage for. (A rough sketch of that idea follows this post.)

Another option, if you're thinking dump: rsyncx...

Anyway, just throwing some disorganized, rambling, I'm-on-a-slow-GPRS-connection-and-can't-research-much ideas out there.
__________________
--Dave Nanian
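A minimal sketch of the weekday sparse-image rotation suggested above, using standard hdiutil options; the size, paths, and volume names are invented for illustration:

Code:
#!/bin/sh
# Hypothetical setup for a rotating Smart Update target: one sparse image
# per weekday on an external volume. Size and paths are placeholders.
for day in monday tuesday wednesday thursday friday; do
    hdiutil create -size 20g -type SPARSE -fs HFS+ \
        -volname "Backup-$day" "/Volumes/External/$day"
done
# Each day, attach that day's image and point Smart Update at the mounted
# volume; you can roll back as many days as you keep images for.
hdiutil attach "/Volumes/External/monday.sparseimage"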
#5
Quote:
Quote:
Hope you can clarify a few details with this simple procedure:

1) Use the "Backup - all files" script to create a bootable clone of the system volume to a backup volume.
* Since it's a bootable clone it must do root authentication, but there's no mention of that in the manual.
* What's the advantage of using SuperDuper for this vs. the Restore capability of Disk Copy (on 10.3)?
* Are any cache files removed, similar to Carbon Copy Cloner?
* Are Finder comment fields preserved?

2) Use the "Smart Update" option later to refresh the copy of the system volume on a backup volume.
* I presume that's similar to using psync with Carbon Copy Cloner (which I've never done; I'm a bit suspicious of its integrity "under duress")?
* Can any combination of directory hierarchies be candidates for Smart Update?

And all backups are started manually; no automated scheduling (yet)?

Quote:
Quote:
Seems that incremental (and differential) backups on OS X are intended more for heavy-duty (and pricier) utilities like Retrospect and BRU. Quote:
Quote:
Quote:
Thanks again for the feedback and ideas.
#6
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
__________________
--Dave Nanian
#7
Keeping this short. You certainly covered everything to my satisfaction... thanks!
Quote:
Quote:
Off to do some SD testing now...
#8
There's no hidden "no copying" anywhere in SuperDuper! except, of course, that we don't copy sockets. We try to be transparent to those who need transparency, and easy for those who need easy. There are many 'building block' scripts that you'll find in the default set, and they should be named in a way that explains what they do.
Good luck with the testing; please let me know if you have any additional questions.
__________________
--Dave Nanian
#9
Did a full volume-to-volume clone backup and noticed one minor discrepancy between the source and destination:
Two directories and one file under my home directory owned by me (created/modified last month) were owned by root on the destination (clone) volume. No time to do a thorough check for other things, but I just wanted to report that one now.

So much for the original thread topic, but it seems the poster has left the building anyway.
#10
You know, we've seen this happen before, and I think you'll be quite surprised if you do the following:
- On the original drive, open the Terminal and change to the parent of the directories (and/or file) that you noticed a discrepancy with.
- First, do an "ls -l". You should see that they're owned by you, with your current group status.
- Now, authenticate with "sudo -s". Once authenticated, do an "ls -l". What's the ownership now?

Needless to say, SuperDuper! runs authenticated... and, when we're authenticated, we get the owner/group the OS gives us... which seems to track the effective UID in some situations. It's weird and kinda subtle, and it took us an age to at least figure out what was going on...
__________________
--Dave Nanian
#11
authentication snafu
Quote:
Code:
% ls -dl DiskWarrior DiskWarrior/2004-05-17 DiskWarrior/2004-05-17/Macintosh\ HD\ Report.pdf
drwxr-xr-x  3 me  unknown    102 17 May 19:51 DiskWarrior
drwxr-xr-x  3 me  unknown    102 17 May 19:52 DiskWarrior/2004-05-17
-rw-r--r--  1 me  unknown  61636 17 May 19:52 DiskWarrior/2004-05-17/Macintosh HD Report.pdf

Quote:
Code:
% sudo ls -dl DiskWarrior DiskWarrior/2004-05-17 DiskWarrior/2004-05-17/Macintosh\ HD\ Report.pdf
drwxr-xr-x  3 root  unknown    102 17 May 19:51 DiskWarrior
drwxr-xr-x  3 root  unknown    102 17 May 19:52 DiskWarrior/2004-05-17
-rw-r--r--  1 root  unknown  61636 17 May 19:52 DiskWarrior/2004-05-17/Macintosh HD Report.pdf

Quote:
A couple more things:

Any possibility of adding an option for preserving file access times, or would that make SD significantly slower? Not that they're as accurate on OS X as on traditional Unix systems, but I still find use for that file information. [OS X != Unix, OS X != Unix, ...]

Can you briefly describe the logic Smart Update uses, and whether there's any way it might accidentally (or intentionally) be "tricked"?
#12
Told you you'd be surprised about the ownership thing. If you chown those files to you:staff (or whatever), it'll stick from that point forward. I think this is due to some weirdness with OS X supporting both file systems that respect ownership and file systems with ownership 'overlaid' on them for compatibility.
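For the record, a one-line sketch of that fix, using the directory name from the earlier posts; "me:staff" is a placeholder for your own user and group:

Code:
# Reset ownership on the affected items so it sticks from here on.
# (Mind symlinks: without -h, chown on a symlink operand affects its
# target -- see the example in post #13 below.)
% sudo chown -R me:staff DiskWarrior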
Basically, if it's trying to maintain compatibility, the ownership of the files on the ownership-ignored volumes tracks your own ownership. BUT, it seems that if you copy those files locally, they have some sort of wacky track-uid-and-group value in there, and they do unexpected things on a volume that respects permissions. Weird stuff.

We looked at file access-time preservation but decided against it: I think we had problems actually getting the value preserved, but truthfully I don't exactly remember. I'll add the request to the list and we'll take another pass at it in the future. (I believe it can't be done: when we tried, it just updated the access time...)

Smart Update is exactly like "Copy Different" with an added "erase" pass. I can't really think of any accidental "tricking" that might happen, unless you modified something, ended up with exactly the same number of bytes, modified the times and metadata so that it would look the same from that perspective, and then did a Smart Update. In that case, it might not copy the file, since it doesn't look "different": we don't use a file CRC to be extra super careful. (Frankly, it really isn't necessary when you're doing a single system-to-backup update; it would just take an enormous amount of time and basically make Smart Update pointless.)

Anyway, once the copying has been completed, we erase things that are on the backup but are no longer on the source. (This is a bit of a simplification -- we don't copy everything and then erase; it happens directory by directory, mostly, not drive-wide. If we were making an entirely separate pass, we would have done erase-first anyway.) This means there's one potential error case: if the union of the files being copied in a given directory exceeds the capacity of the drive (assuming all the files are different), we fail, because we erase after we copy. That also means that if you rename a large directory, it's possible we'll copy the new one before removing the old, causing a disk-space failure. In neither case does the failure result in the loss of any data, nor does it fail silently. (A rough sketch of this order of operations follows this post.)

Hope that answers your questions! Glad you're happy with the support: it's part of what you're paying for when you -- hopefully -- pay!
__________________
--Dave Nanian

Last edited by dnanian; 06-21-2004 at 11:05 PM. Reason: Clarified erase pass for Smart Update. Also updated access time comment.
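An illustrative sketch of the Smart Update order of operations described above -- copy what looks "different", then erase what's gone -- for a single directory. This is NOT SuperDuper!'s actual code; the script and its crude "looks the same" test are invented, and it assumes the BSD stat(1):

Code:
#!/bin/sh
# Hypothetical illustration of copy-then-erase; NOT SuperDuper!'s code.
# Usage: smartup.sh <srcdir> <dstdir>
src=$1
dst=$2

looks_same() {
    # Crude stand-in for "not different": same size and modification time.
    # (As noted above, no byte-level CRC is used.)
    [ "$(stat -f '%z %m' "$1")" = "$(stat -f '%z %m' "$2")" ]
}

# Copy pass: bring over anything missing or "different".
for f in "$src"/*; do
    name=${f##*/}
    if [ ! -e "$dst/$name" ] || ! looks_same "$f" "$dst/$name"; then
        rm -rf "$dst/$name"   # clear any stale copy so cp -R doesn't nest
        cp -Rp "$f" "$dst/$name"
    fi
done

# Erase pass: remove backup entries whose source counterpart is gone.
# Erasing after copying is what creates the disk-space caveat above (a
# renamed large directory briefly exists twice on the destination).
for f in "$dst"/*; do
    name=${f##*/}
    [ -e "$src/$name" ] || rm -rf "$dst/$name"
done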
#13
Quote:
For "fun" I deleted the ~/Library/Application Support/SuperDuper!/Copy Scripts/Standard Scripts symlink (which worked okay), tried Undo, and Finder griped "The operation cannot be completed because you do not have sufficient privileges for some of the items." Whoops. Reminds me of potentially devastating side effects of omitting the "-h" option on Unix ch{own,grp,mod} commands (OS X versions are susceptible) when symlinks are involved, which an impressive number of root-enabled Unix sysadmins don't realize as they're using those commands (often recursively). My favorite traditional example: Code:
% ls -l /etc/passwd foo
-rw-r--r--  1 root  wheel  1374  8 Dec  2003 /etc/passwd
lrwxr-xr-x  1 me    me       11 22 Jun 16:07 foo -> /etc/passwd
% sudo chown me foo
Password:
% ls -l /etc/passwd foo
-rw-r--r--  1 me  wheel  1374  8 Dec  2003 /etc/passwd
lrwxr-xr-x  1 me  me       11 22 Jun 16:07 foo -> /etc/passwd

Quote:
Quote:
Quote:
Quote:
Issue with exclude: the main "Backup - all files" script excludes var/db/BootCache.playlist and var/db/volinfo.database, but those files exist on the destination volume. Not sure if the original backup or the Smart Update copied 'em, since I only noticed after the latter.

Thanks for responding to my VersionTracker feedback. Hope I didn't sound like I was giving misinformation about the way script editing worked.

About the capacity check... after posting, I thought of mentioning that a simulation mode would be convenient for certain backup media storage planning scenarios, as I'm currently doing, especially when the space rules for dealing with disk image files aren't known. For example, I tested creating a disk image of the system volume (~19GB) on another volume with ~25GB free. If I'd let that run, it would have overflowed, as expected. A simulation, safely running non-interactively for an hour or two and then warning that the real deal would have failed, would have been nicer than having to manually intervene and abort. And/or, is it possible for the temporary volume to use space on a different volume than the image file's final destination? Several visits to the hdiutil man page have failed to reveal a way of doing that.

Whew. That covers everything and then some for today.
#14
Quote:
Quote:
Quote:
Quote:
Quote:
__________________
--Dave Nanian
#15
Quote:
... overwrite files that exist on the destination with those on the source if the source files are newer (with Copy Newer) or different (with Copy Different), leaving all other files as-is.

Does that mean only files that already exist on the destination are candidates for overwriting? Got a quick example of when those options would be useful? I'm a bit dense today.

Quote:
Quote:
I've half figured out the "backup to folder" issue; that's basically what the "Backup - user files" script does. With that, is the entire volume erased if the destination is a volume and the erase-then-copy option is set?

The other thing was the summary log, in addition to the normal "console" log. Something like how each Carbon Copy Cloner session is logged to a separate file.

Quote:
Quote: