Shirt Pocket Discussions

Shirt Pocket Discussions (https://www.shirt-pocket.com/forums/index.php)
-   General (https://www.shirt-pocket.com/forums/forumdisplay.php?f=6)
-   -   Burn to DVD? (https://www.shirt-pocket.com/forums/showthread.php?t=75)

wsphish420 05-15-2004 12:51 AM

Burn to DVD?
 
I am looking at getting some new backup software, and I am curious whether you can back up to DVD with this software. The programs I have tried won't let you copy a large volume (my laptop) to a DVD because it is too large. I am looking for a program that will back up a large volume to DVD, automatically breaking it up across multiple DVDs so all you have to do is keep feeding discs to the computer. If anyone can tell me whether this program can do that, that would be great!

Thanks,

Nick

dnanian 05-15-2004 08:59 AM

Unfortunately, SuperDuper! is designed to make backups to things like hard disks and images stored elsewhere, not to DVDs, so we're not the solution for you.

However, Retrospect -- while more complex -- will certainly meet your needs.

A simpler solution would be Apple's own Backup program, which comes with the .Mac service.

Hope that helps!

sjk 06-18-2004 09:54 PM

multi-disk CD/DVD support and backup strategy
 
Quote:

Originally Posted by dnanian
However, Retrospect -- while more complex -- will certainly meet your needs.

Impression can create multi-disk DVD backups, given enough scratch space (see the developer's comment).

The rest of this might be better as a separate post, but since I've already started composing it (and am prepending this comment now), I'll leave it here; this isn't a particularly busy forum.

I'm in the process of designing a strategy for regular backups of my eMac and iBook using a combination of FireWire and CD/DVD media storage. I'd like to do monthly (or maybe bi-monthly) clone backups to FireWire, with some type of "incremental" backups in between. Certain directory hierarchies would be backed up to CD/DVD at different intervals, some for permanent archival.

I'm mostly familiar with the traditional UNIX dump/restore utilities, which use levels (0-9) to control what's saved relative to a previous backup level, with level 0 being a complete backup. An advantage of that scheme is that full backups can be saved to one media destination and incrementals to others. In my case, full (clone?) eMac/iBook backups could each live on separate FireWire volumes, and "incrementals" for both could be written as file archives to another volume on the same drive. Fully automating this would be ideal.
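
For anyone unfamiliar with that scheme, it looks roughly like this with the traditional tools on a UFS system (paths illustrative; the "u" flag records the dump date so higher levels know what changed since the most recent lower-level dump):
Code:

% dump 0uf /backups/full/root.dump /
% dump 5uf /backups/incr/root-5.dump /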

My backups to CD/DVD can be distinct from fulls/incrementals, with their own schedule. The second volume of my eMac and/or one on the FireWire drive can temporarily be used for image creation. For example, my local mailstore fits on a single CD, and it's trivial to generate a mountable disk image of it with a command like "hdiutil create -srcfolder Mail /Volumes/Space/Mail-20040618.dmg", then burn it at my convenience. Partly automating this would be ideal.
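
The partly-automated version might be as simple as this pair, assuming hdiutil's "burn" verb works here the way I expect (same paths as above):
Code:

% hdiutil create -srcfolder Mail /Volumes/Space/Mail-20040618.dmg
% hdiutil burn /Volumes/Space/Mail-20040618.dmg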

Lastly, there's miscellaneous multimedia data currently on the second volume of my eMac that I want backed up at irregular intervals depending on how it changes. That's the most uncertain part of all this because of the large data sizes involved. Copying some to the FireWire drive may work, while some might best be written to multiple DVDs. Some of this might be automated, some not.

It's still unclear which OS X HFS+ backup products offer that functionality, and I'm open to using a combination of them, within budget. For various reasons Retrospect is not an option. ;)

So, can SuperDuper! be folded into that proposed strategy? I'm also trying to wrap my mind around other ways to achieve a comfortable combination of disaster recovery, regular backups, and archival backups. In about ten years of using ufsdump/ufsrestore (comparable to dump/restore on OS X for UFS filesystems) on Sun Solaris systems at home, before migrating to OS X, I never had any irrecoverable files except for a few unimportant ones after a major disaster recovery. That level of data integrity seems elusive with OS X and HFS+ volumes. Actually, ditto (which Carbon Copy Cloner is a front-end for) has proven itself the most reliable utility I've used so far, but now I'm exploring further to support the strategy I just explained.
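
(For reference, the ditto usage I've been relying on is roughly the following; the -rsrc flag asks for resource forks and HFS+ metadata to be preserved, and the paths are illustrative:)
Code:

% sudo ditto -rsrc /Users/me /Volumes/Backup/Users/me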

Sometime later I may be interested in synchronization between the eMac and iBook. For that I'm curious about ChronoSync. It's nearly as highly rated on VersionTracker as SuperDuper! (exclamation point) and seems reasonably priced for its functionality. As a backup utility (not synchronization) it doesn't seem to support multi-disk CD/DVD capability, but that may be irrelevant.

Enough, whew. That was sure more than I intended to write when I started. :)

dnanian 06-18-2004 11:47 PM

I'm not even sure where to start here, I have to say!

One of the problems with 'clone'-type backup utilities -- of which SuperDuper! is one -- is that it becomes awkward to develop a backup strategy that allows full rollback with incremental update storage. In general, doing that kind of thing requires a backup catalog and a non-simple-filesystem storage mechanism, and we've been trying to avoid that.

Yet, in my quest to figure out how to do this simply, I did stumble on some discussion (in the mount docs) of union mounts. It seems that a union mount of an image over another image might allow clone backups to be done while actually generating a storable delta in a separate image. I haven't done a full-fledged investigation, but it was an intriguing idea. You might want to check it out.
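
Off the top of my head, the experiment might look something like this -- completely untested, device names hypothetical, and whether mount's union option actually behaves this way over HFS+ images is exactly what needs investigating:
Code:

% hdiutil attach clone.dmg
% hdiutil attach -nomount delta.dmg
% sudo mount -t hfs -o union /dev/disk3s2 /Volumes/Clone

The hope being that subsequent writes land in delta.dmg while reads fall through to the clone.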

SuperDuper! can certainly make and update images, and you can front-end this stuff with various hdiutil functions to mount, create, or whatever, but without doing this kind of trick you won't have incremental rollback.

Of course, you could have a number of sparse images stored on an external or network drive, named things like "monday", "tuesday", etc, and Smart Update them; you could roll back as many days as you have storage for.
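
Setting those up is a one-time hdiutil step per image, something like this (sizes and paths arbitrary):
Code:

% hdiutil create -type SPARSE -fs HFS+ -volname Monday -size 40g /path/to/monday
% hdiutil create -type SPARSE -fs HFS+ -volname Tuesday -size 40g /path/to/tuesday

Attach the day's image and Smart Update the mounted volume; a sparseimage only consumes space as it fills.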

Another option, if you're thinking dump: rsyncx...

Anyway, just throwing some disorganized, rambling, I'm-on-a-slow-GPRS-connection-and-can't-research-much ideas out there.

sjk 06-20-2004 02:30 AM

Quote:

Originally Posted by dnanian
I'm not even sure where to start here, I have to say!

Somewhere, anywhere, nowhere ...? Thanks for the ultra-quick response, which I've read you have a reputation for. :)

Quote:

One of the problems with 'clone'-type backup utilities -- of which SuperDuper! is one -- is that it becomes awkward to develop a backup strategy that allows full rollback with incremental update storage. In general, doing that kind of thing requires a backup catalog and a non-simple-filesystem storage mechanism, and we've been trying to avoid that.
Understood.

Hope you can clarify a few details with this simple procedure:

1) Use "Backup - all files" script to create a bootable clone of the system volume to a backup volume.

* Since it's a bootable clone it must do root authentication but there's no mention of that in the manual.
* What's the advantage of using SuperDuper for this vs. the Restore capability of Disk Copy (on 10.3)?
* Are any cache files removed, similar to Carbon Copy Cloner?
* Are Finder comment fields preserved?

2) Use "Smart Update" option later to refresh copy of the system volume on a backup volume.

* I presume that's similar to using psync with Carbon Copy Cloner (which I've never done; I'm a bit suspicious of its integrity "under duress")?
* Can any combination of directory hierarchies be candidates for Smart Update?

And all backups are started manually; no automated scheduling (yet)?

Quote:

Yet, in my quest to figure out how to do this simply, I did stumble on some discussion (in the mount docs) of union mounts. It seems that a union mount of an image over another image might allow clone backups to be done while actually generating a storable delta in a separate image. I haven't done a full-fledged investigation, but it was an intriguing idea. You might want to check it out.
I'd noticed support for union mounts in the man pages but hadn't considered using them in this context -- cool idea. I played with union mounts a bit to overlay local filesystems over NFS-mounted /usr/local hierarchies on pre-Solaris versions of SunOS, so I'm familiar with the concept. I'd be interested in what you discover and I might do a bit of tinkering, too. I've been trying to get more familiar with creating disk images, ensuring that owners, groups, permissions, etc. are accurately preserved.

Quote:

SuperDuper! can certainly make and update images, and you can front-end this stuff with various hdiutil functions to mount, create, or whatever, but without doing this kind of trick you won't have incremental rollback.
Yep.

Seems that incremental (and differential) backups on OS X are intended more for heavy-duty (and pricier) utilities like Retrospect and BRU.

Quote:

Of course, you could have a number of sparse images stored on an external or network drive, named things like "monday", "tuesday", etc, and Smart Update them; you could roll back as many days as you have storage for.
Quote:

Another option, if you're thinking dump: rsyncx...
I don't see the connection. Normally when using dump for backups the destination would be a single archive file, whereas an rsync(x) destination would be a directory hierarchy. A dump|restore pipeline to another filesystem would be more like rsync(x), and cloning.
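
(The pipeline I mean, for anyone following along -- the classic one-filesystem-to-another form, which on OS X would only apply to UFS volumes:)
Code:

% dump 0f - / | (cd /Volumes/Backup && restore rf -)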

Quote:

Anyway, just throwing some disorganized, rambling, I'm-on-a-slow-GPRS-connection-and-can't-research-much ideas out there.
I'm impressed. :)

Thanks again for the feedback and ideas.

dnanian 06-20-2004 10:44 AM

Quote:

Originally Posted by sjk
Somewhere, anywhere, nowhere ...? Thanks for the ultra-quick response, which I've read you have a reputation for. :)

Hard to keep up my end when you reply at 2:45am my time! :D

Quote:

Originally Posted by sjk
Hope you can clarify a few details with this simple procedure:

1) Use "Backup - all files" script to create a bootable clone of the system volume to a backup volume.

* Since it's a bootable clone it must do root authentication but there's no mention of that in the manual.

Yes, in the current version it will prompt for authentication when you select "Start copying".

Quote:

Originally Posted by sjk
* What's the advantage of using SuperDuper for this vs. the Restore capability of Disk Copy (on 10.3)?

Selectivity, scripts, support, UI, and other features like Smart Update, Copy Different, Copy Newer, etc.

Quote:

Originally Posted by sjk
* Are any cache files removed, similar to Carbon Copy Cloner?

You can check out the scripts to see exactly what we do. Cache files are not removed from the source; they're simply not copied, and the exclusions are specified in the script. We don't copy things that Apple specifically states shouldn't be copied. (Obviously, it's a bit silly to copy swap files.)

Quote:

Originally Posted by sjk
* Are Finder comment fields preserved?

They should be: we clone all Finder attributes and HFS+ metadata.

Quote:

Originally Posted by sjk
2) Use "Smart Update" option later to refresh copy of the system volume on a backup volume.

* I presume that's similar to using psync with Carbon Copy Cloner (which I've never done; I'm a bit suspicious of its integrity "under duress")?

Yes, it's similar, though significantly faster. I use it all the time and have never had any kind of problem -- it's quite well tested. No doubt you could trick it by consciously trying, but in normal (or even abnormal) operation it should be fine.

Quote:

Originally Posted by sjk
* Can any combination of directory hierarchies be candidates for Smart Update?

Yes. I've changed a full Jaguar into a Panther with Smart Update, for example. Note, however, that we don't do an erase pass before the copy pass. This means there are cases where renaming an extremely large directory may end up overflowing the disk, because the total of the two directories is larger than the size of the drive. Again, rare... and the speed was worth it. We've only had one report of this in the field.

Quote:

Originally Posted by sjk
And all backups are started manually; no automated scheduling (yet)?

Correct. Yet.

Quote:

Originally Posted by sjk
I'd noticed support for union mounts in the man pages but hadn't considered using them in this context -- cool idea. I played with union mounts a bit to overlay local filesystems over NFS-mounted /usr/local hierarchies on pre-Solaris versions of SunOS, so I'm familiar with the concept. I'd be interested in what you discover and I might do a bit of tinkering, too. I've been trying to get more familiar with creating disk images, ensuring that owners, groups, permissions, etc. are accurately preserved.

I've got to find the time for exploring, but I thought it was an intriguing concept, too.

Quote:

Originally Posted by sjk
Seems that incremental (and differential) backups on OS X are intended more for heavy-duty (and pricier) utilities like Retrospect and BRU.

I think so, yes. But there may be others -- I honestly haven't done a survey of the various solutions. There are quite a few.

Quote:

Originally Posted by sjk
I don't see the connection. Normally when using dump for backups the destination would be a single archive file, whereas an rsync(x) destination would be a directory hierarchy. A dump|restore pipeline to another filesystem would be more like rsync(x), and cloning.

I thought I read somewhere that rsync would also output differential information that you could use. Yes, it's not dump (obviously it doesn't deal in filesystem structures), but you might be able to cobble together a solution with it and some baling wire and string! ;)

Quote:

Originally Posted by sjk
Thanks again for the feedback and ideas.

You're welcome. Thanks for your questions and interest.

sjk 06-21-2004 12:34 AM

Keeping this short. You certainly covered everything to my satisfaction... thanks!
Quote:

Originally Posted by dnanian
You can check out the scripts to see exactly what we do. Cache files are not removed from the source; they're simply not copied, and the exclusions are specified in the script. We don't copy things that Apple specifically states shouldn't be copied. (Obviously, it's a bit silly to copy swap files.)

Excellent. I'd like to minimize figuring out those details and avoid unpleasant surprises but I still want to understand what's happening. Having the scripts as a starting point should work well.
Quote:

I thought I read somewhere that rsync would also output differential information that you could use. Yes, it's not dump (obviously it doesn't deal in filesystem structures), but you might be able to cobble together a solution with it and some baling wire and string! ;)
Didn't see any mention of it in the man page. I'll probably use rsync(x) for keeping my iBook updated with some eMac changes (e.g. /usr/local) but be more conservative with backups.

Off to do some SD testing now...

dnanian 06-21-2004 10:37 AM

There's no hidden "no copying" anywhere in SuperDuper! except, of course, that we don't copy sockets. We try to be transparent to those who need transparency, and easy for those who need easy. There are many 'building block' scripts that you'll find in the default set, and they should be named in a way that explains what they do.

Good luck with the testing; please let me know if you have any additional questions.

sjk 06-21-2004 08:15 PM

Did a full volume-to-volume clone backup and noticed one minor discrepancy between the source and destination:

Two directories and one file under my home directory owned by me (created/modified last month) were owned by root on the destination (clone) volume.

No time to do a thorough check for other things but I just wanted to report that one now.

So much for the original thread topic but it seems the poster has left the building anyway. :)

dnanian 06-21-2004 08:41 PM

You know, we've seen this happen before, and I think you'll be quite surprised if you do the following:

- On the original drive, open Terminal and change to the parent of the directories (and/or file) where you noticed a discrepancy

- First, do an "ls -l". You should see that they're owned by you, with your current group status.

- Now, authenticate with sudo -s. Once authenticated, do an ls -l. What's the ownership now?

Needless to say, SuperDuper! runs authenticated... and, when we're authenticated, we get the owner/group the OS gives us... which seems to track the effective UID in some situations. It's weird and kinda subtle, and took us an age to at least figure out what was going on...

sjk 06-21-2004 09:41 PM

authentication snafu
 
Quote:

Originally Posted by dnanian
First, do an "ls -l". You should see that they're owned by you, with your current group status.

Non-auth:
Code:

% ls -dl DiskWarrior DiskWarrior/2004-05-17 DiskWarrior/2004-05-17/Macintosh\ HD\ Report.pdf
drwxr-xr-x  3 me  unknown    102 17 May 19:51 DiskWarrior
drwxr-xr-x  3 me  unknown    102 17 May 19:52 DiskWarrior/2004-05-17
-rw-r--r--  1 me  unknown  61636 17 May 19:52 DiskWarrior/2004-05-17/Macintosh HD Report.pdf

Quote:

Now, authenticate with sudo -s. Once authenticated, do an ls -l. What's the ownership now?
Auth:
Code:

% sudo ls -dl DiskWarrior DiskWarrior/2004-05-17 DiskWarrior/2004-05-17/Macintosh\ HD\ Report.pdf
drwxr-xr-x  3 root  unknown    102 17 May 19:51 DiskWarrior
drwxr-xr-x  3 root  unknown    102 17 May 19:52 DiskWarrior/2004-05-17
-rw-r--r--  1 root  unknown  61636 17 May 19:52 DiskWarrior/2004-05-17/Macintosh HD Report.pdf

Yikes, that's whacky!
Quote:

Needless to say, SuperDuper! runs authenticated... and, when we're authenticated, we get the owner/group the OS gives us... which seems to track the effective UID in some situations. It's weird and kinda subtle, and took us an age to at least figure out what was going on...
That's definitely a rational explanation of what's happening -- thanks! I vaguely remember noticing that in another context; now I won't forget it.

A couple more things:

Any possibility of adding an option for preserving file access times, or would that make SD significantly slower? Not that they're as accurate on OS X as on traditional Unix systems, but I still find uses for that file information.

[OS X != Unix, OS X != Unix, ... :)]

Can you briefly describe the logic Smart Update uses, and whether there's any way it might accidentally (or intentionally ;)) be "tricked" into overlooking files? I can't test it w/o registering, tho' with your smart, superb support so far I'm about *this* close to paying even if I don't use the program. :)

dnanian 06-21-2004 10:00 PM

Told you you'd be surprised about the ownership thing. If you chown those files to you:staff (or whatever), it'll stick from that point forward. I think this is due to some weirdness with OS X supporting both file systems that respect ownership and file systems with ownership 'overlaid' on them for compatibility.

Basically, if it's trying to maintain compatibility, the ownership of the files on the ownership-ignored volumes tracks your own ownership. BUT, it seems that if you copy those files locally, they have some sort of wacky track-uid-and-group value in there, and they do unexpected things on a volume that respects permissions. Weird stuff.
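
Using the listing from your earlier post, the fix would be along the lines of the following (substituting your real user and group, of course):
Code:

% sudo chown -R me:staff DiskWarrior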

We looked at preserving file access times but decided against it: I think we had problems actually getting the value preserved, but truthfully I don't exactly remember. I'll add the request to the list and we'll take another pass at it in the future. (I believe it can't be done: when we tried, it just updated the access time...)

Smart Update is exactly like "Copy Different" with an added "erase" pass. I can't really think of any accidental "tricking" that might happen, unless you modified something, ended up with exactly the same number of bytes, modified the times and metadata so that it would look the same from that perspective, and then did a SU. In that case it might not copy the file, since it doesn't look "different"; we don't use a file CRC to be extra super careful. (Frankly, that really isn't necessary when you're doing a single system-to-backup update; it would just take an enormous amount of time and basically make Smart Update pointless.)

Anyway, once the copying has been completed, we erase things that are on the backup but are no longer on the source. (This is a bit of a simplification: we don't copy everything and then erase; it happens directory by directory, mostly, not drive-wide. If we were making an entirely separate pass, we would have done erase-first anyway.)

This means that there's one potential error case: if the union of the files being copied in a given directory exceeds the capacity of the drive (assuming that all the files are different), we fail because we erase after we copy. That also means that if you rename a large directory, it's possible that we'll copy the new one before removing the old, causing a disk space failure.

In neither case does the failure result in the loss of any data, nor does it fail silently.
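
If it helps to think of it in Unix terms, a loose analogue of that copy-then-erase behavior -- emphatically not our implementation, and stock rsync won't preserve resource forks or HFS+ metadata -- would be a deferred-delete rsync:
Code:

% rsync -a --delete-after /Volumes/Source/ /Volumes/Backup/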

Hope that answers your questions! Glad you're happy with the support: it's part of what you're paying for when you -- hopefully -- pay!

sjk 06-22-2004 11:26 PM

Quote:

Originally Posted by dnanian
Told you you'd be surprised about the ownership thing.

Yep. Fortunately a more benign surprise than discovering how deleting symbolic links with Finder can sometimes delete the target(!)

For "fun" I deleted the ~/Library/Application Support/SuperDuper!/Copy Scripts/Standard Scripts symlink (which worked okay), tried Undo, and Finder griped "The operation cannot be completed because you do not have sufficient privileges for some of the items." Whoops.

Reminds me of the potentially devastating side effects of omitting the "-h" option with the Unix ch{own,grp,mod} commands (the OS X versions are susceptible) when symlinks are involved, a pitfall an impressive number of root-enabled Unix sysadmins don't realize exists even as they use those commands (often recursively). My favorite traditional example:
Code:

% ls -l /etc/passwd foo
-rw-r--r--  1 root  wheel  1374  8 Dec  2003 /etc/passwd
lrwxr-xr-x  1 me    me      11 22 Jun 16:07 foo -> /etc/passwd

% sudo chown me foo
Password:

% ls -l /etc/passwd foo
-rw-r--r--  1 me  wheel    1374  8 Dec  2003 /etc/passwd
lrwxr-xr-x  1 me  me        11 22 Jun 16:07 foo -> /etc/passwd
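
(The safe variant, for completeness: with "-h", chown operates on the link itself and /etc/passwd is left untouched.)
Code:

% sudo chown -h me foo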

Now that I've let that cat out of the bag, back to the topic at hand...
Quote:

We looked at preserving file access times but decided against it: I think we had problems actually getting the value preserved, but truthfully I don't exactly remember. I'll add the request to the list and we'll take another pass at it in the future. (I believe it can't be done: when we tried, it just updated the access time...)
No worries.

Quote:

Smart Update is exactly like "Copy Different" with an added "erase" pass. I can't really think of any accidental "tricking" that might happen, unless you modified something, ended up with exactly the same number of bytes, modified the times and metadata so that it would look the same from that perspective, and then did a SU. In that case it might not copy the file, since it doesn't look "different"; we don't use a file CRC to be extra super careful. (Frankly, that really isn't necessary when you're doing a single system-to-backup update; it would just take an enormous amount of time and basically make Smart Update pointless.)
Is it correct that "Copy newer" will skip files whose created/modified times are older than when they were actually added to the filesystem? Downloads are a good example of that kind of file, so I'm careful to know how "newer" is being interpreted. But that's not relevant with SU, if I understand things correctly.

Quote:

This means that there's one potential error case: if the union of the files being copied in a given directory exceeds the capacity of the drive (assuming that all the files are different), we fail because we erase after we copy. That also means that if you rename a large directory, it's possible that we'll copy the new one before removing the old, causing a disk space failure.
Got it. Quite unlikely I'll encounter that with SU on the system volume.
Quote:

Hope that answers your questions! Glad you're happy with the support: it's part of what you're paying for when you -- hopefully -- pay!
Registered this morning. Smart Update was too much temptation to hold off any longer.

Issue with exclude:

The main "Backup - all files" script excludes var/db/BootCache.playlist and var/db/volinfo.database, but those files exist on the destination volume. Not sure if the original backup or the SU copied 'em since I only noticed after the latter.

Thanks for responding to my VersionTracker feedback. Hope I didn't sound like I was giving misinformation about the way script editing worked.

About the capacity check... after posting I thought of mentioning that a simulation mode would be convenient for certain backup media storage planning scenarios, like what I'm currently doing, especially when the space rules for dealing with disk image files aren't known. For example, I tested creating a disk image of the system volume (~19GB) on another volume with ~25GB free. If I'd let that run it would have overflowed, as expected. A simulation, safely running non-interactively for an hour or two and then warning that the real deal would have failed, would have been nicer than having to manually intervene and abort.

And/or is it possible for the temporary volume to use space on a different volume than the image file's final destination? Several visits to the hdiutil man page have failed to reveal a way of doing that.

Whew. That covers everything and then some for today. :)

dnanian 06-23-2004 07:50 AM

Quote:

Is it correct that "Copy newer" will skip files whose created/modified times are older than when they were actually added to the filesystem? Downloads are a good example of that kind of file, so I'm careful to know how "newer" is being interpreted. But that's not relevant with SU, if I understand things correctly.
SuperDuper!'s "Newer" isn't "Newer since last backup". It's "Newer than the equivalent file on the destination". Every file is always evaluated: files aren't skipped because they're newer/older than some global timestamp. So, no, this isn't a problem.

Quote:

Issue with exclude:

The main "Backup - all files" script excludes var/db/BootCache.playlist and var/db/volinfo.database, but those files exist on the destination volume. Not sure if the original backup or the SU copied 'em since I only noticed after the latter.
I'm fairly sure that excludes are excluded as they should be. If you didn't use erase-then-copy or smart update, those files would indeed still be there. Or, if you checked after you booted, they'd get recreated...

Quote:

Thanks for responding to my VersionTracker feedback. Hope I didn't sound like I was giving misinformation about the way script editing worked.
Well, frankly, I wasn't quite sure what you were getting at, and VT is a terrible place to do support/ask questions. So... fill me in about what you were seeing/confused by!

Quote:

About the capacity check... after posting I thought of mentioning that a simulation mode would be convenient for certain backup media storage planning scenarios, like what I'm currently doing, especially when the space rules for dealing with disk image files aren't known. For example, I tested creating a disk image of the system volume (~19GB) on another volume with ~25GB free. If I'd let that run it would have overflowed, as expected. A simulation, safely running non-interactively for an hour or two and then warning that the real deal would have failed, would have been nicer than having to manually intervene and abort.
Yes, we've considered that as an extension of What's going to happen?, but other things have priority at present.

Quote:

And/or is it possible for the temporary volume to use space on a different volume than the image file's final destination? Several visits to the hdiutil man page have failed to reveal a way of doing that.
Not that we've seen. Conversion is done in place. But conversion isn't strictly necessary... manual use of a sparseimage can resolve this issue, allow faster backups (by skipping the other steps), and allow future smart updates of the image to boot.
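
Roughly, the manual route looks like this (names and sizes hypothetical):
Code:

% hdiutil create -type SPARSE -fs HFS+ -volname Backup -size 30g /path/to/backup
% hdiutil attach /path/to/backup.sparseimage

Then point SuperDuper! at the mounted "Backup" volume; there's no post-copy conversion step, and later runs can Smart Update the image directly.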

sjk 06-23-2004 09:39 PM

Quote:

Originally Posted by dnanian
SuperDuper!'s "Newer" isn't "Newer since last backup". It's "Newer than the equivalent file on the destination". Every file is always evaluated: files aren't skipped because they're newer/older than some global timestamp. So, no, this isn't a problem.

The manual says:

... overwrite files that exist on the destination with those on the source if the source files are newer (with Copy newer) or different (with Copy different), leaving all other files as-is.

Does that mean only files that already exist on the destination are candidates for overwriting? Got a quick example of when those options would be useful? I'm a bit dense today.
Quote:

I'm fairly sure that excludes are excluded as they should be. If you didn't use erase-then-copy or smart update, those files would indeed still be there. Or, if you checked after you booted, they'd get recreated...
First I ran a full erase-then-copy backup (unregistered version) to a FW drive volume, then did a couple of smart updates (registered version). I haven't rebooted since installing SD. The supposedly excluded files do exist on the destination. I can try the same full-then-SU sequence again later (checking after each run) to be 100% certain there's a glitch somewhere.
Quote:

Well, frankly, I wasn't quite sure what you were getting at, and VT is a terrible place to do support/ask questions. So... fill me in about what you were seeing/confused by!
Yeah, I agree about VT. The forum's a bit clumsy, too, especially when the accesskey Control key shortcuts interfere with emacs-style navigation during text editing with Safari (grrr!). Know any tricks to make these inline quoted replies any easier?

I've half figured out the "backup to folder" issue. That's basically what the "backup - user files" script does. With that, is the entire volume erased if the destination is a volume and the erase-then-copy option is set?

The other thing was a summary log in addition to the normal "console" log, something like how each Carbon Copy Cloner session is logged to a separate file.
Quote:

Yes, we've considered that as an extension of What's going to happen?, but other things have priority at present.
What about a simple warning of the overflow possibility right before confirmation of the backup? Or maybe that would be more confusing than helpful.
Quote:

Not that we've seen. Conversion is done in place. But conversion isn't strictly necessary... manual use of a sparseimage can resolve this issue, allow faster backups (by skipping the other steps), and allow future smart updates of the image to boot.
Not sure what all that means -- "conversion is done in place" and "manual use of a sparseimage"? Maybe skip that until I come up with a specific example of something I want to do.

