
NetBSD - Live Network Backup

dvl writes "It is possible, but inconvenient, to manually clone a hard disk drive remotely using dd and netcat. der Mouse, a Montreal-based NetBSD developer, has developed tools that allow remote partition-level cloning to happen automatically, on an opportunistic basis. A high-level description of the system has been posted at KernelTrap. This facility can be used to maintain complete duplicates of remote client laptop drives on a server system. This network mirroring facility will be presented at BSDCan 2005 in Ottawa, ON on May 13-15."
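The manual method the story calls inconvenient is essentially a pair of one-liners. A sketch, with hypothetical hostnames, port, and device names (note that nc's listen syntax varies between BSD and GNU netcat), plus a local demonstration that the block-for-block copy is exact:

```shell
# Receiving server (hostname, port, and devices are hypothetical):
#   nc -l 3000 | dd of=/dev/wd1d bs=64k
# Laptop being cloned:
#   dd if=/dev/wd0d bs=64k | nc backuphost 3000
# The same block-for-block copy, demonstrated on a scratch file:
dd if=/dev/urandom of=/tmp/src.img bs=1k count=64 2>/dev/null
dd if=/tmp/src.img of=/tmp/dst.img bs=8k 2>/dev/null
cmp -s /tmp/src.img /tmp/dst.img && echo "images identical"
```

The inconvenience the story refers to is that both ends must be started by hand and the whole device is re-sent every time; the tools being presented automate exactly that.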
This discussion has been archived. No new comments can be posted.
  • use rsync (Score:2, Informative)

    by dtfinch ( 661405 ) * on Friday April 29, 2005 @11:00AM (#12383586) Journal
It's much less network- and hardware-intensive, and with the right parameters it will keep past revisions of every changed file. Your hard disks will live longer.
  • by hal2814 ( 725639 ) on Friday April 29, 2005 @11:04AM (#12383631)
Maybe setup is inconvenient. Remote backups using dd and ssh (our method) were a bit of a bear to set up initially, but thanks to shell scripting, cron, and key agents, it hasn't given us any problems since. I've seen a few guides with pretty straightforward and mostly universal instructions for this type of thing. That being said, I do hope this software will at least get people to start looking seriously at this type of backup, since it lets you store a copy off-site.
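A cron-driven version of that dd-over-ssh setup might look like the following; the crontab entry, hostnames, devices, and paths are all hypothetical, and the key must be usable without a passphrase prompt (agent or passphrase-less key). The dd|gzip round trip itself is demonstrated locally:

```shell
# Hypothetical crontab entry: nightly image of a client's root slice,
# compressed on the wire to keep the transfer nearer the used-data size:
#   0 2 * * * ssh backup@client "dd if=/dev/rwd0a bs=64k | gzip -1" \
#       > /backup/client-root.img.gz
# The dd | gzip | gunzip round trip, demonstrated on a scratch file:
printf 'boot block contents\n' > /tmp/slice
dd if=/tmp/slice bs=64k 2>/dev/null | gzip -1 > /tmp/slice.img.gz
gunzip -c /tmp/slice.img.gz | cmp -s - /tmp/slice && echo "round trip ok"
```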
  • Re:use rsync (Score:5, Informative)

    by FreeLinux ( 555387 ) on Friday April 29, 2005 @11:06AM (#12383656)
This is a block-level operation, whereas rsync is file-level. With this system you can restore the disk image including partitions. Restoring from rsync would require you to create the partition, format the partition, and then restore the files. Also, if you need the MBR...

    As the article says, this is drive imaging whereas rsync is file copying.
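For the MBR case mentioned above, the block-level copy is a single sector: the MBR is the first 512 bytes of the disk (boot code plus partition table). A sketch with a hypothetical device name, demonstrated on a scratch "disk" image:

```shell
# Saving just the MBR from a real disk (device name is hypothetical):
#   dd if=/dev/wd0 of=/backup/mbr.img bs=512 count=1
# Demonstrated on a scratch file standing in for the disk:
dd if=/dev/zero of=/tmp/disk.img bs=512 count=8 2>/dev/null
dd if=/tmp/disk.img of=/tmp/mbr.img bs=512 count=1 2>/dev/null
wc -c < /tmp/mbr.img
```

File-level tools like rsync never see this sector at all, which is exactly the distinction the parent is drawing.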
  • by Anonymous Coward on Friday April 29, 2005 @11:14AM (#12383728)
Well, not a solution for BSD people (unless you're running a BSD under Xen and the top-level Linux kernel is doing the DRBD).

  • Re:Mac OS X (Score:3, Informative)

    by Anonymous Coward on Friday April 29, 2005 @11:22AM (#12383805)
If you want something for OS X, I'd suggest one of:
CCC (Carbon Copy Cloner)
ASR (Apple Software Restore)
Rsync
Radmind

Have fun on VersionTracker....
  • Re:use rsync (Score:3, Informative)

    by dtfinch ( 661405 ) * on Friday April 29, 2005 @11:51AM (#12384171) Journal
Just make sure the backup server is properly configured (or very nearly so), I guess.

Our nightly rsync backups have saved us many times from user mistakes (oops, I deleted this 3 months ago and I need it now), but we haven't had a chance to test our backup server in the event of losing one of our main servers. We figure we could have it up and running in a couple of hours or less, since it's configured very closely to our other servers, but we won't know until we need it.

  • by gordon_schumway ( 154192 ) on Friday April 29, 2005 @12:00PM (#12384303)

I'd like a system library that would modify the rename(2), truncate(2), unlink(2), and write(2) calls to move the deleted stuff to some private directory (/.Trash, /.Recycler, whatever). Obviously the underlying routine would have to do its own garbage collection, deleting trash files by some FIFO or largest-oldest-first algorithm.

    Done. [netcabo.pt]
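A real interposer for those syscalls would be an LD_PRELOAD library in C; purely as an illustration of the policy being asked for, the same behavior can be sketched at the shell level (trash path and function name are made up):

```shell
# Move files to a trash directory instead of deleting them outright.
TRASH=/tmp/demo-trash
trash() { mkdir -p "$TRASH" && mv -- "$@" "$TRASH/"; }

echo data > /tmp/doomed.txt
trash /tmp/doomed.txt
ls /tmp/demo-trash    # doomed.txt
```

The garbage-collection half (expiring trash FIFO-style or largest-oldest-first) would be a separate periodic job; the shell sketch only covers the "never really delete" half.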

  • Re:Wacky idea (Score:1, Informative)

    by Anonymous Coward on Friday April 29, 2005 @12:02PM (#12384323)
Check out Frisbee (emulab.net) for fast, reliable multicast/unicast distribution of system images.
  • WTF (Score:5, Informative)

    by multipartmixed ( 163409 ) on Friday April 29, 2005 @12:52PM (#12384963) Homepage
    Why on earth are people always so insistent on doing raw-level dupes of disks?

First of all, it means backing up a 40GB disk with 2GB of data may actually take 40GB of bandwidth.

    Second of all, it means the disk geometries have to be compatible.

    Then, I have to wonder if there will be any wackiness with things like journals if you're only restoring a data drive and the kernel versions are different...

I have been using ufsdump / ufsrestore on UNIX for decades. It works great, and it's trivial to pump over ssh:

    # ssh user@machine ufsdump 0f - /dev/rdsk/c0t0d0s0 | (cd /newdisk && ufsrestore f -)

    or


# ufsdump 0f - /dev/rdsk/c0t0d0s0 | ssh user@machine 'cd /newdisk && ufsrestore 0f -'

It even supports incremental dumps (see "dump levels"), which is the main reason to use it over tar (tar can do incrementals with find . -newer X | tar -cf filename -T -, but it won't handle deletes).
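The dump-level sequence mentioned above might look like this (device paths are hypothetical Solaris examples), and the tar work-alike can be demonstrated on a scratch tree:

```shell
# Hypothetical dump-level sequence:
#   ufsdump 0f /backup/full.dump /dev/rdsk/c0t0d0s0   # level 0: everything
#   ufsdump 1f /backup/incr.dump /dev/rdsk/c0t0d0s0   # level 1: changed since level 0
# The find/tar incremental from the comment, on a scratch tree:
mkdir -p /tmp/tardemo && cd /tmp/tardemo
echo old > old.txt
touch -t 200501010000 old.txt         # pretend old.txt predates the last dump
touch -t 200501010000 stamp           # stamp marks the last dump time
echo new > new.txt                    # created after the stamp
find . -newer stamp -type f > /tmp/filelist
tar -cf /tmp/incr.tar -T /tmp/filelist
tar -tf /tmp/incr.tar                 # lists only ./new.txt
```

As the comment notes, this catches new and modified files but has no record of deletions, which real dump levels handle.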

    So -- WHY are you people so keen on bit-level dumps? Forensics? That doesn't seem to be what the folks above are commenting on.

Is it just that open source UNIX derivatives and clones don't have dump/restore utilities?
  • by RonBurk ( 543988 ) on Friday April 29, 2005 @12:59PM (#12385069) Homepage Journal
Image backups have great attraction. Restoring is done in one big whack, without having to deal with individual applications. Absolutely everything is backed up, so there are no worries about missing an individual file. So why haven't image backups replaced all other forms of backup? The reason is the long list of drawbacks.

    • All your eggs are in one basket. If a single bit of your backup is wrong, then the restore could be screwed -- perhaps in subtle ways that you won't notice until it's too late to undo the damage.
    • Absolutely everything is backed up. If you've been root kitted, then that's backed up too. If you just destroyed a crucial file prior to the image backup, then that will be missing in the restore.
    • You really need the partition to be "dead" (unmounted) while it's being backed up. Beware solutions that claim to do "hot" image backups! It is not possible, in the general case, for a backup utility to handle the problem of data consistency. E.g., your application stores some configuration information on disk in a way that happens to require two disk writes. The "hot" image backup software happens to capture the state of the disk after the first write, but before the second. If you then restore that image, the disk is corrupted as far as that application is concerned. How many of your applications are paranoid enough to survive arbitrary disk corruption gracefully?
    • Size versus speed. Look at the curve of how fast disks are getting bigger. Then look at the curve of how fast disk transfer speeds are getting faster. As Jim Gray [microsoft.com] says, disks are starting to behave more like serial devices. If you've got a 200GB disk to image and you want to keep your backup window down to an hour, you're out of luck.
    • Lack of versioning. Most disk image backups don't offer versioning, certainly not at the file level. Yet that is perhaps the most common need for a backup -- I just messed up this file and would like to get yesterday's version back, preferably in a few seconds by just pointing and clicking.
    • Decreased testing. If you're using a versioned form of file backup, you probably get to test it on a fairly regular basis, as people restore accidental file deletions and the like. How often will you get to test your image backup this month? Then how much confidence can you have that the restore process will work when you really need it?
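The size-versus-speed point above is quick to check with back-of-the-envelope arithmetic; the 60 MB/s sustained rate is an assumption (generous for a 2005-era disk):

```shell
SIZE_MB=$((200 * 1024))   # 200 GB disk, in MB
RATE=60                   # assumed sustained transfer rate, MB/s
echo "$(( SIZE_MB / RATE / 60 )) minutes"   # 56 minutes
```

So a full image of a 200GB disk consumes essentially the entire one-hour window even under optimistic assumptions, with no time left for compression stalls, network contention, or verification.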

    Image backups certainly have their place for people who can understand their limitations. However, a good, automatic, versioning file backup is almost certainly a higher priority for most computer users. And under some circumstances, they might also want to go with RAID for home computers [backupcritic.com].

  • by Kent Recal ( 714863 ) on Friday April 29, 2005 @02:49PM (#12386337)
    Ummm. Well, there's DAR [linux.free.fr] and there's kdar [sourceforge.net]. I think there's even a win32 version for the clueless.

It doesn't get much easier than this. You can have a sane, incremental backup running from a single-line cron job, or even point-and-click one together.

If that's not simple enough for you, then you have no business storing or working with sensitive data.
