
NetBSD - Live Network Backup

dvl writes "It is possible, but inconvenient, to manually clone a hard disk drive remotely using dd and netcat. der Mouse, a Montreal-based NetBSD developer, has written tools that automate remote partition-level cloning on an opportunistic basis. A high-level description of the system has been posted at KernelTrap. This facility can be used to maintain complete duplicates of remote client laptop drives on a server system. The network mirroring facility will be presented at BSDCan 2005 in Ottawa, ON on May 13-15."
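For reference, the inconvenient manual baseline the summary mentions amounts to streaming raw partition blocks over a socket. Below is a minimal Python sketch of that dd-and-netcat style copy, not of der Mouse's tool; the device path, host name, and port are placeholders.

    # Minimal sketch of the manual "dd | nc" style copy, not der Mouse's tool:
    # read a raw partition block by block and stream it to a listening backup
    # host over TCP. Device path, host, and port are placeholders.
    import socket

    DEVICE = "/dev/wd0e"                   # partition to clone (adjust as needed)
    SERVER = ("backup.example.org", 9999)  # where something like "nc -l 9999 > wd0e.img" is waiting
    BLOCK_SIZE = 64 * 1024                 # 64 KiB reads, roughly dd bs=64k

    def send_partition(device, server):
        """Stream the whole partition over a TCP connection."""
        with open(device, "rb") as disk, socket.create_connection(server) as sock:
            while True:
                chunk = disk.read(BLOCK_SIZE)
                if not chunk:
                    break
                sock.sendall(chunk)

    if __name__ == "__main__":
        send_partition(DEVICE, SERVER)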
  • Pros and Cons (Score:5, Insightful)

    by teiresias ( 101481 ) on Friday April 29, 2005 @11:01AM (#12383593)
    This would be an extremely sensitive server system. With everyone's hard drive image just waiting to be blasted to a blank hard drive, the potential for misdeeds is staggering. Even in an official capacity, I'd feel uneasy if my boss were able to take a copy of my hard drive image and see what I've been working on. Admittedly, yes, it should all be work, but here we're allowed a certain amount of freedom with our laptops, and I wouldn't want that data at my boss's fingertips.

    On the flip side, this would be a boon to company network admins, especially for employees at remote sites who suffer a hard crash.

    Another reason to build a high-speed backbone. Getting my 80GB hard drive image from Seattle while I'm in Norfolk would mean a lot of downtime.
  • by Bret Tobey ( 844402 ) on Friday April 29, 2005 @11:02AM (#12383603) Homepage
    Assuming you can get around bandwidth monitoring, how long before this gets incorporated into hacking tools? Add this to a little spyware and a zombie network, and things get very interesting for poorly secured networks and computers.
  • Re:use rsync (Score:2, Insightful)

    by x8 ( 879751 ) on Friday April 29, 2005 @11:18AM (#12383764)
    What's the fastest way to get a server running again after a disk crash? With rsync, if I back up /home and /etc, I still have to install and configure the OS and other software. That could take a significant amount of time (possibly days), not to mention the time spent answering the phone (is the server down? when will it be back up?).

    But if I have a drive image, I could just put it on a spare server and be back up and running almost immediately. That would require an identical spare server though.

    What do the big enterprises who can't afford downtime do to handle this?
  • Wacky idea (Score:3, Insightful)

    by JediTrainer ( 314273 ) on Friday April 29, 2005 @11:44AM (#12384080)
    Maybe I should patent this. Ah well, I figure if I mention it now it should prevent someone else from doing so...

    I know how Ghost supports multicasting and such, and I was thinking about how to take that to the next level. Something like Ghost meets BitTorrent.

    Wouldn't it be great to be able to image a drive, use multicast to get the data to as many machines as possible, and then use BitTorrent to get pieces to any machines that weren't able to listen to the multicast (i.e. they're on another subnet or something), to pick up any pieces that were missed in the broadcast, or to fetch the rest of the disk image if a particular machine joined the session a little late and missed the first part?

    I think that would really rock if someone wanted to image hundreds of machines quickly and reliably.

    I'm thinking it'd be pretty cool to have that server set up, and to find a way to cram the client onto a floppy or some sort of custom Knoppix. Find the server, choose an image, and now you're part of both the multicast AND the torrent. That should take care of error checking too, I guess.

    Anybody care to take this further and/or shoot down the idea? :)
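    Purely to make the parent's idea concrete (none of this comes from Ghost, BitTorrent, or der Mouse's tools): a rough Python sketch of a client that collects numbered image chunks from a multicast stream, then fills in whatever it missed over a plain unicast connection. The group address, ports, chunk framing, and chunk count are all invented for illustration.

        # Sketch only: listen for multicast image chunks, record which sequence
        # numbers arrived, then fetch the gaps from a unicast catch-up server.
        import socket
        import struct

        MCAST_GROUP, MCAST_PORT = "239.1.2.3", 5007
        CATCHUP_SERVER = ("image-server.example", 6000)
        CHUNK_SIZE = 8192
        TOTAL_CHUNKS = 100000        # would normally be announced by the sender

        def receive_multicast(chunks):
            """Collect whatever chunks arrive on the multicast group."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", MCAST_PORT))
            mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            sock.settimeout(30)
            try:
                while len(chunks) < TOTAL_CHUNKS:
                    packet = sock.recv(CHUNK_SIZE + 8)
                    seq = struct.unpack("!Q", packet[:8])[0]   # 8-byte sequence header
                    chunks[seq] = packet[8:]
            except socket.timeout:
                pass    # stream finished, or we joined late and missed the start

        def fetch_missing(chunks):
            """Unicast catch-up: request every chunk we never saw."""
            missing = [n for n in range(TOTAL_CHUNKS) if n not in chunks]
            with socket.create_connection(CATCHUP_SERVER) as sock:
                for seq in missing:
                    sock.sendall(struct.pack("!Q", seq))
                    data = b""
                    while len(data) < CHUNK_SIZE:
                        part = sock.recv(CHUNK_SIZE - len(data))
                        if not part:
                            raise ConnectionError("catch-up server closed early")
                        data += part
                    chunks[seq] = data

        if __name__ == "__main__":
            chunks = {}
            receive_multicast(chunks)
            fetch_missing(chunks)

    A real deployment would also verify chunk hashes and learn the total chunk count from an announcement, but the split between "listen to the broadcast" and "unicast the gaps" is the core of the idea.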
  • Re:use rsync (Score:3, Insightful)

    by Skapare ( 16644 ) on Friday April 29, 2005 @11:52AM (#12384206) Homepage

    In most cases, file backups are better. Imaging a drive that is currently mounted writable and actively updated can produce a corrupt image on the backup. This is worse than what can happen when a machine is powered off and restarted: because the sectors are read from the partition over a span of time, the copy can be extremely inconsistent. Drive imaging is safest only when the partition being copied is unmounted.

    The way I make backups is to run duplicate servers and let rsync keep the data files in sync on the backups. If the primary machine has any problems, the secondary can take over. There are other things that need to be done for this, like separate IP addresses for administrative access and for the network services being provided (so that the service addresses can be moved between machines as needed, while the administrator can still SSH in to each one individually).
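    One way to script the duplicate-server approach described above is a periodic rsync pull from the primary onto the hot spare. A minimal sketch, with the host name, directory list, and interval as placeholders (rsync's -a and --delete flags are standard):

        # Periodically mirror the primary's data directories onto this hot spare.
        import subprocess
        import time

        PRIMARY = "primary.example.org"
        DATA_DIRS = ["/home/", "/var/www/", "/var/db/"]

        def sync_once():
            for path in DATA_DIRS:
                # Archive mode, removing files that were deleted on the primary.
                subprocess.run(["rsync", "-a", "--delete", f"{PRIMARY}:{path}", path],
                               check=True)

        if __name__ == "__main__":
            while True:              # in practice this would live in cron or an rc script
                sync_once()
                time.sleep(15 * 60)  # every 15 minutes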

  • by Anonymous Coward on Friday April 29, 2005 @11:58AM (#12384279)
    I've used Linux for years to do this using md running RAID1 over a network block device. It works very well unless you have to do a resync. Is this better than that?

    I'm asking because I'm backing up about a dozen servers in real time using this method, and if this method is more efficient, I might be able to drop my bandwidth usage and save money.
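    For comparison, this is roughly how that Linux setup is usually assembled: attach a block device exported by the backup host with nbd-client, then mirror the local partition onto it with md RAID1. The host, port, and device names below are placeholders, and nbd-client's invocation varies by version, so treat this as a sketch rather than a recipe.

        # Sketch of md RAID1 over a network block device (placeholders throughout).
        import subprocess

        BACKUP_HOST = "backup.example.org"
        NBD_PORT = "2000"

        def attach_and_mirror():
            # Attach the remote export locally as /dev/nbd0 (classic host/port form).
            subprocess.run(["nbd-client", BACKUP_HOST, NBD_PORT, "/dev/nbd0"], check=True)
            # Build a two-device RAID1 array from the local partition and the network device.
            subprocess.run(["mdadm", "--create", "/dev/md0", "--level=1",
                            "--raid-devices=2", "/dev/sda1", "/dev/nbd0"], check=True)

        if __name__ == "__main__":
            attach_and_mirror()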
  • by mrbooze ( 49713 ) on Friday April 29, 2005 @03:50PM (#12387032)
    It's not that complicated. Disk image backups and file-level backups are not intended to serve the same purpose.

    Disk image backups are pure disaster recovery or deployment. Something is down and needs to be back up ASAP, where even the few minutes of recreating partitions and MBRs is unwanted. Or it's about deploying dozens or hundreds of client systems as quickly as possible with as few staff as possible.

    File level backups are insurance for users. Someone deletes/edits/breaks something important and needs it back or an old version back, etc.

    Sometimes, separating those two business needs (DR from user restoration) is the most sensible thing to do.
  • by setagllib ( 753300 ) on Friday April 29, 2005 @10:05PM (#12389841)
    RTFA: It responds to heavy load by making a log (journal?) of the blocks that need backing up, and then handles them when the load is lower. If you do it on swap, then you're insane and deserve whatever you get :)

    This is a good idea, even if its niche is small, but I'm interested in how it handles encryption. If it doesn't allow key regeneration on the fly, HMACs, certificates (or at least PSKs), and the other things we expect from modern systems (SSH, IPsec/IKE, etc.), then it's not going to be very useful. And unless I missed something, it's going to be difficult to tunnel it through a system that does do these things.

    Personally I use SSH to tunnel everything possible, especially from Windows, where IPsec is a joke, and the thought of sending all of my disk writes over a security system that is any less secure is a worry. Just imagine the problems if a man in the middle (or just a sniffer) catches plaintext: they know what you're doing, they know the contents of what you're doing, and they very likely know what to do to exploit it. It's a very good thing that system entropy under *nix is stored in the kernel, not on disk :)
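    Not der Mouse's actual code, but a tiny sketch of the load-dependent journalling behaviour described above: under heavy load only the dirty block numbers are logged, and the queued blocks are shipped once the load drops. The threshold and the send() callback are made up for illustration.

        # Illustration of "journal under load, flush when idle" block mirroring.
        import os

        LOAD_THRESHOLD = 2.0         # arbitrary 1-minute load-average cutoff
        dirty_blocks = set()

        def note_write(block_no, send):
            """Called for every block written to the mirrored partition."""
            if os.getloadavg()[0] > LOAD_THRESHOLD:
                dirty_blocks.add(block_no)   # too busy: just journal the block number
            else:
                send(block_no)               # quiet enough: mirror the block right away

        def flush_when_idle(send):
            """Drain the journal once the machine is no longer busy."""
            while dirty_blocks and os.getloadavg()[0] <= LOAD_THRESHOLD:
                send(dirty_blocks.pop())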
  • Re:Wacky idea (Score:3, Insightful)

    by evilviper ( 135110 ) on Saturday April 30, 2005 @12:56AM (#12390532) Journal
    I must shoot down your idea. I have lots of experience with this sort of thing.

    then use BitTorrent to get pieces to any machines that weren't able to listen to the multicast (ie it's on another subnet or something) and to pick up any pieces that were missed in the broadcast, or get the rest of the disk image if that particular machine joined in the session a little late and missed the first part?

    BitTorrent offers NO advantage for this sort of thing. Why not just a regular network service, unicasting the extra data to hosts that require it? BitTorrent has lots of features that make it more useful for internet downloads, but NONE that would help on a LAN. If a node on a 100Mbps LAN is missing 1GB of an image, it can just request it from a single machine that already has it, and it will get it at 100Mbps. Requesting pieces from two or more different machines will not speed things up. BitTorrent's anti-leech technology would be useless on a LAN, as would the extra hashing, the randomized chunks, and everything else BitTorrent does.

    The only place I think you have a real point is dealing with systems on other broadcast domains... I haven't yet seen a multicast system that does what I needed in that case: unicast the drive image to a machine on a different network, then have that machine multicast it to all the local machines on its network. Instead, you have to do that yourself manually, in a two-step process, which makes the whole thing take at least twice as long.
