NetBSD - Live Network Backup
dvl writes "It is possible but inconvenient to manually clone a hard disk
drive remotely, using dd and netcat. der Mouse, a Montreal-based NetBSD
developer, has developed tools
that allow remote partition-level cloning to occur
automatically on an opportunistic basis. A high-level description of the system has been posted at KernelTrap. This facility can be used to
maintain complete duplicates of remote client laptop drives to a server
system. This network mirroring facility will be presented at BSDCan 2005 in Ottawa, ON, May 13-15."
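For reference, the manual dd-and-netcat approach the summary calls inconvenient looks roughly like this (a sketch only; the port number, hostname, and NetBSD device name are illustrative, not from the article):

```shell
# On the backup server: listen on a TCP port and write the incoming
# raw stream to an image file (some netcats want "nc -l -p 3000").
nc -l 3000 > laptop-wd0e.img

# On the machine being cloned: read the raw partition and ship it
# over the network. /dev/rwd0e is a NetBSD-style raw device name.
dd if=/dev/rwd0e bs=64k | nc backup-server 3000
```

Every run copies the full partition, whether or not anything changed, which is exactly the inconvenience der Mouse's tools are meant to remove.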
Not scalable. (Score:3, Interesting)
Now, rsync would have been fine if we'd unmounted the filesystem and done it on the raw partition. But there are a couple of problems with that:
It's no
Re:Not scalable. (Score:2)
Mac OS X (Score:1, Interesting)
Re:Mac OS X (Score:3, Informative)
I'd suggest either
CCC (Carbon Copy Cloner)
ASR (Apple System Restore)
Rsync
Radmind
Have fun on version tracker....
Re:Mac OS X (Score:1)
The cloning part of the program is free.
use rsync (Score:2, Informative)
Re: (Score:2)
Re:use rsync (Score:5, Informative)
As the article says, this is drive imaging whereas rsync is file copying.
Re:use rsync (Score:3, Insightful)
In most cases, file backups are better. Imaging a drive that is currently mounted writable and actively updated can produce a corrupt image on the backup. This is worse than what can happen when a machine is powered off and restarted: because the sectors are read from the partition over a span of time, things can be extremely inconsistent. Drive imaging is safest only when the partition being copied is unmounted.
The way I make backups is to run duplicate servers. Then I let rsync keep the data files i
Re:use rsync (Score:3, Interesting)
Re:use rsync (Score:2)
Sure, but that's not difficult. Systemimager [systemimager.org] for Linux keeps images of disks of remote systems via rsync, and has scripts that take care of partition tables and such.
Yes, it's written for Linux, but it wouldn't be difficult to update it to work with NetBSD or any other OS. The reason it's Linux specific is that it makes some efforts to customize the image to match the destination machin
Re:use rsync (Score:1)
I think you're talking about distributing a built system to multiple machines in a file farm. At least that's what Brian built SystemImager for, originally: to mass-install a system image to a server farm. As long as the source image is in a stable state, that's fine. But if you are making backups of machines, backing up their actively mounted and working partition by the disk image is a bad idea, regardless of the tool. I used to do that once after I built a system just so I have an image of the whol
Re:use rsync (Score:2, Insightful)
But if I have a drive image, I could just put it on a spare server and be back up and running almost immediately. That would require an identical spare server though.
What
Re:use rsync (Score:3, Informative)
Our nightly rsync backups have saved us many times from user mistakes (oops, I deleted this 3 months ago and I need it now), but we haven't had a chance to test our backup server in the event of losing one of our main servers. We figure we could have it up and running in a couple hours or less, since it's configured very closely to our other servers, but we won't know until we need it.
Re:use rsync (Score:2)
We were developers plagued with an IT department that wanted to take control of the application and add red tape to our deployment cycle. While we understood there was a place for it, we worked for a
Re:use rsync (Score:2)
Your IT dept. probably has more of a clue than you do.
Re:use rsync (Score:2)
Besides, many companies larger than those guys use IIS...
Re:use rsync (Score:2)
They have hot standby servers and/or clusters (so that individual server downtime becomes irrelevant), automated installation procedures (so that reinstalling machines takes maybe a couple of hours at the most) and centralised configuration management tools (so that restoring the new machine to the same state as the old one is simply a matter of kicking off the config management tool and letting it reconfigure the machine appropriately).
No, it's not. (Score:2)
We use a scheme which actually seems better for systems which are always on: DRBD for Linux [drbd.org]. Basically, ever
Re:No, it's not. (Score:2)
Re:use rsync (Score:2)
Anybody have any experience with that?
Pros and Cons (Score:5, Insightful)
On the flipside, this would be a boon to company network admins especially with employees at remote sites who have a hard crash.
Another reason to build a high speed backbone. Getting my 80GB hard drive image from Seattle while I'm in Norfolk would be a lot of downtime.
Re:Pros and Cons (Score:1)
(from the comments [kerneltrap.org] below article)
Another reason to build a high speed backbone. Getting my 80GB hard drive image from Seattle while I'm in Norfolk would be a lot of downtime. (parent)
Seems that this thing will sync up every time you call home. So when you're on the road downloading that just-updated massive PPT presentation for your conference.... you'll be downloading one copy from the server while the server is desperately try
Re:Pros and Cons (Score:1)
Your boss has the right and the ability (at least at my company) to do that. Plus, I leave my personal and secret stuff on my box at home, not at work, where it belongs. If I were a boss, I would want the ability to see what my employees are working on. That's why I pay them.
Re:Pros and Cons (Score:2)
See what I've been working on... (Score:1)
It isn't your laptop. You have no freedom to do anything with it.
Re:See what I've been working on... (Score:2)
So what if people have some MP3s on their hard disk - if listening to music is affecting their work then it's the responsibility of their supervisor to deal with that.
I've worked support before, and as much as users can be a pain in the ass, the only reason you have a job is because of them - without users, there is no point in an IT department.
Re:See what I've been working on... (Score:2)
I'm pretty sure this is just a troll, but since there are probably quite a few inexperienced people out there who really do think like this...
Sorry. As an IT guy I routinely peruse people's hard drives looking for interesting material. I use Windows Scripting Host to search everyone's drives for MP3s, WMAs, AVIs, and MPGs.
Idiots like you are why IT departments have to struggle to do their jobs properly.
It isn't your laptop. You have no freedom to do anything with it.
It isn't *yours*, either, hots
Re:Pros and Cons (Score:1)
Re:Pros and Cons (Score:1)
Perfect for those moments... (Score:4, Interesting)
Now we can just hit a button and restore everything, a few thousand miles away.
The only thing left is to write code to block stupid people from reproducing.
Re:Perfect for those moments... (Score:1)
Re:Perfect for those moments... (Score:2)
Theoretically, a drive defrag should have no effect on how an operating system runs; it only re-sorts the physical drive to make file access faster. But for some reason, it messes things up.
Re:Perfect for those moments... (Score:2)
Re:Perfect for those moments... (Score:1)
Re:Perfect for those moments... (Score:4, Funny)
Unfortunately the user interface for the relevant hardware has a very intuitive point-and-shoot interface.
Not really. (Score:1)
Re:DOS of the backup server (Score:3, Insightful)
This is a good idea, even if its niche is small, but I'm interested in how it handles the encryption. If it doesn't allow key re-generation on the fly, HMACs, certificates (or at least PSKs) and other things we expect from modern (SSH, IPSec/IKE, etc) systems then it's not going to be very useful.
Re:Entropy on Disk (Score:2)
I hate that I only have one machine with a hardware random number generator under my administrative control, and it currently runs Windows, so I can't even import the entropy over to more important machines. But then there's always the "roll your own user-space entropy harvester" option.
How long before this becomes a hack? (Score:4, Insightful)
Re:How long before this becomes a hack? (Score:2)
If somebody has the ability to install this on a machine then the problem is not with this module, it's that the person somehow got root privileges. In that sense this is no more of a "hack" than ssh, rsync, ne
Re:How long before this becomes a hack? (Score:1)
Re:How long before rsync becomes a hack? (Score:1)
Done this for years (Score:5, Funny)
Re:Done this for years (Score:1)
I've been using der Mouse to copy files for years. First I use der Mouse to click on the file, then I use der Mouse to drag it to a new location!
Best. Comment. Ever. Wish I still had the mod points from yesterday.
What is the origin of "der" in "der Mouse" (Score:3, Interesting)
Should be obvious. (Score:1)
Bork, bork, bork.
Re:What is the origin of "der" in "der Mouse" (Score:2)
Re:What is the origin of "der" in "der Mouse" (Score:2)
ttyl
Farrell
Dump? (Score:1)
Maybe setup is inconvenient. (Score:3, Informative)
How does this handle active filesystems? (Score:2)
Automatic Backup for Paranoids? (Score:1)
Well, I'm not really paranoid, but I had some cases where faulty file system drivers or bad RAM modules changed the content of some of my files, and where I then overwrote my backup with these bad files.
Isn't there any automatic backup solution that avoids such a thing? What I have in mind: there should be several autonomous instances of backup servers (which may actually reside on desktop PCs linked via LAN) that control each o
Re:Automatic Backup for Paranoids? (Score:3, Interesting)
I really like having several months worth of nightly snapshots, all conveniently accessible just like any other filesystem, and just taking up slightly more than the space of the changed files.
Re:Automatic Backup for Paranoids? (Score:1)
Yes, there is, but it's expensive
IBM Tivoli Storage Manager [ibm.com] Just Works (after a rather complicated setup process), does its job in the background on whatever schedule you choose, does it without complaint, maintains excruciatingly detailed logs, maintains multiple back-revisions of files, works over a network, SAN, or shared-media, and talks to tape drives and optical drives and pools of cheap disk. If you want, backups can be mirrored across multiple TSM servers, and you can always fire up the (simple, ug
Meh. You can use DRBD on Linux anyway. (Score:1, Informative)
Right solution, wrong problem (Score:3, Interesting)
Most (all) of my quick restore needs result from users deleting or overwriting files - the hardware is more reliable than the transaction. I do have on-disk backups of the most important stuff, but sometimes they surprise me.
I'd like a system library that would modify the rename(2), truncate(2), unlink(2), and write(2) calls to move the deleted stuff to some private directory (/.Trash,
Just a thought.
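One userland approximation of that idea, without touching the system calls, is a small wrapper script that moves files into a trash directory and prunes oldest-first when it grows too large. A hedged sketch (the trash path and size limit are invented for illustration; this only catches deletions that go through the wrapper):

```shell
#!/bin/sh
# trash: move files into $HOME/.Trash instead of unlinking them,
# then do simple FIFO garbage collection once the trash directory
# exceeds a size cap.
TRASH="$HOME/.Trash"
LIMIT_KB=102400          # illustrative cap: ~100MB

mkdir -p "$TRASH"
for f in "$@"; do
    # timestamp + PID prefix avoids name collisions between deletions
    mv -- "$f" "$TRASH/$(date +%s).$$.$(basename "$f")"
done

# FIFO: remove the oldest entries until we're back under the cap.
while [ "$(du -sk "$TRASH" | awk '{print $1}')" -gt "$LIMIT_KB" ]; do
    oldest=$(ls -tr "$TRASH" | head -n 1)
    [ -n "$oldest" ] || break
    rm -rf "$TRASH/$oldest"
done
```

Catching rename(2)/unlink(2)/truncate(2) for every program, as the parent proposes, would need an LD_PRELOAD shim or kernel support rather than a wrapper like this.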
Re:Right solution, wrong problem (Score:2)
WTF?
Re:Right solution, wrong problem (Score:2)
>WTF?
Right tool for the right job. See this [phrases.org.uk].
Re:Right solution, wrong problem (Score:2)
Re:Right solution, wrong problem (Score:5, Informative)
I'd like a system library that would modify the rename(2), truncate(2), unlink(2), and write(2) calls to move the deleted stuff to some private directory (/.Trash, /.Recycler, whatever). Obviously the underlying routine would have to do its own garbage collection, deleting trash files by some FIFO or largest-older-first algorithm.
Done. [netcabo.pt]
Re:Right solution, wrong problem (Score:1)
Why modify the system calls? Keep the system calls simple and orthogonal, so the kernel codebase stays small(er). Write this functionality in userland, starting wherever you are
Re:Right solution, wrong problem (Score:2)
So, are you saying that the parent should modify every single binary on the system? Including binaries that he may not have source to? Sounds pretty much unworkable. While I wouldn't propose that the parent poster actually implement such a system, the only reasonable place to do this IS at the system call level, where it can be applied to everything.
Personally, I think you are better
Re:Right solution, wrong problem (Score:1)
Here is the problem: existing programs are written knowing that deleted files disappear immediately. Therefore, since programs may be writing temporary files to /tmp or elsewhere, or even have their own backup systems, a garbage system with limited space could end up playing housekeeper for thousands of unused or redundant files, and few of the legitimate ones.
Yes, my solution only works for future use; but the system call solution breaks the expectations of already-written programs, and muddles the un
Re:Right solution, wrong problem (Score:1)
I'd really like to use this for backup and disaster recovery. Couple it with FreeBSD's snapshot and you have a large part of the NetApp functionality.
nothing new (Score:2, Interesting)
It is nice, though, to have something like this in the open source world. Competition is good.
How Soon (Score:1)
SIGS!!! We don't need no stinkin' sigs
Wacky idea (Score:3, Insightful)
I was thinking - I know how Ghost supports multicasting and such. I was thinking about how to take that to the next level. Something like Ghost meets BitTorrent.
Wouldn't it be great to be able to image a drive, use multicast to get the data to as many machines as possible, and then use BitTorrent to deliver pieces to any machines that weren't able to listen to the multicast (i.e. on another subnet or something), to pick up any pieces that were missed in the broadcast, or to get the rest of the disk image if a machine joined the session a little late and missed the first part?
I think that would really rock if someone wanted to image hundreds of machines quickly and reliably.
I'm thinking it'd be pretty cool to have that server set up, and find a way to cram the client onto a floppy or some sort of custom Knoppix. Find server, choose image, and now you're part of both the multicast AND the torrent. That should take care of error checking too, I guess.
Anybody care to take this further and/or shoot down the idea?
Re:Wacky idea (Score:2)
Multicast will work across subnets (you just need to set the TTL > 1).
Re:Wacky idea (Score:1)
With BitTorrent you could set up your server as the tracker and multicaster for your images. BitTorrent doesn't HAVE to make it out onto the internet, you just keep the BT traffic inside your corporate network. The BT would be extremely helpful to distribute the load across multiple computers instead of just hitting one machine.
Another thing, I was thinking (usually a bad thi
Re:Wacky idea (Score:2)
Either way, BitTorrent is completely useless in an environment where multicast is available.
Re:Wacky idea (Score:1, Informative)
multi/unicasting system images
Re:Wacky idea (Score:3, Insightful)
BitTorrent offers NO advantage for this sort of thing. Why not just a regular network service, unicasting t
How does this compare to md over a network block device (Score:1, Insightful)
I'm asking because I'm backing up about a dozen servers in real time using this method, and if this method is more efficient, then I might be able to drop my bandwidth usage and save money.
Way better. (Score:1)
dd over a LAN (Score:1)
ghost 4 unix (Score:3, Interesting)
http://www.feyrer.de/g4u/ [feyrer.de]
Re:ghost 4 unix (Score:2)
- Hubert
This is great (Score:2)
And since we're running OpenBSD on those machines, porting this sho
Scalability Forking? (Score:2)
WTF (Score:5, Informative)
First of all, it means backing up a 40GB drive with 2GB of data may actually take 40GB of bandwidth.
Second of all, it means the disk geometries have to be compatible.
Then, I have to wonder if there will be any wackiness with things like journals if you're only restoring a data drive and the kernel versions are different...
I have been using ufsdump / ufsrestore on UNIX for years. For example (/filesystem stands in for whatever filesystem you're dumping):
# ssh user@machine ufsdump 0f - /filesystem | ufsrestore rf -
or
# ufsdump 0f - /filesystem | ssh user@machine ufsrestore rf -
So -- WHY are you people so keen on bit-level dumps? Forensics? That doesn't seem to be what the folks above are commenting on.
Is it just that open source UNIX derivative and clones don't have dump/restore utilities?
Re:WTF (Score:1)
Yes!
EnCase Enterprise Edition costs $10,000 per license. This software basically mimics EnCase's functionality for free.
If der Mouse were to port this to the Windoze world, and get CFTT (http://www.cftt.nist.gov/ [nist.gov]) to validate its forensic soundness, he could make a fortune undercutting Guidance Software.
Re:WTF (Score:1)
I used to QA a series of imaging tools on Windows boxes, which involved performing a series of regression tests over the software install and operation. The software had to work on 98/2K/2000/XP, with or without any number of updates and service packs, and in concert with several versions of either IE or Netscape (4, 6, and 7 series). Having a block-level copy of the disk of a test machine in various system configuration states
Re:WTF (Score:2, Interesting)
I know some Linux distros don't come with dump/restore. Maybe that's why more people don't use it.
Re:WTF (Score:3, Interesting)
I can think of a few reasons. It makes time-consuming partitioning/formatting unnecessary. It does not require as much work to restore the bootable partition (i.e. no need to bootstrap to run "lilo", "installboot" or whatnot). But mainly, because there are just no good backup tools...
Full dumps work fine, despite
Re:WTF (Score:3, Interesting)
The Dark Side of Image Backups (Score:5, Informative)
Image backups certainly have their place for people who can understand their limitations. However, a good, automatic, versioning file backup is almost certainly a higher priority for most computer users. And under some circumstances, they might also want to go with RAID for home computers [backupcritic.com].
Re:The Dark Side of Image Backups (Score:3, Interesting)
Great. Now, could you please enlighten us as to what a good, automatic, versioning file-based backup system might consist of?
AFAICT, this doesn't seem to exist. It doesn't matter how much sense it makes, or how perfect the idea is. It is simply unavailable.
In fact, the glaring lack of such a capable s
Re:The Dark Side of Image Backups (Score:3, Informative)
It doesn't get much easier than this. You can have a sane, incremental backup setup in a single line cronjob or even point and click one up.
If that's not simple enough for you then you have no business storing or working with sensitive data.
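For concreteness, the sort of one-line cronjob the parent means might look like this (the host and paths are invented for illustration):

```shell
# crontab entry: every night at 02:30, incrementally mirror /home to
# a backup host; rsync only transfers files that have changed.
30 2 * * * rsync -a --delete /home/ backuphost:/backups/home/
```

Pair it with a --link-dest rotation scheme on the backup host if you also want versioned snapshots rather than a single mirror.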
Re:The Dark Side of Image Backups (Score:2, Insightful)
Disk image backups are pure disaster recovery or deployment. Something is down and needs to be back up ASAP, where even the few minutes of recreating partitions and MBRs is unwanted. Or it's about deploying dozens or hundreds of client systems as quickly as possible with as few staff as possible.
File level backups are insurance for users. Someone deletes/edits/breaks something important and
the shared secret (Score:1)
----------- Confucius say: "The shared secret is no longer a secret."
Has this been invented? (Score:1)
rsync does a fine job for backups. (Score:1)
It makes it a lot easier to find a file, because it exists in the same location, uncompressed.
The huge advantage though, is that rsync only transfers those files that have changed. Which means that backups are very quick.
I also mount samba shares on the backup server, and do rsync backups of "My Documents" folders for the windows boxes. Works great there too!
Even better, the My Documents folders are available as
Re:BSD is 10 years too old (Score:1)
Re:der Mouse? (Score:2)
It does seem wrong to allow for an anonymous developer as NetBSD has; Mike Parker sounds much better than der Mouse.
That they allow this is their choice, however. That he is bitter about Theo de Raadt and his OpenBSD project does not warrant that kind of behaviour, true, but it is up to NetBSD to choose what they view as proper behaviour within their developer circle.
They made that choice with Theo years ago; maybe with time they will choose to rein Mike in. Mike doesn
Re:der Mouse? (Score:1)