How To: Use Dump to back up a full filesystem

Way back in the day when tape drives first started being heavily used to back up Unix machines, the dump command was created. In typical Unix simplicity, dump “dumps” files from one device to another: a tape drive, a hard drive, even a network share. rsync does something similar, but its copies are meant for immediate use rather than archival.

The first step is to be sure dump is installed. If not, use rpm, yum, ports, apt-get, or your local package manager to install dump on your system.

The quickest command to get started is: dump -0 -j9 -f /pathtosavebackup /pathtobackup

This gives us a down and dirty dump of the requested path, or, if / is used as the last argument, the full filesystem starting at the root. The -0 asks for a full, level-0 dump rather than an incremental one (levels 1 through 9 only pick up files changed since the previous lower-level dump).

-j9 tells the command to compress the output with bzip2 at the highest level (9), squeezing the file down as much as possible.

-f defines the device (or file path) to dump to. Keep in mind that you should not dump a filesystem to a file sitting on the very filesystem being backed up; point the output at a different disk, a tape, or a mounted network share.
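As a concrete sketch, suppose the backup target is a network share already mounted at /mnt/backup and the filesystem to back up is /home (both paths are just examples):

  # full level-0 dump of /home, bzip2-compressed, written to the mounted share
  dump -0 -j9 -f /mnt/backup/home-level0.dump /home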

The bad news is that this will take quite a while, depending on how much data needs to be dumped, how fast the hard drives (or tape drive) are, and, if backing up to a network share, how fast the Ethernet connection is. In my tests a 100GB filesystem got compressed down to 23GB and took about five hours across a 10/100 connection.
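For a rough sanity check on that timing: 23GB written over about five hours works out to roughly 23,000MB / 18,000s, or around 1.3MB/s, well under the 10-12MB/s a 100Mbit link can sustain, so the heavy bzip2 compression (plus reading the full 100GB off disk) is more likely the bottleneck than the network.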

After the dump is done, the companion restore command can be used to extract the files onto a new filesystem.
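A minimal sketch of pulling those files back out, assuming the new filesystem is mounted at /mnt/newfs and the dump file is the one written above (again, example paths):

  cd /mnt/newfs                                # restore extracts into the current directory
  restore -rf /mnt/backup/home-level0.dump     # rebuild the entire tree from the level-0 dump

restore -i can be used instead of -r to browse the dump interactively and pick out individual files.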

How To: Budget Network Attached Raid 5 Fileserver, Part 2

In Part One I talked about selecting the hardware and the thinking behind my choices. It’s all been put together, and the next step is to install an OS. For various reasons I decided on FreeNAS, which is based on FreeBSD. It can be grabbed from here.

After a few minutes of looking at FreeNAS I was really quite impressed with it. They’ve taken a hard and convoluted process, added menus, and made it quite easy to set up. Like FreeBSD in general, it can be picky about hardware: if you’re using some off-the-shelf no-name SATA RAID controller, the odds are it’s not going to be supported, though a lot of the more popular and better-quality models are. Both the FreeNAS website (and the FreeBSD website, for that matter) make it a bit hard to find information and support when you’re first starting out. Counterintuitively, you need to click on the Wiki link first and then Knowledge Base, not the Support link, to find installation and configuration documentation. Luckily the menus within FreeNAS are fairly self-explanatory.

The first step is of course to download the image. In my case I grabbed the live CD so that I could simply have the machine boot off of it and be good to go. Another option is to boot off of a USB thumb drive; I’m personally disinclined to use one as they stick out and get broken easily.

A UNIX installer screen will come up and start probing and self-configuring the hardware in the machine. A FreeNAS graphic screen may come up, and eventually it’ll beep when ready. Hit the Escape key and choose option 2 to get an IP address via DHCP. Make sure to hit “Yes” when it asks about choosing an IPv6 address. That step messed me up the first time I saw it, but it’ll simply fail if, as is most likely, there is no IPv6 server around. Most home routers have a DHCP server built in, but there may be some configuration needed, so check the router’s documentation.

Once the IP address has been discovered, type it into a web browser to open up the FreeNAS configuration page. The default username and password are admin:freenas; it’s highly suggested you change the password ASAP, and we’ll go ahead and do that once everything is fully configured. At this point the instructions proved useful.

Step One is to add the physical disks. Under Disks, click on Management, then the + sign. This brings up the disk management screen:

As can be seen, the available disks are at the top. In this case ad0 is the 40GB IDE drive I’m going to eventually use as a boot disk. Per the instructions, change the “Preformatted file system” option to “Software RAID” (the other options in that article may not be available). In this case I have four SATA drives, so each needs to be added individually. Hit the Apply button and each drive is added.

The next step is to create the RAID partition. Go to Disks, Software RAID, then choose RAID 5. Choose a RAID name; it doesn’t matter what, and for simplicity I used “server”. Put a check next to all the disks that are going to be part of the RAID, then another in the “Format and Initialize” box. Hit OK, then once again Apply. Now sit back and wait. On this screen very little is going on, but on the RAID server itself messages will start popping up. Even better, they’re helpful! GEOM_RAID5: server: all(-1): re-sync in progress: 0.01% p:x ETA:232min (cause: store verify progress). After 232 minutes of waiting we got this screen:
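Those same messages can be re-read from the FreeNAS console shell at any point if you want to check on the re-sync without sitting in front of the monitor; a quick sketch using standard FreeBSD tools:

  dmesg | tail -n 20    # show the most recent kernel messages, including the re-sync progress lines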

As can be seen, my four 250GB SATA drives have been built into a 715GB RAID 5 partition. It now needs to be formatted, which is done under Disks, Format. Choose the RAID array and give it a name; there’s no harm in using the same name again. We’ll format it as UFS with GPT and Soft Updates for the filesystem. The other options may work, but are not recommended by the FreeNAS team. Hit the Format button and thirty seconds later the drive is ready to mount.
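As a quick sanity check on that capacity: RAID 5 sets aside one drive’s worth of space for parity, so four 250GB drives give roughly (4 - 1) x 250GB = 750GB of raw usable space, which lands in the neighborhood of the 715GB FreeNAS reports once unit conversion and metadata overhead are taken off the top.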

The mounting screen is sort of confusing at this point. After all, we’ve already created and formatted the RAID drive, so it should be ready. But this step is what actually mounts it so that it can be accessed.

Under Disks, click on Mount Point. Pick the disk, then choose “EFI GPT” under the partition menu. This menu was a bit confusing for me at first, and once again FreeNAS’s documentation leaves the step out. Reading it at first, it seems like option 1 was wanted since we’d set up UFS before, but the filesystem stays UFS either way. The name can be whatever you like; I chose the simple “raid” moniker. The last option could be a real lifesaver if the power ever goes out: “Enable foreground/background file system consistency check during boot process” runs fsck and other filesystem utilities when the machine is powered back on (there’s a rough sketch of the equivalent manual check below). It might take longer to get the RAID back up, but could save problems in the long run.

At this point we’re ready to start mounting the RAID and writing data to it. I’ll talk about doing that in the next article in this series, including troubleshooting and setting up Time Machine to back up to the RAID automatically.
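For reference, that boot-time check is essentially the same thing you would run by hand against an unclean UFS volume. A rough sketch, assuming the formatted RAID volume shows up as /dev/raid5/server (a hypothetical device name; yours may differ):

  fsck -y /dev/raid5/server    # check the UFS filesystem, answering yes to any repair prompts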

Part Three

How To: Budget Network Attached Raid 5 Fileserver, Part 1

I recently came to the conclusion that I need three things.

  1. More Disk Space
  2. Reliable File Backup
  3. Centrally Available File Storage for multiple machines and operating systems

There are multiple solutions to these three problems. Each can easily be tackled separately and there are a ton of good products that do each quite well. But I wanted something that provides all three. With the announcement of Apple’s Time Capsule a NAS or “Network Attached Storage” system suddenly sounded like a great idea. Unluckily Time Capsule does not have a firm ship date, and I’m not sure I wanted to invest in a new Airport Base Station too.

Turning to Google provided hundreds of links to products that were limited to only two hard drives, were USB 2.0 only, or didn’t support RAID 5 or higher. Or worst of all, did what I wanted but cost way too much. So, eyeing the spare hardware pile, I decided to save some money and build my own NAS server with RAID 5 and Gigabit networking. I figured I’d save several hundred dollars and have a system that was more upgradeable and more reliable.

Tom’s Hardware Guide built one in August 2006. Looking over the list of parts used, I wasn’t happy with purchasing a separate SATA RAID card, as that would quickly add to the price. It would also significantly add to the complexity of the entire system, and reduce recoverability if the system went down and I had to put the hard drives into another machine to recover their data. So a new requirement, software RAID, quickly became apparent.

My Hardware Requirements were pretty basic:

  1. Motherboard
  2. CPU
  3. Memory
  4. 4 Hard Drives for the Raid
  5. 1 Hard Drive for Boot
  6. Case
  7. Power Supply
  8. CDRom (for software install only)

Between various upgrades I was able to scrape together a decent older PC case, the boot drive, and two of the four drives for the RAID array.

I’m a big fan of the LX-104 case. These were made by a no-name Chinese company for OEM builders. Despite that, they were very well constructed: thick steel, a snap-together design that actually worked and was easy to use, rounded corners, and an attractive $75 retail price. Since their main competitor at the time was first-generation Enlight cases, this was all pretty attractive. Plus it has two hard drive slots and two 3 1/2 inch bays, so I could install four hard drives there.

The original motherboard, an Intel PII-233-capable Asus board, would have worked well, but then I would have had to go back to a SATA RAID card of some sort. I decided to pick up the Asus M2A-VM board instead. Not only does it have four onboard SATA ports, it also has onboard video and Gigabit Ethernet, all of which use fairly common chipsets and are thus supported by Linux and FreeBSD. It also has hardware RAID, but I didn’t want to use that for the reasons outlined above.

I also needed to pick up a new power supply. I’m not an Antec fanatic like a lot of people, but I decided that quiet was a definite plus, since this machine would effectively be replacing an Apple G5 system that is dead quiet. The Antec TP Trio 430W supply looked like it’d do what I need.

Add in two more Western Digital 250GB hard drives, a single 512MB stick of DDR2 memory, and a retail Athlon 64 3800+ CPU at 2.4GHz, and I was under $400 total. If I didn’t have some of the other parts on hand, the price would have been a lot more, making something like LaCie’s Ethernet Disk RAID much more attractive.

Of course, this would also be a great use for that previous generation PC sitting in the closet. With the addition of a couple of extra hard drives it’d be easy to build a budget NAS for under $200.

The boot drive (a 40GB IDE Maxtor) went into a 5 1/4 inch to 3 1/2 inch bay adapter so that it could sit as master on the same IDE channel as the boot CD drive. The four SATA drives installed easily, and the entire machine booted perfectly fine.

Next Step: Install Software and Configure