Backing up *EVERYTHING*

Hi guys.

I think I’ve identified everything that would be needed to completely back up an InterWorx install, including InterWorx itself, settings, and SiteWorx accounts. What I’d like to know is:

  • Will tarring certain things up let us write a fully functional InterWorx install to disk as a backup? (Say the server swims in 3 feet of water, and we start over from scratch with a new box. I’d like to install the same version of CentOS and InterWorx, untar, then fire everything up.) Can this happen live? (I’m mostly concerned with MySQL being stubborn about databases backed up from their data files while the DB is live…)
  • What needs to be backed up to do this? I’ve got /home, the MySQL data directory, and /etc/httpd/conf.d
  • Is there a better way to do this? Sure, I could pick out specific files and SQL queries, but things could change in the future. I’d rather have too much than not enough. It is hurricane season, you know :slight_smile:

I’ve got half a mind to drive down there and evacuate it myself. :wink:

I can’t speak for an entire Interworx backup (though it sounds like you’ve identified everything Interworx needs to operate), but very soon there will be a Siteworx backup feature that will create a complete backup of a Siteworx account. This should help automate (one aspect of) the restoration of an Interworx box. However, we do not yet have a feature that creates a backup of the Nodeworx settings, which is probably what you are looking for, but I’m sure it’ll be hitting my to-do list very soon. :slight_smile:

Hey all.

So, in the current Interworx version (i.e., 1.8), there is no way to back up Interworx to minimize the impact of nightmare scenarios (e.g., a hard drive getting fried)?

If that’s the case, it looks like the best I can do is back up pertinent directories and hope for the best. Yikes, that doesn’t sound too comforting.

Who wants to go through the hassle of resetting Siteworx accounts, creating NodeWorx accounts, remembering that config file you tweaked to get SpamAssassin running properly, etc.?

Well, I guess nothing’s perfect, though I’m sure the Interworx crew is working on that -_-

–Noah

If you’re worried about a bad hard drive, get a second hard drive installed in your server and then periodically use rsync or tar to back up your directory structure to the second hard drive. (MAKE SURE TO STOP YOUR SERVICES FIRST (httpd, smtp, mysql, etc.).) If something goes wrong, you have your files to restore, though it’s not as easy as simply copying the files back.
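A rough sketch of that sequence (the init script names and the /mnt/backup mount point are placeholders; use whatever your box actually runs):

#!/bin/bash
# Quiesce services so nothing has files open mid-copy, sync everything
# to the second drive, then bring the services back up.
# NOTE: service names and /mnt/backup are illustrative, not prescriptive.

service httpd stop
service mysqld stop
# ...stop your mail services here as well...

rsync -a /home /etc /var/lib/mysql /mnt/backup/

service mysqld start
service httpd start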

As Tim said, you can use ‘normal backup means’ to do a worst-case backup, but you’re right on both your points: 1) it’s not a great solution, and 2) we are working on it :).

Chris

Might want to consider rsync ‘snapshots’
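The usual trick there: each run copies only what changed and hard-links everything else against the previous snapshot, so every dated directory looks like a full backup while only the deltas use disk. A minimal sketch (the paths are made up; rsync has had --link-dest since 2.5.6):

#!/bin/bash
# Hard-link rsync snapshots: unchanged files become hard links into the
# newest previous snapshot, so each dated tree reads like a full backup.
# NOTE: source and destination paths are illustrative.

src="/home"
dest="/backup/snapshots"
today=$(date +%F)

mkdir -p "$dest"

# Newest existing snapshot, if any (ISO dates sort chronologically).
latest=$(ls -1d "$dest"/*/ 2>/dev/null | tail -n 1)

if [ -n "$latest" ]; then
	rsync -a --delete --link-dest="$latest" "$src" "$dest/$today/"
else
	rsync -a "$src" "$dest/$today/"
fi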

Re: the rsync backup solution. With the latest release (I believe it’s rsync 2.6.3), is it necessary to shut down all web services (mysql, apache, mail server, etc.) for rsync to run “without a hitch”?

I’m looking to run a nightly rsync backup via cron on my server. I’d like to avoid possible backup corruption due to running web services.

This is my first time setting up an rsync backup, or any automated backup solution for that matter, so please advise as to whether or not I should be concerned about the web services issue.

TIA,

–Noah

If it were, I’d think the script would do it for you, since it’s designed to be set up in cron and forgotten, but this is an interesting question. I do know from experience that certain files are either not copied or corrupted with the cp command, and Chris advised me to turn the services off when I did a manual backup with tar.

If a file is open when the backup script hits it, you can have problems. The only problems we had showing up in the cron messages were with a log file or two and database files.

Our problems were very minor, and I didn’t really care much, as none of the problem files were critical. Still, we moved to something a little different for the database part of the backup. I have a script set up to pull database names out of /var/lib/mysql and pass them to mysqldump. This way, we have the SQL queries necessary to rebuild the databases, and don’t have to worry about inconsistencies / switching table formats / etc…

Yes, you can have mysqldump export statements for ALL databases, but we wanted to keep them separate for quicker selective restoration.
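(For reference, the all-at-once form is just one flag; a generic sketch with a placeholder password:)

# One pass, one file; restoring a single database means carving its
# statements back out of the combined dump.
mysqldump --all-databases --add-drop-table --password=MYSQL_ROOT_PASSWORD > /root/all-databases-$(date +%F).sql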

None of our user files have been in a problematic state during a backup… yet…

We do basically what CMI does now as well: mysqldump for the MySQL data and rsync for /home and /etc. The rest (at least in our case) isn’t necessary for disaster recovery.
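Roughly like this, as a sketch (the paths, placeholder password, and schedule are examples, not our exact setup):

#!/bin/bash
# Nightly: dump all the MySQL data, then mirror /home and /etc.
# NOTE: /backup and the password are illustrative placeholders.

backupdir="/backup"
datestring=$(date +%F)

# One combined dump of every database.
mysqldump --all-databases --password=MYSQL_ROOT_PASSWORD > $backupdir/mysql-$datestring.sql

# -a preserves ownership/permissions/times; --delete keeps an exact mirror.
rsync -a --delete /home /etc $backupdir/fs/

Kicked off from cron, e.g. a crontab line like “0 3 * * * /root/nightly-backup.sh” for 3 AM nightly.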

Chris

I thought our backup script might be useful to some of you, so I’m posting it after some tweaks today. It’s not spectacular, but it gets the job done.

Two things to note: First, /var/lib/mysql/ contains files other than directories named after databases. This means that when those file names are fed into mysqldump, it will produce an error. I figure this isn’t any worse time-wise than filtering out those two bogus names, and it is a bit easier for the other admins to read :slight_smile: Second, IWorx uses a separate instance of MySQL for itself. It uses a password that isn’t the same as what you set the MySQL root password to through IWorx. You can find this password in /home/interworx/iworx.ini, in the [iworx] section, in the dsn.
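(If you just want to eyeball it, something like this prints the relevant line; the password is embedded in that dsn connection string:)

grep dsn /home/interworx/iworx.ini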

As I mentioned above, you can have mysqldump dump ALL databases at once (which I should probably do for the IWorx database), but doing it database by database makes it easier to selectively pull out the SQL statements needed to restore data.

#!/bin/bash

# Staging area for the individual .sql dumps, and the destination tree
# for the finished archives. Neither is created by this script; the
# directories (including $backupdir/sql, $backupdir/home, and
# $backupdir/iworx) must already exist.
tempdir="/root/backuptmp"
backupdir="/root/backup"
datestring=$(date +%F)

echo USER BACKUPS

echo Backing up SQL databases:
# /var/lib/mysql also contains files that aren't database directories;
# feeding those names to mysqldump just produces a harmless error (see
# the notes above).
for db in $(ls /var/lib/mysql); do
	echo "	$db"
	mysqldump --add-drop-table --add-locks --delayed-insert --extended-insert --lock-tables --password=MYSQL_ROOT_PASSWORD $db > $tempdir/$db-$datestring.sql
done

echo Consolidating and compressing SQL backups
tar -czf $backupdir/sql/sql-$datestring.tar.gz $tempdir/*-$datestring.sql
rm -f $tempdir/*-$datestring.sql

echo Backing up home directories
# -X excludes everything matching the patterns listed in /root/nobackup
tar -cz -X /root/nobackup -f $backupdir/home/home-$datestring.tar.gz /home


echo IWORX BACKUPS

echo Backing up InterWorx databases:
# IWorx runs its own MySQL instance, with its own password and socket;
# the password is in the dsn in /home/interworx/iworx.ini.
for db in $(ls /home/interworx/var/lib/mysql); do
	echo "	$db"
	mysqldump --add-drop-table --add-locks --delayed-insert --extended-insert --lock-tables --password=IWORX_DB_PASSWORD --socket=/home/interworx/var/run/mysql.sock $db > $tempdir/$db-$datestring.sql
done

echo Consolidating and compressing SQL backups
tar -czf $backupdir/iworx/sql-$datestring.tar.gz $tempdir/*-$datestring.sql
rm -f $tempdir/*-$datestring.sql

echo Backing up InterWorx files
tar -cz -X /root/nobackup-iworx -f $backupdir/iworx/iworx-$datestring.tar.gz /home/interworx


echo CLEANING UP
# Lock down ownership and permissions on everything under /root/backup.
find /root/backup -exec chown root:wheel {} \;
find /root/backup -type d -exec chmod 750 {} \;
find /root/backup -type f -exec chmod 640 {} \;

I can’t seem to get CMI’s backup solution to work.

Do I need to create all the directories beforehand?

Yup :slight_smile: It doesn’t bother to try and create directories. You have to set all that up beforehand.
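For the script as posted, that means creating these before the first run:

# Staging directory for the .sql dumps, plus the three destination
# subdirectories the tar and mysqldump steps write into.
mkdir -p /root/backuptmp
mkdir -p /root/backup/sql /root/backup/home /root/backup/iworx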

OK, I got the MySQL stuff to back up, but I’m still lacking the home dir.

tar -cz -X /root/nobackup -f $backupdir/home/home-$datestring.tar.gz /home

What’s with /root/nobackup? What is it for?

The -X option gives tar a file of simple patterns to NOT back up. By simple, I mean shell-style matches like the example below. These are matched against the full path of each file.

There is no point in backing up the InterWorx files there because they get backed up later, so /root/nobackup could have a line with /home/interworx/*. This would make sure that the tarball being created has none of the InterWorx files in it.
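So a /root/nobackup that skips the InterWorx files would be just this one line (tar exclude files take one pattern per line, and there’s no comment syntax):

/home/interworx/*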