Considering the switch...

When you add --siteworx to the backup.pex parameter list, it automatically sets the filename to a standard template, <domain>[backup-type]<timestamp>.tar.gz, and a standard location, /home/<uniqname>/backups. Items in <> are required; items in [] are optional. [backup-type] indicates what type of partial backup was made, and is omitted on a full backup.

However, if you replace --siteworx with --file=<path/and/filename.tar.gz>, you can specify any filename and location you want. Note that if you do not place this file in /home/<uniqname>/backups, it won't show up in the NodeWorx/SiteWorx backup management interface.
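As a sketch, the two styles of invocation described above would look something like this. Treat it as a hedged illustration: the way backup.pex is actually launched, and the exact separators in the filename template, are assumptions on my part.

```shell
# Hypothetical invocations -- backup.pex itself ships with InterWorx:
#   backup.pex --domain=example.com --siteworx
#   backup.pex --domain=example.com --file=/root/example.com-manual.tar.gz
# Only the first form lands in /home/<uniqname>/backups, where the
# NodeWorx/SiteWorx interface will see it.

# Illustration of the default name --siteworx generates; the "-"
# separators and the timestamp format are assumptions:
domain="example.com"
backup_type="web"      # partial backup type; omitted on a full backup
ts="1118102400"        # placeholder timestamp
echo "/home/exampleuser/backups/${domain}-${backup_type}-${ts}.tar.gz"
```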

We can add a --help. :slight_smile:

This is correct. Even if you backed up all that information, it won't be easy to restore all of NodeWorx (though obviously easier if you did manage to have it all backed up somewhere).

As for permissions, there are two scripts that can fix most of them (but not all), and both must be run as root: cvspermsfix.pex and varpermsfix.pex.

cvspermsfix.pex has no additional parameters, and will fix all the permissions in the Interworx install directory.

varpermsfix.pex has one parameter, --siteworx=<domainname.com>, and will set all the var directory permissions of the account belonging to domainname.com back to the defaults. Specifically, everything in /home/<uniqname>/var/. This does not touch web data, since those permissions can be set to anything you want, so we wouldn't even know what to set them to.
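For reference, a hedged sketch of the two command shapes described above (the scripts themselves ship with InterWorx and must be run as root; only the flag syntax is taken from the description):

```shell
# Hypothetical command lines for the two permission-fix scripts:
#   cvspermsfix.pex                             # no parameters; fixes the
#                                               # InterWorx install directory
#   varpermsfix.pex --siteworx=domainname.com   # resets /home/<uniqname>/var/
#                                               # for that account
# Assembled here only to show the shapes of the calls:
fix_all="cvspermsfix.pex"
fix_var="varpermsfix.pex --siteworx=domainname.com"
echo "$fix_all"
echo "$fix_var"
```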

Most of my stuff is database driven and I back up my mysql databases hourly. The pressing need is to be able to restore everything as is to a single point in time and then update dynamic data such as database data to within the hour if possible.

Webmin's restore routines don't mess with permissions at all; the files will be replaced as they were, permissions intact. Restoring is always messy to some degree no matter what you use to do it; the desire is to minimize it as much as possible.

The onus is on me to make sure I get everything, which is what this discussion is about. I use Webmin because its capabilities are very sophisticated, because I'm very familiar with the product, and because it works, something that could not be said about Ensim at all. I do realize that I am taking a chance by using it, which is the reason for hashing this out; however, it'll only be until a better method comes about for doing it through InterWorx.

As you can probably tell, backups are one of the most important things on my list, just my long experience in IT playing itself out:>)

I'm impressed with the support over here, glad I'm making the switch!

This is why I backup what I do on my box. I’m concerned mostly about the DoomsDay event. I can re-create a domain or two, and recover files and email during an hour’s outage. However, if the server dies, I want hdc to have a complete backup for the event of a bare-metal reinstall to hda.

/etc/ is backed up because it contains server configs, including the entire httpd/ config tree.
/var/ is backed up because it has logs, qmail configs, dbases, etc.
/home/ is backed up for user mail and site files, as well as iworx specific data
/service/ is for various djb software junk.

the -a option in rsync is a shortcut for a huge amount of options (equivalent to -rlptgoD) which, among other things, ensures that permissions are preserved (which will match new UIDs since /etc/passwd is backed up).

Incidentally, since InterWorx doesn't recognize more than one disk, the -x option isn't necessary. I hadn't thought about the --delete option, but I think I'll add it to mine; however, you should consider using --delete-after instead, since by default rsync deletes files before transferring. Also, -v is annoying in the logwatch reports. This leaves me with:


#!/bin/bash
rsync -a --delete-after /etc /mnt/backup
rsync -a --delete-after /home /mnt/backup
rsync -a --delete-after /service /mnt/backup
rsync -a --delete-after /var /mnt/backup

Can anyone from Interworx comment on whether I’m missing anything in my backups? I know that there’s no click-a-button-restore from my method, but I should be able to rebuild it, databases and all.

Not sure if you've tested InterWorx's restore yet, but it isn't messy at all. It's literally "one click" if the restore file is in the right place.

Socheat, can that shell command be set up to just back up databases?

Yea, I'm more referring to a situation of a hard drive failure, or a scenario where I had a rootkit infect the server and had to trash it.

I guess I should have qualified that. In the scenario of the rootkit, once the rootkit was discovered, they give you a small amount of time before you get to wake up with a clean server, with your old drive accessible for a period of time, as I suspect anyone like EV1 would do.

So my process is to log in to the new server, wget the Webmin RPM, install it, log in to Webmin, access the restore file, and tell it to restore /home from last night's dump, then restore all the config files, then restore live database data from the latest hourly dump, etc.

Before I decided to make the switch, EV1 would have set me up with a server with Ensim already on it, which is what happened last time. That's when I found out how bad their backup/restore process was: it literally took me 2.5 days to fully recover. Thus I went the Webmin route and tested restoring to a new server, and the entire process took me less than two hours from the time the new server was handed to me.

I have not tested InterWorx's restore yet, but it looks like I'd have to have a method of accessing the system: accessing the secondary mounted drive (FWIW, I now have a mirror drive I back up to), navigating to the domain name, identifying the file name of the latest backup, typing all of that into the restore box, restoring the site, and then moving on to the next one.

And that's after I've gone through the non-painless process of reinstalling InterWorx. Of course, InterWorx is going to have to be reinstalled anyway, so I'm assuming that I'd install it first, after Webmin, then restore /home, but I'm actually unsure of the process.

The rootkit was nasty. It really upset my clients, one, that I got infected with one, and two, that it took me so long to recover, something that I guaranteed them would never happen again.

It was my fault; I had left an easily cracked password on the server along with root login enabled, thus I got what I deserved. The intruder and I were actually attempting to lock each other out for a time. I won because I had Webmin and was able to quickly install a firewall with only the Webmin port open. This allowed me to, one, find out where the rootkit was, which is important because you want to ensure you don't restore it, and, two, take some last-minute backups before I trashed the server.

I’d rather deal with a HD failure personally:>)

So based on this what would be the process with Interworx, if we were to awaken to a brand new server starting from scratch?

It’s still easier to do this with InterWorx :wink:

I guess I should have qualified that. In the scenario of the rootkit, once the rootkit was discovered, they give you a small amount of time before you get to wake up with a clean server, with your old drive accessible for a period of time, as I suspect anyone like EV1 would do.

So my process is to log in to the new server, wget the Webmin RPM, install it, log in to Webmin, access the restore file, and tell it to restore /home from last night's dump, then restore all the config files, then restore live database data from the latest hourly dump, etc.

This is how you would do it with InterWorx:

  1. Get a new server and have a fresh OS installed on the new hard drive, with the old one slaved on.
  2. Install InterWorx (not super hard: wget the install script and follow the prompts, entering your license when prompted). The most important thing here is to have a fresh "stock" OS install without Apache or any of the other stuff (InterWorx installs its own RPMs) to minimize conflicts.
  3. Log into NodeWorx.
  4. Configure the DNS server settings under NodeWorx => Server Settings (ns1 and ns2.whatever).
  5. Go to NodeWorx => Restore.
  6. Point the restore to your backup file on the server's file system and let it do its work. It really is "one click".

Before I decided to make the switch, EV1 would have set me up with a server with Ensim already on it, which is what happened last time. That's when I found out how bad their backup/restore process was: it literally took me 2.5 days to fully recover. Thus I went the Webmin route and tested restoring to a new server, and the entire process took me less than two hours from the time the new server was handed to me.

I have not tested InterWorx's restore yet, but it looks like I'd have to have a method of accessing the system: accessing the secondary mounted drive (FWIW, I now have a mirror drive I back up to), navigating to the domain name, identifying the file name of the latest backup, typing all of that into the restore box, restoring the site, and then moving on to the next one.

And that's after I've gone through the non-painless process of reinstalling InterWorx. Of course, InterWorx is going to have to be reinstalled anyway, so I'm assuming that I'd install it first, after Webmin, then restore /home, but I'm actually unsure of the process.

Anything you restore from the shell or Webmin has the possibility of NOT being recognized by InterWorx, and you'd end up making a lot bigger mess.

One thing I've learned about this software over the last year or so, from my own experience and that of others I've talked to, is that it's a lot easier in the long run to work with InterWorx than against it.

The rootkit was nasty. It really upset my clients, one, that I got infected with one, and two, that it took me so long to recover, something that I guaranteed them would never happen again.

It was my fault; I had left an easily cracked password on the server along with root login enabled, thus I got what I deserved. The intruder and I were actually attempting to lock each other out for a time. I won because I had Webmin and was able to quickly install a firewall with only the Webmin port open. This allowed me to, one, find out where the rootkit was, which is important because you want to ensure you don't restore it, and, two, take some last-minute backups before I trashed the server.

I’d rather deal with a HD failure personally:>)

So based on this what would be the process with Interworx, if we were to awaken to a brand new server starting from scratch?

Ultimately it's up to you how you do this, but I think you are making things harder on yourself using Webmin.

shrug to each his own.

Tim

I'd like to try different things, like just restoring the InterWorx MySQL database files, which should restore all the SiteWorx settings, since they are stored in the InterWorx database.

Is there any “testing” (something free :wink: ) version of InterWorx available?

Maybe limited somehow, but enough to test different things on a test server?

Here's the "--help" of backup.pex:

Optional:
--web (backup web files)

--mail (backup email files)

--databases (backup databases)

--all (backup everything in /home/uniqname; default if the previous 3 are all omitted)

--email (email address to send a "Done!" email to)

Required:
--domain (domain to backup)

--file (set the path and filename of the backup file; however, this will change in a future release to just setting the path, and the backup filename will always be autogenerated by the script to maintain consistency)

--siteworx (set the path and filename to the default location and filename template)

Only --file OR --siteworx needs to be supplied, since they obviously achieve the same result.
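Putting the switches together, a databases-only backup of one account might be invoked like this. This is a hypothetical sketch assembled from the --help text above; the example domain and email address are placeholders, and the block only builds and prints the command line rather than running the real pex script.

```shell
# Assemble a backup.pex invocation from the --help text above:
#   required: --domain, plus one of --file / --siteworx
#   optional: --databases (partial backup), --email (completion notice)
domain="example.com"
notify="admin@example.com"
cmd="backup.pex --domain=${domain} --databases --email=${notify} --siteworx"
echo "$cmd"
```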

We're just hashing it out, that's all; I've yet to implement anything.

First item: on a Red Hat Enterprise system, the install of InterWorx is somewhat more than just a clean system and running the install script; there are apparently some bundling issues with the Red Hat RPMs.

However, I fully anticipate that after the server is installed, the second step in the process is a ticket to InterWorx to install the product on the server.

In the scenario using InterWorx, I'd be restoring anywhere between 30 and 50 sites on a server, depending on the server configuration; that's 30-50 restores vs. 1 with Webmin. It is also 30-50 backups per day vs. 1 with Webmin, not to mention that I take full backups of database data hourly, which is one backup and one restore to get all of it.

However, I'd prefer to use InterWorx, given some enhancements to their process: 1. scheduled backups, and 2. the ability to dump all the sites at once via the control panel, as well as restore all of them at once, or one at a time, from that one dump file. I've never had much need myself for a dated/timed backup file; I never really had the need to restore anything except from the latest backup, and for most of my sites, recovering their database data to within the hour is much more important than anything else.

My SLAs with my customers state that I'll restore the site to the latest nightly backup, and I'll restore the database data to within an hour of the failure.

Boy did I miss on that with the rootkit failure:>)

Yes, BUT SiteWorx users can't use it in a cron job. The cron job has to be set up for the iworx user :-p

Any solutions ?

======
Bleusin :

A comment :
If you want to back up all your SiteWorx accounts in one shot, you'll have a huge CPU load for a long time, so keep in mind to do that during off-peak hours. (I have a P4 3.06 GHz HT with 2 GB RAM, and when I did a backup of my 40 SiteWorx accounts in one shot, the load average and CPU load went through the "sky" for at least 30 minutes.)

Why don't you use a backup script such as rsnapshot, for example? I use it to back up every 6 hours (4 per day) and every day (keeping 7 days):
/etc
/home
/var
/usr

I also created a shell script that dumps all MySQL databases, and this script is launched by rsnapshot.
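That schedule is driven by cron; a sketch of what the root crontab entries might look like (the rsnapshot path and the exact minute fields are assumptions, so check where your distro installed it with `which rsnapshot`):

```shell
# Cron lines that would run rsnapshot every 6 hours plus once nightly,
# matching the "4 per day / 7 days" retention set in rsnapshot.conf.
# Stored in a variable here just so the sketch prints the lines:
crontab_lines='0 */6 * * *  /usr/bin/rsnapshot hourly
30 3 * * *   /usr/bin/rsnapshot daily'
echo "$crontab_lines"
```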

Pascal

Yes. In 1.9.2. :slight_smile:

Lol, so cool :slight_smile:

Ask for it and you’ll get it :-p

Pascal

Socheat,

What do you mean by “filename template” ?

--siteworx (set the path and filename to the default locations and filename template)

It could be great to provide an example :slight_smile:

Thanks a ton for your very good job, iworx guys
(lol, it's been a long time since I said this kind of thing :-p)

Pascal, I'm not running any production data on InterWorx as of yet. I'm still running all of my sites under Ensim.

Over on that server I use Webmin to back up all of my data, because Ensim is not reliable; this discussion is about whether I use Webmin on InterWorx, or use InterWorx itself.

I use Webmin's backup programs to back up all the data in /home, plus all the config files for Apache, SSH, ProFTPD, cron jobs, boot-up and shutdown, users and groups, etc., etc. (there are a lot of config files). All of that is dumped at midnight, and it does indeed consume CPU, although it does not come close to bringing the server to its knees.

And I use webmin to dump my sql databases once an hour.

These are all automated scripts that get done as yours are.

Why don't you use a backup script such as rsnapshot, for example? I use it to back up every 6 hours (4 per day) and every day (7 days): /etc /home /var /usr. I also created a shell script that dumps all MySQL databases, and this script is launched by rsnapshot.

That's what the discussion is about. I'm used to Webmin, and because of the lack of a scheduling ability in InterWorx (until 1.9.2, apparently), one of the first considerations before I start moving sites from the old server is backups. Thus I began this discussion, to see what all I had to back up with Webmin should I need to recover.

I think I’m just going to run a test, wish me luck:>)

Well , I wish you good luck :slight_smile:

For me, there are two kinds of backups: the box-admin backup and the user backup.

If you need an "all done" backup script to back up your system, then Webmin or rsnapshot or any other backup script should do the job you need.

The box-admin backup must take into consideration the fact that you'll need to restore all important data as soon as possible. In this case, RAID and a backup script are really enough. Take a look at http://www.rsnapshot.org/ as it is very simple to use and configure; indeed, you only have to edit the conf file to have it running.

Here is an SQL dump script integrated with rsnapshot (the one I use):


# The assumption is that this will be invoked from rsnapshot. Also, since it
# will run unattended, the user that runs rsnapshot (probably root) should have
# a .my.cnf file in their home directory that contains the password for the
# MySQL root user.
#
# This script simply needs to dump a file into the current working directory.
# rsnapshot handles everything else.
##########################################################
# backup the database
PATH2DB=/var/lib/mysql

RSLT=`ls -1 --ignore mysql.sock --ignore padawan.carat-hosting.com.err $PATH2DB`

for tocreate in $RSLT; do
        mysqldump -u root $tocreate > $tocreate.sql
        chmod 600 $tocreate.sql
done

With this and rsnapshot, you can easily create incremental backups at the frequency you need.

#########################################
# BACKUP INTERVALS
# Must be unique and in ascending order
# i.e. hourly, daily, weekly, etc.
#########################################

interval hourly 4
interval daily 7
#interval weekly 4
#interval monthly 3

This example means you'll have and keep 4 backups per day (every 6 hours), 7 days of backups, 4 weeks of backups, etc.

A good idea (I'm going to test this) would be to also integrate the iworx backup script into rsnapshot, to back up all SiteWorx accounts for example. This way you are sure the old files will be deleted automatically…
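If that works, the hook would presumably be an rsnapshot backup_script line pointing at a small wrapper that calls the iworx backup script per account. The wrapper name below is hypothetical, and note that fields in a real rsnapshot.conf must be separated by tabs, not spaces:

```shell
# Hypothetical rsnapshot.conf entry (tab-separated fields); the wrapper
# /usr/local/sbin/iworx-backup-all.sh is an assumed name for a script
# that would dump each SiteWorx account into rsnapshot's working dir:
entry='backup_script	/usr/local/sbin/iworx-backup-all.sh	iworx-backups/'
echo "$entry"
```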

This easily allows you to respect your SLA.

The InterWorx users may also create their own backup files, as they like, by using the InterWorx-CP functions :slight_smile:

Here is how my backups work.

Good luck

Pascal

Tested some of it

I think I’m just going to run a test, wish me luck:>)

OK, I logged on and did a full backup of /home with Webmin. Webmin basically just uses the dump command. This was backed up to a file called “backups”.

After dumping everything I renamed the /home directory to /homeb and re-booted.

Everything of course broke as expected.

I restored the backup from “backups” and re-booted.

Everything came back up just fine with everything intact.

Then I renamed one of my /home/sitename directories.

Verified the site was not available.

Restored /home/sitename from “backups”.

Site was available with no problems.

This is a very basic test, however it seems to work so far.
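For anyone wanting to repeat the test, the commands Webmin drives under the hood are dump/restore. A hedged sketch follows; the backup mount point and dump file name are assumptions, and the real commands are shown only as comments since they need root and a real filesystem:

```shell
# The commands behind the test described above:
#   dump -0 -f /mnt/backup/home.dump /home          # level-0 dump of /home
#   cd /home && restore -rf /mnt/backup/home.dump   # full restore of the dump
#   cd /home && restore -if /mnt/backup/home.dump   # interactive mode, to pull
#                                                   # back a single sitename/
dump_cmd="dump -0 -f /mnt/backup/home.dump /home"
echo "$dump_cmd"
```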

I’m thinking that if I found a rootkit on this server today, then after I got a fresh system, got interworx reinstalled and working and ready for sites, then I could restore

/etc
/home
/var
/root

And reboot and I’d pretty much be where I was at the time of the backup. Anyone have any comments?

Also, I noticed the SiteWorx backup I took went to /home/sitename/backups. Using that directory, what if something happened to /sitename? Perhaps in a drunken stupor I accidentally delete it; or perhaps, in a fit of rage at an old girlfriend who has a couple of sites on my server and is not very keen on paying me for them, I delete them; or perhaps a bad guy from 24 targets an EMP device at a specific site on my server.

I guess the question is: can I back up to a directory outside of my /sitename directory, for safety's sake? 'Cause if that's gone, the backups are gone with it…

Back when I started in the IT biz, one of my early jobs as a newbie systems programmer trainee was to check the backups each morning and ensure reruns. This consisted of checking the return code for each backup in the printouts of the backup jobs (this was an MVS system) and filing them away for safekeeping every day.

This was a very tedious job, with over 200 jobs that had to be checked. I focused on it very well early on, but after a few weeks at it, and with no errors showing up in quite a while, I got lazy, sort of going through the motions, checking some here and there.

Then came the day we had a couple of disk drive failures; these things always happened in twos, because the actual volumes were in tandem on a physical drum.

Needless to say, as luck would have it, one of the failures was on a drive that I had not checked very carefully. The backup had been failing for about a week, and we had to restore from a backup that was nearly two weeks old as a result.

Thus was the importance of backups beaten into my pea brain, and after the dang rootkit I've developed a backup phobia:>)

Mind you, I was prepared for a disk drive failure; the rootkit was a new twist, because it basically renders your backups useless unless you can make sure it does not live within any of the backup files, and many of them hook themselves into many places.

That’s why I’m taking pains to iron all of this out now…

Thanks for everyone’s input and advice…

I guess the question is: can I back up to a directory outside of my /sitename directory, for safety's sake? 'Cause if that's gone, the backups are gone with it…

I gather that this is actually something they are working on: allowing you to schedule and store your backups remotely, and later restore from said backups.

The fact that Socheat said 1.9.2, not 2.0, indicates that they are well on their way with this, with a deployment date sooner rather than later (they never give an ETA, so don't ask). Just give them time; they like to do things right the first time. :wink:

Cool, I love it when other people do all the work so I don't have to. That's what I thought would happen, as the iworx database is just a MySQL database stored under the /home/interworx directory.

I already asked this question (look up several posts in this thread), and Socheat was kind enough to give a detailed answer explaining how this can be done from the command line. I'm not sure if they will incorporate this into the 1.9.2 update so it can be done from NodeWorx. But the InterWorx team seems to program the same way I do: get something up and running that is good (and stable), then build on functionality and improve it. So the NodeWorx backup you see now will look very different from the one you see 6 months from now.

And I don't think you have to worry about the EMPs (yet). As far as I know, it can't be targeted to a SiteWorx account yet, just a 10-mile radius or so. I think it's safe to say if you lose your data, so will a lot of other people :smiley:

That's what I love about companies like this: they do it right the first time.

I'll be moving about 10 sites this weekend. I'm really not that worried about things at all; I've got the new server locked down tight with a firewall and all the appropriate settings, and I installed rkhunter the first day, which is scheduled to run, and I review my logs daily for intrusion attempts.

So I'm not really that worried about a rootkit again. Simple security measures, such as secret user names that are the only ones allowed to log on to the system, along with long-phrase passwords, will make all but the most dedicated intruders go away; the stuff on my server is just not that important.

I actually thought all along that it was likely a help desk employee from EV1 who did it, after I gave them access to resolve an issue one day, but who knows. I do know that they tried their best for days on end to get back in; I would see hundreds and hundreds of attempts a day for a while there.

All that is to say that I think it's unlikely I'll be experiencing a rootkit again, and thus won't have to go through a nightmare restore scenario again.

And I back up / to a mirror drive in case of a disk failure, so I'm pretty much covered there.

I'm not worried in the least, looking forward to the journey. I have to tell you, one of the things I really have fun with when encountering a company like Nexcess/InterWorx, and a great product like this, is the enjoyment I get seeing it mature and getting to use it as it does.

This company reminds me of pMachine, a company I have the utmost respect for, as ExpressionEngine is one of the best products I've EVER run across.

Thanks again for all of the insight and advice…

Glad to have you with us in our little cult . . . err group :wink:

If you have questions, give us a holler. Someone here will no doubt know the answer :slight_smile:

I actually thought all along that it was likely a help desk employee from EV1 who did it, after I gave them access to resolve an issue one day, but who knows. I do know that they tried their best for days on end to get back in; I would see hundreds and hundreds of attempts a day for a while there.

Be careful saying things like this on a public forum unless you have some sort of proof of something malicious from them. Libel also counts on the Internet, and this comes close to that line.