Any easyish / automated way to keep a backup server, mostly offline, in sync with production for failover

My goal is to have a relatively inexpensive backup cloud / virtual server at a different provider from production, in case of extended downtime on the production server. My idea is to update this backup server roughly once every two weeks (give or take); outside of that syncing window it would be spun down, so the cost would be drastically reduced. Done this way it wouldn’t have the latest version of a website available, but it would be better than having the website offline. Switching DNS would be a bit of a pain, but if I use Cloudflare for all my sites it could make for a quick switch (and I think Cloudflare has a way to automate DNS changes based on whether a server is up or down).

One way I was thinking might work, and would get around the DNS issue, is InterWorx Clustering. I’m not sure whether there is a way to utilize InterWorx Clustering to achieve my goals? I’m guessing it would require a cheap, always-on cloud server acting just as the cluster manager. The primary server would then run through that as a node, and I could spin up my backup server every two weeks to let it sync from the primary server?

The other idea was that maybe I could set up some kind of rsync between the production and backup servers. Basically spin up the backup server, stop the database services, and then just rsync the MySQL and home directories?
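A minimal sketch of that rsync idea, assuming the databases are dumped with mysqldump rather than copied as raw files (rsyncing the live MySQL data directory can produce an inconsistent copy while mysqld is running). Host names and paths here are placeholders, not anything specific to this setup:

```shell
#!/bin/sh
# Sketch of the rsync approach. All names and paths are placeholders.

# Mirror one directory tree into another (destination may be a local path
# or user@host:path). -a preserves permissions/ownership/timestamps,
# -z compresses in transit, and --delete removes files on the destination
# that no longer exist on the source.
mirror() {
    rsync -az --delete "$1" "$2"
}

# Dump all databases to a single file. --single-transaction gives a
# consistent snapshot of InnoDB tables without locking everything.
dump_databases() {
    mysqldump --all-databases --single-transaction > "$1"
}
```

On the production side you would then run something like `dump_databases /root/all-dbs.sql`, `mirror /root/all-dbs.sql root@backup.example.com:/root/`, and `mirror /home/ root@backup.example.com:/home/` while the backup server is spun up.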


InterWorx clustering is mostly just for load balancing. I don’t believe it will work the way you’re describing, because no site data outside of the vhosts is really stored on the nodes. The nodes are just NFS shares that refer back to the cluster manager; all of the actual site data is stored on the manager. I’m also uncertain about the ramifications of a node constantly going on and offline. A LOT of commands would pile up for the node to process between gaps, and I’ve seen that cause issues on nodes in the past, with the potential for race conditions or commands being processed out of order. And if the cluster manager (main server) goes down, so does everything else.

You could potentially set up a backup script on your main server, run from a cron job every two weeks or so, that transfers the backups to the backup server. You would just have to make sure that the backup server is on during that window.
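One way the cron wiring could look, as a sketch. Note that cron has no true "every two weeks": day-of-month steps reset at each month boundary, so a `1,15,29` schedule is about as close as it gets. The host, paths, and script name are placeholders:

```shell
#!/bin/sh
# /usr/local/sbin/push-backups.sh (sketch). A matching crontab entry
# firing at 03:00 on the 1st, 15th, and 29th might look like:
#
#   0 3 1,15,29 * * root /usr/local/sbin/push-backups.sh
#
# BACKUP_DIR and DEST are placeholders for this environment.

BACKUP_DIR="/home/backups"
DEST="root@backup.example.com:/home/backups/inbox/"

# List backup archives modified within the last N days, so only backups
# created since the previous sync get transferred.
recent_backups() {
    find "$1" -name '*.tar.gz' -mtime "-$2"
}

push_backups() {
    for f in $(recent_backups "$BACKUP_DIR" 15); do
        scp "$f" "$DEST"
    done
}
```

The 15-day window overlaps the two-week schedule slightly, which trades a little duplicate transfer for not missing anything if a run starts late.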

Or you could just set up the script and then trigger it manually when you have the backup server up and active.

Here is some information on creating backup scripts: How to: Backup and Restore SiteWorx Accounts from NodeWorx and the CLI — InterWorx documentation

This will just create backups and move them, though. It won’t automatically import them. If you want an easy “this domain’s information is on this secondary server as an active and existing SiteWorx account and I just need to switch DNS” system, the script may not be useful, but if you want to have a quick way to create the account from a backup, it may be.


Thanks for your thoughts, Jenna. This seems like a good plan. With the scripts, I guess I could write one on each end: one to create the backup and send it over SSH, and one on the backup server to scan a directory for new backups, delete the existing SiteWorx account, and import the new backup.
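The receiving side could be sketched roughly like this. The actual SiteWorx import invocation depends on the InterWorx version and is documented in the guide linked above, so `IMPORT_CMD` below is deliberately a placeholder, as are the directory paths:

```shell
#!/bin/sh
# Sketch of the backup-server side: process each new archive in an inbox
# directory exactly once. IMPORT_CMD is a placeholder -- substitute the
# real SiteWorx restore command from the InterWorx documentation.

INBOX="/backups/inbox"
DONE="/backups/done"
IMPORT_CMD="echo importing"   # placeholder, NOT the real import command

process_inbox() {
    for archive in "$INBOX"/*.tar.gz; do
        [ -e "$archive" ] || continue       # glob matched nothing
        $IMPORT_CMD "$archive" || continue  # leave failed archives in place
        mv "$archive" "$DONE"/              # mark as processed
    done
}
```

Moving processed archives into a separate directory keeps the script idempotent, so it is safe to run it from cron or by hand whenever the backup server happens to be up.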

Or, like you said, I could just leave them sitting on the backup server and do an import if I need them, which would just add a little time.

Both approaches seem workable to me. Thanks for sharing your ideas.