Just wondering if anybody has succeeded in this here. I’m talking about 2 servers in separate data centers. I’m thinking it would be something like…
rsync home dirs, /etc, qmail, vpopmail
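A sketch of what that sync might look like, assuming the standby is reachable over SSH — the hostname is a placeholder, and /var/qmail and /home/vpopmail are just the typical qmail/vpopmail locations:

```shell
#!/bin/sh
# One-way mirror from the master to the standby -- run from cron on
# the master only.  --delete mirrors removals as well, so users must
# never publish directly to the standby or their changes get wiped
# on the next sync.
DEST="root@backup.example.com"    # placeholder standby host
for dir in /home /etc /var/qmail /home/vpopmail; do
    rsync -az --delete -e ssh "$dir/" "$DEST:$dir/"
done
```

Note that this is strictly one-way: the standby is a mirror, not a peer.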
The only thing I’m not sure about is the iworx database. I haven’t read enough about MySQL replication yet to see if this is even possible. http://dev.mysql.com/doc/refman/5.0/en/replication.html
Is this something that can be done? Any foreseen problems?
MySQL replication works fine. We have had it in production for over a year without problems (not in an Interworx environment, though). The only thing to check every now and then is that the replication status is consistent on both nodes.
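For that status check, something like this on the slave works (MySQL 5.0-era commands; it assumes the master already has server-id and log-bin set in my.cnf and the slave was pointed at it with CHANGE MASTER TO, and that mysql can find credentials in ~/.my.cnf):

```shell
# Replication health check, run on the slave.
# Slave_IO_Running and Slave_SQL_Running should both say "Yes";
# Seconds_Behind_Master shows how far the slave is lagging.
mysql -e 'SHOW SLAVE STATUS\G' \
    | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
```

Easy to drop into a cron job that mails you if either field isn’t "Yes".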
Interested to see how you get on with this…
How would the DNS setup work for this?
mydomain.com A ip1
mydomain.com A ip2
Where ip2 would only be used if ip1 is unavailable. I know that tinydns supports multiple A records for a domain, but it will return them to clients in random order. That might be okay for static sites or lower-traffic dynamic sites, but my intention is to use a lower-end server as the “sidekick” with some additional features to support the main, higher-end server, and as a failover solution should anything happen to the main server/DC, or if the main server ever needed maintenance, hardware upgrades, replacements, etc.
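For reference, the multiple-A-record setup in tinydns’s data file would look something like this (addresses and TTL are placeholders):

```
# tinydns data file fragment: two A records for the same name.
# tinydns hands back the addresses in random order, so this is
# round-robin, not failover -- ip2 will get real traffic.
# A short TTL (300s here) at least lets a record you remove
# after a failure drop out of caches quickly.
+mydomain.com:10.0.0.1:300
+mydomain.com:10.0.0.2:300
```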
I’ve read a good page about it here: http://www.tenereillo.com/GSLBPageOfShame.htm … but not sure how to set this up with tinydns.
The document you reference is about why DNS based global server load balancing does not work. While what you want sounds good in theory, I think you’ll encounter some difficulties in implementation if this is for general hosting. (If you are doing this for your own sites only, then it might work out).
Some issues I would consider:
Session persistence. Any scripts that rely on sessions or cookies tied to one server will not work reliably in your proposed setup, since consecutive requests from the same visitor may land on different servers.
Database access can also be tricky if you’re writing to the database in multiple locations.
How do you control which server your customers publish their pages to? If you’re using rsync for example, they could only publish to the “master” server, since publishing to the slave would cause their files to be overwritten.
Scalability. The more sites you host, the harder this will be to scale as you’ll have more data to sync, and sites will be inconsistent in between syncs.
DNS. If you only want the secondary site’s IP address to appear when the first is down, you’ll have to find a way to update DNS after the failure is detected. That is also a very tricky issue, since you’ll need a heartbeat check between the two servers. You could end up in a split-brain situation where network connectivity between your two servers is bad (but in fact both server networks are up), and both will try to take control.
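To make the heartbeat problem concrete, here’s a minimal sketch of the decision logic — everything in it is hypothetical (the probe URL, the hostname, the threshold). Requiring several consecutive failures before acting, and ideally probing from a third location, is about the only defense against flapping and the split-brain case described above:

```shell
#!/bin/sh
# Failover decision sketch, meant to run periodically on the standby.
# PROBE is a placeholder health check against the primary.
PROBE=${PROBE:-"curl -fsS --max-time 5 http://primary.example.com/ping"}
FAILS_NEEDED=3   # consecutive failures before declaring the primary dead
fails=0
STATE=ok

check_primary() {
    if $PROBE >/dev/null 2>&1; then
        fails=0
        STATE=ok
    else
        fails=$((fails + 1))
        if [ "$fails" -ge "$FAILS_NEEDED" ]; then
            STATE=failover   # here: update DNS / promote the standby
        else
            STATE=ok
        fi
    fi
}
```

Even with the threshold, a netsplit between the two data centers can still leave both boxes convinced they are the master — the script only narrows the window, it doesn’t close it.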
Yeh, right now I’m just exploring my options. I know there isn’t an absolute way to achieve 100% uptime but I’d like to get a good failover solution. I only host my design clients and I currently work a full-time job separate from running a web server and developing sites, so I can’t really devote a full week to recovery if I had to.
BTW, RAID makes me nervous when it’s a rented server at a data center in another state. I’m just not comfortable with it unless I can get my hands on it. Call me crazy, but I’m just more comfortable with a second server. It’s easier to test and debug a whole-server failure than a hard drive failure (which I’m not even sure I could simulate remotely).
Something like http://www.autofailover.com/Features/Autofailover.htm or http://www.dnsmadeeasy.com/s0306/prod/dnsfosm.html sounds interesting, although webtrader’s points and the GSLB document keep me looking. I may still use one of those as a third layer of failover, on top of some failover option at the current server/data center.