Currently, InterWorx works by having one cluster manager load-balance DNS, web, mail, database, and other services out to the other nodes.
However, this creates a single point of failure within the cluster. Since InterWorx uses LVS, is there a way to make InterWorx a true high-availability solution?
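For context, the LVS part is just the kernel's IP virtual server table living on that one cluster manager - something along these lines (the VIP and real-server IPs here are made up for illustration):

```shell
# Hypothetical addresses: 203.0.113.10 is the cluster VIP,
# 10.0.0.11 and 10.0.0.12 are the web nodes behind it.
ipvsadm -A -t 203.0.113.10:80 -s rr                 # add virtual HTTP service, round-robin
ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.11:80 -g    # real server 1, direct routing
ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.12:80 -g    # real server 2, direct routing
```

If the box holding that table dies, the whole cluster's traffic dies with it - hence the single point of failure.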
I’ve seen it working with redundant cluster managers - I believe they use DRBD to keep the data between the two in sync. I’ve been eagerly awaiting the release of HA clustering, but so far no ETA as far as I’m aware.
Does anyone know what data needs to be synced? Using DRBD or NFS + heartbeat/corosync/whatever shouldn’t be too difficult, assuming there aren’t hundreds of small files scattered around the filesystem that are required. Of course, in this case “HA” would be a relative term - I’m sure existing sessions would all die, but new sessions being available within a minute, versus real downtime, is close enough for my purposes.
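For anyone who hasn’t touched DRBD before, the replication side is a single resource file on both nodes. A minimal sketch - hostnames, backing partitions, and IPs are all assumptions here, not anything InterWorx-specific:

```
# /etc/drbd.d/iworx.res - hypothetical two-node resource
resource iworx {
  protocol C;               # synchronous replication: writes hit both nodes
  on cm1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;    # backing partition (assumption)
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on cm2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

Heartbeat/corosync then just mounts /dev/drbd0 and takes over the service IP on whichever node is primary - which is why the real question is what directories actually need to live on that device.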
Actually, that appears to be exactly what they’re doing in that video: using OpenAIS, DRBD, and three partitions - home, iworx, and mysql. Assuming you can spin MySQL off to a remote host, and that you’re already using NFS to store /home, that really leaves a single “iworx” partition to worry about - at a glance, at least.
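If it really does come down to one DRBD-backed “iworx” partition plus a floating IP, the Pacemaker side (which is what OpenAIS-based clusters typically run on top of) would look roughly like this crm snippet. Every name, path, and address here is a guess for illustration - I haven’t seen InterWorx’s actual config:

```
# crm configure sketch - one DRBD-backed filesystem plus a cluster IP.
# Resource names, mount point, and IP are hypothetical.
primitive drbd_iworx ocf:linbit:drbd \
    params drbd_resource="iworx" \
    op monitor interval="30s"
ms ms_drbd_iworx drbd_iworx \
    meta master-max="1" clone-max="2" notify="true"
primitive fs_iworx ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/usr/local/interworx" fstype="ext3"
primitive cluster_ip ocf:heartbeat:IPaddr2 \
    params ip="10.0.0.100" cidr_netmask="24"
group iworx_group fs_iworx cluster_ip
colocation fs_on_master inf: iworx_group ms_drbd_iworx:Master
order fs_after_drbd inf: ms_drbd_iworx:promote iworx_group:start
```

The colocation/order constraints are the important bit: the filesystem and IP only ever run on whichever node DRBD has promoted to primary.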
It’s been several months since I’ve messed with HA Linux - I’ll obviously have to get my feet wet again real soon.