Has anyone managed to add any sort of HA/redundancy to InterWorx unofficially? I know they are working on official support, but I'm looking for possible solutions now.
MySQL will be a remote cluster, DNS will be offloaded (Amazon Route 53), and storage will be a GlusterFS array, NFS-mounted on each node and on the manager.
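For reference, binding the GlusterFS volume to each node over NFS usually comes down to a single mount entry; a sketch, where the server hostname, volume name, and mount point are all placeholders for whatever your array uses:

```
# /etc/fstab entry on each cluster node and the manager
# (storage1.example.com and "webvol" are placeholder names)
storage1.example.com:/webvol  /home  nfs  defaults,_netdev  0 0
```

The `_netdev` option delays the mount until networking is up; the GlusterFS native FUSE client is an alternative to NFS if client-side failover between storage bricks is wanted.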
I just need a way to sync cluster manager actions to other cluster manager instances. Failing that, I could set up a single cluster manager to handle the cluster node synchronization and run multiple external load balancers in place of the cluster manager's built-in LVS, meaning no single point of failure.
Any ideas/advice welcome.
Well, you can already run external database servers with InterWorx; there is a setting for it in the MySQL section of NodeWorx, though I don't think it supports limiting database sizes. As for making the cluster manager redundant… I think you will just have to wait for InterWorx to release theirs.
Limiting database sizes won't be too much of a problem; I can code something to keep tabs on that.
Regarding the redundancy, I'm aiming for no single point of failure. I'm not too fussed about running two cluster managers in active/active right now (it would be nice in the future, though). I'm planning to run my own load balancers, so if the manager goes down I can keep serving content. It seems InterWorx plants the cluster manager IP within the Apache vhost configs etc., which could be a pain. Hopefully, if I set up IP tunnelling to my load balancers and enable those IPs in the manager, they will be included automatically; otherwise I can knock something up to monitor the vhost files and add my load balancer IPs to each vhost block.
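That last fallback can be sketched as a small rewrite pass over each vhost file: Apache accepts multiple addr:port pairs on one VirtualHost line, so the script only has to extend the opening tag. This is a minimal sketch (the load balancer IPs shown are made-up placeholders, and the real InterWorx vhost layout may need more handling):

```python
import re

def add_lb_ips(vhost_text, lb_ips):
    """Return vhost_text with every <VirtualHost ...> opening tag extended
    so the load balancer addresses are also bound. Apache allows several
    addr:port pairs on one VirtualHost line, so traffic arriving via the
    load balancers matches the same vhost as traffic via the manager IP."""
    def extend(match):
        existing = match.group(1).split()
        # Reuse the port of the first bound address (e.g. 80 or 443).
        port = existing[0].rsplit(":", 1)[1] if ":" in existing[0] else "80"
        for ip in lb_ips:
            pair = f"{ip}:{port}"
            if pair not in existing:
                existing.append(pair)
        return "<VirtualHost " + " ".join(existing) + ">"
    return re.sub(r"<VirtualHost ([^>]+)>", extend, vhost_text)
```

A cron job (or an inotify watcher) could run this over the vhost directory and reload Apache only when a file actually changed.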
I hope you don't mind, but I've been thinking about this since I read it, and I think you may be better off waiting for InterWorx to produce the supported version.
The reason being it will most likely be easier to use and update, faster to implement, and will preserve session state as far as possible.
However, some HA/failover tooling is already available, such as Pacemaker. I'm confused about your load balancers, though, and how they will handle load when the cluster manager already handles this; sorry if I've misunderstood.
I suppose my ideal HA/load balancing would be instigated at the DNS level. For instance, with PowerDNS I believe someone has already scripted failover: should a particular A record's host not respond, the A record is changed to a different one, hence failing over to a different node. The downside, I suppose, is that if everyone did this it would add load at the DNS level, but thankfully PowerDNS handles an awful lot of queries; there is still the issue of DNS caching, though.
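The health-check half of that DNS failover idea is simple to sketch: probe each candidate backend and repoint the A record at the first one that answers. A minimal sketch, assuming a plain TCP probe (the IPs below are documentation placeholders, and actually updating the record would go through whatever your PowerDNS setup exposes, e.g. its HTTP API or backend database):

```python
import socket

def probe(ip, port=80, timeout=2.0):
    """Return True if a TCP connection to ip:port succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_a_record(candidates, healthy=probe):
    """Return the first candidate IP that passes the health check, i.e.
    the address the A record should currently point at, or None if every
    backend is down."""
    for ip in candidates:
        if healthy(ip):
            return ip
    return None
```

Keeping the record's TTL low (e.g. 60 seconds) limits how long stale caches keep sending traffic to a dead node, which is the caching issue mentioned above.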
I'm also confused about why you can't have a floating IP between the two manager nodes acting as load balancers, and keep traffic at the cluster manager level internal, thereby saving making it complicated. Sorry if I'm not understanding correctly.
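For completeness, a floating IP between two nodes in the same network is usually done with VRRP; a minimal keepalived sketch, where the interface name, password, and addresses are all placeholders:

```
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the second node
    interface eth0
    virtual_router_id 51
    priority 100            # lower value on the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        203.0.113.10/24     # the floating IP clients connect to
    }
}
```

As noted later in the thread, though, VRRP only works within one layer-2 network, so it does not help once the nodes live in different datacentres.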
Lastly, I think that if you restrict the databases and one database runs out of space, it will bring the site down. I believe InterWorx counts database size against the space allocated to the SiteWorx account, because if the SiteWorx account's space becomes full, again the website stops. I could be entirely wrong; apologies if I am or if I've misunderstood.
I would love to wait for official support for this, but I could end up waiting a long time. I'm also not sure whether the solution InterWorx is working on will be just a failover solution or a proper active/active one. Maybe they could clarify this?
I know the cluster manager handles the load with LVS, but I want to get rid of that single point of failure. I want to be able to host several load balancers/nodes in different datacentres, so floating IPs are not really an option.
Session state is not too much of a problem; I can make sessions MySQL- or even memcache-based.
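For what it's worth, moving PHP sessions off local disk can be as small a change as this php.ini fragment, assuming the memcached extension is installed (the server address is a placeholder):

```
; Store PHP sessions in memcached instead of local files,
; so any web node can serve any request.
session.save_handler = memcached
session.save_path = "memcached1.example.com:11211"
```

A MySQL-backed handler achieves the same node-independence but needs a custom session handler registered in the application.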
I want to be able to offer latency-based/route-optimized web serving. If I were to host the DNS myself, I would most likely end up spending more on infrastructure than by offloading to a managed DNS provider that already supports this.
The MySQL cluster will be MariaDB-based and externally hosted. If space becomes an issue, I can either code some hooks to keep tabs on it automatically and suspend the account (or take other action) if required, or move the MySQL cluster onto bigger and better hardware with minimal disruption to services. I could even offer dedicated MySQL cluster solutions to clients.
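The size-monitoring hook can be reduced to one query plus a threshold check. A sketch, assuming per-database sizes gathered from `information_schema` (the query shown in the comment is standard MySQL/MariaDB; the quota values and database names are made up):

```python
# Per-database sizes would come from a query such as:
#   SELECT table_schema, SUM(data_length + index_length)
#   FROM information_schema.tables
#   GROUP BY table_schema;
# The dict below stands in for that result set.

def over_quota(db_sizes_bytes, quota_bytes):
    """Return the names of databases whose on-disk size exceeds the quota,
    so a hook can alert, suspend the account, or take other action."""
    return [db for db, size in db_sizes_bytes.items() if size > quota_bytes]
```

Run from cron, the hook would feed the query results in and act on whatever `over_quota` returns.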
Lastly, no need to apologize; even I get lost in the jargon. It will take a lot of trial and error to get the functionality I want/require, but I believe it will be worth it in the end: the ability to keep serving content if and when the cluster manager, along with LVS, goes down. Of course, the easy way would be to stick the cluster manager on an HA cloud instance to minimize downtime, but that route makes putting nodes in different datacentres pointless, as all traffic still has to go via the cluster manager.
If you have any more thoughts, please don't hesitate to share them.
Many thanks. I try not to offend people, and perhaps some of my responses may not be relevant.
I'm still not entirely sure of the full extent of what you're trying to achieve, as it appears to be expanding, but I understand the goal, and perhaps Cloudflare may help; it is incorporated within InterWorx.
I hope no one minds, but here are two links (for which I take no responsibility as to accuracy; anyone who implements either does so at their own risk) which may be of a little help, or give more ideas on how best to attain your solution. (I'm pretty sure you have already seen these, though, so apologies if you have.)
I hope this helps.