Planning a new cluster config with Interworx.. help

Hi… first I would like to start by saying I am very impressed by what I am seeing here. I’m already colocated with my own cPanel servers and would like to migrate them to InterWorx so that they are all clustered… but besides that I am also planning an expansion of services which will involve purchasing 50 servers over the next month or two.

Because of the massive amounts of traffic and server resources required, I would love to be able to cluster all 50 of these servers into a single cluster. Services would include all the typical hosted services, nothing unusual. I have been reading through the forums and couldn’t find much in the way of current information about the capabilities of the InterWorx solution.

I am thinking of something along the lines of having all 50 servers set up with Pentium D CPUs, 2GB RAM, and 160GB SATA II HDs, then having the cluster manager use a RAID 10 5TB NAS storage solution.

I am a little concerned about the single NAS storage as a point of failure. Would it be more prudent to have the network divided into 5 groups of 10, with each group having its own NAS? Or would it be possible to have a dual NAS solution of some kind? I know it’s RAID 10 and all, but all it takes is a board blowing and everything goes, even though there are tons of drives.

Any kind of feedback on this would be much appreciated. Especially from Interworx :slight_smile:

Besides this… if anybody has any tips on how to migrate 9 cPanel servers to InterWorx in a cluster config, it would also be appreciated. Is it just a matter of setting up a cluster manager with one node, then just killing each server and moving its IPs over to the cluster manager?

Thanks!

Jonathan



I had similar plans but decided to use InterWorx for new sites going forward rather than migrating sites from cPanel to it, because the migration would have created a huge amount of trouble. Things like MySQL cease to work while migrating the data, because the connection strings must be reset.

Other than that, I’d definitely break your 50 servers into several smaller clusters, as Chris pointed out to me that Apache load times get affected as you load up a cluster with sites.

50 Pentium D’s must mean you have like 10,000+ sites… that’s a big migration.

For the record, the other reasons I didn’t migrate were actually a bigger deal:

  1. No FrontPage support.
  2. Email doesn’t get transferred.

Just to clarify: Email accounts get created but, as of InterWorx-CP 2.1.2, the contents of their mailboxes do not get transferred. This feature will be added in a future release.

Socheat

new release

Socheat,
Is there any idea when this new update would be out? Because I would really like to be able to transfer mail before we completely switch over to InterWorx. Also, any update on FrontPage? That’s another major holdback.

cPanel email maildir importing won’t be in this next release, and while fleshing out the cPanel importer (and all the others) is high on our to-do list, we have yet to decide which release it will be in.

Unfortunately, we have no current plans to support FrontPage.

Socheat

I was thinking of approximately 200-500 sites per server on average. So in a cluster of 10 there would be 2,000-5,000 sites being served within that unit. So I’m looking at 5 separate clusters, each served by its own NAS. A little more rack space than I was hoping to use… but oh well :slight_smile:
I am understanding from these replies that the issues with migrating would be FP and email. So it’s advisable to hold off on a move… fair enough… perhaps I should just look at setting up new servers with InterWorx and deal with moving each client over slowly, one by one. I was kinda hoping the mass import worked as indicated in the sales letter :)

I don’t care much about FrontPage… I don’t think any of my clients are really using it.

Thanks for your feedback! Has anybody actually used the cluster setup in a production environment and have something to say about it?

Jonathan



How are IPs handled in this cluster config? If I have 100 IPs per server and one server goes down, how do the IPs get handled? Or are all the servers essentially sharing the same IP aliases?

Jonathan

If you use the Direct Routing technique, which requires the servers to be in the same physical network segment (no problem at most DCs!), the load balancer is your key point of failure, as it binds to the IPs that clients connect to.

See http://www.linuxvirtualserver.org/VS-DRouting.html for an explanation.

And InterWorx uses the direct routing technique.

I think they are also working on an HA solution (not sure, but I think so).

Pascal

We use direct routing, as has been said above. All of the nodes have the hosting IPs mapped to them, but only the CM answers ARP requests for those IPs, so if it goes down then the whole shebang is down. If any of the cluster nodes go down, things are “ok” except that the LB will still pipe traffic to the downed node in the current version.
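For reference, the ARP behavior described above is the classic LVS direct-routing arrangement: each real server binds the shared IPs on a loopback alias but is told not to answer ARP for them, so only the load balancer responds on the segment. A rough sketch of what that looks like on a node (the interface and IP here are illustrative only, and the panel’s installer would normally handle this for you):

```shell
# On each cluster node, assuming the shared hosting IP is
# 192.0.2.10 -- an illustrative address, not a real config.

# Bind the shared IP on a loopback alias so the node accepts
# traffic addressed to it once the LB forwards the frame.
ip addr add 192.0.2.10/32 dev lo

# Suppress ARP replies for loopback-bound IPs so that only the
# load balancer (the CM) answers ARP requests on the segment.
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

This is why the CM going down takes everything with it: it is the only machine on the wire that will claim those IPs.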

We are working on a HA solution so stay tuned :slight_smile:

Chris

So basically this means all the IPs are routed directly to the CM, and then the CM forwards the requests to the appropriate servers. I understand that the CM is a point of failure in the entire setup. However, if you have a decent RAID 5/6 server with some backup, it’s an acceptable risk.

Are you saying that if a cluster server goes down, traffic will continue to be sent to it? If I understand that right, then any client sites on IPs that route to a server that went down will stay down, because the CM will effectively just keep trying to route them there.

If this is the case, when will this be overcome? Or am I not understanding you correctly.

I am looking at getting started very shortly and would like to know I am getting what is advertised.

Thanks

Jonathan

Clustering

Since I received an email this morning from one of my customers saying that he posted a comment about my company in the Beer Garden area, I thought I would jump in here and lend a hand.

If a node goes down in a cluster, the sites will not go down, as the CM has all the site data and so does every other node in the cluster. That’s what a cluster is: all machines have the same data.

No matter what setup you use, the CM is always your single point of failure. What InterWorx offers is a real bargain. We have spent $7,000 for a Coyote Point cluster setup in the past. Take a look at the graphic at the bottom of this page: http://www.coyotepoint.com/e250si.htm
What they are telling you is that if you want redundancy you will have to buy a second one, so now the price is $14,000. Other companies such as F5 charge far more than this.

Run RAID in the CM, and if you want extra protection, use a server with redundant power supplies. That’s about as good as it gets.

Regards
Larry


If a node goes down, does the CM redirect to a working one, or do users still get directed to that node and receive an error?

I agree about the cost vs. load balancers. For small-midsize hosting/applications I think Interworx offers a great value as a “plug n’ play” control panel plus load balancing.

With any load-balancing setup there are normally a couple of schemes that can be used. The popular ones are round robin and least connections. Least connections directs traffic to the node with the fewest connections. If a node goes down, the CM stops sending any traffic to that node, but the websites will still function as normal, as they are serviced by the other nodes or by the CM directly. The CM will still try to contact the failed node just to see if it comes back up.
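The least-connections behavior described above can be sketched as a toy model (this is not InterWorx’s actual implementation; node names and counts are made up for illustration):

```python
# Toy least-connections scheduler: pick the live node with the
# fewest active connections, skipping any node marked down.

class Node:
    def __init__(self, name):
        self.name = name
        self.connections = 0
        self.alive = True

def pick_least_connections(nodes):
    live = [n for n in nodes if n.alive]
    if not live:
        raise RuntimeError("no live nodes")
    return min(live, key=lambda n: n.connections)

nodes = [Node("node1"), Node("node2"), Node("node3")]
nodes[0].connections = 5
nodes[1].connections = 2
nodes[2].connections = 9

target = pick_least_connections(nodes)  # node2: fewest connections
target.connections += 1

nodes[1].alive = False                  # simulate node2 failing
backup = pick_least_connections(nodes)  # traffic shifts to node1
```

The key property is in the `live` filter: a downed node simply drops out of the candidate set, so clients never get routed to it.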

Somebody from InterWorx will have to comment on their round-robin setup, as I have not used it. If you have ever worked with round-robin DNS, it works as follows: the DNS server directs traffic to a number of servers in turn. Computer A gets a request, then computer B, and so on. The downfall of round-robin DNS is that the DNS server has no way of knowing whether one of the nodes is down, as it is simply directing traffic to IP addresses in a certain order. I would think that the CM would be much smarter than the old round-robin DNS setup, but again the InterWorx folks will have to clarify.

If a node goes down in the current version, the CM will continue to push traffic to it, as I’ve said in many other posts on the subject. We’ll be changing this along with many clustering features in future versions.

Chris