Possible cluster configuration for storage usage - need help

I just submitted a ticket about this, or actually I added it to an existing ticket to see if they might be able to help. I wanted to start a thread on it as well, though, to get a broader range of input.

I have 3 servers set up. The manager is an octa-core with 32 GB RAM and 2 x 3TB in software RAID 1; I want to use it for shared / WordPress hosting.
The second box is a 24-processor machine with 8 GB RAM and only 250 GB of storage, which I added purely for load balancing. The third, which I am about to add, is an E3-1230 v2 quad core (8 threads) with 32 GB RAM and 2 x 2TB in software RAID 1. I ultimately want to use this third box for reseller hosting.

Now what I am trying to figure out is the storage. In the docs it seems almost like you have to select a single point of storage. First, I want to see if there is a way to set things up so that new resellers from here on out (I added a reseller for my own sites, which I would keep on the first box) are stored on the third box, even if I need to symlink it or whatever so it shows up as home2 or something. In my head this seems like a pretty hard task, but I am not a Linux admin, more of a general support tech.
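
Just to make it concrete, this is roughly what I am picturing, though I honestly don't know if it is the right way to go about it. Everything here is made up for illustration (hostnames, paths, the account name) and none of it is a built-in InterWorx feature: box 3 would share its disk over NFS, box 1 would mount it as /home2, and new reseller homes would be created there with a symlink left in /home so everything still appears to live in one place.

#!/usr/bin/env python3
"""Rough sketch only: make box 3's disk appear on box 1 as /home2.

Assumptions (not InterWorx features): box 3 exports /home over NFS as
10.0.0.3:/home, and new reseller accounts get their home directories
created under /home2 with a symlink left behind in /home.
"""
import os
import subprocess

BOX3_EXPORT = "10.0.0.3:/home"   # hypothetical NFS export on box 3
MOUNT_POINT = "/home2"           # second storage root on box 1

def mount_second_home() -> None:
    """Mount box 3's exported filesystem at /home2 on the cluster manager."""
    os.makedirs(MOUNT_POINT, exist_ok=True)
    subprocess.run(["mount", "-t", "nfs", BOX3_EXPORT, MOUNT_POINT], check=True)

def link_reseller_home(username: str) -> None:
    """Create the reseller's home under /home2 and symlink it from /home."""
    real_home = os.path.join(MOUNT_POINT, username)
    os.makedirs(real_home, exist_ok=True)
    link = os.path.join("/home", username)
    if not os.path.islink(link) and not os.path.exists(link):
        os.symlink(real_home, link)

if __name__ == "__main__":
    mount_second_home()
    link_reseller_home("examplereseller")   # placeholder account name

What I can't tell is whether the panel and the cluster would be happy with account homes sitting behind a symlink like that, which is really what I am asking.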

If that isn't possible, I would then like to look at making the third box's storage an extension of box 1's, so the two pool together into 5 TB total.
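
The only way I can think of to do that pooling is some kind of network filesystem gluing the two disks together, for example a GlusterFS distribute volume with one brick on each box. Again, this is just a sketch with made-up hostnames, brick paths, and a volume name; it is not anything InterWorx provides, and the gluster commands should be double-checked against the GlusterFS documentation before anyone relies on them.

#!/usr/bin/env python3
"""Rough sketch: pool box 1's and box 3's disks with a GlusterFS
distribute volume so /home sees the combined capacity.

Hostnames, brick paths, and the volume name are all placeholders.
"""
import subprocess

BRICKS = ["box1:/bricks/home", "box3:/bricks/home"]  # one brick per server
VOLUME = "homepool"                                  # hypothetical volume name

def create_pooled_volume() -> None:
    """Create and start a distribute volume that adds the bricks' capacity together."""
    subprocess.run(["gluster", "volume", "create", VOLUME, *BRICKS], check=True)
    subprocess.run(["gluster", "volume", "start", VOLUME], check=True)

def mount_pool(mount_point: str = "/home") -> None:
    """Mount the pooled volume where the panel expects account homes to live."""
    subprocess.run(["mount", "-t", "glusterfs", f"box1:/{VOLUME}", mount_point],
                   check=True)

if __name__ == "__main__":
    create_pooled_volume()
    mount_pool()

I realise that with a plain distribute volume every account would then depend on both boxes being up, so this might just be trading one problem for another.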

Anyone have any suggestions or ideas?

OK, let me see if I've got this correct.
Box 1:
Strong performance all around. Depending on the expansion slots available, I would probably look into RAID 10, 5, or 7. You want your cluster master to have good HDD I/O, since it will handle a lot of reads and writes, and that is hard on HDD durability, so you want some redundancy in case of a drive failure.

Box 2:
More or less a perfect cluster slave, though I would prefer 2 GB of RAM per core. Giving it 1-2 sister machines wouldn't hurt; if one goes down, the others take up the slack. RAID is unimportant here, since you can quickly replace the drive and spin up a new machine if a slave suffers an HDD failure, and the other slave(s) will cover the service while you work.

Box 3:
This one is something of the joker of the group. If I were you, I would fill it with HDDs and pick up a license of Server Backup from Idera. I wouldn't host any services on it; just have it manage and store your backups.

Your cluster of boxes 1 & 2 can handle both the shared and reseller hosting, especially if you have 2 or more of 'box 2'. 200+ customers can easily live on 500 GB of storage; in my experience, most accounts rarely break 200 MB. So your real issue will be I/O, and the fact that all your eggs are in one basket.
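
Just as a back-of-the-envelope check of those figures (the 200 MB average is my own rough number, nothing scientific):

# Back-of-the-envelope: how many "typical" accounts fit in 500 GB?
avg_account_mb = 200                       # assumed average footprint per account
storage_gb = 500
accounts_that_fit = storage_gb * 1024 // avg_account_mb
print(accounts_that_fit)                   # -> 2560, so 200+ customers is comfortable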

I don't know what budget you have, or whether you have any spare change to make the changes I proposed. You could always start with things as they are now and add the extra hardware when you have the cash for it.

response

The thing is, the reason I wanted the accounts housed like that (shared / WP on box 1 and reseller on box 3) is that box 1 is limited to only a few IPs while box 3 can have plenty, and I figure resellers are more likely to want an IP of their own. That is why I wanted to set it up that way.

Also, the RAID arrays are software based, which I am really not too thrilled about, but it'll be okay. Given that, I don't think going beyond a mirror / RAID 1 array would be a great idea, especially if I get hundreds of customers and more system resources end up being spent just running the RAID itself, you know?

Hi

I hope you don't mind, but I'm not understanding why box 1 can only have a few IPs whilst box 3 can have plenty. In a cluster, I think the cluster manager handles all IP assignments, so the way I see it the IPs belong to the group rather than to any single server. If your servers will be housed separately from each other in the datacentre, you could add additional network cards and bring in IPs from different nodes/gateways, which we have done.

I think Evanion's advice is a good setup, for many reasons. For instance, have you considered how you are going to manage backups in the event of a failure? Also, and I suppose I may be an exception, but some of our hosted accounts are very large, anywhere between 50 and 200 GB, though of course they pay extra for that.

I'm thinking, if you still want it set up the way you first posted, would it not be easier to have boxes 1 and 2 as a cluster and box 3 as a standalone? That would save you a lot of time and achieve the same result.

I'm also not too sure why you prefer hardware RAID yet are using software RAID. All our servers have full hardware RAID as standard; have you checked whether your servers are capable of it?

I'm sorry if I have not fully understood; these are just my thoughts.

Many thanks

John

response

Right now I am working with the resources that are available. You see, for a year now I have worked with a few people setting up hosting companies, and I learned the hard way that even if you know someone really well, it isn't guaranteed to work out. After learning some lessons, I decided to just start my own services. I have worked for some pretty large hosting providers, and web work is what I love. On top of that, last month, October, I spent most of the month sick and in the hospital.

So no, I do not prefer software RAID; who would? The IP limitation comes from the supplier, which is why I decided to put the shared and WordPress hosting on that box.

Now, the exact setup I have doesn't really matter. After I invoice some more clients and expand my resources, I will move to different setups. I just wanted to know whether my idea is valid, and whether anyone has suggestions on how to go about setting up something like what I mentioned. Does anyone have suggestions on the setup and how it might be done? Asking why I have the servers I do really isn't moving things forward, you know?

OK!
That clears up some questions.

My recommendation would then be to use my first setup, but keep box 3 as a standalone, non-clustered server. It will probably start to show some performance issues in a few months' time, depending on the number of resellers and sub-client accounts, but by then you should have enough to add 1-2 slave boxes to it and make box 3 into a cluster master.

What scares me, though, is that this doesn't leave much safety in the way of backups.
The included backup system in InterWorx is slow and resource intensive, and I wouldn't recommend using it for regular backups. It is great if you want to make a one-off backup for a migration, or as a safety net before some maintenance, etc.

Hi Oxilary

I'm sorry, I didn't mean to offend.

I think the real question you were asking is as follows:

Using 3 servers, set up as 1 cluster master with 2 slaves, is it possible to:

have 2 separate /home folders, have the cluster/slaves operate on both (i.e. home1 and home2), and have the cluster manager set up accounts on the 2 different /home folders depending on criteria (see the sketch below this list),

or

merge 2 separate storage devices into 1 /home folder for the cluster to use (i.e. /home1 + /home2 = one large /home3)?

(Please note the storage devices are the hard drives held on the cluster master (/home1) and on 1 slave (/home2).)
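
For the first option, the "depending upon criteria" part is the real sticking point, because as far as I am aware the cluster manager has no built-in hook for choosing between home folders; something outside the panel would have to make that decision when each account is created. Purely to illustrate what I mean, and not any InterWorx API, the decision itself is trivial:

def home_root(account_type: str) -> str:
    """Illustration only: pick which storage root a new account lives on."""
    return "/home2" if account_type == "reseller" else "/home"

# New resellers would land on the second storage root; everything else stays put.
assert home_root("reseller") == "/home2"
assert home_root("shared") == "/home"

Getting the panel to actually honour that choice is the hard part.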

I would consider this a lot of effort to achieve, and thinking quickly about the methods that might do it, I suspect they could actually end up using more resources, but I am no expert, sorry.

I could be entirely wrong, so I apologise in advance.

I am glad you're feeling better, and I hope you continue to make a full recovery. I would also, if I may, ask you to seriously consider Evanion's advice, as it is well founded. You may not get the responses on this forum to fully answer your question, and any advice/suggestion posted here is free, based on your post only, and used at your own risk, but it is your decision.

Many thanks

John
