Clustering questions

Hi all,

I'd definitely like to upgrade to a cluster solution.

The question is about hardware choice.

I currently have 2 boxes: one P4 3.06 GHz and one Atlas 3.0 GHz with 2 SATA disks.

I think that in the cluster solution the critical link is the cluster manager itself?
So in the new architecture I plan to have a strong cluster manager and lower-spec boxes for the nodes. Also, the shared storage is on the cluster manager, so I need large, fast SCSI disks attached to it.

Does that sound right?

For example:
The cluster manager: a Xeon with 2 SCSI disks in RAID 1
The nodes: maybe a Celeron or a P4 2.8 with only one 80 GB disk

What do you think about an architecture like this one?

The second step will be to secure the cluster manager by installing a backup cluster manager and using something like Heartbeat, or something else…
But first we have to get through step one, and the hardware architecture seems to be the important part.
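For that second step, my understanding is that Heartbeat is driven by a couple of small config files. A minimal sketch, just to show the shape of it (hostnames, interface, and the floating IP are made-up examples, not a tested config):

```
# /etc/ha.d/ha.cf -- illustrative only; hostnames/interfaces are examples
keepalive 2            # heartbeat every 2 seconds
deadtime 30            # declare the peer dead after 30s of silence
bcast eth1             # send heartbeats over the private interface
auto_failback on       # move services back when the primary returns
node cm1.example.com   # `uname -n` of the primary manager
node cm2.example.com   # `uname -n` of the standby

# /etc/ha.d/haresources -- primary node, floating IP, services to manage
cm1.example.com 192.168.0.10 httpd
```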

All your experiences, comments, etc. are welcome.

Thanks

Pascal

Since the current solution isn't HA by itself, Pascal, the cluster manager should be as redundant hardware-wise as possible. RAID 1 is a good choice.

Chris

I'll also add that if InterWorx does go the HA route eventually, you'll wish you had matched/identical hardware between the boxes.

For the moment, if it’s just load balancing, I don’t imagine it will matter much to have ‘lesser’ hardware for the nodes – but HA is such a tasty option you may regret not having matched the hardware from the beginning.

JB

Thanks all for your comments.

Could InterWorx do the solution design and implementation?

Pascal

We don't at this time, Pascal, but if you want to post your thoughts on systems/configs I'm sure we'd all be happy to help.

Chris

Thanks Chris.

Our hardware choice would be something like this:

  • For the manager
    1 Atlas Server 1850-300 Pro
    . Two 64-bit Intel Xeon™ processors at 3.2GHz
    . 1MB L2 cache
    . 2GB DDR-2 SDRAM, upgradeable to 12GB
    . Two 300GB 10,000RPM Ultra320 SCSI drives in RAID1
    . Hot-swap drives
    . 100Mbps network connection
    . Intel PRO/1000 MT single-port Gigabit NIC (at 100Mbps)
    . 2000GB data transfer

  • For the node
    1 Atlas Server 850-160
    . Intel Pentium 4 processor at 3.0GHz, dual core (new)
    . 2MB L2 cache
    . 800MHz front side bus
    . 1GB 533MHz DDR-2 SDRAM, upgradeable to 8GB
    . One 60GB 7200RPM SATA drive (maybe less?)
    . 100Mbps network connection
    . Intel PRO/1000 MT single-port Gigabit NIC (at 100Mbps)
    . 2000GB data transfer

Our plan is:
-1- Install CentOS 4.1 on both boxes
-2- Perform the software installation and configuration for LB and clustering (system and InterWorx-CP)
-3- Create a few test accounts and run some tests
-4- Validate the hardware, system, and software config: go or no-go for migration
-5- Migrate ALL SiteWorx accounts from all our existing boxes to the cluster manager (the shared storage, in fact)
-6- Run some more tests: go or no-go for production

We still have some questions.

Today we have two boxes that are not on the same IP subnet (segment?); tomorrow they will be, and they have to be.

-1- What about the reseller migration?
I'm just a little worried about this.

-2- My other question is about backups.
Today we back up ALL SiteWorx accounts on all our boxes in one shot. The backups are stored on a NAS. Our main problem with this is that it takes 3-4 hours and causes high CPU usage.
Tomorrow, with the LB, we plan to launch the backup on the node box, not on the manager. That way the node box will be busy for 4-5 hours with very high CPU usage, but all the services should stay alive and all requests should be handled by the manager, so the end client should not be impacted, right?
This is our main reason for migrating to an LB/cluster solution; is that reasoning correct?
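To limit the CPU hit during those 4-5 hours, we were also thinking of running the per-account backups at low priority. A minimal sketch of the idea in Python (the account list and the backup command are hypothetical placeholders):

```python
#!/usr/bin/env python
# Sketch: run per-account backups one at a time at the lowest CPU
# priority so customer traffic on the node is always served first.
import subprocess

ACCOUNTS = ["site1", "site2", "site3"]        # example account list
BACKUP_CMD = "/usr/local/bin/account-backup"  # hypothetical script that
                                              # dumps one SiteWorx account

for account in ACCOUNTS:
    # "nice -n 19" gives the backup the lowest scheduling priority
    rc = subprocess.call(["nice", "-n", "19", BACKUP_CMD, account])
    if rc != 0:
        print("backup failed for %s (exit %d)" % (account, rc))
```

Does that sound like a sane way to keep the node responsive?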

-3- We have some problems with the migration:
The main one is that if a SiteWorx account has reached its storage limit, the full backup of that account will not be done at all. It would be great to allow the NodeWorx admin to perform a full backup without taking the SiteWorx account's quota into account.
How can we be sure all our SiteWorx accounts are backed up properly?
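In the meantime, the best idea we have is to detect the accounts sitting at their limit before launching the backups, so at least we know whose backups are at risk. A rough sketch that parses `repquota -a` output (column positions may need adjusting on your system):

```python
#!/usr/bin/env python
# Rough sketch: list users at/over their hard block limit, since their
# full backups are the ones at risk. Parses `repquota -a` output.
import subprocess

out = subprocess.Popen(["repquota", "-a"],
                       stdout=subprocess.PIPE).communicate()[0]

for line in out.decode("ascii", "replace").splitlines():
    fields = line.split()
    # Data rows look like: user  --  used  soft  hard  [grace] ...
    if len(fields) < 5 or not fields[1].startswith(("-", "+")):
        continue
    user, flags = fields[0], fields[1]
    used, hard = int(fields[2]), int(fields[4])
    if flags.startswith("+") or (hard and used >= hard):
        print("%s is at quota (%d of %d blocks); backup may fail"
              % (user, used, hard))
```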

The second one is about reseller accounts and their SiteWorx accounts.

So here we are now. We'll begin the config ASAP, and if we have a problem we won't hesitate to let you know :slight_smile:

Thanks all for your comments, help, and thoughts about all these things.

Pascal

Pascal, keep us posted. I'm also very interested in switching to InterWorx with the full-blown clustering option.

If I may make a suggestion, I'd prefer a dual Opteron server to dual Xeons. The Opterons smoke the Xeons anytime :smiley:

And since when is there a 300GB 10,000RPM U320 SCSI drive? I think you mean 300GB 7200RPM SATA drives? If you do intend to use SATA drives, the new Western Digital SATA2 drives are very fast, and you could couple them with the new 3ware 9550SX RAID card in a RAID-5/RAID-10 configuration for best performance.

Thanks BlueFace

Are there any InterWorx customers using the InterWorx cluster/LB?

Is there any user feedback? Some InterWorx user experiences would be great :slight_smile:

Also, am I right in saying that the DB server is on the cluster manager?

Thanks

Pascal

Yeah, I believe the DB server is on the cluster manager server.

I'm also wondering how fast disk read/write access is from the slave/node servers to the master server (cluster manager). Is it about the same as on local hard drives?

I'm afraid of an I/O bottleneck or iowait issue on the cluster manager if there is heavy disk usage.
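Before going live I'll probably measure it with something crude like this; the paths are just examples and it only gives a ballpark sequential-write figure, but it should show whether NFS is in the same league as the local disks:

```python
#!/usr/bin/env python
# Crude sequential-write benchmark: local disk vs. the NFS mount that
# points at the cluster manager. Paths below are example placeholders.
import os, time

PATHS = {"local": "/tmp/bench.dat",
         "nfs":   "/mnt/manager/bench.dat"}  # hypothetical NFS mount
MB = 1024 * 1024
SIZE = 100 * MB                              # write 100 MB per target
block = b"\0" * MB

for name, path in PATHS.items():
    start = time.time()
    f = open(path, "wb")
    for _ in range(SIZE // MB):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())                     # flush past the page cache
    f.close()
    elapsed = time.time() - start
    print("%s: %.1f MB/s" % (name, SIZE / MB / elapsed))
    os.remove(path)
```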

We will also be running this solution with the following h/w. (Still CAN'T get a line break from the keyboard! Why not? What do you press? I can't seem to find the combination! :frowning: Switching to Notepad now.)

Cluster Manager:
Dual Xeon 2.8GHz, 1MB cache
2GB RAM
2x 73GB 10,000RPM SCSI disks in RAID1
2x gigabit LAN cards
First cluster node:
Pentium 4 3.0GHz
2GB RAM
2x 80GB SATA disks
2x gigabit LAN cards

I don’t think there will be a bottleneck here, will there?

Best regards from Spain!

Hi

Why 2x 80GB SATA disks on the clustered node?

Has any of you already deployed this InterWorx solution?

Pascal

SATA disks are faster than IDE and slower than SCSI, but the price difference between SATA and SCSI is very high, so for a node I think SATA is enough. Having 2x 80GB allows RAID1, and I have the disks sitting on my table at the moment so I don't need to buy new ones! :wink:

Yeah, for a node, SATA is more than enough. If possible get the WD Raptor 10k rpm SATA HDD - they are very fast. Software RAID-1 is also more than enough for a node.
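Creating the mirror takes one mdadm command; something like this (device names are examples, double-check yours first):

```
# Mirror two SATA disks -- device names below are examples only
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Watch the initial sync progress
cat /proc/mdstat
```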

I'm surprised that none of the InterWorx guys have replied to this thread; it makes me wonder whether the clustering feature really works as expected in a live production environment.

Exactly !!!

Still waiting for feedback from InterWorx users, just to know if this solution is “mature” or if I'd better wait for a few upgrades (which I would understand).

Pascal

Just to clarify: you're wondering if anyone has used SATA in our InterWorx clustering solution? We had a very busy week last week, especially Chris, who is the authority on InterWorx clustering, so we apologize if we've been a little slow to answer.

I think we are all talking about the real clustering: will two (or ten) servers work with just one cluster manager? Will the cluster manager be the limiting factor? What hardware do you guys at InterWorx recommend? Will the bottleneck be at the processor level, memory, or network throughput? That's what we're all asking… (Correct me if I'm wrong ;))

And in general, what feedback and comments do InterWorx users have about the InterWorx LB/cluster solution?

Are there any known problems?

I'd really like to know whether I'd better wait for a few updates or go ahead now.

Also, in my previous post I asked a few questions; see http://www.interworx.com/forums/showpost.php?p=5498&postcount=6

Thanks Socheat, and don’t worry I can wait :-p

Pascal

Still no update? :confused:

I think our clustering docs may answer a few of these general questions:

The “Overview” sections contain a good deal of background info (how the clustering works, etc.).

I’ll make a few comments, but Chris will definitely correct me if I’m wrong. :slight_smile:

Currently, there is only one cluster manager in a cluster (that is, you can’t set up two managers in the same cluster). In the current setup, all SiteWorx account data is stored on the cluster manager’s hard drive(s), including database data. The nodes access this data on the cluster manager via NFS.
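Conceptually, the export on the cluster manager and the mount on each node look something like this (the path and addresses here are illustrative, not the exact config the clustering setup writes for you):

```
# On the cluster manager -- /etc/exports (path and subnet are examples)
/home  192.168.10.0/24(rw,sync,no_root_squash)

# On each node -- the matching /etc/fstab entry
192.168.10.1:/home  /home  nfs  rw,hard,intr  0 0
```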

Socheat

Hi,

An important question.

I know that LVS generally uses NAT for the communication between the cluster manager and the clustered nodes.

What do I have to ask my datacenter for?
Do I need 2 Ethernet cards on the cluster manager?
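From what I've read about generic LVS-NAT (not necessarily how InterWorx configures it), the director sits on both networks, which is why I'm asking. Something like:

```
# Generic LVS-NAT illustration -- NOT necessarily the InterWorx setup.
# eth0 = public side (example VIP 1.2.3.4), eth1 = private node network.
ipvsadm -A -t 1.2.3.4:80 -s rr                   # virtual HTTP service
ipvsadm -a -t 1.2.3.4:80 -r 192.168.10.11:80 -m  # node 1 (-m = NAT)
ipvsadm -a -t 1.2.3.4:80 -r 192.168.10.12:80 -m  # node 2
echo 1 > /proc/sys/net/ipv4/ip_forward           # director forwards replies
```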

It would be great if Chris or Paul could contact me. Indeed, my datacenter confused me with their proposal. I need to be sure which of these I have to ask for:

  • 2 independent boxes with public IPs
  • 1 box with a public IP and one with a private IP

OK, what I am sure of is that these boxes have to be on the same network segment.

Chris, Paul, could you please contact me by email (the one stored in your forum profile)? I have to ask you a few important questions before buying my new servers. Or maybe I should open a support ticket?

Thanks a ton

Pascal