So I’ve started getting some users with large (~2GB) IMAP accounts. At various times throughout the day the load creeps up from its normal ~1 to about 5-10, driven mostly by IOwait.
I’m not sure if there is a way to keep the IMAP processes from sitting in the D (uninterruptible disk wait) state while they read the mail directories, so they don’t take over the server. Or is there some other config change I can make to lessen the load?
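In the meantime, if you want to confirm it really is the IMAP processes piling up in D state, here’s a rough Python sketch — it assumes the psutil package is installed and a Linux host, since iowait and the disk-sleep status are Linux-specific:

```python
# Rough sketch, assuming psutil is installed and a Linux host
# (iowait and the disk-sleep status are Linux-specific).
import time
import psutil

while True:
    cpu = psutil.cpu_times_percent(interval=5)           # sample CPU over 5 seconds
    blocked = [
        p.info for p in psutil.process_iter(['pid', 'name', 'status'])
        if p.info['status'] == psutil.STATUS_DISK_SLEEP   # the "D" state
    ]
    print(f"iowait: {cpu.iowait:.1f}%   processes in D state: {len(blocked)}")
    for info in blocked:
        print(f"  pid {info['pid']:>6}  {info['name']}")
    time.sleep(10)
```

If the blocked processes are consistently the IMAP/POP workers, that points at the maildir reads rather than, say, spam scanning.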
If that isn’t possible, how hard do you think it would be to set up InterWorx to use /home for websites and a separate /email partition on its own SSD?
I’m guessing I would have to do some hard links manually after setting up an account?
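One note on that: hard links can’t cross filesystems, so if /email ends up on a separate SSD you’d be looking at symlinks (or a bind mount) instead. Here’s a hypothetical Python sketch of the move-and-symlink step — the paths are assumptions, not InterWorx’s actual mail layout, and you’d want the mail services stopped while moving:

```python
# Hypothetical sketch: move one account's maildir to an SSD-backed /email
# partition and leave a symlink behind so the mail server still finds it.
# The paths below are assumptions, not InterWorx's actual layout.
import os
import shutil

OLD_PATH = "/home/example_user/Maildir"   # assumed current location
NEW_PATH = "/email/example_user/Maildir"  # assumed mount point on the SSD

def relocate_maildir(old_path: str, new_path: str) -> None:
    os.makedirs(os.path.dirname(new_path), exist_ok=True)
    shutil.move(old_path, new_path)   # copies across filesystems, then removes the original
    os.symlink(new_path, old_path)    # symlink, since hard links can't span devices

if __name__ == "__main__":
    relocate_maildir(OLD_PATH, NEW_PATH)
```

Either way it’s outside what the control panel manages for you, so backups or updates might not expect it.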
Or I guess I can just tell my users to archive their email and not allow inboxes that large.
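If you do go the quota/archiving route, something like this could flag the offenders; the /home/*/Maildir glob is just a guess at where the maildirs live, so adjust it to your layout:

```python
# Hedged sketch for spotting mailboxes over a size limit; the /home/*/Maildir
# glob is an assumption about where the maildirs live.
import os
import glob

LIMIT_BYTES = 2 * 1024**3  # flag anything over ~2GB

def dir_size(path: str) -> int:
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # message delivered or expunged mid-scan
    return total

for maildir in glob.glob("/home/*/Maildir"):
    size = dir_size(maildir)
    if size > LIMIT_BYTES:
        print(f"{maildir}: {size / 1024**3:.1f} GB")
```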
Yeah, I figured the move wouldn’t be easy within the InterWorx setup.
I have 7,200 RPM drives. I could maybe justify the price of 15K drives, since those should be cheaper than SSDs, and maybe just make those my main drives for the whole server.
I think SSDs would still be faster in my case, but if 15K drives can solve the issue, that’s all I care about.
If you’re looking at SSDs, the 1TB enterprise-class drives are expensive, and there are circumstances in which you are better off using platter drives (15K).
We have introduced a new server using hybrid drives (SSD + 7.5K platter), which are working very well: a 6-drive array in RAID 1 (mirrored pairs), each drive 1TB, so that gives us 3 x 1TB of usable space.
This appears to be working very well (touch wood), and the largest load spike we have seen is just under 2 (about 1.8), so you may want to consider something similar.
If it helps you decide, we can let you test on an old server we are bringing offline: you can load a 2GB mailbox onto it and see how it performs, and we can also load up a test server using 250GB SSDs if you want.
Thanks for the offer, John. I just told the few users (all under one account/domain) to archive their inboxes, or we would have to move them to Exchange or a similar hosted system. When I’m charging shared hosting prices, I have to set limits somewhere.
My first thought is that SpamAssassin or another spam defense is the main cause of the load spikes, especially if you’re running Bayes on a site-by-site basis. But I’m sure there’s more to your case than meets the eye. The first thing I’d do is what you already did: set rules about house cleaning.