SiteWorx Cron Using wget

I’m trying to set up a simple cron job via SiteWorx and can’t get things to work the way I need them to. We’re running an InterWorx cluster with two nodes plus the cluster manager.

The actual cron is simple: wget http://www.[domain].com/script.php. From what I can tell, the cluster manager (which I understand handles the crons entered in SiteWorx) isn’t able to resolve www.[domain].com properly, and the cron just times out.
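For reference, the crontab entry being attempted would look something like the following (the hourly schedule and the example.com domain are placeholders):

```
# fetch the script hourly via the site's hostname; the request must go
# to www.example.com (placeholder) and not an IP, for licensing reasons
0 * * * * wget -q -O /dev/null http://www.example.com/script.php
```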

I’ve been told that I can use wget http://[cluster manager ip]/~[account name]/script.php, which would work in most cases, but not in this one: the cron I need to run depends on the hostname being www.[domain].com rather than the IP, for licensing reasons.

So, I somehow need to get wget http://www.[domain].com/script.php to work properly. This seems like a very simple task, so I’m sure someone has come across it before and hopefully found a solution. I’m hoping that either I’m missing something or there’s a simple fix out there.


I have exactly the same problem. The reason it won’t work in my particular case is that with http://[cluster manager ip]/~[account name]/script.php you can only reach the primary domain of an account.

I’ve already found out that on the cluster manager I get “couldn’t connect to host” when I try to access the shared IP using curl/telnet/wget/… tcpdump shows that requests are sent out but no replies come back.

My current explanation is that it’s some kind of problem with the load balancer (LVS) when connecting from localhost.

One solution could be to configure curl and wget to use an external proxy server, so the HTTP connections are routed back in from outside the cluster.

But this would only solve the problem for curl and wget; if someone, for example, tries to connect to his own website through PHP functions inside his web page, the problem would still persist.
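As a sketch of the proxy idea for the cron itself (proxy.example.com:3128 is a placeholder for a proxy you actually run outside the cluster):

```
# route the fetch through an external proxy so the request re-enters
# the cluster from outside; proxy host/port and domain are placeholders
*/5 * * * * http_proxy="http://proxy.example.com:3128" wget -q -O /dev/null http://www.example.com/script.php
```

wget picks up the http_proxy environment variable for plain HTTP requests, and curl honours the same lowercase variable, so one setting covers both tools.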

Have you found a solution?

Hi arusa


I hope you don’t mind, but I read your post this morning and have been thinking about it. I class myself as a newbie, so please forgive me if I’m totally wrong or if my thoughts cause unexpected issues when you try them.

If I understand correctly, the slaves and the master mount the same /home dir, and since you’re trying to wget a domain held in a SiteWorx account, why not tell it to go directly to just one slave or the master?

I.e. edit your hosts file and add the domain you want to resolve internally, pointing www.domain.url and domain.url at that node’s IP.
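For example (a sketch; 10.0.0.11 stands in for the chosen node’s internal IP):

```
# /etc/hosts on the machine running the cron
10.0.0.11   www.domain.url domain.url
```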

I could be entirely wrong though, which is why I’ll say it in capitals: ONLY TRY THIS ON A TEST SERVER.

I believe this should bypass the LVS whilst allowing normal DNS to resolve and serve as normal, but I’m not too sure whether it would stop the LVS working as expected.

I hope it helps a little perhaps

Many thanks



I have the same problem, but only on CentOS 6 (CentOS 5 works fine). The problem (“couldn’t connect to host”) occurs when the cluster manager doesn’t serve web requests itself.
If the cluster manager does serve web requests in the cluster, then the cron wget works fine on CentOS 6.
I haven’t found a solution to this (I don’t want to allow HTTP requests to the cluster manager).


Hi Kacsa

I hope you don’t mind, but if you have a test server, you could try the above. It might work, but it’s perhaps not a clean fix for the issue and could cause unexpected problems.

If you edit your hosts file, you should be able to tell your SiteWorx domain which slave to use for wget. My problem is that I’m in the dark: I don’t know whether it would work, might cause unexpected issues, or might even stop the LVS from working as expected. I’m thinking not, though, as external HTTP requests would still go through the LVS on the CM, I think.


CM (set not to serve http requests)
node1 (serving http requests)
node2 (serving http requests)

As I understand it, all nodes use the CM’s /home dir, or if using a drop-in NAS/SAN they are set to use that home directory.

Domain for wget: www.domain.url (held in /chroot/home on the CM, but both nodes can access this area)

hosts file on node1: www.domain.url → node1

hosts file on node2: www.domain.url → node1

Or you could point the hosts file on node2 at node2 rather than node1, since whichever node is being used would need access to /chroot/home for the domain.

If I am correct in my thinking, with the above, any internal HTTP request from www.domain.url back to www.domain.url would consult the hosts file first and so route directly to the chosen node, even if it is node2 serving the request. The CM’s LVS would simply serve external HTTP requests to either node (node1 or node2) as per the load-balancing policies.
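Putting the layout above into hosts-file form (a sketch; 10.0.0.11 and 10.0.0.12 are placeholders for node1’s and node2’s internal IPs):

```
# /etc/hosts on node1: resolve the domain to node1 itself
10.0.0.11   www.domain.url

# /etc/hosts on node2: also pointed at node1 (or equally at 10.0.0.12,
# i.e. node2 itself, since both nodes share /chroot/home)
10.0.0.11   www.domain.url
```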

As I said, though, this is untested and only my thoughts for a quick workaround. I appreciate it wouldn’t work if all domains required a wget cron back to themselves or to another internal domain held on InterWorx, as it would become unmanageable very quickly, and it has other pitfalls I’m sure.

I could be entirely wrong though, so I apologise in advance and hope a more experienced user will post their thoughts.

Many thanks


My solution with the proxy server is working.

d2d4j: your solution should work too, but it would mean manual work for every domain, plus adding every new domain in future…

Hi arusa

Many thanks, I thought it might work but as we have not had any needs to do this, it’s just my thought.

The adding of new domains could be automated with a hook when a new SiteWorx account is created, but I can instantly see issues if the hosts file contained domain.url and not www.domain.url, as email might also use the hosts file, and that would cause problems. So my thought is not a long-term solution, sorry.

Yours sounds like the better option.

Many thanks