Considering the switch...


  • Guest
    started a topic: Considering the switch...

    I have reviewed your site in detail and have some questions about the product.

    1. Can I import from ensim?

    I currently have Ensim and recently had a problem while attempting to upgrade MySQL to a more current version. I managed to screw up the upgrade without knowing it (it broke PHP's connections to MySQL), and because Ensim has this horrible method of updating sites via maintenance settings, when I ran that process I completely killed Ensim on top of the bad MySQL install.

    Ensim is integrated into the whole management of the server far more deeply than I am comfortable with, not to mention that their support on their forums sucks.

    I have purchased a clean server with no control panel on it and am planning to move all of my sites off of Ensim as soon as I can. Being able to automate this process somewhat with your control panel would be a great plus, but it's not mandatory, as I can just rebuild the sites for the most part.

    2. How integrated is Interworx?

    Following on from question 1, how likely am I to break InterWorx if I upgrade and install software? If I want to upgrade MySQL, for instance, or PHP, or Perl, or anything else for that matter, how do these upgrades get propagated to the sites?

    I use Webmin for most of my server management tasks, so I don't want it to conflict with InterWorx while doing so.

    Your system appears to be much cleaner in how it maintains the server and itself than Ensim ever thought of being, so integration is probably not as big an issue with you as it is with Ensim.

    I've pretty much decided on getting InterWorx, but I just want to know what's ahead of me before I commit.

    Thanks!

  • IWorx-Socheat
    replied
    In 1.9.2, we've added a --file-path=<path> option where you can specify the path, and the filename will be generated for you.
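
    For illustration, an invocation might then look like this (a sketch only: the domain and target directory are made up, and the new flag is assumed to sit alongside --domain the way --file does in the --help quoted further down the thread):

    Code:
    # Hypothetical 1.9.2 usage: back up abc.com into /backups and let
    # backup.pex generate the filename itself.
    /home/interworx/bin/backup.pex --domain=abc.com --file-path=/backups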



  • kipper3d
    replied
    Thanks Socheat,

    The problem with using backup from NodeWorx is that we have no control over where the files are backed up.

    OK, how do you use the --siteworx option for the file-naming conventions and still use the --file option to set the location?
    Last edited by kipper3d; 03-17-2005, 02:11 PM.



  • IWorx-Socheat
    replied
    Originally posted by kipper3d
    The info below is good. Exactly what I needed. Who needs a GUI if you have a CLI?

    OK, question: what's the output file type? tar.gz?
    Yes, it outputs a .tar.gz.

    Originally posted by kipper3d
    So I could back up to a second hard drive, but for a restore, do I need to place the customer's backup within their site location? What if it exceeds their disk quota?
    From the CLI you can simply run: /home/interworx/bin/convert.pex

    There are parameters you can pass to convert.pex, but it is completely interactive and will prompt for any missing information. Here are the main params of interest:

    --archive=<backupfilename>. Tells convert.pex where the backup file is.

    --control-panel=<panel>. Valid options here are "siteworx" or "cPanel". This tells convert.pex what kind of control panel backup it's working with.

    --ip-address. Specifies the IP address that will be used for the restored account. The IP address must already exist on the box.

    Again, if you leave any or all of the above parameters out, interactive mode will be started and will ask for the missing information. Additional parameters are:

    --force. If an account exists with the same domain as the one in the backup, it will automatically be deleted and overwritten by the backup data without prompting you.

    --quiet. Tell convert.pex to run quietly (disables interactive mode). convert.pex will abort if any required information is missing (i.e., it has to prompt for it). For example, if an account exists with the same domain as the one in the backup, and --force is *NOT* specified, convert.pex will abort.
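
    Putting those together, a non-interactive restore might look like this (a sketch only: the flags are the ones listed above, but the archive path and IP address are made up for illustration):

    Code:
    # Restore a SiteWorx backup without prompting, overwriting any
    # existing account that uses the same domain.
    /home/interworx/bin/convert.pex \
      --archive=/backups/abc.com_20050303_155912.tar.gz \
      --control-panel=siteworx \
      --ip-address=192.0.2.10 \
      --force --quiet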

    Originally posted by kipper3d
    Is there a way I can get it to increment the filename by number or date/time? I don't want to overwrite the backup each time; I'd like to keep a few days of backups before overwriting.
    Originally posted by pascal
    What do you mean by "filename template"?
    By using --siteworx, it will automatically generate the filename of the backup file for you. The filename "template" is:

    <domain.tld>_<type>_<timestamp>.tar.gz

    Where <domain.tld> is the domain being backed up, <timestamp> is the time the backup was *started* (not finished), in the format YYYYMMDD_hhmmss, and <type> indicates the type of backup. If <type> is missing, it is a full backup. A partial backup type takes the form partial-dmw, where "d" = database, "m" = mail, and "w" = web files.

    So, for example:
    abc.com_20050303_155912.tar.gz is a full backup of domain 'abc.com' made on March 3rd, 2005 at 15:59:12 (3:59:12 PM).

    abc.com_partial-m_20050316_023632.tar.gz is a partial backup of mail for the domain 'abc.com' made on March 16th, 2005 at 02:36:32 (2:36:32 AM).

    abc.com_partial-dm_20050316_023632.tar.gz is the same as above, except it's a partial backup of both database and mail files.
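
    If you would rather control the rotation yourself, one workaround (a sketch, not an official feature: it just combines the --file option from the --help quoted further down the thread with the standard date command) is to build a dated filename by hand:

    Code:
    # Hypothetical example: embed a timestamp in the --file value instead
    # of relying on --siteworx's auto-generated name.
    /home/interworx/bin/backup.pex --domain=abc.com \
      --file=/backups/abc.com_$(date +%Y%m%d_%H%M%S).tar.gz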

    Originally posted by kipper3d
    --domain is one of the requirements. --domain all?
    Currently, the only way to do this is from NodeWorx. Go to the Backup page, and you can check all the domains you want to back up. If you know all the domain names, you could write a shell script that loops over them. For example:

    Code:
    #!/bin/sh
    # Back up each domain in turn. --siteworx (see the --help quoted
    # further down the thread) supplies the default backup location and
    # the auto-generated filename; --domain alone is not enough.
    for domain in "abc.com" "nexcess.net" "interworx.info" "slashdot.org"; do
      /home/interworx/bin/backup.pex --domain="$domain" --siteworx
    done
    I would suggest not backgrounding this command because, as some of you may have noticed, the backup is fairly resource intensive, and you might not want multiple backups running at once.
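
    To run that loop unattended, a plain cron entry would do (a sketch; /usr/local/bin/backup-domains.sh is a made-up name for the script above):

    Code:
    # Run the backup loop every night at 3:00 AM; the script itself runs
    # the domains sequentially, so only one backup is active at a time.
    0 3 * * * /usr/local/bin/backup-domains.sh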



  • kipper3d
    replied
    Also, how do you back up all domains, instead of just a domain?

    --domain is one of the requirements. --domain all?



  • kipper3d
    replied
    Hi,

    One last missing piece.

    Is there a way I can get it to increment the filename by number or date/time? I don't want to overwrite the backup each time; I'd like to keep a few days of backups before overwriting.



  • kipper3d
    replied
    The info below is good. Exactly what I needed. Who needs a GUI if you have a CLI?

    OK, question: what's the output file type? tar.gz?

    So I could back up to a second hard drive, but for a restore, do I need to place the customer's backup within their site location? What if it exceeds their disk quota?

    Thanks!

    Originally posted by IWorx-Socheat
    Here's the "--help" of backup.pex

    Optional:
    --web (backup web files)

    --mail (backup email files)

    --databases (backup databases)

    --all (backup everything in /home/uniqname. default if previous 3 are all omitted)

    --email (email to send a "Done!" email to)

    Required:
    --domain (domain to backup)

    --file (set the path and filename of the backup file. however, this will change in a future release to just setting the path, and the backup filename will always be autogenerated by the script to maintain consistency)

    --siteworx (set the path and filename to the default locations and filename template)

    Only --file OR --siteworx needs to be supplied, since they obviously achieve the same result.
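
    For reference, a minimal pair of invocations based on that help (a sketch; the domain and file path are made up):

    Code:
    # Full backup, letting --siteworx pick the default location and
    # auto-generated filename.
    /home/interworx/bin/backup.pex --domain=abc.com --siteworx

    # Partial backup of mail only, written to an explicit file.
    /home/interworx/bin/backup.pex --domain=abc.com --mail --file=/backups/abc.com-mail.tar.gz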



  • Guest
    replied
    Originally posted by Guest
    Cult eh? Is there a philosophy that goes along with this cult?
    Not really, just that server/website managers should be easy to use, yet powerful.



  • Guest
    replied
    Understood, but I qualified it in that I said "I thought", I did have a long discussion with them about it, and the exact same problem was posted across their forums.

    This is not meant to be a stab at EV1 at all; EV1 is great. As I said, it's just something that crossed my mind.

    As I said, who knows. I do know that 99% of security breaches come from employees within and not from the outside. All I know is that I opened a ticket in which I gave them access to my system, and 48 hours later I was dead in the water.

    As a matter of fact, that was the only way I got them to give me some time on the system: I basically pointed the finger right back at them and told them that, based on industry analysis, the most likely suspect was someone within their organization.

    This stuff I do here is just a hobby so far; my real job is with a somewhat large computer services company, and I've had many a discussion with some pretty big customers about security breaches that occurred from within and never made it to a public forum.

    The point there is that no one should trust anyone.

    Least of all your wife or your best friends; as a matter of fact, the only folks you can trust are your drinking buddies, since they never remember anything :>)

    Cult eh? Is there a philosophy that goes along with this cult?



  • Guest
    replied
    Glad to have you with us in our little cult . . . err group ;-)

    If you have questions, give us a holler. Someone here will no doubt know the answer :-)

    Originally posted by Guest
    I actually thought all along that a help desk employee from EV1 likely did it after I gave them access to resolve an issue one day, but who knows. I do know that they tried their best for days on end to get back in; I would see hundreds and hundreds of attempts a day for a while there.
    Be careful saying things like this on a public forum unless you have some sort of proof of something malicious from them. Libel also applies on the Internet, and this comes close to that line.



  • Guest
    replied
    That's what I love about companies like this: they do it right the first time.

    I'll be moving about 10 sites this weekend. I'm really not that worried about things at all: I've got the new server locked down tight with a firewall and all the appropriate settings, I installed rkhunter on the first day (it's scheduled to run), and I review my logs daily for intrusion attempts.

    So I'm not really that worried about a rootkit again; simple security measures, such as secret user names that are the only ones allowed to log on to the system, along with long passphrase passwords, will make all but the most dedicated intruder go away, and the stuff on my server is just not that important.

    I actually thought all along that a help desk employee from EV1 likely did it after I gave them access to resolve an issue one day, but who knows. I do know that they tried their best for days on end to get back in; I would see hundreds and hundreds of attempts a day for a while there.

    All that is to say that I think it's unlikely I'll experience a rootkit again, and thus I won't have to go through a nightmare restore scenario again.

    And I back up / to a mirror drive in case of a disk failure, so I'm pretty much covered there.

    I'm not worried in the least; I'm looking forward to the journey. I have to tell you, one of the things I really enjoy when encountering a company like Nexcess/InterWorx and a great product like this is watching it mature and getting to use it as it does.

    This company reminds me of pMachine, a company I have the utmost respect for, as ExpressionEngine is one of the best products I've EVER run across.

    Thanks again for all of the insight and advice...



  • Justec
    replied
    Cool, I love it when other people do all the work so I don't have to. That's what I thought would happen, as the IWorx database is just a MySQL database stored under the /home/interworx directory.

    Originally posted by bluesin
    Also, I noticed the SiteWorx backup I took went to /home/sitename/backups. Using that directory, if something happened to /sitename (perhaps in a drunken stupor I accidentally delete it, or in a fit of rage at an old girlfriend who has a couple of sites on my server and is not very keen on paying me for them, or a bad guy from 24 targets an EMP device at a specific site on my server), the backups would be gone with it.
    I already asked this question (look up several posts in this thread), and Socheat was kind enough to give a detailed answer explaining how this can be done from the command line. I'm not sure if they will incorporate this into the 1.9.2 update so it can be done from NodeWorx, but the InterWorx team seems to program the same way I do: get something up and running that is good (and stable), then build on functionality and improve it. So the NodeWorx backup you see now will look very different from the one you see 6 months from now.

    And I don't think you have to worry about the EMPs (yet). As far as I know it can't be targeted at a single SiteWorx account, just a 10-mile radius or so. I think it's safe to say that if you lose your data, so will a lot of other people :D



  • Guest
    replied
    Originally posted by Guest
    I guess the question is: can I back up to a directory outside of my /sitename directory for safety's sake, 'cause if that's gone, the backups are gone with it...
    I gather that this is actually something they are working on: allowing you to schedule and store your backups remotely, and later restore from said backups.

    The fact that Socheat said 1.9.2, not 2.0, indicates that they are well on their way with this, with a deployment date sooner rather than later (they never give an ETA, so don't ask). Just give them time; they like to do things right the first time. ;-)
    Last edited by timryberg; 03-11-2005, 07:48 PM.



  • Guest
    replied
    Tested some of it

    I think I'm just going to run a test, wish me luck :>)
    OK, I logged on and did a full backup of /home with Webmin. Webmin basically just uses the dump command. This was backed up to a file called "backups".

    After dumping everything, I renamed the /home directory to /homeb and rebooted.

    Everything of course broke, as expected.

    I restored the backup from "backups" and rebooted.

    Everything came back up just fine with everything intact.

    Then I renamed one of my /home/sitename directories.

    Verified the site was not available.

    Restored /home/sitename from "backups".

    Site was available with no problems.

    This is a very basic test, however it seems to work so far.

    I'm thinking that if I found a rootkit on this server today, then after I got a fresh system, reinstalled InterWorx, and had it working and ready for sites, I could restore

    /etc
    /home
    /var
    /root

    And reboot, and I'd pretty much be where I was at the time of the backup. Does anyone have any comments?
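
    For reference, the raw dump/restore round trip behind that Webmin test looks roughly like this (a sketch with made-up paths; dump and restore are the standard tools Webmin drives here, not an InterWorx feature):

    Code:
    # Level-0 dump of /home to a file on a second disk.
    dump -0uf /mnt/backup/home.dump /home

    # Later: restore it from inside the (empty) /home directory.
    cd /home && restore -rf /mnt/backup/home.dump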

    Also, I noticed the SiteWorx backup I took went to /home/sitename/backups. Using that directory, if something happened to /sitename (perhaps in a drunken stupor I accidentally delete it, or in a fit of rage at an old girlfriend who has a couple of sites on my server and is not very keen on paying me for them, or a bad guy from 24 targets an EMP device at a specific site on my server), the backups would be gone with it.

    I guess the question is: can I back up to a directory outside of my /sitename directory for safety's sake, 'cause if that's gone, the backups are gone with it...

    Back when I started in the IT biz, one of my early jobs as a newbie systems programmer trainee was to check the backups each morning and ensure reruns. This consisted of checking the return code for each backup job in the backup printouts (this was an MVS system) and filing them away for safekeeping every day.

    This was a very tedious job, with over 200 jobs that had to be checked. I focused on it very well early on, but after a few weeks at it, and with no errors showing up in quite a while, I got lazy, sort of going through the motions and checking some here and there.

    Then came the day we had a couple of disk drive failures; these things always happened in twos because the actual volumes were in tandem on a physical drum.

    Needless to say, as luck would have it, one of the failures was on a drive that I had not been checking very carefully. The backup had been failing for about a week, and as a result we had to restore from a backup that was nearly two weeks old.

    Thus was the importance of backups beaten into my pea brain, and after the dang rootkit I've developed a backup phobia :>)

    Mind you, I was prepared for a disk drive failure; the rootkit was a new twist because it basically renders your backups useless unless you can make sure it does not live within any of the backup files, and many rootkits do hook themselves into many places.

    That's why I'm taking pains to iron all of this out now...

    Thanks for everyone's input and advice..



  • Guest
    replied
    Well, I wish you good luck :)

    For me, there are two kinds of backups: the box admin backup and the user backup.

    If you need an "all done" backup script to back up your system, then Webmin, rsnapshot, or any other backup script should do the job you need.

    The box admin backup must take into consideration the fact that you'll need to restore all important data as soon as possible. In this case, RAID and a backup script are really enough. Take a look at http://www.rsnapshot.org/ as it is very simple to use and configure; indeed, you only have to edit the conf file to have it running.

    Here is a SQL dump script integrated with rsnapshot (the one I use):

    Code:
    #!/bin/sh
    # The assumption is that this will be invoked from rsnapshot. Also, since it
    # will run unattended, the user that runs rsnapshot (probably root) should have
    # a .my.cnf file in their home directory that contains the password for the
    # MySQL root user.
    #
    # This script simply needs to dump a file into the current working directory.
    # rsnapshot handles everything else.
    ##########################################################
    # Back up every database (one directory per database under $PATH2DB),
    # skipping the socket and this box's error log.
    PATH2DB=/var/lib/mysql
    
    RSLT=`ls -1 --ignore mysql.sock --ignore padawan.carat-hosting.com.err $PATH2DB`
    
    for tocreate in $RSLT; do
            mysqldump -u root "$tocreate" > "$tocreate.sql"
            chmod 600 "$tocreate.sql"
    done
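    To hook that script into rsnapshot (a sketch: backup_script is a real rsnapshot.conf directive, but the script path and destination directory here are made up, and the fields must be separated by tabs in a real conf file):

    Code:
    # In rsnapshot.conf: run the dump script and collect the .sql files it
    # writes into the mysql/ subdirectory of each snapshot.
    backup_script	/usr/local/bin/backup_mysql.sh	mysql/
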
    With this and rsnapshot you can easily create incremental backups at whatever frequency you need:

    Code:
    #########################################
    # BACKUP INTERVALS                      #
    # Must be unique and in ascending order #
    # i.e. hourly, daily, weekly, etc.      #
    #########################################

    interval hourly 4
    interval daily 7
    #interval weekly 4
    #interval monthly 3
    This example means you'll keep 4 hourly backups (one every 6 hours) and 7 daily backups; uncomment the weekly and monthly lines to also keep 4 weeks and 3 months of backups.

    A good idea (I'm going to test this) would be to also integrate the iworx backup script into rsnapshot, to back up all SiteWorx accounts for example. That way you can be sure the old files will be deleted automatically.

    This easily allows you to respect your SLA.

    The InterWorx users may also create their own backup files whenever they want, by using the InterWorx-CP functions :)

    Here is how my backups work.

    Good luck

    Pascal
    Last edited by pascal; 03-11-2005, 06:25 PM.

