I have recently blogged about Slicehost vs. Linode and my decision to move my sites to the latter. Since then I can safely say that I made the right choice. Linode's support is phenomenal. No ticket has gone unanswered for more than 5 minutes, and all of them have been resolved. Even when I asked about a configuration issue regarding Heartbeat, which was clearly outside the realm of support, the support engineers looked at my configuration and identified the problem area. That alone saved me hours of troubleshooting.
Your website must be available - all the time. Why? Because if you have something to say (a blog, a journal, a rant page, a forum, etc.) you need your audience to be able to read your material at any time. Discussion forums, websites with custom applications or services, even information-based websites need to have as close to 100% uptime as possible. How can this be achieved? The solution is a High Availability (or HA) cluster.
Linux has been used for HA tasks for many years. The Linux-HA wiki has a lot of valuable information, as well as guides and resources that you can use to increase the redundancy of your websites. In addition, Google is your friend: numerous bloggers have shared their experiences with the public on how to create HA resources. Finally, you can check Linode's Library - a set of guides that show you how to create HA clusters with your Linodes.
My task for the last few weeks has been to create a High Availability cluster of services to serve PHP and MySQL, with the ability to grow almost without limit. In the next few weeks I will post a series of blog posts outlining how to achieve an HA cluster for your sites.
The cluster will be built using CentOS as the OS of choice. I have also experimented with Gentoo and Ubuntu. You can do everything I do here with Ubuntu if you wish; there are slight differences in certain commands and steps which the blog posts will not cover. As far as Gentoo is concerned, at the moment there is a package block between Pacemaker and Heartbeat. Once that is resolved, I will try to redo the whole thing using Gentoo - as it is my OS of preference.
We will build two boxes to serve as load balancers. The boxes (or Linodes) will have Heartbeat, Pacemaker and nginx installed on them. A "floating" IP address will move from one node to the other in the case of a failure. Each load balancer uses nginx's proxy functionality to forward all requests to a set of web servers.
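To make the proxying part concrete, here is a minimal sketch of what the nginx configuration on each load balancer could look like. The backend addresses and server name are placeholders - substitute the private IPs of your own web servers.

```nginx
# Hypothetical upstream pool of web servers (private IPs are examples)
upstream backend {
    server 192.168.100.11;
    server 192.168.100.12;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # Forward everything to the pool, preserving the original host
        # and client address for the web servers' logs
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Pacemaker then manages the floating IP as a cluster resource, so whichever load balancer currently holds it answers the requests.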
We will also build a web server. This again is a CentOS box, running PHP and nginx. The web server will store its data locally and connect to the database cluster. Once the last part of this How-To is completed (the creation of an HA NFS), we will be able to add as many web servers as we need.
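As a rough illustration (the document root and the FastCGI port are assumptions, not my exact configuration), the web server's nginx vhost hands PHP requests to a local FastCGI process along these lines:

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;
    index index.php index.html;

    location ~ \.php$ {
        # Pass PHP scripts to a FastCGI process assumed to listen locally
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```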
The next set of boxes forms the database cluster. The setup is two servers with Heartbeat, Pacemaker and MySQL, set up with Master/Master replication. Again, a "floating" IP address moves from one server to the other in the case of a failure.
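For the curious, the heart of a Master/Master setup is a pair of my.cnf fragments like the ones below (server IDs and hostnames are illustrative). Offsetting auto_increment on the two masters avoids primary-key collisions when both accept writes:

```ini
# On db1
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1

# On db2
[mysqld]
server-id                = 2
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2
```

Each server is then pointed at the other's binary log with a CHANGE MASTER TO statement, and the floating IP is handled by Pacemaker just as on the load balancers.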
The NFS is also a two-box setup. Again, a "floating" IP address is used to connect to the file system. DRBD is installed on both boxes to handle the replication.
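A DRBD resource definition for the NFS pair could look roughly like this (hostnames, block devices and IPs are placeholders):

```
resource r0 {
    protocol C;                 # synchronous replication
    on nfs1 {
        device    /dev/drbd0;
        disk      /dev/xvdb;    # spare block device on the Linode
        address   192.168.100.21:7788;
        meta-disk internal;
    }
    on nfs2 {
        device    /dev/drbd0;
        disk      /dev/xvdb;
        address   192.168.100.22:7788;
        meta-disk internal;
    }
}
```

The active node mounts /dev/drbd0 and exports it over NFS; on failover, Pacemaker promotes the peer and moves the floating IP along with it.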
All Linodes are located in the same data center. At the moment there is no way to create Linodes in different data centers with the above setup, so geographical redundancy is not possible. The whole setup uses 9 Linodes, the last of which is used for administrative tasks (note that my count includes two web servers).
All Linodes have active iptables configurations. All nodes use 322 as the SSH port, to deter the novice hacker. Every port is blocked from the Internet apart from the ones needed for essential services. For instance, the load balancers allow connections on ports 80 and 443 (http and https), but they only talk to each other on the Heartbeat port. Likewise, the web servers only accept connections on ports 80/443 from the load balancers, the database servers allow connections only from the web servers, and so on.
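To give you an idea, the rules on a load balancer might look like the following fragment in iptables-restore format (the peer's address is a placeholder):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Public web traffic
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
# SSH on the non-standard port
-A INPUT -p tcp --dport 322 -j ACCEPT
# Heartbeat (UDP 694) only from the peer load balancer
-A INPUT -p udp -s 192.168.100.2 --dport 694 -j ACCEPT
COMMIT
```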
The administrative node resides in a different data center. It runs Nagios and monitors all the nodes of our HA cluster. Naturally, the iptables setup of each node is adjusted to allow connections from this particular node.
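A minimal Nagios definition for one of the cluster nodes, assuming the stock linux-server and generic-service templates, would be along these lines (host name and address are placeholders):

```
define host {
    use        linux-server
    host_name  lb1
    alias      Load Balancer 1
    address    203.0.113.10    ; placeholder public IP
}

define service {
    use                  generic-service
    host_name            lb1
    service_description  HTTP
    check_command        check_http
}
```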
I hope that these blog posts will serve as a learning exercise/guide for those who want to delve into High Availability websites. The primary reason I built this cluster is to offer these services to customers with busy sites that require maximum availability. I am currently putting the final touches on the hosting service I am going to offer, so stay tuned for the details.
Through the years I have hosted my sites with various hosting companies. I have had really good experiences, like Vertexhost, and really terrible ones - I don't remember the name of the host, but that kid (as he turned out to be later on) managed to lose 1.6GB of my data. You can safely say that I have been burned by bad hosting companies but have also enjoyed the services of good ones. In the case of Vertexhost, I part-owned that company a few years back and I know that the current owner is a straight-up guy who really cares about his customers.
Since I moved my emails to Google Apps I only need the hosting for my personal sites such as my blog, my wife's sites (burntoutmom.com, greekmommy.net) and a few other small sites.
I used to host those sites on one of my company's clusters. The bandwidth consumed was nothing to write home about (I think in total it was a couple of GB per month, ~1.00 USD) so it didn't matter that I had them there. However, recent events forced me to move them out of that cluster. I was in the market for good and relatively cheap hosting. I did not want to purchase my own server or co-locate with someone else. My solution was going to be a VPS, since I would be in control of what I install and what I need.
I got Slicehost's 4GB package (250.00 USD per month) and installed Gentoo on it. Apart from the price, which was a bit steep, everything else was fine. I was happy to be able to host my sites in a configuration that I was comfortable with, with the understanding that if the VPS failed, all my sites would go down. That, however, is the risk that everyone takes when hosting their sites on a single machine. The higher the availability and redundancy, the higher the cost.
I must admit that signing up was not a very happy experience. I went and paid with my credit card, as they pro-rate your month based on your package. Almost immediately after signing up came the email informing me that my credit card had been charged for the relevant amount. I got into the box through ssh, updated the /etc/make.conf file with the USE flags that I needed, ran emerge --sync and then emerge --update --deep --newuse --verbose world to update the system.
It must have been around 5-10 minutes into the process that I received an email from Slicehost saying that they were checking my account information and that I needed to confirm my credit card details. I immediately replied to their email (gotta love the desktop notifications in GMail) with the information they needed.
After I sent the email, I noticed that the box was not responding. I tried to log back in and could not. I had also been logged out of the management console on the Slicehost site (and could not log back in). I was fuming! They had severed the connection to the VPS in the middle of a compilation to check my credit card information. I understand that they need to perform checks for fraud, but two questions came to mind:
No more than 10 minutes later the whole thing had been resolved. I received an email saying that "everything is OK and your account has been restored", at which point I logged back in to redo the compilations. I also received emails from their support/billing team apologizing, but stating that although the initial email says they charge the credit card, they don't. It is something they need to correct, because it pisses people (like me) off.
There was nothing wrong with my setup - everything was working perfectly - but the price was really what was bothering me. I would be able to support the sites for a few months, but since literally none of them makes money (maybe a dollar here or there from my blog, but that is about it), I would have to pay for the hosting out of pocket every month. I had to find a different solution that would be:
After a lot of research I ended up with two winners: Linode and Prgmr. I opted for Linode because, although it was quite a bit more expensive than Prgmr, it had the better console for handling your VPS. I will, however, try out Prgmr's services in the near future to assess how good they are. They definitely cannot be beaten on price.
Setting up an account with Linode was very easy. I didn't have any of the mini-saga I had with Slicehost. The account was created right there and then, my credit card was charged, and I was up and running in no time. Immediately I could see the difference in price: Linode's 4GB of RAM package is 91.00 USD cheaper (159.00 USD vs. 250.00 USD for Slicehost). For the same package, the price difference is huge.
I started testing the network, creating my VPS in the Atlanta, GA data center (Linode offers a number of data centers for you to choose from). The functionality available to me was identical, and in some cases superior, to that of Slicehost: there are a lot more distributions to choose from, and you can partition your VPS the way you want, to name a couple of examples.
Sifting through the documentation, I saw a few topics regarding high availability websites. The articles described using DRBD, nginx, Heartbeat, Pacemaker, etc. to keep your sites highly available. I was intrigued by the information and set off to create a load balancer using two VPSs and nginx. I have documented the process, and that is another blog post that will come later this week.
While experimenting with the load balancer (on a Saturday evening, no less) I had to add a new IP address to one of the VPS instances. At the time my account would not allow such a change and I had to contact support. I did, and got a reply in less than 5 minutes. I was really impressed by this. Subsequent tickets were also answered within the 5-minute time frame. Kudos to Linode support for their speed and professionalism.
For a lot cheaper, Linode offered the same thing that Slicehost did. Moving my sites from one VPS to another was a matter of changing my DNS records to point to the new IP address.
I have been using Linode for a week, and so far so good. The support is superb, the documentation is full of how-tos that allow me to experiment with anything I want to, and the prices are not going to break me.