
The Switch from HostGator to DigitalOcean

Introduction and History

About 6 years ago, I was in desperate need of a reliable web host. I had a small personal website that I needed to be online most of the time, and I preferred something that was cheap, yet customizable. The solution that I found was HostGator’s shared hosting. The service was fairly cheap and gave me full access to MySQL servers, mail servers, and everything else that I needed. I stayed on their shared hosting plan for about two years. Then, I upgraded to a HostGator VPS.

I stayed on this VPS until about 2 weeks ago. The VPS was running CentOS and had WHM and cPanel installed. It was mostly managed, meaning that everything was installed and managed by the HostGator team, and changes that I needed to make were usually much faster if I contacted the HostGator team directly instead of attempting to make them myself. This served my purposes very well since, at the time, I knew next-to-nothing about web server management. However, as I learned more about the way that web servers and UNIX systems worked, I desired to manage my own web server. Thus, I made the decision to switch to an unmanaged Digital Ocean VPS, which came with a few other benefits as well.

The Benefits of Making the Switch

  1. The most notable benefit of switching away from any service to a Digital Ocean droplet is the decrease in price. I was able to nearly triple all of the specs of my webserver and still pay less per month. I could use this extra money to donate to some of the Open Source projects that I decided to use or to pay more skilled individuals to build a new website for me. Whatever I decide to use it for, paying less for more is something that should never be passed up.

  2. Speed is also a great benefit. Since Digital Ocean’s droplets all run on SSDs, there is a noticeable speed difference when working with my Digital Ocean VPS compared to my HostGator VPS. I’m not trying to say that my HostGator VPS was unbearably slow in any fashion, but rather that the Digital Ocean VPS is just so much faster.

  3. An unmanaged VPS is great. Instead of relying on other people to set up and maintain my system, with a Digital Ocean VPS, I am in control of everything. I get to choose what operating system I use, what software I install, and how frequently things get updated. It’s a nice shift from having people manage the entire webserver. Of course, there are some drawbacks to this. For instance, when a customer’s product spontaneously decides to malfunction, it is very difficult to work on a solution without knowing anything about the webserver itself. In this particular example, it is definitely useful to have people who know the server better than you; however, the goal is to learn through these experiences and ultimately know as much as the server-managers know.

  4. The cool factor is, well, cool. When you tell people that your VPS is from Digital Ocean, there is some sort of coolness to it. People seem to respect those who have an unmanaged VPS, especially from Digital Ocean. It’s a plus, I guess.

The Process of Switching

The great part about setting up a Digital Ocean VPS is that they charge by the hour; thus, there is no reason to fear wasting your money on a server only to spend hours setting it up (and ultimately failing). On top of this, they allow their servers to be dynamically resized and upgraded, which allows you to simply purchase the cheap server, mess around with it, and then upgrade when everything is ready.

With that being said, the first step of switching to a Digital Ocean VPS would be to set one up. I did exactly this. I chose the smallest VPS available, perused the options of software, and chose what I was comfortable with: CentOS 6.5. Despite running Arch Linux on my own machine and having previous experience with Ubuntu, I knew that CentOS had a lot of options available to it as far as webserver management software was concerned. The first step was to pick that webserver management software.

The Web Panel

Like I said previously, my old VPS had WHM installed to manage all of the server software, packages, and cPanel accounts. Then, cPanel was used to manage individual websites. This software setup was perfect for my needs. In addition to myself, I also had several clients who paid for hosting on my VPS. These clients, of course, needed to manage their own website, so I needed some sort of web panel with strong admin/client separation.

The first web panel I ended up trying was Sentora, which is an open-source fork of the once-legendary zPanel. Upon installing Sentora and having a look, however, it seemed to lack several of the features I wanted in a web panel. Namely, Sentora lacked the ability to assign domain names to user accounts (or maybe I simply did not see the option). This is something that was completely necessary for me so that I could keep track of all the domains that were hosted on my server. Although it had a beautiful design and seemed to be functional, I had to leave it.

What I ended up going with instead was CentOS Web Panel, which was very similar in design and functionality to WHM and cPanel’s Paper Lantern theme. The installation of the software was very straightforward, and the steps were explicitly outlined on the CentOS Web Panel website. After installing the software, it’s very easy to set up the nameservers, FTP accounts, and other domain names on the server. With CWP, everything just seemed to work, which was a plus.

The web panel also featured a decent admin/client separation in its design. When one logged in with an admin account, the options available were very different from the options available when logged in with a client account. This is exactly what I was looking for. There are a few drawbacks to the web panel, however. Since the software is free and in its early stages, it is not the most efficient web panel available and feels slow or clunky when typing in information or editing fields. On top of this, the team is a little slow to update the associated versions of PHP, MySQL, phpMyAdmin, etc. However, you can’t ask for too much from free software. That is, after all, the best part of CWP. It’s free.

CentOS Web Panel Admin Page

The NameServer Switch

Since I knew nothing about how nameservers work, this was the hardest part of the setup for me. I essentially had to switch my nameservers from the preconfigured HostGator setup to my own custom setup. Luckily, CWP has native support for this. The first thing that I had to do was go to my domain name registrar and configure my root domain (http://brandonsoft.com) to be a nameserver. I then pointed both the NS1 and NS2 subdomains to the IP address of my new Digital Ocean VPS. Next, I configured the ns1 and ns2 A records in Digital Ocean’s DNS management panel. Finally, I changed CWP’s nameserver options to reflect the IP address of my Digital Ocean server. After waiting about a day for all of these changes to take effect, everything worked perfectly. Since all of the domain names that were hosted on my HostGator account were already pointing to my nameservers (whose IP address had just changed), they automatically began to point to the new location.
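In zone-file terms, the records involved look roughly like this (a sketch; the IP address below is a documentation placeholder, not the droplet's real address):

```
; At the registrar: ns1/ns2 registered as nameservers (glue records)
brandonsoft.com.       IN  NS  ns1.brandonsoft.com.
brandonsoft.com.       IN  NS  ns2.brandonsoft.com.

; In Digital Ocean's DNS management panel: A records for the subdomains
ns1.brandonsoft.com.   IN  A   203.0.113.10   ; placeholder droplet IP
ns2.brandonsoft.com.   IN  A   203.0.113.10   ; placeholder droplet IP
```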

The Data Migration

Now that all of the software was installed and all of the domains pointed to the right location, the process of migrating data could begin. This was a daunting task, of course, since I had about 10 websites’ worth of data whose files and databases needed to stay intact, preferably without any noticeable change. Luckily, the process was actually rather easy to do. Of course, I made many mistakes along the way, so here are the steps that I took that ended up working properly:

  1. In the individual cPanels of all of the accounts hosted on my HostGator VPS (which I could still access through the old IP address of the server, since the domain names were no longer attached), I performed a full backup and sent the .tar.gz files to the /home directory of the new Digital Ocean VPS using cPanel’s built-in “backup-over-SCP” feature with the root account. This took a while, but I got a handy-dandy email notification when each account finished backing up.

  2. After all of the backups were completed, the .tar.gz files were named something like “-.tar.gz”. In order to quickly migrate the accounts into my new CWP setup, I had to change the naming scheme to “-cpmove.tar.gz”. Once this was complete, I could log in to the admin account of CWP and create new accounts using the “migrate from cPanel” functionality, which automatically filled in the necessary information.
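The renaming can be done in bulk; here is a sketch of the idea (the directory and the `-cpmove.tar.gz` suffix scheme are taken from the description above, and the helper name is my own):

```python
import os

def rename_backups(directory):
    """Rename cPanel full-backup archives to the *-cpmove.tar.gz scheme
    that CWP's "migrate from cPanel" feature expects (scheme assumed
    from the description above)."""
    for name in os.listdir(directory):
        # Skip files that are already renamed or aren't backup archives.
        if name.endswith(".tar.gz") and not name.endswith("-cpmove.tar.gz"):
            base = name[: -len(".tar.gz")]
            os.rename(
                os.path.join(directory, name),
                os.path.join(directory, base + "-cpmove.tar.gz"),
            )

# Example: rename_backups("/home")
```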

  3. Unfortunately, at the time of my migration, CWP’s “migrate from cPanel” functionality was not working properly, so the migration didn’t actually transfer any data. However, it did seem to set up a MySQL user and a home directory under the same username that was provided in the .tar.gz backup file. Now that a home directory existed, the rest of the backup process could be completed. I started by using chown (still logged in with the root account) to change the ownership of the .tar.gz backup files and then used cp to copy them into the users’ home directories. After this was complete, I logged into CWP and made sure that SSH shell access was enabled for all of the users on my VPS so that the next step could be completed.

  4. The next step was to actually process the backup file. This could be done by SSHing into the server using the newly created user’s SSH credentials. Once logged in, I extracted the backup file using tar -xvf, which put everything into a nice backup folder. Inside of the backup folder, the user’s entire mail, MySQL data, and file hierarchy from the old HostGator VPS was maintained. In order to get their site up and running as fast as possible, I copied backup/homedir/public_html into their ~/public_html directory. This put their website online and made it viewable to others.
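In script form, that extract-and-copy step might look something like this (a sketch; the backup/homedir/public_html layout follows the description above, and the function name is invented):

```python
import os
import shutil
import tarfile

def restore_public_html(backup_archive, home_dir):
    """Extract a cPanel full backup and copy its public_html into place.

    Assumes the backup/homedir/public_html layout described above; the
    rest of the extracted folder (mail, MySQL dumps) is left alone for
    the later restore steps."""
    with tarfile.open(backup_archive) as tar:
        tar.extractall(home_dir)  # creates home_dir/backup/...
    src = os.path.join(home_dir, "backup", "homedir", "public_html")
    dst = os.path.join(home_dir, "public_html")
    shutil.copytree(src, dst, dirs_exist_ok=True)
```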

  5. Finally, all I had to do was restore the user’s MySQL data. There were two steps that had to be completed. The first involved restoring all of their tables, which could be done by navigating to the ~/backup/mysql folder and using MySQL to process all the *.create files and then all the *.sql files. This created all the tables and then populated them with their data. The command calls looked like this:

mysql -u USERNAME -p < TABLENAME.create
mysql -u USERNAME -p TABLENAME < TABLENAME.sql

Once this was completed for all of the files in the ~/backup/mysql directory, I needed to restore the MySQL user accounts and passwords that existed on the old HostGator VPS so that it would seem like nothing had changed. This could be done by finding the file ~/backup/mysql.sql and running its queries as the root MySQL user, as such:

mysql -u root -p < mysql.sql

After this is completed, everything should be set up.
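The per-database loop above can be sketched as follows. This only builds the command strings (so you can review them before running them in a shell as the right user); the MySQL username is a parameter, and the function name is my own:

```python
import os

def restore_commands(mysql_dir, user):
    """List the mysql invocations described above for every dump in a
    backup/mysql directory: each .create file first, then the matching
    .sql file, and finally the user/password restore as root."""
    commands = []
    names = sorted(
        f[: -len(".create")]
        for f in os.listdir(mysql_dir)
        if f.endswith(".create")
    )
    for name in names:
        commands.append(f"mysql -u {user} -p < {name}.create")
        commands.append(f"mysql -u {user} -p {name} < {name}.sql")
    commands.append("mysql -u root -p < mysql.sql")
    return commands
```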

The Final Moments

After all of these steps were completed, the entire server was successfully migrated. There were a few issues I encountered; mainly, the site would stop working after scheduled backups ran, since the backup files ate up disk space and prevented the MySQL server from running properly. After backups were disabled, however, this issue was fixed.

After running the server for around 2 weeks without any issues, I decided to close my HostGator account. So far, I have encountered no issues and everyone on the VPS seems to be happy with CWP (Some people even said it was less confusing than cPanel!). The next steps I am going to take are customizing CWP to be branded for my company, and as I complete this task, I will post a guide.

Hopefully this post has helped you migrate your server or inspired you to do so!

Building an Offline Program with Online Syncing

As I become more and more experienced with software development, I am very quickly discovering the limitations of the web. Having a desktop application that constantly needs to retrieve large amounts of data from the Internet is very, very slow. Sometimes, when network speeds are spotty, a simple interaction can take up to 30 seconds, which is definitely unacceptable for the end-user.

In order to solve this problem, I decided to restrict myself from looking at online solutions. I wanted to create a solution on my own.

First, I wanted to create a robust system for managing a database for an offline program. I looked into several different services, including Microsoft’s SQL stack and a local MongoDB server. However, I found that most of these solutions were either too difficult to learn (I was used to setting up a MySQL server using cPanel and phpMyAdmin) or that they over-complicated the issue. Then, I came across SQLite, which happened to be exactly what I was looking for: a lightweight, speedy, local alternative to MySQL. After spending about an hour reading the documentation, I realized that it was also super easy to set up. I began working on a library that would make working with a local SQLite database super easy.

Thus, SQLiteDatabaseManager was born. The project was meant to house functionality that would be used in all of my projects, and here are some of the key features that were packed into it:

  • The ability to quickly create a local database
  • The ability to quickly insert, delete, and update records knowing the table names, the fields, and the new values.
  • The ability to have a small table-verification system that allowed the user to define table schemas and check the existing SQLite tables against these schemas for integrity verification.
  • The ability to retry operations when they fail (a failure could be the result of a spontaneously locked file, and a retry could fix everything)

SQLiteDatabaseManager worked well because all you needed to do was create a new instance of it, and everything worked out of the box. A new SQLite database was set up and managed by the library. After rigorous testing, this worked perfectly. Of course, the library still needs major work and many more features, but it does what it was intended to do at the moment. My offline data management system worked.
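To illustrate the two features that matter most here, quick setup and retry-on-failure, a stripped-down version of the idea looks like this. This is not the real SQLiteDatabaseManager API; the class and method names are invented for the sketch:

```python
import sqlite3
import time

class LocalDatabase:
    """Minimal sketch of an 'everything works out of the box' SQLite
    wrapper. Names here are illustrative, not SQLiteDatabaseManager's."""

    def __init__(self, path, retries=3, delay=0.1):
        self.conn = sqlite3.connect(path)
        self.retries = retries
        self.delay = delay

    def execute(self, sql, params=()):
        # Retry transient failures such as a spontaneously locked file.
        for attempt in range(self.retries):
            try:
                with self.conn:  # commits on success, rolls back on error
                    return self.conn.execute(sql, params)
            except sqlite3.OperationalError:
                if attempt == self.retries - 1:
                    raise
                time.sleep(self.delay)

db = LocalDatabase(":memory:")
db.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
db.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
```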

Now the problem was not just managing data offline, but getting it to sync to an online counterpart when a network was available (in a perfect world, all operations would be done on the local SQLite database (fast) and then replicated on the duplicate remote MySQL database (slower) on a background thread). The idea was simple.
The SQLite database and the MySQL database would communicate with each other via a simple REST API. The databases would be identical, and the ID numbers created by the SQLite database would simply be moved to the MySQL database.


Creating the REST API for the remote MySQL portion of this project was rather simple. Relying on the methods of the LAMP stack, I used a set of PHP files that would receive GET, POST and DELETE requests so that data could be managed and manipulated fairly easily. Using PHP’s json_encode() function also proved very useful for creating readable responses.
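On the client side, the corresponding call might be assembled like this (a sketch; the endpoint URL, payload shape, and function name are all placeholders, not the actual API):

```python
import json
import urllib.request

def build_sync_request(row, endpoint="https://example.com/api/records.php"):
    """Prepare a POST request that replicates one local row to the remote
    MySQL database. The PHP side would decode this payload, mirroring
    its json_encode() responses. Endpoint and fields are hypothetical."""
    payload = json.dumps(row).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_sync_request({"id": 7, "body": "hello"})
# urllib.request.urlopen(req) would send it when a network is available.
```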

Thus, with the creation of a working SQLite local database and a working and accessible remote MySQL database, I went ahead with the first iteration of implementing this idea. So far, I have run into a few problems.


An easily-solvable problem arose when thinking about how to handle many, many clients. Since there was only one remote MySQL database and many instances of the local SQLite database, it was a pain to track whether a local client’s changes actually had to be applied or whether the remote version was newer. This problem was solved by creating a dateAltered field in both the remote and local databases that would contain the last date of modification of the data it pertained to. Thus, the data with the latest dateAltered value was put permanently into the remote MySQL database.
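The comparison itself boils down to a few lines; here is a sketch using the dateAltered field from above (the timestamp format and record shape are my assumptions):

```python
from datetime import datetime

def pick_winner(local_row, remote_row):
    """Last-write-wins on the dateAltered field: the version modified
    most recently is the one the remote MySQL database keeps."""
    fmt = "%Y-%m-%d %H:%M:%S"  # assumed timestamp format
    local_time = datetime.strptime(local_row["dateAltered"], fmt)
    remote_time = datetime.strptime(remote_row["dateAltered"], fmt)
    return local_row if local_time > remote_time else remote_row

local = {"id": 3, "body": "edited offline", "dateAltered": "2015-03-02 10:00:00"}
remote = {"id": 3, "body": "older copy", "dateAltered": "2015-03-01 09:30:00"}
```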

But what about when there are many, many clients creating information? Surely, some of the clients’ SQLite databases will assign ID numbers that coincide with other clients’ SQLite databases, so when information is uploaded to the remote server, different pieces of information may have the same ID numbers. And since all data is modifiable, how is this detectable? This is a problem that I have not yet worked out a solution to, since the first implementation of this design has a one-client, one-server relationship. At the moment, however, I am hypothesizing that each data entry should contain a unique hash of its original state. Database operations could then be performed by referencing the ID of the element, while conflict resolution would use this hash, so we would always be able to tell whether a certain piece of information had been modified from its original form. When uploading to the remote server, the server would handle conflict resolution and return the ID number of the object, and the local clients would then reassign the object’s ID number to match the remote MySQL database.
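That hypothesized scheme might look like the following sketch (the hash choice and record shape are assumptions on my part, since the design isn't settled):

```python
import hashlib
import json

def original_state_hash(row):
    """Stable hash of a row's original state. Computed once at creation
    time and stored with the record, it still identifies the record
    after its ID is reassigned to match the remote MySQL database."""
    canonical = json.dumps(row, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Two clients create different records that collide on local ID 1:
client_a = {"id": 1, "body": "from client A"}
client_b = {"id": 1, "body": "from client B"}
```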


I will continue to work on this project in the coming weeks, and I plan on making some detailed blog posts about it, so stay tuned! Eventually, I hope to have this process wrapped up in an easy-to-install and easy-to-implement library. We will see how that turns out.