Configuring MySQL for High Availability

Updated by Bryan Friedman on Oct 26, 2021
Article Code: kb/971

Description

Simple scalability and easy elasticity are key benefits of any cloud environment, and along with features like self-service load balancer configuration and autoscaling, Lumen Cloud makes it easy to deploy applications with high-availability architectures. That works well for most web tiers, but the database tier, particularly a traditional relational database like MySQL, often becomes the performance bottleneck or the single point of failure because it is harder to scale out. A vertical autoscale can help handle peak usage, but it doesn't cut it when you're looking for high availability from your relational database.

MySQL offers a number of high-availability configurations, but perhaps the two most commonly implemented are Replication and Clustering. In this article, we will explore both types of MySQL high-availability options and walk through the steps for deploying them on the Lumen Cloud.

Replication

MySQL Replication allows data to be copied asynchronously from a "master" database server to one or more "slave" database servers automatically as data on the master is updated. This means you can read data from any slave database server, but should only ever write to the master. There are many potential solutions that can take advantage of this type of configuration. Perhaps you want to use the slaves as database backups, distribute some data to more geographically desirable locations for certain user queries, or even use them as the connection points for read-only analytics applications so queries don't affect the performance of the master (be careful, though: too many slaves can degrade the master's performance).
In a failover scenario, if your master goes down, you can (manually) promote any slave to master and have the system back up in less time than it might take to rebuild from backups. And, of course, you may simply want to use slaves to scale out your environment and spread the read load across multiple servers, much as you would on the web tier (remember, though: the app can read from any slave, but it must always write to the master). Whatever scenario you plan to use replication for, the steps below will walk you through getting MySQL master and slave instances up and running on the Lumen Cloud.
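
To make that failover concrete, here is a minimal sketch of a manual promotion using the replication user created in the steps below. The IP address placeholder is hypothetical, you may also need to create the replication user on the promoted server, and you would still have to repoint your application at the new master:

    -- On the slave being promoted (binary logging is already enabled in this setup):
    STOP SLAVE;
    RESET MASTER;

    -- On each remaining slave, repoint replication at the promoted server and resume:
    STOP SLAVE;
    CHANGE MASTER TO MASTER_HOST='<ip_of_promoted_slave>', MASTER_USER='slave_user', MASTER_PASSWORD='password';
    START SLAVE;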

Steps to Configure MySQL Replication on Lumen Cloud

These steps describe the process to bring up one master and one slave, but you can repeat the slave steps to deploy additional slaves if you wish. While there are many possible MySQL replication options (including the ability to replicate all databases, a specific database, or specific tables), the following outlines a good baseline for configuring replication of a single database. Consult the MySQL documentation for more details.

In this example, we will use Ubuntu 12 servers, but with the exception of using a different package manager/installation in the first few steps, this MySQL configuration will work on any system that supports MySQL replication.

  1. Create Server (Master). You can use the Ubuntu 12 64-bit template.
  2. Install MySQL. This will become the "master" server.
    You can install the MySQL package using Ubuntu's package manager. You could also create a blueprint script to do this, but since we are only installing on one server for now, we will log on to the server and run the commands manually.
    apt-get update
    apt-get -y install mysql-server mysql-client

    During the installation, you will be prompted to enter a password for the MySQL admin user, which you should not leave blank.

  3. Clone Server (Slave). Since we have a basic vanilla install of MySQL now, we will clone the master server before configuring it. This clone will become the "slave."
  4. Configure Master.
    Log into the master server with SSH and perform the following steps:
    1. Create database.
      Log in to the mysql client and use the CREATE DATABASE command to create a database (a one-line example appears after these steps).
      After creating the database, you can quit the mysql client for now.
    2. Update config file.
      You must edit the MySQL config file located at /etc/mysql/my.cnf.
      vi /etc/mysql/my.cnf

      First, you need to find the line that says bind-address = 127.0.0.1 and replace it with the IP of this server, which you can get from the control portal or by using the ifconfig command. Here's what ours will look like in this example:

      bind-address = 10.126.32.14

      Next, find the line that has the server-id, which will probably be commented out and set to a value of 1. Uncomment this line (remove the # character at the beginning of the line) and set the number to anything you want as long as it is unique among all masters and slaves. Here, we will leave it set to 1:

      server-id = 1

      Now we'll uncomment one more line relating to the log_bin file where the details of the replication are kept on the master:

      log_bin = /var/log/mysql/mysql-bin.log

      Finally, we set the binlog_do_db parameter to the database that we want to replicate, in this case the database we created in the first step above:

      binlog_do_db = my_database

      Then, all we have to do is restart mysql from the command prompt with service mysql restart.

    3. Grant privileges and get status.
      Back in the mysql command line client, use the following commands to grant replication privileges to a user that the slave servers will use to read data from the master. You can use any password you want here.
      GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY 'password';
      FLUSH PRIVILEGES;


      Now we need to get some information about the master database with the following command:
      SHOW MASTER STATUS;
      This will show us a table that includes the binary log file name and the position from which the slave database will start replicating. Take note of this information, as we will need it later to configure the slave (a sample of this output appears after these steps).
    4. Export database.
      Back at the shell prompt, we'll use the mysqldump command to export the database. Note: Since we just created this database, we don't actually have any data in it yet, but if we did, this step would ensure the slave starts with the same data the master already has.
      mysqldump -u root -p --opt my_database > my_database.sql
  5. Configure Slave.
    Now we'll log into the slave server with SSH and perform the following steps:
    1. Create database.
      Same as before, log in to the mysql client and use the CREATE DATABASE command to create a database called my_database.
    2. Import database.
      First, we need to retrieve the export file from the master server. We can use scp to do this:
      scp root@10.126.32.14:~/my_database.sql .

      Then we will import the database:

      mysql -u root -p my_database < my_database.sql
    3. Update config file.
      Next we update the configuration file in a similar way as we did on the master. We don't need to worry about the IP address in this case, but we need to set the server-id (to something other than 1), uncomment the log_bin line, and uncomment and set the binlog_do_db value:
      server-id = 2
      log_bin = /var/log/mysql/mysql-bin.log
      binlog_do_db = my_database
      We also need to add one more line that isn't in the file by default:
      relay-log               = /var/log/mysql/mysql-relay-bin.log
      A MySQL restart is required to apply the changes.
    4. Start slave.
      Now we need the information from the master server that we noted earlier. From the mysql client on the slave, we'll enter the following command, based on the IP address of the master and the file and position values we recorded, and then start the slave:
      CHANGE MASTER TO MASTER_HOST='10.126.32.14',MASTER_USER='slave_user',MASTER_PASSWORD='password',MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=557;

      START SLAVE;
  6. Confirm data replication.
    You can test the setup by inserting data into a table on the master, then querying the slave to ensure the data is replicated correctly (see the sample test after these steps).
  7. (Optional) Repeat steps 3, 5, and 6 for each additional slave.
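
For the database creation in steps 4 and 5, the command from the mysql client is a single statement; my_database is simply the example name used throughout this article:

    -- From the mysql client (started with: mysql -u root -p):
    CREATE DATABASE my_database;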
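
The SHOW MASTER STATUS output from step 4 will look something like the following; the file name and position shown here match the example values used later in step 5, but yours will almost certainly differ:

    mysql> SHOW MASTER STATUS;
    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000001 |      557 | my_database  |                  |
    +------------------+----------+--------------+------------------+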
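
To verify replication as described in step 6, a quick test might look like this; the table name test_table and the row contents are hypothetical and used only for illustration:

    -- On the master, from the mysql client:
    USE my_database;
    CREATE TABLE test_table (id INT PRIMARY KEY, note VARCHAR(50));
    INSERT INTO test_table VALUES (1, 'hello from the master');

    -- On the slave, from the mysql client; the SELECT should return the row inserted
    -- above, and SHOW SLAVE STATUS should report both Slave_IO_Running and
    -- Slave_SQL_Running as Yes:
    USE my_database;
    SELECT * FROM test_table;
    SHOW SLAVE STATUS\G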

Cluster

MySQL Clustering provides an option designed to have no single point of failure. It uses a shared-nothing architecture, meaning it works well on inexpensive commodity hardware, with each node having its own memory and disk. MySQL Cluster combines the standard MySQL server with an in-memory clustered storage engine called NDB (Network DataBase), so it is sometimes referred to as an NDB cluster. A MySQL Cluster consists of multiple nodes of different types: MySQL servers (SQL nodes, for access to NDB data), data nodes (for storage of the data), and one or more management servers. All data in an NDB cluster is stored in the data nodes, and the tables are directly accessible from all SQL nodes in the cluster. This differs from the replication approach in that there is no master-slave relationship: all data nodes contain copies of the data, and you can read or write to the tables through any of the SQL nodes in the cluster. The following steps will walk you through getting MySQL Cluster installed, configured, and started on the Lumen Cloud.

Steps to Configure MySQL Cluster on Lumen Cloud

These steps describe the process to bring up one management (ndb_mgmd) node, one SQL (mysqld) node, and two data (ndbd) nodes. This is the absolute minimum number of servers required for a cluster to work. It is highly recommended to create multiple SQL nodes (following the same steps listed below for the SQL node) and additional data nodes as needed; it is even possible to set up multiple management nodes (though this is less critical, since a cluster can continue to function for some time without the ndb_mgmd node being up). There are plenty of configuration options, as well as some limitations to clustering, so consult the MySQL documentation for more information.

In this example, we will use RedHat Enterprise Linux 6 servers, but with the exception of using a different package manager/binary installation in the second step, this MySQL configuration will work on any system that supports MySQL clusters. (More information in the MySQL documentation.)

  1. Create Server (Management). Here we will use the RedHat Enterprise Linux 6 template.
  2. Install MySQL cluster binary. This will become the "management node" (ndb_mgmd).
    For the clustered version of MySQL, we unfortunately cannot use the regular packages from the public Linux repositories, so we'll have to download the binary from Oracle's site and then copy it onto our server for installation. The binaries are available at http://dev.mysql.com/downloads/cluster/, where for this example we will select and download the MySQL-Cluster-server-gpl-7.3.6-2.el6.x86_64.rpm package for Red Hat Linux. (For instructions on other operating systems, refer to the MySQL manual.) Once downloaded, we can use scp (or another file transfer tool) to copy the file onto the server.
    Once it's there, we'll log in to the server with SSH, change to the directory where it's located, and run the following commands to get it installed. The first command ensures we won't have any version conflicts, and the second command installs the package.
    yum -y remove mysql-libs
    rpm -Uhv MySQL-Cluster-server-gpl-7.3.6-2.el6.x86_64.rpm
  3. Clone Server (Data 1). Since we have MySQL installed already, we will clone the server before configuring it. This clone will become a data (ndbd) node.
  4. Clone Server (Data 2). This clone will become another data (ndbd) node.
  5. Clone Server (SQL). This clone will become the SQL (mysqld) node.
  6. Configure SQL Node and Data Nodes.
    Now that we have all of the servers we will use, each with the software installed, it's time to configure them. We'll start with the SQL and data nodes, which have identical configurations. Begin by editing the MySQL configuration file located at /etc/my.cnf (if it does not exist, create it). You can use vi (or any editor of your choice) and make sure it looks like the following:
    [mysqld]
    # Options for mysqld process:
    ndbcluster                      # run NDB storage engine

    [mysql_cluster]
    # Options for MySQL Cluster processes:
    ndb-connectstring=10.71.9.12    # location of management server
    This file should be the same for all data and SQL nodes, so be sure to edit and save the file as above on all three of these servers in our cluster.
    Now, on the data nodes only, we will need to create the directory /usr/local/mysql/data, which we can do with the following commands:
    mkdir /usr/local/mysql
    mkdir /usr/local/mysql/data

    Once this is complete, we're ready to move on to configuring the management node.

  7. Configure Management Node.
    We will start by creating the directory /var/lib/mysql-cluster and then creating a config.ini file in it:
    mkdir /var/lib/mysql-cluster
    cd /var/lib/mysql-cluster
    vi config.ini

    (Again, you can use any editor you want; here we are using vi.) The file should contain the following:

    [ndbd default]
    # Options affecting ndbd processes on all data nodes:
    NoOfReplicas=2    # Number of replicas
    DataMemory=80M    # How much memory to allocate for data storage
    IndexMemory=18M   # How much memory to allocate for index storage
                      # For DataMemory and IndexMemory, we have used the
                      # default values. Since the "world" database takes up
                      # only about 500KB, this should be more than enough for
                      # this example Cluster setup.

    [tcp default]
    # TCP/IP options:
    portnumber=2202   # This is the default; however, you can use any
                      # port that is free for all the hosts in the cluster
                      # Note: It is recommended that you do not specify the port
                      # number at all and simply allow the default value to be used
                      # instead

    [ndb_mgmd]
    # Management process options:
    hostname=10.71.9.12             # Hostname or IP address of MGM node
    datadir=/var/lib/mysql-cluster  # Directory for MGM node log files

    [ndbd]
    # Options for data node "A":
                                    # (one [ndbd] section per data node)
    hostname=10.71.9.13             # Hostname or IP address
    datadir=/usr/local/mysql/data   # Directory for this data node's data files

    [ndbd]
    # Options for data node "B":
    hostname=10.71.9.14             # Hostname or IP address
    datadir=/usr/local/mysql/data   # Directory for this data node's data files

    [mysqld]
    # SQL node options:
    hostname=10.71.9.15             # Hostname or IP address
                                    # (additional mysqld connections can be
                                    # specified for this node for various
                                    # purposes such as running ndb_restore)
    (These are the minimum settings required for the cluster to work. For more options, you may consult the MySQL documentation.) Once this file is saved, we're ready to start up the cluster.
  8. Start up the Cluster. Order matters here, so follow these steps closely.
    1. Start the management node.
      Log on to the management node server and issue the following command to start the management node process:
      ndb_mgmd -f /var/lib/mysql-cluster/config.ini
    2. Start the data nodes.
      Log on to each of the data nodes and issue the following command to start them up:
      ndbd
    3. Start the SQL node.
      Log on to the SQL node and issue the following command to start the mysqld process:
      service mysql start
    4. Confirm successful startup.
      From the management node server, you can issue the ndb_mgm command to start the management client. From this prompt, type SHOW to confirm all the nodes are up and running successfully. You can use this command at any time to investigate the health of the cluster.
  9. Create database and tables.
    Now that you have a cluster up and running, you can use it for high-availability databases. You can log on to the SQL node with any MySQL client and create a database. If you want the tables in this database to use clustering, you must specify ENGINE=NDBCLUSTER at the end of each CREATE TABLE statement (see the example after these steps). (Most SQL scripts exported from a MySQL database use the InnoDB engine, so you can often do a find-and-replace of InnoDB with NDBCLUSTER to convert your scripts for use with a cluster.)
  10. Confirm data access when one node goes down.
    You can test your cluster by reading and writing data to a table while one of the data nodes is down (see the sample test after these steps). Of course, with this configuration, the single SQL node is itself a single point of failure, so it is recommended to set up additional SQL nodes to allow for failover or load balancing.
  11. (Optional) Repeat the above steps as needed to set up additional nodes of any type.
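
To illustrate the ENGINE=NDBCLUSTER requirement from step 9, creating a clustered table from the SQL node might look like the following; the database and table names here are hypothetical:

    -- From the mysql client on the SQL node:
    CREATE DATABASE clusterdb;
    USE clusterdb;
    CREATE TABLE customers (
        id INT NOT NULL PRIMARY KEY,
        name VARCHAR(100)
    ) ENGINE=NDBCLUSTER;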
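
And for the failover test in step 10, after taking one of the data nodes down (for example, by stopping its ndbd process), you should still be able to read and write through the SQL node. Using the hypothetical table from the example above:

    -- From the mysql client on the SQL node, with one data node down:
    USE clusterdb;
    INSERT INTO customers VALUES (1, 'Test Customer');
    SELECT * FROM customers;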

Finally, if you want to get really fancy, you can even combine the above methods and use Cluster Replication to replicate a master cluster to slave clusters. Consult the MySQL manual for more on this.