Highly Available OpenStack
OpenStack is a suite of software designed to offer scale-out cloud environments, deployed in data centers around the world. Managing the installation of software in a remote location is different from, and sometimes more challenging than, installing software locally, and so tools and techniques have been developed to ease this task. How to deal with hardware and software failure must also be considered when designing operational environments. Identifying single points of failure (SPOF) and adding ways of making them resilient ensures that our OpenStack environment remains available when something goes wrong.
This set of recipes introduces some methods and software to help manage OpenStack in production data centers.
Using Galera for MySQL clustering
OpenStack can be backed by a number of database backends, and one of the most common options is MySQL. There are a number of ways to make MySQL more resilient and highly available. The following approach places a load balancer in front of a multi-master MySQL setup, with Galera taking care of the synchronous replication such a setup requires. Galera is a synchronous multi-master cluster for MySQL InnoDB databases. Galera clusters allow synchronous data writes across all nodes, with any node able to take a write in a fully active/active topology. Galera also features automatic node management, meaning that failed nodes are removed from the cluster and new nodes are automatically registered. The advantage of this is added resilience in the event of a database node failure, as each node stores a copy of the data.
We’ll be using a free online configuration tool from SeveralNines.com to configure a 3-node, multi-master MySQL setup with Galera, monitored using the free cluster management interface, cmon, running on a fourth node. This means we need four servers running Ubuntu (other platforms are supported), each with enough memory and disk space for our environment and at least two CPUs. The diagram below shows the nodes we will be installing and configuring:
How to achieve it…
To cluster MySQL using Galera, carry out the following steps:
Configuring MySQL and Galera
1. We first use a Web browser from our desktop and head over to https://www.severalnines.com/galera-configurator/, where we will input some information about our environment to produce the script required to install our Galera-based MySQL cluster.
This is a third-party service asking for details pertinent to our environment. Do not include passwords for the environment that this will be deployed to. The process downloads scripts and configuration files that should be edited to suit before execution with real settings.
2. The first screen asks for the Vendor. Select Codership (based on MySQL 5.5) as shown in the following screenshot:
3. The next screen asks for general settings, as follows:
We have specified the OS User as galera. This is a Linux user account that exists on all four nodes and that we will use for this installation.
4. Next, we’ll configure server properties (configure as appropriate):
5. On the next screen, we’ll configure the nodes and addresses. The first section asks for details about our ClusterControl Server running Cmon, as follows:
6. Further down the screen, we can now configure the Galera nodes. The following table lists the IP address, data directory, and installation directory for the servers.
7. The final step asks which e-mail address the configuration and deployment script should be sent to. Once a valid e-mail address has been entered, press the Generate Deployment Scripts button. You will be taken to a summary screen presenting an API key. You will require this key to complete the installation.
The API key is also e-mailed to you and presented again at the end of the installation script that gets run on the nodes.
Node preparation
1. Each node must be configured so that the user running the setup routine (the OS User configured in the SeveralNines configurator) can SSH to each node, including itself, and run commands through sudo without being asked for a password. To do this, we first create the user’s SSH key as follows:
ssh-keygen -t rsa -N ""
2. We now need to copy this to each of our nodes, including the node we’re on now (so that it can SSH to itself):
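One way to copy the key is with `ssh-copy-id`. A sketch follows; the node addresses are assumptions matching the IPs used earlier in this recipe, so substitute the addresses you entered in the configurator:

```shell
# Push the galera user's public key to every node, including this one.
NODES="172.16.0.100 172.16.0.101 172.16.0.102 172.16.0.103"
for node in $NODES; do
  # Continue with the remaining nodes even if one copy fails
  ssh-copy-id "galera@${node}" || true
done
```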
The user specified here, galera, has to match the OS User option specified when we configured Galera using the SeveralNines configurator.
3. This will ask for the password of the galera user on each of the nodes, but after this we should not be prompted again. To test, run the following, which should execute without intervention:
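A sketch of such a check, again assuming the node addresses used earlier. `BatchMode=yes` makes ssh fail rather than prompt, so any remaining password prompt shows up as a failure instead of a hang:

```shell
# Run a harmless remote command on every node; each should complete
# without any password prompt if key-based login is working.
for node in 172.16.0.100 172.16.0.101 172.16.0.102 172.16.0.103; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 "galera@${node}" ls \
    && echo "${node}: passwordless SSH OK" \
    || echo "${node}: needs attention"
done
```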
4. We now need to ensure that the galera user can execute commands using sudo without being asked for a password. To do this, we execute the following on all nodes:
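The rule itself is a one-line sudoers entry. A sketch, added with `visudo` on each node (the user name must match the OS User chosen in the configurator):

```
galera ALL=(ALL:ALL) NOPASSWD:ALL
```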
Installation
1. From the e-mail that was sent, download the attached gzipped tarball and copy it over to the node we specified in the configuration as the ClusterControl Server (for example, 172.16.0.100). The tarball is small and contains the pre-prepared shell scripts and our configuration options, allowing a semi-automated installation of Galera and MySQL.
2. Log in to the ClusterControl Server as the OS User specified in the SeveralNines configurator (for example, galera).
3. Unpack the tarball we copied over and change to the install directory within the unpacked archive.
4. Once in this directory, we simply execute the deploy.sh script, which performs the installation.
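A sketch of these two steps follows. The tarball name and the directory layout inside it are assumptions; use the actual file attached to your e-mail:

```shell
# Unpack the deployment tarball and run the installer, keeping a log.
TARBALL=s9s-galera-codership-2.4.0.tar.gz          # assumed name
INSTALL_DIR="${TARBALL%.tar.gz}/mysql/scripts/install"  # assumed layout
if [ -f "$TARBALL" ]; then
  tar zxvf "$TARBALL"
  cd "$INSTALL_DIR"
  bash ./deploy.sh 2>&1 | tee cc.log               # log the installation run
else
  echo "Copy ${TARBALL} to this host first."
fi
```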
5. A question will be asked regarding the ability to shell to each node. Answer Y to this. Installation will then continue, which will configure MySQL with Galera as well as cmon, to monitor the environment.
6. After a period of time, once installation has completed, we point our Web browser to the ClusterControl server to finalize the setup at the address specified, for example, https://172.16.0.100/cmonapi/, and when prompted to Register your cluster with ClusterControl, change the server listening address to be https://172.16.0.100/clustercontrol as shown in the following screenshot:
7. Once done, click on the Login Now button and we will be presented with a login screen. To log in as the admin user, enter the e-mail address you used to retrieve the script from SeveralNines.com and the password admin. See the screenshot below:
8. Once you have logged in, we will be asked to register the cluster with ClusterControl by using the API key that was presented at the end of the installation script as well as the address of our ClusterControl server API, for example, https://172.16.0.100/cmonapi. The following screenshot shows an example of this:
9. Once complete, click on Register and this will take us to the ClusterControl administration screen.
Configuration of database cluster for OpenStack
1. Once the cluster has been set up, we can now create the databases, users, and privileges required for our OpenStack environment, as we would do for any other OpenStack installation. To do this, we click on the Manage link as shown in the following screenshot:
2. From this screen, choose the Manage menu, and select the Schemas and Users menu option as shown in the following screenshot:
3. Under Schema and Users, we can create and drop databases, create and delete users, and grant and revoke privileges. For OpenStack, we need to create five users and five databases, with appropriate privileges, that relate to our OpenStack installation. These are nova, keystone, glance, quantum (used by Neutron), and cinder. First, we create the nova database. To do this, click on the Create Database button as shown in the following screenshot:
4. Once entered, click on the Create Database button and a popup will acknowledge the request as shown as follows:
5. Repeat the process to create the keystone, glance, quantum, and cinder databases.
6. Once done, we can now create our users. To do this, we click on the Privileges button as shown below:
7. To create a user called nova, that we will use to connect to our nova database, click on the Create User button and fill in the details as shown in the following screenshot:
8. Repeat this step for each of the required usernames for our other databases, which we will name the same as the databases for ease of administration: glance, keystone, quantum, and cinder. We will end up with the users shown below:
9. With the users created, we assign their privileges to the corresponding databases. The nova user, for example, is allowed to access our database cluster from any host (using the MySQL wildcard character %). The following screenshot shows this:
10. Repeating this step for the other users gives us the privileges required to use our new cluster for OpenStack, as shown in the following screenshot:
How it works…
Galera replication is a synchronous multi-master plugin for InnoDB. It has the advantage that any client can write to any node in the cluster without write conflicts or replication lag. There are some caveats to a Galera-backed MySQL cluster that must be considered, though. To maintain synchronicity, any database write is only as fast as the slowest node, so as the number of nodes in a Galera cluster increases, the time to write to the database can increase. Finally, given that each node maintains a copy of the database on its local storage, it isn’t as space-efficient as a cluster based on shared storage.
Setting up a highly available MySQL cluster with Galera for data replication is easily achieved using the freely available online configuration tool from SeveralNines. By following the process, we end up with four nodes, of which three are assigned to running MySQL with Galera and the fourth allows us to manage the cluster.
With the automatic installation routine complete, we can create our databases and users and assign privileges using the ClusterControl interface, without needing to think about any replication issues. In fact, we can make these changes by attaching to any one of the three MySQL servers, which we would otherwise treat independently, and the data will automatically sync to the other nodes.
For OpenStack, we create five databases (nova, glance, quantum, cinder, and keystone) and assign appropriate users and privileges to these databases. We can then use this information to put into the appropriate configuration files for OpenStack.
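The UI steps in this recipe amount to standard SQL. A sketch of the statements for one database follows (the password and the wildcard host are placeholders to adapt); run against any one of the three nodes, Galera syncs the result to the other two:

```shell
# Print the SQL; pipe it into the mysql client on one node to apply it,
# then repeat for keystone, glance, quantum, and cinder.
cat <<'SQL'
CREATE DATABASE nova;
CREATE USER 'nova'@'%' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%';
FLUSH PRIVILEGES;
SQL
```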