Highly Available OpenStack


OpenStack is a suite of software designed to offer scale-out cloud environments, deployed in data centers around the world. Managing the installation of software in a remote location is different from (and sometimes more challenging than) installing software locally, and so tools and techniques have been developed to ease this task. How to deal with hardware and software failure must also be considered when designing operational environments. Identifying single points of failure (SPOF) and adding ways of making them resilient ensures that our OpenStack environment remains available when something goes wrong.

This section introduces some methods and software to help manage OpenStack in production data centers.

Using Galera for MySQL clustering

OpenStack can be backed by a number of database backends, and one of the most common options is MySQL. There are a number of ways to make MySQL more resilient and highly available. The following approach uses a load balancer to front a multi-master MySQL cluster, with Galera taking care of the synchronous replication such a setup requires. Galera is a synchronous multi-master cluster for MySQL InnoDB databases. It allows data to be written synchronously to all nodes, with any node able to take that write, in a fully active/active topology. It features automatic node management, meaning that failed nodes are removed from the cluster and new nodes are automatically registered. The advantage of this is that we add resilience in the event of a database node failure, as each node stores a copy of the data.
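The load balancer itself is not covered by this recipe, but to illustrate the topology, the following is a minimal sketch of an HAProxy configuration fronting the three MySQL nodes. The node addresses, the galera-mysql listen name, and the haproxy_check user are assumptions for illustration; the check user would also need creating in MySQL (for example, CREATE USER 'haproxy_check'@'%';).

# Sketch of /etc/haproxy/haproxy.cfg fronting the Galera nodes (addresses assumed)
listen galera-mysql 0.0.0.0:3306
    mode tcp
    balance leastconn
    option mysql-check user haproxy_check
    server galera1 172.16.0.101:3306 check
    server galera2 172.16.0.102:3306 check
    server galera3 172.16.0.103:3306 check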

Getting ready

We’ll be using a free online configuration tool from SeveralNines.com to configure a three-node, multi-master MySQL setup with Galera, monitored from a fourth node running the free cluster management interface, cmon. This implies that we have four servers available, running Ubuntu (other platforms are supported), each with enough memory and disk space for our environment and at least two CPUs. The diagram below shows the nodes we will be installing and configuring:


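Before heading over to the configurator, it is worth confirming that each server meets these requirements. A quick check, run on each node (a minimal sketch assuming stock Ubuntu tooling), might look like this:

lsb_release -ds                     # confirm the Ubuntu release
grep -c ^processor /proc/cpuinfo    # number of CPUs; we want at least 2
free -m | awk '/^Mem:/ {print $2 " MB RAM"}'
df -h /var/lib                      # disk space where the MySQL datadir will live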
How to achieve it…

To cluster MySQL using Galera, carry out the following steps:

Configuring MySQL and Galera

  • We first use a Web browser from our desktop and head over to http://www.severalnines.com/galera-configurator/, where we will input some information about our environment to produce the script required to install our Galera-based MySQL cluster.


This is a third-party service asking for details pertinent to our environment. Do not include real passwords for the environment that this will be deployed to. The process produces scripts and configuration files that should be edited with the real settings before execution.

  • The first screen asks for the Vendor. Select Codership (based on MySQL 5.5) as shown in the following screenshot:
  • The next screen asks for general settings, as follows:
Infrastructure: none/on-premise
Operating System: Ubuntu 12.04
Platform: Linux 64-bit (x86_64)
Number of Galera Servers: 3+1
MySQL PortNumber: 3306
Galera PortNumber: 4567
Galera SST PortNumber: 4444
SSH PortNumber: 22
OS User: galera
MySQL Server Password (root user): openstack
CMON DB password (cmon user): cmon
Firewall (iptables): Disabled


We have specified the OS User as galera. This is a Linux user account existing on our 4 nodes that we will be using for this installation.

  • Next, we’ll configure server properties (configure as appropriate):
System Memory (MySQL Servers): (at least 512MB)
WAN: no
Skip DNS Resolve: yes
Database Size: < 8GB
Galera Cache (gcache): 128MB
MySQL Usage: Medium write/high read
Number of cores: 2
Max connections per server: 200
Innodb_buffer_pool_size: 48MB
Innodb_file_per_table: checked
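These values feed into the MySQL configuration that the deployment script generates for us. Purely as an illustration of where they end up (the cluster name below is a placeholder, and the real file is written by the deployment), they map onto my.cnf entries of this kind:

# Illustrative my.cnf fragment; the deployment script writes the real file
[mysqld]
max_connections         = 200
innodb_buffer_pool_size = 48M
innodb_file_per_table   = 1
wsrep_provider          = /usr/lib/galera/libgalera_smm.so
wsrep_provider_options  = "gcache.size=128M"
wsrep_cluster_name      = "my_galera_cluster"    # placeholder name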
  • On the next screen, we’ll configure the nodes and addresses. The first section asks for details about our ClusterControl Server running cmon, as follows:
ClusterControl Server:
System Memory: (at least 512Mb)
Datadir: <same as for mysql>
Installdir: /usr/local
Web server (Apache) settings
Apache User: www-data
WWWROOT: /var/www/
  • Further down the screen, we can now configure the Galera nodes. The following table lists the IP address, data directory, and installation directory for the servers.
Config Directory: /etc/mysql


Server-id   IP-address                        Datadir            Installdir
1           (first Galera node's address)     /var/lib/mysql/    /usr/local/
2           (second Galera node's address)    same as above      same as above
3           (third Galera node's address)     same as above      same as above
  • The final step asks for the e-mail address to which the configuration and deployment script should be sent. Once a valid e-mail address has been entered, click on the Generate Deployment Scripts button. You will be taken to a summary screen where you will be presented with an API key. You will need this key to complete the installation.


The API key is also e-mailed to you and presented again at the end of the installation script that gets run on the nodes.

Node preparation

  • Each node is configured so that the user who runs the setup routine (the OS User configured in step 2 of the previous section) can SSH to each node, including itself, and run commands through sudo without being asked for a password. To do this, we first create the user’s SSH key as follows:
ssh-keygen -t rsa -N ""
  • We now need to copy this to each of our nodes, including the node we’re on now (so that it can SSH to itself):
for a in {100..103}; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub galera@172.16.0.${a}
done


The user specified here, galera, has to match the OS User option specified when we configured Galera using the SeveralNines configurator.

  • This will ask for the password of the galera user on each of the nodes, but after this, we should not be prompted again. To test, simply run the following, which should execute without intervention:
for a in {100..103}; do
  ssh galera@172.16.0.${a} ls
done
  • We now need to ensure that the Galera user can execute commands using sudo without being asked for a password. To do this, we execute the following on all nodes:
echo "galera ALL=(ALL:ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/galera
# Then fix the permissions to prevent future warnings
sudo chmod 0440 /etc/sudoers.d/galera
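

As an optional end-to-end check, the following loop confirms that both passwordless SSH and passwordless sudo now work on every node:

for a in {100..103}; do
  ssh galera@172.16.0.${a} "sudo -n whoami" > /dev/null \
    && echo "172.16.0.${a}: passwordless SSH and sudo OK"
done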


  • From the e-mail that has been sent, download the attached gzipped tarball and copy it over to the first of our nodes, the one we specified in the configuration as the ClusterControl Server. The tarball is small and contains the pre-prepared shell scripts and our configuration options to allow for a semi-automated installation of Galera and MySQL.
  • Log in to the ClusterControl Server as the OS User specified in step 2 of the Configuring MySQL and Galera section (for example, galera):
ssh galera@<ClusterControl Server address>
  • Unpack the tarball copied over and change to the install directory in the unpacked archive, as follows:
tar zxf s9s-galera-codership-2.4.0.tar.gz
cd s9s-galera-codership-2.4.0/mysql/scripts/install
  • Once in this directory, we simply execute the deploy.sh script:
bash ./deploy.sh 2>&1 | tee cc.log
  • A question will be asked regarding the ability to SSH to each node. Answer Y to this. Installation will then continue, configuring MySQL with Galera as well as cmon to monitor the environment.
  • After a period of time, once the installation has completed, we point our Web browser at the ClusterControl server to finalize the setup. When prompted to Register your cluster with ClusterControl, change the server listening address as shown in the following screenshot:


  • Once done, click on the Login Now button and we will be presented with a login screen. To log in as the admin user, enter the e-mail address you used to retrieve the script from SeveralNines.com and the password admin. See the screenshot below:


  • Once you have logged in, we will be asked to register the cluster with ClusterControl using the API key that was presented at the end of the installation script, as well as the address of our ClusterControl server API. The following screenshot shows an example of this:


  • Once complete, click on Register and this will take us to the ClusterControl administration screen.

Configuration of database cluster for OpenStack

  • Once the cluster has been set up, we can now create the databases, users, and privileges required for our OpenStack environment, as we would for any other OpenStack installation. To do this, we click on the Manage link as shown in the following screenshot:
  • From this screen, choose the Manage menu, and select the Schemas and Users menu option as shown in the following screenshot:


  • Under Schema and Users, we can create and drop databases, create and delete users, and grant and revoke privileges. For OpenStack, we need to create five databases and five users, with appropriate privileges, for our OpenStack installation. These are nova, keystone, glance, quantum (used by Neutron), and cinder. First, we create the nova database. To do this, click on the Create Database button as shown in the following screenshot:


  • Once entered, click on the Create Database button and a popup will acknowledge the request, as follows:


  • Repeat the process to create the keystone, glance, quantum, and cinder databases.
  • Once done, we can now create our users. To do this, we click on the Privileges button as shown below:


  • To create a user called nova, which we will use to connect to our nova database, click on the Create User button and fill in the details as shown in the following screenshot:


  • Repeat this step for each of the required usernames for our other databases, which we name after their databases for ease of administration: glance, keystone, quantum, and cinder. We will end up with the users shown below:


  • With the users created, we assign their privileges to the corresponding databases. For example, the nova user is allowed to access our database cluster from any host (using the MySQL wildcard character %). The following screenshot shows this:


  • Repeating this step for the other users gives us the privileges required to use our new cluster for OpenStack, as shown in the following screenshot (a scripted equivalent of these steps is sketched after this list):
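

ClusterControl is simply issuing ordinary SQL on our behalf, so the same databases, users, and privileges can also be created from a shell against any one node. The following is a sketch only; the node address is a placeholder, and the openstack passwords should be replaced with your own:

# Create the five OpenStack databases and matching users on any one node;
# Galera synchronously replicates the changes to the other nodes.
for db in nova keystone glance quantum cinder; do
  mysql -h 172.16.0.101 -u root -popenstack -e "
    CREATE DATABASE IF NOT EXISTS ${db};
    GRANT ALL PRIVILEGES ON ${db}.* TO '${db}'@'%' IDENTIFIED BY 'openstack';"
done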


How it works…

Galera replication is a synchronous multi-master plugin for InnoDB. It has the advantage that any client can write to any node in the cluster without suffering from write conflicts or data replication lag. There are some caveats to a Galera-backed MySQL cluster that must be considered, though. To maintain synchronicity, any database write is only as fast as the slowest node, so as the number of nodes in a Galera cluster increases, the time to write to the database can increase. Finally, given that each node maintains a copy of the database on its local storage, it isn’t as space-efficient as a cluster based on shared storage.

Setting up a highly available MySQL cluster with Galera for data replication is easily achieved using the freely available online configuration tool from SeveralNines. By following the process, we end up with four nodes, of which three are assigned to running MySQL with Galera and the fourth allows us to manage the cluster.

With the automated installation complete, we can create our databases and users and assign privileges using the ClusterControl interface, without needing to think about any replication issues. In fact, we can create these by connecting to any one of the three MySQL servers, treating it as we would a standalone server, and the data will automatically sync to the other nodes.
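
This is straightforward to verify by hand. As an illustrative check (the node addresses are placeholders, and the root password is the one set in the configurator), create a database through one node and read it back through another; the wsrep_cluster_size status variable should also report all three nodes:

mysql -h 172.16.0.101 -u root -popenstack -e "CREATE DATABASE sync_test;"
mysql -h 172.16.0.102 -u root -popenstack -e "SHOW DATABASES LIKE 'sync_test';"
mysql -h 172.16.0.102 -u root -popenstack -e "DROP DATABASE sync_test;"
mysql -h 172.16.0.101 -u root -popenstack -e "SHOW STATUS LIKE 'wsrep_cluster_size';"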

For OpenStack, we create five databases (nova, glance, quantum, cinder, and keystone) and assign appropriate users and privileges to these databases. We can then put this information into the appropriate configuration files for OpenStack.
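
For example (a sketch using Grizzly-era option names; the address 172.16.0.250 stands in for a load balancer VIP or any one of the nodes, and the passwords are those created above):

# /etc/nova/nova.conf
sql_connection = mysql://nova:openstack@172.16.0.250/nova

# /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf
sql_connection = mysql://glance:openstack@172.16.0.250/glance

# /etc/keystone/keystone.conf, under [sql]
connection = mysql://keystone:openstack@172.16.0.250/keystone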


