Configuring HA Proxy for MySQL Galera load balancing In Highly Available OpenStack

With our MySQL Galera cluster configured, each of the nodes is able to take traffic, and the writes are seamlessly replicated to other nodes in the cluster. We could use any of the MySQL node addresses and place them in our configuration files, but if that node fails, we would not have a database to attach to and our OpenStack environment would fail. A possible solution to this is to front the MySQL cluster using load balancing. Given that any of the nodes are able to take reads and writes, with data consistency, load balancing is a great solution.

The steps in the following section configure a highly available 2-node HA Proxy setup that we can use as a MySQL endpoint to place in our OpenStack configuration files. In production, if load balancing is desired, it is recommended that dedicated HA load balancers are used.

Getting started

Configure two servers, both running Ubuntu 12.04, on the same network as our OpenStack environment and MySQL Galera cluster. In the following steps, each of the two nodes has its own IP address, and a floating IP address (which will be set up using keepalived) is shared between them. This floating address is the one we use when we configure database connections in our OpenStack configuration files.

How to do it…

As we are setting up identical servers to act as a pair, we will configure a single server first and then repeat the process on the second server, substituting the second server's own IP address where appropriate.

To configure HA Proxy for MySQL Galera load balancing, carry out the following steps for each of our HA Proxy pair:

Installation of HA Proxy for MySQL

With the MySQL Galera cluster already providing a replicated, multi-master database, the next task is to install HA Proxy on each of our two load balancer nodes and configure it to distribute connections across the three MySQL nodes.

  • We first install HA Proxy using the usual apt-get process, as follows:
sudo apt-get update
sudo apt-get -y install haproxy
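
If we want to confirm that the package installed correctly, HA Proxy can print its version:

haproxy -v
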
  • With HA Proxy installed, we’ll simply configure this first proxy server appropriately for our MySQL Galera cluster. To do this, we edit the /etc/haproxy/haproxy.cfg file with the following content:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    #log loghost local0 info
    maxconn 4096
    #chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 4096
    timeout connect 50000ms
    timeout client 50000ms
    timeout server 50000ms

listen mysql 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option tcpka
    option mysql-check user haproxy
    server mysql1 <galera-node-1-ip>:3306 check weight 1  # replace the placeholders with your three Galera node addresses
    server mysql2 <galera-node-2-ip>:3306 check weight 1
    server mysql3 <galera-node-3-ip>:3306 check weight 1
  • Save and exit the file and start up HA Proxy, as follows:
sudo sed -i 's/^ENABLED.*/ENABLED=1/' /etc/default/haproxy
sudo service haproxy start
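
Before moving on, it is worth checking that the configuration parses cleanly and that HA Proxy is listening on the MySQL port; the following commands are one way to do that:

# validate the configuration file
sudo haproxy -c -f /etc/haproxy/haproxy.cfg

# confirm something is listening on port 3306
sudo netstat -ntlp | grep 3306
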
  • Before we can use this HA Proxy server to access our three MySQL nodes, we must create the user specified in the haproxy.cfg file, which is used for a very simple check to see whether MySQL is up. To do this, we add a user to our cluster that is simply able to connect to MySQL. Using the ClusterControl interface, or using the mysql client attached to any of the MySQL instances in our cluster, create the user haproxy, with no password set, that is allowed access from the IP address of the HA Proxy server; a sketch of this step follows.
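
A minimal sketch of that step using the mysql client is shown here; the angle-bracket placeholders stand in for your own Galera node and HA Proxy addresses:

# run against any one Galera node; the new user is replicated to the others
mysql -u root -p -h <galera-node-ip> \
  -e "CREATE USER 'haproxy'@'<haproxy-node-ip>';"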


At this point, we can use a MySQL client, point it at the HA Proxy address, and MySQL will respond as expected; a quick test is shown below.
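
For example, using any existing database user (the user, database, and addresses below are placeholders):

# the connection will appear to MySQL to come from the HA Proxy node,
# so the user must be allowed to connect from that address
mysql -h <haproxy-node-ip> -u <db-user> -p -e "SELECT VERSION();"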


Repeat steps 1 to 4 on our second node, substituting that node's own IP address where it appears.

  • Having a single HA Proxy server sitting in front of our multi-master MySQL cluster makes the HA Proxy server our single point of failure. To overcome this, we repeat the previous steps for our second HA Proxy server, and then we use a simple solution provided by keepalived for VRRP (Virtual Router Redundancy Protocol) management. To do this, we need to install keepalived on both of our HA Proxy servers. As before, we will configure one server and then repeat the steps on our second server. We do this as follows:
sudo apt-get update
sudo apt-get -y install keepalived
  • To allow running software to bind to an address that does not physically exist on our server, we add an option to /etc/sysctl.conf. Add the following line to that file:
net.ipv4.ip_nonlocal_bind = 1
  • To pick up the change, issue the following command:
sudo sysctl -p
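
To confirm the new value is active, we can read it back:

sysctl net.ipv4.ip_nonlocal_bind
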
  • We can now configure keepalived. To do this, we create a /etc/keepalived/keepalived.conf file with the following contents:
vrrp_script chk_haproxy {
    script "killall -0 haproxy"    # verify the pid exists or not
    interval 2                     # check every 2 seconds
    weight 2                       # add 2 points if OK
}

vrrp_instance VI_1 {
    interface eth1                 # interface to monitor
    state MASTER
    virtual_router_id 51           # Assign one ID for this route
    priority 101                   # 101 on master, 100 on backup
    virtual_ipaddress {
        <floating-ip>              # the virtual (floating) IP
    }
    track_script {
        chk_haproxy
    }
}
  • We can now start up keepalived on this server, by issuing the following command:
sudo service keepalived start
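
If this node is acting as the master, the floating IP address should now be attached to the monitored interface (eth1 in the configuration above):

# the floating IP address should be listed as an additional address on eth1
ip addr show eth1
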
  • With keepalived now running on our first HA Proxy server, which we have designated as the master node, we repeat the previous steps for our second HA Proxy server with only two changes to the keepalived.conf file (state BACKUP and priority 100), giving the complete file on our second host the following content:
vrrp_script chk_haproxy {
    script "killall -0 haproxy"    # verify the pid exists or not
    interval 2                     # check every 2 seconds
    weight 2                       # add 2 points if OK
}

vrrp_instance VI_1 {
    interface eth1                 # interface to monitor
    state BACKUP
    virtual_router_id 51           # Assign one ID for this route
    priority 100                   # 101 on master, 100 on backup
    virtual_ipaddress {
        <floating-ip>              # the virtual (floating) IP
    }
    track_script {
        chk_haproxy
    }
}
  • Start up keepalived on this second node, and the two servers will act in co-ordination with each other. If you power off the first HA Proxy server, the second picks up the floating IP address after about 2 seconds, and new connections can be made to our MySQL cluster without disruption. A simple way to test this failover is shown below.
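
One simple way to test the failover, assuming the interface and placeholder address from the keepalived configuration above, is to stop HA Proxy on the master; the chk_haproxy script then fails and the backup takes over the floating address:

# on the first (master) HA Proxy node
sudo service haproxy stop

# on the second (backup) HA Proxy node, the floating IP should now appear
ip addr show eth1

# restore the master when finished
sudo service haproxy start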

OpenStack configuration using the floating IP address

With both HA Proxy servers running the same HA Proxy configuration, and with both running keepalived, we can use the configured virtual_ipaddress (our floating IP address) as the address that we connect to and use in our configuration files. In OpenStack, we would change the following settings to use the floating IP address, and then restart the affected services as shown after the list:

  • Nova
sql_connection = mysql://nova:openstack@<floating-ip>/nova
  • Keystone
connection = mysql://keystone:openstack@<floating-ip>/keystone
  • Glance
sql_connection = mysql://glance:openstack@<floating-ip>/glance
  • Neutron
/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[DATABASE]
sql_connection = mysql://quantum:openstack@<floating-ip>/quantum
  • Cinder
sql_connection = mysql://cinder:openstack@<floating-ip>/cinder
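
After updating these settings, each service must be restarted so that it reconnects through the floating IP address. The exact service names depend on the release and packaging in use; on the Ubuntu packages used here they look something like the following (adjust to match the services you actually run):

sudo service keystone restart
sudo service glance-api restart
sudo service glance-registry restart
sudo service nova-api restart
sudo service quantum-server restart
sudo service cinder-api restart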

How it works…

HA Proxy is a very popular and capable proxy and load balancer, which makes it ideal for fronting a MySQL cluster to add load-balancing capabilities, and it is simple to set up for this purpose.

The first requirement is listening on the appropriate port, which for MySQL is 3306. The listen line in the configuration file here also specifies that it will listen on all addresses by using 0.0.0.0 as the address, but you can bind HA Proxy to a particular address instead, to add an extra layer of control in our environment.

To proxy MySQL, the mode must be set to tcp, and we set the tcpka option to enable TCP keepalives, ensuring that long-lived connections are not interrupted and closed after a client has opened a connection to our MySQL servers.

The load balance method used is roundrobin, which is perfectly suitable for a multi-master cluster where any node can perform reads and writes.

We add in a basic check to ensure that our MySQL servers are marked offline appropriately. Using the inbuilt mysql-check option (which requires a user to be set up in MySQL so that HA Proxy can log in to each node and immediately quit), a MySQL server that fails is removed from the pool and traffic passes to a MySQL server that is alive. Note that this check does not verify that a particular table exists, though that can be achieved with more complex configurations using a check script running on each MySQL server and calling it as part of our checks.

The final configuration step for HA Proxy is listing the nodes and the addresses that they listen on, which forms the load balance pool of servers.

Having a single HA Proxy acting as a load balancer to a highly available multi-master cluster is not recommended, as the load balancer then becomes our single point of failure. To overcome this, we can simply install and configure keepalived, which gives us the ability to share a floating IP address between our HA Proxy servers. This allows us to use this floating IP address as the address to use for our OpenStack services.




