This recipe uses two nodes running both Glance and Keystone, controlled by Pacemaker with Corosync in active/passive mode, which tolerates the failure of a single node. In a production environment, it is recommended that a cluster consist of at least three nodes to ensure resiliency and consistency in the case of a single node failure.
We must first create two servers configured appropriately for use with OpenStack. As these two servers will just be running Keystone and Glance, only a single network interface and address on the network that our OpenStack services communicate on will be required. This interface can be bonded for added resilience.
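As a sketch, a bonded interface on an Ubuntu host of this era can be described in /etc/network/interfaces as follows (the interface names, bond mode, and address here are illustrative assumptions, and the ifenslave package must be installed):

```
# /etc/network/interfaces -- illustrative bonding fragment
auto bond0
iface bond0 inet static
    address 172.16.0.111
    netmask 255.255.255.0
    bond-mode active-backup
    bond-miimon 100
    bond-slaves eth0 eth1
```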
How to achieve it…
To increase the resilience of OpenStack services, carry out the following steps:
If Keystone is not installed on the first host, install and configure it as if configuring a single host (refer to KEYSTONE OPENSTACK IDENTITY SERVICE). Ensure the keystone database is backed by a database backend such as MySQL.
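For reference, the single-host install sketched below installs Keystone and points it at MySQL; the database name, user, and password are illustrative assumptions:

```shell
# Install Keystone and the MySQL Python bindings on the first host
sudo apt-get update
sudo apt-get install keystone python-mysqldb

# In /etc/keystone/keystone.conf, set the SQL connection string, e.g.:
#   connection = mysql://keystone:openstack@172.16.0.111/keystone
# (credentials and address above are illustrative)

# Restart the upstart-managed service to pick up the new configuration
sudo restart keystone
```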
With Keystone running on this host, we should be able to query Keystone using both its own IP address (172.16.0.111) and the floating IP (172.16.0.253) from a client that has access to the OpenStack environment.
# Assigned IP
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_TENANT_NAME=cookbook
export OS_AUTH_URL=https://172.16.0.111:5000/v2.0/
keystone user-list

# FloatingIP (Keepalived and HA Proxy)
export OS_AUTH_URL=https://172.16.0.253:5000/v2.0/
keystone user-list
- On the second node, controller2, install and configure Keystone so that it points at the same database backend.
sudo apt-get update
sudo apt-get install keystone python-mysqldb
- Copy the /etc/keystone/keystone.conf file from the first host into place on the second node, and then restart the Keystone service. No further work is required, as the database was already populated with endpoints and users when the install completed on the first node. Restarting the service connects it to the database.
sudo stop keystone
sudo start keystone
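One way to copy the configuration across is with scp; the hostname and temporary path below are illustrative:

```shell
# Run on controller2: fetch keystone.conf from the first node and
# move it into place (hostname and paths are illustrative)
scp controller1:/etc/keystone/keystone.conf /tmp/keystone.conf
sudo mv /tmp/keystone.conf /etc/keystone/keystone.conf
```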
- We can now interrogate the second Keystone service on its own IP address.
# Second Node
export OS_AUTH_URL=https://172.16.0.112:5000/v2.0/
keystone user-list
Glance across 2 nodes with FloatingIP
To run Glance across multiple nodes, it must be configured with a shared storage backend (such as Swift) and be backed by a database backend (such as MySQL). On the first host, install and configure Glance, as described in STARTING OPENSTACK IMAGE SERVICE.
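For reference, the relevant settings in /etc/glance/glance-api.conf look broadly like the following; the addresses, credentials, and database details are illustrative assumptions, and option names varied between releases:

```
# /etc/glance/glance-api.conf -- illustrative fragment
sql_connection = mysql://glance:openstack@172.16.0.111/glance
default_store = swift
swift_store_auth_address = https://172.16.0.111:5000/v2.0/
swift_store_user = service:glance
swift_store_key = glance
swift_store_create_container_on_put = True
```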
- On the second node, simply install the required packages to run Glance, which is backed by MySQL and Swift:
sudo apt-get install glance python-swift
- Copy over the configuration files in /etc/glance to the second host, and start the glance-api and glance-registry services on both nodes, as follows:
sudo start glance-api
sudo start glance-registry
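Copying the configuration can again be done with scp; the hostname and paths are illustrative:

```shell
# Run on controller2: fetch the Glance configuration files from the
# first node and put them in place (hostname/paths are illustrative)
scp controller1:/etc/glance/*.conf /tmp/
sudo cp /tmp/glance-api.conf /tmp/glance-registry.conf /etc/glance/
```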
- We can now use either Glance server to view our images, as well as the FloatingIP address assigned to our first node:
# First node
glance -I admin -K openstack -T cookbook -N https://172.16.0.111:5000/v2.0 index

# Second node
glance -I admin -K openstack -T cookbook -N https://172.16.0.112:5000/v2.0 index

# FloatingIP
glance -I admin -K openstack -T cookbook -N https://172.16.0.253:5000/v2.0 index
Configuring Pacemaker for use with Glance and Keystone
- With Keystone and Glance running on both nodes, we can now configure Pacemaker to take control of these services, so that Keystone and Glance run on the appropriate node when the other node fails. To do this, we first disable the upstart jobs that control the Keystone and Glance services by creating upstart override files for them on both nodes. Create /etc/init/keystone.override, /etc/init/glance-api.override, and /etc/init/glance-registry.override, each containing just the keyword manual:
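One way to create the override files is with tee, run on both nodes:

```shell
# Mark the upstart jobs as manual so the services are no longer
# started automatically; Pacemaker will manage them instead
echo "manual" | sudo tee /etc/init/keystone.override
echo "manual" | sudo tee /etc/init/glance-api.override
echo "manual" | sudo tee /etc/init/glance-registry.override
```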
- We now grab the OCF (Open Cluster Framework) resource agents: shell scripts or pieces of code that are able to control our Keystone and Glance services. We must do this on both nodes.
wget https://raw.github.com/madkiss/keystone/ha/tools/ocf/keystone
wget https://raw.github.com/madkiss/glance/ha/tools/ocf/glance-api
wget https://raw.github.com/madkiss/glance/ha/tools/ocf/glance-registry
sudo mkdir -p /usr/lib/ocf/resource.d/openstack
sudo cp keystone glance-api glance-registry /usr/lib/ocf/resource.d/openstack
sudo chmod 755 /usr/lib/ocf/resource.d/openstack/*
- We should now be able to list the new OCF agents available to us; the following command returns the three agents:
sudo crm ra list ocf openstack
- We can now configure Pacemaker to use these agents to control our Keystone service. To do this, we run the following set of commands:
sudo crm cib new conf-keystone
sudo crm configure property stonith-enabled=false
sudo crm configure property no-quorum-policy=ignore
sudo crm configure primitive p_keystone ocf:openstack:keystone \
    params config="/etc/keystone/keystone.conf" \
    os_auth_url="https://localhost:5000/v2.0/" \
    os_password="openstack" \
    os_tenant_name="cookbook" \
    os_username="admin" \
    user="keystone" \
    client_binary="/usr/bin/keystone" \
    op monitor interval="5s" timeout="5s"
sudo crm cib use live
sudo crm cib commit conf-keystone
- We then issue a similar set of commands for the two Glance services, as follows:
sudo crm cib new conf-glance-api
sudo crm configure property stonith-enabled=false
sudo crm configure property no-quorum-policy=ignore
sudo crm configure primitive p_glance_api ocf:openstack:glance-api \
    params config="/etc/glance/glance-api.conf" \
    os_auth_url="https://localhost:5000/v2.0/" \
    os_password="openstack" \
    os_tenant_name="cookbook" \
    os_username="admin" \
    user="glance" \
    client_binary="/usr/bin/glance" \
    op monitor interval="5s" timeout="5s"
sudo crm cib use live
sudo crm cib commit conf-glance-api

sudo crm cib new conf-glance-registry
sudo crm configure property stonith-enabled=false
sudo crm configure property no-quorum-policy=ignore
sudo crm configure primitive p_glance_registry ocf:openstack:glance-registry \
    params config="/etc/glance/glance-registry.conf" \
    os_auth_url="https://localhost:5000/v2.0/" \
    os_password="openstack" \
    os_tenant_name="cookbook" \
    os_username="admin" \
    user="glance" \
    op monitor interval="5s" timeout="5s"
sudo crm cib use live
sudo crm cib commit conf-glance-registry
- We can verify that Pacemaker is configured correctly by issuing the following command:
sudo crm_mon -1
This brings back something similar to the following:
Last updated: Sat Aug 24 22:55:25 2013
Last change: Tue Aug 24 21:06:10 2013 via crmd on controller1
Stack: openais
Current DC: controller1 - partition with quorum
Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
2 Nodes configured, 2 expected votes
4 Resources configured.
============
Online: [ controller1 controller2 ]
FloatingIP (ocf::heartbeat:IPaddr2): Started controller1
p_keystone (ocf::openstack:keystone): Started controller1
p_glance_api (ocf::openstack:glance-api): Started controller1
p_glance_registry (ocf::openstack:glance-registry): Started controller1
If you receive an error similar to the following:
Failed actions: p_keystone_monitor_0 (node=ubuntu2, call=3, rc=5, status=complete): not installed
Issue the following to clear the status and then view the status again:
sudo crm_resource -P
sudo crm_mon -1
We are now able to configure our clients to use the FloatingIP address of 172.16.0.253 for both the Glance and Keystone services. With this in place, we can bring down the interface on our first node and still have the Keystone and Glance services available on this FloatingIP address.
We now have Keystone and Glance running on two separate nodes, where a node can fail and services will still be available.
How it works…
Configuration of Pacemaker is predominantly done with the crm tool. This allows us to script the configuration; invoked on its own, it opens an interactive shell that we can use to edit, add, and remove services, as well as query the status of the cluster. It is a very powerful tool to control an equally powerful cluster manager.
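A brief sketch of common crm invocations, assuming the crm shell is installed alongside Pacemaker as in this recipe:

```shell
sudo crm configure show            # print the current cluster configuration
sudo crm status                    # query cluster and resource status
sudo crm resource stop p_keystone  # stop a Pacemaker-managed resource
sudo crm resource start p_keystone # start it again
sudo crm                           # no arguments: open the interactive shell
```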
With both nodes running Keystone and Glance, and with Pacemaker and Corosync running and accessible on the floating IP provided by Corosync, we configure Pacemaker to control the running of the Keystone and Glance services by using OCF agents written specifically for this purpose. The OCF agents take a number of parameters that will be familiar to us: they require the same username, password, tenant, and endpoint URL that we would use in a client to access that service.
A monitor interval and timeout of 5 seconds were set for each agent, which also bounds how quickly the services follow the floating IP address when it moves to another host.
After this configuration, we have a Keystone and Glance active/passive configuration as shown in the diagram below: