The Compute node handles connectivity and security groups for instances. With the Network node configured, there are some services that need to run on our Compute nodes. The services that run on our Compute node for Neutron are nova-compute, quantum-plugin-openvswitch-agent, and openvswitch-switch.
Ensure that you are logged on to the compute node in our environment. If you created this using Vagrant, you can issue the following command:
vagrant ssh compute
Steps to Configure Compute Nodes
To configure our OpenStack Compute node, carry out the following steps:
- First update the packages installed on the node:
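On an Ubuntu-based node (as assumed throughout this environment), this step is typically:

```shell
sudo apt-get update
sudo apt-get -y upgrade
```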
- We then install the kernel headers package as the installation will compile some new kernel modules:
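For example, installing the headers that match the running kernel:

```shell
# Install kernel headers matching the currently running kernel
sudo apt-get -y install linux-headers-`uname -r`
```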
- We now need to install some supporting applications and utilities:
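A sketch of this step, assuming the usual networking helper packages for this kind of setup:

```shell
# VLAN and bridging utilities used by Open vSwitch and Neutron
sudo apt-get -y install vlan bridge-utils
```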
- We are now ready to install Open vSwitch which also runs on our Compute node:
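On Ubuntu, the Open vSwitch userspace and DKMS datapath module packages of this era were installed as follows (package names are an assumption for a 12.04-based node):

```shell
# openvswitch-datapath-dkms compiles the kernel module,
# which is why the kernel headers were installed first
sudo apt-get -y install openvswitch-switch openvswitch-datapath-dkms
```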
- After this has installed and configured some kernel modules we can simply start our OVS service:
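For example:

```shell
sudo service openvswitch-switch start
```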
- We can now proceed to install the Neutron plugin component that run on this node:
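Assuming the Ubuntu package name used for the Grizzly-era Quantum OVS agent:

```shell
sudo apt-get -y install quantum-plugin-openvswitch-agent
```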
- With the installation of the required packages complete, we can now configure our environment. To do this, we first configure our OVS switch service. We need to configure a bridge that we will call br-int. This is the integration bridge that glues our VM networks together within our SDN environment.
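Creating the integration bridge is a single ovs-vsctl call:

```shell
# Create the br-int integration bridge that instance VIFs attach to
sudo ovs-vsctl add-br br-int
```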
- We need to ensure that we have IP forwarding on within our Compute node:
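For example, enabling forwarding immediately and persisting it across reboots:

```shell
# Enable IPv4 forwarding now
sudo sysctl -w net.ipv4.ip_forward=1
# Uncomment the corresponding line in /etc/sysctl.conf so it survives a reboot
sudo sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
```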
- We can now edit the relevant configuration files to get our Compute node working with the Neutron services. We first edit the /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini file.
The first is to configure the database credentials to point to our MySQL installation:
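A representative [DATABASE] entry, pointing at the controller address used elsewhere in this chapter (the database name, user, and password here are placeholders to adjust for your environment):

```ini
[DATABASE]
sql_connection = mysql://quantum:openstack@172.16.0.200/quantum
```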
- Further down the file, we will also see a section called [OVS]. We need to edit this section to include the following values:
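A sketch of the [OVS] section for a GRE tunnel setup; the local_ip shown is an assumed address for this Compute node, and the tunnel range is an example:

```ini
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
# local_ip must be the IP address of this Compute node
local_ip = 172.16.0.202
enable_tunneling = True
```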
In a similar way to configuring other OpenStack services, the Neutron services have a paste ini file. Edit /etc/quantum/api-paste.ini to configure Keystone authentication. We add the auth and admin lines to the [filter:authtoken] section:
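The added lines look like the following, using the controller address from this chapter; the service user name and password are placeholders that must match what was created in Keystone:

```ini
[filter:authtoken]
auth_host = 172.16.0.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = quantum
```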
- We must ensure that our Neutron server configuration is pointing at the right RabbitMQ server in our environment. Edit /etc/quantum/quantum.conf, locate the rabbit_host setting, and edit it to suit our environment:
rabbit_host = 172.16.0.200
- We need to edit the familiar [keystone_authtoken] located at the bottom of the file to match our Keystone environment:
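For example (again, the service user and password are placeholders matching your Keystone setup):

```ini
[keystone_authtoken]
auth_host = 172.16.0.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = quantum
```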
- We can now configure the /etc/nova/nova.conf file to tell the OpenStack Compute components to utilize Neutron. Add the following lines under [DEFAULT] to our /etc/nova/nova.conf configuration:
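A representative snippet, using the Grizzly-era quantum option names and the controller address from this chapter (the password and driver choices are assumptions to adapt to your environment):

```ini
[DEFAULT]
network_api_class = nova.network.quantumv2.api.API
quantum_url = http://172.16.0.200:9696
quantum_auth_strategy = keystone
quantum_admin_tenant_name = service
quantum_admin_username = quantum
quantum_admin_password = quantum
quantum_admin_auth_url = http://172.16.0.200:35357/v2.0
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
```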
- Restart our nova services running on this node to pick up the changes in the /etc/nova/nova.conf file:
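On a Compute node this is typically just the nova-compute service; restart any other nova-* services present on the node in the same way:

```shell
sudo service nova-compute restart
```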
How the Compute Node Configuration Works
Configuring our OpenStack Compute node to use Neutron is straightforward. We follow a similar set of initial steps that were conducted on our Network node, which involves installing a number of packages as follows:
- Operating system:
- Generic networking components:
- Open vSwitch:
Once installed, we also configure the Open vSwitch service running on our Compute node and configure the same integration bridge, br-int.
We utilize the same /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini file with only one difference: the local_ip setting is the IP address of the Compute node that we are configuring.
Lastly, we configure /etc/nova/nova.conf, the all-important configuration file for our OpenStack Compute services.
The options set in /etc/nova/nova.conf break down as follows:

- network_api_class tells our OpenStack Compute service to use Neutron networking.
- quantum_url is the address of our Neutron server API (running on our Controller node).
- quantum_auth_strategy tells Neutron to utilize the OpenStack identity and authentication service, Keystone.
- quantum_admin_tenant_name is the name of the service tenant in Keystone.
- quantum_admin_username is the username that Neutron uses to authenticate with in Keystone.
- quantum_admin_password is the password that Neutron uses to authenticate with in Keystone.
- quantum_admin_auth_url is the address of our Keystone service.
- libvirt_vif_driver tells Libvirt to use the OVS Bridge driver.
- linuxnet_interface_driver is the driver used to create Ethernet devices on our Linux hosts.
- firewall_driver is the driver to use when managing the firewalls.