Install and Configure Compute Node – OpenStack
Configure Compute Node
OpenStack Compute is used to host and manage cloud computing systems and is a major part of an Infrastructure-as-a-Service (IaaS) system. It interacts with OpenStack Identity for authentication, the OpenStack Image Service for disk and server images, and the OpenStack dashboard for the user and administrative interface.
The Compute node handles connectivity and security groups for instances.
The /etc/nova/nova.conf file is a very important file and is referred to many times in this book. This file informs each OpenStack Compute service how to run and what to connect to in order to present OpenStack to our end users. This file will be replicated amongst our nodes as our environment grows.
The same /etc/nova/nova.conf file is used in all of our OpenStack Compute service nodes. Create this once and copy it to all other nodes in our environment.
We will be configuring the /etc/nova/nova.conf file on both the Controller host and Compute host.
To log on to our OpenStack Controller and Compute hosts that were created using Vagrant, issue the following commands in separate shells:
vagrant ssh controller
vagrant ssh compute
How to achieve it…
To run our sandbox environment, we will configure OpenStack Compute so that it is accessible from our underlying host computer. We will have the API service (the service our client tools talk to) listen on our public interface, and configure the rest of the services to run on the correct ports. The complete nova.conf file as used by the sandbox environment is laid out next, and an explanation of each line (known as flags) follows. We will be configuring our environment to use the Nova Networking service, which predates Neutron but is still widely used:
1. First, we amend the /etc/nova/nova.conf file to have the following contents:
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True

# Libvirt and Virtualization
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
libvirt_type=qemu

# Messaging
rabbit_host=172.16.0.200

# EC2 API Flags
ec2_host=172.16.0.200
ec2_dmz_host=172.16.0.200
ec2_private_dns_show_ip=True

# Networking
public_interface=eth1
force_dhcp_release=True
auto_assign_floating_ip=True

# Images
image_service=nova.image.glance.GlanceImageService
glance_api_servers=172.16.0.200:9292

# Scheduler
scheduler_default_filters=AllHostsFilter

# Object Storage
iscsi_helper=tgtadm
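After saving the file, the Nova services need to be restarted so that they pick up the new configuration. A minimal sketch, assuming an Ubuntu host where the Nova services run as upstart jobs (the exact service names depend on which packages are installed on the node):

```shell
# Restart every installed nova-* upstart job so that each service
# re-reads /etc/nova/nova.conf. The list of jobs differs between
# the Controller and Compute hosts; this loop restarts whatever
# is present on the node it runs on.
for svc in /etc/init/nova-*.conf; do
    name=$(basename "$svc" .conf)
    sudo service "$name" restart
done
```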
2. Repeat Step 1 and create the file /etc/nova/nova.conf on the Compute host.
3. Back on the Controller host, we then issue a command that ensures that the database has the correct table schema installed and initial data populated with the right information:
sudo nova-manage db sync
There is no output when this command successfully runs.
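Since a successful sync is silent, we can confirm that the schema was actually created by querying the database directly. A quick check, assuming the nova MySQL user and database that were set up in the previous section (you will be prompted for that user's password):

```shell
# List the tables in the nova database; a populated schema confirms
# that "nova-manage db sync" completed successfully.
mysql -u nova -p -h 172.16.0.200 nova -e "SHOW TABLES;"
```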
4. We can then proceed to create the private network that will be used by our OpenStack Compute instances internally:
sudo nova-manage network create privateNet \
    --fixed_range_v4=10.0.10.0/24 \
    --network_size=64 \
    --bridge_interface=eth2
5. As we have the flag set to auto-assign a floating IP address when we launch an instance, we set a public network range that will be used by our OpenStack Compute instances:
sudo nova-manage floating create --ip_range=172.16.10.0/24
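Both nova-manage commands are silent on success, so it is worth listing the networks we have just defined. A sketch of the verification, assuming steps 4 and 5 completed without error:

```shell
# Show the fixed (private) network created in step 4.
sudo nova-manage network list

# Show the floating (public) address range created in step 5.
sudo nova-manage floating list
```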
How it works…
The /etc/nova/nova.conf file is an important file in our OpenStack Compute environment and the same file is used on all Compute and Controller nodes. We create this once and then we ensure this is present on all of our nodes. The following are the flags that are present in our /etc/nova/nova.conf configuration file:
dhcpbridge_flagfile=: It is the location of the configuration (flag) file for the dhcpbridge service.
dhcpbridge=: It is the location of the dhcpbridge service.
logdir=/var/log/nova: It writes all service logs here. This area will be written to as the root user.
state_path=/var/lib/nova: It is an area on your host that Nova will use to maintain various states about the running service.
lock_path=/var/lock/nova: It is where Nova can write its lock files.
root_helper=sudo nova-rootwrap: It specifies a helper script to allow the OpenStack Compute services to obtain root privileges.
verbose: It sets whether more information should be displayed in the logs or not.
api_paste_config: It is the location of the paste file containing the paste.deploy configuration for the nova-api service.
connection_type=libvirt: It specifies the connection to use libvirt.
libvirt_use_virtio_for_bridges: It uses the virtio driver for bridges.
libvirt_type=qemu: It sets the virtualization mode. Qemu is software virtualization, which is required for running under VirtualBox. Other options include kvm and xen.
sql_connection=mysql://nova:password@172.16.0.200/nova: It is our SQL connection line created in the previous section. It denotes user:password@HostAddress/database name (in our case nova).
rabbit_host=172.16.0.200: It tells OpenStack services where to find the rabbitmq message queue service.
ec2_host=172.16.0.200: It denotes the external IP address of the nova-api service.
ec2_dmz_host=172.16.0.200: It denotes the internal IP address of the nova-api service.
ec2_private_dns_show_ip: When set to true, it returns the IP address for the private hostname; when set to false, it returns the hostname instead.
public_interface=eth1: It is the interface on your hosts running Nova that your clients will use to access your instances.
force_dhcp_release: It releases the DHCP assigned private IP address on instance termination.
auto_assign_floating_ip: It automatically assigns a floating IP address to our instance on creation when this is set to true. A floating range must be defined before booting an instance. This allows our instances to be accessible from our host computer (that represents the rest of our network).
image_service=nova.image.glance.GlanceImageService: It specifies that for this installation, we’ll be using Glance in order to manage our images.
glance_api_servers=172.16.0.200:9292: It specifies the server that is running the Glance Imaging service.
scheduler_default_filters=AllHostsFilter: It specifies that the scheduler can send requests to all Compute hosts.
iscsi_helper=tgtadm: It specifies that we are using the tgtadm daemon as our iSCSI target user-land tool.
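Among the flags above, libvirt_type deserves a quick check on your own hardware: kvm is only usable when hardware virtualization extensions reach the host, which is typically not the case inside VirtualBox. A small probe, assuming a Linux host:

```shell
# If the CPU advertises Intel VT-x (vmx) or AMD-V (svm), kvm can be
# used; inside VirtualBox these flags are usually absent, which is
# why the sandbox sets libvirt_type=qemu.
if grep -Eq '(vmx|svm)' /proc/cpuinfo; then
    echo "Hardware virtualization available: libvirt_type=kvm is possible"
else
    echo "No hardware virtualization: use libvirt_type=qemu"
fi
```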
The networking is set up so that internally the guests are given an IP in the range 10.0.10.0/24. We specified that we would use only 64 addresses in this network range. Be mindful of how many you want. It is easy to create a large range of addresses, but it will also take a longer time to create them in the database, as each address is a row in the nova.fixed_ips table, where they ultimately get recorded and updated. Creating a small range now allows you to try OpenStack Compute, and later on you can extend this range very easily.
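The trade-off between the CIDR prefix and --network_size can be sanity-checked with simple shell arithmetic. A small illustration, using the sandbox values from the earlier steps:

```shell
# A /24 range contains 2^(32-24) addresses in total, but the
# --network_size flag limits how many rows Nova inserts into
# the nova.fixed_ips table.
prefix=24
network_size=64
total=$(( 1 << (32 - prefix) ))
echo "Addresses in a /${prefix}: ${total}"
echo "Rows created in nova.fixed_ips: ${network_size}"
```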
There are a wide variety of options available for configuring OpenStack Compute. These will be explored in more detail in later chapters, as the nova.conf file underpins most of the OpenStack Compute services.