The OpenStack Networking service provides an API that allows users to set up and define network connectivity and addressing in the cloud. The current Grizzly release supports three modes of networking: Flat networking, VLAN Manager, and the most recent addition, Software Defined Networking (SDN). Software Defined Networking is an approach in which network administrators and cloud operators can programmatically define virtual network services. The SDN component of OpenStack Networking is called Neutron, and this project code name is widely used in the OpenStack community to describe the SDN mode of OpenStack Networking. The project was previously known as Quantum, but that code name had to be replaced for trademark reasons, so the project is now known as Neutron. In the Grizzly release, the paths and service names still refer to Quantum, but this will change in future releases.
With SDN, we can describe complex networks in a secure multi-tenant environment, overcoming the issues often associated with the Flat and VLAN OpenStack networks. In Flat networks, as the name suggests, all tenants live within the same IP subnet regardless of tenancy. VLAN networking overcomes this by separating tenant IP ranges with VLAN IDs, but the VLAN ID space allows only 4,096 values, which is a problem for larger installations, and each tenant is still limited to a single IP range in which to run its applications. In both of these modes, ultimate separation of services is achieved through effective Security Group rules.
SDN in OpenStack also has a pluggable architecture, which means we are able to plug in and control various switches, firewalls, and load balancers, and achieve functions such as Firewall as a Service, all defined in software to give you fine-grained control over your complete cloud infrastructure.
VLAN Manager is the default in OpenStack and allows for a multi-tenant environment where each tenant is assigned an IP address range and a VLAN tag that ensures project separation. In Flat networking mode, isolation between tenants is instead provided at the Security Group level.
In Flat networking with DHCP, the IP addresses for our instances are assigned by a DHCP service running on the OpenStack Compute host; this service is provided by dnsmasq. As with plain Flat networking, a bridge must be configured manually in order for this to function.
To begin with, ensure you're logged into the controller node. If it was created using Vagrant, we can access it with the following command:
vagrant ssh controller
If you are using the controller host created in Starting OpenStack Compute, we will have three interfaces in our virtual instance:
In a physical production environment, that first interface wouldn’t be present, and references to this NATed eth0 in the following section can be ignored.
To configure our OpenStack environment to use Flat networking with DHCP, carry out the following steps:
sudo /etc/init.d/networking restart
We now configure OpenStack Compute to use the new bridged interface as part of our Flat network. Add the following lines to /etc/nova/nova.conf:
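The exact values are environment-specific, but a typical Grizzly-era FlatDHCP configuration looks like the following sketch; the interface names (eth1, eth2) and the bridge name (br100) are assumptions that must match your own hosts:

```
# /etc/nova/nova.conf (fragment) -- illustrative FlatDHCP settings
network_manager=nova.network.manager.FlatDHCPManager
flat_network_bridge=br100   # bridge the instance interfaces attach to
flat_interface=eth2         # physical interface enslaved to the bridge
public_interface=eth1       # interface used for public/floating traffic
```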
sudo restart nova-compute
sudo restart nova-network
This shows output like the following:
sudo sysctl -w net.ipv4.ip_forward=1
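Note that a setting applied with sysctl -w does not survive a reboot; to make IP forwarding persistent, the same key can also be placed in /etc/sysctl.conf:

```
# /etc/sysctl.conf (fragment) -- persist IP forwarding across reboots
net.ipv4.ip_forward=1
```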
FlatDHCPManager networking is a common option for networking, as it provides a Flat network that is limited only by the IP address range assigned. Because addresses are assigned over standard DHCP, instances don't need their Linux operating system's /etc/network/interfaces file configured with static addresses in order to operate correctly.
In order to make FlatDHCPManager work, we manually configure our hosts with the same bridging, which is set to br100, as specified in /etc/nova/nova.conf:
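On Ubuntu hosts, the bridge can be defined in /etc/network/interfaces using the bridge-utils package. This is a sketch only; the enslaved interface (eth2) is an assumption that must match your environment:

```
# /etc/network/interfaces (fragment) -- manual br100 bridge for FlatDHCP
auto br100
iface br100 inet manual
    bridge_ports eth2   # enslave eth2; instance traffic is forwarded over it
    bridge_stp off
    bridge_fd 0
```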
Once set up, we configure our network range: in the /etc/nova/nova.conf configuration file, we can specify the start of the range of addresses that our instances receive when they boot:
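As a sketch, the first address handed out by dnsmasq is set with the flat_network_dhcp_start flag; the address shown here is an assumption and must sit inside your fixed range:

```
# /etc/nova/nova.conf (fragment) -- first address offered by dnsmasq
flat_network_dhcp_start=10.0.1.2
```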
When creating the fixed (private) range using nova-manage network create, we assign this fixed range to a particular tenant (project). This allows us to have specific IP ranges that are isolated between projects in a multi-tenant environment.
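As an illustration, the command below creates a private range and ties it to one tenant; the label, CIDR, and project ID are placeholders, and flag spellings varied between releases, so check nova-manage network create --help on your host:

```
# Create a fixed (private) range on br100 for a single tenant
sudo nova-manage network create \
    --label=privateNet \
    --fixed_range_v4=10.0.1.0/24 \
    --bridge=br100 \
    --project_id=tenantIdGoesHere
```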
When our instance boots up, the dnsmasq service running on our nova-network host assigns an address from its DHCP pool to the instance.
Also note that we don't assign an IP address to the interface that we connect to our bridge; in our case this is eth2. We simply bring this interface up so we can bridge to it (and therefore forward traffic to the instance interfaces that are bridged to it).