OpenStack is a collection of software tools for building and managing cloud computing platforms, covering storage, compute, and networking resources for both private and public clouds. It can be accessed through the OpenStack Dashboard or the OpenStack APIs. OpenStack Networking offers APIs for networking resources such as switches, routers, ports, and interfaces. Networking was initially developed as part of OpenStack Nova, as Nova Networking; as networking became an area of interest in its own right, it was taken up as a separate project called OpenStack Networking, or Neutron.
For smooth integration with OpenStack Networking, the networking infrastructure should accommodate plugins for OpenStack Neutron. It is up to the developer to decide which network constructs to deploy, such as routers, switches, interfaces, or ports. A construct can be physical or virtual, and can belong to a traditional network environment or an SDN (Software Defined Networking) environment. The only requirement is that the vendor's infrastructure supports the Neutron API. The creation and management of network constructs (routers, switches, and so on) can be done through the OpenStack Dashboard.
The image below shows the four networks in the OpenStack architecture. The Management network handles internal communication between OpenStack components. The Data network carries virtual machine traffic inside the cloud deployment. The External network connects the virtual machines to the Internet, giving the cloud deployment Internet access. The API network exposes all the APIs, including the OpenStack Networking API, to tenants.
OpenStack Neutron is an SDN networking project developed to offer Networking-as-a-Service (NaaS) in virtual environments. Neutron was developed to overcome issues in the previous API, called Quantum, such as poor control over tenants in a multi-tenant environment and address deficiencies. Neutron is designed around an easy plugin mechanism that enables operators to access different networking technologies through its APIs.
With OpenStack Neutron, tenants can create multiple private networks and control their own IP addressing. This gives organisations more control over security policies, monitoring, troubleshooting, Quality of Service, firewalls, and more. The Neutron extensions include IP address management, support for layer 2 networking, and a layer 3 router construct. The OpenStack Networking team actively worked on improvements and enhancements to Neutron for the Havana and Icehouse releases.
Installing OpenStack Networking has prerequisites: creating a database, API endpoints, and service credentials.
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
$ openstack user create --domain default --password-prompt neutron
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron --description "OpenStack Networking" network
$ openstack endpoint create --region RegionOne \
  network public http://controller:9696
$ openstack endpoint create --region RegionOne \
  network internal http://controller:9696
$ openstack endpoint create --region RegionOne \
  network admin http://controller:9696
The Networking service can be deployed using one of two architecture options: provider networks or self-service networks.
The metadata agent provides credentials to attached instances. To configure it, follow the step below:
Open /etc/neutron/metadata_agent.ini in edit mode and, in the [DEFAULT] section, configure nova_metadata_host and metadata_proxy_shared_secret:
[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
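The shared secret above is what ties the metadata agent to the Compute service: the agent signs each instance's ID with an HMAC of METADATA_SECRET, and nova-api verifies the signature before serving the request. As a rough sketch (the exact header handling is Neutron-internal; only the HMAC-SHA256 scheme is assumed here), the signature can be reproduced like this:

```python
import hashlib
import hmac

def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    # HMAC-SHA256 of the instance ID keyed with the shared secret,
    # sent alongside the proxied metadata request so nova-api can
    # verify it came from a trusted metadata agent.
    return hmac.new(shared_secret.encode(),
                    instance_id.encode(),
                    hashlib.sha256).hexdigest()

# "INSTANCE_ID" is a placeholder, like METADATA_SECRET above.
sig = sign_instance_id("METADATA_SECRET", "INSTANCE_ID")
```

Both sides must be configured with the same METADATA_SECRET, or the signature check fails and instances cannot retrieve their metadata.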
To use the Networking service, the Compute service has to be configured. Follow these steps to configure the Compute service:
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If you can't find it, create it using the command:
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
# systemctl restart openstack-nova-api.service
# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
OpenStack Networking is a standalone service that deploys several processes across nodes, and these processes interact with each other. The architecture has a dedicated network node to perform DHCP and L3 routing. To run Open vSwitch there are two compute nodes, each with one physical network card; one compute node manages tenant traffic and the other manages connectivity.
The diagram below shows the OpenStack Networking architecture with two compute nodes and one network node connected to a physical router.
A security group is a container object with a set of security rules. It acts as a virtual firewall for other resources and servers on the same network. The rules in a security group filter the type of traffic and control the direction of traffic sent to and received from a Neutron port, providing an extra layer of security.
A single security group can manage traffic to multiple compute instances. Security groups are also associated with the ports created for LBaaS, floating IP addresses, and other instances. If no security group name is specified, ports are associated with the default security group. New security groups can be created with new properties, or the default security group can be modified to change its behavior. A new security group can be created using the following command:
$ openstack security group create [--description <description>] [--project <project> [--project-domain <project-domain>]] <name>
Open vSwitch is a virtual switch that helps virtualize the networking layer. It allows large numbers of virtual machines to run across one or more physical nodes. The virtual machines connect to virtual ports on virtual bridges. A virtual bridge allows the virtual machines to communicate with each other and also connects them to physical machines outside the node. Using layer 2 features (LACP, STP, and 802.1Q), Open vSwitch integrates with physical switches.
# ovs-vsctl list-ports br-int
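The command above prints one port name per line for the given bridge. If you capture that output in a script, parsing it is straightforward; the sample port names below are illustrative, not taken from a real deployment:

```python
def parse_ovs_ports(raw: str) -> list:
    # `ovs-vsctl list-ports <bridge>` prints one port name per line.
    return [line.strip() for line in raw.splitlines() if line.strip()]

# Hypothetical captured output from `ovs-vsctl list-ports br-int`.
sample = "tap3b1c2d4e-aa\nqvo9f8e7d6c-bb\npatch-tun\n"
ports = parse_ovs_ports(sample)
```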
The Modular Layer 2 (ML2) plugin framework was designed to enable OpenStack Networking to utilize the various layer 2 networking technologies found in real-world data centres. At present it works with the existing Open vSwitch, Linux bridge, and Hyper-V L2 agents. ML2 replaced the monolithic L2 plugins, and it also simplifies support for new L2 technologies, reducing the effort compared with adding a new monolithic plugin.
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population
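Both options take comma-separated lists, which ML2 splits into the set of type drivers and mechanism drivers to load. A quick sketch with Python's standard configparser shows how such a fragment is read (the inline string stands in for /etc/neutron/plugins/ml2/ml2_conf.ini):

```python
import configparser

# ml2_conf.ini fragment as shown above.
ML2_CONF = """
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population
"""

cfg = configparser.ConfigParser()
cfg.read_string(ML2_CONF)

# Values are comma-separated driver names.
type_drivers = [d.strip() for d in cfg["ml2"]["type_drivers"].split(",")]
mechanism_drivers = [d.strip() for d in cfg["ml2"]["mechanism_drivers"].split(",")]
```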
The OpenStack platform offers two different networking backends: Nova Networking and OpenStack Networking. Nova Networking became obsolete after the arrival of OpenStack Networking but is still found in the field. At present there is no automated migration path from Nova Networking to OpenStack Networking; any migration from one technology to the other must be performed manually and involves many outages.
OpenStack Networking Services: In the initial phases it is important to make sure that there is proper expertise in the area to help in designing the physical networking infrastructure and to initiate appropriate auditing mechanisms and security controls. The components involved in OpenStack Networking Services are as follows:
Tenant and provider networks are both part of the compute node in OpenStack Networking; tenant networks are created by users, while provider networks are created by OpenStack administrators. This section briefly explains tenant and provider networks:
type_drivers = vxlan,flat
flat_networks = *
# neutron net-create public01 --provider:network_type flat
# neutron subnet-create --name public_subnet \
    --enable_dhcp=False --allocation-pool start=START_IP,end=END_IP \
    --gateway=192.168.100.1 public01 192.168.100.0/24
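The addressing in that subnet-create example can be sanity-checked with Python's standard ipaddress module; this sketch just validates the values shown above:

```python
import ipaddress

# Values from the subnet-create example above.
subnet = ipaddress.ip_network("192.168.100.0/24")
gateway = ipaddress.ip_address("192.168.100.1")

# The gateway must fall inside the subnet being created.
assert gateway in subnet

# With DHCP disabled, instances must be addressed from this range
# manually; a /24 leaves 254 usable host addresses (network and
# broadcast addresses excluded), one of which is the gateway.
usable = sum(1 for _ in subnet.hosts())
```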
/etc/sysconfig/network-scripts/ifcfg-br-ex:

DEVICE=br-ex
TYPE=OVSBridge
DEVICETYPE=ovs
ONBOOT=yes
NM_CONTROLLED=no

/etc/sysconfig/network-scripts/ifcfg-eth1:

DEVICE=eth1
TYPE=OVSPort
DEVICETYPE=ovs
ONBOOT=yes
NM_CONTROLLED=no
bridge_mappings = physnet1:br-ex
# systemctl restart neutron-l3-agent
When designing virtual networks, predict in advance where the heaviest network traffic will be. Traffic within the same logical network is faster than traffic between different logical networks, because traffic passing between logical networks must go through a router, which adds latency.
Use switching where possible
Switching happens at layer 2, lower in the network stack than layer 3, where routing happens, so layer 2 can operate faster. Keep the number of hops between systems that communicate frequently as low as possible. An encapsulation tunnel such as GRE or VXLAN can be used to let instances on different nodes communicate with each other. The MTU size should be adjusted to accommodate the extra bytes of the tunnel header; otherwise performance suffers from fragmentation.
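The MTU adjustment is simple arithmetic. As a sketch (the 50-byte figure is the commonly cited VXLAN overhead on an IPv4 underlay; other encapsulations differ):

```python
# VXLAN overhead on an IPv4 underlay, in bytes:
# 20 (outer IP) + 8 (UDP) + 8 (VXLAN header) + 14 (inner Ethernet) = 50
VXLAN_OVERHEAD = 50

def instance_mtu(physical_mtu: int, overhead: int = VXLAN_OVERHEAD) -> int:
    # Largest MTU an instance can use without its tunnelled frames
    # exceeding the physical network's MTU and fragmenting.
    return physical_mtu - overhead

# On a standard 1500-byte physical network, instances should use 1450.
mtu = instance_mtu(1500)
```

Alternatively, enabling jumbo frames on the physical network (for example, a 9000-byte MTU) leaves plenty of headroom for the tunnel header without shrinking the instance MTU below 1500.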
OpenStack Networking enables users to build consistent and effective network topologies programmatically. The Neutron component of OpenStack Networking, with its pluggable open source architecture, allows users to develop their own plugins and drivers that interact with other physical and network devices to bring add-on functionality to the cloud.