
Install Openstack Neutron Networking on Controller node



This article shows you how to install the OpenStack Neutron networking service on CentOS 7 (controller node) and how to configure it.


Continuing from the previous post, we will explain how to install the OpenStack Neutron networking components and configure them on the controller node (node1).


What is Openstack Neutron?

OpenStack Neutron is an important OpenStack component: a networking-as-a-service project that allows us to create networks and attach network interface devices to them. Neutron provides network virtualization in an OpenStack environment and manages the virtual networking infrastructure, exposing network, subnet, and router object abstractions.

Neutron was earlier called Quantum; it was later renamed to Neutron due to a trademark conflict with another company.

Lab Setup for this Openstack Deployment:
Server Names : node1 and node2.
OS :  CentOS 7.2 - 64 Bit
Internet connection enabled.

Prerequisites:

  1. CentOS 7 – 64 Bit installed.

  2. Make sure all hosts are reachable from each other. In this setup, I simply added entries to the /etc/hosts file:
    192.168.2.1     controller-node1.learnitguide.net controller-node1
    192.168.2.2     compute-node1.learnitguide.net compute-node1

  3. Verify the internet connection, because public repositories are used to install these components.

  4. Take a backup or snapshot at different stages so you can restore in case of failure.

  5. Disable SELinux and stop the firewall to avoid issues during installation (systemctl stop firewalld ; systemctl disable firewalld), or allow each component's services through the firewall after installation.

  6. OpenStack Liberty repositories enabled.

  7. OpenStack Liberty packages installed.

  8. MariaDB (MySQL) database installed and configured.
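The host-file entries from step 2 are easy to get wrong; a small hedged sketch (check_hosts_entry is a hypothetical helper, not an OpenStack tool) that checks an IP/name pair exists in a hosts-style file. The demo runs against a temporary file with the lab values; point the first argument at /etc/hosts on your nodes instead.

```shell
# check_hosts_entry FILE IP NAME -> succeeds if IP and NAME share a line.
check_hosts_entry() {
  grep -Eq "^[[:space:]]*$2[[:space:]].*$3" "$1"
}

# Demo against a temporary copy of the lab entries above:
tmp=$(mktemp)
printf '%s\n' '192.168.2.1     controller-node1.learnitguide.net controller-node1' \
              '192.168.2.2     compute-node1.learnitguide.net compute-node1' > "$tmp"
check_hosts_entry "$tmp" 192.168.2.1 controller-node1 && echo "controller entry ok"
check_hosts_entry "$tmp" 192.168.2.2 compute-node1 && echo "compute entry ok"
rm -f "$tmp"
```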




Install the OpenStack Neutron networking service on node1 (controller node).

Log in to 'node1' and create the neutron database that the Networking service uses to store its data.

[root@node1 ~]# mysql -u root -p
Enter password:
Create neutron database.
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'node1'
IDENTIFIED BY 'redhat';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%'
IDENTIFIED BY 'redhat';
MariaDB [(none)]> exit

Replace 'node1' with your controller node name and 'redhat' with the database password you wish to set for Neutron.
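The GRANT statements above are easy to typo; a hedged sketch (service_db_sql is a hypothetical helper, not part of OpenStack) that generates the SQL from variables so you can review it before pasting it into MariaDB:

```shell
# Emit the CREATE/GRANT SQL for a service database.
# Arguments: database name, db user, password, controller host name.
service_db_sql() {
  local db=$1 user=$2 pass=$3 host=$4
  cat <<EOF
CREATE DATABASE $db;
GRANT ALL PRIVILEGES ON $db.* TO '$user'@'$host' IDENTIFIED BY '$pass';
GRANT ALL PRIVILEGES ON $db.* TO '$user'@'%' IDENTIFIED BY '$pass';
EOF
}

# Print the SQL used in this post; pipe it into "mysql -u root -p" to apply.
service_db_sql neutron neutron redhat node1
```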

Load the admin-openrc.sh file created earlier to gain access to admin-only CLI commands.

[root@node1 ~]# source admin-openrc.sh

Create the neutron user.

[root@node1 ~]# openstack user create --domain default --password-prompt neutron

Add the admin role to the neutron user:

[root@node1 ~]# openstack role add --project service --user neutron admin

Create the neutron service entity:

[root@node1 ~]# openstack service create --name neutron --description "OpenStack Networking" network

Create the Networking service API endpoints:

[root@node1 ~]# openstack endpoint create --region RegionOne network public http://node1:9696
[root@node1 ~]# openstack endpoint create --region RegionOne network internal http://node1:9696
[root@node1 ~]# openstack endpoint create --region RegionOne network admin http://node1:9696
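The three endpoint-create calls follow one pattern; this dry-run sketch prints each command for review ('node1' still needs replacing with your controller name). Drop the echo to execute them directly:

```shell
# Print the three Networking endpoint-create commands (public/internal/admin).
for iface in public internal admin; do
  echo openstack endpoint create --region RegionOne network "$iface" http://node1:9696
done
```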

Replace 'node1' with your controller host name. Before configuring the Neutron service, we must choose one of the following networking options and configure the services specific to it.

Networking Option 1: Provider networks

This is the simplest possible architecture; it only supports attaching instances to public (provider) networks. There are no self-service networks, routers, or floating IP addresses. Only the admin or another privileged user can manage provider networks.

Networking Option 2: Self-service networks

This is option 1 plus layer-3 services that support attaching instances to self-service (private) networks. Additionally, floating IP addresses provide connectivity to instances on self-service networks from external networks such as the Internet.

We go with option 2 (self-service networks) in this post.

Install and configure Networking components on the controller node.

Install the required packages of Neutron Network component on controller node.

[root@node1 ~]# yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset

Configure the networking components.

Edit the /etc/neutron/neutron.conf file and make the below changes.

Configure database access by setting the connection parameter in the [database] section,

[database]
connection = mysql://neutron:redhat@node1/neutron

Replace 'node1' with your controller node name and 'redhat' with the database password you set for Neutron earlier.

In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses, and configure RabbitMQ message queue access, Identity service access, and notifications to Compute about network topology changes,

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://node1:8774/v2
rpc_backend = rabbit
auth_strategy = keystone
verbose = True

Replace 'node1' with your controller node name.

Configure RabbitMQ message queue access in the [oslo_messaging_rabbit] section,

[oslo_messaging_rabbit]
rabbit_host = node1
rabbit_userid = openstack
rabbit_password = redhat

Replace 'node1' with your controller node name, and replace 'openstack' and 'redhat' with the username and password you set for the openstack account in the RabbitMQ configuration.

Configure Identity service access in the [keystone_authtoken] section,

[keystone_authtoken]
auth_uri = http://node1:5000
auth_url = http://node1:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron

Replace 'node1' with your controller node name and 'neutron' with the username and password you chose when creating the neutron user.

Configure Networking to notify Compute of network topology changes in the [nova] section,

[nova]
auth_url = http://node1:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = nova

Replace 'nova' with the username and password you chose for the nova user in the Identity service, and 'node1' with your controller node name.

Configure the lock path in the [oslo_concurrency] section,

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
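Before moving on, it is worth double-checking these edits. A hedged sketch (has_setting is a hypothetical helper) that verifies an INI-style file has a key set, uncommented, to the expected value; the demo uses a throwaway file, but on your node you would point it at /etc/neutron/neutron.conf:

```shell
# has_setting FILE KEY VALUE -> succeeds if "KEY = VALUE" is present.
has_setting() {
  grep -Eq "^[[:space:]]*$2[[:space:]]*=[[:space:]]*$3" "$1"
}

# Demo against a temporary config fragment:
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[DEFAULT]
core_plugin = ml2
service_plugins = router
EOF
has_setting "$cfg" core_plugin ml2 && echo "core_plugin ok"
has_setting "$cfg" service_plugins router && echo "service_plugins ok"
rm -f "$cfg"
```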

Configure the Modular Layer 2 (ML2) plug-in:

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and make the following changes:

Enable flat, VLAN, and VXLAN networks and the port security extension driver in the [ml2] section,

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

Configure the public flat provider network in the [ml2_type_flat] section,

[ml2_type_flat]
flat_networks = public

Configure the VXLAN network identifier range for private networks in the [ml2_type_vxlan] section,

[ml2_type_vxlan]
vni_ranges = 1:1000

Enable ipset to increase efficiency of security group rules in the [securitygroup] section,

[securitygroup]
enable_ipset = True

Configure the Linux bridge agent

The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances, including VXLAN tunnels for private networks, and handles security groups.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and make the changes as below.

Map the public virtual network to the public physical network interface in the [linux_bridge] section,

[linux_bridge]
physical_interface_mappings = public:enp0s8

Replace 'enp0s8' with the name of the physical public network interface on node1.

Enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population in the [vxlan] section,

[vxlan]
enable_vxlan = True
local_ip = 192.168.2.1
l2_population = True

Replace '192.168.2.1' with the IP address of the physical network interface that handles overlay networks.
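If you are unsure which address to use, a hedged helper (iface_ip is hypothetical; it assumes the iproute2 'ip' command is available) that prints the first IPv4 address bound to an interface, so you can look up the value for local_ip:

```shell
# iface_ip IFACE -> print the first IPv4 address on that interface.
iface_ip() {
  ip -4 -o addr show "$1" | awk '{split($4, a, "/"); print a[1]; exit}'
}

iface_ip lo    # the loopback interface normally prints 127.0.0.1
```

On the lab controller, `iface_ip enp0s8` would print 192.168.2.1 (interface name is an example; use your overlay NIC).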

Enable ARP spoofing protection in the [agent] section,

[agent]
prevent_arp_spoofing = True

Enable security groups and configure the Linux bridge iptables firewall driver in the [securitygroup] section,

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the layer-3 agent

The Layer-3 (L3) agent provides routing and NAT services for virtual networks.

Edit the /etc/neutron/l3_agent.ini file and make the below changes.

Configure the Linux bridge interface driver and external network bridge in the [DEFAULT] section,

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
verbose = True

Configure the DHCP agent

The DHCP agent provides DHCP services for virtual networks.

Edit the /etc/neutron/dhcp_agent.ini file and make the below changes.

Configure the Linux bridge interface driver, Dnsmasq DHCP driver, and enable isolated metadata so instances on public networks can access metadata over the network in the [DEFAULT] section,

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
verbose = True

Create a /etc/neutron/dnsmasq-neutron.conf file to enable the DHCP MTU option (26) and configure it to 1450 bytes:

[root@node1 ~]# echo "dhcp-option-force=26,1450" > /etc/neutron/dnsmasq-neutron.conf
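Why 1450? VXLAN encapsulation adds roughly 50 bytes of outer headers, so a standard 1500-byte physical MTU leaves 1450 bytes for the instance, which DHCP option 26 advertises. The arithmetic (overhead figure assumes IPv4 with no VLAN tag):

```shell
# Outer headers: Ethernet 14 + IPv4 20 + UDP 8 + VXLAN 8 = 50 bytes.
phys_mtu=1500
vxlan_overhead=50
echo $((phys_mtu - vxlan_overhead))    # prints 1450
```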

Configure the metadata agent

The metadata agent provides configuration information such as credentials to instances.

Edit the /etc/neutron/metadata_agent.ini file and make the below changes.

Configure access parameters in the [DEFAULT] section,

[DEFAULT]
auth_uri = http://node1:5000
auth_url = http://node1:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron
nova_metadata_ip = node1
metadata_proxy_shared_secret = METADATA_SECRET
verbose = True

Replace 'node1' with your controller host name, 'neutron' with the username and password you chose for the neutron user in the Identity service, and 'METADATA_SECRET' with a suitable secret for the metadata proxy.
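One convenient way to generate a value for METADATA_SECRET (this assumes openssl is installed; any random-string generator works). Whatever value you pick, the same secret must also be used in the Compute configuration below:

```shell
# Print a random 20-character hex string suitable as METADATA_SECRET.
openssl rand -hex 10
```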

Configure Compute to use Networking

Edit the /etc/nova/nova.conf file and make the below changes,

Configure access parameters, enable the metadata proxy, and configure the secret in the [neutron] section,

[neutron]
url = http://node1:9696
auth_url = http://node1:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET

Replace 'node1' with your controller host name, 'neutron' with the username and password you chose for the neutron user in the Identity service, and 'METADATA_SECRET' with the same secret you set in the metadata agent configuration.

The Networking service initialization scripts expect a symbolic link, /etc/neutron/plugin.ini, pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini, so let's create it:

[root@node1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database:

[root@node1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service for the changes to take effect:

[root@node1 ~]# systemctl restart openstack-nova-api.service

Enable the services to start when the system boots.

[root@node1 ~]# systemctl enable neutron-server.service
[root@node1 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@node1 ~]# systemctl enable neutron-dhcp-agent.service
[root@node1 ~]# systemctl enable neutron-metadata-agent.service
[root@node1 ~]# systemctl enable neutron-l3-agent.service

Start the Networking services.

[root@node1 ~]# systemctl start neutron-server.service
[root@node1 ~]# systemctl start neutron-linuxbridge-agent.service
[root@node1 ~]# systemctl start neutron-dhcp-agent.service
[root@node1 ~]# systemctl start neutron-metadata-agent.service
[root@node1 ~]# systemctl start neutron-l3-agent.service
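The ten systemctl calls above follow one pattern; a dry-run sketch that prints each command so the list can be reviewed (or piped to sh) instead of typed by hand:

```shell
# Print the enable/start commands for every Neutron service on the controller.
for svc in neutron-server neutron-linuxbridge-agent neutron-dhcp-agent \
           neutron-metadata-agent neutron-l3-agent; do
  echo "systemctl enable $svc.service"
  echo "systemctl start $svc.service"
done
```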

Hope this post was useful for learning how to install the OpenStack Neutron networking service on the controller node. In the next post, we will explain how to install the OpenStack Neutron networking service on the compute node.


Keep practicing and have fun. Leave your comments if any.
