Networking service overview
OpenStack Networking (neutron) allows you to
create and attach interface devices managed by other OpenStack services to
networks. Plug-ins can be implemented to accommodate different networking
equipment and software, providing flexibility to OpenStack architecture and
deployment.
It includes the following components:
neutron-server
Accepts and routes API requests to the
appropriate OpenStack Networking plug-in for action.
OpenStack Networking plug-ins and agents
Plugs and unplugs ports, creates networks or
subnets, and provides IP addressing. These plug-ins and agents differ depending
on the vendor and technologies used in the particular cloud. OpenStack
Networking ships with plug-ins and agents for Cisco virtual and physical
switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and the VMware
NSX product.
The common agents are L3 (layer 3), DHCP (Dynamic Host Configuration Protocol), and a plug-in agent.
Messaging queue
Used by most OpenStack Networking installations to route information between the neutron-server and the various agents. Most installations also use a database to store the networking state for particular plug-ins.
OpenStack Networking mainly interacts with
OpenStack Compute to provide networks and connectivity for its instances.
Networking (neutron) concepts
OpenStack Networking
(neutron) manages all networking facets for the Virtual Networking
Infrastructure (VNI) and the access layer aspects of the Physical Networking
Infrastructure (PNI) in your OpenStack environment. OpenStack Networking
enables tenants to create advanced virtual network topologies which may include
services such as a firewall, a load balancer, and a virtual private network (VPN).
Networking provides
networks, subnets, and routers as object abstractions. Each abstraction has
functionality that mimics its physical counterpart: networks contain subnets,
and routers route traffic between different subnets and networks.
Any given Networking setup has at least one external network. Unlike the other networks, the external
network is not merely a virtually defined network. Instead, it represents a
view into a slice of the physical, external network accessible outside the
OpenStack installation. IP addresses on the external network are accessible by
anybody physically on the outside network.
In addition to external
networks, any Networking setup has one or more internal networks. These
software-defined networks connect directly to the VMs. Only the VMs on any
given internal network, or those on subnets connected through interfaces to a
similar router, can access VMs connected to that network directly.
For the outside network to
access VMs, and vice versa, routers between the networks are needed. Each
router has one gateway that is connected to an external network and one or more
interfaces connected to internal networks. As with a physical router, machines on one subnet can
access machines on other subnets that are connected to the same router, and
machines can access the outside network through the router's gateway.
Additionally, you can
allocate IP addresses on external networks to ports on the internal network.
Whenever something is connected to a subnet, that connection is called a port.
You can associate external network IP addresses with the ports that connect to VMs. This way,
entities on the outside network can access those VMs.
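For example, assuming a provider (external) network named provider already exists and you know the ID of the port attached to a VM, a floating IP address can be allocated and associated roughly as follows (a minimal sketch; the network name, floating IP ID, and port ID are placeholders):
$ neutron floatingip-create provider
$ neutron floatingip-associate FLOATING_IP_ID INSTANCE_PORT_ID
The first command allocates an address from the external network; the second maps it to the VM's port so that hosts on the outside network can reach the VM.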
Networking also supports security groups. Security
groups enable administrators to define firewall rules in groups. A VM can
belong to one or more security groups, and Networking applies the rules in
those security groups to block or unblock ports, port ranges, or traffic types
for that VM.
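As a brief illustration (a sketch only; the image, flavor, key, network ID, and group names are placeholders, and the commands for creating the rules themselves appear in the Add security group rules section later in this guide), a VM can be placed in a security group at launch time and added to another existing group afterwards:
$ openstack server create --flavor m1.nano --image cirros \
  --nic net-id=SELFSERVICE_NET_ID --security-group default \
  --key-name mykey test-instance
$ openstack server add security group test-instance mygroup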
Each plug-in that Networking
uses has its own concepts. While not vital to operating the VNI and OpenStack
environment, understanding these concepts can help you set up Networking. All Networking
installations use a core plug-in and a security group plug-in (or just the
No-Op security group plug-in). Additionally, Firewall-as-a-Service (FWaaS) and
Load-Balancer-as-a-Service (LBaaS) plug-ins are available.
Install and configure controller node
Prerequisites
Before
you configure the OpenStack Networking (neutron) service, you must create a
database, service credentials, and API endpoints.
1. To create the database, complete these
steps:
Use the database
access client to connect to the database server as the root
user:
$ mysql -u root -p
Create the neutron database:
CREATE DATABASE neutron;
Grant proper access
to the neutron database, replacing NEUTRON_DBPASS
with a suitable password:
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
Exit the database
access client.
2. Source the admin
credentials to gain access to admin-only CLI commands:
$ . admin-openrc
3. To create the service credentials,
complete these steps:
o
Create the neutron
user:
$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | e0353a670a9e496da891347c589539e9 |
| enabled | True |
| id | b20a6692f77b4258926881bf831eb683 |
| name | neutron |
+-----------+----------------------------------+
Add the admin role to the neutron user:
$ openstack role add --project service --user neutron admin
Note
This command provides no output.
o
Create the neutron
service entity:
$ openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+-------------+----------------------------------+
4. Create the Networking service API
endpoints:
$ openstack endpoint create --region RegionOne \
network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 09753b537ac74422a68d2d791cf3714f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1ee14289c9374dffb5db92a5c112fc4e |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
Configure networking options
You can deploy the Networking service using one of two architectures, represented by options 1 and 2.
Option 1 deploys the simplest possible architecture, which only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses. Only the admin or other privileged user can manage provider networks.
Option 2 augments option 1 with layer-3 services that support attaching instances to self-service networks. The demo or other unprivileged user can manage self-service networks, including routers that provide connectivity between self-service and provider networks. Additionally, floating IP addresses provide connectivity to instances using self-service networks from external networks such as the Internet.
Self-service networks typically use overlay networks. Overlay network protocols such as VXLAN include additional headers that increase overhead and decrease the space available for the payload or user data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using the default Ethernet maximum transmission unit (MTU) of 1500 bytes. The Networking service automatically provides the correct MTU value to instances via DHCP. However, some cloud images do not use DHCP, or ignore the DHCP MTU option, and require configuration using metadata or a script.
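As a minimal sketch of the script approach, assuming the instance's interface is named eth0 and the overlay leaves 1450 bytes for the payload (both values depend on your environment; a VXLAN header typically consumes 50 bytes of a 1500-byte physical MTU), the MTU can be set from inside the instance:
$ sudo ip link set dev eth0 mtu 1450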
Note
Option 2 also supports attaching instances to
provider networks.
Choose
one of the following networking options to configure services specific to it.
Afterwards, return here and proceed to Configure
the metadata agent.
Networking Option 2: Self-service networks
Install and configure the Networking components on the controller node.
Install the components
# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
Configure the server component
·
Edit the /etc/neutron/neutron.conf
file and complete the following actions:
o
In the [database]
section, configure database access:
[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace NEUTRON_DBPASS
with the password you chose for the database.
o
In the [DEFAULT]
section, enable the Modular Layer 2 (ML2) plug-in, router service, and
overlapping IP addresses:
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
o
In the [DEFAULT]
and [oslo_messaging_rabbit]
sections, configure RabbitMQ message queue access:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace RABBIT_PASS
with the password you chose for the openstack
account in RabbitMQ.
o
In the [DEFAULT]
and [keystone_authtoken]
sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS
with the password you chose for the neutron
user in the Identity service.
Note
Comment out or
remove any other options in the [keystone_authtoken]
section.
o
In the [DEFAULT]
and [nova]
sections, configure Networking to notify Compute of network topology changes:
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[nova]
...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS
with the password you chose for the nova
user in the Identity service.
o
In the [oslo_concurrency]
section, configure the lock path:
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
The
ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and
switching) virtual networking infrastructure for instances.
·
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini
file and complete the following actions:
o
In the [ml2]
section, enable flat, VLAN, and VXLAN networks:
[ml2]
...
type_drivers = flat,vlan,vxlan
o
In the [ml2]
section, enable VXLAN self-service networks:
[ml2]
...
tenant_network_types = vxlan
In the [ml2] section, enable
the Linux bridge and layer-2 population mechanisms:
[ml2]
...
mechanism_drivers = linuxbridge,l2population
Warning
After you configure the ML2 plug-in, removing
values in the type_drivers
option can lead to database inconsistency.
Note
The Linux bridge agent only supports VXLAN
overlay networks.
o
In the [ml2]
section, enable the port security extension driver:
[ml2]
...
extension_drivers = port_security
o
In the [ml2_type_flat]
section, configure the provider virtual network as a flat network:
[ml2_type_flat]
...
flat_networks = provider
o
In the [ml2_type_vxlan]
section, configure the VXLAN network identifier range for self-service networks:
[ml2_type_vxlan]
...
vni_ranges = 1:1000
o
In the [securitygroup]
section, enable ipset to increase the efficiency of security group rules:
[securitygroup]
...
enable_ipset = True
Configure the Linux bridge agent
The
Linux bridge agent builds layer-2 (bridging and switching) virtual networking
infrastructure for instances and handles security groups.
·
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini
file and complete the following actions:
o
In the [linux_bridge]
section, map the provider virtual network to the provider physical network
interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME
with the name of the underlying provider physical network interface. See Host networking for
more information.
o
In the [vxlan]
section, enable VXLAN overlay networks, configure the IP address of the
physical network interface that handles overlay networks, and enable layer-2
population:
[vxlan]
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True
Replace OVERLAY_INTERFACE_IP_ADDRESS
with the IP address of the underlying physical network interface that handles
overlay networks. The example architecture uses the management interface to
tunnel traffic to the other nodes. Therefore, replace OVERLAY_INTERFACE_IP_ADDRESS
with the management IP address of the controller node. See Host networking for
more information.
o
In the [securitygroup]
section, enable security groups and configure the Linux bridge iptables firewall
driver:
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the layer-3 agent
The
Layer-3 (L3) agent
provides routing and NAT services for self-service virtual networks.
·
Edit the /etc/neutron/l3_agent.ini
file and complete the following actions:
o
In the [DEFAULT]
section, configure the Linux bridge interface driver and external network
bridge:
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
Note
The external_network_bridge
option intentionally lacks a value to enable multiple external networks on a
single agent.
Configure the DHCP agent
The
DHCP agent provides
DHCP services for virtual networks.
·
Edit the /etc/neutron/dhcp_agent.ini
file and complete the following actions:
o
In the [DEFAULT]
section, configure the Linux bridge interface driver, Dnsmasq DHCP driver, and
enable isolated metadata so instances on provider networks can access metadata
over the network:
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Return to Networking controller node configuration.
Configure the metadata agent
The
metadata agent
provides configuration information such as credentials to instances.
·
Edit the /etc/neutron/metadata_agent.ini
file and complete the following actions:
o
In the [DEFAULT]
section, configure the metadata host and shared secret:
[DEFAULT]
...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
Replace METADATA_SECRET
with a suitable secret for the metadata proxy.
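Instances retrieve this configuration information over HTTP from the link-local metadata address. As a rough check, run from inside a booted instance (not on the controller), you can query the metadata service directly; the path shown assumes the standard OpenStack metadata API:
$ curl http://169.254.169.254/openstack/latest/meta_data.json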
Configure Compute to use Networking
·
Edit the /etc/nova/nova.conf
file and perform the following actions:
o
In the [neutron]
section, configure access parameters, enable the metadata proxy, and configure
the secret:
[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
Replace NEUTRON_PASS
with the password you chose for the neutron user in the
Identity service.
Replace METADATA_SECRET
with the secret you chose for the metadata proxy.
Finalize installation
1.
The Networking service initialization
scripts expect a symbolic link /etc/neutron/plugin.ini
pointing to the ML2 plug-in configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini.
If this symbolic link does not exist, create it using the following command:
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
2.
Populate the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Note
Database population occurs later for
Networking because the script requires complete server and plug-in
configuration files.
3.
Restart the Compute API service:
# systemctl restart openstack-nova-api.service
4.
Start the Networking services and
configure them to start when the system boots.
For both networking options:
# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
For networking option 2, also enable
and start the layer-3 service:
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
Install and configure compute node (compute1)
The
compute node handles connectivity and security groups for
instances.
Install the components
# yum install openstack-neutron-linuxbridge ebtables ipset
Configure the common component
The
Networking common component configuration includes the authentication
mechanism, message queue, and plug-in.
Note
Default configuration files vary by
distribution. You might need to add these sections and options rather than
modifying existing sections and options. Also, an ellipsis (...) in the configuration
snippets indicates potential default configuration options that you should
retain.
·
Edit the /etc/neutron/neutron.conf
file and complete the following actions:
o
In the [database]
section, comment out any connection
options because compute nodes do not directly access the database.
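For example, the [database] section would end up looking roughly like this (a sketch; the exact connection string, if one is present at all, depends on your distribution's default file):
[database]
...
# connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron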
o
In the [DEFAULT]
and [oslo_messaging_rabbit]
sections, configure RabbitMQ message queue access:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace RABBIT_PASS
with the password you chose for the openstack
account in RabbitMQ.
o
In the [DEFAULT]
and [keystone_authtoken]
sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS
with the password you chose for the neutron
user in the Identity service.
Note
Comment out or remove any other options in
the [keystone_authtoken]
section.
o
In the [oslo_concurrency]
section, configure the lock path:
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
Configure networking options
Choose
the same networking option that you chose for the controller node to configure
services specific to it. Afterwards, return here and proceed to Configure Compute to use Networking.
Networking Option 2: Self-service networks
Configure the Networking components on a compute node.
Configure the Linux bridge agent
The
Linux bridge agent builds layer-2 (bridging and switching) virtual networking
infrastructure for instances and handles security groups.
·
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini
file and complete the following actions:
o
In the [linux_bridge]
section, map the provider virtual network to the provider physical network
interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME
with the name of the underlying provider physical network interface. See Host networking for
more information.
o
In the [vxlan]
section, enable VXLAN overlay networks, configure the IP address of the
physical network interface that handles overlay networks, and enable layer-2
population:
[vxlan]
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True
Replace OVERLAY_INTERFACE_IP_ADDRESS
with the IP address of the underlying physical network interface that handles
overlay networks. The example architecture uses the management interface to
tunnel traffic to the other nodes. Therefore, replace OVERLAY_INTERFACE_IP_ADDRESS
with the management IP address of the compute node. See Host networking for
more information.
o
In the [securitygroup]
section, enable security groups and configure the Linux bridge iptables firewall
driver:
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Return to Networking compute node configuration.
Configure Compute to use Networking
·
Edit the /etc/nova/nova.conf
file and complete the following actions:
o
In the [neutron]
section, configure access parameters:
[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS
with the password you chose for the neutron
user in the Identity service.
Finalize installation
1.
Restart the Compute service:
# systemctl restart openstack-nova-compute.service
2.
Start the Linux bridge agent and configure it to start when the system boots:
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
Verify operation
Note
Perform these commands on the controller node.
1.
Source the admin
credentials to gain access to admin-only CLI commands:
$ . admin-openrc
2.
List loaded extensions to verify
successful launch of the neutron-server
process:
$ neutron ext-list
+---------------------------+-----------------------------------------------+
| alias | name |
+---------------------------+-----------------------------------------------+
| default-subnetpools | Default Subnetpools |
| network-ip-availability | Network IP Availability |
| network_availability_zone | Network Availability Zone |
| auto-allocated-topology | Auto Allocated Topology Services |
| ext-gw-mode | Neutron L3 Configurable external gateway mode |
| binding | Port Binding |
| agent | agent |
| subnet_allocation | Subnet Allocation |
| l3_agent_scheduler | L3 Agent Scheduler |
| tag | Tag support |
| external-net | Neutron external network |
| net-mtu | Network MTU |
| availability_zone | Availability Zone |
| quotas | Quota management support |
| l3-ha | HA Router extension |
| flavors | Neutron Service Flavors |
| provider | Provider Network |
| multi-provider | Multi Provider Network |
| address-scope | Address scope |
| extraroute | Neutron Extra Route |
| timestamp_core | Time Stamp Fields addition for core resources |
| router | Neutron L3 Router |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| dns-integration | DNS Integration |
| security-group | security-group |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| router_availability_zone | Router Availability Zone |
| rbac-policies | RBAC Policies |
| standard-attr-description | standard-attr-description |
| port-security | Port Security |
| allowed-address-pairs | Allowed Address Pairs |
| dvr | Distributed Virtual Router |
+---------------------------+-----------------------------------------------+
Note
Actual output may differ slightly from this
example.
Use
the verification section for the networking option that you chose to deploy.
Networking Option 2: Self-service networks
List agents to verify successful launch of
the neutron agents:
$ neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | :-)   | True           | neutron-linuxbridge-agent |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
| 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent           | controller | :-)   | True           | neutron-l3-agent          |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
The output should indicate four agents on the
controller node and one agent on each compute node.
Next steps
Your
OpenStack environment now includes the core components necessary to launch a
basic instance. You can Launch an instance or
add more OpenStack services to your environment.
Dashboard
This example deployment uses an Apache web server.
Install and configure
This section describes how to install and configure the dashboard on the controller node.
The dashboard relies on functional core services including Identity, Image service, Compute, and either Networking (neutron) or legacy networking (nova-network). Environments with stand-alone services such as Object Storage cannot use the dashboard. For more information, see the developer documentation.
Note
This section assumes proper installation,
configuration, and operation of the Identity service using the Apache HTTP
server and Memcached service as described in the Install and configure the Identity
service section.
Install and configure components
Note
Default configuration files vary by
distribution. You might need to add these sections and options rather than
modifying existing sections and options. Also, an ellipsis (...) in the
configuration snippets indicates potential default configuration options that
you should retain.
1.
Install the packages:
# yum install openstack-dashboard
2.
Edit the /etc/openstack-dashboard/local_settings
file and complete the following actions:
o
Configure the dashboard to use
OpenStack services on the controller
node:
OPENSTACK_HOST = "controller"
o
Allow all hosts to access the
dashboard:
ALLOWED_HOSTS = ['*', ]
o
Configure the memcached
session storage service:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
Note
Comment out any other session storage
configuration.
o
Enable the Identity API version 3:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
o
Enable support for domains:
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
o
Configure API versions:
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
o
Configure default
as the default domain for users that you create via the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
o
Configure user
as the default role for users that you create via the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
o
If you chose networking option 1,
disable support for layer-3 networking services:
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
o
Optionally, configure the time zone:
TIME_ZONE = "TIME_ZONE"
Replace TIME_ZONE
with an appropriate time zone identifier. For more information, see the list of time
zones.
Finalize installation
·
Restart the web server and session
storage service:
# systemctl restart httpd.service memcached.service
Note
The systemctl
restart
command starts each service if not currently running.
Verify operation
Verify operation of the dashboard.
Access the dashboard using a web browser at http://controller/dashboard. Authenticate using the admin or demo user and the default domain credentials.
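As an optional quick check from the controller's command line (a sketch; it assumes the host name controller resolves and that Apache serves the dashboard at the default /dashboard path), confirm that the web server responds before testing in a browser:
$ curl -sI http://controller/dashboard | head -n 1
An HTTP 200 response, or a redirect to the login page, indicates that the dashboard is being served.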
Next steps
Your OpenStack environment now includes the dashboard. You can Launch an instance or add more services to your environment.
After you install and configure the dashboard, you can complete the following tasks:
·
Provide users with a public IP
address, a username, and a password so they can access the dashboard through a
web browser. In case of any SSL certificate connection problems, point the
server IP address to a domain name, and give users access.
·
Customize your dashboard. See section Customize
the dashboard.
·
Set up session storage. See Set up
session storage for the dashboard.
·
To use the VNC client with the
dashboard, the browser must support HTML5 Canvas and HTML5 WebSockets.
For details about browsers that
support noVNC, see README and browser support.
Launch an instance
This
section creates the necessary virtual networks to support launching instances.
Networking option 1 includes one provider (external) network with one instance
that uses it. Networking option 2 includes one provider network with one
instance that uses it and one self-service (private) network with one instance
that uses it. The instructions in this section use command-line interface (CLI)
tools on the controller node. For more information on the CLI tools, see the OpenStack
End User Guide. To use the dashboard, see the OpenStack End User
Guide.
Create virtual networks
Create
virtual networks for the networking option that you chose in Networking service. If
you chose option 1, create only the provider network. If you chose option 2,
create the provider and self-service networks.
Provider network
Before
launching an instance, you must create the necessary virtual network
infrastructure. For networking option 1, an instance uses a provider (external)
network that connects to the physical network infrastructure via layer-2
(bridging/switching). This network includes a DHCP server that provides IP
addresses to instances.
The admin or other privileged user must create this network because it connects directly to the physical network infrastructure.
Note
The following instructions and diagrams use
example IP address ranges. You must adjust them for your particular
environment.
Networking Option 1: Provider networks
- Overview
Networking Option 1: Provider networks
- Connectivity
Create the provider network
1.
On the controller node, source the admin credentials to
gain access to admin-only CLI commands:
$ . admin-openrc
2.
Create the network:
$ neutron net-create --shared --provider:physical_network provider \
--provider:network_type flat provider
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 0e62efcd-8cee-46c7-b163-d8df05c3c5ad |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tenant_id | d84313397390425c8ed50b2f6e18d092 |
+---------------------------+--------------------------------------+
The --shared
option allows all projects to use the virtual network.
The --provider:physical_network
provider
and --provider:network_type
flat
options connect the flat virtual network to the flat (native/untagged) physical
network on the eth1
interface on the host using information from the following files:
ml2_conf.ini:
[ml2_type_flat]
flat_networks = provider
linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:eth1
3.
Create a subnet on the network:
$ neutron subnet-create --name provider \
--allocation-pool start=START_IP_ADDRESS,end=END_IP_ADDRESS \
--dns-nameserver DNS_RESOLVER --gateway PROVIDER_NETWORK_GATEWAY \
provider PROVIDER_NETWORK_CIDR
Replace PROVIDER_NETWORK_CIDR
with the subnet on the provider physical network in CIDR notation.
Replace START_IP_ADDRESS
and END_IP_ADDRESS
with the first and last IP address of the range within the subnet that you want
to allocate for instances. This range must not include any existing active IP
addresses.
Replace DNS_RESOLVER
with the IP address of a DNS resolver. In most cases, you can use one from the /etc/resolv.conf file on the
host.
Replace PROVIDER_NETWORK_GATEWAY
with the gateway IP address on the provider network, typically the ”.1” IP
address.
Example
The provider network uses
203.0.113.0/24 with a gateway on 203.0.113.1. A DHCP server assigns each
instance an IP address from 203.0.113.101 to 203.0.113.250. All instances use
8.8.4.4 as a DNS resolver.
$ neutron subnet-create --name provider \
--allocation-pool start=203.0.113.101,end=203.0.113.250 \
--dns-nameserver 8.8.4.4 --gateway 203.0.113.1 \
provider 203.0.113.0/24
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field | Value |
+-------------------+----------------------------------------------------+
| allocation_pools | {"start": "203.0.113.101", "end": "203.0.113.250"} |
| cidr | 203.0.113.0/24 |
| dns_nameservers | 8.8.4.4 |
| enable_dhcp | True |
| gateway_ip | 203.0.113.1 |
| host_routes | |
| id | 5cc70da8-4ee7-4565-be53-b9c011fca011 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | provider |
| network_id | 0e62efcd-8cee-46c7-b163-d8df05c3c5ad |
| subnetpool_id | |
| tenant_id | d84313397390425c8ed50b2f6e18d092 |
+-------------------+----------------------------------------------------+
Self-service network
If
you chose networking option 2, you can also create a self-service (private)
network that connects to the physical network infrastructure via NAT. This
network includes a DHCP server that provides IP addresses to instances. An
instance on this network can automatically access external networks such as the
Internet. However, access to an instance on this network from external networks
such as the Internet requires a floating IP address.
The demo or other unprivileged user can create this network because it provides connectivity to instances within the demo project only.
Warning
You must create the provider network
before the self-service network.
Note
The following instructions and diagrams use
example IP address ranges. You must adjust them for your particular
environment.
Networking Option 2: Self-service
networks - Overview
Networking Option 2: Self-service
networks - Connectivity
Create the self-service network
1.
On the controller node, source the demo credentials to
gain access to user-only CLI commands:
$ . demo-openrc
2.
Create the network:
$ neutron net-create selfservice
Created a new network:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| id | 7c6f9b37-76b4-463e-98d8-27e5686ed083 |
| mtu | 0 |
| name | selfservice |
| port_security_enabled | True |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
+-----------------------+--------------------------------------+
Non-privileged users typically cannot
supply additional parameters to this command. The service automatically chooses
parameters using information from the following files:
ml2_conf.ini:
[ml2]
tenant_network_types = vxlan
[ml2_type_vxlan]
vni_ranges = 1:1000
3.
Create a subnet on the network:
$ neutron subnet-create --name selfservice \
--dns-nameserver DNS_RESOLVER --gateway SELFSERVICE_NETWORK_GATEWAY \
selfservice SELFSERVICE_NETWORK_CIDR
Replace DNS_RESOLVER
with the IP address of a DNS resolver. In most cases, you can use one from the /etc/resolv.conf file on the
host.
Replace SELFSERVICE_NETWORK_GATEWAY
with the gateway you want to use on the self-service network, typically the
”.1” IP address.
Replace SELFSERVICE_NETWORK_CIDR
with the subnet you want to use on the self-service network. You can use any
arbitrary value, although we recommend a network from RFC 1918.
Example
The self-service network uses
172.16.1.0/24 with a gateway on 172.16.1.1. A DHCP server assigns each instance
an IP address from 172.16.1.2 to 172.16.1.254. All instances use 8.8.4.4 as a
DNS resolver.
$ neutron subnet-create --name selfservice \
--dns-nameserver 8.8.4.4 --gateway 172.16.1.1 \
selfservice 172.16.1.0/24
Created a new subnet:
+-------------------+------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------+
| allocation_pools | {"start": "172.16.1.2", "end": "172.16.1.254"} |
| cidr | 172.16.1.0/24 |
| dns_nameservers | 8.8.4.4 |
| enable_dhcp | True |
| gateway_ip | 172.16.1.1 |
| host_routes | |
| id | 3482f524-8bff-4871-80d4-5774c2730728 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | selfservice |
| network_id | 7c6f9b37-76b4-463e-98d8-27e5686ed083 |
| subnetpool_id | |
| tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
+-------------------+------------------------------------------------+
Create a router
Self-service
networks connect to provider networks using a virtual router that typically
performs bidirectional NAT. Each router contains an interface on at least one
self-service network and a gateway on a provider network.
The provider network must include the router:external option to enable self-service routers to use it for connectivity to external networks such as the Internet. The admin or other privileged user must include this option during network creation or add it later. In this case, we add it to the existing provider network.
1.
On the controller node, source the admin credentials to
gain access to admin-only CLI commands:
$ . admin-openrc
2.
Add the router:external option to the provider network:
$ neutron net-update provider --router:external
Updated network: provider
3.
Source the demo
credentials to gain access to user-only CLI commands:
$ . demo-openrc
4.
Create the router:
$ neutron router-create router
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| external_gateway_info | |
| id | 89dd2083-a160-4d75-ab3a-14239f01ea0b |
| name | router |
| routes | |
| status | ACTIVE |
| tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
+-----------------------+--------------------------------------+
5.
Add the self-service network subnet as
an interface on the router:
$ neutron router-interface-add router selfservice
Added interface bff6605d-824c-41f9-b744-21d128fc86e1 to router router.
6.
Set a gateway on the provider network
on the router:
$ neutron router-gateway-set router provider
Set gateway for router router
Verify operation
We
recommend that you verify operation and fix any issues before proceeding. The
following steps use the IP address ranges from the network and subnet creation
examples.
1.
On the controller node, source the admin credentials to
gain access to admin-only CLI commands:
$ . admin-openrc
2.
List network namespaces. You should
see one qrouter
namespace and two qdhcp
namespaces.
$ ip netns
qrouter-89dd2083-a160-4d75-ab3a-14239f01ea0b
qdhcp-7c6f9b37-76b4-463e-98d8-27e5686ed083
qdhcp-0e62efcd-8cee-46c7-b163-d8df05c3c5ad
3.
List ports on the router to determine
the gateway IP address on the provider network:
$ neutron router-port-list router
+--------------------------------------+------+-------------------+------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+------------------------------------------+
| bff6605d-824c-41f9-b744-21d128fc86e1 | | fa:16:3e:2f:34:9b | {"subnet_id": |
| | | | "3482f524-8bff-4871-80d4-5774c2730728", |
| | | | "ip_address": "172.16.1.1"} |
| d6fe98db-ae01-42b0-a860-37b1661f5950 | | fa:16:3e:e8:c1:41 | {"subnet_id": |
| | | | "5cc70da8-4ee7-4565-be53-b9c011fca011", |
| | | | "ip_address": "203.0.113.102"} |
+--------------------------------------+------+-------------------+------------------------------------------+
4.
Ping this IP address from the
controller node or any host on the physical provider network:
$ ping -c 4 203.0.113.102
PING 203.0.113.102 (203.0.113.102) 56(84) bytes of data.
64 bytes from 203.0.113.102: icmp_req=1 ttl=64 time=0.619 ms
64 bytes from 203.0.113.102: icmp_req=2 ttl=64 time=0.189 ms
64 bytes from 203.0.113.102: icmp_req=3 ttl=64 time=0.165 ms
64 bytes from 203.0.113.102: icmp_req=4 ttl=64 time=0.216 ms
--- 203.0.113.102 ping statistics ---
rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms
Return to Launch an instance - Create virtual networks.
Create m1.nano flavor
The smallest default flavor consumes 512 MB of memory per instance. For environments with compute nodes containing less than 4 GB of memory, we recommend creating the m1.nano flavor, which only requires 64 MB per instance. Use this flavor only with the CirrOS image for testing purposes.
$ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
Generate a key pair
Most
cloud images support public key authentication
rather than conventional password authentication. Before launching an instance,
you must add a public key to the Compute service.
1.
Source the demo
tenant credentials:
$ . demo-openrc
2.
Generate and add a key pair:
$ ssh-keygen -q -N ""
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | ee:3d:2e:97:d4:e2:6a:54:6d:0d:ce:43:39:2c:ba:4d |
| name | mykey |
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+-------------+-------------------------------------------------+
Note
Alternatively, you can skip the ssh-keygen command and use an
existing public key.
3.
Verify addition of the key pair:
$ openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | ee:3d:2e:97:d4:e2:6a:54:6d:0d:ce:43:39:2c:ba:4d |
+-------+-------------------------------------------------+
Add security group rules
By
default, the default
security group applies to all instances and includes firewall rules that deny
remote access to instances. For Linux images such as CirrOS, we recommend
allowing at least ICMP (ping) and secure shell (SSH).
·
Add rules to the default security group:
o
Permit ICMP (ping):
$ openstack security group rule create --proto icmp default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | a1876c06-7f30-4a67-a324-b6b5d1309546 |
| ip_protocol | icmp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | b0d53786-5ebb-4729-9e4a-4b675016a958 |
| port_range | |
| remote_security_group | |
+-----------------------+--------------------------------------+
o
Permit secure shell (SSH) access:
$ openstack security group rule create --proto tcp --dst-port 22 default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | 3d95e59c-e98d-45f1-af04-c750af914f14 |
| ip_protocol | tcp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | b0d53786-5ebb-4729-9e4a-4b675016a958 |
| port_range | 22:22 |
| remote_security_group | |
+-----------------------+--------------------------------------+