OpenStack Project
OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services. Each service offers an Application Programming Interface (API) that facilitates integration between services.
This guide covers step-by-step deployment of the following major OpenStack services using a functional example architecture suitable for new users of OpenStack with sufficient Linux experience:
Example architecture

The example architecture requires at least two nodes (hosts) to launch a basic virtual machine or instance. Optional services such as Block Storage and Object Storage require additional nodes.

This example architecture differs from a minimal production architecture as follows:
- Networking agents reside on the controller node instead of one or more dedicated network nodes.
- Overlay (tunnel) traffic for self-service networks traverses the management network instead of a dedicated network.
Networking Option 1: Provider networks

The provider networks option deploys the OpenStack Networking service in the simplest way possible, with primarily layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and relies on physical network infrastructure for layer-3 (routing) services. Additionally, a DHCP service provides IP address information to instances.

Note

This option lacks support for self-service (private) networks, layer-3 (routing) services, and advanced services such as LBaaS and FWaaS. Consider the self-service networks option if you desire these features.
Networking Option 2: Self-service networks

The self-service networks option augments the provider networks option with layer-3 (routing) services that enable self-service networks using overlay segmentation methods such as VXLAN. Essentially, it routes virtual networks to physical networks using NAT. Additionally, this option provides the foundation for advanced services such as LBaaS and FWaaS.

For best performance, we recommend that your environment meets or exceeds the hardware requirements in Hardware requirements.

The following minimum requirements should support a proof-of-concept environment with core services and several CirrOS instances:
- Controller Node: 1 processor, 4 GB memory, and 5 GB storage
- Compute Node: 1 processor, 2 GB memory, and 10 GB storage
As the number of OpenStack services and virtual machines increases, so do the hardware requirements for the best performance. If performance degrades after enabling additional services or virtual machines, consider adding hardware resources to your environment.

To minimize clutter and provide more resources for OpenStack, we recommend a minimal installation of your Linux distribution. Also, you must install a 64-bit version of your distribution on each node.
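A quick way to confirm the 64-bit requirement on a node is to check the machine architecture; x86_64 (or another 64-bit architecture such as aarch64) indicates a suitable system:

```shell
# Print the machine hardware name; a 64-bit x86 system reports x86_64
uname -m
```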
Security

OpenStack services support various security methods including password, policy, and encryption. Additionally, supporting services including the database server and message broker support at least password security.

To ease the installation process, this guide only covers password security where applicable. You can create secure passwords manually, generate them using a tool such as pwgen, or run the following command:

$ openssl rand -hex 10

For OpenStack services, this guide uses SERVICE_PASS to reference service account passwords and SERVICE_DBPASS to reference database passwords.

The following table provides a list of services that require passwords and their associated references in the guide:
Passwords

| Password name                       | Description                                     |
|-------------------------------------|-------------------------------------------------|
| Database password (no variable used)| Root password for the database                  |
| ADMIN_PASS                          | Password of user admin                          |
| CEILOMETER_DBPASS                   | Database password for the Telemetry service     |
| CEILOMETER_PASS                     | Password of Telemetry service user ceilometer   |
| CINDER_DBPASS                       | Database password for the Block Storage service |
| CINDER_PASS                         | Password of Block Storage service user cinder   |
| DASH_DBPASS                         | Database password for the dashboard             |
| DEMO_PASS                           | Password of user demo                           |
| GLANCE_DBPASS                       | Database password for Image service             |
| GLANCE_PASS                         | Password of Image service user glance           |
| HEAT_DBPASS                         | Database password for the Orchestration service |
| HEAT_DOMAIN_PASS                    | Password of Orchestration domain                |
| HEAT_PASS                           | Password of Orchestration service user heat     |
| KEYSTONE_DBPASS                     | Database password of Identity service           |
| NEUTRON_DBPASS                      | Database password for the Networking service    |
| NEUTRON_PASS                        | Password of Networking service user neutron     |
| NOVA_DBPASS                         | Database password for Compute service           |
| NOVA_PASS                           | Password of Compute service user nova           |
| RABBIT_PASS                         | Password of user guest of RabbitMQ              |
| SWIFT_PASS                          | Password of Object Storage service user swift   |
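As a convenience, all of these passwords can be generated in one pass with a small loop over the variable names from the table. This script is a sketch, not part of the official procedure; it assumes openssl is installed and simply prints NAME=value pairs for you to record:

```shell
# Print a randomly generated 20-hex-character value for each password
# variable listed in the table above
for VAR in ADMIN_PASS CEILOMETER_DBPASS CEILOMETER_PASS CINDER_DBPASS \
           CINDER_PASS DASH_DBPASS DEMO_PASS GLANCE_DBPASS GLANCE_PASS \
           HEAT_DBPASS HEAT_DOMAIN_PASS HEAT_PASS KEYSTONE_DBPASS \
           NEUTRON_DBPASS NEUTRON_PASS NOVA_DBPASS NOVA_PASS \
           RABBIT_PASS SWIFT_PASS; do
    printf '%s=%s\n' "$VAR" "$(openssl rand -hex 10)"
done
```

Store the generated values securely; the remainder of the guide substitutes them wherever a variable such as KEYSTONE_DBPASS appears.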
Also, the Networking service assumes default values for kernel network parameters and modifies firewall rules. To avoid most issues during your initial installation, we recommend using a stock deployment of a supported distribution on your hosts. However, if you choose to automate deployment of your hosts, review the configuration and policies applied to them before proceeding further.
Host networking

After installing the operating system on each node for the architecture that you choose to deploy, you must configure the network interfaces. We recommend that you disable any automated network management tools and manually edit the appropriate configuration files for your distribution. For more information on how to configure networking on your distribution, see the documentation.

All nodes require Internet access for administrative purposes such as package installation, security updates, DNS, and NTP. In most cases, nodes should obtain Internet access through the management network interface. To highlight the importance of network separation, the example architectures use private address space for the management network and assume that the physical network infrastructure provides Internet access via NAT or another method. The example architectures use routable IP address space for the provider (external) network and assume that the physical network infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly to the provider network. In the self-service (private) networks architecture, instances can attach to a self-service or provider network. Self-service networks can reside entirely within OpenStack or provide some level of external network access using NAT through the provider network.
- Management on 10.0.0.0/24 with gateway 10.0.0.1

  This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.

- Provider on 203.0.113.0/24 with gateway 203.0.113.1

  This network requires a gateway to provide Internet access to instances in your OpenStack environment.

You can modify these ranges and gateways to work with your particular network infrastructure.

Network interface names vary by distribution. Traditionally, interfaces use "eth" followed by a sequential number. To cover all variations, this guide simply refers to the first interface as the interface with the lowest number and the second interface as the interface with the highest number.
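On most Linux systems you can list the interface names, and therefore identify the lowest- and highest-numbered ones, directly from sysfs:

```shell
# List network interface names in sorted order; the first non-loopback
# entry is what this guide calls the "first interface"
ls /sys/class/net | sort
```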
Unless you intend to use the exact configuration provided in this example architecture, you must modify the networks in this procedure to match your environment. Also, each node must resolve the other nodes by name in addition to IP address. For example, the controller name must resolve to 10.0.0.11, the IP address of the management interface on the controller node.
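A simple way to confirm that name resolution works as required is to query each hostname with getent, which consults /etc/hosts as well as DNS. The check below is a sketch using the example hostnames from this guide:

```shell
# Return success if NAME resolves via the system resolver (hosts file or DNS)
resolves() {
    getent hosts "$1" > /dev/null
}

# Check the example hostnames; adjust the list to match your environment
for name in controller compute1; do
    if resolves "$name"; then
        echo "$name resolves"
    else
        echo "$name does NOT resolve"
    fi
done
```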
Warning

Reconfiguring network interfaces will interrupt network connectivity. We recommend using a local terminal session for these procedures.

Note

Your distribution enables a restrictive firewall by default. During the installation process, certain steps will fail unless you alter or disable the firewall. For more information about securing your environment, refer to the OpenStack Security Guide.

- Controller node
- Compute node
- Block storage node (Optional)
- Object storage nodes (Optional)
- Verify connectivity

Controller node

Configure network interfaces
1. Configure the first interface as the management interface:

   IP address: 10.0.0.11
   Network mask: 255.255.255.0 (or /24)
   Default gateway: 10.0.0.1

2. The provider interface uses a special configuration without an IP address assigned to it. Configure the second interface as the provider interface:

   Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.

   Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to contain the following:

   Do not change the HWADDR and UUID keys.
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
3. Reboot the system to activate the changes.

Configure name resolution

1. Set the hostname of the node to controller.

2. Edit the /etc/hosts file to contain the following:
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning

Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.

Note

To reduce complexity of this guide, we add host entries for optional services regardless of whether you choose to deploy them.

Compute node

Configure network interfaces
1. Configure the first interface as the management interface:

   IP address: 10.0.0.31
   Network mask: 255.255.255.0 (or /24)
   Default gateway: 10.0.0.1

Note

Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.

2. The provider interface uses a special configuration without an IP address assigned to it. Configure the second interface as the provider interface:

   Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.

   Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to contain the following:

   Do not change the HWADDR and UUID keys.
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
3. Reboot the system to activate the changes.

Configure name resolution

1. Set the hostname of the node to compute1.

2. Edit the /etc/hosts file to contain the following:
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning

Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.

Note

To reduce complexity of this guide, we add host entries for optional services regardless of whether you choose to deploy them.

Block storage node (Optional)

If you want to deploy the Block Storage service, configure one additional storage node.

Configure network interfaces
- Configure the management interface:
  - IP address: 10.0.0.41
  - Network mask: 255.255.255.0 (or /24)
  - Default gateway: 10.0.0.1
Configure name resolution

1. Set the hostname of the node to block1.

2. Edit the /etc/hosts file to contain the following:
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning

Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.

Note

To reduce complexity of this guide, we add host entries for optional services regardless of whether you choose to deploy them.

3. Reboot the system to activate the changes.

Verify connectivity

We recommend that you verify network connectivity to the Internet and among the nodes before proceeding further.

1. From the controller node, test access to the Internet:
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
2. From the controller node, test access to the management interface on the compute node:
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
3. From the compute node, test access to the Internet:
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
4. From the compute node, test access to the management interface on the controller node:
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
Note

Your distribution enables a restrictive firewall by default. During the installation process, certain steps will fail unless you alter or disable the firewall. For more information about securing your environment, refer to the OpenStack Security Guide.

Network Time Protocol (NTP)

You should install Chrony, an implementation of NTP, to properly synchronize services among nodes. We recommend that you configure the controller node to reference more accurate (lower stratum) servers and other nodes to reference the controller node.

Controller node

Perform these steps on the controller node.

Install and configure components

1. Install the packages:
# yum install chrony
2. Edit the /etc/chrony.conf file and add, change, or remove the following keys as necessary for your environment:

   server NTP_SERVER iburst

   Replace NTP_SERVER with the hostname or IP address of a suitable more accurate (lower stratum) NTP server. The configuration supports multiple server keys.
Note

By default, the controller node synchronizes the time via a pool of public servers. However, you can optionally configure alternative servers such as those provided by your organization.

3. To enable other nodes to connect to the chrony daemon on the controller, add the following key to the /etc/chrony.conf file:

   allow 10.0.0.0/24

   If necessary, replace 10.0.0.0/24 with a description of your subnet.

4. Start the NTP service and configure it to start when the system boots:
# systemctl enable chronyd.service
# systemctl start chronyd.service
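Taken together, the edits from steps 2 and 3 leave the controller's /etc/chrony.conf containing lines like the following sketch, where NTP_SERVER remains a placeholder for your chosen upstream server and the comments are illustrative:

```
# /etc/chrony.conf on the controller node (relevant lines only)
server NTP_SERVER iburst    # upstream, lower-stratum NTP server
allow 10.0.0.0/24           # permit the other nodes to synchronize from this node
```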
Other nodes

Other nodes reference the controller node for clock synchronization. Perform these steps on all other nodes.

Install and configure components

1. Install the packages:
# yum install chrony
2. Edit the /etc/chrony.conf file and comment out or remove all but one server key. Change it to reference the controller node:

   server controller iburst

3. Start the NTP service and configure it to start when the system boots:
# systemctl enable chronyd.service
# systemctl start chronyd.service
Verify operation

We recommend that you verify NTP synchronization before proceeding further. Some nodes, particularly those that reference the controller node, can take several minutes to synchronize.

1. Run this command on the controller node:
# chronyc sources
210 Number of sources = 2
MS Name/IP address Stratum Poll Reach LastRx Last sample
=========================================================================
^- 192.0.2.11 2 7 12 137 -2814us[-3000us] +/- 43ms
^* 192.0.2.12 2 6 177 46 +17us[ -23us] +/- 68ms
Contents in the Name/IP address column should indicate the hostname or IP address of one or more NTP servers. Contents in the MS column should indicate * for the server to which the NTP service is currently synchronized.

2. Run the same command on all other nodes:
# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
=========================================================================
^* controller 3 9 377 421 +15us[ -87us] +/- 15ms
Contents in the Name/IP address column should indicate the hostname of the controller node.

OpenStack packages

Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.

Warning

Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.

Note

Disable or remove any automatic update services because they can impact your OpenStack environment.
Prerequisites

Warning

We recommend disabling EPEL when using RDO packages, because updates in EPEL can break backwards compatibility. Preferably, pin package versions using the yum-versionlock plugin.

Note

CentOS does not require the following steps.

1. On RHEL, register your system with Red Hat Subscription Management, using your Customer Portal user name and password:
# subscription-manager register --username="USERNAME" --password="PASSWORD"
2. Find entitlement pools containing the channels for your RHEL system:

# subscription-manager list --available

3. Use the pool identifiers found in the previous step to attach your RHEL entitlements:

# subscription-manager attach --pool="POOLID"

4. Enable required repositories:
# subscription-manager repos --enable=rhel-7-server-optional-rpms \
--enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms
Enable the OpenStack repository

- On CentOS, the extras repository provides the RPM that enables the OpenStack repository. CentOS includes the extras repository by default, so you can simply install the package to enable the OpenStack repository.

# yum install centos-release-openstack-mitaka

- On RHEL, download and install the RDO repository RPM to enable the OpenStack repository.
# yum install https://rdoproject.org/repos/rdo-release.rpm
Finalize the installation

1. Upgrade the packages on your host:

# yum upgrade

Note

If the upgrade process includes a new kernel, reboot your host to activate it.

2. Install the OpenStack client:

# yum install python-openstackclient

3. RHEL and CentOS enable SELinux by default. Install the openstack-selinux package to automatically manage security policies for OpenStack services:

# yum install openstack-selinux
NoSQL database

The Telemetry service uses a NoSQL database to store information. The database typically runs on the controller node. The procedures in this guide use MongoDB.

Note

The installation of the NoSQL database server is only necessary when installing the Telemetry service as documented in Telemetry service.
Install and configure components

1. Install the MongoDB packages:

# yum install mongodb-server mongodb

2. Edit the /etc/mongod.conf file and complete the following actions:

   - Configure the bind_ip key to use the management interface IP address of the controller node.
bind_ip = 10.0.0.11
   - By default, MongoDB creates several 1 GB journal files in the /var/lib/mongodb/journal directory. If you want to reduce the size of each journal file to 128 MB and limit total journal space consumption to 512 MB, assert the smallfiles key:

smallfiles = true

You can also disable journaling. For more information, see the MongoDB manual.
Finalize installation

- Start the MongoDB service and configure it to start when the system boots:
# systemctl enable mongod.service
# systemctl start mongod.service
Message queue

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.

Install and configure components

1. Install the package:
# yum install rabbitmq-server
2. Start the message queue service and configure it to start when the system boots:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
3. Add the openstack user:
# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
...done.
Replace RABBIT_PASS with a suitable password.

4. Permit configuration, write, and read access for the openstack user:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
...done.
Memcached

The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

Install and configure components

1. Install the packages:
# yum install memcached python-memcached
Finalize installation

- Start the Memcached service and configure it to start when the system boots:
# systemctl enable memcached.service
# systemctl start memcached.service
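By default memcached listens on all interfaces with no authentication. One way to reduce exposure, consistent with the firewalling recommendation above, is to restrict it to the loopback and management addresses. On RHEL/CentOS this is typically done in /etc/sysconfig/memcached; the fragment below is a sketch, and the -l address must match your controller's management IP:

```
# /etc/sysconfig/memcached (sketch; defaults shown, OPTIONS added)
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,10.0.0.11"
```

After editing this file, restart the service with systemctl restart memcached.service.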
Identity service overview

The OpenStack Identity service provides a single point of integration for managing authentication, authorization, and service catalog services. Other OpenStack services use the Identity service as a common unified API. Additionally, services that provide information about users but that are not included in OpenStack (such as LDAP services) can be integrated into a pre-existing infrastructure.

In order to benefit from the Identity service, other OpenStack services need to collaborate with it. When an OpenStack service receives a request from a user, it checks with the Identity service whether the user is authorized to make the request.
The Identity service contains these components:
Server

A centralized server provides authentication and authorization services using a RESTful interface.

Drivers

Drivers, or a service back end, are integrated into the centralized server. They are used for accessing identity information in repositories external to OpenStack, and may already exist in the infrastructure where OpenStack is deployed (for example, SQL databases or LDAP servers).
Modules

Middleware modules run in the address space of the OpenStack component that is using the Identity service. These modules intercept service requests, extract user credentials, and send them to the centralized server for authorization. The integration between the middleware modules and OpenStack components uses the Python Web Server Gateway Interface.

When installing the OpenStack Identity service, you must register each service in your OpenStack installation. The Identity service can then track which OpenStack services are installed, and where they are located on the network.
Install and configure

This section describes how to install and configure the OpenStack Identity service, code-named keystone, on the controller node. For performance, this configuration deploys Fernet tokens and the Apache HTTP server to handle requests.

Prerequisites

Before you configure the OpenStack Identity service, you must create a database and an administration token.

1. To create the database, complete the following actions:

   - Use the database access client to connect to the database server as the root user:
$ mysql -u root -p
   - Create the keystone database:

CREATE DATABASE keystone;

   - Grant proper access to the keystone database:
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
Replace KEYSTONE_DBPASS with a suitable password.

   - Exit the database access client.

2. Generate a random value to use as the administration token during initial configuration:

$ openssl rand -hex 10
Install and configure components

Note

Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.

Note

This guide uses the Apache HTTP server with mod_wsgi to serve Identity service requests on ports 5000 and 35357. By default, the keystone service still listens on these ports. Therefore, this guide manually disables the keystone service.
1. Run the following command to install the packages:

# yum install openstack-keystone httpd mod_wsgi

2. Edit the /etc/keystone/keystone.conf file and complete the following actions:

   - In the [DEFAULT] section, define the value of the initial administration token:

[DEFAULT]
...
admin_token = ADMIN_TOKEN

Replace ADMIN_TOKEN with the random value that you generated in a previous step.

   - In the [database] section, configure database access:

[database]
...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

Replace KEYSTONE_DBPASS with the password you chose for the database.

   - In the [token] section, configure the Fernet token provider:

[token]
...
provider = fernet

3. Populate the Identity service database:

# su -s /bin/sh -c "keystone-manage db_sync" keystone

Note

Ignore any deprecation messages in this output.

4. Initialize Fernet keys:
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
Configure the Apache HTTP server

1. Edit the /etc/httpd/conf/httpd.conf file and configure the ServerName option to reference the controller node:

ServerName controller

2. Create the /etc/httpd/conf.d/wsgi-keystone.conf file with the following content:
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
Finalize the installation

- Start the Apache HTTP service and configure it to start when the system boots:
# systemctl enable httpd.service
# systemctl start httpd.service
Create the service entity and API endpoints

The Identity service provides a catalog of services and their locations. Each service that you add to your OpenStack environment requires a service entity and several API endpoints in the catalog.

Prerequisites

By default, the Identity service database contains no information to support conventional authentication and catalog services. You must use the temporary authentication token that you created in the section called Install and configure to initialize the service entity and API endpoint for the Identity service.

You must pass the value of the authentication token to the openstack command with the --os-token parameter or set the OS_TOKEN environment variable. Similarly, you must also pass the value of the Identity service URL to the openstack command with the --os-url parameter or set the OS_URL environment variable. This guide uses environment variables to reduce command length.
Warning

For security reasons, do not use the temporary authentication token for longer than necessary to initialize the Identity service.

1. Configure the authentication token:

$ export OS_TOKEN=ADMIN_TOKEN

Replace ADMIN_TOKEN with the authentication token that you generated in the section called Install and configure. For example:
$ export OS_TOKEN=294a4c8a8a475f9b9836
2. Configure the endpoint URL:

$ export OS_URL=http://controller:35357/v3

3. Configure the Identity API version:

$ export OS_IDENTITY_API_VERSION=3
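These three exports are often collected into a small rc file that can be sourced in any session used for bootstrapping. The file name admin-token.rc is illustrative, not part of the guide, and ADMIN_TOKEN is the placeholder for your generated token:

```shell
# admin-token.rc (illustrative name): temporary bootstrap credentials
export OS_TOKEN=ADMIN_TOKEN                 # replace with your generated token
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
```

Load it in a new session with: . admin-token.rc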
Create the service entity and API endpoints

1. The Identity service manages a catalog of services in your OpenStack environment. Services use this catalog to determine the other services available in your environment.

   Create the service entity for the Identity service:
$ openstack service create \
--name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | 4ddaae90388b4ebc9d252ec2252d8d10 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
Note

OpenStack generates IDs dynamically, so you will see different values in the example command output.
2. The Identity service manages a catalog of API endpoints associated with the services in your OpenStack environment. Services use this catalog to determine how to communicate with other services in your environment.

   OpenStack uses three API endpoint variants for each service: admin, internal, and public. The admin API endpoint allows modifying users and tenants by default, while the public and internal APIs do not allow these operations. In a production environment, the variants might reside on separate networks that service different types of users for security reasons. For instance, the public API network might be visible from the Internet so customers can manage their clouds. The admin API network might be restricted to operators within the organization that manages cloud infrastructure. The internal API network might be restricted to the hosts that contain OpenStack services. Also, OpenStack supports multiple regions for scalability. For simplicity, this guide uses the management network for all endpoint variations and the default RegionOne region.

   Create the Identity service API endpoints:
$ openstack endpoint create --region RegionOne \
identity public http://controller:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 30fff543e7dc4b7d9a0fb13791b78bf4 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c8c0927262a45ad9066cfe70d46892c |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v3 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
identity internal http://controller:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 57cfa543e7dc4b712c0ab137911bc4fe |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 6f8de927262ac12f6066cfe70d99ac51 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v3 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
identity admin http://controller:35357/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 78c3dfa3e7dc44c98ab1b1379122ecb1 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 34ab3d27262ac449cba6cfe704dbc11f |
| service_name | keystone |
| service_type | identity |
| url | http://controller:35357/v3 |
+--------------+----------------------------------+
Note
Each service that you add to
your OpenStack environment requires one or more service entities and three API
endpoint variants in the Identity service.
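The three-variant pattern above repeats for every service you register. The loop below is a hedged sketch that prints the three endpoint-create commands for the Identity service; the echo makes it a dry run, so remove it to execute. Other services typically use a single port for all three variants, unlike Identity's 5000/35357 split.

```shell
# Dry-run sketch: print the endpoint-create command for each interface.
# The port split (5000 public/internal, 35357 admin) is specific to the
# Identity service in this guide.
service=identity
for interface in public internal admin; do
    port=5000
    if [ "$interface" = admin ]; then port=35357; fi
    echo openstack endpoint create --region RegionOne \
        "$service" "$interface" "http://controller:${port}/v3"
done
```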
Create a domain, projects, users, and roles
The Identity service provides authentication services for each OpenStack service. The authentication service uses a combination of domains, projects (tenants), users, and roles.
1.
Create the default
domain:
$ openstack domain create --description "Default Domain" default
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Default Domain |
| enabled | True |
| id | e0353a670a9e496da891347c589539e9 |
| name | default |
+-------------+----------------------------------+
2.
Create an administrative project,
user, and role for administrative operations in your environment:
o
Create the admin
project:
$ openstack project create --domain default \
--description "Admin Project" admin
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Admin Project |
| domain_id | e0353a670a9e496da891347c589539e9 |
| enabled | True |
| id | 343d245e850143a096806dfaefa9afdc |
| is_domain | False |
| name | admin |
| parent_id | None |
+-------------+----------------------------------+
Note
OpenStack generates IDs
dynamically, so you will see different values in the example command output.
o
Create the admin
user:
$ openstack user create --domain default \
--password-prompt admin
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | e0353a670a9e496da891347c589539e9 |
| enabled | True |
| id | ac3377633149401296f6c0d92d79dc16 |
| name | admin |
+-----------+----------------------------------+
o
Create the admin
role:
$ openstack role create admin
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | cd2cb9a39e874ea69e5d4b896eb16128 |
| name | admin |
+-----------+----------------------------------+
o
Add the admin
role to the admin
project and user:
$ openstack role add --project admin --user admin admin
Note
This command provides no
output.
Note
Any roles that you create must
map to roles specified in the policy.json file in the configuration file directory of each OpenStack
service. The default policy for most services grants administrative access to
the admin role. For more information, see the Operations Guide - Managing Projects and Users.
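As an illustration of that mapping, the fragment below mimics the stock admin_required rule from keystone's policy.json, written to a temporary sample file here. The real file lives in each service's configuration directory, and exact rule names vary by service and release.

```shell
# Sample only: the real policy.json lives in each service's config directory,
# e.g. /etc/keystone/policy.json.
cat > /tmp/policy-sample.json <<'EOF'
{
    "admin_required": "role:admin or is_admin:1"
}
EOF
# The "role:admin" check is what ties the admin role created above to
# administrative access.
grep '"admin_required"' /tmp/policy-sample.json
```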
3.
This guide uses a service project that
contains a unique user for each service that you add to your environment.
Create the service
project:
$ openstack project create --domain default \
--description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | e0353a670a9e496da891347c589539e9 |
| enabled | True |
| id | 894cdfa366d34e9d835d3de01e752262 |
| is_domain | False |
| name | service |
| parent_id | None |
+-------------+----------------------------------+
4.
Regular (non-admin) tasks should use
an unprivileged project and user. As an example, this guide creates the demo project and user.
o
Create the demo
project:
$ openstack project create --domain default \
--description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | e0353a670a9e496da891347c589539e9 |
| enabled | True |
| id | ed0b60bf607743088218b0a533d5943f |
| is_domain | False |
| name | demo |
| parent_id | None |
+-------------+----------------------------------+
Note
Do not repeat this step when creating
additional users for this project.
o
Create the demo
user:
$ openstack user create --domain default \
--password-prompt demo
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | e0353a670a9e496da891347c589539e9 |
| enabled | True |
| id | 58126687cbcc4888bfa9ab73a2256f27 |
| name | demo |
+-----------+----------------------------------+
o
Create the user
role:
$ openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 997ce8d05fc143ac97d83fdfb5998552 |
| name | user |
+-----------+----------------------------------+
o
Add the user
role to the demo
project and user:
$ openstack role add --project demo --user demo user
Note
This command provides no
output.
Note
You can repeat this procedure
to create additional projects and users.
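The repeated steps can be bundled into a small helper function; this is a sketch, not part of the guide, and the example names (alt, altuser, ALT_PASS) are placeholders.

```shell
# Hedged sketch: create a project, a user, and a role assignment in one step.
# Assumes the default domain and an existing role, as in the steps above.
create_project_user() {
    project=$1; user=$2; password=$3; role=$4
    openstack project create --domain default \
        --description "$project project" "$project"
    openstack user create --domain default --password "$password" "$user"
    openstack role add --project "$project" --user "$user" "$role"
}
# Example (placeholder names): create_project_user alt altuser ALT_PASS user
```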
Verify operation
Verify operation of the Identity service before installing other services.
Note
Perform these commands on the
controller node.
1.
For security reasons, disable the
temporary authentication token mechanism:
Edit the /etc/keystone/keystone-paste.ini
file and remove admin_token_auth
from the [pipeline:public_api],
[pipeline:admin_api],
and [pipeline:api_v3]
sections.
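If you prefer to script the edit, a sed one-liner can strip the filter from all three pipelines. This is a sketch that assumes the stock pipeline layout, demonstrated on a sample fragment rather than the live file; back up /etc/keystone/keystone-paste.ini before applying the same command to it.

```shell
# Demonstrated on a sample fragment; for the real file, target
# /etc/keystone/keystone-paste.ini and keep a backup first.
cat > /tmp/keystone-paste-sample.ini <<'EOF'
[pipeline:public_api]
pipeline = sizelimit url_normalize request_id admin_token_auth build_auth_context token_auth json_body public_service
EOF
sed -i 's/ admin_token_auth//g' /tmp/keystone-paste-sample.ini
grep -c admin_token_auth /tmp/keystone-paste-sample.ini || true   # prints 0
```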
2.
Unset the temporary OS_TOKEN and OS_URL environment
variables:
$ unset OS_TOKEN OS_URL
3.
As the admin
user, request an authentication token:
$ openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
Password:
+------------+---------------------------------------------------------+
| Field | Value |
+------------+---------------------------------------------------------+
| expires | 2016-02-12T20:14:07.056119Z |
| id | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
| | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
| | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws |
| project_id | 343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+---------------------------------------------------------+
Note
This command uses the password
for the admin user.
4.
As the demo
user, request an authentication token:
$ openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
Password:
+------------+---------------------------------------------------------+
| Field | Value |
+------------+---------------------------------------------------------+
| expires | 2016-02-12T20:15:39.014479Z |
| id | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
| | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
| | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U |
| project_id | ed0b60bf607743088218b0a533d5943f |
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+------------+---------------------------------------------------------+
Note
This command uses the password
for the demo user and API port 5000, which allows only regular (non-admin)
access to the Identity service API.
Create OpenStack client environment scripts
The previous section used a combination of environment variables and command options to interact with the Identity service via the openstack client. To increase the efficiency of client operations, OpenStack supports simple client environment scripts, also known as OpenRC files. These scripts typically contain common options for all clients, but also support unique options. For more information, see the OpenStack End User Guide.
Creating the scripts
Create client environment scripts for the admin and demo projects and users. Future portions of this guide reference these scripts to load appropriate credentials for client operations.
1.
Edit the admin-openrc
file and add the following content:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Replace ADMIN_PASS
with the password you chose for the admin
user in the Identity service.
2.
Edit the demo-openrc
file and add the following content:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Replace DEMO_PASS
with the password you chose for the demo
user in the Identity service.
Using the scripts
To run clients as a specific project and user, load the associated client environment script before running them. For example:
1.
Load the admin-openrc
file to populate environment variables with the location of the Identity
service and the admin
project and user credentials:
$ . admin-openrc
2.
Request an authentication token:
$ openstack token issue
+------------+---------------------------------------------------------+
| Field | Value |
+------------+---------------------------------------------------------+
| expires | 2016-02-12T20:44:35.659723Z |
| id | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
| | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
| | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E |
| project_id | 343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+---------------------------------------------------------+
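Switching to the demo credentials works the same way: source demo-openrc instead of admin-openrc, then re-run the client. The snippet below sketches the variable handling with a sample file; in practice you would source the demo-openrc created earlier.

```shell
# Sample file standing in for the demo-openrc created earlier.
cat > /tmp/demo-openrc-sample <<'EOF'
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
EOF
. /tmp/demo-openrc-sample
echo "$OS_PROJECT_NAME"   # prints demo
# With the real file: . demo-openrc && openstack token issue
```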