Image service
The
Image service (glance) enables users to discover, register, and retrieve
virtual machine images. It offers a REST API that enables
you to query virtual machine image metadata and retrieve an actual image. You
can store virtual machine images made available through the Image service in a
variety of locations, from simple file systems to object-storage systems like
OpenStack Object Storage.
Important
For simplicity, this guide describes configuring the Image service to use the file back end, which
uploads and stores images in a directory on the controller node hosting the Image
service. By default, this directory is /var/lib/glance/images/. Before you proceed, ensure that the controller node has at least several gigabytes of space available in this directory.
For information on requirements for other
back ends, see Configuration
Reference.
Image service overview
The
OpenStack Image service is central to Infrastructure-as-a-Service (IaaS) as
shown in Conceptual architecture.
It accepts API requests for disk or server images, and metadata definitions
from end users or OpenStack Compute components. It also supports the storage of
disk or server images on various repository types, including OpenStack Object
Storage. A number of periodic processes run on the OpenStack Image service to support caching. Replication services ensure consistency and availability through the cluster. Other periodic processes include auditors, updaters, and reapers.
The OpenStack Image service includes the following components:
glance-api
Accepts
Image API calls for image discovery, retrieval, and storage.
glance-registry
Stores, processes,
and retrieves metadata about images. Metadata includes items such as size and
type.
Warning
The registry is a private
internal service meant for use by the OpenStack Image service. Do not expose this
service to users.
Database
Stores
image metadata. You can choose your database depending on your preference;
most deployments use MySQL or SQLite.
Storage repository for image files
Various
repository types are supported including normal file systems, Object Storage,
RADOS block devices, HTTP, and Amazon S3. Note that some repositories will only
support read-only usage.
Metadata definition service
A
common API for vendors, admins, services, and users to meaningfully define
their own custom metadata. This metadata can be used on different types of
resources like images, artifacts, volumes, flavors, and aggregates. A
definition includes the new property’s key, description, constraints, and the
resource types which it can be associated with.
Install and configure
This
section describes how to install and configure the Image service, code-named
glance, on the controller node. For simplicity, this configuration stores
images on the local file system.
Prerequisites
Before
you install and configure the Image service, you must create a database,
service credentials, and API endpoints.
1.
To create the database, complete these
steps:
o
Use the database access client to connect
to the database server as the root
user:
$ mysql -u root -p
o
Create the glance
database:
CREATE DATABASE glance;
o
Grant proper access to the glance database:
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
Replace GLANCE_DBPASS
with a suitable password.
o
Exit the database access client.
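As an optional sanity check before continuing, you can confirm that the grants took effect by connecting as the new account; this sketch assumes the GLANCE_DBPASS you chose above:

```shell
# Connect as the glance account and list the databases it can see.
# GLANCE_DBPASS is the password chosen in the grant step above.
mysql -u glance -pGLANCE_DBPASS -h localhost -e 'SHOW DATABASES;'
```

The output should include the glance database; an access-denied error indicates a mistyped grant or password.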
2.
Source the admin
credentials to gain access to admin-only CLI commands:
$ . admin-openrc
3.
To create the service credentials,
complete these steps:
o
Create the glance
user:
$ openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | e0353a670a9e496da891347c589539e9 |
| enabled | True |
| id | e38230eeff474607805b596c91fa15d9 |
| name | glance |
+-----------+----------------------------------+
o
Add the admin
role to the glance
user and service
project:
$ openstack role add --project service --user glance admin
Note
This command
provides no output.
o
Create the glance
service entity:
$ openstack service create --name glance \
--description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| name | glance |
| type | image |
+-------------+----------------------------------+
4.
Create the Image service API
endpoints:
$ openstack endpoint create --region RegionOne \
image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 340be3625e9b4239a6415d034e98aace |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0c37ed58103f4300a84ff125a539032d |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
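With all three endpoints created, you can optionally confirm that they were registered as expected:

```shell
# List the Image service endpoints; one public, one internal, and one
# admin entry should appear, all pointing at http://controller:9292.
openstack endpoint list --service image
```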
Install and configure components
Note
Default configuration files
vary by distribution. You might need to add these sections and options rather
than modifying existing sections and options. Also, an ellipsis (...)
in the configuration snippets indicates potential default configuration options
that you should retain.
1.
Install the packages:
# yum install openstack-glance
2.
Edit the /etc/glance/glance-api.conf
file and complete the following actions:
o
In the [database]
section, configure database access:
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
Replace GLANCE_DBPASS
with the password you chose for the Image service database.
o
In the [keystone_authtoken]
and [paste_deploy]
sections, configure Identity service access:
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
...
flavor = keystone
Replace GLANCE_PASS
with the password you chose for the glance
user in the Identity service.
Note
Comment out or remove any other
options in the [keystone_authtoken] section.
o
In the [glance_store]
section, configure the local file system store and location of image files:
[glance_store]
...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
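If you prefer scripted, repeatable edits over a text editor, a tool such as crudini (a separate package; this is an illustrative sketch, not part of the official procedure) can apply the same options:

```shell
# Illustrative scripted equivalent of the glance-api.conf edits above.
# Replace GLANCE_DBPASS with the database password chosen earlier.
conf=/etc/glance/glance-api.conf
crudini --set $conf database connection \
    "mysql+pymysql://glance:GLANCE_DBPASS@controller/glance"
crudini --set $conf glance_store stores file,http
crudini --set $conf glance_store default_store file
crudini --set $conf glance_store filesystem_store_datadir /var/lib/glance/images/
```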
3.
Edit the /etc/glance/glance-registry.conf
file and complete the following actions:
o
In the [database]
section, configure database access:
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
Replace GLANCE_DBPASS
with the password you chose for the Image service database.
o
In the [keystone_authtoken]
and [paste_deploy]
sections, configure Identity service access:
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
...
flavor = keystone
Replace GLANCE_PASS
with the password you chose for the glance
user in the Identity service.
Note
Comment out or remove any other
options in the [keystone_authtoken] section.
4.
Populate the Image service database:
# su -s /bin/sh -c "glance-manage db_sync" glance
Note
Ignore any deprecation messages
in this output.
Finalize installation
·
Start the Image services and configure
them to start when the system boots:
# systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
openstack-glance-registry.service
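Before moving on, you can verify that both services came up cleanly:

```shell
# Both units should report "active"; check the journal if either failed.
systemctl is-active openstack-glance-api.service \
  openstack-glance-registry.service
```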
Verify operation
Verify operation of the Image service using CirrOS, a small Linux image that helps you test your OpenStack deployment. For more information about how to download and build images, see the OpenStack Virtual Machine Image Guide. For information about how to manage images, see the OpenStack End User Guide.
Note
Perform these commands on the
controller node.
1.
Source the admin
credentials to gain access to admin-only CLI commands:
$ . admin-openrc
2.
Download the source image:
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Note
Install wget if your distribution
does not include it.
3.
Upload the image to the Image service
using the QCOW2 disk format, bare container format,
and public visibility so all projects can access it:
$ openstack image create "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| checksum | 133eae9fb1c98f45894a4e60d8736619 |
| container_format | bare |
| created_at | 2015-03-26T16:52:10Z |
| disk_format | qcow2 |
| file | /v2/images/cc5c6982-4910-471e-b864-1098015901b5/file |
| id | cc5c6982-4910-471e-b864-1098015901b5 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | ae7a98326b9c455588edd2656d723b9d |
| protected | False |
| schema | /v2/schemas/image |
| size | 13200896 |
| status | active |
| tags | |
| updated_at | 2015-03-26T16:52:10Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------------------+
For information about the openstack image create
parameters, see Image
service command-line client in the OpenStack
Command-Line
Interface
Reference.
For information about disk and
container formats for images, see Disk and
container formats for images in the OpenStack
Virtual
Machine
Image
Guide.
Note
OpenStack generates IDs
dynamically, so you will see different values in the example command output.
4.
Confirm upload of the image and
validate attributes:
$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
+--------------------------------------+--------+--------+
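To inspect the full attribute set of a single image rather than the summary list, you can also run:

```shell
# Show all stored attributes (checksum, size, visibility, and so on)
# for the image uploaded above.
openstack image show cirros
```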
Compute service
Compute service overview
Use OpenStack Compute to host and manage cloud computing
systems. OpenStack Compute is a major part of an Infrastructure-as-a-Service (IaaS)
system. The main modules are implemented in Python.
OpenStack Compute interacts
with OpenStack Identity for authentication; OpenStack Image service for disk and
server images; and OpenStack dashboard for the user and administrative
interface. Image access is limited by projects, and by users; quotas are
limited per project (the number of instances, for example). OpenStack Compute
can scale horizontally on standard hardware, and download images to launch
instances.
OpenStack Compute consists of
the following areas and their components:
nova-api service
Accepts and responds to end user compute API calls. The service
supports the OpenStack Compute API, the Amazon EC2 API, and a special Admin API
for privileged users to perform administrative actions. It enforces some
policies and initiates most orchestration activities, such as running an
instance.
nova-api-metadata service
Accepts metadata requests from instances. The nova-api-metadata service is generally used when you run in multi-host mode with nova-network installations. For
details, see Metadata
service in the OpenStack
Administrator Guide.
nova-compute service
A worker daemon that
creates and terminates virtual machine instances through hypervisor APIs. For
example:
·
XenAPI for XenServer/XCP
·
libvirt for KVM or QEMU
·
VMwareAPI for VMware
Processing is fairly
complex. Basically, the daemon accepts actions from the queue and performs a
series of system commands such as launching a KVM instance and updating its
state in the database.
nova-scheduler service
Takes a virtual machine instance request from the queue and determines
on which compute server host it runs.
nova-conductor module
Mediates interactions between the nova-compute service and the
database. It eliminates direct accesses to the cloud database made by the nova-compute service. The nova-conductor module scales
horizontally. However, do not deploy it on nodes where the nova-compute service runs. For more
information, see Configuration
Reference Guide.
nova-cert module
A server daemon that serves the Nova Cert service for X509
certificates. Used to generate certificates for euca-bundle-image.
Only needed for the EC2 API.
nova-network worker daemon
Similar to the nova-compute service, accepts networking tasks from the queue and manipulates
the network. Performs tasks such as setting up bridging interfaces or changing
IPtables rules.
nova-consoleauth daemon
Authorizes tokens for users that console proxies provide. See nova-novncproxy and nova-xvpvncproxy.
This service must be running for console proxies to work. You can run proxies
of either type against a single nova-consoleauth service in a cluster
configuration. For information, see About
nova-consoleauth.
nova-novncproxy daemon
Provides a proxy for accessing running instances through a VNC
connection. Supports browser-based novnc clients.
nova-spicehtml5proxy daemon
Provides a proxy for accessing running instances through a SPICE
connection. Supports browser-based HTML5 client.
nova-xvpvncproxy daemon
Provides a proxy for accessing running instances through a VNC
connection. Supports an OpenStack-specific Java client.
nova client
Enables users to submit commands as a tenant administrator or
end user.
The queue
A central hub for passing messages between daemons. Usually
implemented with RabbitMQ, but
can be implemented with another message queue, such as ZeroMQ.
SQL database
Stores most build-time
and run-time states for a cloud infrastructure, including:
·
Available instance types
·
Instances in use
·
Available networks
·
Projects
Theoretically, OpenStack
Compute can support any database that SQLAlchemy supports. Common databases
are SQLite3 for test and development work, MySQL, and PostgreSQL.
Install and configure controller node
Prerequisites
1.
To create the databases, complete
these steps:
Use the database
access client to connect to the database server as the root
user:
$ mysql -u root -p
Create the nova_api and nova databases:
CREATE DATABASE nova_api;
CREATE DATABASE nova;
Grant proper access
to the databases:
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
Replace NOVA_DBPASS
with a suitable password.
Exit the database
access client.
2.
Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
3.
To create the service credentials, complete these steps:
Create the nova user:
$ openstack user create --domain default \
--password-prompt nova
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | e0353a670a9e496da891347c589539e9 |
| enabled | True |
| id | 8c46e4760902464b889293a74a0c90a8 |
| name | nova |
+-----------+----------------------------------+
Add the admin role to the nova user:
$ openstack role add --project service --user nova admin
Note
This command provides
no output.
Create the nova service entity:
$ openstack service create --name nova \
--description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 060d59eac51b4594815603d75a00aba2 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
4.
Create the Compute service API endpoints:
$ openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 3c1caa473bfe4390a11e7177894bcc7b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | e702f6f497ed42e6a8ae3ba2e5871c78 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
$ openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | e3c918de680746a586eac1f2d9bc10ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | e702f6f497ed42e6a8ae3ba2e5871c78 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
$ openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 38f7af91666a47cfb97b4dc790b94424 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | e702f6f497ed42e6a8ae3ba2e5871c78 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
Install and configure components
Note
Default configuration files vary by
distribution. You might need to add these sections and options rather than
modifying existing sections and options. Also, an ellipsis (...)
in the configuration snippets indicates potential default configuration options
that you should retain.
1.
Install the packages:
# yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler
2.
Edit the /etc/nova/nova.conf
file and complete the following actions:
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
...
enabled_apis = osapi_compute,metadata
In the [api_database] and [database] sections,
configure database access:
[api_database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
Replace NOVA_DBPASS
with the password you chose for the Compute databases.
In the [DEFAULT] and [oslo_messaging_rabbit]
sections, configure RabbitMQ message queue access:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace RABBIT_PASS
with the password you chose for the openstack account in
RabbitMQ.
In the [DEFAULT] and [keystone_authtoken]
sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS
with the password you chose for the nova user in the
Identity service.
Note
Comment out or
remove any other options in the [keystone_authtoken]
section.
In the [DEFAULT] section, configure the my_ip
option to use the management interface IP address of the controller node:
[DEFAULT]
...
my_ip = 10.0.0.11
In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Note
By default, Compute
uses an internal firewall driver. Since the Networking service includes a
firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
In the [vnc] section, configure the VNC proxy to use the management
interface IP address of the controller node:
[vnc]
...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
In the [glance] section, configure the location of the Image service
API:
[glance]
...
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
3.
Populate the Compute databases:
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova
Note
Ignore any
deprecation messages in this output.
Finalize installation
Start the Compute
services and configure them to start when the system boots:
# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
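As with the Image service, you can confirm that the units started cleanly before continuing:

```shell
# All five units should report "active".
systemctl is-active openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
```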
Install and configure a compute node
This section describes how to install and
configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or VMs. For simplicity,
this configuration uses the QEMU hypervisor with
the KVM extension on
compute nodes that support hardware acceleration for virtual machines. On
legacy hardware, this configuration uses the generic QEMU hypervisor. You can
follow these instructions with minor modifications to horizontally scale your
environment with additional compute nodes.
Note
This section assumes that you are following
the instructions in this guide step-by-step to configure the first compute
node. If you want to configure additional compute nodes, prepare them in a
similar fashion to the first compute node in the example architectures
section. Each additional compute node requires a unique IP address.
Install and configure components
Note
Default configuration files vary by
distribution. You might need to add these sections and options rather than
modifying existing sections and options. Also, an ellipsis (...)
in the configuration snippets indicates potential default configuration options
that you should retain.
Install the packages:
# yum install openstack-nova-compute
Edit the /etc/nova/nova.conf
file and complete the following actions:
In the [DEFAULT] and
[oslo_messaging_rabbit] sections, configure RabbitMQ message
queue access:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace RABBIT_PASS
with the password you chose for the openstack account in
RabbitMQ.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service
access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with
the password you chose for the nova user in the Identity
service.
Note
Comment out or remove any other options in
the [keystone_authtoken] section.
In the [DEFAULT]
section, configure the my_ip option:
[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS
with the IP address of the management network interface on your compute node,
typically 10.0.0.31 for the first node in the example architecture.
In the [DEFAULT]
section, enable support for the Networking service:
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Note
By default, Compute uses an internal firewall
service. Since Networking includes a firewall service, you must disable the
Compute firewall service by using the
nova.virt.firewall.NoopFirewallDriver
firewall driver.
In the [vnc] section,
enable and configure remote console access:
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP
addresses and the proxy component only listens on the management interface IP
address of the compute node. The base URL indicates the location where you can
use a web browser to access remote consoles of instances on this compute node.
Note
If the web browser to access remote consoles
resides on a host that cannot resolve the controller
hostname, you must replace controller with the
management interface IP address of the controller node.
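One way to handle that case without changing nova.conf is a hosts-file entry on the browser host; a sketch, assuming the example management address 10.0.0.11 used elsewhere in this guide:

```shell
# On the machine running the web browser (not the compute node):
# map the controller hostname to its management IP address.
echo '10.0.0.11 controller' | sudo tee -a /etc/hosts
```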
In the [glance]
section, configure the location of the Image service API:
[glance]
...
api_servers = http://controller:9292
In the [oslo_concurrency]
section, configure the lock path:
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
Finalize installation
Determine whether your compute node supports
hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.
If this command returns a value of zero, your compute node does not support hardware acceleration
and you must configure libvirt to use QEMU instead of
KVM.
Edit the [libvirt]
section in the /etc/nova/nova.conf file as follows:
[libvirt]
...
virt_type = qemu
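The detection and fallback steps above can be combined into a single guarded edit; this sketch assumes crudini is available (otherwise edit the file by hand as shown):

```shell
# Fall back to plain QEMU only when no VT-x/AMD-V flags are present.
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    crudini --set /etc/nova/nova.conf libvirt virt_type qemu
fi
```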
Start the Compute service including its
dependencies and configure them to start automatically when the system boots:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Verify operation
Verify
operation of the Compute service.
Note
Perform these commands on the controller
node.
Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
List service components
to verify successful launch and registration of each process:
$ openstack compute service list
+----+--------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+--------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | controller | internal | enabled | up | 2016-02-09T23:11:15.000000 |
| 2 | nova-scheduler | controller | internal | enabled | up | 2016-02-09T23:11:15.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2016-02-09T23:11:16.000000 |
| 4 | nova-compute | compute1 | nova | enabled | up | 2016-02-09T23:11:20.000000 |
+----+--------------------+------------+----------+---------+-------+----------------------------+
Note
This output should indicate
three service components enabled on the controller node and one service
component enabled on the compute node.
Note
If these commands fail with an "Unknown Error (HTTP 503)" message, try adding the following option to the [keystone_authtoken] section of /etc/nova/nova.conf and /etc/glance/glance-api.conf, then restart the affected services:
auth_uri = http://controller:5000/v2.0