Friday, July 29, 2016

Docker 1.11 installation and run on CentOS 7: configure and use a local registry, using the Docker registry image as your local repository



A production Docker registry should be run with TLS and other security enabled. This post only covers bringing up an insecure local registry for testing, by running the Docker registry container so that it serves as a local image repository.



Get the Docker repo configured for YUM 

$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

yum clean all
yum repolist
yum -y install docker-engine


# Confirm the docker engine RPMs are installed


[root@compute1 ~]# rpm -qa | grep -i docker
docker-engine-selinux-1.11.2-1.el7.centos.noarch
docker-engine-1.11.2-1.el7.centos.x86_64
[root@compute1 ~]#


Start the docker service and optionally enable the autostart. 

systemctl start docker.service
systemctl status docker.service


systemctl enable docker.service

## Check if the docker service is working fine

Try pulling an image from Docker Hub and running it:

docker run -it centos:latest /bin/bash
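The smoke test can be wrapped in a small script. This is a sketch, not from the original post: smoke_test is a hypothetical helper, and the DOCKER variable lets you dry-run it (e.g. DOCKER=echo) on a host without a docker daemon.

```shell
#!/bin/sh
# Minimal smoke test for the docker engine (a sketch; DOCKER can be
# overridden, e.g. DOCKER=echo, to dry-run without a docker daemon).
DOCKER=${DOCKER:-docker}

smoke_test() {
    # Run a throwaway container; --rm removes it on exit.
    $DOCKER run --rm centos:latest /bin/true && echo "docker engine OK"
}
```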


# Download the latest registry image from Docker Hub (you can pull and then run, or run directly; docker run pulls the image automatically on first use)
docker pull registry 

[root@compute1 yum.repos.d]# docker pull registry
Using default tag: latest
latest: Pulling from library/registry
8387d9ff0016: Pull complete
3b52deaaf0ed: Pull complete
4bd501fad6de: Pull complete
a3ed95caeb02: Pull complete
1d4dc7bffbb8: Pull complete
7c4baf947271: Pull complete
e14b922ad4f5: Pull complete
f1d1dbdd4f97: Pull complete
f2bbca3948d0: Pull complete
4e3899dc28fa: Pull complete
Digest: sha256:f374c0d9b59e6fdf9f8922d59e946b05fbeabaed70b0639d7b6b524f3299e87b
Status: Downloaded newer image for registry:latest


# See all the downloaded docker images

[root@compute1 yum.repos.d]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED                  SIZE
centos              latest              50dae1ee8677        Less than a second ago   196.7 MB
registry            2                   8ff6a4aae657        4 weeks ago              171.5 MB
registry            latest              bca04f698ba8        5 months ago             422.8 MB
[root@compute1 yum.repos.d]#


# Start the registry container; we use --restart=always so that it is restarted automatically.

docker run -d -p 5000:5000 --restart=always --name=localregistry registry:2
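Besides docker ps, the registry's v2 HTTP API gives a quick health check. A sketch, assuming the registry listens on localhost:5000 as started above; check_registry is a hypothetical helper, and CURL is overridable (e.g. CURL=echo) for a dry run without a running registry.

```shell
#!/bin/sh
# Quick health check against the registry's v2 HTTP API (a sketch;
# the URL is an assumption, and CURL can be overridden for a dry run).
CURL=${CURL:-curl -s}
REGISTRY_URL=${REGISTRY_URL:-http://localhost:5000}

check_registry() {
    # /v2/_catalog lists the repositories the registry holds.
    $CURL "$REGISTRY_URL/v2/_catalog"
}
```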

# confirm that the registry container is up

docker ps

[root@compute1 yum.repos.d]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
01057f771545        registry:2          "/bin/registry serve "   12 minutes ago      Up 12 minutes       0.0.0.0:5000->5000/tcp   localregistry
[root@compute1 yum.repos.d]#


# push the CentOS image to the local registry
# tag the image first to be pushed to the local registry

docker tag centos localhost:5000/centos

# confirm the tag has been given

docker images


[root@compute1 yum.repos.d]# docker images
REPOSITORY              TAG                 IMAGE ID            CREATED                  SIZE
centos                  latest              50dae1ee8677        Less than a second ago   196.7 MB
localhost:5000/centos   latest              50dae1ee8677        Less than a second ago   196.7 MB
registry                2                   8ff6a4aae657        4 weeks ago              171.5 MB
registry                latest              bca04f698ba8        5 months ago             422.8 MB
[root@compute1 yum.repos.d]#


# now push the tagged image to the local registry

docker push localhost:5000/centos


[root@compute1 yum.repos.d]# docker push localhost:5000/centos
The push refers to a repository [localhost:5000/centos]
0fe55794a0f7: Pushed
latest: digest: sha256:e513d34ffa01c803fc812a479303fe0a4c14673f84301b877bec060578865f1b size: 529


# you can see that the push goes to the local repository reference (localhost:5000/centos)
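The tag-then-push steps can be wrapped in one helper. A sketch under the same assumptions as above (registry on localhost:5000); push_local is a hypothetical name, and DOCKER can be overridden (e.g. DOCKER=echo) for a dry run.

```shell
#!/bin/sh
# Helper wrapping the tag-then-push steps (a sketch; DOCKER can be
# overridden, e.g. DOCKER=echo, to dry-run without a daemon).
DOCKER=${DOCKER:-docker}
REGISTRY=${REGISTRY:-localhost:5000}

push_local() {
    image=$1
    # Tag the image with the local registry prefix, then push it.
    $DOCKER tag "$image" "$REGISTRY/$image" &&
    $DOCKER push "$REGISTRY/$image"
}
```

For example, push_local centos reproduces the two commands shown above.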

# Pull the image from the local repository

docker pull localhost:5000/centos


[root@compute1 yum.repos.d]# docker push localhost:5000/centos
The push refers to a repository [localhost:5000/centos]
0fe55794a0f7: Layer already exists
latest: digest: sha256:7b754086d2c7d74ac39dc0a2545d7b06d4266f873d502feb5b3e8bfca27c5dd8 size: 507
[root@compute1 yum.repos.d]#


Note: the transcript above is from running the push a second time; as the image layers are already present in the local registry, it reports "Layer already exists".

# Start the container with the image which you have in the local registry

docker run -it localhost:5000/centos /bin/bash

# Let's make some changes inside the container: I will install httpd from the CentOS repos


[root@compute1 yum.repos.d]# docker run -it localhost:5000/centos "/bin/bash"
[root@dc110984e8f0 /]#
[root@dc110984e8f0 /]#
[root@dc110984e8f0 /]# yum install httpd


[root@dc110984e8f0 etc]# rpm -qa | grep -i http
httpd-tools-2.4.6-40.el7.centos.4.x86_64
httpd-2.4.6-40.el7.centos.4.x86_64
[root@dc110984e8f0 etc]#
[root@dc110984e8f0 etc]#
[root@dc110984e8f0 etc]#

# open another session to the docker host and commit the running container after the httpd installation

# from other terminal

docker ps


[root@compute1 ~]# docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                    NAMES
8c5bcba48f2d        localhost:5000/centos   "/bin/bash"              3 minutes ago       Up 3 minutes                                 distracted_blackwell
01057f771545        registry:2              "/bin/registry serve "   38 minutes ago      Up 38 minutes       0.0.0.0:5000->5000/tcp   localregistry
[root@compute1 ~]#

# Commit the changes to a new image (the httpd install is a change on top of the container's base image). Committing saves the container's current state as a new image; the changes are stored as a layered filesystem on top of the base layers.


[root@compute1 ~]# docker commit 8c5bcba48f2d localhost:5000/centos_with_test_http_install
sha256:9e047acbf61557af27e84054042d8d6f4d58202b9c27db8b9372c49e41d81892
[root@compute1 ~]#
[root@compute1 ~]#
[root@compute1 ~]#


# confirm this is seen as an image

docker images

[root@compute1 ~]# docker images
REPOSITORY                                     TAG                 IMAGE ID            CREATED                  SIZE
centos                                         latest              50dae1ee8677        Less than a second ago   196.7 MB
localhost:5000/centos                          latest              50dae1ee8677        Less than a second ago   196.7 MB
localhost:5000/centos_with_test_http_install   latest              9e047acbf615        54 seconds ago           316.8 MB
registry                                       2                   8ff6a4aae657        4 weeks ago              171.5 MB
registry                                       latest              bca04f698ba8        5 months ago             422.8 MB
[root@compute1 ~]#
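As a design note: docker commit captures manual changes, but the same image can also be built reproducibly from a Dockerfile. A hypothetical sketch, assuming the local-registry base image tagged above:

```dockerfile
# Hypothetical Dockerfile equivalent of the manual httpd install + commit
FROM localhost:5000/centos
RUN yum -y install httpd && yum clean all
```

It would be built with something like: docker build -t localhost:5000/centos_with_test_http_install .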


# stop the container and delete the image localhost:5000/centos
We remove this base image tag because from here on we will use the image that has httpd installed on top of the base.

[root@compute1 ~]# docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                    NAMES
8c5bcba48f2d        localhost:5000/centos   "/bin/bash"              12 minutes ago      Up 12 minutes                                distracted_blackwell
01057f771545        registry:2              "/bin/registry serve "   47 minutes ago      Up 47 minutes       0.0.0.0:5000->5000/tcp   localregistry

# Stop the container that is running CentOS with Apache (it was started from the local-registry CentOS image)

[root@compute1 ~]# docker stop 8c5bcba48f2d
8c5bcba48f2d
[root@compute1 ~]#
[root@compute1 ~]#


# Ensure the container is stopped

[root@compute1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
01057f771545        registry:2          "/bin/registry serve "   47 minutes ago      Up 47 minutes       0.0.0.0:5000->5000/tcp   localregistry
[root@compute1 ~]#
[root@compute1 ~]#


# Remove the local image with the name localhost:5000/centos


[root@compute1 ~]# docker images
REPOSITORY                                     TAG                 IMAGE ID            CREATED                  SIZE
centos                                         latest              50dae1ee8677        Less than a second ago   196.7 MB
localhost:5000/centos                          latest              50dae1ee8677        Less than a second ago   196.7 MB
localhost:5000/centos_with_test_http_install   latest              9e047acbf615        2 minutes ago            316.8 MB
registry                                       2                   8ff6a4aae657        4 weeks ago              171.5 MB
registry                                       latest              bca04f698ba8        5 months ago             422.8 MB



[root@compute1 ~]# docker rmi localhost:5000/centos
Untagged: localhost:5000/centos:latest
[root@compute1 ~]#
[root@compute1 ~]#


# Ensure the image is gone

[root@compute1 ~]# docker images
REPOSITORY                                     TAG                 IMAGE ID            CREATED                  SIZE
centos                                         latest              50dae1ee8677        Less than a second ago   196.7 MB
localhost:5000/centos_with_test_http_install   latest              9e047acbf615        2 minutes ago            316.8 MB
registry                                       2                   8ff6a4aae657        4 weeks ago              171.5 MB
registry                                       latest              bca04f698ba8        5 months ago             422.8 MB
[root@compute1 ~]#

# Spin up a container from the image that has httpd installed on the CentOS base

[root@compute1 yum.repos.d]# docker run localhost:5000/centos_with_test_http_install  "/bin/bash"
[root@compute1 yum.repos.d]# docker run -it localhost:5000/centos_with_test_http_install "/bin/bash"
[root@f3bcbb08427a /]#
[root@f3bcbb08427a /]#
[root@f3bcbb08427a /]# rpm -qa | grep -i http
httpd-tools-2.4.6-40.el7.centos.4.x86_64
httpd-2.4.6-40.el7.centos.4.x86_64
[root@f3bcbb08427a /]#


# so you can see the container comes up with the test Apache installation already in place.

Sunday, July 10, 2016

OpenStack Mitaka + CentOS7: Unit openstack-cinder-volume.service entered failed state. ImportError: No module named keystonemiddleware.auth_token.__init__ | Requires install of python-keystone on the cinder server


The fix is to install python-keystone on the cinder server.

Scenario:

Following the Mitaka installation guide on CentOS 7, the controller and cinder roles were on separate servers. After completing the cinder configuration on both the controller and the cinder server, cinder service-list on the controller showed only the cinder-scheduler running on the controller; the cinder-volume service running on the cinder server was not listed.


[root@controller1 ~]# cinder service-list
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |           Host          | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1.example.com | nova | enabled |   up  | 2016-07-10T22:05:35.000000 |        -        |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
[root@controller1 ~]#

Also, the openstack-cinder-volume service on the cinder server kept restarting on its own, with the following messages in /var/log/messages on the cinder server.



Jul 10 18:18:57 bstore1 cinder-volume: File "/usr/lib/python2.7/site-packages/cinder/volume/volume_types.py", line 31, in <module>
Jul 10 18:18:57 bstore1 cinder-volume: from cinder import quota
Jul 10 18:18:57 bstore1 cinder-volume: File "/usr/lib/python2.7/site-packages/cinder/quota.py", line 33, in <module>
Jul 10 18:18:57 bstore1 cinder-volume: from cinder import quota_utils
Jul 10 18:18:57 bstore1 cinder-volume: File "/usr/lib/python2.7/site-packages/cinder/quota_utils.py", line 31, in <module>
Jul 10 18:18:57 bstore1 cinder-volume: 'keystone_authtoken')
Jul 10 18:18:57 bstore1 cinder-volume: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2357, in import_opt
Jul 10 18:18:57 bstore1 cinder-volume: __import__(module_str)
Jul 10 18:18:57 bstore1 cinder-volume: ImportError: No module named keystonemiddleware.auth_token.__init__
Jul 10 18:18:58 bstore1 systemd: openstack-cinder-volume.service: main process exited, code=exited, status=1/FAILURE
Jul 10 18:18:58 bstore1 systemd: Unit openstack-cinder-volume.service entered failed state.
Jul 10 18:18:58 bstore1 systemd: openstack-cinder-volume.service failed.
Jul 10 18:18:58 bstore1 systemd: openstack-cinder-volume.service holdoff time --------------



The FIX:

The fix was to install the python-keystone RPM, which brings in python-keystonemiddleware (the missing module) as a dependency and got things working.

Installed:
  python-keystone.noarch 1:9.0.2-1.el7

Dependency Installed:
  PyPAM.x86_64 0:0.5.0-19.el7
  python-dogpile-cache.noarch 0:0.5.7-3.el7
  python-dogpile-core.noarch 0:0.4.1-2.el7
  python-keystonemiddleware.noarch 0:4.4.1-1.el7
  python-ldap.x86_64 0:2.4.15-2.el7
  python-ldappool.noarch 0:1.0-4.el7
  python-memcached.noarch 0:1.54-3.el7
  python-oauthlib.noarch 0:0.7.2-5.20150520git514cad7.el7
  python-pycadf.noarch 0:2.1.0-1.el7
  python-pysaml2.noarch 0:3.0.2-1.el7
  python-repoze-who.noarch 0:2.1-1.el7
  python-zope-interface.x86_64 0:4.0.5-4.el7
  python2-oslo-cache.noarch 0:1.5.0-1.el7
  python2-passlib.noarch 0:1.6.5-1.el7

Complete!
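A quick way to confirm the missing module is now importable is a one-line Python check. A sketch, not from the original post: check_module is a hypothetical helper, and PYTHON is overridable (e.g. PYTHON=echo) so it can be dry-run on a box without the OpenStack Python packages.

```shell
#!/bin/sh
# Verify a Python module can be imported (a sketch; PYTHON can be
# overridden, e.g. PYTHON=echo, for a dry run).
PYTHON=${PYTHON:-python}

check_module() {
    module=$1
    if $PYTHON -c "import $module" >/dev/null 2>&1; then
        echo "$module OK"
    else
        echo "$module MISSING"
    fi
}
```

For example: check_module keystonemiddleware.auth_token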

This successfully started the openstack-cinder-volume service on the cinder server.

This is /var/log/messages on the cinder server after the successful start of the openstack-cinder-volume service:

Jul 10 19:08:17 bstore1 cinder-volume: 2016-07-10 19:08:17.892 5793 INFO cinder.volume.manager [req-74d43fcd-c4ec-4a37-8d27-033c2f2eb86e - - - - -] Service not found for updating active_backend_id, assuming default for driver init.
Jul 10 19:08:18 bstore1 cinder-volume: 2016-07-10 19:08:18.095 5793 INFO cinder.volume.manager [req-74d43fcd-c4ec-4a37-8d27-033c2f2eb86e - - - - -] Image-volume cache disabled for host bstore1.example.com@lvm.
Jul 10 19:08:18 bstore1 cinder-volume: 2016-07-10 19:08:18.100 5793 INFO oslo_service.service [req-74d43fcd-c4ec-4a37-8d27-033c2f2eb86e - - - - -] Starting 1 workers
Jul 10 19:08:18 bstore1 cinder-volume: 2016-07-10 19:08:18.123 5807 INFO cinder.service [-] Starting cinder-volume node (version 8.0.0)
Jul 10 19:08:18 bstore1 cinder-volume: 2016-07-10 19:08:18.149 5807 INFO cinder.volume.manager [req-41ea7a90-0c1f-4dbe-8b1e-e35db3ba11a4 - - - - -] Starting volume driver LVMVolumeDriver (3.0.0)
Jul 10 19:08:19 bstore1 cinder-volume: 2016-07-10 19:08:19.882 5807 INFO cinder.volume.manager [req-41ea7a90-0c1f-4dbe-8b1e-e35db3ba11a4 - - - - -] Driver initialization completed successfully.
Jul 10 19:08:20 bstore1 cinder-volume: 2016-07-10 19:08:20.017 5807 INFO cinder.volume.manager [req-41ea7a90-0c1f-4dbe-8b1e-e35db3ba11a4 - - - - -] Initializing RPC dependent components of volume driver LVMVolumeDriver (3.0.0)
Jul 10 19:08:20 bstore1 cinder-volume: 2016-07-10 19:08:20.516 5807 INFO cinder.volume.manager [req-41ea7a90-0c1f-4dbe-8b1e-e35db3ba11a4 - - - - -] Driver post RPC initialization completed successfully.

The controller also now correctly shows the status of both the cinder-scheduler running on the controller server (controller1.example.com here) and the cinder-volume running on the cinder server (bstore1.example.com here).

[root@controller1 ~]# cinder service-list
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |           Host          | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1.example.com | nova | enabled |   up  | 2016-07-10T23:10:40.000000 |        -        |
|  cinder-volume   | bstore1.example.com@lvm | nova | enabled |   up  | 2016-07-10T23:10:40.000000 |        -        |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
[root@controller1 ~]#


Bad configuration issue with the OpenStack Mitaka Cinder service causing "cinder service-list ERROR: Service Unavailable (HTTP 503)" and "Authorization failed. The request you have made requires authentication. from 192.168.100.138"


Error returned at the command prompt:

ERROR: Service Unavailable (HTTP 503)

Error on the controller server /var/log/keystone/keystone.log


2016-07-10 17:44:35.346 3533 INFO keystone.common.wsgi [req-220e3bf4-2382-4f2c-8c46-55539f9ae098 - - - - -] POST http://controller1.example.com:35357/v3/auth/tokens
2016-07-10 17:44:35.998 3533 WARNING keystone.common.wsgi [req-220e3bf4-2382-4f2c-8c46-55539f9ae098 - - - - -] Authorization failed. The request you have made requires authentication. from 192.168.100.138

The fix was as below.

This turned out not to be a problem with the configuration files: /etc/cinder/cinder.conf and /etc/nova/nova.conf on the controller server controller1.example.com were correct per the Mitaka installation document on CentOS (www.openstack.org), and /etc/cinder/cinder.conf on the cinder server bstore1.example.com was also correctly set.

The cinder database user and the cinder OpenStack service user both had passwords set, and the same passwords were being used in the /etc/cinder/cinder.conf configuration files.

The issue finally came down to the fact that after the cinder service user was created on the controller server, it was never given the admin role on the service project.

The fix was to grant the admin role to the cinder user on the service project on the controller server, after which things ran fine.

# . admin-openrc -> to source the admin credentials.

# openstack role add --project service --user cinder admin

After this, things started working:

[root@controller1 ~]# cinder service-list
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |           Host          | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1.example.com | nova | enabled |   up  | 2016-07-10T22:05:35.000000 |        -        |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
[root@controller1 ~]#
[root@controller1 ~]#
[root@controller1 ~]#
[root@controller1 ~]#
[root@controller1 ~]#
[root@controller1 ~]#
[root@controller1 ~]# cinder list
+----+--------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+----+--------+------+------+-------------+----------+-------------+
+----+--------+------+------+-------------+----------+-------------+

[root@controller1 ~]#



Also, /var/log/keystone/keystone.log started showing successful authorization for cinder:

2016-07-10 17:47:28.388 3536 INFO keystone.token.providers.fernet.utils [req-2ceda5e9-1ca9-41d9-b0c9-40bf8e60208d - - - - -] Loaded 2 encryption keys (max_active_keys=3) from: /etc/keystone/fernet-keys/
2016-07-10 17:47:28.422 3537 INFO keystone.token.providers.fernet.utils [req-a2e5f7c5-6c34-4202-8b4d-2344541b5ea2 - - - - -] Loaded 2 encryption keys (max_active_keys=3) from: /etc/keystone/fernet-keys/
2016-07-10 17:47:28.668 3537 INFO keystone.common.wsgi [req-a2e5f7c5-6c34-4202-8b4d-2344541b5ea2 31a638cfa16a42019ef1d7255a157a47 95b5e18a71274873b72d7faba0cb4365 - ca7e0958d3c84cc8800c8227fe84823a ca7e0958d3c84cc8800c8227fe84823a] GET http://controller1.example.com:35357/v3/auth/tokens
2016-07-10 17:47:28.670 3537 INFO keystone.token.providers.fernet.utils [req-a2e5f7c5-6c34-4202-8b4d-2344541b5ea2 31a638cfa16a42019ef1d7255a157a47 95b5e18a71274873b72d7faba0cb4365 - ca7e0958d3c84cc8800c8227fe84823a ca7e0958d3c84cc8800c8227fe84823a] Loaded 2 encryption keys (max_active_keys=3) from: /etc/keystone/fernet-keys/
2016-07-10 17:47:28.908 3537 INFO keystone.token.providers.fernet.utils [req-a2e5f7c5-6c34-4202-8b4d-2344541b5ea2 31a638cfa16a42019ef1d7255a157a47 95b5e18a71274873b72d7faba0cb4365 - ca7e0958d3c84cc8800c8227fe84823a ca7e0958d3c84cc8800c8227fe84823a] Loaded 2 encryption keys (max_active_keys=3) from: /etc/keystone/fernet-keys/


Note: a similar issue can be caused when the passwords for the cinder database user or the cinder service user differ from the ones used in the cinder configuration file /etc/cinder/cinder.conf on the controller and the cinder servers.



Wednesday, July 6, 2016

Issues starting the nova conductor service OpenStack Mitaka: AMQP server controller1.example.com:5672 closed the connection. Check login credentials: Socket closed




Issues starting the nova-conductor service on the controller node. This results from a misconfiguration of the nova conductor settings in /etc/nova/nova.conf on the OpenStack controller node.

As usual, the best place to look for the cause of problems with the OpenStack Nova services on the controller is /var/log/nova/*. See the logs there.


The below message appears in /var/log/nova/nova-conductor.log

==> nova-conductor.log <==
2016-07-06 06:32:20.891 27161 ERROR oslo.messaging._drivers.impl_rabbit [req-7bff235c-2535-4d18-b9e0-351c6e23557c - - - - -] AMQP server controller1.example.com:5672 closed the connection. Check login credentials: Socket closed

This means that the user named in the Nova configuration for accessing RabbitMQ either does not exist, or exists but with a password that does not match the one configured in RabbitMQ.


My Nova Configuration file looked like this (/etc/nova/nova.conf on the controller server)

[DEFAULT]
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = controller1.example.com
rabbit_userid = openstack
rabbit_password = XXXXXXX



The Fix:



1) If the RabbitMQ user (here "openstack") has not been created, create it on the controller server:

rabbitmqctl add_user openstack XXXXXXX

where XXXXXXX is the password you want to give the user.

2) If there is a password mismatch between the one given in /etc/nova/nova.conf and the actual password in RabbitMQ, set a new password for the user (openstack here) and update it in the Nova configuration file /etc/nova/nova.conf:

rabbitmqctl change_password <user> <password>

Refer to the man pages of rabbitmqctl for more.

[root@controller1 nova]# rabbitmqctl --help | grep -i use
Error: could not recognise command
the correct suffix to use after the "@" sign. See rabbitmq-server(1) for
    add_user <username> <password>
    delete_user <username>
    change_password <username> <newpassword>
    clear_password <username>
            authenticate_user <username> <password>
    set_user_tags <username> <tag> ...
    list_users
    set_permissions [-p <vhost>] <user> <conf> <write> <read>
    clear_permissions [-p <vhost>] <username>
    list_user_permissions <username>
channels, protocol, auth_mechanism, user, vhost, timeout, frame_max,
user, vhost, transactional, confirm, consumer_count, messages_unacknowledged,
[root@controller1 nova]#
[root@controller1 nova]#
[root@controller1 nova]# rabbitmqctl change_password nova XXXXXXX
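Both fixes can be combined into one idempotent sketch: create the user if missing, otherwise reset its password, then grant full permissions on the default vhost (a newly added user has no permissions until set_permissions is run). ensure_rabbit_user is a hypothetical helper, and RMQ is overridable (e.g. RMQ=echo) for a dry run on a host without rabbitmq.

```shell
#!/bin/sh
# Sketch: make sure a rabbitmq user exists with the expected password
# and full permissions on the default vhost (RMQ can be overridden,
# e.g. RMQ=echo, for a dry run).
RMQ=${RMQ:-rabbitmqctl}

ensure_rabbit_user() {
    user=$1
    password=$2
    if $RMQ list_users | grep -q "^$user"; then
        $RMQ change_password "$user" "$password"
    else
        $RMQ add_user "$user" "$password"
    fi
    # Grant configure/write/read on everything in the default vhost.
    $RMQ set_permissions -p / "$user" ".*" ".*" ".*"
}
```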

OpenStack Mitaka Issue starting the Nova API service on the controller "ERROR nova MissingRequiredOptions: Auth plugin requires parameters which were not given: auth_url"





This issue was seen while starting the nova-api service on the controller node of the OpenStack setup. The reason was a misconfigured nova-api.

As usual for Nova-related services, the best place to look is the logs in /var/log/nova/* on the controller machine.


The below messages appear in /var/log/nova/nova-api.log:


==> nova-api.log <==

2016-07-06 06:32:16.947 27284 WARNING oslo_reports.guru_meditation_report [-] Guru mediation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
2016-07-06 06:32:17.505 27284 INFO oslo_service.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
2016-07-06 06:32:18.244 27284 INFO nova.api.openstack [-] Loaded extensions: ['extensions', 'flavors', 'image-metadata', 'image-size', 'images', 'ips', 'limits', 'os-access-ips', 'os-admin-actions', 'os-admin-password', 'os-agents', 'os-aggregates', 'os-assisted-volume-snapshots', 'os-attach-interfaces', 'os-availability-zone', 'os-baremetal-nodes', 'os-block-device-mapping', 'os-cells', 'os-certificates', 'os-cloudpipe', 'os-config-drive', 'os-console-auth-tokens', 'os-console-output', 'os-consoles', 'os-create-backup', 'os-deferred-delete', 'os-disk-config', 'os-evacuate', 'os-extended-availability-zone', 'os-extended-server-attributes', 'os-extended-status', 'os-extended-volumes', 'os-fixed-ips', 'os-flavor-access', 'os-flavor-extra-specs', 'os-flavor-manage', 'os-flavor-rxtx', 'os-floating-ip-dns', 'os-floating-ip-pools', 'os-floating-ips', 'os-floating-ips-bulk', 'os-fping', 'os-hide-server-addresses', 'os-hosts', 'os-hypervisors', 'os-instance-actions', 'os-instance-usage-audit-log', 'os-keypairs', 'os-lock-server', 'os-migrate-server', 'os-migrations', 'os-multinic', 'os-multiple-create', 'os-networks', 'os-networks-associate', 'os-pause-server', 'os-personality', 'os-preserve-ephemeral-rebuild', 'os-quota-class-sets', 'os-quota-sets', 'os-remote-consoles', 'os-rescue', 'os-scheduler-hints', 'os-security-group-default-rules', 'os-security-groups', 'os-server-diagnostics', 'os-server-external-events', 'os-server-groups', 'os-server-password', 'os-server-usage', 'os-services', 'os-shelve', 'os-simple-tenant-usage', 'os-suspend-server', 'os-tenant-networks', 'os-used-limits', 'os-user-data', 'os-virtual-interfaces', 'os-volumes', 'server-metadata', 'server-migrations', 'servers', 'versions']
2016-07-06 06:32:18.257 27284 CRITICAL nova [-] MissingRequiredOptions: Auth plugin requires parameters which were not given: auth_url
2016-07-06 06:32:18.257 27284 ERROR nova Traceback (most recent call last):
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/bin/nova-api", line 10, in <module>
2016-07-06 06:32:18.257 27284 ERROR nova     sys.exit(main())
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/cmd/api.py", line 57, in main
2016-07-06 06:32:18.257 27284 ERROR nova     server = service.WSGIService(api, use_ssl=should_use_ssl)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/service.py", line 366, in __init__
2016-07-06 06:32:18.257 27284 ERROR nova     self.app = self.loader.load_app(name)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/wsgi.py", line 497, in load_app
2016-07-06 06:32:18.257 27284 ERROR nova     return deploy.loadapp("config:%s" % self.config_path, name=name)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
2016-07-06 06:32:18.257 27284 ERROR nova     return loadobj(APP, uri, name=name, **kw)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 272, in loadobj
2016-07-06 06:32:18.257 27284 ERROR nova     return context.create()
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create
2016-07-06 06:32:18.257 27284 ERROR nova     return self.object_type.invoke(self)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
2016-07-06 06:32:18.257 27284 ERROR nova     **context.local_conf)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 55, in fix_call
2016-07-06 06:32:18.257 27284 ERROR nova     val = callable(*args, **kw)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/openstack/urlmap.py", line 160, in urlmap_factory
2016-07-06 06:32:18.257 27284 ERROR nova     app = loader.get_app(app_name, global_conf=global_conf)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 350, in get_app
2016-07-06 06:32:18.257 27284 ERROR nova     name=name, global_conf=global_conf).create()
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create
2016-07-06 06:32:18.257 27284 ERROR nova     return self.object_type.invoke(self)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
2016-07-06 06:32:18.257 27284 ERROR nova     **context.local_conf)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 55, in fix_call
2016-07-06 06:32:18.257 27284 ERROR nova     val = callable(*args, **kw)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/auth.py", line 79, in pipeline_factory_v21
2016-07-06 06:32:18.257 27284 ERROR nova     return _load_pipeline(loader, local_conf[CONF.auth_strategy].split())
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/auth.py", line 62, in _load_pipeline
2016-07-06 06:32:18.257 27284 ERROR nova     app = filter(app)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", line 1100, in auth_filter
2016-07-06 06:32:18.257 27284 ERROR nova     return AuthProtocol(app, conf)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", line 682, in __init__
2016-07-06 06:32:18.257 27284 ERROR nova     self._identity_server = self._create_identity_server()
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", line 1050, in _create_identity_server
2016-07-06 06:32:18.257 27284 ERROR nova     auth_plugin = self._get_auth_plugin()
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", line 995, in _get_auth_plugin
2016-07-06 06:32:18.257 27284 ERROR nova     return plugin_loader.load_from_options_getter(getter)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/keystoneauth1/loading/base.py", line 148, in load_from_options_getter
2016-07-06 06:32:18.257 27284 ERROR nova     return self.load_from_options(**kwargs)
2016-07-06 06:32:18.257 27284 ERROR nova   File "/usr/lib/python2.7/site-packages/keystoneauth1/loading/base.py", line 123, in load_from_options
2016-07-06 06:32:18.257 27284 ERROR nova     raise exceptions.MissingRequiredOptions(missing_required)
2016-07-06 06:32:18.257 27284 ERROR nova MissingRequiredOptions: Auth plugin requires parameters which were not given: auth_url
2016-07-06 06:32:18.257 27284 ERROR nova

The fix:


This happens if you have not defined, or have improperly defined, auth_url in /etc/nova/nova.conf.

In my case this is set in /etc/nova/nova.conf as:

auth_url = http://controller1.example.com:35357

Also worth noting is the related auth_uri entry in /etc/nova/nova.conf:

auth_uri = http://controller1.example.com:5000
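For reference, the surrounding [keystone_authtoken] section of /etc/nova/nova.conf looks roughly like the fragment below. This is a sketch based on the Mitaka install guide, with placeholder values matching this setup; consult the guide for the full option list.

```ini
[keystone_authtoken]
auth_uri = http://controller1.example.com:5000
auth_url = http://controller1.example.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = XXXXXXX
```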