Monday, June 27, 2016

oVirt Engine installation: setting up oVirt Engine 4.0 on a Fedora 23 virtual machine on VMware Workstation 12




In this setup:

oVirt Engine 4.0 is on fedora23s.example.com
the oVirt host will be on fedora23.example.com

The Engine is on the server fedora23s -- please note the letter "s" in the oVirt Engine hostname.


About the Setup:
  • Using Fedora 23 as the host.
  • Basic minimal installation of Fedora 23.
  • The Fedora 23 machine is installed and running on VMware Workstation version 12.
  • The base machine is AMD Opteron based; the machine on which oVirt Engine 4.0 is installed is one of the Fedora 23 servers.
  • The base machine is a laptop with SVM virtualization enabled in its BIOS.
  • The virtual machine processor settings for both the oVirt Engine and the oVirt host are set in the VM properties on VMware Workstation to "Virtualize Intel VT-x/EPT or AMD-V/RVI".


Fedora 23 Server Release

1) Download the RPM that installs the oVirt Engine repo

wget http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm

2) Clean up the yum cache

cd /var/cache/yum
rm -rf *
yum clean all
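
Fedora 23 defaults to dnf rather than yum, so an equivalent cleanup is simply:

dnf clean all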

3) Install the rpm to ensure that the oVirt Engine repos for version 4.0 are available

sudo dnf install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm

4) Create the yum cache again and confirm the repo is available.
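
A dnf sketch for rebuilding the metadata cache before checking:

dnf makecache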

[root@fedora23s yum.repos.d]# dnf repolist
Last metadata expiration check: 0:05:13 ago on Sun Jun 26 14:08:06 2016.
repo id               repo name                                           status
docker-main-repo      Docker main Repository                                  39
*fedora               Fedora 23 - x86_64                                  46,074
ovirt-4.0             Latest oVirt 4.0 Release                               180
ovirt-4.0-patternfly1 Copr repo for patternfly1 owned by patternfly            2
*updates              Fedora 23 - x86_64 - Updates                        19,666
virtio-win-stable     virtio-win builds roughly matching what was shipped      2
[root@fedora23s yum.repos.d]#

5) Install the oVirt-Engine

dnf -y install ovirt-engine

  
[root@fedora23s yum.repos.d]# dnf -y install ovirt-engine
Last metadata expiration check: 0:06:41 ago on Sun Jun 26 14:08:06 2016.
Dependencies resolved.
================================================================================
 Package                        Arch   Version                  Repository
                                                                           Size
================================================================================
Installing:
 aopalliance                    noarch 1.0-11.fc23              fedora     16 k

<output truncated here>

These are the RPMs which get installed on the fedora23 machine as a part of the oVirt-Engine 4.0 installation.
Installed:
  aopalliance.noarch 1.0-11.fc23
  apache-commons-codec.noarch 1.10-2.fc23
  apache-commons-collections.noarch 3.2.2-3.fc23
  apache-commons-compress.noarch 1.10-0.2.svn1684406.fc23
  apache-commons-configuration.noarch 1.10-5.fc23
  apache-commons-io.noarch 1:2.4-14.fc23
  apache-commons-jxpath.noarch 1.3-24.fc23
  apache-commons-lang.noarch 2.6-17.fc23
  apache-commons-logging.noarch 1.2-4.fc23
  apache-mina-mina-core.noarch 2.0.9-3.fc23
  apache-sshd.noarch 0.11.0-5.fc23
  apr.x86_64 1.5.2-2.fc23
  apr-util.x86_64 1.5.4-2.fc23
  atlas.x86_64 3.10.2-6.fc23
  audit-libs-python3.x86_64 2.5.1-1.fc23
  c3p0.noarch 0.9.5-0.2.pre8.fc23
  cracklib-python.x86_64 2.9.1-6.fc23
  dom4j.noarch 1.6.1-25.fc23
  ebay-cors-filter.noarch 1.0.1-4.fc23
  fedora-logos-httpd.noarch 22.0.0-2.fc23
  giflib.x86_64 4.1.6-14.fc23
  hamcrest-core.noarch 1.3-13.fc23
  httpcomponents-client.noarch 4.5-4.fc23
  httpcomponents-core.noarch 4.4.1-2.fc23
  httpd.x86_64 2.4.18-1.fc23
  httpd-filesystem.noarch 2.4.18-1.fc23
  httpd-tools.x86_64 2.4.18-1.fc23
  jackson.noarch 1.9.11-6.fc23
  jakarta-commons-httpclient.noarch 1:3.1-23.fc23
  java-1.8.0-openjdk.x86_64 1:1.8.0.60-14.b27.fc23
  java-1.8.0-openjdk-headless.x86_64 1:1.8.0.60-14.b27.fc23
  javapackages-tools.noarch 4.6.0-8.fc23
  javassist.noarch 3.18.1-4.fc23
  jboss-annotations-1.1-api.noarch 1.0.1-0.9.20120212git76e1a2.fc22
  jcip-annotations.noarch 1-17.20060626.fc23
  joda-time.noarch 2.8.1-1.tzdata2015e.fc23
  jsr-311.noarch 1.1.1-11.fc23
  junit.noarch 1:4.12-3.fc23
  libXfont.x86_64 1.5.1-3.fc23
  libXtst.x86_64 1.2.2-5.fc23
  libfontenc.x86_64 1.1.3-2.fc23
  libgfortran.x86_64 5.3.1-6.fc23
  libquadmath.x86_64 5.3.1-6.fc23
  libsemanage-python3.x86_64 2.4-4.fc23
  lksctp-tools.x86_64 1.0.16-4.fc23
  log4j12.noarch 1.2.17-10.fc23
  m2crypto.x86_64 0.22.5-2.fc23
  mchange-commons.noarch 0.2.7-1.fc21
  mod_ssl.x86_64 1:2.4.18-1.fc23
  novnc.noarch 0.5.1-3.fc23
  numpy.x86_64 1:1.9.2-2.fc23
  objectweb-asm.noarch 5.0.3-2.fc23
  objectweb-asm3.noarch 3.3.1-12.fc23
  openstack-java-cinder-client.noarch 3.1.1-2.fc23
  openstack-java-cinder-model.noarch 3.1.1-2.fc23
  openstack-java-client.noarch 3.1.1-2.fc23
  openstack-java-glance-client.noarch 3.1.1-2.fc23
  openstack-java-glance-model.noarch 3.1.1-2.fc23
  openstack-java-keystone-client.noarch 3.1.1-2.fc23
  openstack-java-keystone-model.noarch 3.1.1-2.fc23
  openstack-java-quantum-client.noarch 3.1.1-2.fc23
  openstack-java-quantum-model.noarch 3.1.1-2.fc23
  openstack-java-resteasy-connector.noarch 3.1.1-2.fc23
  otopi.noarch 1.5.0-1.fc23
  otopi-java.noarch 1.5.0-1.fc23
  ovirt-engine.noarch 4.0.0.6-1.fc23
  ovirt-engine-backend.noarch 4.0.0.6-1.fc23
  ovirt-engine-cli.noarch 3.6.2.0-1.fc23
  ovirt-engine-dashboard.noarch 1.0.0-0.2.20160610git5d210ea.fc23
  ovirt-engine-dbscripts.noarch 4.0.0.6-1.fc23
  ovirt-engine-dwh.noarch 4.0.0-2.git38f5db5.fc23
  ovirt-engine-dwh-setup.noarch 4.0.0-2.git38f5db5.fc23
  ovirt-engine-extension-aaa-jdbc.noarch 1.1.0-1.fc23
  ovirt-engine-extensions-api-impl.noarch 4.0.0.6-1.fc23
  ovirt-engine-lib.noarch 4.0.0.6-1.fc23
  ovirt-engine-restapi.noarch 4.0.0.6-1.fc23
  ovirt-engine-sdk-python.noarch 3.6.7.0-1.fc23
  ovirt-engine-setup.noarch 4.0.0.6-1.fc23
  ovirt-engine-setup-base.noarch 4.0.0.6-1.fc23
  ovirt-engine-setup-plugin-ovirt-engine.noarch 4.0.0.6-1.fc23
  ovirt-engine-setup-plugin-ovirt-engine-common.noarch 4.0.0.6-1.fc23
  ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch 4.0.0.6-1.fc23
  ovirt-engine-setup-plugin-websocket-proxy.noarch 4.0.0.6-1.fc23
  ovirt-engine-tools.noarch 4.0.0.6-1.fc23
  ovirt-engine-tools-backup.noarch 4.0.0.6-1.fc23
  ovirt-engine-userportal.noarch 4.0.0.6-1.fc23
  ovirt-engine-vmconsole-proxy-helper.noarch 4.0.0.6-1.fc23
  ovirt-engine-webadmin-portal.noarch 4.0.0.6-1.fc23
  ovirt-engine-websocket-proxy.noarch 4.0.0.6-1.fc23
  ovirt-engine-wildfly.x86_64 10.0.0-1.fc23
  ovirt-engine-wildfly-overlay.noarch 10.0.0-1.fc23
  ovirt-host-deploy.noarch 1.5.0-1.fc23
  ovirt-host-deploy-java.noarch 1.5.0-1.fc23
  ovirt-image-uploader.noarch 4.0.0-1.fc23
  ovirt-iso-uploader.noarch 4.0.0-1.fc23
  ovirt-setup-lib.noarch 1.0.2-1.fc23
  ovirt-vmconsole.noarch 1.0.3-1.fc23
  ovirt-vmconsole-proxy.noarch 1.0.3-1.fc23
  patternfly1.noarch 1.3.0-1.fc23
  policycoreutils-python-utils.x86_64 2.4-21.fc23
  policycoreutils-python3.x86_64 2.4-21.fc23
  postgresql.x86_64 9.4.8-1.fc23
  postgresql-jdbc.noarch 9.4.1200-2.fc23
  postgresql-libs.x86_64 9.4.8-1.fc23
  postgresql-server.x86_64 9.4.8-1.fc23
  publicsuffix-list.noarch 20160323-1.fc23
  pygpgme.x86_64 0.3-13.fc23
  pyliblzma.x86_64 0.5.3-14.fc23
  python-chardet.noarch 2.2.1-3.fc23
  python-daemon.noarch 1.6-8.fc23
  python-dnf-plugins-extras-common.noarch 0.0.12-1.fc23
  python-dnf-plugins-extras-versionlock.noarch 0.0.12-1.fc23
  python-hawkey.x86_64 0.6.2-3.fc23
  python-iniparse.noarch 0.4-16.fc23
  python-kitchen.noarch 1.2.1-3.fc23
  python-libcomps.x86_64 0.1.7-1.fc23
  python-librepo.x86_64 1.7.16-2.fc23
  python-libxml2.x86_64 2.9.3-2.fc23
  python-lockfile.noarch 1:0.10.2-2.fc23
  python-nose.noarch 1.3.7-4.fc23
  python-ovirt-engine-sdk4.x86_64 4.0.0-0.3.a3.fc23
  python-psycopg2.x86_64 2.6.1-1.fc23
  python-pycurl.x86_64 7.19.5.1-4.fc23
  python-urlgrabber.noarch 3.10.1-7.fc23
  python-websockify.noarch 0.6.0-3.fc23
  python2-dnf.noarch 1.1.9-2.fc23
  python3-cssselect.noarch 0.9.1-6.fc23
  python3-javapackages.noarch 4.6.0-8.fc23
  python3-lxml.x86_64 3.4.4-1.fc23
  pyxattr.x86_64 0.5.3-5.fc23
  quartz.noarch 2.2.1-4.fc23
  resteasy-core.noarch 3.0.6-9.fc23
  resteasy-jaxrs-api.noarch 3.0.6-9.fc23
  rpm-python.x86_64 4.13.0-0.rc1.13.fc23
  scannotation.noarch 1.0.3-0.10.r12.fc22
  slf4j.noarch 1.7.12-2.fc23
  slf4j-jdk14.noarch 1.7.12-2.fc23
  snmp4j.noarch 2.2.3-5.fc23
  spice-html5.noarch 0.1.6-2.fc23
  stax2-api.noarch 3.1.4-3.fc23
  ttmkfdir.x86_64 3.0.9-46.fc23
  tzdata-java.noarch 2016e-1.fc23
  vdsm-jsonrpc-java.noarch 1.2.3-1.fc23
  ws-commons-util.noarch 1.0.1-32.fc23
  xml-commons-apis.noarch 1.4.01-19.fc23
  xmlrpc-client.noarch 1:3.1.3-13.fc23
  xmlrpc-common.noarch 1:3.1.3-13.fc23
  xorg-x11-font-utils.x86_64 1:7.5-29.fc23
  xorg-x11-fonts-Type1.noarch 7.5-15.fc23
  yum.noarch 3.4.3-507.fc23
  yum-metadata-parser.x86_64 1.1.4-15.fc23
  yum-plugin-versionlock.noarch 1.1.31-508.fc23

Complete!
[root@fedora23s yum.repos.d]#

6) Disable the firewall, if any. (Please note that in production, or in your own setup, you may need firewalld running with the ports needed by the oVirt Engine open; refer to the documentation on the oVirt website for details.) We also disable the NetworkManager service.

systemctl status firewalld

[root@fedora23s ~]# systemctl stop firewalld
[root@fedora23s ~]# systemctl disable firewalld
[root@fedora23s ~]#
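
If you keep firewalld running instead, a minimal sketch of opening the web ports is below; the websocket-proxy and VM console port numbers are assumptions based on oVirt defaults, so check the oVirt documentation for the full list.

firewall-cmd --permanent --add-service=http     # engine web UI over HTTP
firewall-cmd --permanent --add-service=https    # engine web UI and REST API
firewall-cmd --permanent --add-port=6100/tcp    # websocket proxy (assumed default port)
firewall-cmd --permanent --add-port=2222/tcp    # VM console proxy SSH (assumed default port)
firewall-cmd --reload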

Stop and disable the NetworkManager service (the unit name is NetworkManager.service, not Network.Manager).

[root@fedora23s ~]# systemctl stop NetworkManager
[root@fedora23s ~]# systemctl disable NetworkManager
[root@fedora23s ~]#



7) Configure the ovirt-engine and generate an answer file, which can be used to install other oVirt Engine servers or to reinstall the engine on the same host again.

See the help of the engine-setup command

engine-setup --help | grep -i answer

Set up the engine; this may take a while. You can also see that an answer file is being generated here.

[root@fedora23s ~]# engine-setup --generate-answer=/root/answers.txt
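
To reuse the generated answers later, for a reinstall or for another engine server, a sketch using the standard otopi option is:

engine-setup --config-append=/root/answers.txt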

The engine-setup configuration preview in my case was:

          --== CONFIGURATION PREVIEW ==--

          Application mode                        : both
          Default SAN wipe after delete           : False
          Update Firewall                         : False
          Host FQDN                               : fedora23s.example.com
          Engine database secured connection      : False
          Engine database host                    : localhost
          Engine database user name               : engine
          Engine database name                    : engine
          Engine database port                    : 5432
          Engine database host name validation    : False
          DWH database secured connection         : False
          DWH database host                       : localhost
          DWH database user name                  : ovirt_engine_history
          DWH database name                       : ovirt_engine_history
          DWH database port                       : 5432
          DWH database host name validation       : False
          Engine installation                     : True
          PKI organization                        : example.com
          Configure local Engine database         : True
          Set application as default page         : True
          Configure Apache SSL                    : True
          DWH installation                        : True
          Configure local DWH database            : True
          Engine Host FQDN                        : fedora23s.example.com
          Configure VMConsole Proxy               : True
          Configure WebSocket Proxy               : True

                                 
Finally, as engine-setup completes, you will see the messages below on the screen.

          Web access is enabled at:
              http://fedora23s.example.com:80/ovirt-engine
              https://fedora23s.example.com:443/ovirt-engine
          Internal CA 58:9C:10:5C:88:6A:C0:6B:DC:7F:4B:C5:38:FB:E3:A5:F5:4F:D4:67
          SSH fingerprint: SHA256:55P16TbHogcz54nueDD1FXRqrVd86TJ6JqR9IqrI/Yo
[WARNING] Warning: Not enough memory is available on the host. Minimum requirement is 4096MB, and 16384MB is recommended.

          --== END OF SUMMARY ==--

[ INFO  ] Stage: Clean up
          Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20160626143910-tbwatg.log
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20160626144443-setup.conf'
[ INFO  ] Generating answer file '/root/answers.txt'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Execution of setup completed successfully
[root@fedora23s ~]#


8) Log in to the oVirt Engine console

Open a web browser pointing to http://fedora23s.example.com:80/ovirt-engine and log in with the admin user and the engine password given during engine-setup.

9) Accessing the oVirt Engine over HTTPS requires that the website be reached by its FQDN. In the current setup there is no DNS and the interface is accessed from a Windows machine, so add the IP and FQDN to c:\windows\system32\drivers\etc\hosts as below:

192.168.100.132 fedora23s fedora23s.example.com

Then save the file.
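
On a Linux client without DNS, the equivalent entry goes into /etc/hosts; a sketch with the same IP and names (run as root):

echo '192.168.100.132 fedora23s fedora23s.example.com' >> /etc/hosts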

Log in to the oVirt Engine web GUI



oVirt Engine dashboard after login


oVirt host installation: oVirt Host 4.0 on Fedora 23 with VMware Workstation 12

  • Using Fedora 23 as the host.
  • Basic minimal installation of Fedora 23.
  • The Fedora 23 machine is installed and running on VMware Workstation version 12.
  • The base machine is AMD Opteron based; the machine on which oVirt Engine 4.0 is installed is one of the Fedora 23 servers.
  • The base machine is a laptop with SVM virtualization enabled in its BIOS.
  • The virtual machine processor settings for both the oVirt Engine and the oVirt host are set in the VM properties on VMware Workstation to "Virtualize Intel VT-x/EPT or AMD-V/RVI".


[root@ovirthost1 ~]# uname -a
Linux ovirthost1.example.com 4.2.3-300.fc23.x86_64 #1 SMP Mon Oct 5 15:42:54 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@ovirthost1 ~]# cat /etc/redhat-release
Fedora release 23 (Twenty Three)
[root@ovirthost1 ~]#

1) Install the RPM for the oVirt 4.0 repos

wget http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
rpm -ivh ovirt-release40.rpm

2) Confirm the repo is visible and the vdsm packages can be seen

cd /var/cache/yum && rm -rf * && yum clean all && yum repolist

The list of repositories looks like this:
dnf repolist

Last metadata expiration check performed 0:07:36 ago on Mon Jun 27 08:21:19 2016.
repo id               repo name                                           status
*fedora               Fedora 23 - x86_64                                  46,074
ovirt-4.0             Latest oVirt 4.0 Release                               180
ovirt-4.0-patternfly1 Copr repo for patternfly1 owned by patternfly            2
*updates              Fedora 23 - x86_64 - Updates                        19,666
virtio-win-stable     virtio-win builds roughly matching what was shipped      2
[root@ovirthost1 ~]#


3) Ensure that these repositories are visible on the Fedora 23 host, since they are required when the oVirt Engine deploys this machine as an oVirt host; the host-deployment process installs RPMs (vdsm and its dependencies) from these repos.


4) For my setup the firewall is disabled, and the NetworkManager service is stopped and disabled:

systemctl disable firewalld.service
systemctl stop firewalld.service
systemctl disable NetworkManager
systemctl stop NetworkManager

5) Ensure that the libvirtd service is enabled for autostart. Also libvirtd has to be running.

[root@ovirthost1 ~]# systemctl enable libvirtd.service
[root@ovirthost1 ~]#
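
If libvirtd is not already running, start it as well:

systemctl start libvirtd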

[root@ovirthost1 ~]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/libvirtd.service.d
           └─unlimited-core.conf
   Active: active (running) since Thu 2016-06-02 20:53:05 IST; 3 weeks 3 days ago

6) Add the host to the oVirt Engine. (Log in to the oVirt Engine as the admin user at https://<ovirt_Engine_FQDN>:443/ovirt-engine)
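
Alternatively, the host can be added through the engine's REST API instead of the GUI; a hedged sketch, with the passwords as placeholders and the host names taken from this setup:

curl -k -u 'admin@internal:ENGINE_PASSWORD' \
     -H 'Content-Type: application/xml' \
     -d '<host><name>ovirthost1</name><address>ovirthost1.example.com</address><root_password>HOST_ROOT_PASSWORD</root_password></host>' \
     https://fedora23s.example.com/ovirt-engine/api/hosts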


7) As the oVirt Engine adds the host to a cluster (one you create in the GUI, or the default cluster and data center), the following entries appear while the host is being added to the cluster.

These logs are from the oVirt Engine server, at /var/log/ovirt-engine/engine.log.


2016-06-27 02:09:54,448 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [1cec4f0d] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Critical, Low disk space. Host ovirthost.example.com has less than 500 MB of free space left on: /var/run/vdsm/, /tmp. Low disk space might cause an issue upgrading this host.
2016-06-27 02:09:55,409 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [] Fetched 0 VMs from VDS '4244422c-ea6d-4ab3-abe7-bb0581f5efde'
2016-06-27 02:10:11,429 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler10) [4e5952e7] Fetched 0 VMs from VDS '4244422c-ea6d-4ab3-abe7-bb0581f5efde'
2016-06-27 02:10:27,457 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [50cd768e] Fetched 0 VMs from VDS '4244422c-ea6d-4ab3-abe7-bb0581f5efde'
2016-06-27 02:10:43,482 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [6c6b13f9] Fetched 0 VMs from VDS '4244422c-ea6d-4ab3-abe7-bb0581f5efde'
2016-06-27 02:10:59,504 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler9) [3a942f11] Fetched 0 VMs from VDS '4244422c-ea6d-4ab3-abe7-bb0581f5efde'
2016-06-27 02:11:15,530 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [] Fetched 0 VMs from VDS '4244422c-ea6d-4ab3-abe7-bb0581f5efde'

After the server is successfully added as a host of the cluster in one of the data centers, you can see that the "libvirtd" service is up and running, as below.

[root@ovirthost ~]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/libvirtd.service.d
           └─unlimited-core.conf
   Active: active (running) since Mon 2016-06-27 11:23:36 IST; 10h ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 19993 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─19993 /usr/sbin/libvirtd --listen

Jun 27 11:23:34 ovirthost.example.com systemd[1]: Starting Virtualization dae...
Jun 27 11:23:36 ovirthost.example.com libvirtd[19993]: libvirt version: 1.2.1...
Jun 27 11:23:36 ovirthost.example.com libvirtd[19993]: Module /usr/lib64/libv...
Jun 27 11:23:36 ovirthost.example.com libvirtd[19993]: Module /usr/lib64/libv...
Jun 27 11:23:36 ovirthost.example.com libvirtd[19993]: Module /usr/lib64/libv...
Jun 27 11:23:36 ovirthost.example.com libvirtd[19993]: Module /usr/lib64/libv...
Jun 27 11:23:36 ovirthost.example.com libvirtd[19993]: Module /usr/lib64/libv...
Jun 27 11:23:36 ovirthost.example.com systemd[1]: Started Virtualization daemon.
Hint: Some lines were ellipsized, use -l to show in full.




Saturday, June 25, 2016

Red Hat OpenStack (Mitaka) all-in-a-box installation using packstack on CentOS 7 running on VMware Workstation 12




The Setup which was used



  • CentOS 7 minimal install
  • Configure OpenStack on a single node
  • OpenStack Mitaka


1) The base machine is an AMD-based Windows 7 workstation with SVM virtualization enabled in the BIOS

2) VMWare workstation 12

3) The virtual machine used for the OpenStack installation has, in its processor settings, the option
"Virtualize Intel VT-x/EPT or AMD-V/RVI" selected

4) The virtual machine has CentOS 7 Minimal installed
[root@centos7 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@centos7 ~]#
[root@centos7 ~]#
[root@centos7 ~]# uname -a
Linux centos7.example.com 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@centos7 ~]#

5) The CentOS virtual machine has 4 virtual processors, with 1 core per processor.
This may vary with the hardware configuration of your workstation/server.



[root@centos7 ~]# cat /proc/cpuinfo  | grep -i core
model name      : AMD PRO A10-8700B R6, 10 Compute Cores 4C+6G
core id         : 0
cpu cores       : 1
model name      : AMD PRO A10-8700B R6, 10 Compute Cores 4C+6G
core id         : 0
cpu cores       : 1
model name      : AMD PRO A10-8700B R6, 10 Compute Cores 4C+6G
core id         : 0
cpu cores       : 1
model name      : AMD PRO A10-8700B R6, 10 Compute Cores 4C+6G
core id         : 0
cpu cores       : 1
[root@centos7 ~]#


Steps of Installation

A) Update the CentOS Installation

yum update -y

B) Set up the NTP services

The NTP servers used are the ones provided by the default CentOS 7 Minimal installation.

Edit /etc/ntp.conf in the virtual machine running CentOS and put the NTP servers there.
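
A minimal sketch, assuming the ntp package and the stock pool servers that CentOS 7 ships in /etc/ntp.conf:

yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
ntpq -p    # confirm the pool peers are being polled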


C) Install a few more utilities

yum -y install net-tools rsync wget bind-utils 


D) Add the RDO (Red Hat OpenStack Mitaka) repo

sudo yum install -y https://www.rdoproject.org/repos/rdo-release.rpm

The above gives access to the repositories for the Red Hat OpenStack release we will be installing.

After the repo is installed, verify with yum repolist that the repository is listed

yum repolist


[root@centos7 ~]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.excellmedia.net
 * extras: centos.excellmedia.net
 * updates: centos.excellmedia.net
repo id                    repo name                      status
base/7/x86_64              CentOS-7 - Base                 9,007
extras/7/x86_64            CentOS-7 - Extras                 310
openstack-mitaka/x86_64    OpenStack Mitaka Repository     1,181
updates/7/x86_64           CentOS-7 - Updates              1,974
repolist: 12,472
[root@centos7 ~]#


E) Install the openstack-packstack 

yum install openstack-packstack 

These are the RPMs which finally get installed

Installed:
  openstack-packstack.noarch 0:8.0.0-1.el7

Dependency Installed:
  PyYAML.x86_64 0:3.10-11.el7           jbigkit-libs.x86_64 0:2.0-11.el7                      libjpeg-turbo.x86_64 0:1.2.90-5.el7             libtiff.x86_64 0:4.0.3-14.el7
  libwebp.x86_64 0:0.3.0-3.el7          libyaml.x86_64 0:0.1.4-11.el7_0                       openstack-packstack-puppet.noarch 0:8.0.0-1.el7 openstack-puppet-modules.noarch 1:8.0.4-1.el7
  pyOpenSSL.noarch 0:0.15.1-1.el7       python-docutils.noarch 0:0.11-0.2.20130715svn7687.el7 python-enum34.noarch 0:1.0.4-1.el7              python-idna.noarch 0:2.0-1.el7
  python-ipaddress.noarch 0:1.0.7-4.el7 python-netaddr.noarch 0:0.7.18-1.el7                  python-pillow.x86_64 0:2.0.0-19.gitd1c6db8.el7  python-ply.noarch 0:3.4-10.el7
  python-pycparser.noarch 0:2.14-1.el7  python-six.noarch 0:1.9.0-2.el7                       python2-cffi.x86_64 0:1.5.2-1.el7               python2-cryptography.x86_64 0:1.2.1-3.el7
  python2-pyasn1.noarch 0:0.1.9-6.el7.1 python2-setuptools.noarch 0:22.0.5-1.el7              ruby.x86_64 0:2.0.0.598-25.el7_1                ruby-irb.noarch 0:2.0.0.598-25.el7_1
  ruby-libs.x86_64 0:2.0.0.598-25.el7_1 rubygem-bigdecimal.x86_64 0:1.2.0-25.el7_1            rubygem-io-console.x86_64 0:0.4.2-25.el7_1      rubygem-json.x86_64 0:1.7.7-25.el7_1
  rubygem-psych.x86_64 0:2.0.0-25.el7_1 rubygem-rdoc.noarch 0:4.0.0-25.el7_1                  rubygems.noarch 0:2.0.14-25.el7_1

Complete!

F) See the packstack help 

[root@centos7 ~]# packstack --help | grep -i answer
  --gen-answer-file=GEN_ANSWER_FILE
                        Generate a template of an answer file.
  --answer-file=ANSWER_FILE
                        answerfile will also be generated and should be used
  -o, --options         Print details on options available in answer file(rst
                        Packstack a second time with the same answer file and
[root@centos7 ~]#



G) Generate an answer file

An answer file saves your installation answers, so you can repeat the same OpenStack installation later or reuse the file for other, similar installations.


[root@centos7 ~]# packstack --gen-answer-file=/root/mitaka.openstack.answers.txt
Packstack changed given value  to required value /root/.ssh/id_rsa.pub
[root@centos7 ~]#


H) Edit the answer file as needed

The values changed from the generated defaults were:

[root@centos7 ~]# diff mitaka.openstack.answers.txt mitaka.openstack.answers.txt.1

< CONFIG_NTP_SERVERS=0.centos.pool.ntp.org
< CONFIG_NAGIOS_INSTALL=n
< CONFIG_KEYSTONE_ADMIN_PW=<Put a strong password here>
< CONFIG_HORIZON_SSL=y
< CONFIG_PROVISION_DEMO=n
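
One way to apply the same edits non-interactively; a sketch, with the keys taken from the diff above and the password as a placeholder:

sed -i -e 's/^CONFIG_NTP_SERVERS=.*/CONFIG_NTP_SERVERS=0.centos.pool.ntp.org/' \
       -e 's/^CONFIG_NAGIOS_INSTALL=.*/CONFIG_NAGIOS_INSTALL=n/' \
       -e 's/^CONFIG_KEYSTONE_ADMIN_PW=.*/CONFIG_KEYSTONE_ADMIN_PW=StrongPasswordHere/' \
       -e 's/^CONFIG_HORIZON_SSL=.*/CONFIG_HORIZON_SSL=y/' \
       -e 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' \
       /root/mitaka.openstack.answers.txt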


++++++++++++

I) If rabbitmq does not start and you get an error like the one below during the installation, try this fix

ERROR

PuppetError: Error appeared during Puppet run: 192.168.100.128_amqp.pp
Error: Could not start Service[rabbitmq-server]: Execution of '/usr/bin/systemctl start rabbitmq-server' returned 1: Job for rabbitmq-server.service failed because the control process exited with error code. See "systemctl status rabbitmq-server.service" and "journalctl -xe" for details.
You will find full trace in log /var/tmp/packstack/20160624-093650-Zlsy68/manifests/192.168.100.128_amqp.pp.log

FIX:

[root@centos7 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.100.128 centos7 centos7.example.com
[root@centos7 ~]#
[root@centos7 ~]#
[root@centos7 ~]# cat /etc/hostname
centos7.example.com
[root@centos7 ~]#
[root@centos7 ~]# hostnamectl set-hostname centos7.example.com
[root@centos7 ~]#
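
A quick check that the hostname resolves locally (rabbitmq will not start if it does not):

getent hosts "$(hostname)" || echo "hostname does not resolve; add it to /etc/hosts"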

+++++++++++++

J) Run the packstack installation using the answer file

packstack --answer-file=/root/mitaka.openstack.answers.txt

The installation begins and completes, showing:

Applying 192.168.100.128_ceilometer.pp
192.168.100.128_ceilometer.pp:                       [ DONE ]
Applying 192.168.100.128_aodh.pp
192.168.100.128_aodh.pp:                             [ DONE ]
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******

Additional information:
 * File /root/keystonerc_admin has been created on OpenStack client host 192.168.100.128. To use the command line tools you need to source the file.
 * NOTE : A certificate was generated to be used for ssl, You should change the ssl certificate configured in /etc/httpd/conf.d/ssl.conf on 192.168.100.128 to use a CA signed cert.
 * To access the OpenStack Dashboard browse to https://192.168.100.128/dashboard.
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * The installation log file is available at: /var/tmp/packstack/20160624-094448-lY1juB/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20160624-094448-lY1juB/manifests
[root@centos7 ~]#


This may take some time, depending on the internet speed and the configuration of the machine.

K) Log in to the OpenStack UI at https://192.168.100.128/dashboard with the admin user and the password from the /root/keystonerc_admin file generated during the installation.
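
To start using the command line, source the credentials file; a sketch, assuming the standard clients that packstack pulls in:

source /root/keystonerc_admin
openstack service list    # should list keystone, nova, neutron, glance, etc.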