Monday, December 12, 2016

Deploying a very basic instance on OpenStack Mitaka using a simple Heat orchestration template and associating a floating IP with the stack

Deploying a very basic instance on OpenStack using a simple Heat template


Below is the simple sample template we will use
=========================================================
heat_template_version: 2013-05-23

description: Simple template to deploy a single compute instance

parameters:
  image:
    type: string
    label: Image name or ID
    description: Image to be used for compute instance
    default: cirros-0.3.3-x86_64
  flavor:
    type: string
    label: Flavor
    description: Type of instance (flavor) to be used
    default: m1.small
  key:
    type: string
    label: Key name
    description: Name of key-pair to be used for compute instance
    default: my_key
  private_network:
    type: string
    label: Private network name or ID
    description: Network to attach instance to.
    default: private-net

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key }
      networks:
        - network: { get_param: private_network }
      user_data: |
        #!/bin/sh
        echo "Hello, World!"
      user_data_format: RAW

outputs:
  instance_name:
    description: Name of the instance
    value: { get_attr: [my_instance, name] }
  instance_ip:
    description: IP address of the instance
    value: { get_attr: [my_instance, first_address] }



Save the above content as a YAML file.


In the Heat template we need to specify the image, flavor, key pair, and network for the instance.

Let's gather this information from the existing OpenStack setup.


Source the required credentials


[root@controller ~]# . keystonerc_admin

See the available flavors
==========================
[root@controller ~(keystone_admin)]# openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
+----+-----------+-------+------+-----------+-------+-----------+


See the available images
========================
[root@controller ~(keystone_admin)]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 44bb15d9-a970-40c4-8b77-a7e71b170659 | Cirros | active |
+--------------------------------------+--------+--------+
[root@controller ~(keystone_admin)]#


See the available networks in neutron
=====================================
[root@controller ~(keystone_admin)]# neutron net-list
+--------------------------------------+-----------+-------------------------------------------------------+
| id                                   | name      | subnets                                               |
+--------------------------------------+-----------+-------------------------------------------------------+
| 60ca0e7b-3c40-482c-92d1-3b4da4126e9d | testnet1  | 8fd08288-a83b-44ee-ac04-f8b88fc69628 172.16.15.0/24   |
| 8649a5a8-c137-444d-950d-e0c0569e3ee4 | internal1 | bd5b5a96-eaf5-45a8-93e5-6be837e61414 172.16.16.0/24   |
| 001ac29b-1f5f-4498-a44b-de0f809d2322 | external  | cf1da694-4a75-4768-b758-859dc750a02d 192.168.205.0/24 |
+--------------------------------------+-----------+-------------------------------------------------------+


See the available subnets in the neutron
========================================
[root@controller ~(keystone_admin)]# neutron subnet-list
+--------------------------------------+--------------+------------------+--------------------------------------------------------+
| id                                   | name         | cidr             | allocation_pools                                       |
+--------------------------------------+--------------+------------------+--------------------------------------------------------+
| 8fd08288-a83b-44ee-ac04-f8b88fc69628 | testsub      | 172.16.15.0/24   | {"start": "172.16.15.2", "end": "172.16.15.254"}       |
| bd5b5a96-eaf5-45a8-93e5-6be837e61414 | internal1sub | 172.16.16.0/24   | {"start": "172.16.16.2", "end": "172.16.16.254"}       |
| cf1da694-4a75-4768-b758-859dc750a02d | extsub       | 192.168.205.0/24 | {"start": "192.168.205.101", "end": "192.168.205.254"} |
+--------------------------------------+--------------+------------------+--------------------------------------------------------+

See the available keypairs for the project
==========================================
[root@controller ~(keystone_admin)]# openstack keypair list
+------+-------------------------------------------------+
| Name | Fingerprint                                     |
+------+-------------------------------------------------+
| key1 | 2f:40:44:36:1d:b9:ab:cd:b6:57:24:c7:70:dc:8c:f7 |
+------+-------------------------------------------------+
[root@controller ~(keystone_admin)]#


Let's edit the YAML file, which now looks like this:

heat_template_version: 2013-05-23

description: Simple template to deploy a single compute instance

parameters:
  image:
    type: string
    label: Image name or ID
    description: Image to be used for compute instance
    default: Cirros
  flavor:
    type: string
    label: Flavor
    description: Type of instance (flavor) to be used
    default: m1.tiny
  key:
    type: string
    label: Key name
    description: Name of key-pair to be used for compute instance
    default: key1
  private_network:
    type: string
    label: Private network name or ID
    description: Network to attach instance to.
    default: internal1

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key }
      networks:
        - network: { get_param: private_network }
      user_data: |
        #!/bin/sh
        echo "Hello, World!"
      user_data_format: RAW

outputs:
  instance_name:
    description: Name of the instance
    value: { get_attr: [my_instance, name] }
  instance_ip:
    description: IP address of the instance
    value: { get_attr: [my_instance, first_address] }




Copy this YAML file to the OpenStack server as heatstack1.yaml.


Stack creation syntax:

heat stack-create <stack name> -f <YAML template file>

heat stack-create stack1 -f heatstack1.yaml
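Template defaults can also be overridden at create time with -P. A hedged sketch (the parameter values are assumptions for this setup, and the command is printed rather than executed so the snippet can be reviewed without a live cloud):

```shell
# Build the -P override string; names must match the template's "parameters:" section
PARAMS="image=Cirros;flavor=m1.tiny;key=key1;private_network=internal1"
# Printed, not executed, so it can be checked before running against the cloud
echo "heat stack-create stack1 -f heatstack1.yaml -P \"$PARAMS\""
```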


Create the stack using the YAML file
====================================

[root@controller ~(keystone_admin)]# heat stack-create stack1 -f heatstack1.yaml
+--------------------------------------+------------+--------------------+---------------------+--------------+
| id                                   | stack_name | stack_status       | creation_time       | updated_time |
+--------------------------------------+------------+--------------------+---------------------+--------------+
| 5ab8fe95-7ea3-4530-86de-facae9f52c71 | stack1     | CREATE_IN_PROGRESS | 2016-12-07T06:31:07 | None         |
+--------------------------------------+------------+--------------------+---------------------+--------------+
[root@controller ~(keystone_admin)]#


See the list of the stacks
==========================
[root@controller ~(keystone_admin)]# heat stack-list
+--------------------------------------+------------+-----------------+---------------------+--------------+
| id                                   | stack_name | stack_status    | creation_time       | updated_time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 5ab8fe95-7ea3-4530-86de-facae9f52c71 | stack1     | CREATE_COMPLETE | 2016-12-07T06:31:07 | None         |
+--------------------------------------+------------+-----------------+---------------------+--------------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]#
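In scripts it can help to poll until the stack reaches a terminal state rather than re-running stack-list by hand. A minimal sketch; check_status here is a stub standing in for parsing the real `heat stack-list` output (e.g. with grep/awk), so the loop runs standalone:

```shell
# Stub: in a real script this would parse `heat stack-list` for stack1's status
check_status() { echo CREATE_COMPLETE; }

# Poll until the stack is in a terminal state
while true; do
  s=$(check_status)
  case "$s" in
    CREATE_COMPLETE|CREATE_FAILED) echo "final status: $s"; break ;;
    *) sleep 5 ;;
  esac
done
```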


See more information on the Stack which is created
===================================================
[root@controller ~(keystone_admin)]# heat stack-show stack1
+-----------------------+----------------------------------------------------------------------------------------------------------------------------------+
| Property              | Value                                                                                                                            |
+-----------------------+----------------------------------------------------------------------------------------------------------------------------------+
| capabilities          | []                                                                                                                               |
| creation_time         | 2016-12-07T06:31:07                                                                                                              |
| description           | Simple template to deploy a single compute instance                                                                              |
| disable_rollback      | True                                                                                                                             |
| id                    | 5ab8fe95-7ea3-4530-86de-facae9f52c71                                                                                             |
| links                 | http://controller.example.com:8004/v1/f84cfac47c10472fb36c56dc149d3caa/stacks/stack1/5ab8fe95-7ea3-4530-86de-facae9f52c71 (self) |
| notification_topics   | []                                                                                                                               |
| outputs               | [                                                                                                                                |
|                       |   {                                                                                                                              |
|                       |     "output_value": "stack1-my_instance-rrmsgn57cvxy",                                                                           |
|                       |     "output_key": "instance_name",                                                                                               |
|                       |     "description": "Name of the instance"                                                                                        |
|                       |   },                                                                                                                             |
|                       |   {                                                                                                                              |
|                       |     "output_value": "172.16.16.5",                                                                                               |
|                       |     "output_key": "instance_ip",                                                                                                 |
|                       |     "description": "IP address of the instance"                                                                                  |
|                       |   }                                                                                                                              |
|                       | ]                                                                                                                                |
| parameters            | {                                                                                                                                |
|                       |   "OS::project_id": "f84cfac47c10472fb36c56dc149d3caa",                                                                          |
|                       |   "OS::stack_id": "5ab8fe95-7ea3-4530-86de-facae9f52c71",                                                                        |
|                       |   "OS::stack_name": "stack1",                                                                                                    |
|                       |   "image": "Cirros",                                                                                                             |
|                       |   "key": "key1",                                                                                                                 |
|                       |   "private_network": "internal1",                                                                                                |
|                       |   "flavor": "m1.tiny"                                                                                                            |
|                       | }                                                                                                                                |
| parent                | None                                                                                                                             |
| stack_name            | stack1                                                                                                                           |
| stack_owner           | None                                                                                                                             |
| stack_status          | CREATE_COMPLETE                                                                                                                  |
| stack_status_reason   | Stack CREATE completed successfully                                                                                              |
| stack_user_project_id | f7e5c9e4b80f4f1faf411c95880c1309                                                                                                 |
| tags                  | null                                                                                                                             |
| template_description  | Simple template to deploy a single compute instance                                                                              |
| timeout_mins          | None                                                                                                                             |
| updated_time          | None                                                                                                                             |
+-----------------------+----------------------------------------------------------------------------------------------------------------------------------+
[root@controller ~(keystone_admin)]#


Association of the floating IP to the instance
================================================
Associate a floating IP with the instance. Create one first if no free floating IPs are available.

List all the networks; here "external" is the external network in which the floating IP will be created.

[root@controller ~(keystone_admin)]# neutron net-list
+--------------------------------------+-----------+-------------------------------------------------------+
| id                                   | name      | subnets                                               |
+--------------------------------------+-----------+-------------------------------------------------------+
| 60ca0e7b-3c40-482c-92d1-3b4da4126e9d | testnet1  | 8fd08288-a83b-44ee-ac04-f8b88fc69628 172.16.15.0/24   |
| 8649a5a8-c137-444d-950d-e0c0569e3ee4 | internal1 | bd5b5a96-eaf5-45a8-93e5-6be837e61414 172.16.16.0/24   |
| 001ac29b-1f5f-4498-a44b-de0f809d2322 | external  | cf1da694-4a75-4768-b758-859dc750a02d 192.168.205.0/24 |
+--------------------------------------+-----------+-------------------------------------------------------+


Create a floating IP in the external network
=============================================

[root@controller ~(keystone_admin)]# neutron floatingip-create external
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| description         |                                      |
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.205.105                      |
| floating_network_id | 001ac29b-1f5f-4498-a44b-de0f809d2322 |
| id                  | ab40dcbd-1082-424a-b0fb-135b8756bc91 |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | f84cfac47c10472fb36c56dc149d3caa     |
+---------------------+--------------------------------------+
[root@controller ~(keystone_admin)]#


Ensure the floating IP is listed and that it appears as a free floating IP (no fixed IP or port attached)
===============================================================

[root@controller ~(keystone_admin)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 816e8bf9-7dec-4011-9333-f761b8b95c15 | 172.16.15.3      | 192.168.205.104     | 69d9cde5-e06d-4ecf-9f87-0f91cb06a653 |
| ab40dcbd-1082-424a-b0fb-135b8756bc91 |                  | 192.168.205.105     |                                      |
| c1e1d2a7-7f5c-4b51-938e-eb064d5d1939 | 172.16.16.3      | 192.168.205.103     | 315b610f-54e1-404c-bac3-048feb37bc23 |
+--------------------------------------+------------------+---------------------+--------------------------------------+


List the instances and note the private network IP of the new one.
===============================================================

[root@controller ~(keystone_admin)]# nova list
+--------------------------------------+---------------------------------+--------+------------+-------------+----------------------------------------+
| ID                                   | Name                            | Status | Task State | Power State | Networks                               |
+--------------------------------------+---------------------------------+--------+------------+-------------+----------------------------------------+
| a2c265bb-242c-40ae-ae9c-98849e459bf7 | instance1                       | ACTIVE | -          | Running     | internal1=172.16.16.3, 192.168.205.103 |
| 5d767783-933e-423d-848f-37e816fa05b3 | stack1-my_instance-rrmsgn57cvxy | ACTIVE | -          | Running     | internal1=172.16.16.5                  |
+--------------------------------------+---------------------------------+--------+------------+-------------+----------------------------------------+

Find the neutron port for the instance; this is needed to associate the floating IP with the instance's port.
===============================================================
[root@controller ~(keystone_admin)]# neutron port-list | grep -i 172.16.16.5
| 42304108-f397-4de7-8936-5f011e4be5f3 |      | fa:16:3e:ff:52:0a | {"subnet_id": "bd5b5a96-eaf5-45a8-93e5-6be837e61414", "ip_address": "172.16.16.5"}     |
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]#
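Copying the UUID out of that table by hand is error-prone. A sketch of extracting the port id with awk; the table row is hard-coded here as sample data so the snippet runs standalone, whereas in practice you would pipe `neutron port-list | grep <fixed-ip>` into it:

```shell
# Sample `neutron port-list` row (hard-coded; real usage would pipe the command)
row='| 42304108-f397-4de7-8936-5f011e4be5f3 |      | fa:16:3e:ff:52:0a | {"subnet_id": "bd5b5a96-eaf5-45a8-93e5-6be837e61414", "ip_address": "172.16.16.5"} |'
# Field 2 of the pipe-separated table is the port UUID; strip the padding spaces
PORT_ID=$(echo "$row" | awk -F'|' '{gsub(/ /,"",$2); print $2}')
echo "$PORT_ID"   # prints 42304108-f397-4de7-8936-5f011e4be5f3
```

The resulting $PORT_ID is what gets passed to `neutron floatingip-associate <floating-ip-uuid> $PORT_ID`.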


Associate the floating IP
=================
[root@controller ~(keystone_admin)]# neutron floatingip-associate ab40dcbd-1082-424a-b0fb-135b8756bc91 42304108-f397-4de7-8936-5f011e4be5f3
Associated floating IP ab40dcbd-1082-424a-b0fb-135b8756bc91
[root@controller ~(keystone_admin)]#

Verify in the nova list output that the floating IP is now associated with the instance
========================================================================

[root@controller ~(keystone_admin)]# nova list
+--------------------------------------+---------------------------------+--------+------------+-------------+----------------------------------------+
| ID                                   | Name                            | Status | Task State | Power State | Networks                               |
+--------------------------------------+---------------------------------+--------+------------+-------------+----------------------------------------+
| a2c265bb-242c-40ae-ae9c-98849e459bf7 | instance1                       | ACTIVE | -          | Running     | internal1=172.16.16.3, 192.168.205.103 |
| 5d767783-933e-423d-848f-37e816fa05b3 | stack1-my_instance-rrmsgn57cvxy | ACTIVE | -          | Running     | internal1=172.16.16.5, 192.168.205.105 |
+--------------------------------------+---------------------------------+--------+------------+-------------+----------------------------------------+
[root@controller ~(keystone_admin)]#


You can now use the key pair to SSH to the instance.

Note that the instance's security group must allow ingress SSH (TCP port 22) for the connection to succeed.

Sunday, November 13, 2016

Nested Virtualization (Windows 7 Host Running VMWare Workstation 12.0 -> Ubuntu 16.04 LTS as a VM on VMWare WorkStation Running KVM - working as a KVM Host - with Ubuntu 16.04 LTS KVM Virtual Machine as Guest)



The Setup

Windows Laptop : Running Windows 7
Virtualization on Windows : VMWare WorkStation 12.0 (Hypervisor)
Guest VM on VMWare WorkStation : Ubuntu 16.04 LTS (this will run LINUX KVM Hypervisor)
Hypervisor on Ubuntu 16.04 above: Linux KVM
Guest on Linux KVM as KVM VM Guest: Ubuntu 16.04 LTS

Create a VM in the VMWare WorkStation

The VMWare WorkStation VM properties are shown below, with the CPU virtualization extensions (Intel VT-x/AMD-RVI) enabled.

VMWare WorkStation VM Ubuntu 16 Properties 2 CPU
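Once the guest is booted, the passthrough can be verified from inside it; a count greater than zero means the CPU flags needed for nested KVM are visible to the guest:

```shell
# Count vmx (Intel) / svm (AMD) flags visible to the guest; 0 means no
# hardware virtualization passthrough (grep exits non-zero on no match,
# hence the || true)
grep -E -c '(vmx|svm)' /proc/cpuinfo || true
```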

The other properties are as below.
These are the general properties

VMWare WorkStation VM Ubuntu 16 Properties General
VMWare WorkStation VM Ubuntu 16 Properties Network 1


The VM is connected to VMNet2, which is a Windows 7 bridged adapter in VMWare WorkStation, bridged to the Wireless LAN adapter on the Windows machine. The Wireless LAN adapter connects the Windows 7 host to the Internet.

VMWare WorkStation VM Ubuntu 16 Properties Network Connected to Bridged Wireless

Other properties of the VM, showing the RAM assigned to it.

VMWare WorkStation VM Ubuntu 16 Properties Memory and others

The VM was attached to the ~600 MB Ubuntu 16.04 LTS minimal-install media, and Ubuntu was installed from it.

Post install configurations are as below.

Log on to the console of the VM after installation and run the following. As this VM runs on VMWare WorkStation, access its console through VMWare WorkStation.


Log in to the console as a normal user.

Update the repository list used by APT:
apt-get update

If CD/DVD-ROM repository errors like the one below appear:

E: Failed to fetch cdrom://Ubuntu-Server 16.04.1 LTS _Xenial Xerus_ - Release amd64 (20160719)/dists/xenial/main/binary-amd64/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs

Fix the error by editing /etc/apt/sources.list to comment out the CD-ROM lines:


root@ansible:~# vi /etc/apt/sources.list
root@ansible:~# grep -i cdrom /etc/apt/sources.list
# deb cdrom:[Ubuntu-Server 16.04.1 LTS _Xenial Xerus_ - Release amd64 (20160719)]/ xenial main restricted
#deb cdrom:[Ubuntu-Server 16.04.1 LTS _Xenial Xerus_ - Release amd64 (20160719)]/ xenial main restricted
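The commenting can also be done non-interactively. A sketch using sed on a scratch copy (the /tmp path is arbitrary; point the same sed at /etc/apt/sources.list once you are happy with it):

```shell
# Build a scratch sources.list with a cdrom entry and a normal mirror entry
printf '%s\n' \
  'deb cdrom:[Ubuntu-Server 16.04.1 LTS _Xenial Xerus_ - Release amd64 (20160719)]/ xenial main restricted' \
  'deb http://archive.ubuntu.com/ubuntu xenial main restricted' \
  > /tmp/sources.list.sample

# Comment out only the cdrom lines, leaving the mirror line untouched
sed -i 's/^deb cdrom:/# deb cdrom:/' /tmp/sources.list.sample
cat /tmp/sources.list.sample
```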


Update the repos again:

apt-get update


Install the aptitude tool:
apt-get install aptitude

Install the OpenSSH server (not needed if it was already installed during the Ubuntu 16.04 OS installation):

apt-get install openssh-server

Start and enable SSHD:

systemctl start sshd
systemctl enable sshd
systemctl status sshd


Upgrade the entire system with the latest patches

apt-get upgrade


Install these packages: they enable KVM virtualization, install the virt-manager tool, and install GNOME so the virt-manager GUI can run.

aptitude install gnome-core gnome-session gnome-session-bin gnome-session-common libgnome-2-0 libgnome-2-0 libgnomevfs2-0 virt-manager


aptitude install qemu-kvm libvirt-bin virtinst bridge-utils
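After the installs, it is worth confirming the KVM stack is usable. A quick sketch (the virsh call is left as a comment since it needs libvirtd running):

```shell
# The kvm module (kvm_intel or kvm_amd) should be loaded on a working host
lsmod | grep kvm || echo "kvm module not loaded"
# /dev/kvm is the device qemu uses for hardware acceleration
[ -e /dev/kvm ] && echo "/dev/kvm present" || echo "/dev/kvm missing"
# virsh list --all   # lists all defined guests once libvirtd is up
```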


Allow the root user direct SSH login to the server by editing /etc/ssh/sshd_config as below. (Note: you may not need this, and in secure environments direct root SSH may not be permitted.)

root@ansible:/etc/pam.d# grep -i root /etc/ssh/sshd_config
#PermitRootLogin prohibit-password
PermitRootLogin yes
# the setting of "PermitRootLogin without-password".
root@ansible:/etc/pam.d#

Restart SSHD service 

systemctl restart sshd.service

Allow the root user to log in directly to the GNOME desktop (again, this may not be permitted in secure environments).

By default, root login to GNOME is disabled. Comment out the following line in /etc/pam.d/gdm-password to allow it:

root@ansible:~# grep -i root /etc/pam.d/gdm-password
#auth   required        pam_succeed_if.so user != root quiet_success
root@ansible:~#


Ensure that the systemctl default target is set to graphical.target 

root@ansible:~# systemctl get-default
graphical.target
root@ansible:~#


Reboot the system here if needed. Once done, the VMWare WorkStation console for the VM will show the Ubuntu GNOME login screen. You can log in there and start virt-manager from the applications menu.

GUI of VMWare WorkStation VM


Here we use a MobaXterm connection to the VMWare WorkStation VM to access virt-manager on that machine. This Ubuntu VM will act as the KVM hypervisor host, on which another VM will be created and Ubuntu installed as the KVM guest.


Access virt-manager through the MobaXterm session:


Mobaterm connection to VM to Open Virt-manager

As soon as virt-manager starts, the virt-manager screen appears as

Virt-manager screen from Mobaterm


In the virt-manager Create a new Virtual Machine Guest

Create a New VM in Virt-Manager Selection of Installation Media
Assign the ISO for Ubuntu 16.04 OS installation.

The installation medium is a local ISO image; the ~600 MB Ubuntu 16.04 LTS minimal-install media was copied to the root user's desktop on the KVM host.

Create a New VM in Virt-Manager Selection of Installation Media ISO 


The ISO image was browsed to and attached here.
The screenshots below show the CPU and RAM allocation for the KVM guest.

Create a New VM in Virt-Manager 2 Browse for the media


Create a New VM in Virt-Manager (4) CPU and RAM allocation to VM




Create a New VM in Virt-Manager (5) 10GB disk allocation


Disk Allocation details to the KVM VM Guest


The network allocation is shown below: the default NAT network on the KVM host was selected, and the VM was named test.example.com.

Create a New VM in Virt-Manager (6) Name and Default Network as NAT


After clicking "Finish", the machine boots into the first Ubuntu install screen; the ESC key was pressed to begin the Ubuntu installation.

Create a New VM in Virt-Manager (7) Ubuntu Install screen

"Install Ubuntu Server" was selected. The details of the OS installation are skipped here to keep the document brief.
Create a New VM in Virt-Manager Select Install Ubuntu Server Installation

The Ubuntu Installation progresses as below. Please follow the on screen prompts for the installation.

Create a New VM in Virt-Manager Ubuntu Installation Starts
The install finishes and the KVM VM Guest reboots. Login Screen after reboot




Checking the network of the KVM VM: the machine has an interface on the NAT network, so it has an IP in the 192.168.122.x range. Because this interface is NATed on the KVM host, which is in turn bridged on the VMWare WorkStation hypervisor to the Windows host's wireless adapter, the guest can reach the Internet directly, as shown by its ability to resolve www.google.com via DNS.

IP allocation and resolving google


Further properties of the KVM VM guest are shown below.

KVM VM Overview properties

The CPU properties

KVM VM Overview properties CPU


The Virtual Network Interface of the KVM VM Guest

KVM VM Guest Properties Network Interface