Friday, December 22, 2017

Ubuntu apt not able to fetch updates or install packages from behind a proxy


Apt picks up the http_proxy and https_proxy values defined in the system environment.


So to get past this, set the proxies as user or system environment variables, and apt actions will use them automatically.

To set them per user, add the http_proxy and https_proxy entries to any of the following files:

~/.profile
~/.bash_profile
~/.bashrc

or 

If these have to be set as system-wide environment variables, they can be set in

/etc/profile

or 

/etc/environment


or in a file under

/etc/profile.d

such as /etc/profile.d/more_env_variables.sh


The proxy entries look like this:


export http_proxy=http://<Proxy Server FQDN or IP>:<Port>

export https_proxy=http://<Proxy Server FQDN or IP>:<Port>

export no_proxy=<IP1>,<FQDN1>,...
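
One caveat: /etc/environment is parsed by PAM rather than by a shell, so it takes plain KEY=value lines without the export keyword. A minimal sketch, using a hypothetical proxy address:

http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128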


Note that these files are executed at login time, so the user may have to log off and log back in, or source the profile file directly:

. ~/.bash_profile

Please note the [SPACE] after the dot '.'.

An alternative: if there is a particular proxy set just for APT, or you only want APT traffic to go through the proxy server, you can specify the proxy server in /etc/apt/apt.conf,

or, depending on the Ubuntu release, create a file under /etc/apt/apt.conf.d

with contents like this:

Acquire::http::proxy "http://<Proxy_server_FQDN_or_IP>:<PORT>";
Acquire::https::proxy "http://<Proxy_server_FQDN_or_IP>:<PORT>";

Replace the placeholders above with the proxy server IP and port used on your network.
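
A one-step way to create such a drop-in is sketched below; the file name 95proxy and the proxy address are just examples:

cat > /etc/apt/apt.conf.d/95proxy <<'EOF'
Acquire::http::proxy "http://proxy.example.com:3128/";
Acquire::https::proxy "http://proxy.example.com:3128/";
EOF
apt-get update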

Alternatively, if you do not want the proxy settings saved persistently and only need them for the current session, simply export the environment variables as below:


export http_proxy=http://<Proxy_server_FQDN_or_IP>:<PORT>
export https_proxy=http://<Proxy_server_FQDN_or_IP>:<PORT>
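
Keep in mind that sudo strips most environment variables by default, so a proxy exported in your shell will not reach 'sudo apt-get' unless you preserve the environment. A quick check, again with a hypothetical proxy address:

export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
sudo -E apt-get update    # -E preserves the exported proxy variables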





Docker daemon not able to download images from behind a proxy. Enable the Docker daemon to download images from Docker Hub via a proxy server


This happens because the Docker daemon tries to reach Docker Hub on the internet directly, but the security policy only allows traffic to the internet via the proxy server.



If you are not sure which systemd unit file the docker service uses, you can find it with 'systemctl status docker' or 'systemctl status docker.service'.

The unit file path appears in the 'Loaded:' line of the output:

[root@rally docker.service.d]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-12-23 04:55:19 EST; 13min ago
     Docs: https://docs.docker.com
 Main PID: 22381 (dockerd)
   CGroup: /system.slice/docker.service
           ├─22381 /usr/bin/dockerd
           └─22387 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=...



Stop the docker service

systemctl stop docker



Edit the file  /usr/lib/systemd/system/docker.service



Make entries in the [Service] section and add 'Environment' lines for HTTP_PROXY and HTTPS_PROXY pointing at the proxy server through which the Docker daemon will reach Docker Hub to pull images.

If there are certain IPs or FQDNs that the Docker daemon should reach directly, bypassing the proxy, list them in an 'Environment' line as 'NO_PROXY'.

For the exact syntax, see this excerpt from the file /usr/lib/systemd/system/docker.service:


[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID

Environment="HTTP_PROXY=http://<PROXY_Server_NAME_OR_IP>:<PROXY_PORT>"
Environment="HTTP_PROXY=http://<PROXY_Server_NAME_OR_IP>:<PROXY_PORT>"
Environment="NO_PROXY=<FQDN1|IPAddr1|IPAddr2|FQDN2| ...>"

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target



Save and exit the file 

Issue a systemctl daemon-reload so that systemd picks up the changed docker unit file from disk.

Restart/start  the docker service

systemctl daemon-reload
systemctl restart docker
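
As a side note, the prompt in the status output above shows a docker.service.d directory: instead of editing the packaged unit file (which a package upgrade can overwrite), the same Environment lines can live in a systemd drop-in. A minimal sketch, assuming a hypothetical proxy address:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
systemctl daemon-reload
systemctl restart docker
# verify that the variables are visible to the unit
systemctl show --property=Environment docker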

Friday, November 3, 2017

Installation of Apache Ant 1.10.x and docker-ce 17.x on CentOS7 Jenkins Master and Slaves




This requires that a JRE is installed. Please note that Ant 1.10.x requires JRE 1.8.

This ensures that the Jenkins master and slaves have identical Docker and Ant installations, so builds that need ant and docker can run identically on the Jenkins master as well as the Jenkins slave servers.

Notes:


  • JAVA_HOME must be set to the path where the JRE is installed.
  • The latest versions of Apache Ant are available from http://ant.apache.org/bindownload.cgi
  • Ant 1.10.x requires the Java 8 JRE.
  • We installed Java 8 from Oracle for Linux, as we are running these on CentOS 7 servers.


Please note that this is an RPM-based install of the Oracle 64-bit 1.8.0 update 152 package.
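
For reference, the RPM install itself is a single command; the package filename below is assumed from the version shown and may differ from your download:

rpm -ivh jdk-8u152-linux-x64.rpm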



[root@jenkinsmaster ~]# which java
/usr/bin/java


[root@jenkinsmaster ~]# java -version
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)



Installation of Ant 1.10.x


Download the ANT binaries 

wget http://redrockdigimark.com/apachemirror/ant/binaries/apache-ant-1.10.1-bin.tar.gz


Extract the binaries to a folder; here they are extracted into /opt

tar zxvf apache-ant-1.10.1-bin.tar.gz -C /opt


See the extracted folder

[root@jenkinsmaster apache-ant-1.10.1]# ls -la /opt
total 8
drwxr-xr-x.  3 root root   30 Nov  2 16:15 .
dr-xr-xr-x. 17 root root 4096 Oct 15 22:55 ..
drwxr-xr-x   6 root root 4096 Feb  2  2017 apache-ant-1.10.1
[root@jenkinsmaster apache-ant-1.10.1]#

This step is optional if ANT_HOME and PATH are set as shown below. It creates a soft link from the ant command in the extracted folder to /usr/bin/ant

[root@jenkinsmaster bin]# ln -s /opt/apache-ant-1.10.1/bin/ant /usr/bin/ant

Run the ant command without any options to make sure it runs properly



[root@jenkinsmaster bin]# ant
Buildfile: build.xml does not exist!
Build failed
[root@jenkinsmaster bin]#


See the version of ant installed


[root@jenkinsmaster bin]# ant -version
Apache Ant(TM) version 1.10.1 compiled on February 2 2017
[root@jenkinsmaster bin]#



Append these lines to /etc/profile, ~/.bash_profile, or ~/.bashrc to set the Ant and Java environment variables.



Here we added these lines to /etc/profile

export ANT_HOME=/opt/apache-ant-1.10.1
export PATH=$PATH:$ANT_HOME/bin
export JAVA_HOME=/usr/java/jdk1.8.0_152
export PATH=$PATH:$JAVA_HOME/bin
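
To apply these in the current shell without logging out, source the file (note the space after the dot):

. /etc/profile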


Ensure that these are working.

Environment variables for JAVA_HOME, ANT_HOME, and PATH:


[root@jekninss1 ~]# env | grep -i -e ant -e java -e path
ANT_HOME=/opt/apache-ant-1.10.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/opt/apache-ant-1.10.1/bin:/usr/java/jdk1.8.0_152/bin
JAVA_HOME=/usr/java/jdk1.8.0_152


Version of Apache ant installed


[root@jekninss1 ~]# ant -version
Apache Ant(TM) version 1.10.1 compiled on February 2 2017
[root@jekninss1 ~]#


Version of the Java JRE installed


[root@jekninss1 ~]# java -version
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
[root@jekninss1 ~]#

Follow the same instructions to set up Apache Ant on the Jenkins slaves as well. This ensures that ant can be run in the same manner on the Jenkins master and the Jenkins slaves.
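
A quick way to confirm that both nodes report the same versions, assuming root SSH access and using the hostnames from the prompts above:

for h in jenkinsmaster jekninss1; do
  ssh root@$h 'hostname; ant -version; java -version'
done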


Installation of Docker on the Jenkins Master server and the Jenkins Slave servers


Add the Docker CE Repo 




yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo


[root@jenkinsmaster yum.repos.d]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@jenkinsmaster yum.repos.d]#

Ensure that the docker repository shows up in the repolist


[root@jenkinsmaster yum.repos.d]# yum repolist
Loaded plugins: fastestmirror
docker-ce-stable                                         | 2.9 kB     00:00
docker-ce-stable/x86_64/primary_db                         | 9.3 kB   00:07
Loading mirror speeds from cached hostfile
 * base: mirror.nbrc.ac.in
 * epel: mirror2.totbb.net
 * extras: mirror.nbrc.ac.in
 * jpackage-generic: mirrors.dotsrc.org
 * jpackage-generic-updates: mirrors.dotsrc.org
 * updates: mirror.nbrc.ac.in
repo id                     repo name                                     status
base/7/x86_64               CentOS-7 - Base                                9,591
docker-ce-stable/x86_64     Docker CE Stable - x86_64                         10
*epel/x86_64                Extra Packages for Enterprise Linux 7 - x86_6 12,042
extras/7/x86_64             CentOS-7 - Extras                                280
jpackage-generic            JPackage (free), generic                       3,307
jpackage-generic-updates    JPackage (free), generic                          29
updates/7/x86_64            CentOS-7 - Updates                             1,052

[root@jenkinsmaster yum.repos.d]#

Install docker-ce


yum installation of docker-ce

yum -y install docker-ce


Enable the docker service

systemctl enable docker

[root@jenkinsmaster ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Start the docker service 

systemctl start docker


Ensure that docker is setup properly 

docker run 'hello-world'

This pulls the 'hello-world' docker image and then runs it

[root@jenkinsmaster ~]# docker run 'hello-world'
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
5b0f327be733: Pull complete
Digest: sha256:175735360662078abd70dacb73c5518d5b3ae7c1ed069d22def5da57c3e917d6
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

[root@jenkinsmaster ~]#

Complete the same steps on the Jenkins Slave server(s) also.
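
For example, repeating the smoke test remotely on a slave (hostname assumed from the prompts above):

ssh root@jekninss1 'docker run hello-world'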

Thursday, October 19, 2017

OpenStack Orchestration Heat Stack to create a network OpenStack Newton




An example YAML file for this:


[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# cat 04net.yml
---
# for Newton release of OpenStack
#
heat_template_version: 2016-10-14

description: having a private network in place

resources:
  private_net:
    type: OS::Neutron::Net
    properties:
      name: internal1
      shared: true

outputs:
  net_info:
    value: { get_attr: [private_net]}
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#


Creating a stack from the definition above creates a network named "internal1".

Running the stack 


openstack stack create -t 04net.yml internal1_network 

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| id                  | 95f4779a-ac2e-4d41-aec3-5236d24d0bb3 |
| stack_name          | internal1_network                    |
| description         | having a private network in place    |
| creation_time       | 2017-10-19T20:52:01Z                 |
| updated_time        | None                                 |
| stack_status        | CREATE_IN_PROGRESS                   |
| stack_status_reason | Stack CREATE started                 |
+---------------------+--------------------------------------+
--------

See the stack information 


openstack stack show internal1_network

+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                                                |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| id                    | 95f4779a-ac2e-4d41-aec3-5236d24d0bb3                                                                                                 |
| stack_name            | internal1_network                                                                                                                    |
| description           | having a private network in place                                                                                                    |
| creation_time         | 2017-10-19T20:52:01Z                                                                                                                 |
| updated_time          | None                                                                                                                                 |
| stack_status          | CREATE_COMPLETE                                                                                                                      |
| stack_status_reason   | Stack CREATE completed successfully                                                                                                  |
| parameters            | OS::project_id: 49b25ce4022c492fa0c1eab4fc6c7419                                                                                     |
|                       | OS::stack_id: 95f4779a-ac2e-4d41-aec3-5236d24d0bb3                                                                                   |
|                       | OS::stack_name: internal1_network                                                                                                    |
|                       |                                                                                                                                      |
| outputs               | - description: No description given                                                                                                  |
|                       |   output_error: '''qos_policy_id'''                                                                                                  |
|                       |   output_key: net_info                                                                                                               |
|                       |   output_value: null                                                                                                                 |
|                       |                                                                                                                                      |
| links                 | - href: http://172.29.240.100:8004/v1/49b25ce4022c492fa0c1eab4fc6c7419/stacks/internal1_network/95f4779a-ac2e-4d41-aec3-5236d24d0bb3 |
|                       |   rel: self                                                                                                                          |
|                       |                                                                                                                                      |
| parent                | None                                                                                                                                 |
| disable_rollback      | True                                                                                                                                 |
| deletion_time         | None                                                                                                                                 |
| stack_user_project_id | d409212bfdd14e50beabc71a01dc7627                                                                                                     |
| capabilities          | []                                                                                                                                   |
| notification_topics   | []                                                                                                                                   |
| stack_owner           | None                                                                                                                                 |
| timeout_mins          | None                                                                                                                                 |
| tags                  | null                                                                                                                                 |
|                       | ...                                                                                                                                  |
|                       |                                                                                                                                      |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+


Confirm that the network has been created:

openstack network list

A network has been created, but there is no subnet yet on the network internal1.


[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# openstack network list
+--------------------------------------+-----------+--------------------------------------+
| ID                                   | Name      | Subnets                              |
+--------------------------------------+-----------+--------------------------------------+
| 0b26c960-6158-4b20-9156-9d163ceaf2f3 | internal0 | 7206c8e9-64ca-4ba1-abef-12639820fd37 |
| 216e5f0c-e0ed-4c04-b912-37c6967a0038 | internal1 |                                      |
| 3bc5a907-42ad-4fa7-aa53-1e514b42d6df | public0   | d8e83610-7b77-4683-abc5-0cf3b6186395 |
+--------------------------------------+-----------+--------------------------------------+



[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#

OpenStack Orchestration Heat to create a Subnet in a given Network OpenStack Newton




Here is how to create a subnet using a Heat stack.


---

Creation of the subnet using a Heat stack


Here is the stack definition to create the subnet:



[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# cat 05subnet.yml
---
# for Newton release of OpenStack
#
heat_template_version: 2016-10-14

description:  having a private subnet in a private network

resources:
  subnet:
    type: OS::Neutron::Subnet
    properties:
      cidr: '192.168.201.0/24'
      name: internalsubnet1
      enable_dhcp: true
      ip_version: 4
      allocation_pools:
        - { start: 192.168.201.5, end: 192.168.201.254 }
      network: internal1

outputs:
  subnet_info:
    value: { get_attr: [subnet]}
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#

  
----



  • Create the stack using the above YAML file to create a subnet in the network 'internal1'.
  • The subnet will be named 'internalsubnet1'.
  • The subnet CIDR is 192.168.201.0/24.
  • The allocation pool range is 192.168.201.5-192.168.201.254.


---


Create the stack


[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# openstack stack create -t 05subnet.yml internalsubnet_1
+---------------------+----------------------------------------------+
| Field               | Value                                        |
+---------------------+----------------------------------------------+
| id                  | b1fe49e0-01d1-4df8-9a15-9243f31b7055         |
| stack_name          | internalsubnet_1                             |
| description         | having a private subnet in a private network |
| creation_time       | 2017-10-19T21:01:47Z                         |
| updated_time        | None                                         |
| stack_status        | CREATE_IN_PROGRESS                           |
| stack_status_reason | Stack CREATE started                         |
+---------------------+----------------------------------------------+
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#


---

See that the stack is created 


[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# openstack stack list | grep -i internalsubnet_1
| b1fe49e0-01d1-4df8-9a15-9243f31b7055 | internalsubnet_1  | CREATE_COMPLETE | 2017-10-19T21:01:47Z | None         |
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#


--

Confirm that the subnet has been created, and inspect it using openstack subnet show


[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# openstack subnet show internalsubnet1
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 192.168.201.5-192.168.201.254        |
| cidr              | 192.168.201.0/24                     |
| created_at        | 2017-10-19T21:01:48Z                 |
| description       |                                      |
| dns_nameservers   |                                      |
| enable_dhcp       | True                                 |
| gateway_ip        | 192.168.201.1                        |
| host_routes       |                                      |
| id                | 9480b504-db1f-4f05-97b6-4058c7b168c9 |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | internalsubnet1                      |
| network_id        | 216e5f0c-e0ed-4c04-b912-37c6967a0038 |
| project_id        | 49b25ce4022c492fa0c1eab4fc6c7419     |
| revision_number   | 2                                    |
| service_types     | []                                   |
| subnetpool_id     | None                                 |
| updated_at        | 2017-10-19T21:01:48Z                 |
+-------------------+--------------------------------------+
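
One thing to notice in the output: the template does not set a gateway, so Neutron picked 192.168.201.1 automatically (the first address in the CIDR, presumably why the allocation pool starts at .5). If a specific gateway is needed, it can be declared in the subnet's properties in the template, for example:

      gateway_ip: '192.168.201.1'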