Containerising the Nutanix CLI (nCLI)

April 29, 2018

Containerising the Nutanix CLI is my first post about Nutanix technology since I joined the HCI leader a week ago. For my company laptop I chose an Apple MBP, which I wanted to try for the first time. So far so good; I can even run Docker “natively” on it.

Whenever I get a new computer I try to keep everything as organised as I can. For my daily job I need to connect to many Nutanix clusters, so I prefer to use the Nutanix CLI (a.k.a. nCLI) rather than jumping into the CVM(s).

As I said, I try to keep everything clean and organised, which is why I have decided to run the nCLI as a container instead of messing around with the Java JRE and PATH variables.

Containerising the Nutanix CLI

Containerising the Nutanix CLI is a straightforward task. I have not published a Docker image yet because I’m still waiting to confirm whether the nCLI can be repackaged. But this is not a problem at all: you can build your own Docker image following the steps in my GitHub repo.

https://github.com/pipoe2h/docker-nutanix-cli

They are pretty straightforward. In a nutshell (a command-line sketch follows the list):

  • Download the ncli.zip from Prism.
  • Clone the GitHub repo.
  • Build the Docker image.
  • Run the nCLI as a container.
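For reference, here is a sketch of what those steps might look like on the command line. The image tag and the ncli.zip path are illustrative, and the exact build expectations may differ, so check the README in the repo; the -s/-u flags assume the standard nCLI connection syntax.

git clone https://github.com/pipoe2h/docker-nutanix-cli.git
cd docker-nutanix-cli
# copy the ncli.zip downloaded from Prism into the build context (path is illustrative)
cp ~/Downloads/ncli.zip .
# build the image (the tag nutanix-cli is illustrative)
docker build -t nutanix-cli .
# run the nCLI as a disposable container against a cluster
docker run --rm -it nutanix-cli ncli -s <cluster-ip> -u admin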

Disclaimer: Containerised nCLI is not officially supported by Nutanix. Please use at your own risk.

A New Adventure Ahead

March 20, 2018

It has been two years and three months since I joined Computacenter, and I still have a month ahead to keep enjoying this fantastic organisation, with its awesome people and great customers. During this time I had the opportunity to work on the design and delivery of the most exciting project of my career, the Liberty Global Private Cloud. When I saw the customer on stage at the VMworld 2017 EU keynote, I felt really proud of the work done over those 18 months.

Another wonderful moment at Computacenter was receiving the Rookie of the Year award after my first year in the organisation.

During this time I remained actively involved with the community, which gave me the chance to present for vBrownBag at VMworld EU 2017 and also at the UK VMUG UserCon 2017.

However, for a long time I have wanted to work for a company I have been following since its beginnings. I was the first person in Spain to write about them, introducing their technology to the community. I have watched them grow rapidly from a small startup into the strong organisation they are today, as proven by their position as THE LEADER in the Gartner MQ.

So, I am pleased to announce that, as of the 23rd of April 2018, I will be joining Nutanix as a Senior Systems Engineer in the UK. I’m really excited to be moving to the vendor side, and specifically to Nutanix, because I believe in what they are developing and in their vision for the years ahead.

I want to finish this post by saying thank you to Computacenter, my colleagues, and my customers. They have helped me become a better professional.

Kubernetes Dashboard. Installation Deep Dive

February 18, 2018

Deploying applications and add-ons in Kubernetes is straightforward, until they need to consume the Kubernetes API; that is the case with the Kubernetes Dashboard add-on. With the introduction of RBAC in Kubernetes 1.7, many of those applications and add-ons started to break.

This post will walk you through the process of deploying, configuring, and accessing the Kubernetes Dashboard.

Kubernetes Dashboard Prerequisites

  • A running Kubernetes platform, version 1.7.x or above.
  • An Internet connection (to pull the Kubernetes Dashboard manifest and image).

If you don’t have a Kubernetes platform running at this time, take a look at my post Hands-on Kubernetes: Deployment.

Deploying Kubernetes Dashboard

On a node with the kubectl command line installed, run the following command. The manifest includes all the Kubernetes components to be created for the add-on.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml


Check that your dashboard is running by listing the pods in the kube-system namespace with the following command. You should see a kubernetes-dashboard-… pod with the status “Running”.

kubectl -n kube-system get pod
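The output will include a line like the one below (the pod name suffix, restart count, and age are illustrative and will differ on your cluster):

NAME                                  READY  STATUS   RESTARTS  AGE
...
kubernetes-dashboard-747c4f7cf-x7zpm  1/1    Running  0         1m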

Opening the dashboard

Access the dashboard at:

https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/


Most likely you got an error similar to the one below when trying to access the dashboard:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "https:kubernetes-dashboard:",
    "kind": "services"
  },
  "code": 403
}

At this point you will start looking for a solution on the Internet. The solutions you will mostly find are the two below, kubectl proxy and NodePort, but they are not recommended for production.

kubectl proxy

This access mode is not recommended as the method to publicly expose your dashboard. The proxy only allows HTTP connections.

To use this method you need to install kubectl on your computer and run the following command. By default the proxy serves the dashboard on http://localhost:8001.

kubectl proxy
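With the proxy running, the dashboard is served through the same service-proxy path you saw earlier, but locally:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/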


Personally, I don’t recommend this connection method. Whether you are sharing a jump server or working on your own computer, a sniffer will be able to capture your kubeconfig file or token, since they are sent in plain text over HTTP.

You can find more information in Accessing Dashboard 1.7.X and above.

NodePort

If you are running a single-node setup (unlikely in production), you can configure the Kubernetes Dashboard service to use NodePort as the type for publishing the service.

I’m not going to explain how to set the service type, since the Kubernetes Dashboard site has a clear procedure (Accessing Dashboard 1.7.X and above). A quick alternative is sketched below.
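For reference, a quick way to make the same change without editing the manifest is kubectl patch. This is a sketch of the switch the wiki describes; the second command shows which node port was allocated:

kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kube-system get service kubernetes-dashboard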

API Server

This is the method I recommend for production systems as well as for dev and test. It is important to keep the same security mechanisms end to end and to get familiar with Kubernetes RBAC.

To use the API server you need to install the user certificates in the browser. I’m going to use the kubeconfig file generated by kubeadm, since I want to keep this post as short as I can.

Tip: For production systems each user should have their own certificates. Bitnami has a great doc on how to configure this (Create User With Limited Namespace Access).

Let’s see how we can extract the certificates from the kubeconfig file:

  1. Locate the kubeconfig or config file you use to run kubectl commands. If you have used my Vagrant file above, you can find it at /home/vagrant/.kube/config or /etc/kubernetes/admin.conf.
  2. You need to export a single file (.p12) containing the following two items: the client-certificate-data and the client-key-data. My example runs the commands from /home/vagrant. If you run these commands on macOS, be sure to change base64 -d to base64 -D.
    grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
    grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
    openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"


  3. Import the kubecfg.p12 certificate, reopen your browser, and visit the Kubernetes Dashboard URL. Accept any warnings and you should see the authentication page. You can skip the login and confirm that you are not able to perform any tasks.
  4. The following steps have been copied from the Kubernetes Dashboard wiki page (Creating-sample-user)
    1. Create service account
      cat <<EOF | kubectl create -f -
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: admin-user
        namespace: kube-system
      EOF
    2. Create ClusterRoleBinding
      cat <<EOF | kubectl create -f -
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: admin-user
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - kind: ServiceAccount
        name: admin-user
        namespace: kube-system
      EOF
    3. Get the Bearer Token. Once you run the following command, copy the token value; you will use it in the next step.
      kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
    4. Go back to your browser and choose Token on the login page. Paste the token value you copied in the previous step.
    5. Click “SIGN IN” and you should be able to see your Kubernetes Dashboard fully operational.

Summary

API Server should be your choice for production systems. If you want each of your users to have their own certificates, which I encourage you to do, don’t miss the Bitnami post mentioned above.

Note: On GitHub and other blogs you will find the option of giving cluster-admin access to system:anonymous. This is an easy way to avoid exporting certificates and creating a cluster-admin service account. I highly discourage this approach in any enterprise environment.

Hands-on: Kubernetes Pods. My first container

November 12, 2017

In this third post of the series I’m going to talk about Kubernetes Pods. The Kubernetes documentation defines a pod as “a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers”. Kubernetes Pods are the smallest unit of computing that can be deployed and managed by Kubernetes.


Deep-diving into Kubernetes Pods

First of all, let’s walk through a few key aspects of Kubernetes Pods. Later on we will move to the hands-on labs.

Computing

A Kubernetes pod runs on a single node, which means a pod cannot be stretched across multiple nodes. You can deploy more replicas of the same pod, but each one will be tied to a node. Also, a pod cannot be live-migrated to another node the way a virtual machine can.

You cannot set resources on a pod directly. Resource configuration happens at the container level in your pod definition. In order to successfully deploy a pod, your namespace must have enough free resources.
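As a minimal sketch (the values are illustrative), this is where the resource configuration lives inside a pod manifest, at the container level:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx-app
    image: nginx
    resources:
      requests:       # the scheduler guarantees at least this much
        cpu: "250m"
        memory: "64Mi"
      limits:         # the container cannot consume more than this
        cpu: "500m"
        memory: "128Mi"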


Networking

From a network point of view, a routable IP address is assigned to each pod. Containers within a pod share an IP address and port space, and can find each other via localhost. You cannot have more than one container within the same pod listening on the same port. You will find an example in the hands-on section.

In addition, containers in different pods have distinct IP addresses and cannot communicate via IPC (inter-process communication).


Storage

The storage claimed by a pod is shared by all the containers within that pod. Once a persistent volume is claimed by a pod, it cannot be claimed/attached by another pod. Volumes enable data to survive container restarts and to be shared among the applications within the pod.
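A minimal sketch of that sharing, using an emptyDir volume mounted by two containers in the same pod (names and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-storage
spec:
  volumes:
  - name: shared-data
    emptyDir: {}      # pod-level volume, visible to every container that mounts it
  containers:
  - name: writer
    image: busybox
    args: ["sh", "-c", "echo hello > /data/hello.txt && sleep 1000000"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    args: ["sleep", "1000000"]
    volumeMounts:
    - name: shared-data
      mountPath: /data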


Scheduling

By default the kube-scheduler service ensures that pods are only placed on nodes that have sufficient free resources. Also, it tries to balance out the resource utilisation of nodes.

Since Kubernetes 1.6 it offers advanced scheduling features: node affinity/anti-affinity, taints and tolerations, pod affinity/anti-affinity, and custom schedulers.
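As a small example of one of those features, this fragment of a pod spec uses node affinity to restrict scheduling to nodes labelled disktype=ssd (the label key and value are illustrative):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd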


Availability

Kubernetes works to keep your pods up and running in the event of a failure: if a container crashes, the kubelet restarts it according to the pod’s restartPolicy. Bear in mind, however, that a bare pod is not rescheduled onto another node when its node fails; you need a controller for that.

With a single pod, your availability is compromised if it fails. In a future post I’ll show you how to create replicas of your pod to improve its availability. ReplicaSet is the next-generation Replication Controller; it ensures that a specified number of pod replicas are running at any given time. ReplicaSets should not be used directly; instead, a Deployment object is recommended as the higher-level entity.
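Just as a taste of what is coming, a Deployment that keeps three Nginx replicas alive looks like this (on older clusters the apiVersion may be apps/v1beta1 instead of apps/v1):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3         # the controller keeps three pod replicas running at all times
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-app
        image: nginx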


Kubernetes Pods Lifecycle

Before we start with the creation, read, update, and delete (CRUD) of Kubernetes pods, I’d like to highlight an important recommendation. You should never instantiate a pod directly; I’m doing it here only for the sake of the post. Always use a Kubernetes controller such as a Deployment, Job, or StatefulSet.

Creating Kubernetes Pods

The first thing to remember is how Kubernetes Namespaces work. If you are not familiar with namespaces, I suggest you read my post Hands-on: Kubernetes Namespaces. We are going to work with two namespaces to showcase the connectivity between pods in different namespaces.

Unlike a namespace, a Kubernetes pod must be created from a manifest. In our example we are going to use the namespaces production and development. Let’s create our first namespace (production) and an Nginx pod at the same time.

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: production
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: production
spec:
  containers:
  - name: nginx-app
    image: nginx
EOF
namespace "production" created
pod "nginx" created
kubectl -n production get all -o wide
NAME      READY  STATUS   RESTARTS  AGE  IP           NODE
po/nginx  1/1    Running  0         37s  10.244.2.12  node2

From any of your servers, master or nodes, you can check that the Nginx container is accessible. Use curl with the IP address from the previous command.

curl 10.244.2.12
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed
and working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Let’s create the second namespace, development, with a busybox container.

vim development.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: development
spec:
  containers:
  - name: busybox-sleep
    image: busybox
    args:
    - sleep
    - "1000000"
kubectl create -f development.yaml
namespace "development" created
pod "busybox" created
kubectl -n development get all
NAME       READY  STATUS   RESTARTS  AGE
po/busybox 1/1    Running  0         30s

Now you will check that the busybox container is able to open the Nginx site even though it lives in a different namespace. The first command below opens a shell in the container; you will then use wget to confirm the Nginx website is accessible from the busybox container. The IP address to use with wget can be gathered by listing the pods in the production namespace (kubectl -n production get pods -o wide).

kubectl -n development exec busybox -it /bin/sh
wget 10.244.2.12
Connecting to 10.244.2.12 (10.244.2.12:80)
index.html 100% |****************************************************|
612 0:00:00 ETA
cat index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed
and working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Reading Kubernetes Pods

To read any given Kubernetes object you use the describe option.

kubectl -n production describe pod nginx
Name:         nginx
Namespace:    production
Node:         node2/192.168.34.12
Start Time:   Sun, 12 Nov 2017 16:25:03 +0000
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.2.12
Containers:
  nginx-app:
    Container ID:   docker://3c2846e5619f4203ed5d35e96818a71a9e71bc5f3d442d25afae43f3af819766
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:9fca103a62af6db7f188ac3376c60927db41f88b8d2354bf02d2290a672dc425
    Port:           <none>
    State:          Running
      Started:      Sun, 12 Nov 2017 16:25:05 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-q2dpl (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-q2dpl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-q2dpl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age  From               Message
  ----    ------                 ---  ----               -------
  Normal  Scheduled              31m  default-scheduler  Successfully assigned nginx to node2
  Normal  SuccessfulMountVolume  31m  kubelet, node2     MountVolume.SetUp succeeded for volume "default-token-q2dpl"
  Normal  Pulling                31m  kubelet, node2     pulling image "nginx"
  Normal  Pulled                 31m  kubelet, node2     Successfully pulled image "nginx"
  Normal  Created                31m  kubelet, node2     Created container
  Normal  Started                31m  kubelet, node2     Started container

Take a look at each of the pod settings. You can see they are not hard to understand.

Updating Kubernetes Pods

The update options are limited when you deploy a Kubernetes pod directly. For example, you cannot add or remove containers from a running pod. You can update a pod in different ways; here you will learn how to do it using patch and replace.

With kubectl patch you modify the pod configuration on the fly. Let’s patch the Nginx pod running in the production namespace with a different Nginx version. It is important to realise that the container is destroyed and re-created with the new version, which means the service suffers an outage. The pod uptime, however, is not reset.

The first thing to do is check the current Nginx version with kubectl exec. The double dash after -it is required when the command has more than one argument (nginx -v).

kubectl -n production exec nginx -it -- nginx -v
nginx version: nginx/1.13.6

Let’s downgrade the Nginx version to 1.12 using kubectl patch. For the patch argument you just pass the JSON blob for the spec section of your pod manifest. You can see the image key-value pair has been updated to nginx:1.12.

kubectl -n production patch pod nginx -p '{"spec":{"containers":[{"name": "nginx-app", "image": "nginx:1.12"}]}}'
pod "nginx" patched

Check again the Nginx version.

kubectl -n production exec nginx -it -- nginx -v
nginx version: nginx/1.12.2

Before you move to the next section, deleting Kubernetes Pods, let’s replace our busybox pod in the development namespace with a new pod containing two containers. The objective is to showcase how network traffic flows within a pod when you have more than one container. Remember that the containers within a pod communicate with each other through localhost.

First let’s update the development.yaml file to include the new Nginx container.

cp development.yaml development-update.yaml
vim development-update.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: development
spec:
  containers:
  - name: busybox-sleep
    image: busybox
    args:
    - sleep
    - "1000000"
  - name: nginx-app
    image: nginx

Now you will replace the current busybox pod with the new version of the manifest file. Since adding or removing containers in a pod is not possible, you will use the --force option to delete and re-create the namespace and pod. Deleting the pod takes a while because a graceful shutdown is performed.

kubectl -n development replace --force -f development-update.yaml
namespace "development" deleted
pod "busybox" deleted
namespace "development" replaced
pod "busybox" replaced

If you run the following command you will see the pod now includes two containers (kubectl -n development describe pod busybox).

Let’s check if the busybox container is able to get the web page provided by Nginx.

kubectl -n development exec busybox -c busybox-sleep -it /bin/sh
wget localhost
Connecting to localhost (127.0.0.1:80)
index.html 100% |****************************************************|
612 0:00:00 ETA
cat index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed
and working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Deleting Kubernetes Pods

The command to delete a pod is the same as the one to delete a namespace, just changing the kind of object. When you delete a pod, Kubernetes will try to gracefully terminate it, which matters when you have volumes attached to your pod.
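If you need to tune that behaviour, kubectl exposes the grace period as a flag; for example, to give a pod 60 seconds to shut down:

kubectl delete pod <pod-name> --grace-period=60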

Let’s destroy the busybox pod in the development namespace, and then delete the rest of the objects you have created for this post.

kubectl -n development delete pod busybox
pod "busybox" deleted
kubectl -n development get pod
NAME     READY  STATUS       RESTARTS  AGE
busybox  2/2    Terminating  0         11m

Let’s delete the rest of the objects:

kubectl delete namespace production development
namespace "production" deleted
namespace "development" deleted

Conclusion

If you have survived this far, congratulations! It was a long post, but I wanted to cover the most important aspects of Kubernetes Pods. It is a good starting point to get familiar with the pod architecture and lifecycle.

Pods are the meat of Kubernetes. Practice with them as much as you can, so you are able to troubleshoot your container platform when failures happen.

In future posts I’ll show you how pods can be abstracted into higher-level objects, also known as controllers.