Hands-on Kubernetes: Deployment

A month ago I started to get my head around Kubernetes, and I'm still early in my journey with containers. Honestly, it has been hard to get a stable platform up and running. For a start, depending on what you use underneath to spin up the platform, you may need to tweak some tools. This is something I'll explain later on.

I’m pushing myself to write a series of posts where I’ll share what I have learnt. The series will be mostly hands-on experience with a bit of theory. You can find a lot of documentation out there about how Kubernetes works under the hood, and I don’t want to reinvent what is already written. Instead, I’ll share some useful references.

Spinning up a Kubernetes platform

You have many ways to spin up a Kubernetes platform, like using Rancher (don’t miss Deploying Rancher 2.0 on Vagrant). This time I wanted to configure Kubernetes using kubeadm. The tool doesn’t deploy the infrastructure for you; a set of pre-provisioned servers is required. Keep in mind that kubeadm is still under development and not ready for production yet.
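For context, this is roughly what kubeadm does on those pre-provisioned servers. The Vagrantfile in this post automates these steps for you; the advertise address and pod CIDR below are only illustrative, and the token and hash are placeholders printed by kubeadm init:

# On the master: initialise the control plane (values are illustrative)
~$ sudo kubeadm init --apiserver-advertise-address=172.16.35.10 --pod-network-cidr=10.244.0.0/16

# Make kubectl usable for your user on the master
~$ mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker: join the cluster with the token printed by kubeadm init
~$ sudo kubeadm join 172.16.35.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>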

Vagrant as Infrastructure Orchestrator

Many people don’t have the opportunity to use dedicated infrastructure to test solutions. Vagrant, together with virtualisation software on your computer, gives you the foundational infrastructure.

Vagrant stack

VirtualBox as Virtualisation Software

The reason to use VirtualBox as the Vagrant provider is that Vagrant supports it for free. You have other alternatives like VMware Workstation or Fusion, but if you want to use either of them as a provider you must buy the VMware integration for Vagrant.

VirtualBox stack
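If you want to follow along, the only workstation prerequisites are Vagrant and VirtualBox. A quick check that both are installed (any reasonably recent versions should do):

# Confirm both tools are installed and on your PATH
~$ vagrant --version
~$ VBoxManage --version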

Cloning the Kubernetes-Vagrant GitHub repo

I have created a Vagrantfile blueprint to deploy a cluster with the following characteristics:

  • Single master
  • One or more nodes (a.k.a. workers or minions). By default, two workers are deployed to test the overlay network
  • A standalone NFS server
  • Kubeadm as the tool to configure the latest Kubernetes version
  • Canal as the CNI plug-in

Kubernetes platform

The diagram above details the foundational infrastructure and the components running on each server:

  • Blue square. It represents a virtual machine, with its hostname and IP suffix
  • Purple rectangle. It represents the NFS export used for persistent storage
  • Red rectangle. It represents a Linux tool or service that runs natively on the operating system
  • Green rounded rectangle. It represents a Docker container. The Kubernetes components and the Canal CNI plug-in run in containers (see the example after this list). For more information about the Kubernetes components, see the Kubernetes documentation on cluster components
  • Orange dashed rectangle. It represents the free resources on a node available to run pods
  • White dotted rounded rectangle. It represents a Kubernetes pod. A pod is a group of one or more containers with shared storage and/or network
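Once the cluster is up you can see those containerised components for yourself. In a kubeadm plus Canal setup you should find etcd, the API server, the scheduler, the controller manager, kube-proxy, the cluster DNS and the canal pods in the kube-system namespace:

# List the control-plane and networking pods (-o wide also shows which node runs each pod)
vagrant@master:~$ kubectl get pods --namespace kube-system -o wide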

To get this setup, just clone or download my GitHub repository:

~$ git clone https://github.com/pipoe2h/kubernetes-vagrant.git
~$ cd kubernetes-vagrant
# Customise your settings in the Vagrantfile
~$ vagrant up

Once the platform is up (approx. 10 minutes for a two-node setup) you can SSH into the master node with vagrant ssh master. From the master server, run kubectl get nodes to list the nodes and check that all of them show as Ready.

kubectl get nodes
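A useful extra check at this point is the wide output, which also shows each node's internal IP; that IP matters for the first lesson below:

# The INTERNAL-IP column should show the private network addresses, not the loopback address
vagrant@master:~$ kubectl get nodes -o wide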

Lessons Learnt

Overall, kubeadm makes the deployment process simple. In reality, the challenges I faced were not related to kubeadm; my Vagrant setup was wrong.

  1. Vagrant adds a NAT interface to perform its provisioning tasks. During this process Vagrant overrides the hosts file with the loopback address, while Kubernetes requires the node IP for proper communication (you can check it with hostname -i). My Vagrantfile is tweaked to set the right IP in the hosts file; otherwise the kubectl exec command doesn’t work (see the sketch after this list).
  2. With two network interfaces on the virtual machine, the interface to use must be defined explicitly. By default the plug-in takes the main interface. To fix that behaviour, the canal_iface setting in the Canal YAML blueprint must be set to the second network interface, enp0s8.
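If you are building your own Vagrantfile rather than using mine, this is roughly what both fixes look like; the IP address and the canal.yaml file name are illustrative, and the sed call assumes the manifest ships with the default empty value:

# 1. Make the hostname resolve to the private network IP instead of the loopback address
~$ hostname -i             # should return the node IP, e.g. 172.16.35.10, not 127.0.1.1
~$ grep master /etc/hosts  # the Vagrantfile rewrites this entry with the right IP

# 2. Pin Canal to the second interface before applying the manifest
~$ sed -i 's/canal_iface: ""/canal_iface: "enp0s8"/' canal.yaml
~$ kubectl apply -f canal.yaml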

Before you start actively using your Kubernetes platform, make sure ping works between pods on different nodes, and that DNS resolution works as well (try to resolve an external domain).
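A quick way to run both checks is a throwaway busybox pod; the pod IP below is just a placeholder for the IP of a pod running on another node:

# Start an interactive pod that is removed when you exit (busybox:1.28 plays nicely with cluster DNS)
~$ kubectl run -it --rm nettest --image=busybox:1.28 --restart=Never -- sh
/ # ping <pod-ip-on-another-node>   # overlay network check
/ # nslookup kubernetes.default     # cluster DNS check
/ # nslookup google.com             # external DNS check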

In summary, this post has shown you how to deploy a Kubernetes platform. You now have a functional container platform for training and testing purposes. This is the platform I will refer to throughout the series of posts.
