Author Description

Jose Luis Gomez

Author Posts

Hands-on: Kubernetes Namespaces. Multi-tenancy and more

November 4, 2017 - 3 Comments

In the first post we saw how to deploy a Kubernetes platform using Vagrant and VirtualBox. Now we want to see how cluster resources are logically segregated using Kubernetes namespaces. A namespace gives you a way to enable multi-tenancy in Kubernetes, apply RBAC, and have an isolated space where you won't have any naming conflicts with other namespaces.

Kubernetes namespaces

Getting started with namespaces

The following sections will walk you through when you should use namespaces. In addition, you will learn how to create and operate them.

Use cases

As an illustration, let's presume you are providing Containers as a Service (a.k.a. CaaS) to your organisation, your customers, or yourself. As shown in the diagram above, the common use cases for namespaces are:

  • Multi-tenancy. You want to isolate organisations or customers on a shared Kubernetes platform. Kubernetes namespaces cannot be nested. If you have the namespace CustomerA and it later requires two new namespaces for development and production, you cannot create sub-namespaces under the CustomerA namespace. You will need to create two namespaces such as CustomerA-dev and CustomerA-prod.
  • Environment. If you want to keep your development separate from production, you can create two namespaces, one called Dev and the other Prod.
  • Project. Keep your projects in dedicated namespaces. It helps with CI/CD as well as better utilisation and tracking of resources.
  • Team. If you are in a DevOps environment, partitioning by team is much the same as partitioning by project. You can also create a playground area for learning purposes.

If you want more information about use cases, take a look at this post.

Naming convention

The following approach is just a suggestion for unique and scalable namespace names. Use hyphens to separate the groups.

  • Customer/Organisation code. A minimum of three letters
  • Environment code. A minimum of three letters
  • Project/Service code. A minimum of three letters
  • Digits. A minimum of two numbers

Remember to use lowercase for your namespace name. This is an example using the convention above: jlg-prod-blog-01.
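
As a quick illustration (namespace creation is covered in detail below), a namespace following this convention would be created like so:

kubectl create namespace jlg-prod-blog-01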

Operating with namespaces

Before we crack on with the creation of namespaces, let's take a look at the Kubernetes command-line tool called kubectl.

Kubectl is the command-line tool used to operate Kubernetes clusters and their applications. It's available for most operating systems. If you have deployed your Kubernetes platform using the Vagrantfile I have created, the master node has kubectl installed and configured for you.

If you want to know more about kubectl, please visit this link.

Listing

To list any kind of Kubernetes object you use kubectl get <object_type>. Our object type is namespace, so the command is as follows:

kubectl get namespace
NAME        STATUS AGE
default     Active 20h
kube-public Active 20h
kube-system Active 20h

By default any Kubernetes configuration tool creates at least two namespaces, default and kube-system. In our case we can see three because kube-public is created when you use kubeadm as the Kubernetes configuration tool.

  • default. This namespace is used when you don't specify a different namespace using -n <namespace_name> or --namespace <namespace_name> (see the example after this list).
  • kube-system. This is the namespace where all the Kubernetes core components are running.
  • kube-public (only with kubeadm). This namespace is readable by everyone, including those not authenticated. This namespace is used by kubeadm to host a ConfigMap object in order to enable a secure bootstrap with a simple and short shared token.
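
For instance, the following two commands are equivalent and list the Pods running in the kube-system namespace (kubectl also accepts the short name ns for namespace):

kubectl get pods -n kube-system
kubectl get pods --namespace kube-system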

Viewing

You can see the details of an object using kubectl describe <object_type> <object_name>. Our object type is namespace and the object name we will use is kube-system. The command is as follows:

kubectl describe namespace kube-system
Name:        kube-system
Labels:      <none>
Annotations: <none>
Status:      Active
 
No resource quota.
 
No resource limits.

As you can see, the output is pretty simple. This is because we are working with a fresh Kubernetes installation, and the most common additional settings for a namespace, like resource quotas and limits, are not defined yet. Those objects will be covered in a future post.
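
As a small preview, quotas are defined with a ResourceQuota object like any other namespaced resource; the manifest below is only an illustrative sketch with hypothetical values:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota      # hypothetical name
  namespace: default       # namespace the quota applies to
spec:
  hard:
    pods: "10"             # hypothetical limits
    requests.cpu: "2"
    requests.memory: 2Gi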

Creating

Using the kubectl command you can create objects in two ways.

The first way is to create the object from the CLI without using a manifest file. This approach is supported for a limited number of Kubernetes objects. Let's create a namespace called no-manifest.

kubectl create namespace no-manifest
namespace "no-manifest" created
 
kubectl get namespace
NAME        STATUS AGE
default     Active 5d
kube-public Active 5d
kube-system Active 5d
no-manifest Active 1m

The second way is to use a manifest file. This is the preferred method, since it gives you the opportunity to track and version your infrastructure-as-code manifests in your source code repository. Let's create a namespace called manifest using a YAML file.

cat <<EOF > namespace_manifest.yaml
> apiVersion: v1
> kind: Namespace
> metadata:
>   name: manifest
> EOF
 
kubectl create -f namespace_manifest.yaml
namespace "manifest" created
 
kubectl get namespace
NAME        STATUS AGE
default     Active 5d
kube-public Active 5d
kube-system Active 5d
no-manifest Active 5m
manifest    Active 1m

As you can see, the manifest file for a namespace is simple. Other attributes like labels or annotations can be added.
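
For instance, a manifest with labels and annotations could look like the following (the label and annotation values are purely illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: manifest
  labels:
    environment: dev                                       # illustrative label
  annotations:
    description: Namespace created from a manifest file    # illustrative annotation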

Editing

You can edit Kubernetes objects on the fly with the command kubectl edit <object_type> <object_name> -n <namespace>. In our next example we don't need the -n <namespace> flag because the object we are editing is a namespace itself. For objects contained within a namespace, you must specify that namespace. Let's set an annotation with a description on our namespace called no-manifest.

kubectl edit namespace no-manifest
# Please edit the object below. Lines beginning with a '#'
# will be ignored, and an empty file will abort the edit.
# If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    description: This is just an example
  creationTimestamp: 2017-11-04T10:57:15Z
  name: no-manifest
  resourceVersion: "39333"
  selfLink: /api/v1/namespaces/no-manifest
  uid: eb25c489-c14e-11e7-bb73-0800277f7091
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
 
kubectl describe namespace no-manifest
Name:        no-manifest
Labels:      <none>
Annotations: description=This is just an example
Status:      Active
 
No resource quota.
 
No resource limits.
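
As a side note, the same change could have been applied non-interactively with kubectl annotate, which can be handier for scripting:

kubectl annotate namespace no-manifest description="This is just an example"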

Deleting

The deletion of a namespace removes ALL the child objects contained within it. Before you delete a namespace, make sure the objects in it are not required anymore. Let's delete the namespace called manifest with the command kubectl delete namespace manifest.

kubectl delete namespace manifest
namespace "manifest" deleted
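
Since this namespace was created from a manifest file, the same deletion could also be done by pointing kubectl at that file:

kubectl delete -f namespace_manifest.yaml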

Real Kubernetes Namespaces example

For the purpose of this real-case scenario, we are going to create a namespace to group the different kinds of web servers. After you have created the namespace, you will proceed with the deployment of Nginx. Finally, you will list all the objects in the created namespace.

In the first place, let's create a new namespace called webservers.

kubectl create namespace webservers
namespace "webservers" created

In addition, we are going to create a deployment so we can see later on how these objects are contained in a namespace.

kubectl create deploy dep-nginx --image=nginx -n webservers
deployment "dep-nginx" created

Finally, let's list the components in our namespace using the command kubectl get all -n webservers.

kubectl get all -n webservers
NAME             DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/dep-nginx 1       1       1          1         53s
 
NAME                    DESIRED CURRENT READY AGE
rs/dep-nginx-6f5568d8dd 1       1       1     53s
 
NAME             DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/dep-nginx 1       1       1          1         53s
 
NAME                    DESIRED CURRENT READY AGE
rs/dep-nginx-6f5568d8dd 1       1       1     53s
 
NAME                          READY STATUS  RESTARTS AGE
po/dep-nginx-6f5568d8dd-49csx 1/1   Running 0        53s

In the output above you can see what a deployment creates.

  • Deployments (deploy). This controller provides declarative updates for Pods (po) and ReplicaSets (rs). The controller is responsible for ensuring the desired state of your application. You can see the deployment (and its ReplicaSet) listed twice with the same name (dep-nginx); this is not two separate objects, but a quirk of kubectl get all in this Kubernetes version, which lists the same object once for each API group it is served from.
  • ReplicaSets (rs). This controller is the next-generation Replication Controller (rc). You can still find some applications using a Replication Controller instead of ReplicaSets. The objective of a ReplicaSet is to ensure that a specified number of pod replicas is running at any given time (see the scaling example after this list).
  • Pods (po). A Pod is the basic building block of Kubernetes, the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster. Since we didn't define the number of replicas, a single Pod runs by default. If you would like to see where the Pod is running on your cluster, you can run the same command with the flag -o wide (-o is output): kubectl get all -n webservers -o wide.
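
Should you want more than one replica, the deployment can be scaled after the fact; for instance (the replica count here is arbitrary):

kubectl scale deployment dep-nginx --replicas=3 -n webservers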

Conclusion

Kubernetes namespaces are an important part of your platform and its security. They allow you to logically segregate resources and assign them to each tenant. They also create a space where object names don't clash with the objects in other Kubernetes namespaces.

Hands-on Kubernetes: Deployment

October 29, 2017 - 0 Comments

A month ago I started to get my head around Kubernetes, not to mention I'm still early in my journey with containers. Honestly, it has been hard to get a stable platform up and running. In the first place, depending on what you are using underneath to spin up the platform, you may need to tweak some tools. Anyway, this is something I'll explain later on.

I'm pushing myself to write a series of posts where I'll share with you what I have learnt. The series will be mostly hands-on experience and a bit of theory. You can find a lot of documentation out there about how Kubernetes works under the hood; I don't want to reinvent what is already written. Instead, I'll share some useful references with you.

Spinning up a Kubernetes platform

You have many ways to spin up a Kubernetes platform, like using Rancher (don't miss Deploying Rancher 2.0 on Vagrant). This time I wanted to configure Kubernetes using kubeadm. The tool doesn't deploy the infrastructure for you; a set of pre-provisioned servers is required. Kubeadm is still under development and it's not ready for production yet.
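
For context, the kubeadm workflow itself is short; a simplified sketch of the two commands involved (the exact flags and token come from your own environment and from the output of kubeadm init):

# On the master node
kubeadm init

# On each worker node, using the join command printed by kubeadm init
kubeadm join --token <token> <master-ip>:6443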

Vagrant as Infrastructure Orchestrator

Many people don't have the opportunity to have dedicated infrastructure to test solutions. Vagrant, together with virtualisation software on your computer, gives you the foundational infrastructure.

Vagrant stack

VirtualBox as Virtualisation Software

The reason to use VirtualBox as the Vagrant provider is that its use with Vagrant is free. You have other alternatives like VMware Workstation or Fusion, but if you want to use either of them as a provider you must buy the VMware integration for Vagrant.

Virtualbox stack

Cloning the Kubernetes-Vagrant GitHub repo

I have created a Vagrantfile blueprint to deploy a cluster with the following characteristics:

  • Single master
  • One or more nodes (a.k.a. worker or minion). By default two workers are deployed to test the overlay network
  • A standalone NFS server
  • Kubeadm is the tool to configure the latest Kubernetes version
  • Canal is the CNI plug-in

Kubernetes platform

As shown above, the diagram details the foundational infrastructure with the components running on each server.

  • Blue square. It represents the virtual machine with the hostname and the suffix IP
  • Purple rectangle. It represents the NFS export for persistent storage
  • Red rectangle. It represents a Linux tool or service which natively runs on the operating system
  • Green rounded rectangle. It represents a Docker container. Kubernetes components and the CNI plug-in Canal run in containers. For more information about the Kubernetes components visit this link
  • Orange dashed rectangle. It represents the free resources on a node to run pods
  • White dotted rounded rectangle. It represents a Kubernetes pod. A pod is the group of one or more containers with shared storage and/or network

To get this setup you just clone or download my GitHub repository.

~$ git clone https://github.com/pipoe2h/kubernetes-vagrant.git
~$ cd kubernetes-vagrant
# Customise your settings in the Vagrantfile
~$ vagrant up

Once the platform is up (approx. 10 minutes for a two-node setup) you can SSH into the master node with vagrant ssh master. From the master server, run kubectl get nodes to list the nodes and check that all of them show as Ready.

kubectl get nodes

Lessons Learnt

Overall, kubeadm makes the deployment process simple. In reality, the challenges I faced were not related to kubeadm; my setup in Vagrant was wrong.

  1. Vagrant adds a NAT interface to perform the provisioning tasks. During this process Vagrant overrides the hosts file with the loopback address. Kubernetes requires the node IP for proper communication (you can check it with hostname -i). My Vagrantfile is tweaked to set the right IP in the hosts file; otherwise, you cannot use the kubectl exec command.
  2. With two network interfaces on the virtual machine, the interface to use must be defined. By default the plug-in takes the main interface. To fix that behaviour, the canal_iface setting in the Canal YAML blueprint must be set to the second network interface, enp0s8 (see the snippet after this list).
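
For reference, the change lives in the Canal ConfigMap; the snippet below is roughly what it looks like once the interface is set (only the canal_iface key matters here, and the ConfigMap name may differ between Canal versions):

kind: ConfigMap
apiVersion: v1
metadata:
  name: canal-config
  namespace: kube-system
data:
  # Interface Canal should bind to on each node
  canal_iface: "enp0s8"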

Before you start actively using your Kubernetes platform, make sure ping works between pods on different nodes, and check DNS resolution as well (try to resolve an external domain).
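
One quick way to test DNS, for example, is a throwaway busybox pod (busybox is just a convenient image choice for this check):

kubectl run busybox --image=busybox --restart=Never --rm -it -- nslookup kubernetes.default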

In summary, this post has shown you how to deploy a Kubernetes platform. You now have a functional container platform for training and testing purposes. In fact, this is the platform I will refer to throughout this series of posts.