Hands-on: Kubernetes Namespaces. Multi-tenancy and more

November 4, 2017 - 3 Comments

In the first post we saw how to deploy a Kubernetes platform using Vagrant and VirtualBox. Now we want to see how cluster resources are logically segregated using Kubernetes namespaces. A namespace gives you a way to enable multi-tenancy and RBAC in Kubernetes, and provides an isolated space where you won't have any naming conflicts with other namespaces.

Kubernetes namespaces

Getting started with namespaces

The following sections will walk you through when you should use namespaces, and show you how to create and operate them.

Use cases

As an illustration, let's presume you are providing Container as a Service (a.k.a. CaaS) to your organisation, your customers, or yourself. As shown in the diagram above, the common use cases for namespaces are:

  • Multi-tenancy. You want to isolate organisations or customers on a shared Kubernetes platform. Kubernetes namespaces cannot be nested: if you have the namespace CustomerA and later on it requires two new namespaces for development and production, you cannot create sub-namespaces under the CustomerA namespace. Instead, you have to create two namespaces such as CustomerA-dev and CustomerA-prod.
  • Environment. If you want to keep your development and production environments separate, you can create two namespaces, one called Dev and the other Prod.
  • Project. Keep your projects in dedicated namespaces. This helps with CI/CD as well as with better utilisation and tracking of resources.
  • Team. In a DevOps environment, partitioning by team amounts to much the same as partitioning by project. Alternatively, you can create a playground area for learning purposes.

If you want more information about use cases, take a look at this post.

Naming convention

The following approach is just a suggestion to keep namespace names unique and scalable. Use hyphens to separate the groups.

  • Customer/Organisation code. At least three letters
  • Environment code. At least three letters
  • Project/Service code. At least three letters
  • Digits. At least two digits

Remember to use lowercase for your namespace name. This is an example using the convention above: jlg-prod-blog-01.

Operating with namespaces

Before we crack on with the creation of namespaces, let's take a look at the Kubernetes command-line tool called kubectl.

Kubectl is the command-line tool used to operate Kubernetes clusters and their applications. It's available for most operating systems. If you have deployed your Kubernetes platform using the Vagrantfile I created, the master node has kubectl installed and configured for you.
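
A few quick checks can confirm that kubectl is installed and talking to your cluster. These are standard kubectl calls; the exact output depends on your Kubernetes version and cluster configuration.

kubectl version --short
kubectl config current-context
kubectl cluster-info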

If you want to know more about kubectl, please visit this link.

Listing

To list any kind of Kubernetes object you use kubectl get <object_type>. Our object type is namespace, so the command looks as follows:

kubectl get namespace
NAME        STATUS AGE
default     Active 20h
kube-public Active 20h
kube-system Active 20h

By default any Kubernetes configuration tool creates at least two namespaces, default and kube-system. In our case we can see three because kube-public is created when you use kubeadm as the Kubernetes configuration tool.

  • default. This namespace is used when you don’t specify a different namespace using -n <namespace_name> or --namespace <namespace_name>.
  • kube-system. This is the namespace where all the Kubernetes core components are running.
  • kube-public (only with kubeadm). This namespace is readable by everyone, including those not authenticated. This namespace is used by kubeadm to host a ConfigMap object in order to enable a secure bootstrap with a simple and short shared token.
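
For example, on a fresh installation the default namespace is empty, while kube-system already hosts the core components. The commands below illustrate the difference; the pod names and counts you see in kube-system will vary with your setup.

kubectl get pods
No resources found.
 
kubectl get pods -n kube-system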

Viewing

You can see the details of an object using kubectl describe <object_type> <object_name>. Our object type is namespace and the object name we will use is kube-system. The command looks as follows:

kubectl describe namespace kube-system
Name:        kube-system
Labels:      <none>
Annotations: <none>
Status:      Active
 
No resource quota.
 
No resource limits.

As you can see, the output is pretty simple. This is because we are working with a fresh Kubernetes installation, and the most common additional settings for a namespace, such as resource quotas and limits, are not defined yet. Those objects will be covered in a future post.
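
As a small preview of those future settings, a ResourceQuota is just another namespaced object defined in YAML. The sketch below is purely illustrative (the name and values are made up) and will be covered properly in that future post:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota
  namespace: default
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi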

Creating

Using the kubectl command you can create objects in two ways.

The first way is to create the object from the CLI without using a manifest file. This approach is supported for a limited number of Kubernetes objects. Let's create a namespace called no-manifest.

kubectl create namespace no-manifest
namespace "no-manifest" created
 
kubectl get namespace
NAME        STATUS AGE
default     Active 5d
kube-public Active 5d
kube-system Active 5d
no-manifest Active 1m

The second way is to use a manifest file. This is the preferred method since it lets you track and version your infrastructure-as-code manifests in your source code repository. Let's create a namespace called manifest using a YAML file.

cat <<EOF > namespace_manifest.yaml
> apiVersion: v1
> kind: Namespace
> metadata:
>   name: manifest
> EOF
 
kubectl create -f namespace_manifest.yaml
namespace "manifest" created
 
kubectl get namespace
NAME        STATUS AGE
default     Active 5d
kube-public Active 5d
kube-system Active 5d
no-manifest Active 5m
manifest    Active 1m

As you can see, the manifest file for a namespace is simple. Other attributes, such as labels or annotations, can be added to it.
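
For example, a namespace manifest with a label and an annotation could look like the sketch below; the label and annotation values are just illustrative:

apiVersion: v1
kind: Namespace
metadata:
  name: manifest
  labels:
    environment: dev
  annotations:
    description: Namespace created from a manifest file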

Editing

You can edit Kubernetes objects on the fly with the command kubectl edit <object_type> <object_name> -n <namespace>. In our next example we don't need to use -n <namespace> since the object we are editing is a namespace itself. For objects contained within a namespace, you must specify the namespace of the object. Let's set an annotation with a description for our namespace called no-manifest.

kubectl edit namespace no-manifest
# Please edit the object below. Lines beginning with a '#'
# will be ignored, and an empty file will abort the edit.
# If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    description: This is just an example
  creationTimestamp: 2017-11-04T10:57:15Z
  name: no-manifest
  resourceVersion: "39333"
  selfLink: /api/v1/namespaces/no-manifest
  uid: eb25c489-c14e-11e7-bb73-0800277f7091
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
 
kubectl describe namespace no-manifest
Name:        no-manifest
Labels:      <none>
Annotations: description=This is just an example
Status:      Active
 
No resource quota.
 
No resource limits.
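
If you prefer not to open an editor, the same result can be achieved non-interactively with kubectl annotate (add --overwrite if you need to change an existing value):

kubectl annotate namespace no-manifest description="This is just an example"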

Deleting

Deleting a namespace removes ALL the child objects contained within it. Before you delete a namespace, make sure the objects in it are no longer required. Let's delete the namespace called manifest with the command kubectl delete namespace manifest.

kubectl delete namespace manifest
namespace "manifest" deleted

Real Kubernetes Namespaces example

For the purpose of this real-case scenario we are going to create a namespace to group the different kinds of web servers. After you have created the namespace, you will proceed with the deployment of Nginx. Finally, you will list all the objects in the created namespace.

In the first place, let's create a new namespace called webservers.

kubectl create namespace webservers
namespace "webservers" created

In addition, we are going to create a deployment so we can see later on how its objects are contained in a namespace.

kubectl create deploy dep-nginx --image=nginx -n webservers
deployment "dep-nginx" created
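
If you prefer the manifest approach described earlier, roughly the same deployment could be expressed as a YAML file. This is only a sketch: the apiVersion depends on your Kubernetes version, and the created object will contain more defaults than shown here.

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: dep-nginx
  namespace: webservers
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dep-nginx
  template:
    metadata:
      labels:
        app: dep-nginx
    spec:
      containers:
      - name: nginx
        image: nginx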

Finally, let's list the objects in our namespace using the command kubectl get all -n webservers.

kubectl get all -n webservers
NAME             DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/dep-nginx 1       1       1          1         53s
 
NAME                    DESIRED CURRENT READY AGE
rs/dep-nginx-6f5568d8dd 1       1       1     53s
 
NAME             DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/dep-nginx 1       1       1          1         53s
 
NAME                    DESIRED CURRENT READY AGE
rs/dep-nginx-6f5568d8dd 1       1       1     53s
 
NAME                          READY STATUS  RESTARTS AGE
po/dep-nginx-6f5568d8dd-49csx 1/1   Running 0        53s

In the output above you can see what a deployment creates.

  • Deployments (deploy). This controller provides declarative updates for Pods (po) and ReplicaSets (rs). The controller is responsible for ensuring the desired state of your application. You can see the deployment (and its ReplicaSet) listed twice with the same name (dep-nginx); this is a side effect of kubectl get all on this Kubernetes version, which lists the same object once per API group that serves it, and has nothing to do with the number of nodes in the cluster.
  • ReplicaSets (rs). This controller is the next-generation Replication Controller (rc). You can still find some applications using Replication Controllers instead of ReplicaSets. The objective of a ReplicaSet is to ensure that a specified number of pod replicas are running at any given time.
  • Pods (po). A Pod is the basic building block of Kubernetes: the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster. Since we didn't specify the number of replicas, a single Pod runs by default (see the scaling example after this list). If you would like to see where the Pod is running on your cluster, you can run the same command with the flag -o wide (-o stands for output): kubectl get all -n webservers -o wide.
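
As an example of scaling, the sketch below increases the deployment to three replicas and then checks the Pods; the replica count is just an illustrative value:

kubectl scale deployment dep-nginx --replicas=3 -n webservers
kubectl get pods -n webservers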

Conclusion

Kubernetes namespaces are an important part of your platform and its security. They allow you to logically segregate the cluster and assign resources to each tenant, environment, project, or team. They also create a space where object names don't conflict with objects in other Kubernetes namespaces.
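
To illustrate that last point, the same object name can live in two different namespaces without any conflict. The sketch below uses made-up names following the convention suggested earlier:

kubectl create namespace customera-dev
kubectl create namespace customera-prod
kubectl create deploy web --image=nginx -n customera-dev
kubectl create deploy web --image=nginx -n customera-prod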

Hands-on Kubernetes: Deployment

October 29, 2017 - 0 Comments

A month ago I started to get my head around Kubernetes, not to mention I'm still early in my journey with containers. Honestly, it has been hard to get a stable platform up and running. In the first place, depending on what you use underneath to spin up the platform, you may need to tweak some tools. Anyway, this is something I'll explain later on.

I'm pushing myself to write a series of posts where I'll share what I have learnt. The series will be mostly hands-on experience with a bit of theory. You can find a lot of documentation out there about how Kubernetes works under the hood, and I don't want to reinvent what is already written. Instead, I'll share some useful references with you.

Spinning up a Kubernetes platform

You have many ways to spin up a Kubernetes platform, such as using Rancher (don't miss Deploying Rancher 2.0 on Vagrant). This time I wanted to configure Kubernetes using kubeadm. The tool doesn't deploy the infrastructure for you; a set of pre-provisioned servers is required. Kubeadm is still under development and it's not ready for production yet.

Vagrant as Infrastructure Orchestrator

Many people don't have access to dedicated infrastructure to test solutions. Vagrant, together with virtualisation software on your computer, gives you that foundational infrastructure.

Vagrant stack

VirtualBox as Virtualisation Software

The reason to use VirtualBox as the Vagrant provider is that it's free to use with Vagrant. You have other alternatives, such as VMware Workstation or Fusion, but if you want to use either of them as a provider you must buy the VMware integration for Vagrant.

Virtualbox stack

Cloning the Kubernetes-Vagrant GitHub repo

I have created a Vagrantfile blueprint to deploy a cluster with the following characteristics:

  • Single master
  • One or more nodes (a.k.a. worker or minion). By default two workers are deployed to test the overlay network
  • A standalone NFS server
  • Kubeadm is the tool to configure the latest Kubernetes version
  • Canal is the CNI plug-in

Kubernetes platform

As shown above, the diagram details the foundational infrastructure with the components running on each server.

  • Blue square. It represents the virtual machine with the hostname and the suffix IP
  • Purple rectangle. It represents the NFS export for persistent storage
  • Red rectangle. It represents a Linux tool or service which natively runs on the operating system
  • Green rounded rectangle. It represents a Docker container. Kubernetes components and the CNI plug-in Canal run in containers. For more information about the Kubernetes components visit this link
  • Orange dashed rectangle. It represents the free resources on a node to run pods
  • White dotted rounded rectangle. It represents a Kubernetes pod. A pod is a group of one or more containers with shared storage and/or network

To get this setup you just clone or download my GitHub repository.

~$ git clone https://github.com/pipoe2h/kubernetes-vagrant.git
~$ cd kubernetes-vagrant
# Customise your settings in the Vagrantfile
~$ vagrant up

Once the platform is up (approx. 10 minutes for a two-node setup) you can SSH into the master node with vagrant ssh master. From the master server, run kubectl get nodes to list the nodes and check that all of them show as Ready.

kubectl get nodes

Lessons Learnt

Overall, kubeadm makes the deployment process simple. In reality, the challenges I faced were not related to kubeadm; my setup in Vagrant was wrong.

  1. Vagrant adds a NAT interface to perform the provisioning tasks. During this process Vagrant overrides the hosts file, mapping the hostname to the loopback address. Kubernetes requires the node IP for proper communication (you can check it with hostname -i). My Vagrantfile is tweaked to set the right IP in the hosts file; otherwise, you cannot use the kubectl exec command.
  2. With two network interfaces on the virtual machine, the interface to use must be defined explicitly. By default the plug-in takes the main interface. To fix that behaviour, the setting canal_iface in the Canal YAML blueprint must be set to the second network interface, enp0s8 (see the sketches after this list).
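
To give an idea of what those two fixes look like, the sketches below assume Ubuntu guests, the host-only network on enp0s8 and the stock Canal manifest with an empty canal_iface value. They are illustrative only; the actual logic lives in the Vagrantfile provisioning scripts.

# Fix 1: replace the loopback entry Vagrant writes for the hostname with the host-only IP
sed -i "/127.0.1.1/d" /etc/hosts
echo "$(ip -4 addr show enp0s8 | grep -oP '(?<=inet )[0-9.]+') $(hostname)" >> /etc/hosts
 
# Fix 2: point Canal at the host-only interface instead of the default NAT one
sed -i 's/canal_iface: ""/canal_iface: "enp0s8"/' canal.yaml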

Before you start to use your Kubernetes platform actively, make sure ping works between pods on different nodes, and that DNS resolution works too (try to resolve an external domain).
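
One quick way to run that check is with a throwaway busybox pod. Busybox is just a convenient image for this, and the pod IP placeholder below must be replaced with the IP of a pod running on a different node (kubectl get pods -o wide --all-namespaces shows pod IPs and nodes):

kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup google.com
kubectl run -it --rm pingtest --image=busybox --restart=Never -- ping -c 3 <pod_ip_on_another_node>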

In summary, this post has shown you how to deploy a Kubernetes platform. You now have a functional container platform for training and testing purposes. In fact, this is the platform I will refer to throughout the series of posts.

Deploying Rancher 2.0 on Vagrant

October 1, 2017 - 1 Comment

A week ago Rancher Labs released (announcement) a technology preview of Rancher 2.0, a container management platform built on Kubernetes. This post will walk you through the process of deploying a Rancher platform with very little effort using Vagrant and VirtualBox.

Source: rancher.com


Dynamic Enforcement for Network Selection in vRA 7

December 22, 2016 - 16 Comments

NOTE: This is available out-of-the-box in the vRA 7.4 release

I'm working on the design and deployment of a large enterprise cloud project where multi-tenancy is vital to the success of the project. When you work on a private cloud project, a common shared infrastructure is the typical approach to make the business case feasible. Usually, the physical segregation is based mainly on security requirements and no longer on who purchased the hardware. The multi-tenancy segregation is moved up to the logical level, making the infrastructure a shared commodity.

From a network standpoint, the logical segregation is supported by the use of protocols such as VLAN, PVLAN, GRE, VXLAN and so on. Every time a virtual machine is provisioned through vRealize Automation, it requires network access to communicate with other workloads.

In the case of vSphere as an endpoint in vRA, the NIC(s) of the virtual machine will be connected to one of the port groups available within the reservation where it is being provisioned. The first level of logical segregation in vRA is the business group object, which can have one or more reservations (the second level). If the user is not a member of the business group with the linked reservation(s), they cannot provision any virtual machine on it.

Source: VMware.com

The reservation defines which network(s) the user can connect their virtual machine to, but these networks are not populated dynamically in vRA. The easy approach is to create the custom property called "VirtualMachine.NetworkN.Name", where N is the index of your virtual machine NIC (starting at 0). During the creation of this custom property, you have the chance to create a static list, or a dynamic one using any script action available in the vRealize Orchestrator platform that vRA is consuming.

The main challenge of a static list is maintaining the network list that is shown to the user; this list is also shared by all users. From a security standpoint this is not an issue, since the network must be enabled in the user's reservation; if it is not, the user will get an error saying the virtual machine cannot be provisioned because the reservation is not entitled to consume the network. While this does not involve a security breach, exposing the whole network list is not the best approach, since sensitive information can be contained in the network names. For this reason, building a dynamic network list on the fly, based on the user's business group and its linked reservation, is the best approach I have found so far. When you are dealing with a multi-tenancy environment, the security requirements are always the most important piece of the design.

To achieve that, we are going to leverage a built-in vRO action available from version 7.1 called "getApplicableNetworks". We will tweak this action a bit, since out of the box it shows all the networks the user is entitled to, regardless of which business group is requesting the virtual machine. To filter the networks based on the business group, we will create a custom property in the business group with a value equal to the business group name. The reason to create this custom property in the business group is that it is not exposed during the request wizard, so vRO cannot get the value using the ASD properties. To close this gap, the vRO action will include a string input populated with the value of the business group custom property.

vRO Configuration

Let’s start with the configuration of our dynamic enforced network list:

  • Copy the built-in vRO action to a new location. With this step we ensure nothing breaks in future releases of the vRA plug-in. The copy will also be the action we modify to include the filter that retrieves only the reservation(s) of the business group the requester is entitled to.

Copy the vRO script action “getApplicableNetworks”

In vRealize Orchestrator version 7.2, VMware has changed the action code and the previous dependency in 7.1 with the action called “getReservationsForUser”, has been changed to “getReservationsForUserAndComponent”. The issue is the action “getApplicableNetworks” doesn’t work in 7.2 anymore because VMware has forgotten to update the script to include as an input some values that the dependency “getReservationsForUserAndComponent” requires.

If you are using vRO 7.1, you can skip this step.

  • Duplicate the current “getReservationsForUserAndComponent” action

Duplicate the “getReservationsForUserAndComponent” action

  • Downgrade to version 1.0. The action name will be changed to “getReservationsForUser”.

Downgrade the copied action to version 1.0 (getReservationsForUser)

  • Edit the copied action and add a string input called “subtenant”

Add a string input

  • Go to the script tab and update the 4th line with the folder and name of the action you duplicated and downgraded above.

var reservations = System.getModule("com.joseluisgomez.vra.reservations").getReservationsForUser(user, tenant, host);

  • Between the 4th line (var reservations) and the 5th line (var applicableNetworks), add the following line to find the business group matching the custom property value passed from vRA.

var subtenants = vCACCAFEEntitiesFinder.findSubtenants(host, subtenant);

Code updated (lines 4th and 5th)

  • The last step with the action is to add a conditional. It looks through all the gathered reservations for the business group name (subtenant name) matching the input populated from vRA. Add the if statement right after the "for each" loop, and indent the existing code one extra level so it stays properly aligned. In addition, an extra closing curly bracket is required to close the conditional. The full loop with the new code added looks as follows.

for each(var res in reservations) {
    if(res.getSubTenantId() == subtenants[0].id){
        var extensionData = res.getExtensionData();
        if(extensionData) {
            var networks = extensionData.get("reservationNetworks");
            if(networks) {
                for each(var network in networks.getValue()) {
                    var path = network.getValue().get("networkPath");
                    applicableNetworks.put(path.label, path.label);
                }
            }
        }
    }
}
return applicableNetworks;

Conditional to add

We are done with vRO. Now it's time to consume this action from vRA and our blueprint.

vRA Configuration

  • Create a custom property for the business group with your preferred name. In my case, I’ve used the following one:
    • Property name. NGDC.Software.VMware.vRA.Subtenant.Name
    • Property value. The business group name (e.g. Tenant_Environment_Service_Index)

Custom Property with Business Group’s name as value

  • Create a custom property definition called “VirtualMachine.Network0.Name” with the following settings:
    • Name: VirtualMachine.Network0.Name
    • Label: Select a network
    • Visibility: All tenants
    • Display order: 1
    • Data type: string
    • Required: yes
    • Display as: Dropdown
    • Values: External values
    • Script action: Click the change button and select the script action called getApplicableNetworksBySubtenant we have created in the previous steps.
    • Input parameters: Check the bind checkbox and, as the value, type the business group custom property "NGDC.Software.VMware.vRA.Subtenant.Name". Note: business group custom properties are not automatically discovered when you click the dropdown list; you must type the custom property name.

  • The last step is to add the custom property definition you created above into the blueprint. The blueprint doesn't require any network object, just the vSphere virtual machine object with a virtual NIC added to it. The virtual NIC0 must have the following custom property:
    • Name: VirtualMachine.Network0.Name
    • Value: Empty
    • Encrypted: No
    • Overridable: Yes
    • Show in Request: Yes

Blueprint custom property

Result

The best way to test our custom property definition is to create two business groups with one reservation each, make the same user the group manager of both, and create two service catalogue entitlements, one for each business group.

The next screenshot shows the first business group (Tenant1-Global). As you can see, this business group has two networks: a static dvPortGroup (Main) and an NSX VXLAN dvPortGroup (5008-Photon-Hosts).

Tenant1-Global business group dynamic enforced network list

The following screenshot shows the networks for the second business group (CORP_PROD_APP1_01). As you can see, this business group also has two networks, but different ones: two NSX VXLAN dvPortGroups (5005-NTXCE-Mgmt and 5004-NTXCE-VMs).

CORP_PROD_APP1_01 business group dynamic enforced network list

Conclusion

I hope you have found this useful and that it helps you with new and existing deployments. This is the best approach I have found so far to support multi-tenancy based on business groups. This approach also works if the multi-tenancy is implemented using the tenant functionality in vRA.

If you liked it, don't hesitate to share it with your contacts.