Introduction

When you are developing applications that will ultimately run on Kubernetes, you ideally want your development environment to be as close as possible to your staging and production environments. Kind helps you create a local Kubernetes cluster easily and in a repeatable way, and it also lets you use your local environment to test different scenarios.

There are several different tools that can be used to create local Kubernetes clusters, like minikube, k3s, Docker Desktop and many more. So why use Kind? Kind is a tool for running local Kubernetes clusters using Docker container “nodes”. This means we can run clusters with multiple nodes in a local environment.

Advantages of using kind

  • Allows for running clusters with multiple nodes.
  • Allows for running clusters with multiple control-plane nodes.
  • Lightweight, running on top of containers.
  • Supports a config file for repeatable clusters on any system.
  • Can run multiple clusters with multiple versions of Kubernetes.
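As a sketch of that last point, the Kubernetes version of each cluster is selected with a node image. The image tags below are examples only; use the tags published in the release notes of the kind version you installed:

```shell
# Each kind cluster pins its Kubernetes version through the node image.
# NOTE: the image tags below are illustrative; pick the tags listed in
# the release notes of your installed kind version.
if command -v kind >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  kind create cluster --name k8s-123 --image kindest/node:v1.23.6
  kind create cluster --name k8s-124 --image kindest/node:v1.24.0
  kind get clusters   # both clusters run side by side
else
  echo "kind and/or docker not available; skipping"
fi
```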

Installation and usage

In order to use Kind you will need, at a minimum, two things: Docker and kubectl.

Installation

On macOS and Linux via Homebrew:

  brew install kind

On Linux:

  curl -Lo ./kind "https://kind.sigs.k8s.io/dl/v0.14.0/kind-$(uname)-amd64"
  chmod +x ./kind
  sudo mv ./kind /usr/local/bin/kind

On Windows:

  curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.14.0/kind-windows-amd64
  Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe

  # OR via Chocolatey (https://chocolatey.org/packages/kind)
  choco install kind
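Whichever installation method you used, you can verify that everything needed is on your PATH. A small sketch:

```shell
# Report the version of each required tool, or flag it as missing.
# `docker version` may complain if the daemon is not running; we only
# need the client version here, so errors are silenced.
for tool in docker kubectl kind; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" version 2>/dev/null | head -n 1)"
  else
    echo "$tool: not found in PATH"
  fi
done
```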

Basic Usage

To use kind, you will need to have Docker up and running on your system. Once Docker is running, you can create a cluster with:

  kind create cluster

To delete your cluster use:

  kind delete cluster

With the two commands above you can create and delete clusters at will. There are several options you can pass to the create command to adjust the cluster to your needs, from the name to the number of worker nodes and so on. We could list those options here, but our intent is to create replicable clusters and share the cluster setup within a team; this way we can speed up onboarding and make sure all development environments are as close as possible to the production environment.
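Once a cluster is up, kind registers a kubectl context named kind-&lt;cluster-name&gt; (the default cluster name is kind), so you can talk to it straight away. A guarded sketch:

```shell
# kind registers a kubectl context called "kind-<cluster-name>";
# with the default cluster name "kind" that context is "kind-kind".
ctx=kind-kind
if command -v kubectl >/dev/null 2>&1 && kubectl config get-contexts "$ctx" >/dev/null 2>&1; then
  kubectl cluster-info --context "$ctx"   # API server and CoreDNS endpoints
  kubectl get nodes --context "$ctx"      # the single control-plane node
else
  echo "no $ctx context found; run 'kind create cluster' first"
fi
```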

Configure your cluster with a file

We want to set up a file that can be shared through a version control system like Git and be available to every developer on the project. If you are working in a team and/or working with microservices, running multi-tier applications and so on, you want to be able to create and configure a cluster in seconds.

So, in the root of your project, create a file `kind.yaml` and add the following content:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: my-cluster
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker

Basically we are creating a cluster with one control-plane node and two worker nodes, and mapping container ports to the host system, namely ports 80 and 443.
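Since the nodes are just Docker containers, you can inspect the topology and the port mappings directly with docker ps. A sketch, assuming the cluster was created from the config above:

```shell
# Each node in the config becomes one Docker container; the extra port
# mappings appear on the control-plane container only.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker ps --filter name=my-cluster --format '{{.Names}}\t{{.Ports}}'
  # expect three containers: my-cluster-control-plane (with 80/443
  # published), my-cluster-worker and my-cluster-worker2
else
  echo "docker daemon not reachable; skipping"
fi
```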

You can also use kubeadm configuration patches, for example, to add labels to your control-plane instances and/or your worker nodes. In this example we will add the label “ingress-ready” with the value “true” to the control-plane and worker nodes:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: my-cluster
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"    
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"    
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"    
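After recreating the cluster from this config (delete it first if it already exists), you can check that the labels were applied. In kubectl, -L adds a label as an output column and -l filters by it; the sketch below assumes the cluster keeps the name my-cluster from the config:

```shell
# Verify the node labels after recreating the cluster with this config.
ctx=kind-my-cluster
if command -v kubectl >/dev/null 2>&1 && kubectl config get-contexts "$ctx" >/dev/null 2>&1; then
  kubectl get nodes -L ingress-ready --context "$ctx"       # show label column
  kubectl get nodes -l ingress-ready=true --context "$ctx"  # filter by label
else
  echo "no $ctx context; create the cluster first"
fi
```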

You can either specify a name in the file or omit it and name the cluster when running the create command.

Using the name from the file:

  kind create cluster --config=kind.yaml

Or name your cluster when creating it:

  kind create cluster --name my-cluster --config=kind.yaml
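Either way, a named cluster is managed with the same commands plus --name. A short sketch:

```shell
# List every cluster kind knows about, then tear the named one down.
if command -v kind >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  kind get clusters                       # should include my-cluster
  kind delete cluster --name my-cluster   # removes it when you are done
else
  echo "kind and/or docker not available; skipping"
fi
```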