Kubernetes DaemonSet: A Beginner’s Guide

In the dynamic world of containerized applications and microservices, orchestrating and managing deployments at scale can be a daunting task. Kubernetes, the leading container orchestration platform, offers a plethora of tools and resources to streamline this process. One such essential resource is the Kubernetes DaemonSet.

In this blog, we will take an in-depth look at what DaemonSets are, how they work, and why they are crucial for maintaining uniformity across your Kubernetes cluster.


What is a DaemonSet?

At its core, a DaemonSet is a specialized Kubernetes resource that ensures a copy of a designated pod runs on every node in the cluster. Unlike controllers such as Deployments or ReplicaSets, which maintain a specific number of pod replicas spread across the cluster, a DaemonSet's goal is to have exactly one instance of a specific pod scheduled on every eligible node.

Common Use Cases of DaemonSet

Monitoring and Logging Agents

You might want to deploy monitoring and logging agents, such as Prometheus exporters, Fluentd, or Filebeat, on every node to collect metrics, logs, and other telemetry data from each node.

👉 Learn more about Prometheus Monitoring
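
As a quick sketch of this use case, the fragment below outlines a minimal node-exporter-style monitoring DaemonSet. The names, labels, and image tag are illustrative placeholders rather than a production-ready manifest:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter            # illustrative name
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true          # expose node-level metrics on the node's own IP
      containers:
      - name: node-exporter
        image: quay.io/prometheus/node-exporter:v1.7.0   # example tag; pin the version you actually need
        ports:
        - containerPort: 9100    # node-exporter's default metrics port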

Security and Compliance

Security-related tools like intrusion detection systems, anti-virus agents, or network security utilities can be deployed using DaemonSets to ensure consistent protection across all nodes.

Resource Management

DaemonSets can be employed to manage resources at the node level, such as a cache layer for improving application performance or resource cleanup utilities.

Network Infrastructure

Network-related services, like load balancers, network proxies, or VPN clients, can be set up on every node using DaemonSets to provide consistent network connectivity and routing.

The Kubernetes Container Network Interface (CNI) fits within the "Network Infrastructure" category because it is responsible for establishing the networking environment in which your Kubernetes pods and services operate, and it is a key component of the overall cluster networking setup. In practice, CNI plugin agents are commonly deployed as DaemonSets so that every node runs the networking components it needs.

Kubernetes supports various CNI plugins that offer different networking solutions, such as Calico, Flannel, Weave, and more. These plugins handle tasks like IP address assignment, network isolation, routing, and network policies.

👉 Learn more about K8s Networking

Distributed Storage

If your application requires distributed storage systems like GlusterFS or Ceph, DaemonSets can be used to deploy the required storage components on each node to form the storage cluster.

Node-Level Operations

Certain maintenance tasks, like node monitoring agents, kernel modules, or node-level tunings, can be set up using DaemonSets to ensure that each node adheres to specific operational requirements.

Device Drivers

If your application requires specific device drivers, like GPU drivers for machine learning workloads, you can use DaemonSets to ensure those drivers are available on every node.

Custom Services

For specialized services that need to run alongside your main application, like a sidecar that performs specific functions or interacts with external systems, DaemonSets can help deploy these services uniformly across all nodes.

Middleware and Infrastructure Components

DaemonSets can be used to deploy middleware components like service meshes, service discovery agents, and distributed tracing components, ensuring uniformity across the cluster.

Node Labeling and Configuration

DaemonSets can also be employed to apply specific node labels, annotations, or configurations across the cluster, helping organize and customize nodes based on their roles or capabilities.

Overall, DaemonSets are a versatile tool in Kubernetes for ensuring that specific pods are running on each node, enabling consistent deployment of services and utilities across the cluster. They are particularly useful when you need to maintain uniformity and consistency in various aspects of your cluster’s infrastructure and services.

👉 Learn How to create a Multinode K8s Cluster

How DaemonSets Work

When a DaemonSet is instantiated, Kubernetes orchestrates the creation of an associated pod on every node within the cluster. This mechanism operates dynamically, implying that if a new node is added to the cluster, Kubernetes will automatically schedule the corresponding pod on the newly added node. Conversely, if a node is removed from the cluster, the DaemonSet controller ensures the associated pod is gracefully terminated.
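
You can watch this behavior from the command line. The DESIRED and CURRENT columns of a DaemonSet should track the number of eligible nodes as nodes join or leave the cluster:

$ kubectl get daemonsets --all-namespaces
$ kubectl get pods -o wide        # shows which node each DaemonSet pod landed on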

DaemonSet Pods vs Regular Pods

While DaemonSet pods and regular pods managed by Deployments or ReplicaSets share fundamental characteristics, they differ in some key aspects. Unlike regular pods controlled by replica counts, DaemonSet pods leverage node selectors or affinity rules to determine the nodes on which they should be scheduled. This distinction ensures the fulfillment of the DaemonSet’s unique objective: one pod per node.

👉 Learn more about K8s Pods
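
For example, to restrict a DaemonSet to a subset of nodes, you can add a nodeSelector to its pod template. The monitoring=true label below is a hypothetical label that you would first apply to the target nodes (kubectl label nodes <node-name> monitoring=true):

spec:
  template:
    spec:
      nodeSelector:
        monitoring: "true"       # hypothetical node label; only labeled nodes get a pod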

How DaemonSet Pods are Scheduled

A DaemonSet ensures that all eligible nodes run a copy of a Pod. The DaemonSet controller creates a Pod for each eligible node and adds the spec.affinity.nodeAffinity field of the Pod to match the target host. After the Pod is created, the default scheduler typically takes over and then binds the Pod to the target host by setting the .spec.nodeName field. If the new Pod cannot fit on the node, the default scheduler may preempt (evict) some of the existing Pods based on the priority of the new Pod.
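
Conceptually, the node affinity injected by the DaemonSet controller looks roughly like the fragment below, where target-node-name stands in for the name of the node each Pod is intended for:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchFields:
        - key: metadata.name
          operator: In
          values:
          - target-node-name     # replaced with the actual node name for each Pod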

You can add your own tolerations to the Pods of a DaemonSet as well, by defining these in the Pod template of the DaemonSet.
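
For instance, if some of your nodes carried a hypothetical dedicated=logging:NoSchedule taint, a matching toleration in the DaemonSet's pod template would let its pods run there:

spec:
  template:
    spec:
      tolerations:
      - key: "dedicated"         # hypothetical taint key, used only for illustration
        operator: "Equal"
        value: "logging"
        effect: "NoSchedule"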

Because the DaemonSet controller automatically adds the node.kubernetes.io/unschedulable:NoSchedule toleration, Kubernetes can run DaemonSet Pods even on nodes that are cordoned (marked unschedulable). Running DaemonSet Pods on control-plane (master) nodes additionally requires tolerating the control-plane taints, which is exactly what the example manifest later in this post does.

Note: If you add a new node to your cluster, the DaemonSet will automatically deploy a pod on it as well.

Scaling and Updating DaemonSet

Scaling a DaemonSet pertains to the process of adding or removing nodes from the cluster. As new nodes are introduced, the DaemonSet mechanism ensures that the corresponding pods are scheduled onto these nodes. Conversely, when nodes are removed, the DaemonSet controller triggers the termination of the associated pods.

Updating a DaemonSet necessitates a thoughtful approach. Typically, updating involves the creation of a new version of the pod template. Kubernetes orchestrates the seamless rollout of these new pods while diligently ensuring that each node ultimately accommodates the updated version.
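
With the RollingUpdate strategy used in the example below, an update can be triggered and observed with the standard rollout commands (shown here against the fluentd-elasticsearch DaemonSet from the example; substitute your own image tag):

$ kubectl set image daemonset/fluentd-elasticsearch fluentd-elasticsearch=<new-image>
$ kubectl rollout status daemonset/fluentd-elasticsearch
$ kubectl rollout history daemonset/fluentd-elasticsearch
$ kubectl rollout undo daemonset/fluentd-elasticsearch      # roll back if the new version misbehaves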

DaemonSet Example

First, we need to create a Manifest file that will contain all of the necessary configuration information for our DaemonSet.

You can describe a DaemonSet in a YAML file. For example, the daemonset.yaml file below describes a DaemonSet that runs the fluentd-elasticsearch Docker image.

1. Below is a sample daemonset.yaml file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: default
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Let’s understand the manifest file.

  1. apiVersion: apps/v1: Specifies the API version of the Kubernetes resource. In this case, it’s using the apps/v1 version, which is commonly used for managing applications.
  2. kind: DaemonSet: Specifies the type of Kubernetes resource being defined, which is a DaemonSet. A DaemonSet ensures that a copy of a specific pod is running on all (or a specific subset of) nodes in the cluster.
  3. metadata: Contains metadata about the DaemonSet, including its name, namespace, and labels.
    • name: fluentd-elasticsearch: Sets the name of the DaemonSet.
    • namespace: default: Specifies the namespace where the DaemonSet will be created.
    • labels: Defines labels that can be used to identify and categorize the DaemonSet.
  4. spec: Specifies the desired state of the DaemonSet.
    • selector: Defines how the DaemonSet identifies the pods it manages; the matchLabels here must match the labels set in the pod template.
    • updateStrategy: Specifies how updates to the DaemonSet are handled. In this case, it uses a rolling update strategy in which at most one pod is unavailable at a time (maxUnavailable: 1).
    • template: Describes the template for the pods managed by the DaemonSet.
  5. Inside the template section, you find specifications for the pods that the DaemonSet will create.
    • tolerations: Allow the pods to be scheduled on nodes that carry matching taints.
      • Two tolerations are defined so that the pods can also run on control-plane (master) nodes, which are normally tainted against regular workloads.
    • containers: Defines the containers that will be deployed within each pod.
      • name: fluentd-elasticsearch: Name of the container.
      • image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2: Specifies the Docker image to use for the container.
      • resources: Specifies resource requests and limits for the container (CPU and memory).
      • volumeMounts: Describes how volumes should be mounted inside the container.
        • Two host directories are mounted: /var/log and /var/lib/docker/containers, the latter as read-only.
    • terminationGracePeriodSeconds: Defines how long Kubernetes waits for a pod to shut down gracefully before killing it, for example during updates or when the DaemonSet is deleted.
  6. volumes: Specifies the volumes to be used in the pods.
    • name: varlog: Name of the volume.
      • hostPath: Specifies that the volume is sourced from the host’s file system (specifically, /var/log).
    • name: varlibdockercontainers: Another volume.
      • Also sourced from the host’s file system (specifically, /var/lib/docker/containers).

2. Create the DaemonSet based on the YAML file:

$ kubectl apply -f daemonset.yaml


3. Check the pod status on all nodes including the master.

$ kubectl get pods -o wide

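
You can also inspect the DaemonSet object itself to confirm that the desired and current pod counts match the number of eligible nodes:

$ kubectl get daemonset fluentd-elasticsearch
$ kubectl describe daemonset fluentd-elasticsearch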

Conclusion

Kubernetes DaemonSets are a powerful tool for maintaining consistency and uniformity across your cluster. They automate the deployment of specific pods to every node, making them indispensable for tasks like logging, monitoring, security, and resource management. By understanding how DaemonSets work and employing them in your infrastructure, you can ensure that essential components are consistently available on every node, contributing to the stability and reliability of your Kubernetes ecosystem.
