Docker and Kubernetes [CKA/ CKS/ CKAD] Q/A (Docker Secret, Docker Config, Docker EE): Day 7 Live Session Review


This blog post covers a brief overview of the topics covered and some common questions asked during the Day 7 Live Interactive training on the Docker and Kubernetes certifications, i.e. CKA, CKAD, and CKS.

This will help you learn Docker & Kubernetes, prepare you for these certifications, and help you get a better-paid job in the field of Microservices, Containers, and Kubernetes.

In the Day 6 CKA Live session we covered an overview of Docker Compose, Docker Swarm, and Docker services. This week, on Day 7, we covered Docker Secrets, Docker Config, Docker placement constraints, and Docker EE. We also performed labs.

Docker Secrets

Docker secrets are designed to store sensitive information such as usernames, passwords, SSL certificates, and other secure files. Docker Secrets were created for, and are widely used in, Docker Swarm, and were later extended to Docker Compose from file format version 3. Just imagine: we never want to store a configuration file with all of our passwords in a GitHub or any other repository, whether public or private. In this guide we will walk you through various aspects of setting up and using Docker secrets.

Before we head into the steps, here is a brief introduction to how secrets are used with Docker Swarm services. First, we create and add a secret to the swarm, and then we give our services access to the secrets they require. When the service is created (or updated), the secret is mounted into the container under the /run/secrets directory. Now your application has access to the secrets it requires.

To read more about Docker Swarm, click here

Q&A’s asked in the session are:

Q) How to Create Secrets
Ans:
Create a secret from stdin: Assuming your swarm is already running (or, for a test, run "docker swarm init" to initialize a swarm on the node), you can use the docker secret create command to add a new secret to the swarm. Here is a basic example:

echo "mypassword" | docker secret create mypass -

Create a secret from a file: Let's say you have a file containing a password, for example db_pass.txt. You can create a new secret from that file:

docker secret create my_db_pass db_pass.txt

where my_db_pass is the name of your secret.

Now let’s use the “docker secret ls” command to confirm that our secret was added:

docker secret ls

should output something like this:

ID                          NAME      CREATED          UPDATED
rkxav7s9rvnc9d7ct6dhkrsyn   mypass    3 minutes ago    3 minutes ago

Q) How to inspect a Docker Secret
Ans: You can use the inspect command on a Docker secret too, just as with other Docker objects:

docker secret inspect secret_name

In our case the secret name is "mypass".

Q) How to remove Docker Secret:
Ans: You can remove a Docker secret using the following command:

docker secret rm secret_name

Q) How to add a secret to a service
Ans: Now that you've added the secret to the swarm, you need to give the service access to it. This can be done when the Docker service is created or updated. Here's an example of adding a secret when the service is created:

docker service create --secret mypass --name secret alpine ping foxutech.com

In this example, I'm adding the mypass secret we created in the previous step to a service running the alpine image. If you've already got a service running and want to add (or change) a secret, you can use the --secret-add option:

docker service update --secret-add mypass existing_service_name
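To confirm a secret is actually available to the service, you can read it from inside a running task. A quick sketch, assuming the service from the example above (the container ID is a placeholder you would look up yourself):

```
# List containers for the service to find a container ID
docker ps --filter name=secret --format '{{.ID}}'

# Read the secret mounted under /run/secrets (replace <container_id>)
docker exec <container_id> cat /run/secrets/mypass
```

If the secret was attached correctly, the second command prints the secret's value.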

Q) How to Use a Secret in docker-compose.yml:
Ans:
Let's say we initialize two secrets for a database. Here we have two secrets, psql_user and psql_password, which are created from files with the same names. This technique does not require the initial setup with docker secret create. Please remember that storing the "secret" files as plain-text files on your production machine is not secure; you'll have to find a way to protect those files.

version: '3'

secrets:
  psql_user:
    file: ./psql_user.txt
  psql_password:
    file: ./psql_password.txt

First, you need the top-level declaration of all secrets.

version: '3'

secrets:
  db_user:
    external: true
  db_password:
    external: true

Here we use the external keyword to show that we created the secrets before using the docker-compose.yml file. You can use external secrets when Docker is in swarm mode (docker swarm init). For a local setup, you might want to use the file variant instead:

version: '3'

secrets:
  db_user:
    file: ./my_db_user.txt
  db_password:
    file: ./my_db_pass.txt

Now you have to tell each service which secrets it is allowed to use.

version: '3'

secrets:
  db_user:
    external: true
  db_password:
    external: true

services:
  postgres_db:
    image: postgres
    secrets:
      - db_user
      - db_password

The postgres_db service can now access the db_user and db_password secrets.
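To make the service actually consume the secrets, the official postgres image supports _FILE variants of its environment variables, which can be pointed at the files Swarm mounts under /run/secrets. A hedged sketch combining both pieces:

```yaml
version: '3'

secrets:
  db_user:
    external: true
  db_password:
    external: true

services:
  postgres_db:
    image: postgres
    environment:
      # Read credentials from the mounted secret files instead of plain env vars
      POSTGRES_USER_FILE: /run/secrets/db_user
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_user
      - db_password
```

This keeps the actual values out of the compose file and out of the container's environment listing.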

To read more about Docker compose, click here

Docker Config

Docker swarm service configs allow you to store non-sensitive information, such as configuration files, outside a service’s image or running containers. This allows you to keep your images as generic as possible, without the need to bind-mount configuration files into the containers or use environment variables.
Configs operate in a similar way to secrets, except that they are not encrypted at rest and are mounted directly into the container's filesystem without the use of RAM disks. Configs can be added to or removed from a service at any time, and services can share a config. You can even use configs in conjunction with environment variables or labels, for maximum flexibility. Config values can be generic strings or binary content (up to 500 KB in size).

Q&A’s asked in the session are:

Q) How to embed configuration in an Image
Ans:
We often see Dockerfiles like the following one, where a new image is created only to add a configuration file to a base image.

$ cat Dockerfile
FROM nginx:1.13.6
COPY nginx.conf /etc/nginx/nginx.conf

In this example, the local nginx.conf configuration file is copied into the NGINX image's filesystem to overwrite the default configuration file, the one shipped at /etc/nginx/nginx.conf. One of the main drawbacks of this approach is that the image needs to be rebuilt every time the configuration changes.

To read more about Dockerfile, click here

Q) Explain Docker config by an example
Ans:
Add a config to Docker:
The docker config create command reads from standard input because the last argument, which represents the file to read the config from, is set to - (a dash):

$ echo "This is a config" | docker config create my-config -

Create a redis service and grant it access to the config. By default, the container can access the config at /my-config, but you can customize the file name inside the container using the target option.

$ docker service create --name redis --config my-config redis:alpine

Verify that the task is running without issues using docker service ps.
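As a sketch of the target option mentioned above, the long-form --config syntax lets you choose the mount path (the target path here is an arbitrary example):

```
# Mount the config at a custom path inside the container
docker service create --name redis \
  --config source=my-config,target=/etc/my-config.txt \
  redis:alpine

# Check that the task reached the Running state
docker service ps redis
```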

Docker Placement Constraints

A placement constraint limits where a task can run: a task will not run on a node unless the node satisfies the constraint, and will otherwise remain in the Pending state. Swarm services provide a few different ways for you to control the scale and placement of services on different nodes. Placement constraints let you configure the service to run only on nodes with specific (arbitrary) metadata set, and cause the deployment to fail if appropriate nodes do not exist.

Q&A’s asked in the session are:

Q) What is the difference between Docker placement preferences and Docker placement constraints?
Ans:
While placement constraints limit the nodes a service can run on, placement preferences try to place tasks on appropriate nodes in an algorithmic way (currently, only spreading them evenly). Placement preferences help you distribute tasks across nodes based on a node attribute.

e.g. a placement constraint of node.region=east will let a task run only on nodes labelled "east", whereas a placement preference of node.region=east will spread the tasks evenly across nodes based on the value of node.region. A node that does not have this label will still receive tasks.
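A sketch of the CLI syntax for both sides of this comparison (the region label and the service names are assumptions for illustration):

```
# Constraint: run ONLY on nodes whose region label is east
docker service create --name app-east \
  --constraint node.labels.region==east nginx

# Preference: spread tasks evenly across the values of the region label
docker service create --name app-spread \
  --placement-pref 'spread=node.labels.region' nginx
```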

Q) How to use constraints with Swarm mode
Ans: I will explain how to use constraints to limit the set of nodes where a task can be scheduled. By design, all the nodes in a Swarm cluster can accept containers. But sometimes you need to target a subset of nodes. For example, maybe your nodes do not all have the same hardware and some are more powerful. That's where constraints come in! They let you specify on which nodes your service can be scheduled. Constraints are based on labels.

Suppose a cluster has 3 managers and 2 workers. By default, a new service can be scheduled on any of these 5 nodes.

1. Docker's default constraints: By default, nodes already carry built-in labels. You can use these labels to restrict where your service is scheduled.
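For example, built-in attributes such as node.role, node.hostname, and node.id can be used directly in a constraint (the service name and image here are placeholders):

```
# Schedule the service only on manager nodes using a built-in attribute
docker service create --name web \
  --constraint node.role==manager nginx
```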

If you specify multiple constraints, Docker will find nodes that satisfy every expression (it's an AND match).

With a manager-role constraint, for instance, the new service would be scheduled on docker00 or docker02 (both are managers).

2. Add your own labels

With the default labels you can refine scheduling, but if you want to be more specific, add your own labels. Recently, in my cluster, I upgraded docker00 and docker01 to the latest Raspberry Pi 3B+ (the others are Raspberry Pi 3B). So I have 2 nodes that are more powerful (CPU and network) than the others. It can be useful to schedule containers that need more CPU or network on these nodes.

For this, we need to:

Add a custom label to your nodes (only managers can add labels):
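A sketch of adding the label, run from a manager node (docker00 and docker01 are the node names from the example above):

```
# Add the custom label to the two more powerful nodes
docker node update --label-add powerful=true docker00
docker node update --label-add powerful=true docker01
```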

We added the label powerful=true to the 2 nodes. You can see a node's labels with this command:
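For example, inspecting one of the labelled nodes (docker00 is the node name from the example):

```
# Print only the labels from the node's spec
docker node inspect --format '{{ .Spec.Labels }}' docker00
```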

Start the service with the new constraint:
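A sketch, using the label added above (the service name and image are placeholders):

```
# Only nodes labelled powerful=true are eligible for this service
docker service create --name cpu-heavy \
  --constraint node.labels.powerful==true nginx
```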

Please note that the syntax for your own labels is: node.labels.YOUR_LABEL_NAME

3. Delete your own labels, just in case you need to:
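Removal takes only the label key, not the value (docker00 is the node name from the example):

```
# Remove the custom label by key
docker node update --label-rm powerful docker00
```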

Docker EE

Docker EE provides the market a single solution to manage container workflow (containers in application development) and container lifecycle (containers in operations).

Q&A’s asked in the session are:

Q) What are the components of Docker Enterprise?
Ans:
Docker Enterprise has three major components, which together enable a full software supply chain, from image creation, to secure image storage, to secure image deployment.

  • Docker Engine – Enterprise: The commercially supported Docker engine for creating images and running them in Docker containers.
  • Docker Trusted Registry (DTR): The production-grade image storage solution from Docker.

DTR is designed to scale horizontally as your usage increases. You can add more replicas to make DTR scale to your demand and for high availability. All DTR replicas run the same set of services, and changes to their configuration are propagated automatically to other replicas.

  • Universal Control Plane (UCP): Deploys applications from images, by managing orchestrators, like Kubernetes and Swarm. UCP is designed for high availability (HA). You can join multiple UCP manager nodes to the cluster, and if one manager node fails, another takes its place automatically without impact to the cluster.

Overview of Universal Control Plane

  • Docker UCP is a containerized application that runs on Docker Engine – Enterprise and extends its functionality to make it easier to deploy, configure, and monitor our applications at scale.
  • Docker Universal Control Plane (UCP) is the enterprise-grade cluster management solution from Docker. We can install it on-premises or in a virtual private cloud, and it helps us manage our Docker cluster and applications through a single interface.

Mirantis Kubernetes Engine Use Cases

The Mirantis Kubernetes Engine container platform (formerly Docker Enterprise/UCP) delivers immediate value to your business by reducing the infrastructure and maintenance costs of supporting your existing application portfolio while accelerating your time-to-market for new solutions.
For enterprises that need to manage consistent Kubernetes at scale, Mirantis Container Cloud is the best tool for deploying Mirantis Kubernetes Engine. But for individual Kubernetes Engine clusters, Mirantis Launchpad is a faster solution. Mirantis Kubernetes Engine itself can run acceptably well on medium-sized virtual machines in a home lab (e.g., hosted on VirtualBox), and runs very well on larger VMs on any private or public cloud. To know more about MKE, click here.

Q&A’s asked in the session are:

Q) What’s the use case of Mirantis Kubernetes Engine?
Ans: Mirantis Kubernetes Engine can run almost anywhere: on virtual machines, bare metal, or any public cloud. Worker nodes can run on a range of Linux operating systems, or on Windows Server. Below are common use cases:

  • Run securely
  • Run Windows-native container workloads
  • Specialized hardware support
  • Ready for work – batteries included.
  • Consistent and Centrally-Manageable

Using Mirantis Container Cloud, consistent Mirantis Kubernetes Engine clusters can be configured, deployed, observed, and lifecycle-managed across your hybrid or multi-cloud. Centralized provisioning, zero-downtime updates, built-in observability, and a single point of integration for self-service and operations automation streamline Ops. Consistent clusters everywhere simplify CI/CD and help you ship code faster.

