Developer Blog

Tips and tricks for developers and IT enthusiasts

Docker

Docker | Create an extensible build environment

TL;DR

The complete code for the post is here.

General Information

Building a Docker image mainly means creating a Dockerfile and specifying all the required components to install and configure.

A Dockerfile therefore typically contains many operating system commands for installing and configuring software and packages.

Keeping all commands in one file (Dockerfile) can lead to a confusing structure.

In this post I describe and explain an extensible structure for a Docker project, so that you can reuse individual components in other projects.

General Structure

The basic idea behind the structure is: split all installation instructions into separate installation scripts and call them individually in the Dockerfile.

In the end the structure will look something like this (with some additional files to be added and described later):

RUN bash /tmp/${SCRIPT_UBUNTU}
RUN bash /tmp/${SCRIPT_ADD_USER}
RUN bash /tmp/${SCRIPT_NODEJS}
RUN bash /tmp/${SCRIPT_JULIA}
RUN bash /tmp/${SCRIPT_ANACONDA}
RUN cat /tmp/${SCRIPT_ANACONDA_USER} | su user
RUN bash /tmp/${SCRIPT_CLEANUP}

For example, preparing the Ubuntu image by installing basic commands is moved into the script 01_ubuntu.sh.

Hint: There are hardly any restrictions on the choice of names (variables and files/scripts). For the scripts, I use numbering to express order.

The script contains this code:

apt-get update                                       
apt-get install --yes apt-utils
apt-get install --yes build-essential lsb-release curl sudo vim python3-pip

echo "root:root" | chpasswd

Since we will be working with several scripts, an exchange of information is necessary. For example, one script installs a package and the other needs the installation folder.

We will therefore store information needed by multiple scripts in a separate file: the environment file environment

#--------------------------------------------------------------------------------------
USR_NAME=user
USR_HOME=/home/user

GRP_NAME=work

#--------------------------------------------------------------------------------------
ANACONDA=anaconda3
ANACONDA_HOME=/opt/$ANACONDA
ANACONDA_INSTALLER=/tmp/installer_${ANACONDA}.sh

JULIA=julia
JULIA_HOME=/opt/${JULIA}-1.7.2
JULIA_INSTALLER=/tmp/installer_${JULIA}.tgz

And each installation script must start with a preamble to use this environment file:

#--------------------------------------------------------------------------------------
# get environment variables
. environment

Preparing Dockerfile and Build Environment

When building an image, Docker needs all files and scripts available inside the image. Since we created the installation scripts outside of the image, we need to copy them into the image (and run them there). This also applies to the environment file environment.

File copying is done by the Docker ADD instruction.

First we need our environment file in the image, so let’s copy this:

#======================================================================================
FROM ubuntu:latest as builder

#--------------------------------------------------------------------------------------
ADD environment environment

Each block to install one required software component looks like this. To be flexible, we use variables for the script names.

ARG SCRIPT_UBUNTU=01_ubuntu.sh
ADD ${SCRIPT_UBUNTU}    /tmp/${SCRIPT_UBUNTU}
RUN bash /tmp/${SCRIPT_UBUNTU}

Note: We can’t run the script directly because the execute bit may not be set. So we use bash to run the text file as a script.
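Alternatively, you could set the execute bit on the host before building; ADD preserves the file mode, so the script can then be run directly. A sketch (not part of this setup):

# on the host, once per script:
chmod +x 01_ubuntu.sh

# in the Dockerfile, the script could then be called directly:
# RUN /tmp/${SCRIPT_UBUNTU}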

As an add-on, we will be using Docker’s multi-stage builds. So here is the final code:

#--------------------------------------------------------------------------------------
# UBUNTU CORE
#--------------------------------------------------------------------------------------
FROM builder as ubuntu_core
ARG SCRIPT_UBUNTU=01_ubuntu.sh
ADD ${SCRIPT_UBUNTU}    /tmp/${SCRIPT_UBUNTU}
RUN bash                /tmp/${SCRIPT_UBUNTU}

Final results

Dockerfile

Here is the final Dockerfile:

#==========================================================================================
FROM ubuntu:latest as builder

#------------------------------------------------------------------------------------------
# 
#------------------------------------------------------------------------------------------
ADD environment environment
#------------------------------------------------------------------------------------------
# UBUNTU CORE
#------------------------------------------------------------------------------------------
FROM builder as ubuntu_core
ARG SCRIPT_UBUNTU=01_ubuntu.sh
ADD ${SCRIPT_UBUNTU}    /tmp/${SCRIPT_UBUNTU}
RUN bash                /tmp/${SCRIPT_UBUNTU}

#------------------------------------------------------------------------------------------
# LOCAL USER
#------------------------------------------------------------------------------------------
ARG SCRIPT_ADD_USER=02_add_user.sh
ADD ${SCRIPT_ADD_USER}  /tmp/${SCRIPT_ADD_USER}
RUN bash /tmp/${SCRIPT_ADD_USER}

#------------------------------------------------------------------------------------------
# NODEJS
#-----------------------------------------------------------------------------------------
FROM ubuntu_core as with_nodejs

ARG SCRIPT_NODEJS=10_nodejs.sh
ADD ${SCRIPT_NODEJS}    /tmp/${SCRIPT_NODEJS}
RUN bash                /tmp/${SCRIPT_NODEJS}

#--------------------------------------------------------------------------------------------------
# JULIA
#--------------------------------------------------------------------------------------------------
FROM with_nodejs as with_julia

ARG SCRIPT_JULIA=11_julia.sh
ADD ${SCRIPT_JULIA} /tmp/${SCRIPT_JULIA}
RUN bash /tmp/${SCRIPT_JULIA}

#---------------------------------------------------------------------------------------------
# ANACONDA3  with Julia Extensions
#---------------------------------------------------------------------------------------------
FROM with_julia as with_anaconda

ARG SCRIPT_ANACONDA=21_anaconda3.sh
ADD ${SCRIPT_ANACONDA} /tmp/${SCRIPT_ANACONDA}
RUN bash /tmp/${SCRIPT_ANACONDA}

#---------------------------------------------------------------------------------------------
#
#---------------------------------------------------------------------------------------------
FROM with_anaconda as with_anaconda_user
ARG SCRIPT_ANACONDA_USER=22_anaconda3_as_user.sh
ADD ${SCRIPT_ANACONDA_USER} /tmp/${SCRIPT_ANACONDA_USER}
#RUN cat /tmp/${SCRIPT_ANACONDA_USER} | su user

#---------------------------------------------------------------------------------------------
#
#---------------------------------------------------------------------------------------------
FROM with_anaconda_user as with_cleanup

ARG SCRIPT_CLEANUP=99_cleanup.sh
ADD ${SCRIPT_CLEANUP} /tmp/${SCRIPT_CLEANUP}
RUN bash /tmp/${SCRIPT_CLEANUP}

#=============================================================================================
USER    user
WORKDIR /home/user

#
CMD ["bash"]

Makefile

HERE := ${CURDIR}

CONTAINER := playground_docker

default:
	cat Makefile

build:
	docker build -t ${CONTAINER} .

clean:
	docker_rmi_all
	
run:
	docker run  -it --rm  -p 127.0.0.1:8888:8888 -v ${HERE}:/src:rw -v ${HERE}/notebooks:/notebooks:rw --name ${CONTAINER} ${CONTAINER}

notebook:
	docker run  -it --rm  -p 127.0.0.1:8888:8888 -v ${HERE}:/src:rw -v ${HERE}/notebooks:/notebooks:rw --name ${CONTAINER} ${CONTAINER} bash .local/bin/run_jupyter

Installation scripts

01_ubuntu.sh

#--------------------------------------------------------------------------------------------------
# get environment variables
. environment

#--------------------------------------------------------------------------------------------------
export DEBIAN_FRONTEND=noninteractive
export TZ='Europe/Berlin'

echo $TZ > /etc/timezone 

apt-get update                                       
apt-get install --yes apt-utils

#
apt-get -y install tzdata

rm /etc/localtime
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
dpkg-reconfigure -f noninteractive tzdata

#
apt-get install --yes build-essential lsb-release curl sudo vim python3-pip

#
echo "root:password" | chpasswd

Kubernetes | Cheat Sheet

Readings

Kubectl autocomplete

BASH

source <(kubectl completion bash) # setup autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc

You can also use a shorthand alias for kubectl that also works with completion:

alias k=kubectl
complete -F __start_kubectl k

ZSH

source <(kubectl completion zsh)  # setup autocomplete in zsh into the current shell
echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc # add autocomplete permanently to your zsh shell

Kubectl context and configuration

Set which Kubernetes cluster kubectl communicates with and modify configuration information. See the Authenticating Across Clusters with kubeconfig documentation for detailed config file information.

kubectl config view # Show Merged kubeconfig settings.
# use multiple kubeconfig files at the same time and view merged config
KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 
kubectl config view
# get the password for the e2e user
kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
kubectl config view -o jsonpath='{.users[].name}'    # display the first user
kubectl config view -o jsonpath='{.users[*].name}'   # get a list of users
kubectl config get-contexts                          # display list of contexts 
kubectl config current-context                       # display the current-context
kubectl config use-context my-cluster-name           # set the default context to my-cluster-name
# add a new user to your kubeconf that supports basic auth
kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword
# permanently save the namespace for all subsequent kubectl commands in that context.
kubectl config set-context --current --namespace=ggckad-s2
# set a context utilizing a specific username and namespace.
kubectl config set-context gce --user=cluster-admin --namespace=foo \
  && kubectl config use-context gce
kubectl config unset users.foo                       # delete user foo

Kubectl apply

apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. This is the recommended way of managing Kubernetes applications on production. See Kubectl Book.

Creating objects

Kubernetes manifests can be defined in YAML or JSON. The file extensions .yaml, .yml, and .json can be used.

kubectl apply -f ./my-manifest.yaml            # create resource(s)
kubectl apply -f ./my1.yaml -f ./my2.yaml      # create from multiple files
kubectl apply -f ./dir                         # create resource(s) in all manifest files in dir
kubectl apply -f https://git.io/vPieo          # create resource(s) from url
kubectl create deployment nginx --image=nginx  # start a single instance of nginx
# create a Job which prints "Hello World"
kubectl create job hello --image=busybox -- echo "Hello World" 
# create a CronJob that prints "Hello World" every minute
kubectl create cronjob hello --image=busybox   --schedule="*/1 * * * *" -- echo "Hello World"    
kubectl explain pods                           # get the documentation for pod manifests
# Create multiple YAML objects from stdin
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep-less
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000"
EOF
# Create a secret with several keys
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: $(echo -n "s33msi4" | base64 -w0)
  username: $(echo -n "jane" | base64 -w0)
EOF

Viewing, finding resources

# Get commands with basic output
kubectl get services                          # List all services in the namespace
kubectl get pods --all-namespaces             # List all pods in all namespaces
kubectl get pods -o wide                      # List all pods in the current namespace, with more details
kubectl get deployment my-dep                 # List a particular deployment
kubectl get pods                              # List all pods in the namespace
kubectl get pod my-pod -o yaml                # Get a pod's YAML
# Describe commands with verbose output
kubectl describe nodes my-node
kubectl describe pods my-pod
# List Services Sorted by Name
kubectl get services --sort-by=.metadata.name
# List pods Sorted by Restart Count
kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'
# List PersistentVolumes sorted by capacity
kubectl get pv --sort-by=.spec.capacity.storage
# Get the version label of all pods with label app=cassandra
kubectl get pods --selector=app=cassandra -o \
  jsonpath='{.items[*].metadata.labels.version}'
# Retrieve the value of a key with dots, e.g. 'ca.crt'
kubectl get configmap myconfig \
  -o jsonpath='{.data.ca\.crt}'
# Get all worker nodes (use a selector to exclude results that have a label
# named 'node-role.kubernetes.io/master')
kubectl get node --selector='!node-role.kubernetes.io/master'
# Get all running pods in the namespace
kubectl get pods --field-selector=status.phase=Running
# Get ExternalIPs of all nodes
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
# List Names of Pods that belong to Particular RC
# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://stedolan.github.io/jq/
sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})
# Show labels for all pods (or any other Kubernetes object that supports labelling)
kubectl get pods --show-labels
# Check which nodes are ready
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
 && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
# List all Secrets currently in use by a pod
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
# List all containerIDs of initContainer of all pods
# Helpful when cleaning up stopped containers, while avoiding removal of initContainers.
kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3
# List Events sorted by timestamp
kubectl get events --sort-by=.metadata.creationTimestamp
# Compares the current state of the cluster against the state that the cluster would be in if the manifest was applied.
kubectl diff -f ./my-manifest.yaml
# Produce a period-delimited tree of all keys returned for nodes
# Helpful when locating a key within a complex nested JSON structure
kubectl get nodes -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'
# Produce a period-delimited tree of all keys returned for pods, etc
kubectl get pods -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'

Updating resources

kubectl set image deployment/frontend www=image:v2               # Rolling update "www" containers of "frontend" deployment, updating the image
kubectl rollout history deployment/frontend                      # Check the history of deployments including the revision 
kubectl rollout undo deployment/frontend                         # Rollback to the previous deployment
kubectl rollout undo deployment/frontend --to-revision=2         # Rollback to a specific revision
kubectl rollout status -w deployment/frontend                    # Watch rolling update status of "frontend" deployment until completion
kubectl rollout restart deployment/frontend                      # Rolling restart of the "frontend" deployment
cat pod.json | kubectl replace -f -                              # Replace a pod based on the JSON passed into stdin
# Force replace, delete and then re-create the resource. Will cause a service outage.
kubectl replace --force -f ./pod.json
# Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000
kubectl expose rc nginx --port=80 --target-port=8000
# Update a single-container pod's image version (tag) to v4
kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -
kubectl label pods my-pod new-label=awesome                      # Add a Label
kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq       # Add an annotation
kubectl autoscale deployment foo --min=2 --max=10                # Auto scale a deployment "foo"

Patching resources

# Partially update a node
kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'
# Update a container's image; spec.containers[*].name is required because it's a merge key
kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
# Update a container's image using a json patch with positional arrays
kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'
# Disable a deployment livenessProbe using a json patch with positional arrays
kubectl patch deployment valid-deployment  --type json   -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
# Add a new element to a positional array
kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'

Editing resources

Edit any API resource in your preferred editor.

kubectl edit svc/docker-registry                      # Edit the service named docker-registry
KUBE_EDITOR="nano" kubectl edit svc/docker-registry   # Use an alternative editor

Scaling resources

kubectl scale --replicas=3 rs/foo                                 # Scale a replicaset named 'foo' to 3
kubectl scale --replicas=3 -f foo.yaml                            # Scale a resource specified in "foo.yaml" to 3
kubectl scale --current-replicas=2 --replicas=3 deployment/mysql  # If the deployment named mysql's current size is 2, scale mysql to 3
kubectl scale --replicas=5 rc/foo rc/bar rc/baz                   # Scale multiple replication controllers

Deleting resources

kubectl delete -f ./pod.json                                              # Delete a pod using the type and name specified in pod.json
kubectl delete pod,service baz foo                                        # Delete pods and services with same names "baz" and "foo"
kubectl delete pods,services -l name=myLabel                              # Delete pods and services with label name=myLabel
kubectl -n my-ns delete pod,svc --all                                      # Delete all pods and services in namespace my-ns,
# Delete all pods matching the awk pattern1 or pattern2
kubectl get pods  -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}' | xargs  kubectl delete -n mynamespace pod

Interacting with running Pods

kubectl logs my-pod                                 # dump pod logs (stdout)
kubectl logs -l name=myLabel                        # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod --previous                      # dump pod logs (stdout) for a previous instantiation of a container
kubectl logs my-pod -c my-container                 # dump pod container logs (stdout, multi-container case)
kubectl logs -l name=myLabel -c my-container        # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod -c my-container --previous      # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container
kubectl logs -f my-pod                              # stream pod logs (stdout)
kubectl logs -f my-pod -c my-container              # stream pod container logs (stdout, multi-container case)
kubectl logs -f -l name=myLabel --all-containers    # stream all pods logs with label name=myLabel (stdout)
kubectl run -i --tty busybox --image=busybox -- sh  # Run pod as interactive shell
kubectl run nginx --image=nginx -n mynamespace      # Run pod nginx in a specific namespace
# Run pod nginx and write its spec into a file called pod.yaml
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
kubectl attach my-pod -i                            # Attach to Running Container
kubectl port-forward my-pod 5000:6000               # Listen on port 5000 on the local machine and forward to port 6000 on my-pod
kubectl exec my-pod -- ls /                         # Run command in existing pod (1 container case)
kubectl exec --stdin --tty my-pod -- /bin/sh        # Interactive shell access to a running pod (1 container case) 
kubectl exec my-pod -c my-container -- ls /         # Run command in existing pod (multi-container case)
kubectl top pod POD_NAME --containers               # Show metrics for a given pod and its containers

Interacting with Nodes and cluster

kubectl cordon my-node                                                # Mark my-node as unschedulable
kubectl drain my-node                                                 # Drain my-node in preparation for maintenance
kubectl uncordon my-node                                              # Mark my-node as schedulable
kubectl top node my-node                                              # Show metrics for a given node
kubectl cluster-info                                                  # Display addresses of the master and services
kubectl cluster-info dump                                             # Dump current cluster state to stdout
kubectl cluster-info dump --output-directory=/path/to/cluster-state   # Dump current cluster state to /path/to/cluster-state
# If a taint with that key and effect already exists, its value is replaced as specified.
kubectl taint nodes foo dedicated=special-user:NoSchedule

Resource types

List all supported resource types along with their shortnames, API group, whether they are namespaced, and Kind:

kubectl api-resources

Other operations for exploring API resources:

kubectl api-resources --namespaced=true      # All namespaced resources
kubectl api-resources --namespaced=false     # All non-namespaced resources
kubectl api-resources -o name                # All resources with simple output (just the resource name)
kubectl api-resources -o wide                # All resources with expanded (aka "wide") output
kubectl api-resources --verbs=list,get       # All resources that support the "list" and "get" request verbs
kubectl api-resources --api-group=extensions # All resources in the "extensions" API group

Formatting output

To output details to your terminal window in a specific format, add the -o (or --output) flag to a supported kubectl command.

| Output format                     | Description                                                                                              |
|-----------------------------------|----------------------------------------------------------------------------------------------------------|
| -o=custom-columns=<spec>          | Print a table using a comma separated list of custom columns                                             |
| -o=custom-columns-file=<filename> | Print a table using the custom columns template in the <filename> file                                   |
| -o=json                           | Output a JSON formatted API object                                                                       |
| -o=jsonpath=<template>            | Print the fields defined in a jsonpath expression                                                        |
| -o=jsonpath-file=<filename>       | Print the fields defined by the jsonpath expression in the <filename> file                               |
| -o=name                           | Print only the resource name and nothing else                                                            |
| -o=wide                           | Output in the plain-text format with any additional information, and for pods, the node name is included |
| -o=yaml                           | Output a YAML formatted API object                                                                       |

Examples using -o=custom-columns:

# All images running in a cluster
kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'
 # All images excluding "k8s.gcr.io/coredns:1.6.2"
kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image'
# All fields under metadata regardless of name
kubectl get pods -A -o=custom-columns='DATA:metadata.*'

More examples in the kubectl reference documentation.

Kubectl output verbosity and debugging

Kubectl verbosity is controlled with the -v or --v flags followed by an integer representing the log level. General Kubernetes logging conventions and the associated log levels are described here.

| Verbosity | Description                                                                                                                                                                        |
|-----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| --v=0     | Generally useful for this to always be visible to a cluster operator.                                                                                                              |
| --v=1     | A reasonable default log level if you don’t want verbosity.                                                                                                                        |
| --v=2     | Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems. |
| --v=3     | Extended information about changes.                                                                                                                                                |
| --v=4     | Debug level verbosity.                                                                                                                                                             |
| --v=6     | Display requested resources.                                                                                                                                                       |
| --v=7     | Display HTTP request headers.                                                                                                                                                      |
| --v=8     | Display HTTP request contents.                                                                                                                                                     |
| --v=9     | Display HTTP request contents without truncation of contents.                                                                                                                      |
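For example, to watch the HTTP requests kubectl sends to the API server while listing pods:

kubectl get pods --v=8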

Kubernetes | Getting started

Readings

TL;DR

Helm (Documentation): Using Helm

choco install kubernetes-helm
helm
helm repo add stable https://charts.helm.sh/stable
helm search repo stable
helm repo update
helm install stable/mysql --generate-name
helm ls
helm status mysql-1609772389
helm search hub wordpress
helm repo add brigade https://brigadecore.github.io/charts
helm repo ls
helm search repo brigade

Customizing the Chart Before Installing
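A sketch of the usual flow (chart name and value are only examples): first inspect the configurable values of a chart, then override them on install with --set or a values file.

helm show values stable/mysql                                                    # list the configurable values
helm install stable/mysql --generate-name --set mysqlRootPassword=my-secret-pw  # override a single value
helm install stable/mysql --generate-name --values config.yaml                  # override values from a file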

Helm: Concepts

Three Big Concepts

Chart: A Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
Repository: A place where charts can be collected and shared.
Release: An instance of a chart running in a Kubernetes cluster.
One chart can often be installed many times into the same cluster. And each time it is installed, a new release is created.
Consider a MySQL chart. If you want two databases running in your cluster, you can install that chart twice. Each one will have its own release, which will in turn have its own release name.
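Spelled out as commands (the release names db1 and db2 are arbitrary):

helm install db1 stable/mysql   # first release of the chart
helm install db2 stable/mysql   # second, independent release of the same chart
helm ls                         # lists both releases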

Helm: Creating Charts

helm create chartname
helm package chartname
helm install mychart
helm install mychart mychart/ --values mychart/values.yaml
helm ls
helm delete generated-deployment-name
helm package mychart
helm install mychart-0.1.0.tgz
helm install release-name mychart 

Helm: CLI

Common actions for Helm:

- helm search:    search for charts
- helm pull:      download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts

Environment variables:

| Name                               | Description                                                                       |
|------------------------------------|-----------------------------------------------------------------------------------|
| $HELM_CACHE_HOME                   | set an alternative location for storing cached files.                             |
| $HELM_CONFIG_HOME                  | set an alternative location for storing Helm configuration.                       |
| $HELM_DATA_HOME                    | set an alternative location for storing Helm data.                                |
| $HELM_DEBUG                        | indicate whether or not Helm is running in Debug mode                             |
| $HELM_DRIVER                       | set the backend storage driver. Values are: configmap, secret, memory, postgres   |
| $HELM_DRIVER_SQL_CONNECTION_STRING | set the connection string the SQL storage driver should use.                      |
| $HELM_MAX_HISTORY                  | set the maximum number of helm release history.                                   |
| $HELM_NAMESPACE                    | set the namespace used for the helm operations.                                   |
| $HELM_NO_PLUGINS                   | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.                        |
| $HELM_PLUGINS                      | set the path to the plugins directory                                             |
| $HELM_REGISTRY_CONFIG              | set the path to the registry config file.                                         |
| $HELM_REPOSITORY_CACHE             | set the path to the repository cache directory                                    |
| $HELM_REPOSITORY_CONFIG            | set the path to the repositories file.                                            |
| $KUBECONFIG                        | set an alternative Kubernetes configuration file (default "~/.kube/config")       |
| $HELM_KUBEAPISERVER                | set the Kubernetes API Server Endpoint for authentication                         |
| $HELM_KUBEASGROUPS                 | set the Groups to use for impersonation using a comma-separated list.             |
| $HELM_KUBEASUSER                   | set the Username to impersonate for the operation.                                |
| $HELM_KUBECONTEXT                  | set the name of the kubeconfig context.                                           |
| $HELM_KUBETOKEN                    | set the Bearer KubeToken used for authentication.                                 |

Helm stores cache, configuration, and data based on the following configuration order:

– If a HELM_*_HOME environment variable is set, it will be used
– Otherwise, on systems supporting the XDG base directory specification, the XDG variables will be used
– When no other location is set a default location will be used based on the operating system
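For example, to relocate everything for the current shell session (the paths are arbitrary):

export HELM_CACHE_HOME=$HOME/helm/cache
export HELM_CONFIG_HOME=$HOME/helm/config
export HELM_DATA_HOME=$HOME/helm/data
helm env   # print the effective Helm environment, including these paths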

The default directories depend on the operating system and are listed below:

| OS      | Cache Path                | Configuration Path             | Data Path                |
|---------|---------------------------|--------------------------------|--------------------------|
| Linux   | $HOME/.cache/helm         | $HOME/.config/helm             | $HOME/.local/share/helm  |
| macOS   | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm       |
| Windows | %TEMP%\helm               | %APPDATA%\helm                 | %APPDATA%\helm           |

Available Commands:

  completion  generate autocompletions script for the specified shell
  create      create a new chart with the given name
  dependency  manage a chart's dependencies
  env         helm client environment information
  get         download extended information of a named release
  help        Help about any command
  history     fetch release history
  install     install a chart
  lint        examine a chart for possible issues
  list        list releases
  package     package a chart directory into a chart archive
  plugin      install, list, or uninstall Helm plugins
  pull        download chart from repository and unpack in local directory
  repo        add, list, remove, update, and index chart repositories
  rollback    roll back a release to a previous revision
  search      search for a keyword in charts
  show        show information of a chart
  status      display the status of the named release
  template    locally render templates
  test        run tests for a release
  uninstall   uninstall a release
  upgrade     upgrade a release
  verify      verify that a chart at the given path has been signed and is valid
  version     print the client version information

Flags

      --debug                       enable verbose output
  -h, --help                        help for helm
      --kube-apiserver string       the address and the port for Kubernetes API server
      --kube-as-group stringArray   Group to impersonate for the operation, 
                                    flag can be repeated to specify multiple groups.
      --kube-as-user string         Username to impersonate for the operation
      --kube-context string         name of the kubeconfig context to use
      --kube-token string           bearer token used for authentication
      --kubeconfig string           path to the kubeconfig file
  -n, --namespace string            namespace scope for this request
      --registry-config string      path to the registry config file 
                                    (<User>\AppData\Roaming\helm\registry.json)
      --repository-cache string     path to file containing cached repository indexes
                                    (<User>\AppData\Local\Temp\helm\repository)
      --repository-config string    path to file containing repository names and URLs
                                   (<User>\AppData\Roaming\helm\repositories.yaml)
 
Use "helm [command] --help" for more information about a command.

General Information

Minikube

Installation

https://minikube.sigs.k8s.io/docs/start/

Start Cluster

minikube start
* minikube v1.16.0 on Microsoft Windows 10 Pro 10.0.19042 Build 19042
* Automatically selected the docker driver
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Downloading Kubernetes v1.20.0 preload ...
    > preloaded-images-k8s-v8-v1....: 491.00 MiB / 491.00 MiB  100.00% 24.50 Mi
* Creating docker container (CPUs=2, Memory=8100MB) ...
E1225 11:29:49.056358   11564 kic.go:241] icacls failed applying permissions - err - [
1 file(s) successfully processed, 0 file(s) encountered a processing error.]
* Preparing Kubernetes v1.20.0 on Docker 20.10.0...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
kubectl get po -A
minikube kubectl -- get po -A
minikube dashboard
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-minikube --type=NodePort --port=8080
kubectl get services hello-minikube
minikube service hello-minikube

Alternatively, use kubectl to forward the port:

kubectl port-forward service/hello-minikube 7080:8080

Kubernetes Basics Module

Reference

Kubernetes: production-grade, open-source platform that orchestrates the placement (scheduling) and execution of application containers within and across computer clusters.

Deployment: responsible for creating and updating instances of your application

Pod: group of one or more application containers (such as Docker) and includes shared storage (volumes), IP address and information about how to run them.

Node: a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Multiple Pods can run on one Node.

Kubernetes Service: abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing and service discovery for those Pods.

Scaling is accomplished by changing the number of replicas in a Deployment
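For example (the deployment name my-app is hypothetical):

kubectl scale deployments/my-app --replicas=3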

Tutorial

1. Create a Kubernetes cluster

minikube version
minikube start
kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:49153
KubeDNS is running at https://127.0.0.1:49153/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

2. Deploy an App

kubectl get node
NAME       STATUS   ROLES                  AGE     VERSION
minikube   Ready    control-plane,master   5d22h   v1.20.0
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
deployment.apps/kubernetes-bootcamp created
kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1/1     1            1           52s
kubectl proxy
Starting to serve on 127.0.0.1:8001
curl http://localhost:8001/version
{
  "major": "1",
  "minor": "20",
  "gitVersion": "v1.20.0",
  "gitCommit": "af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38",
  "gitTreeState": "clean",
  "buildDate": "2020-12-08T17:51:19Z",
  "goVersion": "go1.15.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}
kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
hello-minikube-6ddfcc9757-kpl5f        1/1     Running   1          5d22h
kubernetes-bootcamp-57978f5f5d-ccr6p   1/1     Running   0          6m25s
kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{end}}'

3. Explore your app

kubectl get pods
> kubectl describe pods
> kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{end}}'
kubernetes-bootcamp-57978f5f5d-z8vsm
> $PODNAME=kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{end}}'
> Invoke-WebRequest http://localhost:8001/api/v1/namespaces/default/pods/$PODNAME
StatusCode        : 200
StatusDescription : OK
Content           : {
                      "kind": "Pod",
                      "apiVersion": "v1",
                      "metadata": {
                        "name": "kubernetes-bootcamp-57978f5f5d-z8vsm",
                        "generateName": "kubernetes-bootcamp-57978f5f5d-",
                        "namespace": "default",
                        "uid…
RawContent        : HTTP/1.1 200 OK
                    Cache-Control: no-cache, private
                    Date: Thu, 31 Dec 2020 10:07:12 GMT
                    X-Kubernetes-Pf-Flowschema-Uid: be79abd4-3f87-4ca2-80be-2b83f4c48148
                    X-Kubernetes-Pf-Prioritylevel-Uid: 3c0f88d…
Headers           : {[Cache-Control, System.String[]], [Date, System.String[]], [X-Kubernetes-Pf-Flowschema-Uid, System.String[]], [X-Kubernetes-Pf-Prioritylevel-Uid, System.String[]]…}
Images            : {}
InputFields       : {}
Links             : {}
RawContentLength  : 5593
RelationLink      : {}
> kubectl logs $PODNAME
Kubernetes Bootcamp App Started At: 2020-12-31T09:09:43.763Z | Running On:  kubernetes-bootcamp-57978f5f5d-z8vsm
> kubectl exec $PODNAME env
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=kubernetes-bootcamp-57978f5f5d-z8vsm
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
NPM_CONFIG_LOGLEVEL=info
NODE_VERSION=6.3.1
HOME=/root
> kubectl exec -ti $PODNAME bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@kubernetes-bootcamp-57978f5f5d-z8vsm:/# exit
exit

4. Expose your app publicly

Step 1 Create a new service
> kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-bootcamp-57978f5f5d-z8vsm   1/1     Running   1          95m
❯ kubectl get services
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP          6d
> kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
service/kubernetes-bootcamp exposed
> kubectl get services
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes            ClusterIP   10.96.0.1       <none>        443/TCP          6d
kubernetes-bootcamp   NodePort    10.98.244.121   <none>        8080:31603/TCP   3m58s
> kubectl describe services/kubernetes-bootcamp
Name:                     kubernetes-bootcamp
Namespace:                default
Labels:                   app=kubernetes-bootcamp
Annotations:              <none>
Selector:                 app=kubernetes-bootcamp
Type:                     NodePort
IP Families:              <none>
IP:                       10.98.244.121
IPs:                      10.98.244.121
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31603/TCP
Endpoints:                172.17.0.5:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
> $NODEPORT=kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}'
❯ $NODEPORT
31603
Step 2: Using labels
> kubectl describe deployment
Name:                   kubernetes-bootcamp
Namespace:              default
CreationTimestamp:      Thu, 31 Dec 2020 09:49:43 +0100
Labels:                 app=kubernetes-bootcamp
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=kubernetes-bootcamp
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=kubernetes-bootcamp
  Containers:
   kubernetes-bootcamp:
    Image:        gcr.io/google-samples/kubernetes-bootcamp:v1
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   kubernetes-bootcamp-57978f5f5d (1/1 replicas created)
Events:          <none>
> kubectl get pods -l app=kubernetes-bootcamp
NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-bootcamp-57978f5f5d-z8vsm   1/1     Running   1          31h
> kubectl get services -l app=kubernetes-bootcamp
NAME                  TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes-bootcamp   NodePort   10.98.244.121   <none>        8080:31603/TCP   30h
> $POD_NAME=kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{end}}'
❯ $POD_NAME
kubernetes-bootcamp-57978f5f5d-z8vsm
> kubectl label pod $POD_NAME app=v1
> kubectl label pod $POD_NAME app=v1
error: 'app' already has a value (kubernetes-bootcamp), and --overwrite is false
❯ kubectl label pod $POD_NAME app=v1 --overwrite
pod/kubernetes-bootcamp-57978f5f5d-z8vsm labeled
> kubectl get pods -l app=v1
NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-bootcamp-57978f5f5d-z8vsm   1/1     Running   1          31h
Step 3: Deleting a service
> kubectl get services
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes            ClusterIP   10.96.0.1       <none>        443/TCP          7d6h
kubernetes-bootcamp   NodePort    10.98.244.121   <none>        8080:31603/TCP   30h
❯ kubectl delete service -l app=kubernetes-bootcamp
service "kubernetes-bootcamp" deleted

Not reachable from outside

> curl $(minikube ip):$NODE_PORT

Reachable from inside

> kubectl exec -ti $POD_NAME -- curl localhost:8080
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-57978f5f5d-z8vsm | v=1

5. Scale up your app

Step 1: Scaling a deployment

To list your deployments 

> kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1/1     1            1           32h
> kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
kubernetes-bootcamp-57978f5f5d   1         1         1       32h
> kubectl scale deployments/kubernetes-bootcamp --replicas=4
deployment.apps/kubernetes-bootcamp scaled
> kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
kubernetes-bootcamp-57978f5f5d   4         4         4       32h
> kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   4/4     4            4           32h
> kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
kubernetes-bootcamp-57978f5f5d-9fwnz   1/1     Running   0          28m     172.17.0.6   minikube   <none>           <none>
kubernetes-bootcamp-57978f5f5d-dgvm4   1/1     Running   0          6m33s   172.17.0.8   minikube   <none>           <none>
kubernetes-bootcamp-57978f5f5d-qh686   1/1     Running   0          6m33s   172.17.0.9   minikube   <none>           <none>
kubernetes-bootcamp-57978f5f5d-wx9zg   1/1     Running   0          6m33s   172.17.0.7   minikube   <none>           <none>
kubernetes-bootcamp-57978f5f5d-z8vsm   1/1     Running   1          32h     172.17.0.5   minikube   <none>           <none>
> kubectl describe deployments/kubernetes-bootcamp
Step 2: Load Balancing

Create service if desired

> kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
> $NODE_PORT=kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}'
❯ $NODE_PORT
31937
> $NODE_IP=minikube ip
Step 3: Scale Down
> kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   4/4     4            4           32h
❯ kubectl scale deployments/kubernetes-bootcamp --replicas=2
deployment.apps/kubernetes-bootcamp scaled
❯ kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   2/2     2            2           32h
>  kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
kubernetes-bootcamp-57978f5f5d-9fwnz   1/1     Running   0          48m   172.17.0.6   minikube   <none>           <none>
kubernetes-bootcamp-57978f5f5d-wx9zg   1/1     Running   0          26m   172.17.0.7   minikube   <none>           <none>
kubernetes-bootcamp-57978f5f5d-z8vsm   1/1     Running   1          32h   172.17.0.5   minikube   <none>           <none>

6. Update your app

Docker

Docker | Cookbook

Useful apps

portainer.io: MAKING DOCKER MANAGEMENT EASY. Build and manage your Docker environments with ease today.

dive: A tool for exploring each layer in a docker image

Useful commands

  • docker ps — Lists running containers.
    Some useful flags include: -a / --all for all containers (default shows just running) and --quiet / -q to list just their ids (useful for when you want to get all the containers).
  • docker pull — Most of your images will be created on top of a base image from the Docker Hub registry. Docker Hub contains many pre-built images that you can pull and try without needing to define and configure your own. To download a particular image, or set of images (i.e., a repository), use docker pull.
  • docker build — Builds Docker images from a Dockerfile and a “context”.
    A build’s context is the set of files located in the specified PATH or URL. Use the -t flag to label the image, for example docker build -t my_container . with the . at the end signalling to build using the current directory.
  • docker run — Run a docker container based on an image.
    You can follow this on with other commands, such as -it bash to then run bash from within the container. 
    Also see Top 10 options for docker run — a quick reference guide for the CLI command
    docker run -it my_image bash
  • docker logs —  Display the logs of a container.
    You must specify a container and can use flags, such as --follow to follow the output in the logs of using the program. 
    docker logs --follow my_container
  • docker volume ls — Lists the volumes.
    Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
  • docker rm — Removes one or more containers.
    docker rm my_container
  • docker rmi — Removes one or more images. 
    docker rmi my_image
  • docker stop — Stops one or more containers.
    docker stop my_container stops one container, while docker stop $(docker ps -a -q) stops all running containers. A more direct way is to use docker kill my_container, which does not attempt to shut down the process gracefully first.
  • Use them together, for example to clean up all your docker images and containers:
  • kill all running containers with docker kill $(docker ps -q)
  • delete all stopped containers with docker rm $(docker ps -a -q)
  • delete all images with docker rmi $(docker images -q)
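Combined into a small cleanup script (a sketch; this kills and removes all containers and images, so use with care):

#!/bin/bash
# kill all running containers
docker kill $(docker ps -q)
# delete all stopped containers
docker rm $(docker ps -a -q)
# delete all images
docker rmi $(docker images -q)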

Create new container

Start a new docker container with a given name

You can start a new container by using the run command and specifying the desired image.

$ docker run -it --name playground ubuntu:17.10 /bin/bash
....
root@c106fbb48b20:/# exit

As a result, you are in the container at the bash command line

Reconnect to image
$ docker attach playground
Commit changes in container
$ docker start playground
$ docker attach playground
root@c106fbb48b20:/# echo 1.0 >VERSION
root@c106fbb48b20:/# exit
$ docker commit playground playground:1.0
$ docker tag playground:1.0 playground:latest
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED                  SIZE
playground   1.0      01703597322b   Less than a second ago   94.6MB
playground   latest   01703597322b   Less than a second ago   94.6MB

Add tools and utilities

Python

$ apt-get update 
$ apt-get upgrade 
$ apt-get install python3 python3-pip

Java

$ apt-get install default-jre

Manage File Shares

File shares with Docker Desktop on Mac OS

Configuration is stored under

~/Library/Group Containers/group.com.docker/settings.json
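To inspect the current file share configuration, you can pretty-print this file (assuming Python is available, as used elsewhere in this post):

cat ~/Library/Group\ Containers/group.com.docker/settings.json | python -m json.tool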

Monitor Docker Logs

Logs with Docker Desktop on Mac OS

pred='process matches ".*(ocker|vpnkit).*" || (process in {"taskgated-helper", "launchservicesd", "kernel"} && eventMessage contains[c] "docker")'
/usr/bin/log stream --style syslog --level=debug --color=always --predicate "$pred"

Alternatively, you can run

/usr/bin/log show --style syslog --debug --info --last 1d --predicate "$pred" >/tmp/logs.txt

Add Timezone Configuration

ENV TZ 'Europe/Berlin'

RUN echo $TZ > /etc/timezone 
RUN    apt-get install -y tzdata \
    && rm /etc/localtime \
    && ln -snf /usr/share/zoneinfo/$TZ /etc/localtime \
    && dpkg-reconfigure -f noninteractive tzdata \
    && apt-get clean

Install local apps in a docker container

Install Atom Editor

Start docker image

$ docker run -it --name docker-atom -v /Dockerfiles/HOME:/home -e DISPLAY=192.168.99.1:0 ubuntu /bin/bash

Install Atom

# apt-get update
# apt-get install curl
# curl -sL https://deb.nodesource.com/setup_7.x | bash -
# apt-get install nodejs
# node -v
v7.4.0
# apt-get -y install git
# apt-get -y install software-properties-common
# add-apt-repository -y ppa:webupd8team/atom
# apt-get update
# apt-get -y install atom
# apt-get install libxss1

Commit changes and build an image

# docker commit -a "Docker Tutorial 1.0.0" -m ionic d378e8647af9 atom:1.0.0
# docker tag atom:1.0.0 atom:latest

Links and Resources

Docker quick reference guides

Docker in more depth

Working with networks

I’ve got a lot of inspiration from wsargent/docker-cheat-sheet.

Goal

We will set up two applications, each in its own container, running on the same network, so they can communicate with each other.

Setup

Starting with the default configuration, you will see three networks:

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
6742a11bff1e        bridge              bridge              local
3af0a1c9eaac        host                host                local
e60f68aad9d6        none                null                local

Create a bridged network for running two apps inside: web and database

$ docker network create -d bridge playground_bridge
c97a80b449d9b6b28ddffa0a7bd4a7938e0b8261773080ab33ae4b7ab08826b1
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
6742a11bff1e        bridge              bridge              local
3af0a1c9eaac        host                host                local
e60f68aad9d6        none                null                local
c97a80b449d9        playground_bridge   bridge              local

Start the database application using the bridged network

$ docker run -d --net=playground_bridge --name playground_db training/postgres
88abb9d018c628ed1abe7da0466289846a8342a28b2cbef3305ea5313c46d647
$ docker inspect --format='{{json .NetworkSettings.Networks}}' playground_db| python -m json.tool
{
    "playground_bridge": {
        "Aliases": [
            "88abb9d018c6"
        ],
        "DriverOpts": null,
        "EndpointID": "9fdfe9baf5b159471a39601779ee451aa555a9a9be72be4472e56bd3fcfd1350",
        "Gateway": "172.18.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.18.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "Links": null,
        "MacAddress": "02:42:ac:12:00:02",
        "NetworkID": "c97a80b449d9b6b28ddffa0a7bd4a7938e0b8261773080ab33ae4b7ab08826b1"
    }
}

Now, start the web application with the default network (not the playground_bridge used by the database)

$ docker run -d --name playground_web training/webapp python app.py
2f31761168d75d10c2f1bffc805fb8963a18529e17c2592c2b279afd9e364e7b

They cannot communicate, because they are running in different networks:

$ docker exec -it playground_db ifconfig | grep inet
         inet addr:172.18.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
         inet addr:127.0.0.1 Mask:255.0.0.0
$ docker exec -it playground_web ifconfig | grep inet
         inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
         inet addr:127.0.0.1 Mask:255.0.0.0

Now check the connectivity. The Web application can reach itself:

$ docker exec -it playground_web ping -c 5 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.033 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.036 ms
64 bytes from 172.17.0.2: icmp_seq=4 ttl=64 time=0.031 ms
64 bytes from 172.17.0.2: icmp_seq=5 ttl=64 time=0.029 ms

But it cannot reach the database application (because of the different network):

$ docker exec -it playground_web ping -c 5 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.

--- 172.18.0.2 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4126ms

To connect both apps together, put them in the same network:

Before, they use different networks:

$ docker inspect --format='{{json .NetworkSettings.Networks}}' playground_web | python -m json.tool|grep NetworkID
        "NetworkID": "6742a11bff1ebcdeaee9151f146a74b1c3d77db95d4931e4e79f48f7d7f491f7"
$ docker inspect --format='{{json .NetworkSettings.Networks}}' playground_db | python -m json.tool|grep NetworkID
        "NetworkID": "c97a80b449d9b6b28ddffa0a7bd4a7938e0b8261773080ab33ae4b7ab08826b1"

Connect Web Application to network playground_bridge:

$ docker network connect playground_bridge playground_web

 Now, they use the same network:

$ docker inspect --format='{{json .NetworkSettings.Networks}}' playground_web | python -m json.tool|grep NetworkID
        "NetworkID": "6742a11bff1ebcdeaee9151f146a74b1c3d77db95d4931e4e79f48f7d7f491f7"
        "NetworkID": "c97a80b449d9b6b28ddffa0a7bd4a7938e0b8261773080ab33ae4b7ab08826b1"
$ docker inspect --format='{{json .NetworkSettings.Networks}}' playground_db | python -m json.tool|grep NetworkID
        "NetworkID": "c97a80b449d9b6b28ddffa0a7bd4a7938e0b8261773080ab33ae4b7ab08826b1"

And now they can communicate:

$ docker exec -it playground_web ping -c 5 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.044 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.033 ms
64 bytes from 172.17.0.2: icmp_seq=4 ttl=64 time=0.032 ms
64 bytes from 172.17.0.2: icmp_seq=5 ttl=64 time=0.033 ms

--- 172.17.0.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4144ms
rtt min/avg/max/mdev = 0.032/0.034/0.044/0.008 ms

$ docker exec -it playground_web ping -c 5 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.134 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.052 ms
64 bytes from 172.18.0.2: icmp_seq=3 ttl=64 time=0.056 ms
64 bytes from 172.18.0.2: icmp_seq=4 ttl=64 time=0.052 ms
64 bytes from 172.18.0.2: icmp_seq=5 ttl=64 time=0.053 ms

--- 172.18.0.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4156ms
rtt min/avg/max/mdev = 0.052/0.069/0.134/0.033 ms

Useful Links

https://blog.docker.com/2019/07/intro-guide-to-dockerfile-best-practices/

Docker

Docker | Getting started

General Information

Installation

Install with Homebrew

brew install bash-completion
brew cask install docker
brew install kubectl
brew cask install minikube

After Installation, check versions

docker version
docker-compose version
docker-machine --version
kubectl version --client

First steps

Start a docker container with a given name

$ docker run --interactive --tty --name ubuntu ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
af49a5ceb2a5: Pull complete 
8f9757b472e7: Pull complete 
e931b117db38: Pull complete 
47b5e16c0811: Pull complete 
9332eaf1a55b: Pull complete 
Digest: sha256:3b64c309deae7ab0f7dbdd42b6b326261ccd6261da5d88396439353162703fb5
Status: Downloaded newer image for ubuntu:latest
root@a5b411d609f0:/#

Run a command

root@a5b411d609f0:/# uname -a
Linux a5b411d609f0 4.4.27-moby #1 SMP Wed Oct 26 14:21:29 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
root@a5b411d609f0:/# id
uid=0(root) gid=0(root) groups=0(root)
root@a5b411d609f0:/# hostname
a5b411d609f0
root@a5b411d609f0:/#

Leave the container

root@a5b411d609f0:/# exit
exit
$

Show containers

$ docker ps -a
CONTAINER ID  IMAGE   COMMAND      CREATED        STATUS                   PORTS  NAMES
b01ba9bfef78  ubuntu  "/bin/bash"  41 seconds ago Exited (0) 2 seconds ago        ubuntu

Start the container

$ docker start ubuntu
ubuntu
$ docker ps -a
CONTAINER ID  IMAGE   COMMAND      CREATED        STATUS                 PORTS   NAMES
b01ba9bfef78  ubuntu  "/bin/bash"  2 minutes ago  Up 1 seconds                   ubuntu

Attach to the container, i.e. “enter” it

Don’t forget to press enter after you enter the command to display the shell in the container again.

$ docker attach ubuntu
root@b01ba9bfef78:/#

Working with Docker

$ docker run -it --name ubuntu ubuntu bash

You are in a terminal with ubuntu and can do whatever you like.

To start again after a reboot:

$ docker start ubuntu
$ docker exec -it ubuntu bash

If you want save your changes:

$ docker commit ubuntu
$ docker images

You will see the unnamed image; then tag it:

$ docker tag <imageid> myubuntu

Then you can run another container using your new image.

$ docker run -it --name myubuntu myubuntu bash

Or replace the former

$ docker stop ubuntu
$ docker rm ubuntu
$ docker run -it --name ubuntu myubuntu bash

Docker Components

Machines

Create machine

$ docker-machine create --driver=virtualbox default
$ docker-machine ls
NAME         ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default      -        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.2   
virtualbox   -        virtualbox   Stopped                                       Unknown 

Show environment of machine

$ docker-machine env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/docker/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
# Run this command to configure your shell: 
# eval $(docker-machine env)

Stop machine

$ docker-machine stop default
Stopping "default"...
Machine "default" was stopped.

Start machine

$ docker-machine start default
Starting "default"...
(default) Check network to re-create if needed...
(default) Waiting for an IP...
Machine "default" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the docker-machine env command.

Set environment of a machine

$ eval "$(docker-machine env default)"

Create machine from iso image

$ docker-machine create -d virtualbox --virtualbox-boot2docker-url https://releases.rancher.com/os/latest/rancheros.iso <MACHINE-NAME>
$ docker-machine env rancheros

Images

Images are just templates for docker containers.

List all images

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              f753707788c5        5 days ago          127.2 MB

Remove images

$ docker rmi <IMAGE ID>

Remove all images

$ docker rmi $(docker images -q)

Container

List all containers

$ docker ps -a

Run command in container

$ docker run -it ubuntu bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
6bbedd9b76a4: Pull complete 
fc19d60a83f1: Pull complete 
de413bb911fd: Pull complete 
2879a7ad3144: Pull complete 
668604fde02e: Pull complete 
Digest: sha256:2d44ae143feeb36f4c898d32ed2ab2dffeb3a573d2d8928646dfc9cb7deb1315
Status: Downloaded newer image for ubuntu:latest

Run command and delete container after running

$ docker run -it --rm ubuntu hostname

Check environment in container

$ docker run -it ubuntu bash
# hostname
c26fc567f552
root@c26fc567f552:/# uname -a
Linux c26fc567f552 4.4.24-boot2docker #1 SMP Fri Oct 7 20:54:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Remove all containers

$ docker rm $(docker ps -a -q)

Commit container as new image

$ docker ps -a
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS                     PORTS               NAMES
e4a50905aa9c        continuumio/anaconda   "/usr/bin/tini -- /bi"   22 minutes ago      Exited (0) 5 minutes ago                       pedantic_kirch
164daaac2349        4f3b088e1307           "/bin/sh -c 'apt-get "   4 hours ago         Exited (100) 4 hours ago                       happy_jang
817bb15d3171        i_electron             "/bin/bash"              2 weeks ago         Exited (0) 2 weeks ago                         cranky_wilson
$ docker commit e4a50905aa9c r14r_anaconda

Tips and Tricks

Use npm as a command wrapper

Create an empty npm package

$ npm init --yes

Add this lines to the package.json file

{
  "name": "development",
   ...
  "license": "ISC",
  "scripts":  {
    "build": "docker build -t development .",
    "ssh": "docker run -i -t development /bin/bash"
  }
}

Create a Dockerfile

FROM ubuntu
RUN apt-get update && apt-get install -y firefox

Now you can build the image with

$ npm run build
$ npm run ssh

Running XClients from Docker on macOS Host (Source)

Install Homebrew

Install required parts

$ brew install socat
$ brew cask install xquartz

Now start XQuartz

$ open -a XQuartz

Expose local xquartz socket via socat on a TCP port.
Run this in another terminal window

$ socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:\"$DISPLAY\"

The escaping of the " characters is VERY important

Run Firefox in Docker Container

$ docker run -it -e DISPLAY=server:0.0 i_firefox firefox

Create image from scratch

Scratch image


Using Linux as Docker OS

Available Linux OS’s

Alpine Linux: Home | Github | Docker Hub
CoreOS Container Linux: Home | Github | Docker Hub | ISO
Rancher Labs RancherOS: Home | Github | Docker Hub
Red Hat Project Atomic: Home | Github | Docker Hub
VMware Photon OS: Home | Github | Docker Hub