Docker Compose is currently used by millions of developers, and with over 650,000 Compose files on GitHub it has been widely embraced because it is a simple, cloud- and platform-agnostic way of defining multi-container applications.

Problem statement

Today docker-compose is used in local development environments and in integration tests on physical Jenkins nodes. Moving almost everything to Kubernetes means that we must either make docker-compose work without hassle on the Kubernetes platform or agree on another preferred way of working. Important things to take into consideration are:

- Testability: support for ephemeral deploys.
- Simplicity: hide complex technical implementation details from developers behind a simple-to-use API with sensible default values — developers should focus on business logic, not build and deploy scaffolding, and with an API as an abstraction layer we can replace underlying implementations like Jenkins, docker-compose and Kubernetes without developers having to make a change.
- Reproducibility: make different environments as similar to each other as possible.
- Delivery speed: provide a smooth and quick process from code commit to production.

[Read More]
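For reference, this is the kind of multi-container definition that Compose keeps simple — a hypothetical two-service app (service names, image tags and ports below are placeholders, not taken from the post):

```yaml
# docker-compose.yml — hypothetical minimal example: a web service and its cache.
version: "3.8"
services:
  web:
    build: .            # build the image from the Dockerfile in this directory
    ports:
      - "8080:8080"     # host:container port mapping
    depends_on:
      - redis           # start the cache before the web service
  redis:
    image: redis:6
```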
Docker on K8s
The traditional Docker engine in K8s is kaput, gone. We need to build Docker images another way.
Different ways of building your Docker images in K8s without the Docker engine
| Tool | Description | Notes |
|---|---|---|
| BuildKit | `DOCKER_BUILDKIT=1 docker build .` | - |
| Buildah and Podman | Daemonless building (Buildah) and running (Podman) of OCI container images | developed by Red Hat |
| kaniko | Build container images from a Dockerfile, inside a container or Kubernetes cluster | - |
| k3c | Docker for Kubernetes (to use with k3s) | - |
| Bazel | Google's generic build system | no Dockerfile needed |
Which one is the best?
- Current recommendation is BuildKit or kaniko.
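As a sketch of what the kaniko option looks like in practice: the executor runs as an ordinary pod, builds from a Dockerfile in a build context, and pushes the result to a registry — no Docker daemon involved. The repository URL, image name and registry below are placeholders:

```yaml
# Hypothetical kaniko build pod. In a real setup you would also mount
# registry credentials (e.g. a docker-registry secret) into the pod.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example/myapp.git      # build context from git
        - --destination=registry.example.com/myapp:latest   # where to push
```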
Development with K8s locally
Different ways to run "lightweight" K8s locally, and a basic measurement of how resource-heavy each one is.

kind

kind creates local multi-node Kubernetes clusters using Docker container nodes. https://github.com/kubernetes-sigs/kind

```shell
brew install kind
kind create cluster
kind delete cluster
```

minikube

https://minikube.sigs.k8s.io/docs/

```shell
brew install minikube
minikube start
minikube pause
minikube delete
```

k3d (k3s)

k3d is k3s in Docker. k3s is the lightweight Kubernetes distribution by Rancher: rancher/k3s. https://github.com/rancher/k3d

```shell
brew install k3d
k3d cluster create test
k3d cluster delete test
```

Resource usage

Resource usage when using Docker and K8s. Docker settings: CPU 4, mem 4 GB, swap 1 GB.

| image + (pod) | CPU % | Memory GB |
|---|---|---|
| docker | | |
| - | 4 | 4.7 - 6.8 |
| redis | 7 | * |
| redis+mongo | 9 | * |
| docker-desktop with K8s | | |
| ca 20 k8s_XXX + (1 redis) | 37 | * |
| ca 20 k8s_XXX + (10 redis) | 42 | * |
| ca 20 k8s_XXX + (100 redis) | 71 | * |
| docker with minikube | | |
| gcr.io/k8s-minikube + (1 redis) | 50 (80 after one hour) | * |

gcr. [Read More]
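Measurements like the ones above can be collected with `docker stats`. A small sketch that sums the per-container memory column into a single number (unit handling is simplified to MiB/GiB, and this is container-level usage only — on Mac the VM overhead comes on top of it):

```shell
# Sum the memory column of `docker stats` across all running containers.
docker stats --no-stream --format '{{.MemUsage}}' \
  | awk '{ u = $1
           if (u ~ /GiB/) { sub(/GiB/, "", u); total += u * 1024 } else { sub(/MiB/, "", u); total += u }
         }
         END { printf "total: %.0f MiB\n", total }'
```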
Docker on Mac, and Hyperkit
Docker on Mac has an unusual architecture which makes it difficult to get a clear picture of resource consumption. Docker relies on features unique to the Linux kernel, like cgroups, to implement containers, and as a result requires Linux to run. Because macOS is not Linux, Docker runs your containers inside a large Linux VM called hyperkit.

As a result of this architecture, running `docker stats` doesn't tell you everything you need to know about resource consumption. You get information about the footprint of your Docker containers, but not about hyperkit, the VM needed to run those containers. As we will see, hyperkit can be quite a resource hog, tending to eat more and more memory the longer it runs and the more containers you put on it.

Docker and CPU usage

When Docker consistently uses a lot of CPU cycles and the fan is running loud, try switching to the latest docker-desktop version (3 is out now). The latest comes with:

- Downgraded the kernel to 4.19.121 to reduce the CPU usage of hyperkit. Fixes docker/for-mac#5044.
- Avoid caching bad file sizes and modes when using osxfs.

[Read More]
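To see what hyperkit itself is costing you, ask macOS rather than Docker. A sketch, assuming the VM process is named `com.docker.hyperkit` (the name used by the Docker Desktop versions discussed here; adjust the pattern if yours differs):

```shell
# Container-level view — what `docker stats` can tell you:
docker stats --no-stream --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}'

# VM-level view — hyperkit's resident memory as macOS sees it
# (ps reports RSS in 1024-byte units, so divide twice by 1024 for GB):
ps -axo rss=,comm= | awk '/hyperkit/ { printf "%.1f GB %s\n", $1 / 1024 / 1024, $2 }'
```

Comparing the two numbers over time is what exposes the VM's creeping memory growth described above.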