In no way, shape or form do I consider myself an expert in Docker. Just highlighting some of my journeys.
I’m doing quite a bit of research with it right now in hopes of getting proficient with it as a developer. I do plan to create some demo videos on some of the things I am doing with Docker.
Anyhow, from a newbie standpoint (which I sort of consider myself), here are some things I am thinking with respect to Docker:
- Docker is a lightweight alternative to an environment built on virtual machines (be it fronted by a load balancer or standalone). VMs each have their own OS and a virtualized hardware layer that reaches out to the host’s hardware. Docker containers, by contrast, run as separate, isolated processes (each with its own namespaces) that share the host OS, and Docker manages how much CPU time each running container gets so you don’t have a situation where one container monopolizes the CPU. VMs virtualize hardware resources on the host, while containers virtualize OS resources on the host (this includes network resources, file systems and the process tree). Every container on the host will share the host’s kernel.
- Images are physically constructed from layers. Layers can be shared across multiple images; they are defined in a manifest and tracked by a digest. So there is no single “physical image” per se: an image is defined by its manifest and will contain numerous layers that are stacked together. The image layers are read-only. When you start up a container (i.e., a running instance of an image), a thin writable layer gets created and stacked on top of the image layers, and changes to image files are handled copy-on-write.
- Containers can be thought of as “an instance of an image”.
- Docker allows for multiple containers to be part of a network, and it’s typical to use docker-compose to construct an “application” which can consist of multiple containers that are started up at once (in an order you specify). These containers can communicate with one another over the network or through shared volumes. You can then “compose down” the application.
- You can compose an application by defining its services in a YAML file and then use “docker-compose up” and “docker-compose down” to start it up and tear it down.
- Docker images are built from a file called a “Dockerfile”.
- There is a piece called the “Docker engine” (the daemon), which you interface with via the command line. The engine delegates the creation and running of containers to a separate process (containerd). What this means is that you can install updates to the Docker engine without having to interrupt any running containers; with the “live restore” option enabled, the restarted engine rediscovers the containers that kept running.
- Docker orchestration is the concept that multiple containers can all communicate with one another across nodes, coordinated by a cluster manager, be it Docker’s built-in “Swarm” or Kubernetes (originally from Google). In this setup, you will have a manager (“master”) node in the orchestration that is responsible for keeping track of the other nodes that come and go. If the manager should go down, another manager node in the cluster is elected as the new leader (and a worker can be promoted to manager). A manager can also do work.
- Just like “git”, there are private and public “repos” for Docker images. The default registry is “Docker Hub”.
- You can implement the concept of load-balancing by creating replicas for your containers/services.
- Docker naturally works beautifully with microservices because you can create Docker services that join a swarm to provide endpoints and communicate with other services/containers in the swarm.
- Containers, like VMs, have a lifecycle. They can be stopped (in which case they don’t lose any persisted data) and started again, or deleted.
- Deleting a container will not delete the image.
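To make the layers point concrete, here’s a minimal sketch of a Dockerfile for a hypothetical Node app (the base image and filenames are just examples); each instruction produces a layer in the image:

```dockerfile
# Base image layers (read-only, shared with any other image built FROM node:18-alpine)
FROM node:18-alpine

# Each instruction below adds a new layer on top
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .

# Metadata only; adds no filesystem content
EXPOSE 3000
CMD ["node", "server.js"]
```

You’d build it with something like `docker build -t my-app .` and can see the resulting layers with `docker history my-app`.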
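And here’s a sketch of what the compose YAML might look like for a hypothetical two-container app (a web service plus a database); the service names, images, and ports are illustrative, not prescriptive:

```yaml
version: "3.8"
services:
  web:
    build: .              # built from the Dockerfile in this directory
    ports:
      - "3000:3000"
    depends_on:
      - db                # influences startup order
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # data survives container deletion
volumes:
  db-data:
```

`docker-compose up -d` brings it up and `docker-compose down` tears it down. On the network compose creates, containers reach each other by service name (the web container can connect to the hostname “db”).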
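For the replica/load-balancing idea, in swarm mode you can declare replicas under a `deploy` key in the same kind of compose file (a sketch; the image name and replica count are arbitrary):

```yaml
services:
  web:
    image: my-app:latest   # hypothetical image name
    deploy:
      replicas: 3          # swarm runs three copies of this service
    ports:
      - "3000:3000"        # swarm's routing mesh spreads requests across replicas
```

This gets deployed with `docker stack deploy -c docker-compose.yml mystack` on a node that has joined a swarm (i.e., after `docker swarm init`).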