All about Docker

SaiKrishnaD [Saaki]
7 min read · Dec 1, 2021

What is a Container?

A container is a way to package an application together with all of its necessary dependencies and configurations.

A container is portable, i.e. it can easily be moved and shared around.

This portability, with all the required files and dependencies bundled in, makes development and deployment efficient.

Containers live in container repositories. Many companies maintain their own private repositories, while Docker Hub hosts public images that anyone can use; for most common applications there is an official Docker image on Docker Hub.

How do containers improve application development?

Before containers, whenever the same application had to be set up on different local systems across a team, every service the application depends on had to be installed on each system, which is tedious and error-prone.

With containers, we do not need to install any of those services on our OS, because a container is its own isolated environment: an isolated OS layer built from a base image and packaged with all the necessary configurations.
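For example, to give a team a PostgreSQL database for development, nobody installs PostgreSQL on their OS; each developer runs a single command (a minimal sketch, where the version tag and password are arbitrary placeholders; the commands themselves are explained in the basic Docker commands section below):

docker run -d -e POSTGRES_PASSWORD=secret postgres:13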

How do containers improve application deployment?

Before containers, the development team would produce artifacts together with a set of instructions for how to install and configure those artifacts on the server: a jar file or something similar for the application, plus a database service with instructions for setting it up.

The development team would hand those artifacts to the operations team, who would set up the environment and deploy the application.

Everything had to be configured and installed directly on the server's OS, which led to version conflicts, and any misunderstanding between the development and operations teams caused further problems. The result was back-and-forth communication between the two teams until the application was properly deployed.

With containers, development and operations work together to package the application in a container. No environment configuration is needed on the server; the only thing needed is to run a Docker command that pulls the container image from the repository and runs it.

Image vs Container

An image is the actual package, i.e. the application plus its dependencies and configurations. It is the artifact that can be moved around; it is not running. Everything on Docker Hub is an image.

A container is the running environment of an image, i.e. the environment that is created when we pull the image onto a system and start the application.
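As a quick illustration (using the public Redis image as an example):

docker pull redis   # downloads the image, the non-running artifact
docker run -d redis # creates a container, the running environment of the image
docker ps           # lists the running container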

Docker vs VM

Broadly, an OS has two main layers:

1. OS kernel

2. Applications

Underneath both sits the hardware. The kernel communicates with the hardware, and applications run on top of the kernel layer.

Docker virtualizes only the application layer of the OS and uses the kernel of the host machine, whereas a VM virtualizes the complete OS, kernel included.

As a result, Docker images are smaller and containers start faster, whereas VM images are often gigabytes in size. A VM of any OS can run on any host OS, but that is not always possible with Docker, since a container needs a compatible host kernel; this was a problem on older Windows versions, i.e. below Windows 10.

Basic docker commands

docker pull “image” — downloads the image from the registry to the local system.

docker run “image” — starts a new container from an image (pulling the image first if it is not already installed locally).

docker run -d “image” — starts a container in detached mode, i.e. it runs in the background and the command line stays free, with minimal logs in the terminal.

docker run -p xxxx:xxxx “image” — runs a container, binding a host port to the container port (each x indicates a digit here).

docker run -d -p xxxx:xxxx --name “containername” “image” — runs a container with a specific name.

docker images — lists all the existing images on our system.

docker ps — lists all the running containers.

docker ps -a — lists all containers, both running and stopped.

docker start “container name/ID” — restarts an already existing (stopped) container.

docker stop “container name/ID” — stops a container.

docker logs “container name/ID” — checks the logs of the container.

docker exec -it “container name/ID” /bin/bash — opens a terminal inside the container, where we can check the log files, configuration files and environment variables. Here -it stands for interactive terminal.
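Putting a few of these together, a typical session might look like this (nginx and the container name web are just example choices):

docker pull nginx
docker run -d -p 8080:80 --name web nginx   # nginx listens on container port 80
docker ps                                   # shows the running container "web"
docker logs web                             # view its logs
docker exec -it web /bin/bash               # open a shell inside the container
docker stop web
docker start web                            # restart the same (stopped) container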

Container port vs Host port

We know a container is a virtual environment running on our host, and multiple containers can run on the host machine at once. The host has a set of ports available, and we need to create a binding between a container port and a host port. The same container port can be bound to two different host ports, for example to run two versions of the same application side by side.

To bind a container to a host port right from the start, we already have a command for that: docker run -p 5000:3000 “image”, where 5000 is the host port and 3000 is the container port.

Binding the same host port to two different containers, however, won't work; a host port can only be used by one container at a time.
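For example (a sketch using the nginx image; names and ports are arbitrary):

docker run -d -p 8080:80 --name web1 nginx   # container port 80 bound to host port 8080
docker run -d -p 8081:80 --name web2 nginx   # same container port 80, different host port: works
docker run -d -p 8080:80 --name web3 nginx   # fails: host port 8080 is already taken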

Docker Network

Docker creates its own isolated network that containers run in. If we deploy two containers in the same Docker network, they can communicate with each other using just their container names, with no ports needed, because they are in the same network. Applications outside the network have to connect through localhost and the bound host port.

Docker generates some networks by default; to list these auto-generated networks with all their information, we can run docker network ls.

We can create our own network with the command docker network create new-network.
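A minimal sketch of two containers talking over a custom network (my-app is a hypothetical application image; mongo is the official MongoDB image):

docker network create app-network
docker run -d --name db --network app-network mongo
docker run -d --name app --network app-network my-app   # can reach the database simply at hostname "db"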

Docker compose

If we have a bunch of Docker containers to run, we can automate it using Docker Compose. It helps run multiple containers at once with all the necessary configurations: we take all the docker run commands and map them into a .yml file. The full file format is described in the official docs:

https://docs.docker.com/compose/
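A minimal sketch of such a file, using hypothetical service and image names:

version: '3'
services:
  my-app:
    image: my-app:1.0
    ports:
      - "3000:3000"
    environment:
      - APP_ENV=dev
  db:
    image: mongo
    ports:
      - "27017:27017"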

Every configuration option, such as environment variables, ports and the image, can be included in each service, where a service is basically a container. One of the main benefits is that all the containers are created at once in a single common network, so there is no need to create a Docker network separately.

docker-compose -f “filename” up — starts all the containers at once, and docker-compose -f “filename” down — stops the containers; down removes the created network too.

Dockerfile — building our own Docker image

A Dockerfile is a blueprint for creating Docker images. The syntax is very simple:

FROM “image” — the base image we start from.

ENV “env variables like username and password” — sets environment variables.

RUN mkdir -p /home/data — executes a Linux command inside the container environment, here creating a directory.

COPY . /home/app — executed on the host machine; copies the current folder's files into the container.

CMD [“command”] — starts the application; it is the entry-point command.

The Dockerfile should always be named “Dockerfile” and it is a simple text file.
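Putting these instructions together, a minimal sketch of a Dockerfile for a hypothetical Node.js app (the base image, variables and file names are placeholders):

FROM node:13-alpine
ENV APP_USER=admin APP_PASSWORD=secret
RUN mkdir -p /home/app
COPY . /home/app
CMD ["node", "/home/app/server.js"]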

We build the Docker image from the Dockerfile using docker build -t myimage:1.0 . (the trailing dot is the build context, i.e. where the Dockerfile lives). Here myimage is the image name and 1.0 is the tag, i.e. the version of the image. We can then start our own container with docker run myimage:1.0.

docker rm “containerID” — removes a container.

docker rmi “image” — removes an image.

Docker private repository on AWS

In AWS we can have only one repository per image, and we add different versions of the image to it.

After logging in to AWS, we have to create a repository and name it. To push an image into this new repository, we first have to log in to the private registry with docker login to authenticate ourselves (we are telling the repository: hey, I am trying to access you and these are my credentials).

AWS provides the login credentials for this (the AWS CLI needs to be installed and the credentials configured).

We can't push the image into the AWS private repository directly with docker push myimage:1.0, because Docker would think we are pushing it to Docker Hub.

So we need to tag the image, i.e. include the information about the private repository (its name or address) in the name of the image.

docker tag myimage:1.0 “newname with aws information”:1.0 — this command is listed under the “View push commands” section of the AWS ECR console.

Now we can push the image into the AWS repository using

docker push “newname with aws information”:1.0
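Putting the whole flow together, a sketch where the account ID (123456789012) and region (us-east-1) are placeholders:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag myimage:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:1.0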

Persisting Data with Docker volumes

A container has a virtual file system where its data is stored. There is no persistence here: if we remove the container, all the data in it is gone. That is why we need Docker volumes.

With a volume, a folder in the host file system is plugged into the container file system, and data automatically gets replicated from the container's virtual files to the host system.

Likewise, if we change any data on the host side, it gets replicated into the container file system.

docker run -v “host-directory-path”:“container-directory-path” — this is called a host volume, because we specify the exact folder on the host.

docker run -v “container-directory-path” — this is an anonymous volume; Docker decides where on the host the data lives.

docker run -v “name”:“container-directory-path” — this is a named volume, because we give a name to the folder where the data is replicated. Named volumes are the most used.
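For instance, a named volume persisting MongoDB's data directory (mongo-data is an arbitrary volume name; /data/db is where the official mongo image stores its data):

docker run -d -v mongo-data:/data/db mongo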

We can declare the volumes in the Docker Compose file itself.
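A minimal sketch, reusing the named volume from above (note the top-level volumes section that declares it):

services:
  db:
    image: mongo
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data: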
