An Introductory Guide To Docker

Introduction to Docker

Docker is an open-source platform that simplifies the process of building, packaging, and deploying applications into containers. Containers provide lightweight and isolated environments that ensure consistency across different systems, making it easier to run applications seamlessly on any infrastructure. Docker enables developers to package their applications along with their dependencies into portable images, allowing for efficient resource utilization and simplified deployment workflows. With Docker, developers can achieve reproducibility, scalability, and faster development cycles, making it a popular choice for modern software development and deployment. In this blog, the following concepts will be covered:

  • What is Docker?
  • What is a container?
  • Docker images
  • Hands-on: creating a Dockerfile, building an image, and running a container

What is Docker?

Docker is a tool for running applications in an isolated environment. It is similar to a virtual machine, but it is much faster and does not require a large amount of memory or an entire operating system to operate. It also gives us the ability to ship our code faster and gives you more control over your applications. Docker-based applications can be seamlessly moved from local development machines to production deployments.

What is a Container?

A container is an abstraction at the application layer that packages code and dependencies together. Instead of virtualizing the entire physical machine, containers virtualize the host operating system only.

Multiple containers can run on the same machine and share the host OS kernel with one another. Containers do not require a full operating system of their own; they stay lightweight by sharing the same underlying operating system while running in isolation from other containers.
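A quick way to see this kernel sharing in practice (a sketch, assuming Docker is installed and the public alpine image can be pulled) is to compare the kernel release on the host with the one reported inside a container:

```shell
# On the host, print the kernel release:
uname -r

# Inside a throwaway container — this prints the same kernel release,
# because the container shares the host kernel instead of booting its own:
docker run --rm alpine:3.18 uname -r
```

A virtual machine, by contrast, would report the kernel of its own guest operating system.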

Docker image

A Docker image is a template for creating an environment of your choice. It could be a database, a web application, or pretty much anything. Docker images are multi-layered, read-only files carrying your application in its desired state inside them.

An image contains everything you need to run your application: the operating system, any required software, as well as the application code. Once you have the image, the container comes into play. In simplified terms, a container is a running instance of an image.
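To make the image/container relationship concrete, here is a small sketch (the container names web1 and web2 are illustrative, and it assumes Docker and the public alpine image): one image can back many independent running containers.

```shell
# One image...
docker pull alpine:3.18

# ...many running instances. Each container gets its own isolated
# filesystem and process space, but all are created from the same image:
docker run -d --name web1 alpine:3.18 sleep 300
docker run -d --name web2 alpine:3.18 sleep 300

# Both containers appear, each listing alpine:3.18 as its image:
docker ps --filter "name=web"
```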

Hands-On

Follow the below steps to create a Dockerfile, Image & Container.

Step 1: First you have to install Docker. 

Step 2: Once installation is complete use the below command to check the version.

docker -v
Step 3: Now create a folder in which you can create a Dockerfile, and change the current working directory to that folder.

mkdir images
cd images

Step 4.1: Now create a Dockerfile using an editor. In this case, I have used the nano editor.

nano Dockerfile
Step 4.2: After you open a Dockerfile, you have to write it as follows.

FROM centos:7
WORKDIR /app
RUN yum -y update && yum -y install httpd
COPY . .
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]


  • FROM: specifies the base image to build from
  • WORKDIR: sets the working directory inside the container to /app
  • RUN: executes commands while the image is being built
  • COPY: copies files from the local machine into the container
  • EXPOSE: documents the port on which the container listens
  • CMD: specifies the command to run when the container starts

Step 4.3: Once you are done with that, just save the file.

Step 5: Build the Dockerfile using the below command.

docker build .

** “.” tells Docker to look for the Dockerfile in the present folder and use it as the build context **


Step 6: Once the above command has been executed, the respective Docker image will be created. To check whether the Docker image was created or not, use the following command.

docker images


Step 7: Now to create a container based on this image, you have to run the following command:

docker run -it -p host_port:container_port -d image_id

Here -it makes the container interactive, -p maps a host port to a container port (port forwarding), and -d runs the container in detached mode, i.e. in the background.

Exposing multiple ports
docker run -d -p 4000:80 -p 4001:80 image_id
Now from localhost:4000 and localhost:4001, we can access the Apache httpd server running inside the container.
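One way to verify the mapping (a sketch, assuming curl is available and the container above is running) is to request the server through each published host port:

```shell
# Each host port forwards to port 80 inside the container,
# so both requests should return response headers from Apache:
curl -I http://localhost:4000
curl -I http://localhost:4001
```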
Step 8: Now you can check the created container by using the following command:

docker ps


Some additional commands:
docker ps -a (lists all containers, running as well as inactive)
docker start container_id (starts a container)
docker stop container_id (stops a container)
docker rm container_id (delete a container)
docker rmi image_id (delete an image)

Deleting containers one by one can be a tedious task. There is also a way to delete all the containers, running or not, in one go.

docker ps -aq (lists all containers, running or inactive, by ID only)


Now, to delete all the containers in one go:
docker rm -f $(docker ps -aq)
For images, rm changes to rmi:
docker rmi -f $(docker images -aq)
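As a gentler alternative (the prune subcommands have been available since Docker 1.13), you can remove only stopped containers and unused images rather than force-removing everything:

```shell
# Remove only stopped containers:
docker container prune -f

# Remove dangling (untagged) images:
docker image prune -f

# Or clean up stopped containers, unused networks, dangling images,
# and the build cache together:
docker system prune -f
```

This is often safer on a shared machine, since running containers and images still in use are left untouched.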

Conclusion:

In conclusion, Docker is a powerful platform that simplifies the process of building, packaging, and deploying applications into containers. Containers provide lightweight and isolated environments that ensure consistency across different systems. Docker allows developers to package their applications and dependencies into portable images, enabling efficient resource utilization and simplified deployment workflows. In this blog, we covered the concepts of Docker, containers, Docker images, and the hands-on process of creating a Dockerfile, building an image, and running a container. Docker's benefits include faster development cycles, scalability, and reproducibility. By leveraging Docker, developers can streamline their application deployment and achieve greater control and flexibility.


Thank you

M. Nishitha (Intern)

Guard Innovators,

Data Guard Team,

Enterprise Minds.

