Deploy application on Docker Container

Overview

  • Containers are a solution to the problem of how to run software reliably when it is moved from one computing environment to another.

  • Docker exploded in popularity in 2013 and has generated excitement in IT circles ever since.

  • Container technology powered by Docker promises to change the way IT works in the same way that virtualization did a few years ago.

What is a Container and why should it be used?

Containers are a solution to the problem of how to run software reliably when it is moved from one computing environment to another. That move can be from a developer’s laptop to a test environment, from a pre-production environment to a production environment, or from a physical machine in a data center to a virtual machine in the cloud.

Problems arise when the supporting software environment is not identical, says Docker creator Solomon Hykes. “You’re going to test using Python 2.7, and then it’s going to run on Python 3 in production and something weird will happen. Or you’ll rely on the behavior of a certain version of an SSL library and another one will be installed instead. You’ll run your tests on Debian and production is on Red Hat and all sorts of weird things happen.”

How do containers solve this problem?

Simply put, a container bundles the entire runtime environment into one package: the application together with all the libraries, binaries, and configuration files it needs to run. In this way, differences in operating system distributions and underlying infrastructure are abstracted away.
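As an illustrative sketch of what such a package looks like (the base image, file names, and start command here are assumptions for the example, not part of this workshop), a Dockerfile describes everything that goes into the container image:

```dockerfile
# Hypothetical example: package a Python app together with the exact
# interpreter version and pinned dependencies it was tested against,
# so it behaves the same on a laptop, a test server, or production.
FROM python:2.7-slim

WORKDIR /app

# Install dependencies into the image instead of relying on the host
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code itself
COPY . .

CMD ["python", "app.py"]
```

Because the interpreter and libraries travel inside the image, the Python 2.7 vs. Python 3 mismatch described in the quote above cannot occur: every environment runs the image as built.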

What is the difference between containers and virtualization?

With virtualization technology, a package is a virtual machine and it includes the entire operating system as well as the application. A physical server running three virtual machines will have a hypervisor and three separate operating systems running on it.

In contrast, a server running three applications packaged with Docker runs a single operating system, and each container shares the operating system kernel with the other containers. The shared parts of the operating system are read-only, while each container has its own writable layer. That means containers are much more lightweight and use far fewer resources than virtual machines.

What are the benefits of containers?

  • A container can be only tens of megabytes in size, while a virtual machine with its entire operating system can be several gigabytes. Because of this, a single server can host far more containers than virtual machines.

  • Another big benefit is that virtual machines can take several minutes to boot their operating system and start the applications they host, while applications packaged in a container can start almost instantly. That means containers can be instantiated “just in time” when they are needed and can disappear when no longer needed, freeing up resources on their host.

  • A third benefit is that containerization allows for greater modularity. Instead of running an entire complex application inside a single container, the application can be broken down into modules (such as the database, the user interface, and so on). This is the so-called microservice approach. Applications built this way are easier to manage because each module is relatively simple, and changes can be made to one module without rebuilding the entire application. Because containers are so lightweight, individual modules (or microservices) can be instantiated only when they are needed and are available almost immediately.
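The modular approach above can be sketched with Docker Compose. This is a minimal hypothetical layout, not the configuration used later in this workshop; the service names, images, and port numbers are assumptions for illustration:

```yaml
# docker-compose.yml -- hypothetical two-service application:
# the database and the user interface run in separate containers
# that can be rebuilt, scaled, and restarted independently.
services:
  db:
    image: mysql:8.0            # database module in its own container
    environment:
      MYSQL_ROOT_PASSWORD: example
  web:
    build: .                    # user-interface module, built from a Dockerfile
    ports:
      - "8080:80"               # expose the UI on the host
    depends_on:
      - db                      # controls start order only; the app should
                                # still retry its database connection
```

Each service can then be updated or scaled on its own (for example, rebuilding only `web` after a UI change) without touching the rest of the application.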

Main content

  1. Introduction
  2. Preparation steps
  3. Create DB instance
  4. Create EC2 instance
  5. Connect DB instance
  6. Deploy Application
  7. Amazon Elastic Container Registry
  8. Resource Cleanup

Source

  1. AWS-First-Cloud-Journey Repository