Containers have become a major buzzword nowadays, but what exactly are they, and why do we need them?
Managing multiple services in an operating system has always been problematic.
Take, for example, a scenario in which you install an application on Windows.
Let's say your Windows installation has become very slow and you've decided to reinstall the operating system from scratch.
Now to the catch: how do you back up your application so that it stays exactly as it was, including its configuration and data?
One answer you might give is application virtualization.
Indeed, it is a kind of container implementation, saving the application as a black box.
As a user, you don't care which libraries it needs or where the application's files reside (they can be in Program Files, somewhere in your user profile directory, or even in global operating system settings inside the registry).
All packaged together, decoupled from your operating system.
Another way, when we're talking about background applications/services, is containers.
The application is packaged along with its dependencies (mostly libraries), configuration and data.
The host operating system shares mainly the kernel and, where possible, shared libraries.
This somewhat resembles a virtual machine, but in a virtual machine we package the whole operating system and emulate some hardware (disk controller, network controller, etc.).
Containers are more lightweight compared to virtual machines and do not use the CPU's hypervisor mode (known as ring -1).
One more advantage is that each container has its own library versions, without conflicting with other containers' dependencies. Note that the same application can run in multiple instances!
In fact, on Linux you can run multiple distributions on the same host, all sharing the same kernel.
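To make the kernel-sharing idea a bit more concrete, here is a minimal C sketch of one of the Linux building blocks containers rest on: namespaces. This is an illustration only, assuming a Linux host and root privileges; the names child_fn and container-demo are made up for the demo. The child process gets its own UTS (hostname) namespace via clone(), yet reports the same kernel release as the parent, because the kernel itself is shared.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

/* Runs inside the new UTS namespace: change the hostname
 * without affecting the host, then print kernel info. */
static int child_fn(void *arg) {
    struct utsname u;
    sethostname("container-demo", strlen("container-demo"));
    uname(&u);
    /* Same kernel release as the host -- only the hostname differs. */
    printf("child:  hostname=%s kernel=%s\n", u.nodename, u.release);
    return 0;
}

int main(void) {
    struct utsname u;
    char *stack = malloc(STACK_SIZE);
    if (!stack) { perror("malloc"); return 1; }

    /* CLONE_NEWUTS gives the child its own hostname namespace;
     * creating it requires root (CAP_SYS_ADMIN). The kernel
     * itself is still shared with the parent. */
    pid_t pid = clone(child_fn, stack + STACK_SIZE,
                      CLONE_NEWUTS | SIGCHLD, NULL);
    if (pid == -1) { perror("clone (try running as root)"); return 1; }

    waitpid(pid, NULL, 0);
    uname(&u);
    printf("parent: hostname=%s kernel=%s\n", u.nodename, u.release);
    free(stack);
    return 0;
}
```

Compile with gcc and run as root: both lines print the same kernel release, while only the child sees the changed hostname. Real container runtimes combine several such namespaces (PID, mount, network, and so on) with cgroups, but the principle is the same.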
In the Docker posts I will explain how this works in Linux, what the difference between a container and Docker is, and how Docker helps.