Saturday, May 07, 2016

Layman's Guide for LXD - Canonical's OpenSource Container HyperVisor [Part-I]


Certainly, "container" is the new buzzword among techies; everyone is talking about Docker, LXC and LXD. The continuous need to reduce costs and optimize performance, while maintaining data availability and integrity, has pushed organizations toward a convergence of various virtualization concepts into an efficient model called "containerization". Containerization utilizes resources more efficiently than other virtualization techniques such as hypervisors. Even though it has a few limitations, it has found widespread acceptance in production environments too. Let us see why and how this is achieved.

What are Hypervisors?

Hypervisor-based virtualization technologies have been around for a long time now. Today they can be installed and deployed on any laptop or server, helping to reduce costs and optimize performance wherever required. A hypervisor, or "virtual machine manager", is software that uses hardware virtualization techniques to emulate computing environments so that different operating systems can share a single hardware machine. It acts as a thin layer between the hardware and the operating systems, and is mainly used to create virtual machines. Each guest operating system installed on it is assigned a portion of the host's processor, memory, storage and network, as allocated by the user. Hypervisors provide a full virtualization mechanism that emulates the hardware in such a way that we can run any operating system on top of any other. Each virtual machine has its own kernel, and the resources allocated to it are statically fixed. This provides a high level of isolation between the host and guest machines.

Types of Hypervisors

There are mainly two types of hypervisors.

1. Bare metal Hypervisor or native - Type1

These hypervisors run directly on the host's hardware to manage the guest operating systems and control the hardware, which is why they are commonly referred to as bare metal hypervisors.

Example: VMware ESXi, vSphere Hypervisor, Microsoft Hyper-V, Citrix XenServer, Oracle VM, KVM, IBM z/VM

2. OS based or Embedded or hosted - Type2

These hypervisors run like any other program on a host operating system. They abstract the guest operating system from the host operating system. Example: VMware Workstation, VMware Player, Oracle VirtualBox, KVM, QEMU, Parallels Workstation (discontinued).

Note: KVM is categorized as both Type1 and Type2 Hypervisor.

What is a container?

A container is another method of virtualization in which the kernel of an operating system allows multiple isolated user-space instances. These instances are also called software containers, lightervisors, virtualization engines, jails or zones, depending on the operating system vendor. Containers are based on a shared operating system concept: instead of virtualizing the hardware, they run on top of a single Linux instance on the same hardware.
The operating system's kernel provides resource management with the help of namespace support combined with chroots, limiting the impact of one container's activities on another and facilitating complete isolation of the environments. This isolation mechanism helps to ensure security and hardware independence.
Containers use the operating system's normal system call interface, so there is no need to run inside an intermediate virtual machine. They are leaner, lighter and more portable than virtual machines. The main limitation of this technique is that a container cannot host a guest operating system different from the host operating system: a container on a Linux host can only run Linux guests, not Windows, although the guest can be an identical or a different Linux distribution from the host's. Containers are believed to run at close to bare metal speeds, and a single host can theoretically run 6000 containers with 12000 bind mounts of root filesystem directories. This link demonstrates the procedure to use a container as a router.
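The namespace support mentioned above is visible from any Linux process: the kernel exposes each namespace a process belongs to under /proc/self/ns. As a minimal sketch (Linux-only; the exact set of entries depends on the kernel version), we can simply list them:

```python
import os

# Each entry under /proc/self/ns is one kernel namespace this process
# belongs to (mnt, pid, net, ipc, uts, ...). A container runtime gives
# each container its own copies of these to achieve isolation.
namespaces = sorted(os.listdir("/proc/self/ns"))
print(namespaces)
```

On a typical modern kernel this prints entries such as `ipc`, `mnt`, `net`, `pid` and `uts`; a container gets private instances of these, which is what separates its view of the system from the host's.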

Example: LXC, LXD, Docker, rkt, OpenVZ, Virtuozzo, Spoon, Solaris Containers/Zones, AIX WPARs, HP-UX Containers, FreeBSD Jails.

Difference between container and hypervisor

Hypervisors are usually resource hungry because they must emulate virtual hardware, which makes them slow and incurs a significant performance overhead. Due to this higher resource consumption they can host only a limited number of virtual machines, each of which requires a separate, independent kernel instance to run on. Containers, on the other hand, do not emulate hardware and can be deployed from the host operating system by sharing the host OS kernel. This makes them faster, with quicker startup and shutdown times. The enhanced sharing makes containers leaner, more lightweight and smaller than hypervisor guests, because the kernel sees containers as simply resources to be managed.

For example, when container1 and container2 open the same file, the host kernel opens the file once and puts its pages into the kernel page cache; those pages are then handed to both containers. With VMs, the pages are first created and cached in the host kernel, and then the same process takes place again in both the VM1 and VM2 kernels. Because hypervisors cannot share pages the way containers can, there end up being three separate copies: one each in the page caches of the host, VM1 and VM2.
The above example shows how the advanced sharing in containers enables them to consume fewer resources, so more containers can run on a host than hypervisor guests.
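The page-sharing behaviour above can be demonstrated on any Linux machine with two shared memory mappings of the same file, standing in for container1 and container2 (a simplified sketch, not an actual container setup):

```python
import mmap
import os
import tempfile

# Two MAP_SHARED mappings of the same file play the role of container1
# and container2 opening the same file: the kernel backs both mappings
# with the SAME pages in its page cache.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello from the page cache")

m1 = mmap.mmap(fd, 0)   # "container1"
m2 = mmap.mmap(fd, 0)   # "container2"
m1[0:5] = b"HELLO"      # write through the first mapping...
seen = bytes(m2[0:5])   # ...and it is immediately visible via the second
print(seen)             # prints b'HELLO'

m1.close()
m2.close()
os.close(fd)
os.unlink(path)
```

Both "containers" see one copy of the data; a hypervisor guest, with its own kernel and page cache, would have its own separate copy.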

Types of Containers

Containers fall into two categories depending upon the use case.

1. Full System Containers

Full system containers, or OS containers, share the kernel of the host operating system but provide user-space isolation: each container gets its own root filesystem, process tree and libraries. OS containers can be compared to hypervisor virtual machines; we can install different applications and libraries in them just like in any operating system running on a virtual machine. A full system container runs an init process and therefore supports multiple processes and services, just like any Linux OS. Using full system containers we can easily assign static and routable IPs, use multiple network devices, edit the /etc/hosts file; basically, an OS container can do anything a virtual machine can.

Example: LXC/LXD, OpenVZ, Oracle Solaris Zones, FreeBSD Jails

2. Application Containers

Application containers also share the host operating system kernel, just like OS containers, but they are designed to run a single process (process tree) or application. Application containers do not fit all use cases: they provide only limited control over services and configuration files, so the admin tasks needed to guarantee SLAs, such as logging, ssh, cron, networking activities like setting up a static IP, modifying system files like /etc/hosts, and monitoring the system with the usual tools and utilities, are complicated to perform and hard to manage. For example, to build a LAMP stack we need to create three containers that consume services from each other: an Apache container, a MySQL container and a PHP container. These containers are ephemeral, stateless and minimal; the idea behind application containers is to reduce a container as far as possible to a single process that can be efficiently managed by Docker.

Example: Docker, rkt

What is LXD?

LXD, the "Linux Container Daemon", pronounced "Lex-Dee", is a container-based hypervisor sponsored by Canonical. It runs full system containers, each a complete Linux distribution, much as VMware Workstation/ESXi or VirtualBox run full virtual machines, and it is built on top of LXC. LXD uses the LXC API in the background for container management, with a REST API on top to provide a friendlier user interface. Hence, LXD is a value-added extension of, and successor to, LXC. Thanks to its unique sharing features and light weight, it is also called the "lightervisor" - the world's fastest hypervisor.
Both LXC and LXD are developed by Stéphane Graber and Serge Hallyn of Ubuntu and Canonical. LXC was initially released on 6th August, 2008, and version 1.0, the first stable release, arrived on 20th February, 2014 with LTS support. The project is actively developed: the recently released Ubuntu 16.04 ships with LXD 2.0 built in, an Apache 2.0 licensed open source project written in Go, and a Long Term Support release with a 5-year support commitment from upstream, ending on 1st June, 2021. There is no LXD 1.0 release because LXD is the successor to LXC 1.0, and hence it started at version 2.0.
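To give a feel for the REST API that sits on top of LXC, here is a minimal sketch of the kind of JSON body the LXD API expects when creating a container. The endpoint and field names follow the LXD 2.0 API; the image alias and socket path are illustrative assumptions and may differ on your system:

```python
import json

# Sketch of a container-creation request body for LXD's REST API.
# The "ubuntu/xenial" alias and the socket path below are assumptions
# for illustration; check your own LXD image list and install.
request = {
    "name": "my-ubuntu",          # name of the new container
    "source": {
        "type": "image",          # create from a local/remote image
        "alias": "ubuntu/xenial", # hypothetical image alias
    },
}
body = json.dumps(request)
print(body)
```

The `lxc` command line tool sends roughly this payload to `POST /1.0/containers` over the local Unix socket (by default `/var/lib/lxd/unix.socket` on Ubuntu); `lxc launch` is the friendlier front end to the same API.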

LXD Features

1. Secure
LXD facilitates the use of unprivileged containers, which allow non-root users to run and deploy containers for better security and multi-tenant workloads. It also supports resource restrictions.
2. Scalable
Containers scale: they can be deployed anywhere from a single laptop to a large number of compute nodes.
3. Interactive
LXD interacts with the user through its REST API, providing a very user friendly command line interface.
4. Live Migration
LXD also supports live migration of containers, the ability to move a running container from one host to another without shutting it down.
5. CRIU (Checkpoint/Restore In Userspace)
LXD supports freezing a container at a particular point in time and restoring it later from the point at which it was frozen. Online snapshotting has also been introduced in the latest version.

Difference Between LXD and Docker

1. LXD runs OS or system containers, whereas Docker runs application containers.
2. The principal component of LXD is liblxc, whereas Docker has recently started using its own libcontainer; earlier Docker also used liblxc.
3. LXD wraps the application inside a "userspace image", as opposed to Docker, which holds the application in a self-contained filesystem, making it ephemeral and stateless.
4. LXD runs an init process which is the parent of all processes and services, whereas Docker runs a single process per application in the container.
5. LXD specializes in deploying machine-like system containers, whereas Docker specializes in deploying applications.
6. Administrator activities are easily managed in LXD, but with Docker it can be complicated to configure, tune and monitor the system.
7. LXD works just like any other hypervisor, with a shared kernel, whereas in a Docker image only the outer layer is writable and all the internal layers are read-only; it is best compared to an onion.

Lightervisor and Hypervisors Coexistence

Yes. Since containers cannot run Windows on top of a Linux host, hypervisors make sure that Windows can still run alongside Linux. Meanwhile, Docker can scale out applications on a large scale and LXD can carry standard Linux workloads. Hence it is possible for a hypervisor guest to contain LXD, and for LXD to host multiple Docker containers, simultaneously.

The future is evolving into a world of convergence between virtualization technologies, and as containers lead the virtualization space, LXD is steadily attracting users.

LXD is "FREE" and that is the power of opensource innovation.

