Virtualization Basics and Fundamentals


Virtualization is the technique of creating virtual versions of IT datacenter components such as compute, storage, and network. The hypervisor is one of the most commonly used virtualization technologies for building virtualized IT infrastructures, and it is the main driver of cloud-based infrastructure. Hypervisors are widely used by cloud service providers to virtualize traditional hardware IT infrastructure and offer it as one or more cloud services.

In the previous post we learned the fundamentals of cloud computing, its models, and its services. In this post we will cover the topics below about the hypervisor – the core concept of the cloud.

What is a Hypervisor 

A hypervisor is software that logically abstracts a server's hardware so that the operating systems running on it behave as if they were running directly on the hardware itself. VMware, the pioneer in virtualization technology, defines a hypervisor as follows:

A hypervisor, also known as a virtual machine monitor, is a process that creates and runs virtual machines (VMs). A hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, like memory and processing.

Each guest operating system sees the hypervisor as the actual computer. With virtualization, many operating systems can run on a single server hardware platform, whereas in the past each operating system required its own physical server.

The hypervisor is the core piece of technology used by cloud service providers to virtualize the infrastructure in a datacenter and offer compute, storage, network, and more as services.

Types of Hypervisors

There are different types of Hypervisors available and each type has its own use cases.

Type 1 Hypervisor

  • The Type 1 hypervisor is installed and runs directly on top of the server hardware platform. This type is referred to as a bare-metal or native hypervisor.
  • Type 1 hypervisors are generally more advanced and offer more features than a Type 2 and are found in the cloud datacenters as well as in the enterprise datacenters.
  • Because the Type 1 hypervisor is running directly on top of the bare-metal hardware and not as an application on another operating system, it offers much higher performance, less overhead, and more security than a Type 2 hypervisor.

Type 2 Hypervisor

  • The Type 2 hypervisor is installed as an application on an existing operating system and allows VMs to be created and run within that application.
  • For example, a PC running Windows can install a Type 2 hypervisor and run it like any other application. Then, inside the hypervisor, multiple operating systems or VMs can be run. VMware Workstation and Oracle VirtualBox are examples of Type 2 hypervisors.
  • Type 2 hypervisors are good for testing applications and in situations where dedicating a server to be virtualized is not desirable.
  • This type of hypervisor does not offer the performance of a Type 1, since a Type 2 hypervisor has the additional overhead of running on top of another operating system such as Windows or Linux rather than directly on the bare-metal server hardware.

Also Read: Server and Application Virtualization Overview

Proprietary vs Open Source Hypervisors

  • Hypervisors can be proprietary, meaning they are developed and sold by corporations such as Microsoft or VMware.
  • Examples of proprietary hypervisors are Hyper-V developed by Microsoft and ESXi from VMware.
  • Open source hypervisors are free for the public to use; their source code is openly licensed, and there are no licensing fees.
  • Some examples of fully functional open source hypervisors are KVM (included in the Linux kernel), Oracle VirtualBox, and Xen.
  • Like the proprietary hypervisors, they provide complete system virtualization, allowing one or more VMs to run on the same server hardware platforms.

Advantages and Disadvantages

  • There are advantages and disadvantages to each approach. Generally, proprietary hypervisors are fully supported under the vendor’s support agreements and receive regular updates that add features and fix bugs.
  • With open source, support is generally offered in community forums, with a few companies offering support agreements.

Physical Components that Create Datacenter Infrastructure

The following IT infrastructure components form a traditional on-premises IT datacenter:

  • Compute components
  • Network Components
  • Storage Components

Compute Components

Below are the important components of compute; in combination, these components enable a server or computer to perform its duties.

BIOS/firmware

  • BIOS is a small program stored on a memory chip of every motherboard that provides basic configuration information to the CPU when it first boots and before the operating system is loaded.
  • The BIOS plays a critical role when the CPU first starts up. The BIOS provides a low-level operating function that is necessary to provide basic preboot and boot configuration information to the system.
  • The BIOS program is stored on a flash or ROM chip, and it contains the initial configuration information such as the clock, hard drives connected, basic driver software for the keyboard, display, USB settings, security settings, order of drives to boot off, virtualization settings, and many other options that the server may need when it is powered up but before the operating system is loaded.
  • The BIOS is the first stage in the boot process. When a server is running hypervisor software for virtualization, the BIOS can be configured to optimize CPU performance by enabling certain features on the CPU.
  • Server and motherboard manufacturers will from time to time update the BIOS firmware to enable new features and capabilities as well as correct issues found in earlier versions of the code.
  • A cloud service provider offering compute services may not allow a customer access to the BIOS because many other VMs may be running on the same server hardware and may be used by other customers.

CPU

  • The CPU is the component of compute that executes instructions. Each CPU can have multiple cores, and multiple CPUs can be installed in a server; the motherboard must have enough CPU sockets available to scale up physical CPU power.
  • A single-core CPU has one processing thread and can perform only one task at a time. A core is an individual processing unit within a CPU chip. Each CPU can have from one to many cores on a single silicon wafer. With multicore processing, multiple threads can run simultaneously, dramatically increasing the processing power of a single server.
  • Each CPU core will access its own cache memory, known as Level 1 (L1) cache. The L1 cache is a small but very fast memory pool that can be used to reduce the time it takes to access the main memory on the motherboard. 
  • Initially, the CPU runs a self-test and then reads its initial configuration information from the BIOS. This configuration information tells the CPU which storage devices are connected and activates the monitor and keyboard.
  • Based on the drive and boot information, the CPU will access the storage hardware and boot the operating system or hypervisor.
  • CPUs that are used in servers will have a feature called hardware-assisted virtualization. This feature is used to optimize processing in a hypervisor environment.
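On a Linux host, hardware-assisted virtualization shows up as CPU flags in /proc/cpuinfo: "vmx" for Intel VT-x and "svm" for AMD-V. A minimal sketch of checking for it (the helper name is illustrative, not a standard tool):

```python
from pathlib import Path

def has_hw_virtualization(cpuinfo: str = "/proc/cpuinfo") -> bool:
    """True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm) flags."""
    path = Path(cpuinfo)
    if not path.exists():          # e.g. non-Linux hosts expose no /proc/cpuinfo
        return False
    flags = path.read_text().split()
    return "vmx" in flags or "svm" in flags

print(has_hw_virtualization())
```

If this reports False on real server hardware, the feature is often simply disabled in the BIOS and can be enabled there, as noted in the BIOS section above.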

Memory (RAM)

  • Memory is another important component; it stores information for immediate use.
  • When configuring a server to be used in a virtualized environment it is important to take into consideration the memory to be installed in the bare-metal server. 
  • The memory will be consolidated into a resource pool and shared between the VMs running on the hypervisor.
  • Memory requirements for the bare-metal server are determined based on the following factors:
    • Memory for running hypervisor
    • Available memory slots in motherboard.
    • Memory requirements for an application that will be hosted.
    • Additional memory to support peak workloads.
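The factors above can be combined into a rough sizing estimate. This is an illustrative sketch only (the function name and the 25% headroom figure are assumptions, not from any vendor sizing guide):

```python
def required_host_memory_gb(hypervisor_gb, vm_memory_gb, peak_headroom=0.25):
    """Estimate bare-metal RAM: hypervisor overhead plus each VM's
    application memory, with fractional headroom for peak workloads."""
    return hypervisor_gb + sum(vm_memory_gb) * (1 + peak_headroom)

# e.g. 8 GB for the hypervisor, three VMs needing 16/32/16 GB, 25% headroom
print(required_host_memory_gb(8, [16, 32, 16]))  # 88.0
```

The available memory slots on the motherboard then cap how much of this estimate can actually be installed.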

 

Also Read: Logical and physical components of a Compute Server

 

Network Components

The following is an important network component; it allows servers to communicate and transfer data from one compute node to another within a network.

Network Interface Card (NIC)

  • The network interface card (NIC) is the Ethernet physical connection from the server to the data network. NICs can vary in speed from 100 Mbps, 1 Gbps, 10 Gbps, up to 25 and 40 Gbps.
  • Often multiple Ethernet ports are installed in a single server for redundancy and capacity, and ports can be assigned specifically to a group of virtual servers.
  • Ethernet interfaces can support a large number of features that are commonly used in a cloud datacenter such as link aggregations, VLAN tagging, jumbo frame support, checksum, and TCP offload. We will learn more about these concepts in future posts.

Storage Components

The following are important storage components; they allow servers to store data and transfer it from one compute node to another over a LAN or SAN.

Host Bus Adapter (HBA)

  • A host bus adapter (HBA) is a network interface installed in a server to provide a connection to remote storage.
  • HBAs are installed in a server’s expansion slot, much like NIC cards are installed to access the LAN.
  • To the server’s operating system, the storage appears to be attached locally as it talks to the HBA.
  • The HBA hardware and driver software take SCSI storage commands and encapsulate them into the Fibre Channel networking protocol. Fibre Channel is a high-speed SAN technology, usually carried over optical links, with speeds from 2 Gbps to 16 Gbps and higher.

 

See: Different types of storage devices used in an IT datacenter


Storage Devices

Various types of storage devices can be installed directly inside the server, locally attached, or accessed over a network:

  • Tape – Tape storage uses magnetic material to store information and is commonly found in the slower storage tiers due to long read and write access times. For example, tape storage is frequently used in offline backup systems, as tapes are easy to store in a safe facility offsite from the datacenter for disaster recovery purposes.
  • SSD – A solid-state drive (SSD) replaces the mechanical spinning platters of a traditional hard drive with silicon storage chips. Because data read and write times are significantly faster than with mechanical drives, SSDs are a natural choice for I/O-intensive applications such as databases or other processes that benefit from fast storage access.
  • USB – The universal serial bus (USB) drive is a removable storage option that is useful when performing server maintenance or locally installing drivers or updates. USB interfaces can support thumb drives, external hard drives, and DVD drives, as well as many other externally attached devices.
  • Disk – Traditional disk drives consist of spinning magnetic platters with read-write heads floating above them. Hard disks are the backbone of most storage systems and are usually installed in RAID arrays for redundancy. Hard drives can be installed directly inside a server, with SCSI and SATA as the most common interface types. Large storage systems that centralize storage connect to servers using a storage area network (SAN).

 

Do you know: What are intelligent storage systems?

 

Virtual Components that Create Cloud Infrastructure

Similar to the physical components, the following virtual components help a virtual server perform its tasks, store data, and communicate over the network with other virtual servers. Together, they form a virtual IT infrastructure:

  • Virtual Compute components
  • Virtual Network components
  • Virtual Storage components

 

Virtual Compute components

Virtual shared Memory

  • A pool of RAM can be created and shared through the hypervisor to the VMs. This can allow for memory to be allocated as needed and reclaimed later when the additional memory is no longer needed.
  • Memory can be provisioned for growth and ordered on demand by the cloud consumer from the cloud provider.

Virtual CPU

  • As with shared networking and memory, the CPU or processing component has been virtualized and can be allocated as requested or required by the cloud computing consumers.
  • CPU processing power can be elastic in that it can be created and used when additional processing power is needed and returned after the period of higher demand subsides.

 

Virtual Network components

Virtual NIC

  • Virtual network interface cards (vNICs) are the virtualized equivalent of a standard Ethernet LAN NIC installed in a server. The vNIC is installed on the virtual machine, and the operating system sees that as a connection to a real LAN.
  • Multiple vNICs can be installed on each virtual server. The vNIC must be configured on a VM in the same manner as a standard NIC installed in a bare-metal server, to set up the IP address, the subnet mask, and the default gateway address.
  • Many networking components such as virtual routers, virtual firewalls, and virtual load balancers may be installed  and are often implemented as a VM running on a Type 1 hypervisor.
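One basic consistency check on a vNIC configuration is that the default gateway must lie inside the vNIC's subnet. A small sketch using Python's standard ipaddress module (the function name is illustrative):

```python
import ipaddress

def gateway_in_subnet(ip, prefix_len, gateway):
    """Check that the default gateway lies inside the vNIC's subnet."""
    subnet = ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
    return ipaddress.ip_address(gateway) in subnet

print(gateway_in_subnet("10.0.0.10", 24, "10.0.0.1"))  # True
print(gateway_in_subnet("10.0.0.10", 24, "10.1.0.1"))  # False
```

The same check applies whether the NIC is physical or virtual; the guest OS cannot tell the difference.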

Virtual Switches

  • A virtual switch is a software representation of an Ethernet switch that provides the Ethernet interconnection from the VMs to the network.
  • All vNICs are connected to this virtual switch and, in turn, all LAN traffic destined to the outside world is forwarded out of the hardware NIC on the server to an external hardware switch or router in the datacenter.
  • The virtual switch configuration utility includes options such as whether the network is bridged, routed, or using NAT, and must be configured based on the network requirements.
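The forwarding behavior described above can be sketched as a toy MAC-learning table: frames for known MAC addresses go to the owning vNIC, and everything else goes out the physical uplink NIC. This is a conceptual illustration, not any hypervisor's actual switch implementation:

```python
class VirtualSwitch:
    """Toy MAC-learning switch: maps MAC addresses to attached vNICs."""
    def __init__(self):
        self.mac_table = {}

    def learn(self, mac, vnic):
        self.mac_table[mac] = vnic          # record which vNIC owns this MAC

    def forward(self, dst_mac):
        # Unknown destinations go out the physical uplink NIC toward the
        # external hardware switch or router in the datacenter.
        return self.mac_table.get(dst_mac, "uplink")

vswitch = VirtualSwitch()
vswitch.learn("00:50:56:aa:bb:01", "vm1-vnic0")
print(vswitch.forward("00:50:56:aa:bb:01"))  # vm1-vnic0
print(vswitch.forward("00:50:56:aa:bb:99"))  # uplink
```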

Virtual LAN

  • A VLAN is a virtual network that groups server endpoints into a separate Layer 2 container, typically one per IP subnet. All devices in the same VLAN communicate as if they were directly connected to each other.
  • This approach allows a datacenter communications network to be segmented into many VLANs and to use a common switch hardware platform.
  • Each server’s vNIC must be assigned an ID that binds it to a specific interface’s VLAN ID matching the configuration of the datacenter LAN. The 802.1Q VLAN ID field is 12 bits, giving the range 0 to 4095, of which IDs 1 through 4094 are usable.
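A quick validity check on an 802.1Q VLAN ID follows directly from the 12-bit tag field (0 and 4095 are reserved by the standard):

```python
def is_usable_vlan_id(vlan_id):
    """802.1Q VLAN IDs are 12-bit (0-4095); 0 and 4095 are reserved."""
    return 1 <= vlan_id <= 4094

print(is_usable_vlan_id(100))   # True
print(is_usable_vlan_id(4095))  # False
```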

 

Virtual Storage components

Virtual Disks

  • A virtual disk is an image that is presented to the guest VM as an actual hard drive. These virtual disks can be located either locally on the server or on networked storage.
  • The virtual disk will have common configuration parameters, such as storage limits that can be defined and SCSI/SATA IDs.
  • A virtual disk contains all of the data that would otherwise be stored on a physical hard drive.

Also Read: What is Network Virtualization

Virtual SAN

  • A virtual storage area network (VSAN) takes a common storage switching fabric and divides it into many individual storage networks.
  • All ports that are in the same VSAN can communicate with each other and share common SAN fabric services. All virtual server host bus storage adapters are configured to be in the same VSAN as the remote storage arrays and controllers they are accessing.
  • The host bus adapter for storage networking and the storage controller’s interfaces must be bound in the same VSAN to access storage resources from a virtual server.
  • VSANs must also be assigned IDs, using a numeric range similar to that of Ethernet VLAN IDs.

 

See How storage network is virtualized in modern datacenters

 

How to create a Virtualized Cloud Infrastructure

A virtualized environment is built by creating an image that has the base operating system, service packs, security configurations, applications, and features installed; this image can later be used to build virtual servers in the cloud.

This provides consistency of the virtual image during deployment and speeds up the process of creating VMs. Virtualization management utilities and graphical interfaces allow us to install and prepare a VM as a template.

Creating VM with VM templates

This template can be used as the base configuration of all new VMs. This also greatly simplifies and decreases deployment time since virtual machines do not have to be created from the beginning for every deployment. Instead, the VM template is reused for the creation of new servers.

After the VM is installed on the hypervisor, additional drivers and utilities, known as guest tools, are installed on the VM. These tools allow enhanced file sharing, mouse, sound, graphics, and networking performance.

Also Read: How a server and application is Virtualized

Snapshot & Cloning in Virtualized environment

  • The snapshot is a file-based image of the current state of a VM. The snapshot will record the data on the disk, its current state, and the VM’s configuration at that instant in time.
  • Snapshots can be created while the VM is in operation and are used as a record of that VM’s state. They can be saved to roll back to at a later time.
  • The process of taking a snapshot is usually performed in the management tools that are used to manage the virtual environment.
  • There is a second type of VM replication called cloning, which is very similar to snapshots but has a different use in managing cloud deployments.
  • With a snapshot, an exact copy is made of a running VM. Cloning is different in that it takes the master image and clones it to be used as another separate and independent VM.
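The key property of a snapshot described above, an independent record frozen at an instant while the VM keeps changing, can be illustrated with plain Python (a conceptual toy, not any hypervisor's snapshot API):

```python
import copy

def take_snapshot(vm_state):
    """Capture an independent copy of the VM's state at this instant."""
    return copy.deepcopy(vm_state)

vm = {"power": "on", "disk_blocks": ["b0", "b1"], "vcpus": 2}
snap = take_snapshot(vm)
vm["disk_blocks"].append("b2")   # the VM keeps changing after the snapshot
print(snap["disk_blocks"])       # ['b0', 'b1'] -- frozen at snapshot time
```

Real hypervisors avoid the full copy by using copy-on-write delta files, but the rollback semantics are the same.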

Benefits of virtualization in a cloud environment


Resource pooling

  • Resource pooling is the concept of creating a group of storage, memory, processing, I/O, or other types of resources and sharing them in a virtualized cloud environment.
  • The hypervisor will use what is available in these resource pools to provide infrastructure to be shared as the VMs need to consume them.
  • Virtualization creates the following resource pools in a cloud environment:
    • Compute Pools
    • Memory Pools
    • Network Pools
    • Storage Pools
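The pooling idea can be sketched as a simple allocator that VMs draw resources from and return resources to. This is a conceptual illustration of resource pooling, not a hypervisor API:

```python
class ResourcePool:
    """Shared pool of one resource type (e.g. GB of RAM) consumed by VMs."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.allocated = 0

    def allocate(self, amount):
        if self.allocated + amount > self.capacity:
            raise RuntimeError("pool exhausted")
        self.allocated += amount

    def release(self, amount):
        self.allocated = max(0, self.allocated - amount)

memory_pool = ResourcePool(capacity=256)   # 256 GB of host RAM
memory_pool.allocate(64)                   # VM 1 powers on
memory_pool.allocate(32)                   # VM 2 powers on
memory_pool.release(64)                    # VM 1 powers off; RAM reclaimed
print(memory_pool.allocated)               # 32
```

The reclaim step is what makes the elasticity described in the next section possible: released capacity is immediately available to other consumers.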

Do you know: The advantages of Cloud Computing?

Elasticity

The ability to dynamically commit and reclaim resources such as storage, compute, and memory is referred to as elasticity in cloud computing. Cloud resources can be created, consumed, and reclaimed for future use dynamically by the hypervisor.

Scalability

One of the big advantages of cloud computing is the ability to use what is needed now and be able to scale to larger requirements in the future as needed. Modern cloud services take this into account and offer a truly scalable cloud service that can be accessed by the cloud consumer as required.

Availability

With hypervisor high availability (HA) designs, a cloud server can be architected to achieve high availability. Also, redundant power, cooling, storage, and networking infrastructure designs in the cloud provide for increased availability.


Portability

Portability is the ability of a virtual machine to move to different hosts, within and between datacenters, on the fly with zero downtime. Virtualization, with the use of VM templates, makes it easy to deploy VMs in multiple regions.


Network and Application Isolation

Applications running in the cloud can be shared by multiple tenants, and at the same time, the customers are segmented and isolated from each other even when sharing the same application. The network can be segmented with VLANs and firewalls so that only a single customer can access their computing resources in a cloud.

Infrastructure Consolidation

Cloud datacenters use unified computing that often includes a shared or common switching fabric. Infrastructure consolidation carries both Ethernet LAN and storage area network (SAN) data over the common hardware platform.

Also Read: Software Defined Networking Overview

How to connect to a Virtualized Infrastructure

For maintenance, management, monitoring, and day-to-day operations of the servers running in the cloud datacenter, there is a need to connect to them as if we were local to the server.

If we can connect to and administer the servers remotely, there is less need to rely on the support staff in the cloud datacenter, and issues can be resolved quickly.

Also, the servers are most certainly in a secure datacenter, and access to the physical server is restricted; therefore, remote access is usually our only option for managing our servers.

The following options provide remote access to the virtualized IT infrastructure.

Remote hypervisor access

  • Most hypervisor products on the market have a management application that can be installed on a workstation and can be used to fully configure and manage the hypervisor.
  • With the management application installed on local workstation or laptop, users can connect to the hypervisors from most locations over either the Internet or a VPN, depending on the security configurations.
  • The remote application is usually a graphical interface that communicates over specific TCP ports, so it is important to ensure that all firewalls allow the application to communicate.
  • User access controls are implemented to allow for view-only access all the way up to full control of the environment.
  • The ability to control these functions will depend on the type of cloud services purchased; the cloud provider will generally restrict access to any portion of the service that they control.
  • However, in environments such as IaaS this is a viable option. If you are using PaaS or SaaS from the cloud, you are not responsible for the maintenance of the VMs and usually would not be allowed to access the hypervisor.

Do you know How data is accessed from intelligent storage systems 

The Remote Desktop Protocol (RDP)

  • It is a proprietary protocol developed by Microsoft to allow remote access to Windows devices.
  • It is a client-server application, which means RDP has to be installed and running on both the server and the local workstation.
  • The remote desktop application comes preinstalled on most versions of Windows. TCP and UDP port 3389 need to be opened in firewalls in order to connect.

The Secure Shell protocol (SSH)

  • To use SSH, the SSH service must be supported on the server in the cloud datacenter and enabled. This is pretty much standard on any Linux distribution and can also be installed on Windows devices.
  • Many SSH clients are available on the market as both commercial software and free of charge in the public domain.
  • The SSH client connects over the network on TCP port 22 using an encrypted connection. SSH is a common remote connection method used to configure network devices such as switches and routers.

Console ports

  • These are very common in the networking environment and are used to configure switches and routers from a command-line interface (CLI).
  • Linux servers also use the console or serial ports for CLI access. 

Web-based (HTTPS)

  • Probably the most common and easiest way of managing remote devices is to use a standard browser and access the remote device’s web interface.
  • Most devices are now supporting web access with HTTPS. When connected and authenticated, the web-based applications allow for a graphical interface that can be used to monitor and configure the device.
  • HTTPS, which uses TCP port 443, is the suggested protocol for web-based remote access because it is secure. Insecure HTTP on port 80 is rarely supported due to security concerns.

 

In the next post, we will learn the fundamental basics of Storage infrastructure and its features which are implemented in both Cloud and traditional datacenters. Storage is another important technology which enables fast and reliable data access methods in Cloud.

Anil K Y Ommi is a Cloud Solutions Architect with more than 15 years of experience in designing and deploying applications in multiple cloud platforms.
