
2.6 What are Intelligent Storage Systems ?

Storage arrays that are feature-rich RAID arrays providing highly optimized I/O processing capabilities are generally referred to as Intelligent Storage Arrays or Intelligent Storage Systems. These systems are designed to meet the requirements of today's I/O-intensive, next-generation applications, which demand high levels of performance, availability, security, and scalability. To meet these requirements, many vendors of intelligent storage systems now support SSDs, encryption, compression, deduplication, and scale-out architectures.
 
The use of SSDs and scale-out architecture enables these systems to service a massive number of IOPS. They also support connectivity to heterogeneous compute systems. Further, intelligent storage systems provide APIs that enable integration with Software-Defined Data Center (SDDC) and cloud environments.

Intelligent Storage Systems Overview

These storage systems have an operating system that intelligently and optimally handles the management, provisioning, and utilization of storage resources. They are configured with a large amount of memory called cache and multiple I/O paths, and use sophisticated algorithms to meet the requirements of performance-sensitive applications. An intelligent storage system has two key components: controllers and storage.
 
A controller is a compute system running a purpose-built operating system that performs several key functions for the storage system. Examples of such functions are serving I/Os from the application servers, storage management, RAID protection, local and remote replication, storage provisioning, automated tiering, data compression, data encryption, and intelligent cache management.

 
An intelligent storage system typically has more than one controller for redundancy. Each controller consists of one or more processors and a certain amount of cache memory to process a large number of I/O requests. The controllers are connected to the servers either directly or via a storage network. They receive I/O requests from the servers and perform the corresponding reads from and writes to the storage.
 
Based on the type of data access, a storage system can be classified as a block-based, file-based, object-based, or unified storage system. A unified storage system provides block-based, file-based, and object-based data access in a single system. These are described in the next posts.

Architecture of Intelligent Storage Systems

An intelligent storage system may be built on either a scale-up or a scale-out architecture.

A scale-up storage architecture provides the capability to scale the capacity and performance of a single storage system based on requirements. Scaling up a storage system involves upgrading or adding controllers and storage. These systems have a fixed capacity ceiling, which limits their scalability, and performance also starts degrading as the system approaches that capacity limit.
 
[Figure: scale-up vs. scale-out storage systems]
 
A scale-out storage architecture provides the capability to grow capacity and performance simply by adding nodes to the cluster. Nodes can be added quickly, without causing any downtime, when more performance or capacity is needed. This provides the flexibility to combine many nodes of moderate performance and availability into a total system with better aggregate performance and availability. A scale-out architecture pools the resources in the cluster and distributes the workload across all the nodes, resulting in near-linear performance improvement as more nodes are added.
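The way a scale-out cluster spreads data (and therefore workload) across all of its nodes can be sketched with a toy model. This is a hypothetical illustration, not any vendor's implementation: blocks are placed on nodes by hashing their IDs, so every node carries a share of the data, and adding a node enlarges the set over which new placements are spread.

```python
import hashlib

class ScaleOutCluster:
    """Toy model of a scale-out cluster (hypothetical names): data
    blocks are spread across nodes by hashing, so capacity and
    workload are distributed over every node in the cluster."""

    def __init__(self, nodes):
        self.nodes = list(nodes)

    def node_for(self, block_id):
        # Hash the block id and map it onto one of the current nodes.
        digest = hashlib.md5(str(block_id).encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

    def add_node(self, node):
        # Adding a node grows both capacity and aggregate performance;
        # placements are spread across the enlarged node set.
        self.nodes.append(node)

cluster = ScaleOutCluster(["node-1", "node-2"])
cluster.add_node("node-3")   # non-disruptive growth of the cluster
```

Real scale-out systems use more sophisticated placement (e.g. consistent hashing) to minimize data movement when nodes join, but the principle is the same: no single controller is the bottleneck, because every node serves part of the workload.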


Features of an Intelligent Storage System

Storage Tiering
Storage tiering is a technique of establishing a hierarchy of different storage types (tiers). It enables storing the right data on the right tier, based on service-level requirements, at minimal cost. Each tier offers a different level of protection, performance, and cost.

This technique allows us to place data on the most appropriate tier of storage: frequently accessed data on fast media and inactive data on slow media. This improves the performance of the storage array and brings costs down, because the array does not have to be filled with fast disks when most of the data is accessed relatively infrequently. Data movement happens according to defined tiering policies, which may be based on parameters such as frequency of access.


For example, high-performance solid-state drives (SSDs) or FC drives can be configured as tier 1 storage to hold frequently accessed data, and low-cost SATA drives as tier 2 storage for less frequently accessed data. Keeping frequently used data on SSD or FC drives improves application performance, while moving less frequently accessed data to SATA frees up capacity on the high-performance drives and reduces the cost of storage.



The process of moving data from one tier to another is typically automated. In automated storage tiering, the application workload is proactively monitored; active data is automatically moved to a higher-performance tier and inactive data to a higher-capacity, lower-performance tier. Data movement between the tiers is performed non-disruptively.
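The frequency-based policy described above can be sketched as follows. This is a minimal hypothetical model (the class name, threshold, and tier names are illustrative, not a real product API): I/Os per block are counted over a monitoring window, and a periodic background pass promotes hot blocks to SSD and demotes cold blocks to SATA.

```python
class AutoTiering:
    """Sketch of frequency-based automated tiering (hypothetical
    thresholds): blocks accessed at least `hot_threshold` times in a
    monitoring window are promoted to the SSD tier; the rest stay on
    (or return to) the SATA tier."""

    def __init__(self, hot_threshold=100):
        self.hot_threshold = hot_threshold
        self.access_counts = {}   # block id -> I/Os this window
        self.tier = {}            # block id -> "ssd" | "sata"

    def record_io(self, block_id):
        self.access_counts[block_id] = self.access_counts.get(block_id, 0) + 1
        self.tier.setdefault(block_id, "sata")   # new data lands on tier 2

    def relocate(self):
        # Periodic, non-disruptive background pass that moves data
        # between tiers according to the tiering policy.
        for block_id, count in self.access_counts.items():
            self.tier[block_id] = (
                "ssd" if count >= self.hot_threshold else "sata"
            )
        self.access_counts = dict.fromkeys(self.access_counts, 0)

engine = AutoTiering(hot_threshold=10)
for _ in range(15):
    engine.record_io("db-index")      # frequently accessed block
engine.record_io("old-report")        # rarely accessed block
engine.relocate()
```

Production arrays track access statistics at a much finer granularity (sub-LUN extents) and weigh recency as well as frequency, but the promote/demote cycle works on the same principle.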



Redundancy

The redundancy features in an intelligent storage system ensure that failed components do not interrupt the operation of the array. Even at the host level, multiple paths are usually configured between the host and the storage in multipath I/O configurations, so that the loss of a path or network link between the host and the storage array does not take the system down.
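The multipath failover behaviour can be sketched in a few lines. This is a hypothetical toy model (the class and path names are illustrative): the host knows several paths to the array, and when the active path fails, I/O simply continues over the next surviving one.

```python
class MultipathIO:
    """Toy multipath I/O configuration: several host-to-array paths
    are configured, and I/O fails over to a surviving path when the
    active one is lost."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.failed = set()

    def active_path(self):
        # Use the first healthy path in the configured order.
        for path in self.paths:
            if path not in self.failed:
                return path
        raise IOError("all paths to the storage array are down")

    def mark_failed(self, path):
        self.failed.add(path)   # e.g. triggered by a link-down event

mp = MultipathIO(["hba0:port0", "hba1:port1"])
first = mp.active_path()        # normally the first configured path
mp.mark_failed("hba0:port0")    # a path loss does not stop the host's I/O
```

Real multipathing software (e.g. Linux DM-Multipath) additionally load-balances across healthy paths and probes failed paths for recovery, but failover to a redundant path is the core idea.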

Replication
Storage-system-based replication makes remote copies of production volumes that play a vital role in disaster recovery (DR) and business continuity (BC) planning. Depending on the application and business requirements, remote replicas can be either zero-loss synchronous replicas or near-real-time asynchronous replicas. Asynchronous replication can tolerate thousands of miles between the source and target volumes, whereas synchronous replication typically requires the source and target to be within about 100 miles of each other, because every write must be acknowledged by the remote site before the host is acknowledged.
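The difference between the two replication modes can be sketched with a toy model. This is a hypothetical illustration (class and method names are made up): in synchronous mode, a write reaches the remote copy before it is acknowledged, so the replica never lags; in asynchronous mode, writes are queued and shipped in the background, so the replicas can be far apart but may lag the source.

```python
from collections import deque

class ReplicatedVolume:
    """Sketch of synchronous vs. asynchronous replication
    (hypothetical model, not a vendor API)."""

    def __init__(self, mode):
        self.mode = mode          # "sync" or "async"
        self.local = {}
        self.remote = {}
        self.pending = deque()    # writes not yet shipped (async only)

    def write(self, block, data):
        self.local[block] = data
        if self.mode == "sync":
            self.remote[block] = data           # remote ack before host ack
        else:
            self.pending.append((block, data))  # shipped in the background

    def ship_pending(self):
        # Background transfer over the long-distance link.
        while self.pending:
            block, data = self.pending.popleft()
            self.remote[block] = data

sync_vol = ReplicatedVolume("sync")
sync_vol.write("b1", "data")          # zero-loss: remote already has it

async_vol = ReplicatedVolume("async")
async_vol.write("b1", "data")         # acknowledged before the remote has it
```

The queue is exactly where the trade-off lives: a disaster at the source site loses whatever is still pending in asynchronous mode, which is why zero-loss requirements force the synchronous mode and its distance limit.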
 
Thin Provisioning
Thin provisioning technologies can be used to utilise the capacity in storage systems more effectively: a volume advertises its full logical size up front, but physical capacity is drawn from a shared pool only when data is actually written. The trade-off is that the pool can be over-provisioned, so if consumption is not monitored, the system can eventually run out of available physical space.
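Allocate-on-write is the mechanism behind thin provisioning, and it can be sketched as follows. This is a hypothetical minimal model (the class names and block-level granularity are illustrative): the volume maps logical blocks to physical pool blocks only on first write, so a heavily over-provisioned volume consumes only as much physical space as has actually been written.

```python
class StoragePool:
    """Shared pool of physical blocks backing one or more thin volumes."""
    def __init__(self, physical_blocks):
        self.free = list(range(physical_blocks))
        self.data = {}

class ThinVolume:
    """Sketch of thin provisioning (hypothetical model): the volume
    advertises a large logical size, but physical blocks are drawn
    from the shared pool only when a logical block is first written."""

    def __init__(self, logical_blocks, pool):
        self.logical_blocks = logical_blocks
        self.pool = pool
        self.mapping = {}   # logical block -> physical pool block

    def write(self, lba, data):
        if lba not in self.mapping:
            if not self.pool.free:
                # The over-provisioning risk: logical space remains,
                # but the physical pool is exhausted.
                raise IOError("pool exhausted: add physical capacity")
            self.mapping[lba] = self.pool.free.pop()
        self.pool.data[self.mapping[lba]] = data

pool = StoragePool(physical_blocks=10)
vol = ThinVolume(logical_blocks=1000, pool=pool)   # over-provisioned 100x
for lba in range(3):
    vol.write(lba, b"x")   # only 3 physical blocks consumed so far
```

The `IOError` branch is why thin-provisioned pools need capacity alerting: every volume on the pool still believes it has free logical space when the shared physical pool runs dry.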



Go To >> Index Page

Anil K Y Ommi — https://mycloudwiki.com
Cloud Solutions Architect with more than 15 years of experience in designing & deploying applications in multiple cloud platforms.
