HyperQ™ DR Replication White Paper
The Easy Way to Protect Your Data
Your data, your device, your destination™
Parsec Labs, LLC
7101 Northland Circle North, Suite 105
Brooklyn Park, MN 55428 USA
1-763-219-8811
www.parseclabs.com
[email protected]
[email protected]
Introduction
In today’s world, no company can afford to lose all of its digital assets. In some cases, losing
even a small amount of data can have a major impact on a company’s business.
Many companies also need their digital assets not only to run their business but also to meet
regulatory requirements. For certain businesses, regulations can mandate data retention
for multiple years.
Storage vendors have long recognized the need to offer that level of protection and provide
different ways to replicate data. They also offer backup and disaster recovery solutions in
addition to the storage arrays themselves.
The problem with most of those solutions is that efficient replication features are available
only at higher-priced storage tiers and, in almost every case, support replication only between
two devices from the same vendor. In a world where all data resides on the same high-quality
storage tier in the data center, this approach has served the industry well and has reached a
high level of penetration.
However, complexity is introduced when companies find that they must adopt storage tiering
in order to bring in higher-density storage devices and keep up with their ever-increasing
storage needs. The moment some of the data leaves the main storage array, the problem of
replication and offsite storage resurfaces and requires special attention. The same problem
also exists in almost every remote office: remote sites usually don’t deploy high-end storage
and therefore have no replication capabilities available to them.
Another potential complication lies in the recovery case. Once a local storage tier has failed,
it is not easy to provide efficient access to the replicated copy of the data. This leaves the
business facing major downtime while the data is restored locally.
The HyperQ™ storage router provides a vendor-independent, fully featured answer to that
problem. Its built-in replication feature can be used to replicate any file share from one
device to another. Replication is independent of the storage vendor and of the capabilities
of the underlying storage. In addition, the HyperQ™, with its advanced caching feature,
makes it easy to promote the replicated copy of a data set to primary source if the local
device fails. Even though the remote copy might be slower to reach than the local copy,
users still enjoy full performance as they interact with the storage: the HyperQ™ advanced
cache holds a local copy of the working data, making it almost unnoticeable to the user that
a failover has occurred. Providing this capability centrally, across the entire network, is a
major innovation in protecting storage in a vendor-heterogeneous tiered storage environment.
Many solutions provide replication, and a subset of those solutions allow for failover, but
no other solution provides fast, cached access to the replicated copy after a failover.
Figure 1 – The HyperQ™
The HyperQ™ storage router provides an innovative and unique way to combine replication and
caching into robust data replication / DR protection across a vendor-heterogeneous
environment.
This solution can be used to achieve a high level of protection at remote sites, as well as
within a tiered storage infrastructure in the core data center.
In the remote-office case, a HyperQ™ storage router at the remote site can provide a
complete data protection solution by replicating the entire local storage to a central location.
The truly unique value comes into play once the local storage fails: a simple
administrative step promotes the replicated copy to the primary source, bringing the remote
site back online in less than 5 minutes. This provides an unmatched level of service to a
remote site that is now running off a storage tier attached via the WAN (for more
information about remote-office caching and acceleration, please see the remote office
white paper).
Storage Replication Today
Keeping data safe and available is a growing challenge for IT departments. The most
common solution requires the user to stay within a single vendor’s platform and to pay an
additional fee for replication licenses. Depending on the specific solution deployed, failover
to the replicated copy can be quite difficult and in some cases is not supported at all. In the
case of an outage, the data must be restored to a different on-premises device before
operations can continue. And even when the solution does support failover, latency and
bandwidth constraints between the main site and the secondary site cause major performance
problems once a failover occurs.
The performance/vendor lock-in conundrum:
In today’s implementations of replication and DR protection, IT professionals face
major challenges. In most cases a failover to the replicated site causes a major
performance hit. In addition, vendor lock-in prevents IT professionals from implementing
urgently needed storage tiering in their storage layout. As a result, organizations are
exposed to longer recovery times, slower performance during an outage, and the higher cost
of keeping all data on a single vendor’s platform in order to replicate it efficiently.
Cost:
The total cost of the current approach is driven by multiple factors.
1) Most storage vendors charge extra for the replication license.
2) Replication features are usually only available at higher storage tiers, forcing
companies to buy higher-grade storage in order to get access to replication.
3) Replication typically only works between devices of the same vendor, preventing
companies from taking advantage of lower-cost options for their DR site.
4) All data needs to remain on the higher storage tier in order to be replicated,
preventing companies from taking advantage of urgently needed storage tiering.
The compounding effect of the extra costs listed above adds up to a substantial drain on the
organization’s IT budget, preventing it from dedicating funds to urgently needed security
upgrades and other IT improvements.
Storage Replication with the HyperQ™
Figure 2 – DR protection with The HyperQ™
Parsec Labs’ innovative approach to optimizing storage provides a unique opportunity for
the HyperQ™ storage router to solve the data replication / DR problem in a comprehensive
and efficient way across a heterogeneous storage environment. The HyperQ™ storage
router can provide efficient data replication across multiple storage platforms from different
vendors. The HyperQ™ replicates from one location across the WAN to a secondary location
without relying on expensive replication licenses for the underlying storage. In addition, the
HyperQ™’s unique capability of virtualizing the namespace, coupled with its built-in SSD
cache, allows for efficient and quick failovers. As a result, companies can be certain that they
are protected at the highest level. This solution allows IT operations to continue within
minutes after the loss of the local storage tier, with minimal impact on performance even
though the data is, in many cases, being accessed via the WAN.
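To make the mechanism concrete, the following minimal sketch shows how a virtualized
namespace in front of a local cache can keep client paths stable through a failover. All class,
path, and method names are assumptions chosen for illustration; this is not the actual
HyperQ™ implementation.

    # Illustrative sketch only; hypothetical paths, not the HyperQ(TM) internals.
    class VirtualShare:
        def __init__(self, primary_path, dr_path):
            self.primary_path = primary_path   # local storage tier
            self.dr_path = dr_path             # replicated copy at the DR site
            self.cache = {}                    # stands in for the local SSD cache
            self.failed_over = False

        def promote_dr_copy(self):
            # The single administrative step: the DR copy becomes the active source.
            self.failed_over = True

        def read(self, name):
            # Working-set reads are served from the local cache, so clients see
            # near-local performance even when the active copy sits across the WAN.
            if name in self.cache:
                return self.cache[name]
            backend = self.dr_path if self.failed_over else self.primary_path
            with open(f"{backend}/{name}", "rb") as f:
                data = f.read()
            self.cache[name] = data
            return data

    # Clients keep addressing the same virtual share before and after the failover.
    share = VirtualShare("/mnt/local_tier/projects", "/mnt/dr_site/projects")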
Key Features:
1. Acceleration – Acceleration reduces latency and increases burst throughput and IOPS to
existing storage by means of the SSD write cache in the HyperQ™. Acceleration is especially
beneficial for database and virtual machine performance.
2. Heterogeneous Replication – Using the built-in replication engine, a user can easily
configure an entire file share to be replicated to a DR site. Replication occurs in the
background on a scheduled basis. The replication engine is smart enough to minimize the
data movement between the two sites. It also supports bandwidth throttling to ensure that
DR replication operations don’t impact important business traffic.
3. Network Throttling – The HyperQ™ appliance has a sophisticated network throttling
solution built in. The administrator can define “off peak” and “on peak” time windows and set
bandwidth limitations for each period. This provides full control over how much bandwidth
should be dedicated to data replication between sites and how much bandwidth needs
to be reserved for other services such as VoIP (a minimal sketch of such a schedule follows
this list).
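The sketch below illustrates the idea behind scheduled, throttled replication: an incremental
cycle that queues only changed files, each under the bandwidth cap in force at that time of
day. The policy values and function names are assumptions for illustration only; actual
configuration is performed through the HyperQ™ management interface.

    # Illustrative sketch only; hypothetical values, not the HyperQ(TM) GUI or API.
    from datetime import datetime, time

    THROTTLE_POLICY = {
        "on_peak":  {"start": time(7, 0), "end": time(19, 0), "limit_mbps": 100},
        "off_peak": {"limit_mbps": 800},   # everything outside the on-peak window
    }

    def current_bandwidth_cap(now=None):
        """Return the Mbps cap that applies to replication traffic right now."""
        t = (now or datetime.now()).time()
        on = THROTTLE_POLICY["on_peak"]
        if on["start"] <= t < on["end"]:
            return on["limit_mbps"]
        return THROTTLE_POLICY["off_peak"]["limit_mbps"]

    def plan_replication_cycle(changed_files, now=None):
        """Incremental cycle: only files changed since the last run are queued,
        each to be transferred under the currently applicable bandwidth cap."""
        cap = current_bandwidth_cap(now)
        return [(path, cap) for path in changed_files]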
Solving the performance/vendor lock-in conundrum:
Using the HyperQ™ storage router, companies now have the means to easily configure and
maintain a vendor-independent replication process. This approach allows them to deploy
storage tiering, makes it easy to leverage lower-cost storage for the DR site, and eliminates
the need for expensive replication licenses.
Once deployed, the HyperQ™ storage router handles the replication across the network,
including the WAN, and provides a greatly simplified mechanism to fail over to the secondary
copy without suffering a major performance hit.
Dramatically reducing cost:
Storage tiers that support replication are usually in the $3/GB range or above. In many cases,
adding the replication license pushes the price to $5/GB or more.
In addition, the secondary (DR) site needs to be equipped with the same storage, or at
least storage from the same vendor and family, in order to make replication work.
Introducing the HyperQ™ storage router unlocks the option of multi-vendor storage
environments and also enables the use of much lower-cost storage tiers for parts of the
environment, reducing the cost of storage from $3/GB to 20¢/GB or less.
The HyperQ™ storage router service starts as low as $500/month and provides a wide
range of capabilities at a very affordable rate, utilizing the existing storage, while saving
many times that amount on the storage side.
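As a back-of-the-envelope illustration using the per-GB figures above, assume a 100 TB data
set and a 36-month service term (both values are assumptions chosen only for this example;
actual pricing depends on the environment):

    # Worked example with the figures cited above; capacity and term are assumptions.
    capacity_gb = 100 * 1024                  # 100 TB expressed in GB
    term_months = 36

    replicating_tier = 3.00 * capacity_gb     # $3/GB tier that supports replication
    lower_tier       = 0.20 * capacity_gb     # 20 cents/GB commodity tier
    hyperq_service   = 500 * term_months      # $500/month HyperQ(TM) service

    print(f"High-tier replicating storage: ${replicating_tier:,.0f}")
    print(f"Lower tier + HyperQ service:   ${lower_tier + hyperq_service:,.0f}")

Under these assumptions the lower tier plus the HyperQ™ service costs roughly an eighth of
the replicating high-tier storage, before any replication license fees are added.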
A Closer Look – The HyperQ™ Architecture
Figure 3 – Depiction of the HyperQ™ in Your Environment
Hardware:
The HyperQ™ is delivered in a standard 1U rack-mounted server chassis or as a desktop
server. The server is built with cutting-edge processing capability using the latest CPUs from
Intel, coupled with the strategic application of award-winning solid state drives (SSDs), in
order to deliver significant performance advantages. The HyperQ™ also features multiple
1GigE or 10GigE network interfaces in order to provide optimal network throughput.
The HyperQ™ is inserted into your storage environment between the shared storage and the
clients. It also acts as a router and cloud gateway.
Software:
The HyperQ™ consists of the following software components:
1. A Linux operating system.
2. The Parsec File System – a user-space (FUSE) file system that manages file system
expansion and data migration between storage tiers at the sub-file level.
3. The Parsec Cache Manager – exploits the SSD and HDD storage in the HyperQ™ for
storage acceleration.
4. The Parsec Migration Engine – selects files for migration according to migration policies.
Once a file is selected for migration, the migration engine interacts with the Parsec File
System to relocate the file. A migration policy is executed periodically or can be executed as
a one-time event (an illustrative policy sketch follows Figure 4).
5. Diagnostic utilities for the Parsec File System.
6. A web GUI for HyperQ™ configuration.
7. Diagnostic tools, including reporting and telemetry.
Figure 4 – Inside the HyperQ™
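The following minimal sketch shows what an age-based migration policy pass might look like
conceptually. The policy fields, the 90-day threshold, and the paths are assumptions for
illustration only; they are not the Parsec Migration Engine’s actual policy syntax.

    # Illustrative sketch only; not the Parsec Migration Engine's policy language.
    import os
    import time

    POLICY = {
        "source": "/mnt/tier1/share",    # managed file system on the higher tier
        "target": "/mnt/tier2/share",    # lower-cost tier or DR-side storage
        "min_age_days": 90,              # migrate files idle longer than this
    }

    def select_candidates(policy):
        """Walk the managed share and yield files idle longer than the threshold."""
        cutoff = time.time() - policy["min_age_days"] * 86400
        for root, _dirs, files in os.walk(policy["source"]):
            for name in files:
                path = os.path.join(root, name)
                if os.stat(path).st_atime < cutoff:
                    yield path   # would be handed off for relocation to the target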
How to deploy, administer, and monitor the HyperQ™:
The Storage Administrator’s job is done in four easy steps (summarized in the sketch below):
1. Configure a new IP address for the existing file server from which managed file systems
are exported.
2. Define target storage for migration and/or expansion.
3. Define managed file systems.
4. Configure replication policies.
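For illustration, the information gathered in those four steps can be summarized in a
structure like the one below. The field names and values are assumptions; actual
configuration is performed through the HyperQ™ web GUI.

    # Hypothetical summary of the four setup steps; field names are illustrative.
    SETUP = {
        "file_server_ip": "10.0.1.20",                        # step 1
        "target_storage": ["nfs://dr-site.example/pool1"],    # step 2
        "managed_file_systems": ["/exports/projects", "/exports/home"],  # step 3
        "replication_policies": [                             # step 4
            {"share": "/exports/projects", "schedule": "nightly",
             "destination": "dr-site.example"},
        ],
    }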
A single dashboard that depicts local storage and cloud storage usage allows ongoing
monitoring of the HyperQ™ and file servers.
It is simple, efficient, and effective.
Key Advantages
·Works across existing storage devices – Sitting in the network, the HyperQ™ storage
router has the ability to replicate a wide variety of storage devices without disrupting the
current storage architecture. The ability to span multiple vendors and devices also allows for
efficient storage tiering (see the storage tiering white paper <link>) and deployment of less
expensive solutions at the DR site.
·Simple Deployment and Management – The HyperQ™ seamlessly integrates with the
existing network; no changes are required on the existing servers that access the storage.
After the HyperQ™ is deployed, its management GUI allows the admin to manage the entire
storage environment through a single pane of glass.
·WAN Optimization – The HyperQ™ has special provisions to optimize WAN traffic
between sites and to the DR site, allowing the user to utilize the DR copy without impacting
performance.
·Scalability – Scale to your cost and storage requirements. Choose among cost-effective
pricing tiers.
Conclusion
The HyperQ™ offers a new, innovative solution to the DR / replication problem. Using the
HyperQ™ allows companies to provide better-performing, more cost-effective DR replication
across a wider range of storage tiers than ever before. The HyperQ™ also sets new standards
for recovery time and performance during an outage.
Deploying the HyperQ™ storage router greatly reduces the cost of providing a
comprehensive DR / replication solution and provides a unique cached approach to the
failover scenario that is unmatched in the industry.
Contact Us
Parsec Labs, LLC.
7101 Northland Circle North, Suite 105
Brooklyn Park, MN 55428 USA
1-763-219-8811
www.parseclabs.com
[email protected]
[email protected]
