VIRTUALIZED DATA RISKS: HOW TO ASSESS, PROTECT
AND RECOVER FROM DATA LOSS
7 ways to prevent the perils of disappearing
data in a virtualized environment
With progress, there is risk—but why so much risk?
Virtualization presents enormous opportunities for businesses of every size—otherwise it wouldn’t
be so popular. But it also serves up a number of intense challenges—among them, the increased
risk of data loss and massive outage. Here are the types of risks you need to consider…
You’ve heard the warning before.
But what does it really mean in today’s
virtualized environment?
As an IT professional, you’ve undoubtedly been
bombarded with statistics about the devastating
impact that massive data loss would have on your
business. You’ve heard how over 40% of companies
that have experienced a data disaster never reopen
their doors.* Or how over 90% of companies that lost
their data center for 10 days or more went bankrupt
within a year.**
Sobering statistics, wouldn’t you say? Especially
when you consider that the risk of data loss is even
greater in a virtualized environment than in a non-virtualized one (as we’ll see later in this guide).
And yet, the questions linger…
• Will massive data loss really happen to
my company?
• What are the odds that a flood or hurricane
will wipe out our data center AND our off-site
backup?
• Will the sudden closure of one of our service
partners really cripple us?
• Or can a sustained hacker attack really take us
all the way down?
Most of us believe these once-in-a-lifetime data
disasters could happen to us, but probably won’t.
We hold onto a belief that business will continue as
normal—and at the very least, when data disruptions
occur, they will not be catastrophic in nature.
But the fact is, there are a thousand-and-one ways
that everyday data disruptions—the lesser events
that occur on a regular basis—can severely impact
your business and damage your company and career.
For example:
• Need to produce a compliance document to
satisfy government regulators, only to discover
that it was deleted from your system and never
backed up?
• Need to retrieve an old e-mail to help your
company defend itself in a litigation case—but
can’t find it in the archive?
• Trying to reconcile accounts with an old and
valued customer, only to find that your data on
past transactions is inexplicably incomplete?
• And, on a bigger scale that touches everyone
in your organization, can you recover lost data,
applications and servers in minutes and save
your company hundreds of thousands of dollars?
In all these instances, missing data is the culprit, and
inevitably, it will fuel conjecture among your peers
that the data disappeared because it wasn’t properly
protected. And incidentally, it’s not a case of IF this
type of conjecture will occur—but rather, WHEN.
*McGladrey and Pullen
**National Archives & Records Administration
Data loss and virtualization:
Death by a thousand cuts.
The success of virtualization has given rise to a myth—
namely, that because of the operational flexibility
that virtualization provides, where one server can be
“brought up” if another server goes down, the risk of
data loss is minimized. More succinctly, there is a belief
that virtualization is backup.
They are NOT one and the same—not even close.
Furthermore, for all the good that virtualization does,
it actually has a downside with which you may be
uncomfortably familiar.
Many companies that have invested in virtualization
(and benefitted greatly from it) have not made
corresponding investments in the enhanced data
protection that it requires. They’re using the same old
“pre-virtualization” tools and processes to protect
their data. By relying on legacy solutions and
processes that don’t work well anymore, they can’t
keep pace with the exponentially larger amounts
of data that need to be managed, or the expanding
number of virtual machines, or the requirements
of other business stakeholders that can be so
burdensome. (Additionally, these companies are likely
to still be paying on a per-machine basis rather than
per-host, so their expenditures may be many times
higher than they ought to be.) And by relying on
backup that funnels data from the front end to the
back end, all within an impossibly tight backup
window, they are bound to fail.
Plus, consider this: With virtualization, many VMs
may be sharing the same physical resources. If
every agent in every machine runs a backup at the
same time (an acceptable action with physical
machines), they could kill the entire host. It’s a perfect
example of how the very same virtualization
initiatives that were designed to increase data
availability, redundancy and security are now putting
your data at greater risk.
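To make that contention risk concrete, below is a minimal sketch in Python of one common mitigation: staggering backup start times so that VMs sharing a physical host never all back up at once. The VM names, host names and the stagger_backups helper are hypothetical; this illustrates the idea only and is not any particular vendor’s scheduler.

import itertools
from collections import defaultdict

# Hypothetical inventory: which VMs live on which physical host.
VM_HOST = {
    "web-01": "host-a", "web-02": "host-a", "db-01": "host-a",
    "app-01": "host-b", "app-02": "host-b",
}

MAX_CONCURRENT_PER_HOST = 1   # backups allowed to run at once on one host
SLOT_MINUTES = 30             # length of each backup window slot

def stagger_backups(vm_host, max_per_host, slot_minutes):
    """Assign each VM a start offset so that no more than
    max_per_host backups hit the same physical host at once."""
    slots = defaultdict(list)   # (host, slot index) -> VMs in that slot
    schedule = {}
    for vm, host in sorted(vm_host.items()):
        # Take the earliest slot on this host that still has capacity.
        for slot in itertools.count():
            if len(slots[(host, slot)]) < max_per_host:
                slots[(host, slot)].append(vm)
                schedule[vm] = slot * slot_minutes
                break
    return schedule

for vm, offset in stagger_backups(VM_HOST, MAX_CONCURRENT_PER_HOST,
                                  SLOT_MINUTES).items():
    print(f"{vm}: start backup at T+{offset} minutes")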
You live with these risks every day. You cope as nimbly
and expertly as you can. You deal with frustrated
end-users, hamstrung IT colleagues and exasperated
C-level executives in the corner office. But through it
all, do you recognize that there are proven ways to
combat the threat that ongoing, sporadic data loss
presents to you and your company?
There are solutions that work. We urge you to
investigate them, and in the process, learn how to
more effectively minimize the risk of data loss and
downtime in even the most complex virtualized
environments.
Secure data protection and access,
every time, everywhere.
Did you know that only 37% of companies feel they
have adequate data protection plans in place for
their virtual environments? It’s true. And it means
there’s a huge gap between the reality we live in and
the dream of consistent, reliable data protection.
The good news is that you can find solutions
that help you fill the gap. Wouldn’t it be ideal, for
example, to have all of the following in your
virtualized environment?
1. Cross-vendor choice and flexibility. When
you choose a backup and recovery solution, you may
not want to be locked into a single vendor. Rather,
perhaps you’d prefer affordable cross-hypervisor
data migration, backup and disaster recovery—
executed across disparate systems, different sites
and multiple vendors. Single and multiple vendor
solutions can both work—the key is picking a solution
that gives you a choice.
2. Speedy new technology. New technology
innovations allow you to recover data with much more
speed and frequency than the traditional methods
you’ve relied on—up to 100 times faster than was
previously possible. As a result of this speed, you can
ensure that multiple VMs on a single server have a
current and reliable backup available at all times.
3. Any-to-any recovery. Today, you need to be
able to recover data with greater speed and frequency
to, and from, any platform—and ensure that your
data is always available to any system at any time.
So when a system goes down, you’re able to run its
workloads and restore its data on any combination
of available platforms, whether that means failing
over to a replica VM or running a VM directly from a backup.
That goes for physical servers, VMware, Hyper-V, Xen,
RHEV, KVM, you name it. There’s no time-consuming
restore process required. Nor are there excessive
expenditures—you can choose from a wide variety of
recovery-time-to-cost options, all while making your
applications and data totally portable.
4. Affordable, near-zero downtime. It used to
be that you had hours to correct a problem. Now you’re
expected to be up and running in minutes, without
breaking the bank on expensive redundant sites or
pricey SAN-based replication.
5. Easy migration. How about a solution that
enables you to migrate applications from any server
or hypervisor to any other server or hypervisor? It
would enable you to maximize capacity and efficiency
with any and every resource at your disposal—
without the corresponding data loss that is so prevalent
during migrations. And of course, you’d want the
option to roll back if needed.
6. Reduced complexity. Think of the advantages
of using the same backup solutions and policies for
all your physical and virtual systems. This type of
strategy will simplify data management, lower risk,
and as a result keep your data better protected.
An integrated solution frees you from the risk of
a complex patchwork of point solutions (a minimal
sketch of this single-policy idea follows this list).
7. Future-proof solutions. You need technology
today that enables you to take advantage of the next
wave of virtualization or cloud computing tomorrow—
no small consideration in the rapidly changing world
of the modern data center.
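As a purely illustrative aside on point 6, the short Python sketch below (all names hypothetical) shows what one policy for physical and virtual systems alike can look like: a single policy object is defined once and attached to every protected system regardless of platform, rather than maintaining separate rules in separate tools.

from dataclasses import dataclass

@dataclass
class BackupPolicy:
    frequency_hours: int   # how often a backup runs
    retention_days: int    # how long restore points are kept
    offsite_copy: bool     # whether a second copy leaves the site

@dataclass
class ProtectedSystem:
    name: str
    kind: str              # "physical", "vmware", "hyper-v", ...
    policy: BackupPolicy

# One standard policy, applied to physical and virtual machines alike.
STANDARD = BackupPolicy(frequency_hours=4, retention_days=30, offsite_copy=True)

inventory = [
    ProtectedSystem("erp-db", "physical", STANDARD),
    ProtectedSystem("web-01", "vmware", STANDARD),
    ProtectedSystem("file-01", "hyper-v", STANDARD),
]

for system in inventory:
    p = system.policy
    print(f"{system.name} ({system.kind}): every {p.frequency_hours}h, "
          f"keep {p.retention_days} days, offsite={p.offsite_copy}")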
