EMC VPLEX Witness Deployment within VMware vCloud Air

White Paper
Abstract
This white paper provides a summary and an
example of deploying VPLEX Witness in a public
cloud based virtual data center. In particular, the
rationale for VPLEX Witness within VMware vCloud Air
and sample deployment steps are provided.
Copyright © 2015 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its publication
date. The information is subject to change without notice.
The information in this publication is provided “as is”. EMC Corporation makes no
representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or
fitness for a particular purpose.
Use, copying, and distribution of any EMC software described in this publication
requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation
Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.
Part Number H14033
Table of Contents
Executive summary
Support Statement
Audience
Introduction
Why is VPLEX Witness critical?
VPLEX Witness Deployment Requires a 3rd Fault Domain
EMC Delivery and Service Offerings
VPLEX Witness Virtual Machine Installation Requirements
Securing Your IP Management Network When Using Public Cloud
Host and Networking Requirements for VPLEX Witness
Deployment Example: VMware vCloud Air
Conclusion
Executive summary
For several years, businesses have relied on traditional physical storage to meet their
information needs. Developments such as server virtualization and the growth of
multiple sites throughout a business's network have placed new demands on how
storage is managed and how information is accessed.
To keep pace with these new requirements, storage must evolve to deliver new
methods of freeing data from a physical device. Storage must be able to connect
to virtual environments and still provide automation, integration with existing
infrastructure, consumption on demand, cost efficiency, availability, and security.
The EMC® VPLEX™ family is the next-generation solution for information mobility and
access within, across, and between physical or virtual data centers. It is the first
platform that delivers both Local and Distributed Federation of storage.
• Local Federation provides the transparent cooperation of physical elements
within a site.
• Distributed Federation extends access between two locations across distance.
VPLEX is a solution for federating both EMC and non-EMC storage.
VPLEX completely changes the way IT is managed and delivered, particularly when
deployed with server virtualization. By enabling new models for operating and
managing IT, resources can be federated (pooled and made to cooperate through
the stack) with the ability to dynamically move applications and data across
geographies and service providers. The VPLEX family breaks down technology silos
and enables IT to be delivered as a service.
VPLEX Metro requires VPLEX Witness in order to deliver continuous availability across
data centers. VPLEX Witness requires a 3rd fault domain for deployment, but some
customers do not have a 3rd fault domain available to them. One solution to this
challenge is to leverage public cloud virtual data centers to provide the 3rd fault
domain. This white paper outlines an example deployment of VPLEX Witness using
VMware vCloud Air.
Support Statement
For the most up-to-date information on VPLEX Metro and the applications it supports,
please refer to the VPLEX EMC Simple Support Matrix located on support.emc.com.
This white paper is based on the systems architecture of EMC VPLEX Metro and VMware
vCloud Air.
Audience
• EMC Pre-Sales Organization for outlining and describing the architecture for their customers prior to purchase.
• EMC Global Services Application Support for effectively introducing the product into the environment and assuring that the implementation is specifically oriented to the customers' needs and negates any possible DU/DL and/or application failure or misunderstanding of such failures.
• EMC customers interested in deploying VPLEX Witness, or who have deployed VPLEX and need a solid understanding of how VPLEX Metro benefits from VPLEX Witness.
Introduction
EMC VPLEX Metro uses a cluster guidance mechanism known as VPLEX Witness to provide
continuous availability in the event of a site failure. Using VPLEX Witness ensures that Continuous
Availability can be delivered by VPLEX Metro. Continuous Availability means that regardless of site
or link/WAN failures, data will automatically remain online in at least one of the locations. The
challenge for some VPLEX Metro customers is providing a physically isolated site to use as a 3rd fault
domain. This is where public cloud providers can be leveraged to solve the problem. Since
VPLEX Witness is a virtual machine, it can easily be deployed within a public cloud that is up to
1000ms (round trip) away from each of the two primary VPLEX sites.
Why is VPLEX Witness critical?
When setting up a single distributed volume or a group of distributed volumes, preference rules are configured. It is
the preference rule that determines the outcome after failure conditions such as a site failure or a dual
WAN link partition. The preference rule can be set to Site A preferred, Site B preferred, or no
automatic winner for each distributed volume and/or group of distributed volumes.
Overall, the following effects on a single or group of distributed volumes can be observed under
each of the failure conditions listed in Table 1:
VPLEX cluster partition (dual WAN link partition):
    Cluster A Preferred:   Site A ONLINE,    Site B SUSPENDED   -> GOOD
    Cluster B Preferred:   Site A SUSPENDED, Site B ONLINE      -> GOOD
    No automatic winner:   Site A SUSPENDED, Site B SUSPENDED   -> SUSPENDED (by design)

Site A fails:
    Cluster A Preferred:   Site A FAILED,    Site B SUSPENDED   -> BAD (by design)
    Cluster B Preferred:   Site A FAILED,    Site B ONLINE      -> GOOD
    No automatic winner:   Site A FAILED,    Site B SUSPENDED   -> SUSPENDED (by design)

Site B fails:
    Cluster A Preferred:   Site A ONLINE,    Site B FAILED      -> GOOD
    Cluster B Preferred:   Site A SUSPENDED, Site B FAILED      -> BAD (by design)
    No automatic winner:   Site A SUSPENDED, Site B FAILED      -> SUSPENDED (by design)

Table 1: Failure scenarios without VPLEX Witness
Table 1 shows that with the use of preference rules alone (without VPLEX Witness), some
scenarios require manual intervention to bring the VPLEX volume online at a given
VPLEX cluster. For example, if Site A is the preferred site and Site A fails, Site B will also suspend.
This is why VPLEX Witness matters so much: it can dramatically improve the situation. Because it sits in an
independent fault domain and can triangulate across the network, VPLEX Witness can correctly
diagnose failures and provide guidance to each of the clusters. This allows VPLEX Metro to provide an
active path to the data in both the dual WAN partition and full site loss scenarios, as shown in Table
2:
VPLEX cluster partition (dual WAN link partition):
    Cluster A Preferred:   Site A ONLINE,    Site B SUSPENDED   -> GOOD
    Cluster B Preferred:   Site A SUSPENDED, Site B ONLINE      -> GOOD
    No automatic winner:   Site A SUSPENDED, Site B SUSPENDED   -> SUSPENDED (by design)

Site A fails:
    Cluster A Preferred:   Site A FAILED,    Site B ONLINE      -> GOOD
    Cluster B Preferred:   Site A FAILED,    Site B ONLINE      -> GOOD
    No automatic winner:   Site A FAILED,    Site B SUSPENDED   -> SUSPENDED (by design)

Site B fails:
    Cluster A Preferred:   Site A ONLINE,    Site B FAILED      -> GOOD
    Cluster B Preferred:   Site A ONLINE,    Site B FAILED      -> GOOD
    No automatic winner:   Site A SUSPENDED, Site B FAILED      -> SUSPENDED (by design)

Table 2: Failure scenarios with VPLEX Witness
Table 2 shows the results when VPLEX Witness is deployed: failure scenarios become self-managing (i.e., fully automatic).
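To make the outcomes in Tables 1 and 2 concrete, the following minimal Python sketch models the preference rule and Witness behavior described above. It is purely illustrative; the function and parameter names are hypothetical and are not part of any VPLEX interface.

# Illustrative model of the outcomes in Tables 1 and 2; not a VPLEX API.
def volume_state(scenario, preference, witness):
    """Return (site_a_state, site_b_state) for a distributed volume."""
    if scenario == "partition":  # dual WAN link failure, both sites still running
        if preference == "cluster_a":
            return ("ONLINE", "SUSPENDED")
        if preference == "cluster_b":
            return ("SUSPENDED", "ONLINE")
        return ("SUSPENDED", "SUSPENDED")  # no automatic winner: suspended by design
    if scenario == "site_a_fails":
        if preference == "no_winner":
            return ("FAILED", "SUSPENDED")  # suspended by design
        # With Witness, the surviving site is brought online even when it is not
        # the preferred site; without Witness it stays suspended (manual intervention).
        survivor = "ONLINE" if (witness or preference == "cluster_b") else "SUSPENDED"
        return ("FAILED", survivor)
    if scenario == "site_b_fails":
        if preference == "no_winner":
            return ("SUSPENDED", "FAILED")  # suspended by design
        survivor = "ONLINE" if (witness or preference == "cluster_a") else "SUSPENDED"
        return (survivor, "FAILED")
    raise ValueError(scenario)

# Example: Site A is the preferred site, and Site A fails.
print(volume_state("site_a_fails", "cluster_a", witness=False))  # ('FAILED', 'SUSPENDED') -> manual intervention
print(volume_state("site_a_fails", "cluster_a", witness=True))   # ('FAILED', 'ONLINE')    -> automatic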
VPLEX Witness Deployment Requires a 3rd Fault Domain
VPLEX Witness must be deployed within an independent failure domain so that it is isolated from
events within either of the two VPLEX clusters that form VPLEX Metro. What is an independent
failure domain? It is a domain that does not share any resources with another domain; in short, it
needs to operate independently from the domains it is monitoring. The third site in our example is
the public cloud provided by VMware vCloud Air.
EMC Delivery and Service Offerings
EMC offers standard delivery and professional services for public cloud based VPLEX
Witness deployments. These are not net-new offerings, but rather variations on the installation
options EMC already provides. Customers still have to supply the information needed by services to
complete the installation, just as they would if it were installed on customer premises.
VPLEX Witness Virtual Machine Installation Requirements
The Cluster Witness Server is a Linux process that runs in a SLES 11 (64-bit) guest OS VM. The VM is
packaged as an Open Virtualization Format Archive (OVA), and installation follows the standard
OVA deployment workflow within the vSphere Web Client or vSphere Desktop Client. Since VPLEX
Witness is a virtual machine, it can be hosted by a public cloud provider and then connected to
each of the two VPLEX Metro sites using a VPN connection. It requires a publicly accessible IP
address (for example, using SNAT/DNAT rules plus a public edge gateway IP) and an IP network
connection to the VPLEX management server at each site.
Securing Your IP Management Network When Using Public Cloud
Does opening up a port to a cloud-based witness pose a threat? Certainly, any port open in a
firewall can be exploited if a weakness is found or credentials are leaked. To bolster security and reduce
your risk, it is recommended that a secondary perimeter be set up around the VPLEX management
servers to prevent the possibility of anyone using them as a hop onto other management resources. See
the EMC VPLEX Security Configuration Guide for port usage, and work with your IP network team to
ensure that the secondary perimeter around the VPLEX management servers properly secures your
resources.
Host and Networking Requirements for VPLEX Witness
Server host requirements
Host hardware:
    Refer to the ESSM for a list of supported ESX versions.
    To ensure a trouble-free installation of ESX, verify the hardware is compliant as described in the
    VMware Hardware Compatibility Guide (http://www.vmware.com/resources/compatibility/search.php), including:
        System compatibility
        I/O compatibility (Network and HBA cards)
        Storage compatibility
        Backup software compatibility
    Install and run only on servers with 64-bit x86 CPUs. NOTE: 32-bit is no longer supported.
    Verify Intel Virtualization Technology (VT) is enabled in the host's BIOS.
    NOTE: The VPLEX Witness VM is not supported on VMware Server, VMware Player, VMware Fusion, or
    VMware Workstation.

CPU utilization:
    Allocate one vCPU for the Cluster Witness Server guest VM.

RAM utilization:
    Allocate 512 MB for the Cluster Witness Server guest VM.

Hard disk storage utilization:
    Allocate 2.5 GB of storage space for the Cluster Witness Server guest VM. If deploying on VMware FT,
    the storage must be visible to all hosts in the VMware cluster.

Network interface card:
    1 GigE NIC with one Ethernet port connected to the IP management network.

Network adapter:
    It must be possible to allocate two virtual network adapters for use by the Cluster Witness Server guest VM.

Network addresses:
    Host must be configured with a static IP address on a public network.

Power:
    Host must be connected to a UPS (Uninterruptible Power Source) to protect against power outages.

BIOS:
    Host should enable the Virtualization Technology (VT) extension in the BIOS.
    This ensures proper functionality and performance of the Cluster Witness Server VM.

Security:
    Access to the Cluster Witness Server host (the ESX server) must be secured via a password (assigned
    and configured by the customer).
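The minimums above can also be captured as a simple checklist. The following Python sketch is illustrative only; the deployed values are hypothetical inputs that would come from your own inventory or vSphere tooling, and the names used are not part of any EMC or VMware interface.

# Illustrative checklist for the Cluster Witness Server VM minimums listed above.
# The "deployed" values are hypothetical; gather them from your own tooling.
REQUIRED = {"vcpus": 1, "ram_mb": 512, "disk_gb": 2.5, "vnics": 2}

def check_witness_vm(deployed):
    """Return a list of requirements the deployed VM does not meet."""
    problems = []
    for key, minimum in REQUIRED.items():
        if deployed.get(key, 0) < minimum:
            problems.append(f"{key}: have {deployed.get(key, 0)}, need at least {minimum}")
    return problems

print(check_witness_vm({"vcpus": 1, "ram_mb": 512, "disk_gb": 2.5, "vnics": 2}))  # [] means all minimums met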
Networking requirements
The VPLEX Witness virtual machine must be connected to the same IP management network that
provides inter-cluster management connectivity at VPLEX Site1 and VPLEX Site2.
High Availability:
    The IP management network for the Cluster Witness Server must use physically separate networking equipment
    from either of the inter-cluster networks used by VPLEX.
    CAUTION: Failure to meet this deployment requirement significantly increases the risk of Data Unavailability in the
    event of simultaneous loss of inter-cluster connectivity as well as connectivity with the Cluster Witness Server.

Accessibility:
    Static IP addresses must be assigned to the public ports on each management server (eth3) and the public port in
    the Cluster Witness Server VM (eth0). If these IP addresses are in different subnets, the IP management network
    must be able to route packets between all such subnets.
    To confirm connectivity between subnets, use the ping -I eth3 cws-public-ip-address command from either of the
    management servers to the public IP address of the Cluster Witness Server. (Note: the -I flag uses the uppercase letter i.)
    Firewall configuration settings in the IP management network must not prevent the creation of IPsec tunnels.
    VPLEX Witness traffic, as well as VPLEX management traffic, uses VPN tunnels established on top of IPsec.
    The following protocols must not be filtered in either the inbound or outbound direction:
        Authentication Header (AH): IP protocol number 51
        Encapsulating Security Payload (ESP): IP protocol number 50
    The following ports must be open on the firewall:
        Internet Key Exchange (IKE): UDP port 500
        NAT Traversal in the IKE (IPsec NAT-T): UDP port 4500
    The IP management network must be capable of transferring SSH traffic between the management servers and the
    Cluster Witness Server. The following ports must be open on the firewall and not filtered in either the incoming or
    outgoing direction:
        Secure Shell (SSH): TCP port 22
        Domain Name Service (DNS): UDP port 53
    Ensure that both outgoing and incoming traffic on UDP port 53 (DNS) is allowed for the network where the ESX host
    with the Cluster Witness Server VM is deployed.
    The IP management network must not be able to route to the following reserved VPLEX subnets:
        128.221.252.0/24
        128.221.253.0/24
        128.221.254.0/24
    If VPLEX is deployed with an IP inter-cluster network, the inter-cluster network must also not be able to route to
    these reserved VPLEX subnets.
    IMPORTANT: If any of these networks are accessible from the public IP management network, contact EMC
    Customer Support.

Bandwidth:
    A typical VPLEX Witness deployment generates 2-3 Kbps of duplex VPLEX Witness IP traffic (transmitted over the IP
    management network) per director per cluster. For example, a quad-engine cluster (8 directors) will generate
    16-24 Kbps of duplex VPLEX Witness IP traffic.

MTU:
    The required minimum value for the Maximum Transmission Unit (MTU) is 1500 bytes. Configure the MTU as 1500
    or larger.

Latency:
    Round-trip latencies on the management network between the Cluster Witness Server and both management
    servers in the VPLEX Metro or VPLEX Geo should not exceed 1 second.
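The reachability and latency requirements above can be spot-checked from either management server before installation. The following Python sketch is a minimal, illustrative check and nothing more: the Witness address is a placeholder, only the SSH port and basic ping reachability via eth3 are tested, and it does not exercise the IKE/IPsec ports or measure the exact round-trip time.

# Minimal spot-check of Cluster Witness Server reachability from a management server.
# The address below is a placeholder; IKE/IPsec (UDP 500/4500, ESP, AH) are not tested here.
import socket
import subprocess

CWS_PUBLIC_IP = "cws-public-ip-address"   # replace with the Witness public IP

def ssh_port_open(host, timeout=5):
    """Return True if TCP port 22 on the Witness answers (SSH requirement)."""
    try:
        with socket.create_connection((host, 22), timeout=timeout):
            return True
    except OSError:
        return False

def ping_via_eth3(host):
    """Run the documented check (ping -I eth3 <address>) and report success."""
    result = subprocess.run(["ping", "-I", "eth3", "-c", "5", host],
                            capture_output=True, text=True)
    return result.returncode == 0

print("SSH (TCP 22) reachable:", ssh_port_open(CWS_PUBLIC_IP))
print("Ping via eth3 succeeded:", ping_via_eth3(CWS_PUBLIC_IP))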
Deployment Example: VMware vCloud Air
Example Network Topology:
Note: The steps below provide an overview of the VPLEX Witness virtual machine
installation in a VMware vCloud Air virtual data center. They do not replace the
official VPLEX Witness installation documentation provided by the SolVe Desktop
application.
Step 1: Connect to the Public Cloud / Virtual Data Center you wish to deploy into. In this example,
the VDC is within VMware vCloud Air.
Note: In this example, VMware vCloud Director is used for .ova deployment. Install
the vCloud Director plug-in into your web browser if it has not already been
installed.
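As an alternative to the browser wizard shown in the following steps, the OVA can also be uploaded from a workstation using VMware's ovftool. The short Python sketch below simply wraps one such invocation; the vcloud:// locator fields (user, organization, VDC, vApp name) are placeholders and the exact option set is an assumption, so verify the syntax against the ovftool documentation for your vCloud Air / vCloud Director version.

# Illustrative only: scripted upload of the VPLEX Witness OVA with ovftool.
# Locator fields are placeholders and the ovftool options are assumptions;
# confirm against the ovftool documentation before use.
import subprocess

ova_path = "vplex-witness.ova"   # example filename; use the OVA provided by EMC
target = ("vcloud://user@vcloud.example.com:443"
          "?org=MyOrg&vdc=MyVDC&vapp=VPLEX-Witness")

subprocess.run(["ovftool", "--acceptAllEulas", ova_path, target], check=True)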
Step 2: Select Deploy Virtual Machine from the Virtual Machines tab.
Step 3: Select Create My Virtual Machine from Scratch.
Step 4: Click on the Add vApp from OVF icon.
Step 5: Follow the Add vApp from OVF wizard instructions and provide the path to the .ova file for
VPLEX Witness (provided by EMC).
Step 6: Review Details.
Step 7: Accept License
Step 8: Name the VPLEX Witness vApp
Step 9: Name the computer that will host the VM
Step 10: Set Network 1 and Network 2 to Static - IP Pool addresses. Set Network 1 to the default routed
network and Network 2 to the default isolated network. Click Next.
Note: Do not set IP addresses during OVA upload / deployment
Step 11: Leave CPU, Memory, and Disk at the default settings. Click Next.
Step 12: Select Power on vApp after wizard is finished. Click Finish.
Step 13: Observe .ova deployment progress and completion.
Step 14: Confirm the VPLEX Witness VM is deployed and powered on. If it is not powered on, power it on
now. Follow the official VPLEX Witness install procedures (available from SolVe) to set the IP address of the
VPLEX Witness via the console.
Step 15: Modify firewall rules to accommodate IP traffic from the VPLEX Witness VM to the VPLEX clusters.
Click on the Gateways tab and then click on the gateway you will be sending VPLEX IP traffic
through. Then click the Firewall Rules tab and modify the rules to allow traffic to and from the VPLEX
Witness VM to the VPLEX Management Server at each site.
Add/Modify firewall rules to allow all traffic from the VPLEX Witness VM to the VPLEX Management
Servers:
In this example, all traffic in the vCloud Air data center (not just VPLEX Witness) is allowed to
connect back to the management network. If very granular rules are used, ensure that the port,
protocol, and IP requirements (shown earlier in this document) for VPLEX Witness IP traffic are in
place. A direct VPN connection is in place between vCloud Air and the two physical data centers
that contain VPLEX, so this IP configuration is secure. The VDC in this case also hosts other applications
and IP traffic, so opening it up to all traffic was deemed acceptable. In some cases, a narrower
set of rules will be used to limit traffic to just the VPLEX Witness VM and just the VPLEX Management
Servers.
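For reference, a narrower rule set, reduced to the ports and protocols listed in the networking requirements earlier in this document, could be expressed along the following lines. This is an illustrative data structure only; the addresses are placeholders, and it is not vCloud Air or edge gateway configuration syntax.

# Illustrative narrow firewall rule set between the Witness VM and the two
# VPLEX Management Servers; addresses are placeholders, not real systems.
WITNESS_IP = "203.0.113.10"
MGMT_SERVER_IPS = ["198.51.100.21", "198.51.100.22"]   # Site 1 and Site 2 public (eth3) IPs

ALLOWED = [
    ("tcp", 22, "SSH"),
    ("udp", 500, "IKE"),
    ("udp", 4500, "IPsec NAT-T"),
    ("udp", 53, "DNS"),
    ("esp", None, "Encapsulating Security Payload (IP protocol 50)"),
    ("ah", None, "Authentication Header (IP protocol 51)"),
]

rules = []
for mgmt_ip in MGMT_SERVER_IPS:
    for proto, port, purpose in ALLOWED:
        # Allow the same traffic in both directions between Witness and management server.
        rules.append({"src": WITNESS_IP, "dst": mgmt_ip, "proto": proto, "port": port, "purpose": purpose})
        rules.append({"src": mgmt_ip, "dst": WITNESS_IP, "proto": proto, "port": port, "purpose": purpose})

for rule in rules:
    print(rule)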
Step 16: Confirm the VPLEX Witness VM is now connected to the gateway.
Step 17: Confirm Virtual Machine Status and Network Settings.
Step 18: Follow the standard VPLEX Witness installation instructions (available from the SolVe Desktop
application) to complete the configuration of VPLEX Witness.
More Info
Contact your EMC Account or Professional Services team for further details.
Conclusion
This paper has focused on VPLEX Metro and why VPLEX Witness is such a critical component in achieving
continuous availability of storage. VPLEX Witness is a virtual machine that provides intelligent
guidance to each of the VPLEX Metro sites from an independent 3rd site. As independent tertiary
fault domains are not always available to customers, public cloud providers like VMware vCloud Air
can fill this gap in traditional infrastructure. The installation of VPLEX Witness into a virtual data
center is not unlike a traditional deployment, with a few extra steps to configure the network,
firewalls, and VPN connectivity to the public cloud. Once deployed into the cloud, the
management and operation of VPLEX Witness is identical to a traditional physical data center
implementation.