HP CloudSystem Matrix
How-To Guide: ESXi Cluster Provisioning
Technical white paper
Contents
Introduction
Prerequisites
Suggested configuration
    Virtual Connect configurations
    Matrix OE infrastructure operation network configuration
Setting up the environment
    Step 1: Install vSphere PowerCLI on the CMS
    Step 2: Configure Auto Deploy
    Step 3: Configure Operations Orchestration properties
    Step 4: Register vCenter with Matrix
    Step 5: Register Auto Deploy as a provisioning source for the CMS
    Step 6: Configure the Auto Deploy deployment network in IO
    Step 7: Enter CMS Administrator credentials in Windows service RSJRAS
Creating the ESXi cluster provisioning template
    Step 1: In template designer, move components to the working area
    Step 2: Edit the server configuration
    Step 3: Attach OO workflows
    Step 4: Configure the networks
    Step 5: Select the ESXi software
    Step 6: Configure the physical storage
Provisioning the cluster
    Step 1: Make sure that the necessary server and storage resources are available
    Step 2: Create an IO service based on the template
    Step 3: Verify the configuration of the provisioned cluster
    Step 4: Provision a virtual service on the cluster
    Step 5: Perform IO life-cycle operations on a cluster
Troubleshooting
Appendix 1: Obtaining a depot for Auto Deploy
Appendix 2: OO system properties, system accounts, and flows
    System property descriptions
    System accounts
    OO flow descriptions
    Customizing the OO flows
    Customizing the VMware host profile
Appendix 3: Validating the Auto Deploy installation
Appendix 4: Configuring storage in Matrix
    Predefined single-volume storage pool entries
    Predefined multivolume storage pool entries
    Predefined SPM-managed static storage pool entries
    SPM-managed autocarved storage pool entries (HP Matrix Operating Environment 7.1 and later)
For more information
Introduction
This document describes how to provision a VMware 5.0 ESXi cluster using the infrastructure orchestration (IO)
capabilities of the HP Matrix Operating Environment 7.0. By using an infrastructure orchestration template, you can
quickly and easily deploy ESXi hosts. This new feature provides the following:
• The ability to define an ESXi cluster configuration solution in terms of networking, servers, and storage in the form of a Matrix OE template. The template is created using the graphical Matrix OE infrastructure orchestration designer and can be made available in the Matrix OE service catalog.
• Special workflows that are unique to the ESXi clustering feature are added to the ESXi cluster Matrix OE templates using the designer tool.
• Access to the defined ESXi cluster solution template by administrators or (optionally) users by using the Matrix OE infrastructure orchestration self-service portal. From the portal, a user can request an instance of the template for provisioning.
• Automation of the ESXi cluster provisioning based on the template by the Matrix OE infrastructure orchestration engine. The specific tasks are a combination of the generic functions in the orchestrator and those of the workflows attached to the template and initiated by the orchestration engine. These tasks automatically perform the following:
  – Configure physical blade servers
  – Connect the physical servers to an Auto Deploy provisioning engine and its network
  – Trigger initial installation of ESXi on all cluster members
  – Configure and attach physical storage and networks to all cluster members
  – Communicate with the vSphere server to create the cluster and configure cluster members to match the template specification by using vSphere host profiles
  – Integrate the completed cluster into the server resource pool framework of IO so that it is ready as a target for virtual machine (VM) provisioning through the Infrastructure as a Service (IaaS) paradigm of IO
• Management and configuration of the pool of resources from which the cluster will be created, using the Matrix OE console in HP Systems Insight Manager (HP SIM). This administrative interface can be used to configure and manage the available networks, servers, and storage resources from which the ESXi clusters will be provisioned. It also can be used to manage and approve individual requests for ESXi clusters within those resource pools.
ESXi cluster provisioning in Matrix OE is accomplished by Matrix OE components working in conjunction with VMware vSphere components in the environment. The total solution comprises the following:
• Software components running on the Matrix Central Management Server (CMS)
  – User interfaces for the Matrix OE IO designer, self-service portal, and console (SIM)
  – The Matrix OE service catalog that contains templates that define ESXi cluster solutions
  – The Matrix OE IO engine to automate installation and configuration of an ESXi cluster based on a template in the service catalog
  – (New) The workflow library that contains ESXi provisioning–specific workflows added in this release
  – (New) VMware PowerCLI tools that the Matrix OE workflows use to communicate with the vSphere and Auto Deploy servers
• A VMware vSphere server
  – Manages ESXi clusters from a VMware perspective
  – Is registered with the Matrix CMS via standard Matrix procedures
• A VMware Auto Deploy server (can be the same server as the vSphere server or independent)
  – Responds to requests to provision ESXi servers over a provisioning and management network
  – Contains the ESXi image depot for HP blade servers
  – Contains credentials for that depot
  – Has preloaded VMware licenses for automatic allocation as new ESXi servers are provisioned
• One or more HP CloudSystem Matrix enclosures
  – Physical blades that will host ESXi cluster member servers
  – HP Virtual Connect modules that configure and define available networks that can be attached to the blade servers that will be ESXi hosts
• An Auto Deploy/VMware console network
  – An Auto Deploy provisioning and vSphere management network that connects vSphere, Auto Deploy, and target blade servers
The Matrix OE infrastructure orchestration software communicates with the vSphere server to define the cluster, add the ESXi hosts to the vSphere configuration and new cluster, and set the properties of the servers in VMware to match those specified in its template.
The Matrix OE infrastructure orchestration software communicates with the Auto Deploy server to configure provisioning rules as needed to make sure that physical blades being provisioned are provisioned with the HP depot.
Prerequisites
Before you create the cluster, be sure to meet all of the following prerequisites:
• Define the cluster to be provisioned by scoping the following:
  1. Number of hosts in the cluster
  2. Number and sizes of the data stores
  3. Cluster networks (management, vMotion, VM networks, and so on)
• Make sure that HP CloudSystem Matrix is installed and operational. In the examples in this white paper, CloudSystem Matrix 7.0 is used.
• Make sure that VMware vCenter Server, with the VMware Auto Deploy service, is installed on a server other than the Central Management Server and is operational. Auto Deploy is a component that is installed when the vCenter server is installed.
• Make sure that the vCenter Server credentials do not contain a $ sign.
For more information, see "vSphere Installation and Setup" at http://pubs.vmware.com/vsphere50/index.jsp?topic=/com.vmware.vsphere.install.doc_50/GUID-7C9A1E23-7FCD-4295-9CB1C932F2423C63.html.
To validate that Auto Deploy is properly installed, see Appendix 3: Validating the Auto Deploy installation.
NOTE: Due to the dynamic nature of MAC addresses in Matrix, VMware Auto Deploy IP addresses will be dynamic. Static and automatic IP addresses are not an option in the current solution.
VMware Auto Deploy does not set the host name on a provisioned VM host, so DHCP and DNS host name reservations must be configured before the server is provisioned. Make sure that the Primary Domain Controller (PDC) server has DHCP address leases and reservations configured, as well as DNS forward lookup and reverse lookup entries for each VM host.
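On a Windows-based domain controller, those DHCP reservations and DNS records can be created from the command line. The following is a minimal sketch; the scope, MAC address, IP addresses, zone names, and host name are placeholder values for your environment, not values supplied by this solution:

```powershell
# Placeholder scope, MAC, IP, and names -- substitute your environment's values.
# DHCP reservation so the blade's Virtual Connect-assigned MAC always receives the same IP:
netsh dhcp server scope 192.0.2.0 add reservedip 192.0.2.21 00155DAABB01 esxhost01 "ESXi host 1"

# DNS forward lookup (A) record for the host:
dnscmd /RecordAdd example.local esxhost01 A 192.0.2.21

# DNS reverse lookup (PTR) record for the same address:
dnscmd /RecordAdd 2.0.192.in-addr.arpa 21 PTR esxhost01.example.local
```

Repeat the reservation and both lookup records for each VM host in the planned cluster.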
Make sure that the vSphere Client is installed and operational so that it can access the vCenter server. In examples
in this white paper, vCenter Server 5.0 and vSphere Client 5.0 are used.
Make sure that storage for the cluster data stores is defined as Matrix OE logical server management (LSM) storage
pool entries (SPEs) with appropriate data size, sharers, World Wide Names (WWNs), and so on. Configurations
are made on the LSM storage pool entry (SPE) configuration page, accessible on the IO console Storage tab
through the Manage Storage Pool link. For more information, see Appendix 4: Configuring storage in Matrix. It is a
good idea to become familiar with and test physical shared disk provisioning using IO before you proceed.
Configure IO networks with virtual local area network (VLAN) ID and VLAN trunk assignments as necessary.
Suggested configuration
Virtual Connect configurations
In this example, a Virtual Connect Flex-10 Ethernet module is installed in Bay 1. A single external network switch (a
ProCurve Switch 3500yl-24G) is used.
NOTE: Optionally, a second Virtual Connect Flex-10 Ethernet module can be installed in Bay 2 for redundancy configurations that require customized HP Operations Orchestration (OO) workflows. Also, two external network switches can be used for high availability.
The Virtual Connect Flex-10 Ethernet modules are configured as in the Virtual Connect FlexFabric Cookbook
(http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02616817/c02616817.pdf) but without the
second interconnect for an active-active configuration. A shared uplink set, which is presented to the server as
separate networks, is defined to carry deployment/management (ESX console) and ESX vMotion traffic. Additionally,
a vNet network is defined to carry the multiple VLAN traffic of production networks that will be configured through vCenter by the provided OO workflows to be presented to virtual machines.
A shared uplink set is defined:
Bay 1, Port X1, and associated networks VMware Management and vMotion are added to the first shared uplink set
(in this example, VLAN-Trunk-1).
A vNet Ethernet network is defined, with VLAN tunneling enabled:
Bay 1, Port X2 is added to the first vNet (in this example, Trunk).
The corresponding ports in the external network switches are configured to support the appropriate VLAN IDs (in this
example, 101 and 102) for the shared uplink set associated networks (in this example, VMware Management and
vMotion) and the VLANs of the production networks (in this example, 2000, 2042, 2309, 2740, and 2936).
The following tables summarize the Virtual Connect configuration used in this example.
Table 1. Virtual Connect Ethernet networks

Ethernet network     Enable VLAN tunneling   Shared uplink set   External VLAN ID   External uplink port
VMware Management    —                       VLAN-Trunk-1        101                —
vMotion              —                       VLAN-Trunk-1        102                —
Trunk                Yes                     —                   —                  Bay 2 Port X2
Table 2. Virtual Connect shared uplink sets

Shared uplink set                     External uplink port   Associated networks (name / VLAN ID)
VLAN-Trunk-1 (Infrastructure trunk)   Bay 1 Port X1          VMware Management / 101; vMotion / 102
Matrix OE infrastructure operation network configuration
In this example, the Trunk network is configured in Virtual Connect as a VLAN tunnel trunk that carries production
VLANs with IDs 2000, 2042, 2309, 2740, and 2936. These VLAN networks are not defined in Virtual Connect
and therefore are not available in the IO network inventory. The network must first be created and designated as
carried on the VLAN tunnel trunk named Trunk. Alternatively, the production VLANs can be networks defined in
Virtual Connect and will be aggregated, as necessary, onto a single multinetwork port when the Virtual Connect
profile is created.
For each production network, perform the following steps to create the network in IO and designate it as carried on the VLAN trunk named Trunk:
1. If the network does not already exist on the IO Networks tab, click Create Network.
   a. Specify the network name, network address/netmask, usable IP address space, Domain Name System (DNS), and any other optional values.
   b. Click Save.
2. Select the IO network and click Edit Network.
   a. Specify the network address/netmask, usable IP address space, DNS, and any other optional values.
   b. Enter the VLAN ID (2000, 2042, 2309, 2740, or 2936).
   c. Click the Trunk tab.
   d. Select the Trunk network (indicating that the Trunk network carries this network).
   e. Click Save.
In this example, IO is expected to satisfy the Trunk network, defined in an IO service template, by attaching a Virtual Connect VLAN tunnel trunk that carries those networks to the Virtual Connect profile network port. IO can also satisfy the Trunk network by aggregating the desired networks onto a Virtual Connect profile multinetwork port. When all desired networks are defined in the Virtual Connect domain group, IO aggregates those networks onto a multinetwork port in preference to using any VLAN tunnel trunk that carries the same networks.
NOTE: Creating both a Virtual Connect network and a Virtual Connect VLAN trunk that carry the same network will result in network connectivity issues unless the configuration is manually, and carefully, set up as a failover configuration.
Setting up the environment
This section describes the necessary environment for provisioning a cluster. The following diagram shows the
components and relationships that must be in place for cluster provisioning. The arcs are annotated with step number
and name, and the step descriptions follow the diagram.
[Figure: Environment setup. The vCenter server hosts the depot and the Auto Deploy service; the CMS hosts vSphere PowerCLI, Operations Orchestration, Matrix infrastructure orchestration, and the RSJRAS Windows service. The arcs are labeled with the seven setup steps: 1. Install vSphere PowerCLI; 2. Configure Auto Deploy; 3. Configure OO properties; 4. Register vCenter with Matrix; 5. Register Auto Deploy as a provisioning source for the CMS; 6. Configure the deployment network in IO; 7. Enter the CMS Administrator credentials.]
Step 1: Install vSphere PowerCLI on the CMS
PowerCLI is a Windows PowerShell snap-in that can be downloaded from VMware. The process is documented in
the following VMware document:
http://www.vmware.com/support/developer/PowerCLI/PowerCLI501/doc/vsph_powercli_usg501.pdf
NOTE: Make sure that the properties are set to support remote signing.
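One way to satisfy this on the CMS is to set the PowerShell execution policy from an elevated PowerShell or PowerCLI prompt (a sketch; RemoteSigned allows locally authored scripts while requiring downloaded scripts to be signed):

```powershell
# Allow local scripts to run and require remote scripts to be signed:
Set-ExecutionPolicy RemoteSigned -Scope LocalMachine

# Confirm the effective policy:
Get-ExecutionPolicy
```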
Step 2: Configure Auto Deploy
• Configure the CMS to use Auto Deploy by creating a ruleset to assign Auto Deploy host profiles to newly added blades that will become the cluster hosts. To do this, walk through "Auto Deploy Proof of Concept Setup" in the vSphere 5 Documentation Center:
http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc_50%2FGUIDA9FFEDEE-1A3D-4EFD-A130-F6E78C727380.html
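As an illustrative sketch of the kind of ruleset that walkthrough produces, run from a PowerCLI session with the Auto Deploy cmdlets loaded (the rule name, image profile name, and cluster name below are placeholders, not values mandated by this solution):

```powershell
# Placeholder names -- use an image profile from your depot and your own cluster.
# Assign an image profile and a destination cluster to every host that
# network-boots through Auto Deploy:
New-DeployRule -Name 'esxi-cluster-rule' -Item 'ESXi-5.0.0-standard', 'ClusterABC' -AllHosts

# Activate the rule by adding it to the working rule set:
Add-DeployRule -DeployRule 'esxi-cluster-rule'

# Confirm the active rules:
Get-DeployRuleSet
```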
• After the Auto Deploy server has been installed, confirm that it can be connected to from the CMS.
  a. Launch the PowerCLI interface by clicking Start -> All Programs -> VMware -> VMware vSphere PowerCLI.
  b. Connect to the vCenter server by using the following command:
     Connect-VIServer -Server <ip address> -User <user> -Password <password>
You can ignore the following certificate warnings.

PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI>
Connect-VIServer -Server localhost -User sp_admin -Password hpinvent
WARNING: There were one or more problems with the server certificate:
* The X509 chain could not be built up to the root certificate.
* The certificate's CN name does not match the passed value.

Name        Port   User
----        ----   ----
localhost   443    Administrator

PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI>
TIP: Test the Auto Deploy solution to provision a server blade outside of IO before continuing.
Step 3: Configure Operations Orchestration properties
The IO implementation for provisioning hypervisors utilizes special-purpose workflows that are packaged with the Matrix software by HP. This section identifies the system properties and system accounts that must be set in the OO instance configuration.
1. Launch OO Studio from the CMS by clicking Start -> All Programs -> Hewlett Packard -> Operations Orchestration -> HP Operations Orchestration Studio.
   The parameters that the workflows use are located in paths under two branches of the Configuration tree: System Accounts and System Properties. These locations contain numerous other parameters that IO uses internally, but do not modify them. Matrix will look at only those settings specifically used by the hypervisor provisioning workflows. The process for setting these values is as follows:
   a. Select the system account or system property.
   b. Right-click Repository -> Check out.
   c. In the open variable window:
      i. Set the value.
      ii. Save (click the diskette icon near the top tab).
      iii. Close (click x).
   d. Right-click Repository -> Check in.
2. Set the following values:
   a. Open Configuration -> System Accounts:
      o vCenterCredentials: The credentials to access the vCenter being used with Auto Deploy
        – User name: <AdminUserName>
        – Password: <AdminPwd>
      o VMhostCredentials: System accounts; credentials to be set by IO for the ESX hosts
        – User name: root
        – Password: <AdminPwd>
      Note that VMware passwords must conform to a certain level of complexity. The rules are documented at:
      http://pubs.vmware.com/vsp40/wwhelp/wwhimpl/js/html/wwhelp.htm#href=server_config/c_password_complexity.html#1_14_9_9_8_1
   b. Open Configuration -> System Properties:
      o DepotURL: <path of the ESXi 5.0 depot image that will be provisioned on blades>
        This is the pathname of the location where the depot is stored on the CMS. For more information, see Appendix 1 (Obtaining a depot for Auto Deploy).
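Before setting DepotURL, you can confirm that the path points at a usable depot by loading it in a PowerCLI session and listing its image profiles. The path below is a placeholder; use the same value you intend to enter in DepotURL:

```powershell
# Placeholder path -- substitute the depot location on the CMS.
Add-EsxSoftwareDepot 'C:\depot\ESXi-5.0-depot.zip'

# A usable depot lists at least one ESXi 5.0 image profile:
Get-EsxImageProfile | Select-Object Name, Vendor, CreationTime
```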
      o vCenterIp: <IP address of vCenter>
Step 4: Register vCenter with Matrix
As with previous Matrix OE releases, the vCenter server that manages the environment must be registered with the
Matrix OE software. For instructions, see the HP Matrix Operating Environment 7.0 Infrastructure Orchestration User
Guide at http://www.hp.com/go/matrixoe/docs.
To determine whether the vCenter 5.0 server is already registered in SIM:
• Open the SIM hypervisor registration page.
• Click Options -> View VME Settings.
  o Look for the IP address of the vCenter 5.0 server that has Auto Deploy.
Step 5: Register Auto Deploy as a provisioning source for the CMS
Register the custom VMware vSphere Auto Deploy server in Systems Insight Manager.
• Open a command prompt on the CMS and run the following command:
  mxnodesecurity -a -p dsc_custom -c <user>:<password> -t on -n <vSphere Auto Deploy IP>
  where the credentials are those entered for the VM host in the earlier "Step 3: Configure Operations Orchestration properties" step.
NOTE: The vCenter and Auto Deploy servers can be located in the same server or in different servers.
• Verify that the vSphere Auto Deploy server was added to the Software tab in the infrastructure orchestration console.
  a. From within HP SIM, switch to the Matrix OE IO console by clicking Tools -> Infrastructure Orchestration.
  b. Open the Software tab.
The Auto Deploy server will appear as Generic Image for X.X.X.X and have a source of Custom. Note that unlike other software sources for IO, specific images being offered by the Auto Deploy server are not listed.
Step 6: Configure the Auto Deploy deployment network in IO
Auto Deploy and the vCenter software reside on the same server, so the same network is used for deployment and cluster management. In IO, a network must be configured with vCenter as a deployment server. To do this:
• Launch SIM in a browser window.
• Open the Matrix OE IO console by clicking Tools -> Infrastructure Orchestration.
• Open the Networks tab.
  a. Select the appropriate network from the table (VMware_Management in this configuration).
  b. Set the following:
     o Deployment Server to the IP address of vCenter with Auto Deploy
     o Boot Network to Yes
NOTE: A single custom deployment service can also be configured during installation. If multiple custom deployment services are required, they can be added later.
Step 7: Enter CMS Administrator credentials in Windows service RSJRAS
Edit the CMS Windows service RSJRAS (HP Operations Orchestration Remote Action Service) and change the credentials to those of the CMS Administrator:
• Run services.msc, locate the service named RSJRAS, and double-click it to open the properties.
• Click the Log On tab and select This account.
• Provide the account name and password used by the CMS Administrator user.
• Confirm and restart the service for the credentials to take effect.
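The same change can be scripted from an elevated prompt on the CMS. This is a sketch; the account name and password are placeholders for the CMS Administrator credentials (note that sc.exe requires a space after each `=`):

```powershell
# Placeholder credentials -- substitute the CMS Administrator account and password.
sc.exe config RSJRAS obj= ".\Administrator" password= "<AdminPwd>"

# Restart the service so the new logon credentials take effect:
Restart-Service -Name RSJRAS
```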
Creating the ESXi cluster provisioning template
Create a service template for deploying a physical VM host or ESX cluster. The following diagram shows the key features of a cluster provisioning template.
[Figure: Cluster provisioning template. A server group of the VM host blades (hypervisor provisioning workflows attached; uses the ESXi image on the vCenter Auto Deploy server; no boot disk is required) is connected to a management and Auto Deploy network, a vMotion network used for migrating guest VMs, data store disks shared by the cluster, and a trunk that carries the networks used by guest VMs.]
The following sections describe the process steps to create an example template.
Step 1: In template designer, move components to the working area
• Launch infrastructure orchestration designer at https://<cms-name-or-ip>:51443/hpio/designer/ and click New.
• Drag a physical server group, a physical storage component, two networks, and one or more trunk components onto the working area.
• Connect the physical server group to the physical storage and the networks. No boot disk is required.
Step 2: Edit the server configuration
• Right-click the physical server group and select Edit Server Group Configuration. Note that the server type is Physical.
  a. Give the server group a name that will be used as the name of the provisioned cluster.
  b. For Cluster Type, select VM Host. VMware ESX is automatically selected as the subtype.
  c. Make sure that Processor Architecture is set to x86 64-bit.
• Set the number of servers in the cluster, and fill in other optional information. A cluster can contain one or more servers.
NOTE: To add servers after the template is published and the service is created, select a service on the Services tab in the infrastructure orchestration console, self-service portal, or organization administrator portal. Click Details, and then select Add servers from the Actions list.
Step 3: Attach OO workflows
Configure the IO service template to call Operations Orchestration workflows.
• In infrastructure orchestration designer, select the service template and edit the server group configuration.
  a. Select Workflows, and then click Add.
  b. Select Select workflow from tree and expand the tree by clicking ++.
  c. Select 00_ConfigureVMADBootProfile_server_begin, select the Configure Server Beginning check box, and click Add.
  d. Click Add and expand the tree by clicking ++.
  e. Select 05_CheckServerHasBooted_server_end, select the Configure Server End check box, and click Add.
  f. Click Add and expand the tree by clicking ++.
  g. Select 10_ConfigureVMhostDataStore_group_end, select the Configure Server Group End check box, and click Add.
  h. Click Add and expand the tree by clicking ++.
  i. Repeat for the remaining eight server group end workflows.
When this task is complete, the Summary tab displays the attached workflows.
NOTE: Appendix 2 (OO system properties, system accounts, and flows) contains a brief description of all the OO flows.
Workflows for life-cycle operations
It is necessary to add Operations Orchestration workflows to the template to support the following life-cycle
operations: add a server, add disks to a service, and delete a service. Without these workflows, the life-cycle
operations will not work. They are added to the service-level workflows, which are accessed from the Workflows
button in the title bar.
The following figure lists these workflows.
They should be added to the execution points as listed in the following figure.
Step 4: Configure the networks
The first network must be a deployment network connected to the Auto Deploy server. It must be specified by name
rather than by desired attributes.
The Networks tab in the IO console shows that VMware_Management is the deployment network selected in the network configuration.
The vMotion network can be specified by name or desired attributes.
The Trunk network carries VLANs for the guest VMs. These are selected as follows:
• Right-click Trunk and select Configure Trunk.
• Click the Config tab.
• Select one or more networks from All Available Networks and drag the networks or click >> to move the networks to the Selected Networks list. This selection defines a trunk network on a single NIC port that carries the selected networks. Only networks that are available from a common physical or virtual source (in the same Virtual Connect domain or VM host) can be grouped together in a trunk.
Step 5: Select the ESXi software
• Right-click the physical server group and select Edit Server Group Configuration.
• Click the Software tab.
• Select an ESXi image on the vCenter Auto Deploy server and then set OS Type to ESX.
Step 6: Configure the physical storage
• Right-click the physical storage component and select Edit Server Group Configuration.
• Click the Config tab. The storage type is FC-SAN. Note that SCSI over IP (iSCSI) attached storage is not supported.
• Select Specify desired attributes.
• Leave the RAID level set to Any.
• Leave Disk is bootable unselected.
• The setting for Redundant SAN paths to disk depends on how you configured your logical server storage pool entries. At least one matching storage pool entry is required for each service provisioned. Those entries might or might not be fully configured with redundant paths.
In vCenter, the data store is given a unique name constructed as follows:
auto-<lsgid>-<index number>-<vmhostName>
where <lsgid> is LogicalServerGroup:<UUID>, <index number> is a non-negative number starting at 0, and <vmhostName> is the name of the VM host. For example:
auto-LogicalServerGroup:53b4ff14-794b-4ba3-9d46-3105c116d1a6-0-ClustABC01
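The naming scheme can be illustrated with a short PowerShell sketch that composes the example name above (the UUID, index, and host name are the example values, not values IO requires):

```powershell
$lsgid      = 'LogicalServerGroup:53b4ff14-794b-4ba3-9d46-3105c116d1a6'
$index      = 0
$vmhostName = 'ClustABC01'

# Compose the data store name the same way the example shows:
"auto-$lsgid-$index-$vmhostName"
# -> auto-LogicalServerGroup:53b4ff14-794b-4ba3-9d46-3105c116d1a6-0-ClustABC01
```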
Optionally, you can create additional shared data storage as follows:
a. Drag another physical storage component and connect it to the ESX cluster component.
b. Select the Disk is shared across servers check box. (Do not select the Disk is bootable check box.)
After you have completed these steps, note that the validation status is green. If it is not green, click Show Issues and resolve those items.
Provisioning the cluster
After the CMS has been correctly configured and the template has been created, there are just five steps to follow.
Note that only a Service Provider Administrator can execute an ESXi cluster provisioning request. Organization
Administrators cannot execute the request.
Step 1: Make sure that the necessary server and storage resources are available
• Launch the IO Admin Console in HP Systems Insight Manager: https://<cms-name-or-ip>:50000
• Open the IO console by clicking Tools -> Infrastructure Orchestration.
• Open the Servers tab and create a server pool to hold the blades to be used for cluster nodes (IntelClusterPool in the following figure).
Note that the hosts should all be either Intel or AMD G7 blades. If the blades in the pool are heterogeneous, the 80_CreateHostProfile_group_end workflow will not be able to create a host profile for the cluster.
• Make sure that there are sufficient storage volumes.
  o Open the Storage tab. The cluster will need one non-boot disk of type VMware and of sufficient size to hold the data store, with one connection for each host in the cluster. The template shown previously requires just one 20 GB disk with at least two connections.
Step 2: Create an IO service based on the template
Create a service by using the cluster template and the server pool that holds the blades.
There are three phases during service creation: reserving and allocating the blades, creating the logical servers for
the hosts, and finally running the OO workflows to automatically deploy the ESXi software and configure the hosts.
Warning messages that appear during these phases can be safely ignored.
For a list of error conditions that should not be ignored, see "Troubleshooting."
Step 3: Verify the configuration of the provisioned cluster

• Confirm that the service is up in IO.
o From the Services tab, make sure that the service has been created and that the logical servers are up. The Resource Details tab in this view shows which blades are being used for the ESXi hosts.

• Confirm that the ESXi hosts are available as servers. For each cluster that is provisioned, the Servers tab shows two IO pools:
o The original pool used in the creation request, which contains the serial numbers of the blades used for the provisioned hosts
o A new pool with the name of the server group, which contains the IP addresses of the VM hosts
The provisioned ESXi hosts appear in the new pool named after the server group in the template. The physical blades remain in the original pool and are marked as In Use.
The details for each blade show which VM host it carries and which service is assigned to it.
The details for each VM host in the new pool show the blade serial number, its usage, and any (virtual) services
that it hosts.

• Verify the logical server configuration.
a. From within HP SIM, switch to Matrix OE visualization: open Tools -> HP Matrix OE visualization.
b. For Perspective, select Logical Server.
c. Find the logical servers, select one, and click the View movable logical server details icon.
d. Verify that the logical server has the expected network and storage connections.
• Confirm that the ESXi servers are booted and have been added into the vCenter data center.
a. Launch VMware vSphere Client and connect to vCenter Server.
b. Navigate to Home -> Inventory -> Hosts and Clusters.
c. Select each ESXi server and open the Summary tab.
d. Validate that the hosts are connected to the data stores and networks.
Check that all hosts are connected to the data store and have the correct network connections. The data center is named after the service, and the cluster is named after the server group in the template.
Step 4: Provision a virtual service on the cluster
Create a virtual IO template and use it to provision a virtual service on the cluster. This should work as for any other
ESXi cluster.
Step 5: Perform IO life-cycle operations on a cluster
As long as the life-cycle operation workflows were attached when the template was created (see "Step 3: Attach OO workflows"), the following life-cycle operations can be applied to a cluster: add a server, add a disk, and delete a service.
Add a server
Precondition:
• The service has not reached the maximum number of servers in the cluster logical server group.
• The server to be added is identical to the other blades in the cluster.
Result:
• The server is added to the cluster.
• The new server boots; the existing VM hosts are not rebooted.
Add a disk
Depending on how the storage is configured in Matrix, there are different ways to add a disk to a cluster (for more information, see Appendix 4: Configuring storage in Matrix).
a. Adding a disk in a new predefined single-volume SPE
Precondition:
• There are no guests running on the VM hosts.
• An SPE appropriate for the data store is available.
Result:
• The VM hosts are rebooted.
• The SPE is added as a data store.
b. Adding a disk as an additional volume in an existing, predefined multivolume SPE
Precondition:
• An unused volume is defined in the SPE.
• SAN storage is properly configured (disk carved, presented, and SAN zoned).
Result:
• The VM hosts are not rebooted.
• The pre-presented volume is added to the IO service and configured as a data store.
c. Adding a disk in a predefined static SPE managed by HP Storage Provisioning Manager (SPM)
Precondition:
• An unused volume is defined in the SPE.
• SAN storage is properly configured (disk carved and SAN zoned).
Result:
• The VM hosts are not rebooted.
• The pre-carved volume is automatically presented by SPM to the VM host, added to the IO service, and configured as a data store.
d. Adding a disk by using an automatically generated SPM-managed SPE (HP Matrix Operating Environment 7.1 and later)
Precondition:
• An IO system (IO template, SPM, and LSM) is configured for autocarving.
Result:
• The VM hosts are not rebooted.
• The new volume is added as a data store.
Delete a service
Precondition:
• The VM hosts must have no guest VMs defined; otherwise, the request will fail.
Result:
• All the VM hosts are deleted, and the storage is released.
• The blades remain available to IO.
Troubleshooting
This section lists some of the issues that can arise when you are provisioning clusters. Note that changes
to configuration files will require the affected services to be restarted or rebooted.
Issue
If the provisioning fails, "clean-me" logical servers can be created.
Action
Because the ESXi hosts do not have boot disks, it is only necessary to delete the logical servers.
It is not necessary to follow the full manual clean-up process.
Issue
During logical server provisioning and before OO workflows start, the following message
appears:
Workflow execution error: Target blade does not have enough network ports with
network connectivity to support networks configured in LogicalServer.
Possible cause
In the template, the Networks tab of the server group specified redundant network interfaces.
Action
Clear the redundancy or supply an appropriate blade.
Issue
Failure message:
Workflow 00_ConfigureVMADBootProfile_server_begin has failed.
Possible cause
Auto Deploy Depot is not set up correctly in OO.
Action
Configure OO with the correct file path or network path for Auto Deploy Depot. For more
information, see "Step 3: Configure Operations Orchestration properties."
Issue
Failure message:
Workflow 05_CheckServerHasBooted_server_end has failed.
Or a pink screen is displayed in the blade's iLO remote console.
Possible cause
Auto Deploy Depot is corrupt or does not have the necessary drivers.
Action
Reconfigure the Auto Deploy Depot with the image file downloaded from the HP website.
Issue
Failure message:
Cannot register the VM Host in HP SIM. Possible causes: a) HP SIM is not
running; b) Incorrect vCenter credentials were supplied; c) vCenter is not
discovered or connected; d) VM Host is not deployed/ready or not started; e)
RSJRAS service is not running with local account credentials (such as
Administrator). If any of these conditions exist, correct the problem and
retry the operation. See the HP Insight Control Virtual Machine Management
User Guide for more details.
Possible cause
• The RSJRAS Windows service is not using the administrator credentials.
• The OO credentials for the ESXi server are insufficiently complex.
Action
Use the administrator credentials for the RSJRAS Windows service. For more information,
see "Step 7: Enter CMS Administrator credentials in Windows service RSJRAS."
Make sure that the ESXi host password set in the step of configuring OO properties is sufficiently complex. For more information, see "Password Complexity" at http://pubs.vmware.com/vsp40/wwhelp/wwhimpl/js/html/wwhelp.htm#href=server_config/c_password_complexity.html#1_14_9_9_8_1.
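As a quick sanity check before setting the VMhostCredentials password, a sketch like the following can flag obviously weak values. The length and character-class thresholds here are assumptions for illustration only; the authoritative policy is enforced by pam_passwdqc on the ESXi host itself.

```python
# Illustrative complexity check only. The real policy is enforced by the ESXi
# host; the thresholds below (length >= 7, at least 3 character classes) are
# assumptions for this sketch, not the documented VMware defaults.
def password_classes(pw: str) -> int:
    """Count the character classes (lower, upper, digit, other) used in pw."""
    classes = [
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
        any(not c.isalnum() for c in pw),
    ]
    return sum(classes)

def looks_complex_enough(pw: str, min_len: int = 7, min_classes: int = 3) -> bool:
    """Return True if pw meets the assumed minimum length and class count."""
    return len(pw) >= min_len and password_classes(pw) >= min_classes
```

For example, `looks_complex_enough("Passw0rd!")` passes (four classes, nine characters), while `looks_complex_enough("password")` fails (one class).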
Issue
Failure message:
Task for checking if Logical Server <ls_name> is up has failed. The Logical
Server is not reachable. Logical server job (ID = <ls_ID>) completed with a
failure status. Failure: No value for xpath:
/logicalSubnets/logicalIPAddresses[id=''].
Possible cause
The ESXi host is not licensed in vCenter.
Action
Make sure that the ESXi host is correctly licensed in vCenter.
Issue
While OO workflow 80_CreateHostProfile_group_end is running, the following message
appears:
Could not create the host profile for the cluster, possible causes: a) vCenter
is not running; b) Wrong vCenter's credentials were supplied; c) Wrong
vCenter's IP/hostname were supplied; d) DataCenter or cluster does not yet
exist (this is a pre-condition in order to create the cluster's host profile);
e) Reference VM host not deployed or not started. Please check if any of those
conditions are not covered and try again.
Possible cause
The blades being used in provisioning are not homogeneous with respect to processor
architecture.
Action
Make sure that the server pools used in the creation request are homogeneous.
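The homogeneity requirement can be checked before submitting the request with a sketch like the following. The blade records are hypothetical; in practice this information comes from the IO Servers tab or the SIM inventory.

```python
# Sketch: verify that all blades in a pool share one processor architecture
# before provisioning. The "architecture" field is an assumed attribute for
# illustration; real inventory data comes from IO or HP SIM.
def is_homogeneous(blades) -> bool:
    """Return True if every blade reports the same architecture."""
    architectures = {blade["architecture"] for blade in blades}
    return len(architectures) <= 1

pool = [
    {"name": "blade1", "architecture": "Intel"},
    {"name": "blade2", "architecture": "Intel"},
]
```

With this pool, `is_homogeneous(pool)` is True; appending an AMD blade would make it False.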
Issue
While 90_AssignVMHostsToPool_group_end is running, the following message
appears:
Could not complete the addition of vmhosts to pool, please execute it
manually.
Possible cause
The blades being used in provisioning are not homogeneous with respect to
processor architecture.
Action
Ensure that the server pools used in the create/add request are homogeneous.
Issue
While OO workflow 50_HostRegisterOnVMM_group_end is running, the following message
appears:
Could not register the VM host in ICvirt. …
Possible cause
The OO workflow times out.
Action
1. Set the following timeout values as shown:
• In hpio.properties: timeout.oo.workflow.max.run = 90
• In OO system properties: hpscriptdeadline = 50
2. Change the timeout setting for a remote RAS installation. Make the following change to the wrapper.conf file in TWO locations, as follows.
a. Open the file <OO_HOME>\RAS\Java\Default\webapp\conf\wrapper.conf
b. Add the following line to the wrapper.conf file:
wrapper.java.additional.<n>=-Dras.client.timeout=4800
c. Open the file <OO_HOME>\Central\conf\wrapper.conf and repeat Step b.
NOTE: Change additional.<n> to additional.2 if a line with additional.1 already exists; change
additional.<n> to additional.3 if lines with additional.1 and additional.2 already exist, and so
on. This value may be different in each wrapper.conf file.
Depending on the current configuration of the wrapper.java.additional.<n> lines within the file,
you may need to reorder the lines such that the new RAS timeout line is successfully applied. HP
strongly recommends that you place each definition (-D) on its own line.
3. Restart the following services:
• HP Matrix infrastructure orchestration
• RSCentral
• RSJRAS
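The numbering rule described in the NOTE above can be automated. The following sketch (the sample file contents are illustrative) computes the next free wrapper.java.additional.<n> index before appending the RAS timeout line:

```python
import re

# Sketch: find the next free wrapper.java.additional.<n> index in a
# wrapper.conf file, so the new RAS timeout line is numbered correctly.
# The sample property values below are illustrative, not from a real install.
def next_additional_index(conf_lines) -> int:
    """Return 1 + the highest wrapper.java.additional.<n> index in use."""
    pattern = re.compile(r"^wrapper\.java\.additional\.(\d+)=")
    indices = [int(m.group(1)) for line in conf_lines
               if (m := pattern.match(line.strip()))]
    return max(indices, default=0) + 1

conf = [
    "wrapper.java.additional.1=-Dfile.encoding=UTF-8",
    "wrapper.java.additional.2=-Xmx1024m",
]
new_line = f"wrapper.java.additional.{next_additional_index(conf)}=-Dras.client.timeout=4800"
# new_line == "wrapper.java.additional.3=-Dras.client.timeout=4800"
```

Run this against each wrapper.conf separately, since the highest index in use may differ between the RAS and Central copies of the file.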
Issue
After creation of the cluster service, vCenter shows a warning triangle on a host.
Possible cause
System logging is not configured for the host.
Action
Confirm the cause by clicking the Summary tab.
To enable system logging, follow the procedure in the VMware Knowledge Base article at
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&exte
rnalId=2006834.
Issue
Suspected incorrect behavior by OO flows.
Action
View the logs that the Windows PowerShell code produces. The logs are recorded for each
execution of each workflow, and can be viewed in OO Central reports.
1. Log in to your fully installed CMS.
2. Open Internet Explorer. (Firefox and Chrome do not work with the widgets from OO Central.)
3. In Internet Explorer, open the site https://localhost:16443/PAS.
4. Enter the credentials. The user name is admin, and the password is what was entered at installation time.
5. Click the Reports tab.
6. Click the required flow name in the list presented. You may have to update the time range selection.
7. Click the history ID number that represents the run to investigate.
8. Place a check mark in the Result report column if it is unchecked, and click Apply.
The Result column shown in the report for the step named "PowerShell Script" contains the logs for that flow execution. Note that the log output can be difficult to read in this view.
Issue
vCenter reports licensing problems with the blades.
Possible cause
HP BladeSystem date and time are inconsistent with the VMware license file.
Action
Make sure that the BIOS date and time are correct.
Appendix 1: Obtaining a depot for Auto Deploy
The software depot used to provision clusters is a combination of VMware executables and HP value-added
management code. Using the HP code enables the ESXi server to operate correctly and be managed on HP devices.
An Auto Deploy Depot is a zip file that is used to provision an Auto Deploy ESXi 5.x server onto the desired
hardware. As of 2012, the most recent release and all future releases of the HP Custom ISOs and their
corresponding depot.zip file are published at the following URL:
http://www.vmware.com/go/downloads-image-hp-esxi
Appendix 2: OO system properties, system accounts, and
flows
System property descriptions
DepotUrl: The full path of the software depot being provisioned.
hpsciptbase1: Common support code for all flows (beginning segment).
hpioscriptbase2: Common support code for all flows (ending segment).
hpscriptdeadline: The timeout, in minutes, by which each flow must end.
hpscriptdelayminutes: The time, in minutes, that the Util_Delay flow will delay.
vCenterIP: The IP address of vCenter.
System accounts
vCenterCredentials: The vCenter credentials.
VMhostCredentials: The VM host credentials. The password on the VM host will be changed to the value specified here.
OO flow descriptions
00_ConfigureVMADBootProfile_server_begin: Connects to the vCenter/Auto Deploy server by using vCenterIP and vCenterCredentials and creates Auto Deploy rules for the boot MAC, enabling PXE boot to point to the depot defined in DepotUrl. The default behavior is that every boot MAC gets its own deployment rule.
05_CheckServerHasBooted_server_end: Connects to the vCenter/Auto Deploy server and confirms the appearance of the new VM host in vCenter, based on the boot MAC. It will wait approximately 19 minutes. If you want to change the wait time, see "Customizing the OO flows."
10_ConfigureVMhostDataStore_group_end: Connects to the vCenter/Auto Deploy server and configures the data stores for the cluster.
20_CreateDatacenter_group_end: Connects to the vCenter/Auto Deploy server and creates the vCenter data center for the cluster if it is not present.
30_CreateCluster_group_end: Connects to the vCenter/Auto Deploy server and creates the vCenter cluster if it is not present.
40_HPSimDiscovery_group_end: Initiates a SIM discovery task for each server in the cluster.
50_HostRegisterOnVMM_group_end: Registers each VM host in the cluster with the Matrix Operating Environment and, in case of error, attempts to discover the server again.
60_MoveHostToCluster_group_end: Connects to the vCenter/Auto Deploy server and moves each VM host into the vCenter cluster.
70_ConfigureVSwitchesAndPortGroups_group_end: Connects to the vCenter/Auto Deploy server and configures the first host with the defined networks.
80_CreateHostProfile_group_end: Connects to the vCenter/Auto Deploy server, creates a VMware host profile from the first VM host, and applies the host profile to all VM hosts in the cluster.
90_AssignVMhostsToPool_group_end: In Matrix OE, assigns each VM host to the HP IO pool named for the cluster.
95_Delete_Datastore: Connects to the vCenter/Auto Deploy server, removes each VM host from vCenter, and deletes the associated data stores.
Util_Delay: A utility flow that delays for hpscriptdelayminutes minutes.
Customizing the OO flows
You can modify the behavior of the OO flows by accessing the Windows PowerShell code in Operations
Orchestration Studio as follows:


• Log in to your fully installed CMS.
• Open OO Studio by clicking Start -> Programs -> Hewlett-Packard -> Operations Orchestration -> HP Operations Orchestration Studio.
• Enter the credentials. The user name is admin, and the password is what was entered at installation time.
• Expand the tree on the left section of the page, down to Library -> Hewlett-Packard -> Infrastructure Orchestration -> Service Actions -> HP -> ESXHOST.
• The flows are listed alphabetically. Double-click the required workflow.
• In the flow diagram that appears, double-click the PowerShell Script step. Click the right arrow on the input variable named script.
Customizing the VMware host profile
VMware host profiles define a standard configuration for the blades in a cluster. The host profile created by the
80_CreateHostProfile_group_end workflow is uniquely named for each instance of service request and is generated
from the first VM host in the service request. The VMware host profile is then applied to subsequent VM hosts added
to the cluster either at creation or server add time.
With the distributed set of OO workflows, the initial host profiles must first be created through the IO process.
Customizations made to the primary VM host in the cluster, such as networking configuration or Common Information
Model (CIM) indication subscriptions, require that the VMware host profile be regenerated, with the same name,
through vCenter and reapplied to the cluster. Any new VM hosts added to the cluster will get the updated VMware
host profile applied to them to match the other hosts in the cluster.
Appendix 3: Validating the Auto Deploy installation
To validate that Auto Deploy is installed, open the Auto Deploy administrative interface by clicking the icon in the
vSphere client. To do this, you must be logged in to a host with direct connectivity to vCenter. In private networks, this
can be the CMS (because it already has connectivity to vCenter).
The Auto Deploy page is very limited and is not the real administrative interface. It just provides some information
about the low-level bootstrap images that are being used. You can also see the IP address that the Auto Deploy server
is serving on.
Appendix 4: Configuring storage in Matrix
Before Matrix Operating Environment infrastructure orchestration can allocate storage, it must be configured in
logical server storage pools in Matrix OE visualization. A logical server storage pool entry (SPE) represents storage
that has been carved from a storage array (for example, HP Enterprise Virtual Array or HP 3PAR), presented to a
host-initiator port, and zoned in a SAN fabric on the SAN switches for connectivity with the blade. Creating an SPE
is detailed in the HP Matrix Operating Environment 7.0 Logical Server Management User Guide at
www.hp.com/go/matrixoe/docs.
How an SPE is defined determines how IO uses the storage for provisioning. Storage for physical provisioning can
be defined in one of four ways:

• Predefined single-volume SPEs
• Predefined multivolume SPEs
• Predefined SPM-managed static SPEs
• SPM-managed autocarved SPEs
Predefined single-volume storage pool entries
A single-volume SPE defines a single volume that is presented to one (or more, for shared storage) host initiator port WWNs. Adding multiple single-volume SPEs to a blade requires N_Port ID virtualization (NPIV) support in the Fibre Channel (FC) host bus adapter (HBA), allowing multiple WWNs on each host port. IO can use any predefined SPE that meets the required criteria (Virtual Connect Domain Group, size, RAID level, or tags) for VM cluster provisioning or disk add operations on the cluster.
Adding multiple SPEs to an FC port in Virtual Connect requires that the blade first be deactivated to add WWNs to
the NPIV configuration. This operation is disruptive to the VM host and consequently the cluster. For VM clusters, all
VMs must first be powered down before a new SPE can be added to the cluster.
Adding storage to an IO VM Cluster service by adding a predefined single-volume SPE requires taking the following
steps:

• Verify that an SPE has been defined and is available to be used. The request will pause and wait for this to be put in place if it is missing.
• Power down all VMs running on the cluster.
NOTE: This can be mitigated as follows:
a. Add an external VM host to the cluster.
b. Migrate the VMs to the externally added VM host.
c. When the operation is complete, move the VMs back, and then remove the external host from the cluster.
• Perform the IO disk add.
• Power on the VMs.
The disk add operation will power down all blades known to be in the IO defined VM cluster, add the disk to each
host, and then power on the blades. The 10_ConfigureVMHostDataStore_group_end OO workflow attached to the
Add Data Disk End actions causes the VM host to rescan the SCSI bus and creates a data store for the volume in
vCenter.
Predefined multivolume storage pool entries
A predefined SPE can be defined with more than one volume. Each volume in the SPE is manually designated as
"Ready." After a volume is marked as "Ready," it is assumed to be properly carved from the SAN storage array,
presented to a host-initiator port, and zoned in a SAN fabric on the SAN switches for connectivity with the blade.
Because adding a volume does not introduce a new SPE to the blade, no Virtual Connect profile deactivation is
required (or the associated reboot). After the storage becomes available to the blade, the OS can rescan the SCSI
bus and begin using the new volumes.
Adding storage to an IO VM Cluster service by adding a volume to a pre-existing SPE requires taking the following
steps:
1. Carve the storage on the SAN storage array.
2. Present the storage on the SAN storage array.
3. Add the volume to the SPE if it is not already defined in the SPE.
4. Mark the volume as "Ready" in the SPE.
5. Perform the IO disk add.
NOTE: Zoning is the same as all other volumes in the SPE, so it does not need to be modified.
The disk add operation will update IO to reflect the added disk. The 10_ConfigureVMHostDataStore_group_end
OO workflow attached to the Add Data Disk End actions causes the VM host to rescan the SCSI bus and creates a
data store for the volume in vCenter.
Predefined SPM-managed static storage pool entries
Storage Provisioning Manager can manage a predefined SPE to automate the presentation of the storage on the
SAN storage array. (For more information, see the HP Storage Provisioning Manager User Guide at
http://www.hp.com/go/matrixoe/docs.) Volumes must still be properly carved and zoned. The IO disk add process
directs the presentation of the volume. Adding an SPM-managed volume does not introduce a new SPE to the blade,
and no Virtual Connect profile deactivation is required (or the associated reboot). After the storage becomes
available to the blade, the OS can rescan the SCSI bus and begin using the new volumes.
Adding storage to an IO VM Cluster service by adding a volume to a pre-existing SPE requires the following steps:
1. Carve the storage on the SAN storage array.
2. Add the volume to the SPE if it is not already defined in the SPE.
3. Perform the IO disk add.
NOTE: Zoning is the same as all other volumes in the SPE, so it does not need to be modified.
The disk add operation will instruct SPM to present the storage to the host initiator port WWN defined in the SPE and update IO to reflect the added disk. The 10_ConfigureVMHostDataStore_group_end OO workflow
attached to the Add Data Disk End actions causes the VM host to rescan the SCSI bus and creates a data store for
the volume in vCenter.
SPM-managed autocarved storage pool entries (HP Matrix Operating
Environment 7.1 and later)
IO can be configured to use a properly configured Storage Provisioning Manager to automatically carve, present,
and zone storage. (For more information, see the HP Storage Provisioning Manager User Guide and the HP Matrix
Operating Environment 7.0 Infrastructure Orchestration User Guide at http://www.hp.com/go/matrixoe/docs.) The
carving, presentation, and zoning of the volume are directed by the IO disk add process. Adding an SPM-managed
on-demand volume does not introduce a new SPE to the blade, and no Virtual Connect profile deactivation is
required (or the associated reboot). After storage becomes available to the blade, the OS can rescan the SCSI bus
and begin using the new volumes.
Adding storage to an IO VM Cluster service by adding an on-demand volume to a pre-existing SPE requires taking the following step:
• Perform the IO disk add.
The disk add operation will instruct SPM to carve, present, and zone the storage to the host initiator port WWN defined in the SPE and update IO to reflect the added disk. The 10_ConfigureVMHostDataStore_group_end OO
workflow attached to the Add Data Disk End actions causes the VM host to rescan the SCSI bus and creates a data
store for the volume in vCenter.
For more information
To read more about the information in this paper, see the following resources:
HP Matrix Operating Environment Infrastructure Orchestration User Guide
http://www.hp.com/go/matrixoe/docs
Installing vCenter Server
http://pubs.vmware.com/vsphere-50/index.jsp?topic=/com.vmware.vsphere.install.doc_50/GUID-7C9A1E23-7FCD-4295-9CB1-C932F2423C63.html
Configuring Virtual Connect Flex-10 Ethernet modules
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02616817/c02616817.pdf
Auto Deploy Proof of Concept Setup
http://pubs.vmware.com/vsphere-50/index.jsp?topic=/com.vmware.vsphere.install.doc_50/GUID-A9FFEDEE-1A3D-4EFD-A130-F6E78C727380.html
VMware password complexity
http://pubs.vmware.com/vsphere-4-esx-vcenter/index.jsp?topic=/com.vmware.vsphere.server_configclassic.doc_40/esx_server_config/service_console_security/c_password_complexity.html
HP Storage Provisioning Manager User Guide
http://www.hp.com/go/matrixoe/docs
Get connected
hp.com/go/getconnected
© Copyright 2012-2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without
notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and
services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors
or omissions contained herein.
AMD is a trademark of Advanced Micro Devices, Inc. Intel® is a trademark of Intel Corporation in the U.S. and other countries. Microsoft®
and Windows® are U.S. registered trademarks of Microsoft Corporation.
5900-2257 Created June 2012
Rev 2 August 2012
Rev 3 November 2012
Rev 4 December 2012
Rev 5 March 2013
Rev 6 November 2013
Rev 7 July 2014