Data Center Migration

Whitepaper
Authored By Mark Huff
February 2011
Technologent Inc
Table of Contents

1 Introduction
1.1 Executive Summary
2 Data Center Migration - Key Drivers
2.1 Scenario A – Regulatory Requirements
2.2 Scenario B – Multiple Regional Data Centers
2.3 Scenario C – Obsolete Data Center Facilities Infrastructure
2.4 Scenario D – Legacy IT Infrastructure
2.5 Scenario E – Fully Utilized, Out of Capacity
2.6 Scenario F – Merger / De-merger
2.7 Scenario G – Hybrid
3 Applications & Infrastructure Analysis – Source Data Center
3.1 Application & Infrastructure Contacts
3.2 Application Profile
3.3 Infrastructure Profile
3.4 Application Interdependencies
3.5 Current Facilities Profile
4 Business Service Reviews (BSR)
4.1 Why do we need to conduct BSR?
4.2 Application - Recovery Point Objective
4.3 Application - Availability Objective
4.4 Application - Recovery Time Objective
4.5 High Level Cost Estimation
4.5.1 Recurring Expenses
4.5.2 One-Time Expenses
5 Application & Infrastructure Design – Target Data Center
5.1 Core Infrastructure Design
5.2 Enterprise Shared Services Design
5.3 Application Re-Architecture – As Needed
5.4 Selection of Deployment Model / Patterns – Infrastructure Perspective
5.4.1 Active-Passive Failover
5.4.2 Active-Active Failover
5.4.3 Load Balancing
5.5 Application Specific Infrastructure Design
5.5.1 Individual Servers & Databases Associated with an Application
5.5.2 Virtualization / Consolidation Assessment of Servers
5.5.3 Application Specific Software Requirements
5.5.4 Application Specific Appliances
5.5.5 Application Specific LAN / WAN & Third Party Network Connectivity Requirements
5.5.6 Security Requirements
5.5.7 Network Performance / Latency Sensitivity Assessment
5.5.8 Application Specific External Storage & Backup Requirements
5.5.9 Client Software Needed to Access Enterprise Shared Services
5.6 Facilities Requirements / Design
6 Move Plans
6.1 Application & Infrastructure Interdependencies
6.2 Time Zone Dependencies
6.3 Create Move Bundle Packages
6.3.1 Move Build Package Template (Illustrative)
6.3.2 Application Build Package Template (Illustrative)
6.4 Update Disaster Recovery Plans
6.5 Create Detailed Implementation Plan
6.6 Regulatory Considerations
6.7 Enterprise Contractual Considerations
7 Build Applications & Infrastructure - Target Data Center
7.1 Core Infrastructure & Shared Services Build & Readiness Testing
7.2 Pilot Applications – Build & Test
7.3 Build Move Bundle Packages
7.4 Test Move Bundle Packages
7.4.1 Unit Testing
7.4.2 Shared Infrastructure Testing
7.4.3 System Integration Testing – Move Group Level
7.4.4 System Integration Testing – Program Level
7.5 Typical Risks, Challenges & Mitigation Options – Build & Test Phase
8 Roll Out & Decommission
8.1 Rollout / Cutover Applications to Target Data Center
8.2 Decommissioning Source Data Center Hardware
9 Appendix
9.1 Virtualization Candidacy Criteria (Illustrative)
9.2 Acronyms & Glossary
9.3 References
1 Introduction
1.1 Executive Summary
Business decisions to migrate data centers can result from IT cost reduction
initiatives, regulatory requirements, business service risk mitigation plans, a newer data
center operation strategy, or legacy data center environments incapable of
hosting modern & dense IT infrastructure. Among the biggest risks organizations face
when transitioning their Data Center is migrating IT systems and applications. A data
center migration is a highly strategic project that must be executed without impacting
business operations, Service Level Agreements, performance/availability and data
protection requirements. Given the dynamic operational environment in which today’s
data centers operate, wherein applications and data in the production environment
change continuously, organizations must understand that environment thoroughly
and carefully carve out the Data Center migration strategy.
Every environment has its own challenges and one migration strategy does not fit
every client environment. The bottom line consideration for a good migration strategy
is near-zero disruption of business services. This objective demands a deep &
thorough understanding of the following major subsystems of a data center:
• Nature & criticality of the applications that cater to different business services
• Servers, shared hosting environments, and databases that host the application or service logic
• Network that provides access & security to information within the Intranet, from the Internet and over VPNs
• Disk storage: most applications require disk storage for storing data, and the amount of disk space and the frequency of access vary greatly between different services/applications
• Performance & service level requirements
By necessity, moving or migrating services from one data center to another needs to
consider all of these components. The level & effort of such due diligence is based
on the current Data Center's application and infrastructure portfolio, tolerance to
unavailability of applications & services, as well as time & budget constraints.
2 Data Center Migration - Key Drivers
Below are some of the commonly seen Business-IT situations that trigger a Data
Center Migration initiative. Beginning with the next section, we will walk through a
generic data center migration process flow that applies to most of the scenarios
discussed in this section. However, the level of focus in a specific area will differ
based on the scenario being addressed.
2.1 Scenario A – Regulatory Requirements
Scenario A – Regulatory Requirement
Existing Production & Disaster Recovery Data Centers are in the same seismic zone /
in proximity to each other. Regulators mandate an out-of-region recoverability
capability for mission critical applications.
Key Considerations:
• Relocate the DR instance of all mission critical applications to another data center in a different seismic zone.
• Ensure that availability and recoverability of the applications is not impacted in transition.
• Ensure that all interdependent applications are able to communicate with each other in view of mission critical applications being relocated while others are being retained.
• Ensure optimal costs are incurred on Hardware, Software & Labor by making smart decisions on:
  • Hardware & Software re-usability
  • Appropriate program phases to minimize bubble period
  • Utilization of appropriate tools & technologies
  • Processes & best practices
  • Deploying the right resources
2.2 Scenario B – Multiple Regional Data Centers
Scenario B – Multiple Regional Data Centers
Multiple regional data centers exist in the current state, lacking standardization,
efficiencies & growth scalability, and unable to meet end-to-end service level objectives.
Key Considerations:
• Build out a state of the art Dual / Multiple site Data Center environment that meets required levels of resiliency and data protection.
• Migrate all common types of business IT processes & infrastructure services that serve the entire or most of the enterprise to a resilient Enterprise data center model. Retain (if needed) only the unique functionality in the regional data centers that is required specific to a region, e.g. a newspaper publishing company may want to keep all local news feeds in its regional data center but rely on the Enterprise Data Center for all Shared and Core Services.
• Ensure optimal costs are incurred on Hardware, Software & Labor by making smart decisions on:
  • Hardware & Software re-usability
  • Appropriate program phases to minimize bubble period
  • Utilization of appropriate tools & technologies
  • Processes & best practices
  • Deploying the right resources
2.3 Scenario C – Obsolete Data Center Facilities Infrastructure
Scenario C – Obsolete Data center facilities Infrastructure
Existing data center IT infrastructure is legacy & most of the Hardware & Software is
nearing end-of-support. Moreover, the current IT infrastructure is inefficient, space-taxing,
and does not integrate well with modern applications and technologies. This scenario
is generally seen in combination with other scenarios.
Key Considerations:
• Option A:
  • Build out a new data center with state-of-the-art facilities infrastructure while maintaining the current data center environment.
  • Migrate applications to the newly built data center.
• Option B (depending on the size & criticality of the current data center infrastructure):
  • Temporarily move applications to a hosting environment and upgrade the facilities infrastructure of the existing data center.
  • Relocate applications from the hosted environment back to the upgraded data center infrastructure.
• Ensure optimal costs are incurred on Hardware, Software & Labor by making smart decisions on:
  • Hardware & Software re-usability
  • Appropriate program phases to minimize bubble period
  • Utilization of appropriate tools & technologies
  • Processes & best practices
  • Deploying the right resources
2.4 Scenario D – Legacy IT Infrastructure
Scenario D - Legacy IT Infrastructure
Existing data center IT infrastructure is legacy & most of the Hardware & Software is
nearing end-of-support. Moreover, the current IT infrastructure is inefficient, space-taxing,
and does not integrate well with modern applications and technologies. This scenario
is generally seen in combination with other scenarios.
Key Considerations:
• Redesign and/or rebuild applications on modern hardware and software technologies.
• Adopt virtualization & consolidation technologies and an Enterprise Shared Services model.
• Mitigate any data integrity challenges while migrating to modern infrastructure.
• Ensure optimal costs are incurred on Hardware, Software & Labor by making smart decisions on:
  • Hardware & Software re-usability
  • Appropriate program phases to minimize bubble period
  • Utilization of appropriate tools & technologies
  • Processes & best practices
  • Deploying the right resources
2.5 Scenario E – Fully utilized, Out of Capacity
Scenario E - Fully utilized, Out of Capacity
Current data center floor space is nearly 100% utilized and needs optimization to enable
growth capabilities.
Key Considerations / Drivers:
• Consolidate and virtualize IT infrastructure.
• Re-visit facilities infrastructure to ensure it is capable of providing power and cooling to high density IT infrastructure.
• Free up rack space & facilities.
• Ensure optimal costs are incurred on Hardware, Software & Labor by making smart decisions on:
  • Hardware & Software re-usability
  • Appropriate program phases to minimize bubble period
  • Utilization of appropriate tools & technologies
  • Processes & best practices
  • Deploying the right resources
2.6 Scenario F – Merger / De-merger
Scenario F – Merger / De-merger
Merger / De-merger of two organizations driving a newer data center strategy for the
merged / split entity.
Key Considerations / Drivers:
• Application & infrastructure technologies selection for the merged entity
• Data Center strategy for infrastructure consolidation / separation & standardization
• Build-out of Data Center facilities in line with the new strategy
• Migration and relocation of infrastructure to align with the new data center strategy
• Ensure optimal costs are incurred on Hardware, Software & Labor by making smart decisions on:
  • Hardware & Software re-usability
  • Appropriate program phases to minimize bubble period
  • Utilization of appropriate tools & technologies
  • Processes & best practices
  • Deploying the right resources
2.7 Scenario G – Hybrid
Scenario G – Hybrid
Hybrid Scenario – This may be a combination of one or more scenarios discussed above.
Key Considerations / Drivers:
• This may be a combination of one or more key considerations discussed above.
Hybrid Data Center Migration (Scenario A & D)
So, we have seen that there can be multiple drivers that trigger a data center
migration initiative. The Data Center Migration methodology discussed further in this
document is based on experience with a Hybrid situation (Scenario A & D above).
In view of this understanding, the scope of Data Center Migration consists of the
following broader steps. We will examine each of these in much deeper detail as
we proceed:
1. Application & Infrastructure Analysis / Design
   a. Source Data Center environment (from where Applications & Infrastructure need to be migrated)
   b. Business Service Reviews (capture target Service level objectives, IT budgets)
   c. Target Data Center Infrastructure Planning & Design (where Applications & Infrastructure will finally reside)
2. Move Plans
   d. Migration options (build new hardware at the Target Data Center / forklift or truck hardware from Source to Target Data Center / interim build to swing hardware to the Target Data Center)
   e. Create Move Bundle & Application Build Packages
3. Build & Test Applications & Infrastructure
4. Roll-out Applications at Target Data Center
   f. Cutover users to the Target Data Center
5. Decommission Source Data Center Infrastructure
   g. Shutdown / clean up corresponding Source Data Center components
3 Applications & Infrastructure Analysis – Source Data Center
It appears as if moving / building infrastructure at target data center is predominantly
about planning hardware infrastructure. However, in order to ensure that business
services are not disrupted in an unplanned fashion, a detailed application &
infrastructure assessment is needed.
The purpose of Application & Infrastructure Analysis of the Source Data Center
environment is to create a document package for each application, with sufficient
understanding about the current IT footprint. Further, based on this understanding of
the current environment & other dependencies discussed later in this document, a
target state design is evolved to be implemented at the new data center. This includes,
but is not limited to:
• Key contacts for an application (Application Manager, Technology Delivery Manager, System Administrator, Business Continuity Manager, Architects etc.)
• Number & type of hardware devices (Servers, Network, Storage, and Security) deployed.
• Number & type of Operating Systems, business & Enterprise software services deployed.
• Shared hosting environments and shared database environments deployed.
• Obsolete hardware & software services that need to be refreshed along with the migration.
• Existing Availability, Recovery Time & Recovery Point objectives of each application.
• Architectural information, physical and logical diagrams exhibiting an application's deployment & geographical footprints.
• Information on application & infrastructure interdependencies and affinities to assist in grouping applications into move bundles.
• Current Service levels, Change Windows, Production issues.
3.1 Application & Infrastructure Contacts
In a dynamic IT environment, the roles and responsibilities of various resources keep
changing based on Business & IT requirements. During the initial assessment phase
of each application it is important to capture the names of all up-to-date stakeholders
who will enable key decisions pertaining to application & infrastructure architecture
& IT service management considerations for the data center migration.
Large enterprises maintain an inventory, often called Application Portfolio Management,
that generally maintains application profile & key contact information. The table below
illustrates a useful set of information about key stakeholders for an application. These
contacts will need to be interviewed for target state design activities, build and test
planning activities, & to seek sign-off on end-of-phase activities of the program.
Application ID | Application Name | Line of Business | Contact Type | Contact Name | Email | Contact Number
3.2 Application Profile
The objective of capturing the basic profile of each application is to understand certain key
parameters such as criticality of the application, primary business area served, tolerance
for downtime, time-zone sensitivity etc. This profile helps in:
• Prioritizing & scheduling application analysis, design & build activities, and breaking down the program into smaller, more manageable work packages.
• Optimizing co-ordination effort by grouping Application Analysis, Design & Build activities that fall under common stakeholders.
• Applying a common set of guiding principles, processes, procedures, and testing methods to applications that meet a common set of criteria, e.g. all applications with RTO X or AO Y to be dealt with in a specific manner.
The table below illustrates some of the key business service parameters for an
application that would be helpful in further due diligence:
Particular – Description
App ID – Unique Application Identifier
Application Name – Name of the Application
Application Description – Brief description about the application
Application Type – Software only or Software w/ hardware
Application Environment – Mainframe / Distributed / both
L.O.B. – Primary Business Area Served
Risk Rating – High / Medium / Low based on criticality
Current RTO – Existing Recovery Time Objective
Current RPO – Existing Recovery Point Objective
Current AO – Existing Availability Objective
SLA – Service level agreement (if any)
Status – Dev / Prod / Obsolete
Weekend Down Time – Weekend change window for planned activities
Max Down Time – Maximum number of hours available in a given change window
Time Zone Sensitive – Is the application time zone sensitive? (Applicable if the target data center is in a different time zone)
Sunset Date – Planned retirement / sunset date for the application (if any)
Shared Environment – Dependent shared hosting environment (if any); dependent shared databases (if any)
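To make this profile concrete, below is a minimal sketch (in Python, with entirely hypothetical field values) of how such an application profile record could be captured programmatically for later analysis and move-bundle planning; the field names mirror the table above but are otherwise illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApplicationProfile:
    """One record of the illustrative application profile table above."""
    app_id: str                      # Unique Application Identifier
    name: str
    description: str
    app_type: str                    # "Software only" or "Software w/ hardware"
    environment: str                 # "Mainframe" / "Distributed" / "Both"
    lob: str                         # Primary business area served
    risk_rating: str                 # "High" / "Medium" / "Low"
    current_rto_hours: float         # Existing Recovery Time Objective
    current_rpo_hours: float         # Existing Recovery Point Objective
    current_ao_pct: float            # Existing Availability Objective (e.g. 99.9)
    sla: Optional[str] = None
    status: str = "Prod"             # "Dev" / "Prod" / "Obsolete"
    weekend_downtime_window: Optional[str] = None
    max_downtime_hours: Optional[float] = None
    time_zone_sensitive: bool = False
    sunset_date: Optional[str] = None
    shared_environments: tuple = ()  # dependent shared hosting / databases

# Hypothetical example record
payments = ApplicationProfile(
    app_id="APP-0042", name="Payments Gateway",
    description="Processes outbound wire transfers",
    app_type="Software only", environment="Distributed", lob="Treasury",
    risk_rating="High", current_rto_hours=4, current_rpo_hours=0.25,
    current_ao_pct=99.95, status="Prod", time_zone_sensitive=True,
    shared_environments=("Shared Oracle Grid",),
)
```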
3.3 Infrastructure Profile
The objective of capturing the infrastructure profile of each application is to understand:
• The size and complexity of the application's physical environment
• Geographical footprint
• Type of hardware and software that has been deployed
• Any unique security & appliance requirements
• Dependency on Enterprise Shared Infrastructure
• External vendor communication dependencies
The table below illustrates some of the key infrastructure parameters of an application
that would be helpful in further due diligence for Data Center Migration:
Databases
  Database instance
  Database name
  Database Role (Production / DR / Test / Dev)
  Database Server Hostname
  Database Vendor (Oracle / SQL / Sybase etc.)
Appliances
  Appliance Category (Shared / Dedicated)
  Appliance Name (e.g. Data Power, security devices etc.)
  Appliance Type (Facilities / Network / Security etc.)
  Appliance Vendor, Model
Individual Servers
  Hardware Model
  Hardware Vendor
  Hostname
  Local Disk size
  Location (City, Rack location etc.)
  Number of CPUs
  OS Type (Linux, Unix, Windows etc.)
  OS Version
  RAM
  Server DR Partner Hostname (if any)
  Server HA Cluster Partner (if any)
  Server Role (Test / Dev / Production / DR)
  Server Type (Web / App / DB)
Network
  Any other specific network hardware solely deployed for the application
  Associated Security Zone (if any)
  IP address
  Network Traffic volume (Low / Medium / High)
  Third Party Circuit Details
Shared Databases
  Brief description about the dependent Shared Database
  Location (City, Rack location etc.)
Shared Servers
  Shared Server / Hosting environment name
Software
  Number of licenses
  Software Name
  Software Role (Backup / Monitoring / Remote connection etc.)
  Software Vendor & Version
Storage
  Any other specific storage hardware solely deployed for the application
  Backup Type (Tape / Disk-to-Disk / DAS)
  External SAN storage size (if any)
  Local Disk Size
  Replication Type (synchronous / asynchronous)
  Storage Tier (if hierarchical storage exists)
3.4 Application Interdependencies
Today’s complex application architectures interface with a large number of other
Enterprise & business applications. Migrating complete / partial application
infrastructure to another data center not only needs due diligence on all the
facets of the application in question, but also needs a thorough assessment of its impact
on other interfacing applications. This analysis assists first in understanding the
existing interdependencies and then in evolving appropriate Move Bundles. Some
applications may be able to tolerate the unavailability of some non-critical
interdependencies during the migration window. However, many other critical interfaces,
such as real-time business critical data processing, may require that such
applications be bundled together for the purpose of migration.
The following table illustrates some of the critical parameters to assess Application
Interdependencies:
Particular – Description
Interface Name – Name of the interface
Related Application – Unique Application Identifier of the interfacing application
Initiator – Application / Bound Application
Description – A brief description of the interface
Direction – Direction of dependency (Inbound / Outbound / Both)
Layer Making Connection – Web / Application / Database layer
Latency Sensitivity – Yes if tolerance is < 50 ms, No if tolerance is > 50 ms
Protocol – HTTP / SOAP / NDM / FTP etc.
Bind Environment – Specify if the dependency exists with Mainframe / Distributed / both components (applicable if applications have both Mainframe and Distributed components)
Frequency of Access – Daily batch, real-time, ad hoc etc.
Tolerance for Unavailability – Can process without impact / can process with impact / cannot process at all
Integration Type – Provides information & support to / receives information & depends on / is a component of
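As an illustration of how this interdependency data can drive move bundling, below is a minimal sketch (Python, with hypothetical application IDs) that groups applications into candidate move bundles by treating interfaces that cannot tolerate unavailability as edges of a graph and taking its connected components; a real program would layer scheduling, size, and stakeholder constraints on top of this.

```python
from collections import defaultdict

# Hypothetical interdependency records: (app_a, app_b, tolerance for unavailability)
interfaces = [
    ("APP-0042", "APP-0107", "cannot process at all"),
    ("APP-0107", "APP-0311", "cannot process at all"),
    ("APP-0042", "APP-0500", "can process with impact"),
    ("APP-0600", "APP-0601", "cannot process at all"),
]

def move_bundles(interfaces):
    """Group applications whose critical interfaces cannot tolerate unavailability."""
    graph = defaultdict(set)
    apps = set()
    for a, b, tolerance in interfaces:
        apps.update((a, b))
        if tolerance == "cannot process at all":   # hard affinity: must move together
            graph[a].add(b)
            graph[b].add(a)
    bundles, seen = [], set()
    for app in sorted(apps):
        if app in seen:
            continue
        stack, bundle = [app], set()
        while stack:                                # depth-first walk of one component
            node = stack.pop()
            if node in bundle:
                continue
            bundle.add(node)
            stack.extend(graph[node] - bundle)
        seen |= bundle
        bundles.append(sorted(bundle))
    return bundles

print(move_bundles(interfaces))
# [['APP-0042', 'APP-0107', 'APP-0311'], ['APP-0500'], ['APP-0600', 'APP-0601']]
```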
3.5 Current Facilities Profile
In order to plan the capacity & nature of the facilities environment needed in the target
data center (viz. floor space, number & type of racks, power, power distribution
system, network & storage cabling, air-conditioning systems etc.), it is important to
conduct an assessment of the source data center facilities infrastructure. The utilization
and pain points (if any) in the existing facilities will provide meaningful data to plan
the capacity, distribution & design of the target data center facilities. A mapping between the
IT infrastructure profile (hardware devices deployed), allocated facilities capacity,
utilization patterns, and an assessment of past outages and shortcomings provides
information about the following (a short heat-load calculation follows the list):
• Total allocated UPS power, % utilization (peak & average)
• Battery backup time
• BTUs of heat dissipated by different hardware
• Distribution pattern of dissipated heat based on server rack layout and hardware mounted in each rack
• Any hot spots needing extra cooling, suction of heat / redistribution of hardware dissipating heat to racks where enough cooling is available etc.
• Any issues with cooling distribution such as air flow blockages / low air pressure
• Energy consumption patterns
• Handling capacity of PDUs, % utilization
• Any single points of failure
• Any capacity bottlenecks
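For a rough sense of the facilities arithmetic involved, the short sketch below (Python, with hypothetical per-rack wattages) converts IT load in watts to heat in BTU/hr (1 W is roughly 3.412 BTU/hr) and to cooling tons (12,000 BTU/hr per ton); an actual facilities assessment would of course use measured loads and vendor ratings.

```python
# Hypothetical measured IT load per rack, in watts
rack_load_watts = {"RACK-A01": 6500, "RACK-A02": 4200, "RACK-B01": 9800}

WATT_TO_BTU_HR = 3.412            # 1 watt of IT load dissipates ~3.412 BTU/hr of heat
BTU_HR_PER_COOLING_TON = 12_000   # 1 ton of cooling removes 12,000 BTU/hr

total_watts = sum(rack_load_watts.values())
total_btu_hr = total_watts * WATT_TO_BTU_HR
cooling_tons = total_btu_hr / BTU_HR_PER_COOLING_TON

print(f"IT load: {total_watts / 1000:.1f} kW")
print(f"Heat load: {total_btu_hr:,.0f} BTU/hr (~{cooling_tons:.1f} tons of cooling)")
# IT load: 20.5 kW
# Heat load: 69,946 BTU/hr (~5.8 tons of cooling)
```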
4 Business Service Reviews (BSR)
It is very important to understand the service level expectations defined by the business
to ensure that all Data Center Migration planning, build & test work is baselined in
view of the same.
4.1 Why do we need to conduct BSR?
Business Service Reviews are intended to seek answers to the following illustrative
questions so that the most appropriate / cost-effective service level objectives are
defined and agreed upon.
• What level of availability does your business require?
• How much data is the business willing to lose in a catastrophic event?
• How will you mitigate data loss?
• How much revenue will be lost or additional expenses incurred by an unscheduled system outage for different durations (i.e. 10 minutes, half hour, one hour, etc.)?
• Is this financial impact more likely to be felt during certain peak processing times? If so, what are the peak periods?
• What hard processing commitments do we have to external agencies or trading partners that are critical?
• What is your customer's tolerance for this service not being available?
• What reputation risk or regulatory consequences are at stake for certain processing commitments?
Business Partners should be educated to appropriately gauge the target state
Service Level objectives for their Business Services. The following concepts about
availability and recoverability should be made clear before seeking these Service
level objectives.
• Availability is different from Disaster Recovery.
• Availability provides for recovery from localized routine or non-routine events, with potential end-user impact. Disaster Recovery involves recovery between data centers following catastrophic/extraordinary events, with certain end-user impact from a disruption in service.
• The business lens used for what would be reasonable to accept in a “true disaster” (Recoverability) has no relevance in normal processing (Availability).
• Recoverability has both a technology and a business operations component.
• The technology will recover the application to the RPO and then the individual business unit will need to recover lost transactions.
• Application interdependencies will need to be clearly understood to accurately assign both the AO and RPO.
Exhibit A below demonstrates the differences between Availability & Recoverability.

Availability – The ability of a component or service to perform its required function at a stated instant or over a stated period of time.
• Routine or non-routine events.
• The Availability Objective drives the decision process in designing a system & determining investment trade-offs.
• Detailed systems analysis and design are required to ensure IT can deliver on the Availability Objective of the business.
• Availability Objectives are demonstrated by achieving and maintaining service level agreements.

Recoverability – The process of restoring business services after a disruption of service by transferring Information Technology (IT) resources to secondary or backup locations.
• Catastrophic/extraordinary events.
• The two components of recoverability are Recovery Time Objective and Recovery Point Objective.
• Technology can be deployed to minimize data loss (RPO).
• Business recovery processes should be assessed to further reduce the impact of data loss.
• Recovery Time Objectives and capabilities are validated through annual testing.
Based on this understanding, the ultimate goal of this exercise is to capture RPO, AO
& RTO values for the target state so that Application & Infrastructure design activities
can be based on these business requirements. In the next sections we will review
the definitions of these Service level objectives and how these objectives are
typically classified in an organization.
4.2 Application - Recovery Point Objective
Recovery Point Objective (RPO) is defined as the maximum amount of active
transaction data, expressed in units of time, which may be irretrievably lost before a
disaster event in order to meet business requirements. Some applications can handle
up to 24 hours or more of data loss in the event of a true disaster, while others
require near zero data loss.
The table below illustrates how Recoverability Objectives are typically defined for each
application. These will be utilized while defining the target state and further in
bundling applications to create move groups.
RPO A – Up to 15 minutes of data loss (Target: 15 minutes)
  Typical Use: Transaction Processing Applications
  Notes: RPO standard for critical applications; minimal data loss via hardware-based data replication solutions.

RPO B – Greater than 15 min and less than 24 hours (Targets: 1 hour, 4 hours, 8 hours, or 16 hours)
  Typical Use: Data Warehouses
  Notes: Intra-day snapshots or platform-specific data replication solutions; target data loss of less than 24 hrs.

RPO C – Equal to or greater than 24 hours of data loss (Targets: 24 hours or 48 hours)
  Typical Use: Departmental Applications
  Notes: RPO standard for non-critical applications; impact of up to 2 business days of data loss.

RPO X – Not Applicable
  Typical Use: Pass-through applications
  Notes: Applications do not store any business data, database, or file structures.

RPO Z – Near zero data loss
  Typical Use: Third-party custom solution, e.g. Wire Transfer System
  Notes: Custom IT solution; requires completing the Exception Process for approval.
4.3 Application - Availability Objective
Availability Objective (AO) is defined as the ability of a component or service to
perform its required function at a stated instant or over a stated period of time. In
determining the most appropriate/cost effective Availability Objective, the following
business considerations must be addressed:
• How much revenue will be lost or additional expenses incurred by an unscheduled system outage for different durations (i.e. 10 minutes, half hour, one hour, etc.)?
• Is this financial impact more likely to be felt during certain peak processing times? If so, what are the peak periods?
• What hard processing commitments do we have to external agencies or trading partners that are critical (i.e. Fed deadline or Balance Reporting)?
• What is your customer's tolerance for this service not being available?
• What reputation risk or regulatory consequences are at stake for certain processing commitments?
The table below illustrates how Availability Objectives are typically defined for each
application. These will be utilized while defining the target state and further in
bundling applications to create move groups.
App Category: Platinum
  Overall Service Availability: Greater than 99.95% monthly availability (22 minutes / month unavailable)
  Peak Service Availability: 100% monthly availability (zero minutes / month unavailable)
  Scheduled Service Outage: 2 minutes to restore application / service
  Service Restoration: 2 minutes to restore application / service
  Targeted Solution: 2 completely independent Production environments
  Business Need: Used for Extremely Time Critical Products or Services

App Category: Gold
  Overall Service Availability: Between 99.9% - 99.95% monthly availability (44 minutes / month unavailable)
  Peak Service Availability: 99.95% monthly availability (22 minutes / month unavailable)
  Scheduled Service Outage: Infrequently scheduled outages (e.g. quarterly)
  Service Restoration: 1 hour to restore application / service
  Targeted Solution: Fully redundant Production environment with no single points of failure
  Business Need: Used for Highly Time Critical Products or Services

App Category: Silver
  Overall Service Availability: Between 99.5% - 99.9% monthly availability (3.65 hours / month unavailable)
  Peak Service Availability: 99.9% monthly availability (44 minutes / month unavailable)
  Scheduled Service Outage: Regularly scheduled outages (e.g. monthly)
  Service Restoration: 2 hours to restore application / service
  Targeted Solution: Partially redundant Production environment with some single points of failure
  Business Need: Used for Important Products or Services that need Intraday Support

App Category: Bronze
  Overall Service Availability: Between 99.0% - 99.5% monthly availability (7.3 hours / month unavailable)
  Peak Service Availability: 99.5% monthly availability (3.65 hours / month unavailable)
  Scheduled Service Outage: Regularly scheduled outages (e.g. monthly)
  Service Restoration: 6 hours to restore application / service
  Targeted Solution: Single Production environment with no redundancy
  Business Need: Used for Products or Services that are not Intraday Time Critical
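The monthly downtime figures quoted in these availability classes follow directly from the percentage targets; the short sketch below (Python) shows the arithmetic, assuming an average month of 365/12 days, which reproduces the approximate 22-minute and 44-minute figures above.

```python
# Allowed downtime per month implied by an availability objective.
# The figures above correspond to an average month of 365/12 days (43,800 minutes).
MINUTES_PER_MONTH = 365 * 24 * 60 / 12

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of unavailability per month permitted by the given objective."""
    return MINUTES_PER_MONTH * (1 - availability_pct / 100)

for tier, pct in [("Platinum", 99.95), ("Gold", 99.9), ("Silver", 99.5), ("Bronze", 99.0)]:
    print(f"{tier:8s} {pct:5.2f}% -> {allowed_downtime_minutes(pct):6.1f} minutes/month")
# Platinum: 21.9 min (~22 minutes), Gold: 43.8 min (~44 minutes),
# Silver: 219.0 min (~3.65 hours), Bronze: 438.0 min (~7.3 hours)
```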
4.4 Application - Recovery Time Objective
The Recovery Time Objective (RTO) represents the expectation of how much time can
elapse after a business interruption until the process or application needs to be
restored. Typically, the Business Continuity Planning Group works with partners in
the lines of business and technology to ensure each system/application has an RTO
assigned reflective of the criticality of the business services delivered.
Depending on the level of RTO, appropriate technologies, processes, roles and
responsibilities are laid out to ensure that applications / business services can recover
from a disaster in a given time frame. E.g. in the table below, an RTO 1 application
(i.e. lowest in criticality from an RTO standpoint) may require only a warm / cold DR
and thereby save energy, rack space and maintenance costs. In such cases, sufficient
time is available to make technology & configuration changes to turn warm / cold DR
infrastructure into production without compromising the RTO. Similarly, an RTO 5
application is deemed a critical application with very little time to recover from a
disaster. Applications with RTO 5 may need a hot DR with automated failover
technologies in place, since no human intervention is expected to recover such
mission critical applications from a disaster.
RTO Level – Duration (Tolerable Time to Recover)
RTO 5 – 0 to 4:59 hours
RTO 4 – 5 to 24:59 hours
RTO 3 – 25 to 72:59 hours (1-3 days)
RTO 2 – 73 to 168 hours (4-7 days)
RTO 1 – Greater than 168 hours (more than 1 week)
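A small helper like the sketch below (Python, mirroring the illustrative bands above) can classify an application's tolerable recovery time into one of these RTO levels during the assessment; the band boundaries are the ones shown in the table and would be replaced by an organization's own standard.

```python
def rto_level(tolerable_hours: float) -> str:
    """Map a tolerable time-to-recover (hours) onto the illustrative RTO bands above."""
    if tolerable_hours < 5:
        return "RTO 5"   # most critical: hot DR, automated failover
    if tolerable_hours < 25:
        return "RTO 4"
    if tolerable_hours < 73:
        return "RTO 3"
    if tolerable_hours <= 168:
        return "RTO 2"
    return "RTO 1"       # least critical: warm / cold DR may suffice

print(rto_level(4))     # RTO 5
print(rto_level(240))   # RTO 1
```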
So, the Availability and Recoverability objectives for an application drive the
mechanisms to be put in place to ensure that business services supported on that
application are provisioned as per expectations and to prevent outages.
Selection of appropriate technologies to meet these service level objectives is
discussed further in the target state design process, where deployment patterns are
explained in detail.
4.5 High Level Cost estimation
The cost discussion in Business Service Reviews (BSR) is needed to set
expectations on what sort of changes in cost the business can expect to see based
on the target data center configuration. This is also necessary to support decision
making around the appropriate Availability and Recovery objectives.
The Availability & Recoverability objectives defined by business translate to the level
of High Availability & Data Protection technologies to be implemented on Servers,
Network & Storage environment. Therefore, Enterprises generally attach standard
costs on the basis of type of hardware, selected Availability & Recoverability
objectives for planning purposes.
At this point in the program we are not aiming for an actual Bill of Materials / target state
costs, as those will be assessed towards the end of Target State Application &
Infrastructure design. The objective of this high-level estimation is to ensure
that the business understands the following:
• Selecting higher AO, RPO, and RTO values for the target state translates to certain standard cost levels.
• If the business wants to revisit the selected service level objectives, they are prompted to do so before proceeding with detailed Application & Infrastructure Design activities for the target state.
• Sign-off is sought prior to detailed target state design.
In general, key components of Cost Data are described as below:
4.5.1 Recurring Expenses
Illustrative expenses that the business would see as recurring as a result of Data Center
Migration include:
• Data retention and recovery
• Server support
• Network chargebacks
• Server system administration chargebacks
4.5.2 One-Time Expenses
Illustrative expenses that the business would see as one-time as a result of Data Center
Migration include:
• Approximate cost of new servers and other hardware associated with the new data center configuration
• Approximate cost of the Data Center Migration Move project and associated application testing
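As a sketch of the kind of high-level arithmetic intended here (Python, with entirely hypothetical standard unit costs per server class and availability tier), an estimate can be assembled by multiplying server counts by tier-weighted standard rates; real figures would come from the enterprise's own chargeback and standard-cost tables.

```python
# Hypothetical standard monthly rates (USD) per server, by class and availability tier.
STANDARD_MONTHLY_RATE = {
    ("Small",  "Bronze"): 400,  ("Small",  "Gold"): 700,
    ("Medium", "Bronze"): 800,  ("Medium", "Gold"): 1400,
    ("Large",  "Bronze"): 1600, ("Large",  "Gold"): 2800,
}

# Hypothetical target-state footprint for one application: (server class, tier, count)
footprint = [("Medium", "Gold", 4), ("Small", "Bronze", 6)]

recurring = sum(STANDARD_MONTHLY_RATE[(cls, tier)] * count
                for cls, tier, count in footprint)
one_time = 4 * 12_000 + 6 * 5_000      # hypothetical new-server purchase costs

print(f"Estimated recurring: ${recurring:,}/month, one-time: ${one_time:,}")
# Estimated recurring: $8,000/month, one-time: $78,000
```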
5 Application & Infrastructure Design – Target Data Center
Application & Infrastructure design for the target state is based on multiple
parameters such as business defined service level objectives, Enterprise IT –
Hardware & Software standards, opportunities for Infrastructure optimization, Data
Center efficiencies and reduction in IT costs.
In general, Target Application & Infrastructure Architecture is designed keeping in
mind:
• Current & target Availability, Recovery Time & Recovery Point Objectives
• Status of applications (Development, Production, Sunset, Obsolete etc.)
• Retiring hardware & software requiring technology refresh
• Dependencies on shared hosting environments / shared databases
• Storage requirements
• Virtualization & consolidation opportunities
• Existing single points of failure
• Capacity, performance & scalability requirements
• Network & security requirements
• External vendor communication requirements
• Supported vendor products & technologies
• Latency sensitivity analysis (in view of revised network routes connecting to the Target Data Center)
The Target State Application & Infrastructure design would consist of the following
components, which are discussed in further detail below:
Design Layer – Description
Facilities Layer – Target Data Center facilities requirements (space, power, cooling, cabling etc.); Target Data Center rack layout
Core Infrastructure Layer – Core network, security & storage infrastructure design
Shared Services Layer – Enterprise Shared Services (authentication, shared databases, messaging, Internet, monitoring & management systems etc.)
Application Layer – Application architecture at target state (in view of capacity, scalability, BSR expectations); definition of deployment patterns; individual application specific hardware and software requirements; selection of appropriate technologies to ensure availability and data protection at the Target Data Center in line with BSR expectations
5.1 Core Infrastructure Design
The assessment of the Source Data Center environment carried out earlier provides all
the necessary high level indications of the scale, size, and type of hardware that will
be built / moved to the target data center. Although a detailed application-level
design is not available at this point in the program, the design and build-out of the Core
Infrastructure layer needs to occur in parallel with / prior to the actual application
infrastructure build, for obvious reasons. For capacity planning purposes, the current
state analysis along with all known considerations helps in the estimation of the
following (see the sketch after this list):
• Number of racks of each category based on power ratings / cooling requirements / physical size of hardware to be deployed
• Power & cooling requirements based on the expected number of server, network & storage devices to be hosted in the target data center
• Type & capacity of network & security hardware to be deployed
• Overlaying technologies to be deployed
• SAN / NAS / DAS / backup hardware & technologies to be deployed
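As sketched below (Python, with hypothetical per-class server counts, rack capacities and power draws), this estimation can start as simple arithmetic over the source inventory; real planning would add vendor-specific ratings, growth factors and redundancy (N+1) margins.

```python
import math

# Hypothetical source-inventory summary: server class -> (count, rack units each, watts each)
inventory = {
    "1U pizza box":  (120, 1, 350),
    "4U database":   (18, 4, 900),
    "Blade chassis": (6, 10, 4500),
}

RACK_USABLE_U = 42           # usable rack units per rack (assumption)
RACK_POWER_BUDGET_W = 8000   # power budget per rack (assumption)

total_u = sum(count * u for count, u, _ in inventory.values())
total_w = sum(count * w for count, _, w in inventory.values())

racks_by_space = math.ceil(total_u / RACK_USABLE_U)
racks_by_power = math.ceil(total_w / RACK_POWER_BUDGET_W)

print(f"Rack units: {total_u}, IT load: {total_w / 1000:.1f} kW")
print(f"Racks needed: {max(racks_by_space, racks_by_power)} "
      f"(space-driven: {racks_by_space}, power-driven: {racks_by_power})")
# Rack units: 252, IT load: 85.2 kW
# Racks needed: 11 (space-driven: 6, power-driven: 11)
```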
Enterprise Technology standards are followed in creating a skeleton of Infrastructure
layer such as standard hardware, Core – Distribution – Access framework for
aggregation & Distribution of Network & Storage traffic, deployment of security zones
to host different applications / services in a systematic fashion, setup of Monitoring &
Management framework to enable pro-active monitoring, troubleshooting and
administration of Core infrastructure.
Core Infrastructure is not only designed to accommodate the present IT
requirements, but also to accommodate business growth projections of the enterprise
in the near future. So, Core Infrastructure should be flexible, modular, scalable &
capable of meeting varying service level objectives defined by business for different
applications.
Below is an illustrative exhibit showing Core Infrastructure components.
5.2 Enterprise Shared Services Design
Enterprise shared services serve as common service infrastructure for all the other
business applications of the organization. As a large number of applications utilize
enterprise shared services for their full functionality, it is logical to design and build
them prior to the actual application migration, e.g. Authentication, Authorization, DNS,
Messaging, Monitoring, Antivirus, Security Services etc.
The following are some of the parameters that should be considered while planning the
capacity of the shared services infrastructure at the target data center:
• Enterprise vision on the selection of Shared Services technologies to be deployed at the target data center
• Number of Intranet / Internet / VPN users accessing different shared services in the source data center
• Expected number of Intranet / Internet / VPN users accessing these shared services in the target data center (based on growth trends)
• Expected differences in IT workload on various systems & associated hardware & software sizing
• Number of network hosts / mailboxes / backup clients / security zones / DNS zones etc. needed in the target data center, based on the applications & business services being migrated to the target data center
• Geographical locations from where shared services at the target data center will be accessed, and the performance requirements thereof
• Any additional shared services that can enhance the value of services offered by the business should be considered in the target state architecture
• Criticality of the Shared Services planned to be offered to serve business needs. These services should be evaluated for appropriate Service Level Objectives (AO, RPO, RTO) so that their resiliency & data protection is planned accordingly
• Opportunities to conduct a technology refresh in the target state if any of the Shared Services at the Source Data Center are running on near "End of Life" hardware / software
• Network, server, storage & database monitoring requirements in the target data center
• Shared database requirements (e.g. Oracle Grid, SQL cluster etc.)
• Shared hosting requirements (e.g. shared Web / Application Server farms)
5.3 Application Re-Architecture – As needed
Re-architecture of certain applications is needed on a case-by-case basis. Based on
the current state analysis of certain applications, it may be necessary to make revisions to
their logical architecture. Some of the illustrative reasons are:
• End of life of associated hardware / software / both
• Consolidation of applications for cost optimization
• Rationalization & consolidation of databases / business application logic, e.g. reducing individual application-related database instances and moving towards utilization of shared databases / application hosting farms
• Feature upgrades to cater to new business scenarios
• OS / DB platform migration requirements, e.g. Mainframe to Distributed or DB2 to Oracle etc.
• Changes to application interdependencies in the target state that pose an impact on application architecture
• New security requirements in the target state data center, e.g. Enterprise strategy might enforce maintaining Application & DB servers in different tiers based on certain parameters in the application profile
• Any latency intolerance with other dependent applications triggering an architectural change to mitigate the same
• Requirements to adhere to any new Enterprise Standards in the target state
5.4 Selection of Deployment Model / Patterns – Infrastructure
Perspective
Generally, large enterprises follow certain guiding principles to categorize the level of
redundancy, reliability & data protection of a physical deployment. These guiding
principles enable a standard way of deploying applications in a data center
configuration. Moreover, these patterns help achieve better co-ordination between IT
and Business in terms of identifying what infrastructure IT needs to provision in order
to provide a certain grade of service to the business users.
From a data center migration standpoint, selection of deployment patterns may be
needed to either enhance or optimize the infrastructure implementation in the target
state e.g. in the source data center if a lower criticality RTO / RPO application has
been deployed with unnecessary redundancies, there may be a potential to optimize
the infrastructure deployment in the target state. Similarly, if the current business
conditions require an upgrade to RTO / RPO of an application, it may mean
deployment of additional availability infrastructure or implementation of other
technologies to facilitate quicker data retrieval / better performance / zero human
intervention / elimination of any other points of failure etc.
Service Level Objectives defined by the business & Application-level design
principles need to be applied to databases, messaging servers, application servers,
web servers, and nearly any other type of workload. The deployment architecture
should consider the High Availability requirements (based on Availability objectives)
and Data protection requirements (based on the Recovery Point and Recovery Time
Objectives). As a result, three popular deployment patterns can be derived:
• Active-Passive failover
• Active-Active failover
• Load Balanced
5.4.1 Active-passive failover:
This model works for nearly all application types, including databases, application
servers, web servers, and messaging servers. Cluster management software (such
as Microsoft Cluster Service, VERITAS Cluster Server, or Sun Cluster) detects either
a software or hardware failure on one node in the cluster and starts that workload on
the idle node, allocating the failed node’s IP address to the takeover node.
5.4.2 Active-active failover:
This model also works for nearly all application types, since it is essentially just a
variation on the active-passive failover model above. All nodes in the cluster are
running applications, with enough reserve capacity across all of them to take over
workloads from a failed node. Cluster management software monitors all nodes and
services in the cluster and enables for migration of services from failed nodes to
active nodes. As with active-passive failover, applications that maintain state with a
client application or end user will lose any data not committed to disk at the time of
failure.
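To illustrate the reserve-capacity point for active-active clusters, here is a minimal sketch (Python, with hypothetical node utilizations) that checks whether the surviving nodes can absorb the busiest node's workload if it fails (an N-1 check); actual sizing would use measured peak utilization and the cluster software's own placement policies.

```python
# Hypothetical per-node utilization (fraction of one node's capacity in use).
node_utilization = {"node-a": 0.55, "node-b": 0.50, "node-c": 0.45}

def survives_single_failure(utilization: dict, headroom: float = 0.10) -> bool:
    """True if the remaining nodes can absorb any single failed node's load
    while keeping the requested headroom free on each survivor."""
    for failed in utilization:
        survivors = {n: u for n, u in utilization.items() if n != failed}
        redistributed = utilization[failed] / len(survivors)
        if any(u + redistributed > 1.0 - headroom for u in survivors.values()):
            return False
    return True

print(survives_single_failure(node_utilization))
# True: worst-case survivor load is 0.80, within the 0.90 limit
```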
Refer to the illustrative high availability patterns below.
h. Active Server at Production & Active / Passive at DR site
i. Active Server Cluster at Production & Active / Passive Server Cluster at DR site
5.4.3 Load balancing
This model works for both stateless and stateful applications and can be used for
web servers, application servers, messaging servers, database servers, and software
developed in-house. Although load balancing may not always be feasible, where it
can be implemented it provides a very high level of resiliency and performance, and can
also provide a point-of-presence advantage for some applications.
Stateless applications
Multiple nodes in a cluster run the same application. Load balancing can be easily
implemented at the network layer.
Stateful applications without session state replication
Affinity must be maintained between a particular client and a particular host in the
cluster for the duration of the session. This can be done at the network layer with
specially configured load balancers, or at the application layer.
a. Load Balancing between Production & DR servers in two data centers
b. Load Balancing between Production & DR Server Clusters in two data centers
5.5 Application specific Infrastructure Design
At this point, we are ready to design application specific infrastructure requirements.
The following list describes the various components that need to be looked at to evolve
an application's infrastructure architecture:
• Logical application architecture
• Physical deployment architecture in the source data center
• Appropriate physical deployment model for the target state
• Individual servers & databases associated with an application
• Application specific software requirements
• Application specific appliances
• Application specific LAN / WAN & third party network connectivity requirements
• Network performance / latency sensitivity assessment (this will help in feasibility assessment / impact analysis & mitigation planning for moving an application from the source to the target data center)
• Application specific external storage & backup requirements
• Any client software needed on application specific servers to access Enterprise shared services
The type of information that should be captured for each of the above-mentioned
infrastructure components is explained in further detail as follows:
5.5.1 Individual Servers & Databases Associated with an application
Particular | Source Data Center | Target Data Center
Host Name | <Current hostname> | <Target hostname>
Location | <Source Server location> | <Target Server location>
Vendor | <Current Hardware Vendor> | <Target Hardware Vendor>
Model | <Current Hardware Model> | <Target Hardware Model>
RAM | Allocated capacity (GB) | Planned capacity (GB)
CPU Type | SPARC / RISC / INTEL | SPARC / RISC / INTEL
CPU Count | <Count> | <Count>
Server Class | UNIX / Linux / Windows | UNIX / Linux / Windows
Server General Class Size | Large / Medium / Small | Large / Medium / Small
Migration Type | Not Applicable | Retain as at Source Data Center /
Virtualization Status | Already exists / Not virtualized | Retain as at Source / cannot virtualize / To be virtualized
In Secured Zone? | No / <Name of Zone> | No / <Name of Zone>
Belongs to a Shared Environment | No / <Name of environment> | No / <Name of environment>
HA Model | Active-Active / Active-Passive / Load Balanced | Active-Active / Active-Passive / Load Balanced
Server HA Cluster Partner | Hostname/s | Hostname/s
DR Model | Active-Active / Active-Passive | Active-Active / Active-Passive
Server DR Partner | Hostname | Hostname
DR Partner Location | Location | Location
Devices Attached | Printer, CD writer etc. | Printer, CD writer etc.
CPU Speed | <In Giga Hertz> | <Proposed in Giga Hertz>
Operating System | OS with version | Target OS with version
The above table shows the information to be captured for servers in the source data
center, along with the corresponding disposition needed for the target data center.
Once the application architecture for the target state is designed and the logical & physical
diagrams are created, the details about servers in the target state will need to be defined.
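To keep the source-versus-target disposition machine-readable during the assessment, the fields above can be captured in a small record structure. The sketch below is one hypothetical way to do that; the field names mirror the table and the sample values are placeholders, not real hosts.

```python
# Minimal sketch: a record mirroring the server disposition table above.
# Field names follow the table; values shown are illustrative placeholders.
from dataclasses import dataclass, asdict

@dataclass
class ServerDisposition:
    hostname: str
    location: str
    vendor: str
    model: str
    ram_gb: int
    cpu_type: str            # SPARC / RISC / INTEL
    cpu_count: int
    cpu_speed_ghz: float
    operating_system: str
    server_class: str        # UNIX / Linux / Windows
    size: str                # Large / Medium / Small
    virtualization_status: str
    in_secured_zone: str     # "No" or zone name
    shared_environment: str  # "No" or environment name
    ha_model: str            # Active-Active / Active-Passive / Load Balanced
    dr_model: str            # Active-Active / Active-Passive

source = ServerDisposition("srcweb01", "DC-A", "Sun", "T5220", 32, "SPARC", 8, 1.4,
                           "Solaris 10", "UNIX", "Medium", "Not virtualized",
                           "No", "No", "Active-Passive", "Active-Passive")
target = ServerDisposition("tgtweb01", "DC-B", "HP", "DL380", 32, "INTEL", 8, 2.6,
                           "RHEL 5", "Linux", "Medium", "To be virtualized",
                           "No", "No", "Load Balanced", "Active-Passive")

# Diffing the two records highlights exactly what changes between source and target.
changes = {k: (v, asdict(target)[k]) for k, v in asdict(source).items()
           if asdict(target)[k] != v}
print(changes)
```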
5.5.2 Virtualization / Consolidation Assessment of Servers
Virtualization & consolidation offer substantial benefits to enterprises in terms of
infrastructure & data center space optimization, and thereby a reduction of IT costs.
Below are some of the key benefits:
• Reduction in:
  o server and related infrastructure footprint in the data center
  o server management and maintenance costs
  o server delivery time, and thus a higher level of business agility
  o OS licensing requirements
  o costs, by delivering high availability and failover capabilities to a broader scope of servers
• Maximizes / optimizes:
  o infrastructure and server utilization
  o standardization of DR
• Provides:
  o servers with complete hardware independence, allowing servers to be moved, migrated or upgraded with the least possible risk
  o increased consistency in device driver configuration, which can simplify upgrades and migrations
  o centralized capacity management and planning
  o delegation of certain functions, while allowing for effortless delegation of administration for the virtual machines themselves
However, careful consideration is necessary to decide whether a particular server
should be built as a physical server or in a virtualized / consolidated environment.
The following are some of the objectives that should be kept in mind while
conducting an assessment of virtualization candidates.
• Avoid negative performance impacts to an application.
• Maximize performance-per-watt.
• Simplify disaster recovery processes.
• Improve recoverability of all applications to be migrated to virtual servers.
• Maintain availability levels of all applications.
• Enable faster provisioning and re-provisioning of applications on virtual machines.
• Assess the effort required to migrate a typical server to a virtual instance.
• If needed, conduct preliminary testing to ensure the compatibility & performance of an application.
• Applications that can be migrated to a shared environment (shared app farm, shared databases, hosting environments etc.) should be migrated to those environments instead of virtual servers where this allows for better utilization, performance, or cost.
Each organization follows its own enterprise standards & considers virtualization to be
the right option for a certain set of databases / operating systems etc., based on the
maturity level and support infrastructure in place. The chart below provides general
guidelines on which I/O subsystems are utilized by the various server types.
Utilization statistics based on server type & I/O subsystems enable decisions around
their anticipated suitability from an infrastructure standpoint. After the virtualization
assessment from an infrastructure perspective, the applications residing on this
infrastructure need to be assessed for their compatibility / vendor support.
Server Type | I/O Subsystem | Anticipated Suitability | Examples
Web Server | CPU, Network | HIGH | IIS, Apache
File and Print Server | Disk, Network | HIGH | Windows 2003 FP, SMS Dist. Point
Database Server | Disk, Memory, CPU* (*indicative of an issue) | MEDIUM (Note: additional DBMS consolidation methodologies exist) | Oracle, SQL, DB2, UDB
Transactional Server | CPU, Network, Memory | MEDIUM to HIGH (Note: suitability is dependent upon the application's utilization patterns) | MQ, Windows Domain Controller, MOM, DNS, DHCP
Terminal/Citrix Server | CPU, Memory | MEDIUM (dependent on concurrent users and on the application footprint) | MS Terminal Services and Citrix Presentation Server
Utility/Administration Server | Memory | HIGH | MQ Admin Tool, Tivoli Configuration Management server
Application Server | CPU, Memory, Network, Disk | MEDIUM | Anti-Virus, WebSphere, Messaging, DB Reporting, Patch Deployment, SMS
Based on the guidelines for preliminary assessment mentioned above & the detailed
virtualization assessment criteria in the Appendix (Section 9.1), a metrics score can be
arrived at to classify the potential virtualization candidates as Good / Likely / Possible /
Not a Good candidate (as discussed below).
Good Candidate
A good score in all metrics associated with the candidate server reflects infrastructure
resource usage well within the virtualization target server capabilities. The candidate
server is considered a viable virtualization candidate as per the infrastructure
analysis review. Further, application teams confirm that vendor support is available on
the virtual instance & no performance issues are anticipated. Some scenarios may
need pilot testing to arrive at a conclusion.
Likely Candidate
A likely score in any of the metrics for this candidate reflects greater infrastructure
resource requirements for the server, but the candidate server is still considered a
viable virtualization candidate per the infrastructure analysis. Further, application
teams confirm that vendor support is available on the virtual instance & no
performance issues are anticipated. Some scenarios may need pilot testing to
arrive at a conclusion.
Possible Candidate
A possible score in any of the metrics for this candidate reflects infrastructure
resource requirements beyond present virtualization capabilities, and thus the server
is not considered a viable virtualization candidate at this time. However, the vendor
supports the application on a virtual instance, so there may still be a remote possibility
of virtualizing it.
Not a good candidate
The performance metrics do not reflect this candidate as a viable virtualization candidate
from an infrastructure analysis standpoint, nor does the vendor support the application on
a virtual instance. This is not a good candidate for virtualization.
Finally, Likely and Possible candidates go through some level of proof of concept or a
deeper-dive assessment to confirm their candidacies as Good or Not Good
candidates.
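As an illustration of how this scoring can be automated, the sketch below applies the Windows/Linux memory and disk thresholds listed in the Appendix (Section 9.1) and reports the overall candidacy as the most restrictive metric. The function names are invented for this sketch, and any organization-specific criteria would replace the thresholds shown.

```python
# Minimal sketch: classify a server's virtualization candidacy from its memory
# and disk metrics, using the Windows/Linux thresholds listed in the Appendix
# (Section 9.1). Thresholds for other metrics would be added the same way;
# this is illustrative, not a complete assessment tool.

def memory_candidacy(total_memory_mb: float, utilization_pct: float) -> str:
    score = total_memory_mb * (utilization_pct / 100.0)   # Memory Score
    if score <= 4096:
        return "Good"
    if score <= 6144:
        return "Likely"
    return "Possible"

def disk_candidacy(avg_io_per_sec: float) -> str:
    if avg_io_per_sec <= 1000:
        return "Good"
    if avg_io_per_sec <= 3000:
        return "Likely"
    return "Possible"

def overall_candidacy(*metric_results: str) -> str:
    # The server takes the most restrictive score across its metrics,
    # mirroring the "Virtual Score" rule in the Appendix.
    order = ["Good", "Likely", "Possible"]
    return max(metric_results, key=order.index)

if __name__ == "__main__":
    mem = memory_candidacy(total_memory_mb=8192, utilization_pct=60)   # -> Likely
    dsk = disk_candidacy(avg_io_per_sec=800)                           # -> Good
    print(mem, dsk, "->", overall_candidacy(mem, dsk))                 # -> Likely
```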
5.5.3 Application specific software requirements
An application requires different pieces of software to be installed once the basic
installation of hardware and OS is completed. Some of these software components may
be industry standard off-the-shelf products, while others may be in-house applications
developed to meet specific business requirements.
The following are three commonly seen categories of software:
• Business applications (e.g. Core Banking / Brokerage / Liquidity Management / Reporting etc.)
• Monitoring software (database / network / server monitoring etc.)
• Infrastructure software (Backup / Terminal Services / Citrix / .NET / Oracle etc.)
The below table illustrates the typical information needed about each software
component in the target state.
Particular | Remarks
Software ID (if any) | Unique software identifier
Software Name | <Software name>
Software Version / Service Pack | <Software version>
Software Description | <Description>
Software Vendor | <Vendor name> / in-house
Software Category | <Application / Monitoring / Infrastructure>
Number of licenses (as applicable) | <Count / CPU or number of users etc.>
Associated Server(s) | <Hostname(s)>
Number of instances needed | <Count>
Business Criticality on Software | Low / Medium / High
Responsible Party | Application Team / Infrastructure Team / Third Party
5.5.4 Application specific appliances
An application may require specific appliances to be installed for its overall
functionality. Some of the devices that may fall in this category are Application
Protection Systems (APS), DataPower, and dedicated Network / SAN switch
modules, etc., that are not covered by the Enterprise Infrastructure.
The following table briefly describes the type of appliance-related information that
should be assessed for the target state of an application.
Particular | Remarks
Appliance Name | <Name of appliance>
Vendor / Manufacturer Name | <Name>
Appliance Type | <Security / Facilities / Network / Storage>
Appliance Category | <Shared / Dedicated>
Business Criticality on Appliance | Low / Medium / High
Responsible Party | Application Team / Infrastructure Team / Third Party
5.5.5 Application specific LAN/WAN & Third Party network connectivity requirements
Network requirements are an important piece of information that needs to be
captured during the target state design of an application, to ensure that the application is
accessible by all desired users / customers. The below table describes typical network-related
information that needs to be defined for the target state of an application.
Particular | Remarks
Network Diagram for Application | <Physical & logical diagram>
DNS requirements | <Zone / Domain & Sub-domain name>, <IP address>
IP addresses for all hosts of application | <A.B.C.D>, <hostname>
Static Routes needed | <Specify the routes>
Third Party Circuits | <Circuit ID>, <Bandwidth>, <From - To>
Load Balancing requirements | <Global Server Load Balancing / Local Server Load Balancing etc.>
Virtual LAN requirements | <No. & type of VLANs needed> (if any)
Application specific network monitoring (if any) | <Bandwidth utilization / Network performance / Network availability to application etc.>
5.5.6 Security Requirements
Security requirements are important to secure the application against potential
threats such as fraudulent activity and intrusion, & to ensure that different types of users are
granted access based on their profile and the business model in place.
The below table describes typical security-related information that needs to be defined for
the target state of an application.
Particular | Remarks
Firewall Rules | <Source IP addresses>, <Destination IP addresses>, <TCP/UDP ports>, <Access Permit / Deny>
User / Employee Access Requirements | User / Employee IDs
Access requirement - From Intranet | <Permitted zones / locations / user groups / domains>
Access requirement - From Internet | <Permitted zones / locations / user groups / domains>
Firewall / IDS logs | <Rules to be logged>
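The firewall rule entries above lend themselves to a structured request format. The sketch below is one hypothetical way to record and sanity-check rule requests before handing them to the network and security teams; the field names and checks are assumptions, not any particular firewall's API.

```python
# Minimal sketch: capture firewall rule requests as structured records and run
# basic sanity checks (valid IPs, valid port range, explicit action).
# Illustrative only; it does not talk to any real firewall.
import ipaddress

def validate_rule(rule: dict) -> list:
    """Return a list of problems found in a single rule request."""
    problems = []
    for key in ("source", "destination"):
        try:
            ipaddress.ip_network(rule[key], strict=False)
        except (KeyError, ValueError):
            problems.append(f"invalid or missing {key}")
    port = rule.get("port")
    if not (isinstance(port, int) and 1 <= port <= 65535):
        problems.append("port must be an integer between 1 and 65535")
    if rule.get("action") not in ("permit", "deny"):
        problems.append("action must be 'permit' or 'deny'")
    return problems

rules = [
    {"source": "10.1.20.0/24", "destination": "10.2.30.15", "port": 443,
     "protocol": "tcp", "action": "permit", "log": True},
    {"source": "10.1.20.0/24", "destination": "10.2.30.15", "port": 70000,
     "protocol": "tcp", "action": "allow", "log": False},   # deliberately invalid
]

for i, r in enumerate(rules, 1):
    issues = validate_rule(r)
    print(f"rule {i}: {'OK' if not issues else issues}")
```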
5.5.7 Network Performance / Latency Sensitivity assessment
In many data center migration scenarios, the latency between the production & DR sites
changes because of the revised distance between the new locations. In cases where the
latency / round trip delay between the production & DR sites increases, there is a need to
evaluate its impact on all applications. Individual application-specific mitigation plans
will need to be created if any application cannot tolerate the increased latency.
The below table shows some important performance information that should be assessed
for an application & its associated servers to ensure that due consideration has
been given in this regard.
Particular | Remarks
Latency sensitive hosts | <Hostname>
Hosts with high data transfer rate | Source & destination host IP addresses
Estimated Average Load (MB/sec) | <MB/sec>
Estimated Peak Load (MB/sec) | <MB/sec>
Any additional week / month / year end load | <Amount of load on network>
Peak time of the day | <hh:mm>
Peak day of the week | <Mon-Sun>
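A rough feasibility check can be made even before testing: round-trip time over fiber grows with distance, so the sketch below estimates the added RTT from the inter-site fiber distance and flags host pairs whose latency budget would be exceeded. The budget figures, hop allowance and the roughly 5 microseconds-per-kilometer propagation constant are assumptions for illustration; measured values should always take precedence.

```python
# Minimal sketch: estimate the added round-trip time (RTT) between the new
# production and DR sites from their fiber distance, and flag latency-
# sensitive host pairs whose assumed budget would be exceeded.
# Illustrative only; real assessments should rely on measured RTT.

PROPAGATION_MS_PER_KM = 0.005   # ~5 microseconds/km one way in fiber (approximate)

def estimated_rtt_ms(fiber_km: float, per_hop_ms: float = 0.5, hops: int = 4) -> float:
    """Two-way propagation delay plus a rough allowance for network device hops."""
    return 2 * fiber_km * PROPAGATION_MS_PER_KM + hops * per_hop_ms

# Hypothetical latency budgets per host pair (application-defined, in ms).
budgets = {
    ("dbprod01", "dbdr01"): 5.0,     # synchronous replication pair
    ("appprod01", "appdr01"): 40.0,  # asynchronous batch interface
}

distance_km = 1200  # assumed fiber route length between the two sites
rtt = estimated_rtt_ms(distance_km)

for (src, dst), budget in budgets.items():
    status = "OK" if rtt <= budget else "NEEDS MITIGATION"
    print(f"{src} -> {dst}: estimated RTT {rtt:.1f} ms vs budget {budget} ms: {status}")
```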
5.5.8 Application specific External Storage & Backup requirements
Any external storage requirements (not on the physical disks of associated servers)
need to be defined so that the Enterprise Storage Infrastructure team can ensure that
the SAN / NAS / tape library infrastructure has sufficient capacity on disk arrays / switch
fabrics etc. to accommodate the requirements of an application.
The below table illustrates typical information that needs to be captured to enable
Storage Infrastructure Engineering teams in the necessary planning & design.
Particular | Remarks
External Storage type (if any) | SAN / NAS / other
External Storage Data Volume | <GB>
Volume of Data Change Per Day | <GB>
Estimated Growth | <GB>/month
Data Protection Type | Synchronous / Asynchronous replication OR non-replicated (tapes)
Storage Tier | Tier 1-5 (based on business criticality)
Full Backup Size | <GB>
Incremental Backup Size | <GB>
Data Retention Requirements | <Duration in months / years>
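The daily change volume captured above also determines roughly how much replication bandwidth the target storage design must provide. A back-of-the-envelope sketch follows; the change volume, replication window and headroom factor are assumed illustrative values, not recommendations.

```python
# Minimal sketch: estimate the WAN bandwidth needed to replicate a day's worth
# of changed data within a given replication window. All inputs are assumed
# illustrative values; use the figures captured in the table above.

def required_mbps(change_gb_per_day: float, window_hours: float, headroom: float = 1.3) -> float:
    """Average megabits per second needed, with headroom for bursts and protocol overhead."""
    megabits = change_gb_per_day * 8 * 1024          # GB/day -> megabits/day
    return megabits / (window_hours * 3600) * headroom

if __name__ == "__main__":
    # Example: 200 GB of daily change replicated within an 8-hour window.
    print(f"{required_mbps(200, 8):.1f} Mbps")       # roughly 74 Mbps with 30% headroom
```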
5.5.9 Client software needed to access Enterprise shared services
This information is needed to know whether any software clients are needed on any of the servers of an
application to access Enterprise Shared Services. This will help in correctly quantifying the application-specific
requirements to the Enterprise Shared Services Infrastructure group.
Particular | Remarks
Software ID (if any) | Unique software identifier
Software Name | <Software name>
Software Version / Service Pack | <Software version>
Software Description | <Description>
Software Vendor | <Vendor name> / in-house
Software Category | <Application / Monitoring / Infrastructure>
Number of licenses (as applicable) | <Count / CPU or number of users etc.>
Associated Server(s) | <Hostname(s)>
Number of instances needed | <Count>
5.6 Facilities Requirements / Design
At this point, we have designed the target state IT infrastructure that will be housed in the
target data center. It is important to ensure that the facilities infrastructure in the
target data center is capable of catering to the target IT hardware. It should also provide
sufficient capacity for growth as defined through the enterprise's data center strategy. In
the age of multi-core processors, blade server racks and grid computing
environments, modern data centers are housing much denser IT hardware, thereby
requiring a very careful facilities design. Earlier, while conducting the assessment of the
source data center, we captured its facilities profile, which now needs to be
reviewed again in view of the target IT infrastructure & scalability requirements. The following
are some of the key considerations that should be kept in mind while planning the
facilities infrastructure at the target data center.
• Estimation of total UPS & raw power requirements for the hardware design to be implemented at the target data center
• Number of racks & square footage needed in the target data center
• Power requirements in terms of Watts per rack & by type of power (single phase / three phase)
• New redundancy requirements to eliminate single points of failure
• Change in density & capacity of servers as compared to the source data center, e.g. blade servers, disk arrays, high-end proprietary server racks etc.
• Backup generators needed for fault tolerance
• HVAC capacity and cool air distribution design to ensure there are no hot spots
• Need for any spot / rack-mounted cooling systems
• Hot aisle / cold aisle design and methodology to collect heat from the hot aisle to optimize HVAC performance
• Weight of the hardware and strength of the floor needed
• Any new regulatory / safety requirements
• Capacity of Power Distribution Units, MCBs
• Data center energy usage monitoring system
These considerations provide inputs for any adjustments to already provisioned
facilities / assist in quantifying requirements for any new facilities infrastructure.
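To illustrate how these inputs roll up, the sketch below estimates rack count, total power and cooling load for a hypothetical hardware inventory. The per-device wattages, rack capacities and the 1 W to about 3.412 BTU/hr conversion are assumptions for illustration; the facilities engineering team's own figures should be used.

```python
# Minimal sketch: roll up racks, power and cooling load from a hypothetical
# target hardware inventory. All wattages and rack limits are assumed values.

inventory = [
    # (device type, quantity, rack units each, watts each)
    ("1U rack server",      60, 1,  450),
    ("Blade chassis",        4, 10, 4500),
    ("Storage array shelf", 12, 3,  350),
    ("Network switch",       8, 1,  200),
]

RACK_USABLE_U = 40         # usable units per rack (assumed)
RACK_POWER_LIMIT_W = 8000  # per-rack power budget (assumed)
WATT_TO_BTU_HR = 3.412     # 1 watt of IT load is roughly 3.412 BTU/hr of heat

total_u = sum(qty * u for _, qty, u, _ in inventory)
total_w = sum(qty * w for _, qty, _, w in inventory)

racks_by_space = -(-total_u // RACK_USABLE_U)        # ceiling division
racks_by_power = -(-total_w // RACK_POWER_LIMIT_W)
racks_needed = max(racks_by_space, racks_by_power)

print(f"Total IT load: {total_w / 1000:.1f} kW")
print(f"Cooling load:  {total_w * WATT_TO_BTU_HR / 1000:.0f} kBTU/hr")
print(f"Racks needed:  {int(racks_needed)} "
      f"(space-driven: {int(racks_by_space)}, power-driven: {int(racks_by_power)})")
```

In this example the rack count is power-driven rather than space-driven, which is a common outcome with dense blade hardware and one reason the Watts-per-rack figure above matters as much as square footage.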
6 Move Plans
Once the current & target state application & infrastructure analysis / design activities
are completed, data center move planning activities need to begin. Careful
migration / relocation planning is key to the success of the program. Ensuring the
business continuity and availability of production applications while carrying out a data
center migration is more challenging than building a data center for the first time. We are
dealing with wires that are already carrying electric current, and hence much deeper
due diligence is needed to avoid any shocks. The following are some of the illustrative
activities that are recommended to create an efficient move / migration plan.
• Assessment of application & infrastructure interdependencies
• Assessment of time zone dependencies (applicable if the target data center is in a different time zone)
• Creation of move bundles (based on logical interdependencies & business priorities)
• Dispositions on infrastructure build decisions, such as:
  o Build new hardware & software at the target data center, via a cold build, virtualization (Physical-to-Virtual (P2V) or Virtual-to-Virtual (V2V)), or a Physical-to-Physical move over the network (P2P)
  o Forklift / truck hardware from the source to the target data center
  o Temporary build to swing the hardware from the source to the target data center
In this section, we will examine these considerations in more detail.
6.1 Application & Infrastructure Interdependencies
This assessment is needed to ensure that the build-out of applications and infrastructure
is phased appropriately and applications are grouped logically for near-zero
business impact.
For each application, review all the interdependencies and assess:
• Is the application REQUIRED to be deployed at the target data center in order for the parent application to operate? (Based on the business functions / processes supported by the parent application & its dependency on the bound application to meet that requirement.)
• Can stand-in functionality or data be created for the parent application so that it can operate for a predefined period of time without the related / dependent application?
• Can the parent application be modified so that it could operate for the required amount of time without the related application?
• Tolerance levels for unavailability should be classified to assess the most critical dependencies, e.g. no impact / can process with impact / cannot process.
This assessment will enable identification of the base minimum set of applications that
must be grouped together for the purpose of migration.
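One way to derive these minimal groupings is to treat the hard ("required") dependencies as edges of a graph and take its connected components; applications in the same component must move together. A minimal sketch follows; the application names and dependencies are hypothetical.

```python
# Minimal sketch: group applications that share hard dependencies into move
# bundles by computing connected components of the dependency graph.
# Application names and the dependency list are hypothetical examples.
from collections import defaultdict

# Only dependencies classified as "required" (cannot operate without) are edges.
hard_dependencies = [
    ("Payments", "CoreBanking"),
    ("CoreBanking", "CustomerDB"),
    ("Reporting", "DataWarehouse"),
    ("Intranet", "Intranet"),  # standalone app (self-edge keeps it in the graph)
]

graph = defaultdict(set)
for a, b in hard_dependencies:
    graph[a].add(b)
    graph[b].add(a)

def move_bundles(graph):
    """Return lists of applications that must migrate together."""
    seen, bundles = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, component = [node], []
        while stack:                      # simple depth-first traversal
            current = stack.pop()
            if current in seen:
                continue
            seen.add(current)
            component.append(current)
            stack.extend(graph[current] - seen)
        bundles.append(sorted(component))
    return bundles

for i, bundle in enumerate(move_bundles(graph), 1):
    print(f"Move bundle {i}: {bundle}")
```

Softer dependencies (those with stand-in functionality or acceptable downtime) are deliberately left out of the edge list, which is what keeps the resulting bundles small.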
6.2 Time Zone Dependencies
In instances where the target data center operates in a different time zone, it is necessary
to assess the time-zone sensitivity of applications that are being planned for migration.
Generally, applications that are time zone sensitive or use timestamps or interface
with either external or internal systems in their normal processing of data and
transactions require a common time. In a disaster scenario, data replication and
recovery to a different system time may influence the integrity of the data and
transaction processing. Moreover, Highly Available applications using any sort of
active-active configuration between Production and DR data Centers require use of a
single system time. Log Shipping, Data Replication, and Message Queue processes
depend on applications using the same system time.
The recommended solution to this issue is to synchronize all platforms and their
corresponding infrastructure components to a common time (i.e. Greenwich Mean
Time (GMT) or Coordinated Universal Time (UTC)). UTC is a high-precision atomic
time standard. UTC has uniform seconds defined by International Atomic Time (TAI),
with leap seconds announced at irregular intervals to compensate for the earth's
slowing rotation and other discrepancies.
When local time is required to be displayed or used within an application, provide a
utility, routine or service to determine local time and day based on location.
Following are some illustrative planning activities that should be done in this regard,
prior to Application Migration to target data center.
• Means of storing data and transaction timestamps in UTC format
• Methods to convert local timestamps to UTC format (a minimal sketch follows)
• Development of scripts or processes to identify program logic that needs remediation
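As a small illustration of the conversion activity, the sketch below stores a timestamp in UTC and derives local time only at display. The zone names and timestamp are hypothetical, and Python 3.9+ is assumed for the zoneinfo module.

```python
# Minimal sketch: store and convert timestamps in UTC (Python 3.9+ assumed
# for zoneinfo). The zone and timestamp values are illustrative only.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def to_utc(local_dt_str: str, zone_name: str) -> datetime:
    """Interpret a naive local timestamp in the given zone and return UTC."""
    naive = datetime.strptime(local_dt_str, "%Y-%m-%d %H:%M:%S")
    return naive.replace(tzinfo=ZoneInfo(zone_name)).astimezone(timezone.utc)

def to_local(utc_dt: datetime, zone_name: str) -> datetime:
    """Display-side conversion: derive local time from the stored UTC value."""
    return utc_dt.astimezone(ZoneInfo(zone_name))

if __name__ == "__main__":
    stored = to_utc("2011-02-15 09:30:00", "America/New_York")
    print(stored.isoformat())                               # value persisted by the app
    print(to_local(stored, "Asia/Singapore").isoformat())   # value shown to users
```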
6.3 Create Move Bundle Packages
The following exhibits show how the current & target state analysis / design information,
along with application & infrastructure dependencies, are considered to create these
move bundle packages.
6.3.1 Move Bundle Package Template (Illustrative)
Particular | Description
Bundle ID | <Unique bundle / move group identifier>
Application ID | <Unique application / infrastructure identifier>
Application Name | <Brief description about the application / infrastructure>
Line of Business | <Business unit name>
Target AO | <Bronze / Silver / Gold / Platinum>
Target RTO | <hh:mm>
Target RPO | <1-5>
Component Type | <Application / Infrastructure / Other software>
Application environment | <Mainframe / Distributed / Both>
Application Manager | <Name>
Technology Delivery Manager | <Name>
System Administrator | <Name>
Each move bundle package should have a list of participating applications & is a
work package that can be allocated to a Build Project Manager for build & test work.
Further to the move bundle package information as described above, for each
participating application an application build package is created based on Current &
Target state Application & Infrastructure analysis / design work conducted earlier.
6.3.2 Application Build Package template (Illustrated)
Particular | Description
Application ID | <Unique application / infrastructure identifier>
Target Server(s) (Relocate from Source Data Center) | Source data center server information: Server Hostname; Server Type (Web/App/DB); Vendor; Model; OS; CPU type, size & count; Memory; Local Disk size; Security Zone; Location; Software
Target Server(s) (Physical Builds) | Physical build information: Server Hostname; Server Type (Web/App/DB); Vendor; Model; OS; CPU type, size & count; Memory; Local Disk size; Security Zone; Location; Software
Target Server(s) (Virtual Builds) | Virtual build information: Server Hostname; Server Type (Web/App/DB); OS; Virtual Processor Count; Virtual Memory; Virtual Disk Space; Security Zone; Location; Software
Appliances Needed | <Appliance name> e.g. Data Power / Application Protection Systems etc.
Third Party Circuit Details | <Circuit info> e.g. New York to Atlanta 1 Gbps Verizon or AT&T <circuit id>
Databases | Database information: Database Name; Database Type (Oracle / DB2 / Sybase etc.); Instance Name; Database Server Hostname
Storage | Storage information: Storage Tier; Allocated SAN / NAS storage size
Network & Security | Network information: Hostname; IP Address; Firewall Rules (if any); Static Routes (if any)
6.4 Update Disaster Recovery Plans
After having completed the detailed analysis of an application's target state, it is
important to update all the Disaster Recovery documents prior to migration. The
Disaster Recovery Plan comprises both the Application and the Infrastructure
Recovery Plans. Updates to Disaster Recovery plans may be required
because of one or more of the target state scenarios mentioned below (but not limited to these):
• Changes to application architecture to meet target service level objectives defined by the business
• Physical to virtual server deployment
• Technology refresh to overcome retiring hardware and OS issues
• Changes to physical deployment architecture to overcome any network latency / performance issues
• Revisions in High Availability / data protection implementation
• Changes in other applications / infrastructure / shared services environments impacting the type of integration with an application
The business continuity team is generally engaged in making changes to the current Disaster
Recovery Plans (DRP) to ensure that recovery plans are updated considering all
facets, viz. workflows, technologies & processes.
6.5 Create Detailed Implementation plan
Now that we have full clarity on the target applications, infrastructure and move
considerations, a detailed implementation plan needs to be created. This plan should
detail the following activities (illustrative):
• Scope, phases & statement of work for the various work streams of the program
• Detailed cost breakup & funding approvals
• Procurement of necessary hardware & software:
  o Network
  o Servers
  o Application & infrastructure software
  o Storage
  o Security
• Integrated program schedule in view of infrastructure readiness at the target data center
• Availability of change windows, key resources, equipment etc.
• Roles & responsibilities for build & test activities:
  o Team structure
  o Education & training
  o Relocation / deputation of resources
• Project management plans:
  o Scope Management Plan
  o Change Management Plan
  o Risk Assessment & Mitigation Plan
  o Risk Management Plan
  o Communication Plan
  o Schedule Management Plan
  o Cost Management Plan
  o Progress tracking tools / repositories
• Documented procedures & processes to build & test
• Pre-move / post-move validation checklists
• Core infrastructure installation & configuration plan
• Third party network circuits
• Co-ordination with external vendor services:
  o Forklift / trucking (as needed)
  o Application Service Providers (ASPs) (as needed)
  o Network carriers (as needed)
  o Sub-contractors (as needed)
• Identification of stakeholders for sign-offs on build & test milestones
The typical risks, challenges & mitigation options for the Planning & Design phase are summarized below.

Risk Area | Typical Risk / Challenge | Mitigation Option
Planning & Design Phase | Complexity of interdependencies between various applications and sharing of hardware poses availability risk while migrating applications | Detailed application interdependency analysis and individual mitigations to either unlink the dependency or transfer the dependency to another application / hardware
Planning & Design Phase | Some legacy applications run on obsolete / unsupported hardware & OS; while migrating to the new data center the intent is to move to newer hardware and OS, but the legacy application is not supported on the newer hardware / OS | Re-architect the application so that it can be hosted on a newer hardware / OS platform
Planning & Design Phase | Some vendor supported software & tools are not compatible with the technology refresh activities included in the migration | Virtualize older OS versions (wherever possible) until the application is re-architected; this may also be a better option if the application is anticipated to be sunset in the short term
Planning & Design Phase | Increased distance between the new data centers prevents the use of an Active-Active / load balancing deployment pattern | Transfer the availability dependency to local / other nearby data centers, i.e. enhance redundancy & capacity; implement network & storage configuration changes
Planning & Design Phase | Budgetary constraints consistently driving changes to program scope | Divide the program scope into multiple phases based on priorities, to prevent losing focus on the most critical phases because of such constraints
6.6 Regulatory Considerations
There are various regulatory requirements pertaining to the build-out of data center
facilities as well as the security & privacy of data. Generally, depending upon the location
of the data center, the relevant country-specific regulations are applicable. Below are
some of the regulatory considerations that should be complied with while engineering /
re-engineering data centers.
Telecommunication Infrastructure Standards (TIA-942)
TIA-942 is a standard developed by the Telecommunications Industry Association
(TIA) to define guidelines for planning and building data centers, particularly with
regard to cabling systems and network design. The standard deals with both copper
and fiber optic media.
The TIA-942 specification references private and public domain data center
requirements for applications and procedures such as:
• Network architecture
• Electrical design
• File storage, backup and archiving
• System redundancy
• Network access control and security
• Database management
• Web hosting
• Application hosting
• Content distribution
• Environmental control
• Protection against physical hazards (fire, flood, windstorm)
• Power management
Grounding Standards (TIA-607)
The purpose of this Standard is to enable the planning, design, and installation of
telecommunications grounding systems within a building with or without prior
knowledge of the telecommunication systems that will subsequently be installed. This
telecommunications grounding and bonding infrastructure supports a multi-vendor,
multi-product environment as well as the grounding practices for various systems
that may be installed on customer premises.
OSHA Electrical Safety requirements
OSHA's mission is to prevent work-related injuries, illnesses, and deaths by issuing and
enforcing rules (called standards) for workplace safety and health. OSHA's electrical
standards are designed to protect employees exposed to dangers such as electric
shock, electrocution, fires, and explosions.
National Electrical Code (NEC)
The National Electrical Code (NEC) is a United States standard for the safe
installation of electrical wiring and equipment. It is part of the National Fire Codes
series published by the National Fire Protection Association (NFPA).
NEPA - National Environmental Policy Act
NEPA requires agencies to undertake an assessment of the environmental effects of
their proposed actions prior to making decisions. It offers analysis of the
environmental effects of the proposed action and possible mitigation of potential
harmful effects of such actions.
HIPAA - Health Care Companies data processing
HIPAA regulation includes the establishment of national standards for electronic
health care transactions and national identifiers for providers, health insurance plans,
and employers. It helps people keep their information private. It defines the security
and data privacy requirements for organizations handling healthcare information.
Sarbanes Oxley - Data Protection and Retention
The Sarbanes-Oxley Act (often shortened to SOX) is legislation enacted to protect
shareholders and the general public from accounting errors and fraudulent practices
in the enterprise. The act is administered by the Securities and Exchange
Commission (SEC), which sets deadlines for compliance and publishes rules on
requirements.
Sarbanes-Oxley is not a set of business practices and does not specify how a
business should store records; rather, it defines which records are to be stored and
for how long. The legislation not only affects the financial side of corporations, it also
affects the IT departments whose job it is to store a corporation's electronic records.
The Sarbanes-Oxley Act states that all business records, including electronic records
and electronic messages, must be saved for "not less than five years."
6.7 Enterprise Contractual considerations
Today's data center environments are a blend of in-house & vendor supported
infrastructure. Overlooking contractual obligations may have huge cost and/or
legal implications. Below are some of the contractual considerations that should be kept
in mind while conducting a data center migration.
• Lease agreements with hardware vendors who have leased equipment that now needs to be migrated; this will require the necessary permissions / negotiations with the leasing vendor prior to relocation to the target data center.
• Any existing maintenance contracts with IT hardware vendors should be evaluated prior to planning a voluntary retirement / decommissioning.
• Network communication links are also generally provided on fixed-duration contracts (e.g. 6 months / 1 year), so these contractual obligations should be kept in mind while scheduling the migration.
• Current data center facilities lease agreements (if any) will also need to be reviewed, e.g. the notice period required to vacate the current data center space; any maintenance contracts for power, UPS, HVAC, fire suppression systems and other facilities should be evaluated prior to scheduling the migration.
7 Build Applications & Infrastructure - Target Data Center
7.1 Core Infrastructure & Shared Services Build & Readiness Testing
Now that we have a detailed implementation plan, before we can migrate
applications & their servers, the core infrastructure & Enterprise Shared Services need to
be in place at the target data center. Core infrastructure serves as a common backbone for
all the applications & any unavailability of it in the production environment will
adversely impact a significant number of applications. Therefore, every core
infrastructure component should be built with an appropriate level of redundancy &
should have enough capacity.
Below are some of the illustrative core infrastructure & shared services build
activities that should be undertaken at the target data center:
• Racks, physical mounting of hardware
• Installation & configuration of core infrastructure components:
  o Network switching & routing environment
  o SAN / NAS / backup infrastructure
  o Firewalls, IDS & other security appliances
  o Activation of network connections with carriers
  o Creation of security zones
• Installation & configuration of Enterprise Shared Services:
  o Authentication & authorization (identity management etc.)
  o Logging & monitoring (servers, databases, OS etc.)
  o Antivirus & anti-spyware
  o Shared databases (Oracle RAC etc.)
  o Messaging (Email, MQ etc.)
  o File sharing (FTP, NDM etc.)
  o Security services (digital certificates / PKI etc.)
• Testing of core infrastructure & shared services:
  o Intranet, extranet, Internet & VPN connectivity testing
  o High availability / failover testing
  o Performance / stress testing
  o Ethical hacking / vulnerability assessment / penetration testing
  o Shared services availability & integrity testing
7.2 Pilot Applications – Build & Test
At this point, it is recommended to group a small set of applications encompassing
varied technologies / complexity of environment for the pilot build. This pilot build
phase is beneficial to verify that:
• The implementation plan is flawless
• All internal & external dependencies have been taken care of
• The feasibility of the build on a specific technology & environment is confirmed
• The actual migration time is within the available change window
• No business services are being hampered in an unplanned fashion
• The hardware & software for the application (as per the target design) are compatible
Below is a list of illustrative application build & test activities that covers a majority of
data center migration initiatives.
• Installation & testing of:
  o Application specific servers (Web / App / DB etc.)
  o Application specific software & necessary patches (.NET, Java, Oracle Apps etc.)
  o Core business application / service (core banking / ecommerce / payroll etc.)
  o Necessary client software (FTP / NDM / MQ / backup clients etc.)
• Provisioning & testing of:
  o Application specific network access & security, e.g. firewall rules / static routes
  o Application specific logging & monitoring (performance / utilization / security / access logs / agents etc.)
  o Interconnectivity with other applications & enterprise shared services
• Data migration & acceptance testing (for new server builds only):
  o Load a snapshot of data from the source data center
  o Conduct test transactions on the application
• Capture test results and incorporate any lessons learned into the build & test plans of the move bundle groups
Lessons learned from the pilot implementation may suggest some alternatives /
mitigation steps to enhance the implementation plan for successful build & test
activities.
7.3 Build Move Bundle Packages
Based on the lessons learned from pilot implementation activities discussed above,
we are set to execute the move bundle & application build packages defined earlier.
Successful execution of move bundle packages requires excellent program
management capabilities and the involvement of build specialists for various platforms /
technologies.
Based on the move bundle group’s composition, generally a move bundle group
requires co-ordination between various resources on a full time / part-time basis.
Below are some of the illustrative roles for a typical move group:
• Build Project Manager
• Build Specialist(s), e.g. Unix / Intel specialist
• SMEs of the applications included in the move bundle (e.g. Application Architect, Application Manager, Technology Delivery Manager)
• System Administrator for servers included in the relevant application build packages
• Data Center Operations team – for allocation of space / rack / cabling / power & other facilities
• Server Virtualization team – for virtual server builds, as applicable
• Data Center Infrastructure Engineering team – storage, network, security etc.
Each organization follows its standard operating procedures for provisioning of
servers; these should be followed in the build activities. Further, as discussed earlier,
migration of a server to a target data center can fall under one of the following
categories:
• Building a new physical server
• Building a new virtual server
• Moving / fork-lifting the existing server
Building new physical / virtual servers does not interrupt existing operations at the source
data center, whereas moving / forklifting existing servers from the source data center
requires additional planning and co-ordination for a seamless relocation. Appropriate
change windows need to be coordinated with the application, business & support teams.
Further, standard shutdown and startup procedures & checklists will need to be
followed to ensure that all services are successfully restored in the target data center.
7.4 Test Move Bundle Packages
A series of test cycles is necessary to ensure that all build activities meet the
program goals. Generally, the following test events are adequate.
7.4.1 Unit Testing
Once each application build package has been implemented, it is important to conduct
unit testing for all components of that application. This test ensures that the hardware,
operating system, application modules and associated software components function
as expected. All interconnectivity within the different tiers of an application, e.g.
Web to App / App to DB, is tested in this test cycle. Further, stress testing for the
various servers of an application is also carried out at this stage to ensure that the
hardware is capable of meeting desired performance levels for a given workload.
7.4.2 Shared Infrastructure Testing
After the successful completion of unit testing, based on the application architecture,
there is a need to test all interfaces to the Shared Services environment; e.g. if an
application requires a shared database / a digital certificate / an authentication service,
then the necessary tests should be conducted to ensure that the relevant users / queries /
RPCs etc. are performing successfully. Stress testing of the shared infrastructure
portion is also expected in this test cycle. This will provide feedback on the capacity and
scalability of the shared infrastructure environment.
7.4.3 System Integration Testing – Move Group level
The next level of testing is to ensure that all interdependent applications within a move
group / under the purview of a build project manager are able to successfully interact
with each other; e.g. if application A in Move Bundle Group 1 needs to transfer a
batch file using NDM to application B of the same move bundle group, then some
sample files may need to be transmitted between the two applications to verify
network connectivity, performance, and configuration parameters.
7.4.4 System Integration Testing – Program level
A program-level test manager generally manages this final test. The purpose of this
test is to ensure that all inter-move-group dependencies are successfully tested. This
test is also an opportunity to test all interfaces of an application, including those that
were not in the same move bundle group as the application.
7.5 Typical Risks, Challenges & Mitigation options – Build & Test Phase
Risk Area | Typical Risk / Challenge | Mitigation Option
Build & Test Phase | Interdependencies on multiple work streams & their timely readiness, such as data center facilities, equipment availability, core & shared service infrastructure, and third party vendor circuits, pose risk to the build & test schedule | Set up a PMO to manage multiple work stream projects; adopt PMI best practices for project management
Build & Test Phase | Procurement & equipment delivery times are long and tend to impact the migration build schedule | Define the Bill of Material & initiate procurement processes earlier in the cycle to accommodate sufficient lead time; engage a professional services partner for a smooth migration
Build & Test Phase | Enterprise resources are engaged in day to day operational activities, depriving the build and test migration activities of the necessary focus | Engage a professional services partner for a smooth migration
Build & Test Phase | Inability to test all the interdependent applications in a single test cycle as some interfacing applications are not ready yet | Sequence the build activities appropriately to finish the build of closely related applications around the same time; ensure that dependent Enterprise Shared Services are ready by the completion schedule of dependent applications
Build & Test Phase | Security zones & network connectivity are not ready | Ensure that firewall rules & other connectivity requirements are implemented & tested prior to application level test activities; ensure that third party circuits are commissioned in time & vendors are ready to support the tests as applicable
8 Roll out & decommission
Now that we have built and tested all relevant applications & infrastructure at the target
data center, it is time to commence the rollout activities.
The rollout / cutover plan varies for each application based on multiple factors such as:
• Application criticality (based on its service level objectives)
• Number & type of servers
• Interdependencies with shared infrastructure & other business applications
• Type of build (relocated / new physical server build / virtualized)
• Associated databases
• External storage (on SAN / NAS / tape media)
• Third party network & software dependencies
• Change windows & availability of key contacts
• Data migration plan (from source to target data center)
• Additional resource requirements
Decommissioning is the last part of the program after a successful rollout. The
objectives of decommissioning are to:
• Release and clean up infrastructure at the source data center
• Archive data
• Power off / enable re-use of released hardware
• Update asset management systems to reflect the release of hardware and software from the source data center
• Free up racks, power and data center space for other IT initiatives
We will discuss these two in further detail in the next sections.
8.1 Rollout / Cutover applications to Target Data Center
Data migration is the biggest challenge for a successful cutover / rollout of an
application. This is primarily applicable to new physical / virtual server builds. It is
expected that applications whose servers have all been relocated from the source to the
target data center have already been connected to the target storage environment and
that their data migration has already been taken care of during the build activity itself. For
applications where some servers have been built new while a few fall in the forklift /
trucking category, data migration needs to be planned appropriately.
Below are some of the illustrative options in such scenarios:
• Withhold forklift / trucking of any data-sensitive servers until the cutover change window, to ensure that the snapshot of data for all servers of an application is taken at one given point in time. Data migration & hardware relocation go hand in hand in this scenario.
• If the servers of an application are not latency sensitive / the distance between the target and source data centers does not add latency, forklift / truck the relevant servers to the target data center during the build activity itself and continue running the application with infrastructure / servers split across the two data centers until data migration for all new physical / virtual builds has also been completed, after which the application will be fully running from the target data center's infrastructure. This option may require additional assessment of the capacity of the network to handle data traffic between the two data centers, and of the availability of data replication / storage infrastructure over the network.
In the case of a physical / virtual server build, since a new instance of the application has
been created at the target data center while the earlier instance is still functional at the
source data center, it is necessary to smoothly bring the operational data from the source
to the target data center. This additionally requires that users are seamlessly redirected to
the target data center environment, which needs the necessary network and security
configuration changes in the environment.
Based on an application's profile, some applications may have a higher tolerance for
unavailability & thus provide enough time to:
• Back up data from the source data center environment
• Discontinue new transactions until data is restored at the target data center
• Redirect users to utilize the target data center environment
However, data migration becomes challenging in the case of applications that operate
24x7 / real-time because of:
• The need to ensure availability of the application during the transition
• Integrity requirements of the data snapshot
• Recording the incremental changes to data after the snapshot is taken
• Restoration of incremental data during the transition
These requirements pose potential risks of data corruption and loss of integrity. Careful
planning is needed to ensure that data restoration is successful & achievable within
the available change window. In many instances, decisions such as airlifting
business-critical data on a chartered flight are needed to ensure business
continuity and the security of business-critical data. For some non-critical applications,
replication of data over the WAN or shipment through tape backups may be an option.
As an illustration, applications such as ecommerce, online banking, online brokerage
etc. are critical business applications that are operational 24x7 and their data
migration may be done through airlift to minimize the migration time, safeguard data
security & reduce the probable impact to the business. Weekends are generally
utilized for such events to minimize service outages. Other applications such as
product catalogs / survey repositories / market trends etc. are comparatively less
critical and alternate migration plan may work well for them.
Typical cutover activities include (but are not limited to):
Planning
• Selection of a change window
• Identification of the change management team
• Definition of roles, responsibilities and processes to be followed during the change window
• Definition of an escalation matrix to enable key decisions like GO / NO GO based on the progress of cutover activities and issues in hand
• Detailed cutover plan, including selection of data migration options (airlift / road transport / over WAN / over iSCSI / SAN etc.)
• Business and technology partners for post-implementation validation testing
• Checklist of critical business services that define the success criteria for an application's migration
• Tools & templates to capture the test results / incidents for IT service management aspects
• Compliance with any regulatory requirements in handling data security & privacy
Configuration Changes
• Engagement of network & security teams in changes to network routing, DNS, firewall rules, load balancers etc.
• Configuration & readiness of the external storage environment at the target data center to receive the data being brought into the target data center environment
Testing & Validation
• Conduct pilots as necessary in a test bed to ensure that the data migration plan works as expected prior to moving the entire production data store of an application
• Pre-cutover and post-cutover validation testing (see the sketch after this list)
• Plan for point-in-time restore based on business-defined RTOs / RPOs
• Test scripts to validate the integrity & performance of an application loaded with actual data prior to certifying it for production launch
• Testing the availability of data to other interfacing applications as per interdependencies / application architecture, including external vendors
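Pre- and post-cutover validation is often scripted as a simple reachability and service-port checklist. A minimal sketch follows; the hostnames and ports are hypothetical, and real validation would also include application-level test transactions as described above.

```python
# Minimal sketch: a pre-/post-cutover reachability checklist that verifies each
# critical service port answers at the target data center. Hostnames and ports
# are hypothetical; application-level transaction tests complement (not replace)
# these basic checks.
import socket

CHECKLIST = [
    ("web tier",  "web01.target-dc.example.com", 443),
    ("app tier",  "app01.target-dc.example.com", 8080),
    ("database",  "db01.target-dc.example.com",  1521),
    ("shared MQ", "mq01.target-dc.example.com",  1414),
]

def port_open(host: str, port: int, timeout_s: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    failures = 0
    for name, host, port in CHECKLIST:
        ok = port_open(host, port)
        failures += 0 if ok else 1
        print(f"{name:<10} {host}:{port:<5} {'PASS' if ok else 'FAIL'}")
    print("Validation", "PASSED" if failures == 0 else f"FAILED ({failures} checks)")
```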
8.2 Decommissioning Source Data Center Hardware
Though a successful cutover / rollout is key to the success of a data center migration
program, appropriate steps to decommission the source data center are necessary to
ensure that IT infrastructure maintenance costs are optimized and all security,
environmental and regulatory requirements are met. Typical decommissioning
activities include:
• Ensure that none of the applications / services are needed on the source data center infrastructure after the cutover of applications. In instances where some servers are shared across multiple applications, it is necessary to ensure that all stakeholders utilizing services / applications on that hardware are informed about the decommissioning plans and schedule, so that users and the network are enabled to utilize the newly built applications.
• Decommissioning of certain applications that share hardware will need to wait until all the applications sharing that hardware have been successfully rolled out to the target data center or their dependency on / sharing of the hardware has been otherwise mitigated (e.g. moving to different hardware / changes to application architecture that eliminate the need for shared hardware).
• Archival of data for regulatory and compliance requirements (as necessary). Back up all software components / configuration files / scripts that may be needed in future.
• Follow enterprise security guidelines while handling the safe disposal of electronic hardware, data and media in transit.
• Power off servers & other hardware infrastructure that is released post cutover of applications. Based on the reusability of the released hardware (model, OS etc.), it may be reused for the build of other applications. This saves power, requires less cooling in the data center and provides some re-usable equipment.
• Un-mount the released hardware to free up data center floor and rack space.
• Decommission any third party circuits that were terminated at the source data center and are no longer needed as the applications have now been migrated to the target data center. If any other applications that were out of scope of the migration still need any of the third party circuits, the capacity requirements should be revisited to optimize the bandwidth requirements.
• Complete documentation of the IT assets deployed in the target data center and those that have been released from the source data center.
9 Appendix
9.1 Virtualization Candidacy Criteria (Illustrated)
Metrics | Candidacy Criteria - Windows & Linux

Processor
Requirements: Determine the number of processors, the processor speed, and the average utilization % of the processor.
Formula: Number of Processors x Processor Speed x CPU Utilization % = CPU Score
Candidacy Scoring: CPU Score <= 4096 = Good Candidate

Memory
Requirements: Determine physical memory and the physical memory utilization or average memory usage.
Formula: Total Memory x Memory Utilization % = Memory Score
Candidacy Scoring: Memory Score <= 4096 = Good Candidate; Memory Score > 4096 and <= 6144 = Likely Candidate; Memory Score > 6144 = Possible Candidate

Disk
Requirements: Determine the daily average total I/O rate (4K pages per second).
Formula: Daily Average Total I/O Rate (4K pages per second) = Disk I/O Score
Candidacy Scoring: Disk I/O Score <= 1000 I/O per second = Good Candidate; Disk I/O Score > 1000 and <= 3000 I/O per second = Likely Candidate; Disk I/O Score > 3000 I/O per second = Possible Candidate
Metrics | Candidacy Criteria - Unix

Processor (Count)
Requirements: Determine the number of processors as a component of the virtualization ratio.
Formula: Number of Processors = CPU Score
Candidacy Scoring: Total CPU Score <= 8 = Good Candidate; Total CPU Score > 8 = Possible Candidate; Total CPU Score >= 4 = Possible Candidate

Processor (MHz)
Requirements: Determine processor MHz as a component of the virtualization ratio.
Formula: Processor MHz = CPU MHz Score
Candidacy Scoring: CPU MHz Score <= 1593 = Good Candidate; CPU MHz Score > 1593 = Possible Candidate

Paging
Requirements: Paging I/O traffic in 4K pages.
Formula: Paging I/O Traffic = Paging Score
Candidacy Scoring: Paging Score <= 100 I/O per second = Good Candidate; Paging Score > 100 and < 200 I/O = Likely Candidate; Paging Score >= 200 I/O = Possible Candidate

Disk
Requirements: Determine the daily average total I/O rate (4K pages per second).
Formula: Daily Average Total I/O Rate (4K pages per second) = Disk I/O Score
Candidacy Scoring: Disk I/O Score <= 286760 I/O per second = Good Candidate; Disk I/O Score > 286760 and < 1024000 I/O per second = Likely Candidate; Disk I/O Score > 1024000 I/O per second = Possible Candidate

Network
Requirements: Determine the daily average total network utilization (MB per second).
Formula: Daily Average Total Network Utilization (MB per second) = Network Score
Candidacy Scoring: Network Score <= 40 MB per second = Good Candidate; Network Score > 40 MB per second and < 160 MB per second = Likely Candidate; Network Score >= 160 MB per second = Possible Candidate

Virtual Score
The Virtual Score is the combination of all associated metrics noted above. Each metric is given a calculated score, and the most restrictive score across the five metrics is the score the server is given.
Example: Processor Score = Good, Processor (MHz) Score = Good, Paging Score = Good, Disk I/O Score = Possible, Network I/O = Good. Overall Score for Server = Possible.
9.2 Acronyms & Glossary
Acronym | Description
AO | Availability Objectives
ASP | Application Service Provider
DAS | Direct Attached Storage
DBMS | Database Management System
DHCP | Dynamic Host Configuration Protocol
DNS | Domain Name System
DR | Disaster Recovery
DRP | Disaster Recovery Plan
FTP | File Transfer Protocol
HA | High Availability
HTTP | Hypertext Transfer Protocol
HVAC | Heating, Ventilation & Air Conditioning
I/O | Input/Output
IDS | Intrusion Detection System
IP | Internet Protocol
LAN / VLAN | Local Area Network / Virtual Local Area Network
MB/sec | Megabytes per second
MQ | Queue Manager
NAS | Network Attached Storage
NDM | Network Data Mover
OS | Operating System
P2P | Physical to Physical
P2V | Physical to Virtual
PDU | Power Distribution Unit
RISC | Reduced Instruction Set Computer
RPC | Remote Procedure Call
RPO | Recovery Point Objective
RTO | Recovery Time Objective
SAN | Storage Area Network
SMS | Systems Management Server
SOAP | Simple Object Access Protocol
TCP | Transmission Control Protocol
UDP | User Datagram Protocol
UTC | Coordinated Universal Time
V2V | Virtual to Virtual
VPN | Virtual Private Network
WAN | Wide Area Network
9.3 References:
Telecommunications Industry Association www.tiaonline.org
Occupational Safety & Health Administration www.osha.gov
National Environmental Policy Act (NEPA) www.epa.gov/Compliance/nepa
National Fire Protection Association (NFPA) www.nfpa.org
Health Insurance Portability and Accountability Act www.hhs.gov/ocr/privacy/hipaa/understanding/
Sarbanes-Oxley Act www.sec.gov/about/laws.shtml#sox2002
About The Author
Mark Huff
Mark Huff is a Master Solutions Architect with over 30 years of IT professional experience including
large technical program leadership, Data Center Solution Consulting, IT Infrastructure Architecture &
Strategy.