20150812_FG22_ Amsler_Swiss

Transcription

Hybrid Flash Array
Customer Case Study
Walter Amsler - Senior Director
Global Office of Technology and Planning
Hitachi Data Systems
Flash Memory Summit 2015
Santa Clara, CA
THE CHALLENGING TRANSITION TO S.M.A.C
SMAC = Social Business, Mobility, Big Data Analytics, Cloud
Source: Capgemini – Key IT Requirements 2014
Hybrid Flash Array Case Study
 Swiss Re is a leading and highly diversified global company
• 151 years of experience in providing wholesale re/insurance and risk
management solutions
• Delivers both traditional and innovative offerings in Property & Casualty and
Life & Health
• A pioneer in insurance-based capital market solutions, combining financial
strength and unparalleled expertise for the benefit of Swiss Re’s clients
 IT Technology Profile
• Long-term Hitachi customer – yet very demanding, always evaluating the
newest technologies (> 10 PB usable storage capacity)
• Key Platforms: VMware, MS SQL, File Servers and Windows, Mainframe
– VMware Farm: 45 ESX stretched clusters, 96 ESX, 5270 CPU cores, 2100 VMs
Issues and Requirements
 Challenges
• Financial pressure, faced with out-tasking, outsourcing, IaaS etc.
• Yearly storage capacity consumption growth of 35% (CAGR)
• Inefficient Point Solutions e.g. VDI, File Services, e-discovery etc.
• Inadequate High Availability and Site Failover Solutions
 Requirements
• Maximum Performance @ lowest cost
• Nonstop, Always On operation
• Highest level of failover automation
– Reduce time for a site failover from ~8 hours to less than 1 hour
• Maximum Efficiency and low touch storage management
The solution – Hitachi VSP G1000
Hybrid Flash Array with Dynamic Tiering
[Diagram: Hitachi Dynamic Tiering maps data by access density – Hot Data (high access density, e.g. XenDesktop), Warm Data (medium access density), Cold Data (low access density) and Unused Allocated capacity]
Hitachi Dynamic Tiering capabilities: Thin Provisioning, Dynamic Tiering, Space Reclaim, Wide Striping, Rebalancing, Grow & Shrink
Configured capacity:
• Hitachi Flash: 34 × 3.2 TB = 109 TB
• HDD SFF 10k: 1,480 × 900 GB = 1,332 TB
• HDD LFF 7.2k: 212 × 4 TB = 848 TB
• Total subsystem capacity: 2,289 TB
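
As a quick check on the capacity figures above, the short sketch below (illustrative only; the drive counts and unit sizes are taken from the slide, the code itself is ours) recomputes each tier and the subsystem total.

```python
# Recompute the raw capacity of each tier from the drive counts quoted above.
tiers = {
    "Hitachi Flash (3.2 TB)": (34, 3.2),      # (device count, TB per device)
    "HDD SFF 10k (900 GB)":   (1480, 0.9),
    "HDD LFF 7.2k (4 TB)":    (212, 4.0),
}

total_tb = 0.0
for name, (count, size_tb) in tiers.items():
    capacity_tb = count * size_tb
    total_tb += capacity_tb
    print(f"{name:24s} {count:5d} x {size_tb:4.1f} TB = {capacity_tb:7.1f} TB")

# 34 x 3.2 TB ~ 109 TB, 1480 x 0.9 TB = 1332 TB, 212 x 4 TB = 848 TB
print(f"{'Total subsystem capacity':24s} {total_tb:22.1f} TB")
```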
Performance and efficiency evolution
HDP → HDT → Active Flash
Dynamic provisioning (HDP)
 Thin provisioning for efficiency
 Wide striping for performance
 Fixed workload assignment
[Diagram: DP-VOL 1, DP-VOL 2, DP-VOL 3]
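
To make the first two HDP properties concrete, here is a minimal sketch, assuming a simple page-mapped volume: capacity is allocated only on first write (thin provisioning) and pages are spread round-robin across all parity groups (wide striping). Class names and the page-map layout are our own illustration, not Hitachi's implementation; the 42 MB page size matches HDP's pool page size.

```python
# Minimal thin-provisioning + wide-striping sketch (illustrative only).
PAGE_MB = 42  # HDP allocates pool capacity in 42 MB pages

class ThinPool:
    def __init__(self, parity_groups: int):
        self.parity_groups = parity_groups
        self.allocated_pages = 0            # pages actually backed by disk

    def allocate_page(self) -> int:
        group = self.allocated_pages % self.parity_groups   # wide striping
        self.allocated_pages += 1
        return group

class DPVolume:
    """A DP-VOL: large virtual size, physical capacity consumed only where written."""
    def __init__(self, pool: ThinPool, virtual_gb: int):
        self.pool = pool
        self.virtual_gb = virtual_gb
        self.page_map = {}                  # page index -> parity group

    def write(self, offset_mb: int) -> None:
        page = offset_mb // PAGE_MB
        if page not in self.page_map:       # allocate on first write only
            self.page_map[page] = self.pool.allocate_page()

pool = ThinPool(parity_groups=8)
vol = DPVolume(pool, virtual_gb=2048)               # 2 TB virtual volume
for offset_mb in (0, 10, 500, 50_000):              # sparse writes
    vol.write(offset_mb)
print(len(vol.page_map), "pages allocated for a", vol.virtual_gb, "GB volume")
```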
Dynamic tiering (HDT)
 Automatically moves the most active data to the fastest tier for max performance; least active data to lower-cost tiers
 30-minute to 24-hour cycle time
 Inherits HDP* advantages
Active flash
 Real-time – seconds to sub-second cycle time!
 HDT with LOWER RESPONSE TIME capabilities
 Inherits HDT, HDP advantages
[Diagram: HDT Pool 1 and HDT Pool 2 – DP-VOLs served from Tier 0 Flash, Tier 1 SAS and Tier 2 NL-SAS pools; active flash promotes suddenly active data to Tier 0 Flash]
* HDP = Hitachi Dynamic Provisioning ** HDT = Hitachi Dynamic Tiering
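
As a rough sketch of the behaviour described above (our own simplification, not Hitachi's tiering algorithm), the code below ranks pages by access count at the end of each monitoring cycle and places the hottest pages on flash, while an active-flash-style check promotes a suddenly busy page immediately instead of waiting for the next cycle.

```python
# Illustrative tiering sketch (not Hitachi's algorithm): rank pages by access
# count each cycle and place the hottest on flash (HDT-style), with an immediate
# promotion path for suddenly active pages (active flash-style).
from collections import Counter

class TieringPool:
    def __init__(self, flash_pages: int, sas_pages: int, burst_threshold: int = 100):
        self.capacity = {"flash": flash_pages, "sas": sas_pages}  # NL-SAS holds the rest
        self.burst_threshold = burst_threshold
        self.location = {}               # page -> current tier
        self.access_counts = Counter()   # I/O counts for the current cycle

    def record_io(self, page: int) -> None:
        self.location.setdefault(page, "nl_sas")   # new data lands on the capacity tier
        self.access_counts[page] += 1
        # Active flash: promote right away if a page suddenly becomes very hot,
        # instead of waiting 30 minutes to 24 hours for the next HDT cycle.
        if (self.access_counts[page] >= self.burst_threshold
                and self.location[page] != "flash"):
            self.location[page] = "flash"

    def rebalance(self) -> None:
        """HDT-style cycle: hottest pages to the fastest tier, coldest to NL-SAS."""
        ranked = [page for page, _ in self.access_counts.most_common()]
        flash_n, sas_n = self.capacity["flash"], self.capacity["sas"]
        for i, page in enumerate(ranked):
            if i < flash_n:
                self.location[page] = "flash"
            elif i < flash_n + sas_n:
                self.location[page] = "sas"
            else:
                self.location[page] = "nl_sas"
        self.access_counts.clear()       # start a fresh monitoring cycle

pool = TieringPool(flash_pages=2, sas_pages=4)
for page, hits in [(1, 50), (2, 40), (3, 5), (4, 1)]:
    for _ in range(hits):
        pool.record_io(page)
pool.rebalance()
print(pool.location)   # pages 1 and 2 end up on flash, 3 and 4 on SAS
```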
Hitachi Global Active Device (GAD) for
complete and consolidated DR protection
• Distance up to 100 km
• Native Multipath Support
• Quorum Device FC or FCIP
• Snap & Clone Support
• Active/Active Symmetrical
• No Appliance Architecture
• Zero Management Overhead
• High Performance - Low Latency
• Zero RTO/RPO for storage services
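
The list above describes what GAD provides rather than how it works internally. As a purely conceptual sketch (our own simplification, not the GAD implementation or its API), the code below shows the general active/active idea: a write is committed on both arrays before it is acknowledged, and a quorum device arbitrates which side keeps serving I/O if the replication link fails.

```python
# Conceptual active/active pair with quorum arbitration (illustrative only;
# class names are hypothetical and this is not how GAD is actually implemented).
class Quorum:
    """Third-site tie-breaker reachable by both arrays (e.g. over FC or FCIP)."""
    def __init__(self):
        self.winner = None

    def arbitrate(self, array_name: str) -> bool:
        if self.winner is None:          # first array to reach the quorum wins
            self.winner = array_name
        return self.winner == array_name

class Array:
    def __init__(self, name: str, quorum: Quorum):
        self.name, self.quorum = name, quorum
        self.peer = None
        self.link_up = True              # state of the replication link
        self.suspended = False
        self.blocks = {}                 # block address -> payload

    def pair(self, other: "Array") -> None:
        self.peer, other.peer = other, self

    def write(self, block: int, payload: bytes) -> bool:
        if self.suspended:
            return False                 # this side lost arbitration earlier
        self.blocks[block] = payload
        if not self.link_up:
            # Replication link lost: the quorum decides which array keeps serving I/O.
            if not self.quorum.arbitrate(self.name):
                self.suspended = True    # peer won; stop serving to avoid split-brain
                return False
        elif self.peer is not None:
            self.peer.blocks[block] = payload   # synchronous remote commit -> zero RPO
        return True                      # write acknowledged to the host

# Hosts with native multipathing can issue I/O for the same volume at either site.
quorum = Quorum()
site_a, site_b = Array("A", quorum), Array("B", quorum)
site_a.pair(site_b)
site_a.write(7, b"claim-record")
assert site_b.blocks[7] == b"claim-record"       # both sites hold identical data
```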
VSP G1000 in Production
[Charts: I/O response time and ESX server KPIs – BE, HIT%, IOPS, TP, response time, block size]
Average RT is < 2 ms, with GAD sync replication and only 5% Hitachi Flash!
Summary and key findings
 Hybrid Flash Array and Dynamic Tiering Platform provide AFA benefits
• Even with synchronous storage-based replication at 20 km distance, the Hitachi VSP G1000 provides the performance and throughput qualities of an All Flash Array, but with a much lower total cost of ownership – supporting ~400 servers!
• Subsystem average response times are 2-4 ms, and < 2 ms for key applications
• HFA provides the performance scalability for dynamic workloads of diverse applications
 The need for Millions of IOPS – Myth or Reality?
• In this highly consolidated environment with hundreds of servers and applications (OLTP, BI, DWH, Analytics etc.) the average I/O rate is around 40,000 IOPS with peaks of 90,000 IOPS
• Block size is increasing massively: the average block size is 40 kB, and 80-120 kB for read I/O (see the throughput sketch after this list)
• Most important is consistently low response time – a high IOPS and throughput rating indicates subsystem scalability without impacting low-latency performance
 Need detailed storage performance metrics for deep insights and planning
• The right metrics and instrumentation help deliver Quality of Service
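
The IOPS and block-size observations above combine into throughput with a one-line formula (MB/s ≈ IOPS × block size); the sketch below applies it to the figures quoted in this summary (1 kB taken as 1,000 bytes for simplicity).

```python
# Throughput follows from I/O rate and block size: MB/s ~= IOPS * block_kB / 1000.
def throughput_mb_s(iops: float, block_kb: float) -> float:
    return iops * block_kb / 1000.0

# Average and peak I/O rates with the 40 kB average block size quoted above.
for label, iops in (("average", 40_000), ("peak", 90_000)):
    print(f"{label}: {iops:,} IOPS x 40 kB ~ {throughput_mb_s(iops, 40):,.0f} MB/s")

# Large read blocks (80-120 kB) push throughput much higher at the same I/O rate.
print(f"peak reads at 120 kB ~ {throughput_mb_s(90_000, 120):,.0f} MB/s")
```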
Customer Benefits
 GAD eliminated host failover Software (SRM)
 Non-disruptive Migration during working hours
• Zero interruption - no performance degradation
 Capacity Efficiency with thin provisioning:
• 15.8 PB assigned (218% over-provisioning)
• 7.3 PB installed - 5.35 PB consumed → 75% capacity utilisation! (see the sketch after this list)
• Data Protection Overhead 12.5% (vs SDI 66%)
 Tiering Efficiency (e.g. for VMware Pool)
• Reduced cost by rightsizing Flash capacity
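
As a small worked check of the efficiency figures above, the sketch below derives the over-provisioning and utilisation ratios from the three capacity numbers on this slide; the results land close to the quoted 218% and 75%, with the small gap explained by rounding of the PB figures.

```python
# Derive thin-provisioning efficiency ratios from the capacities quoted above.
assigned_pb  = 15.8    # capacity presented to hosts
installed_pb = 7.3     # physical capacity installed
consumed_pb  = 5.35    # capacity actually written

over_provisioning = assigned_pb / installed_pb   # multiple of installed capacity
utilisation       = consumed_pb / installed_pb   # share of installed capacity in use

print(f"over-provisioning:    {over_provisioning:.0%}")   # ~216%
print(f"capacity utilisation: {utilisation:.0%}")          # ~73%
```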
VSP G1000 AFA performance leadership – Storage Performance Council (SPC)
• SPC-1 (#1): OLTP workload – highly random, highly parallel, high IOPS / low latency – 2,004,941 IOPS @ 0.96 ms response time
• SPC-2 (#2): DSS workload – highly sequential, highly parallel, high throughput – 43,012 MB/s
• Business relevant