IBM Systems Magazine, Mainframe

Transcription
REFERENCE POINT
Global Events, Education, Resources for Mainframe

Upcoming Events and Education

AUGUST 11 – AUGUST 16, 2013
SHARE
Boston, Massachusetts
http://www.ibmsystemsmag.com/mainframe/events/SHARE-Boston/

AUGUST 20 – AUGUST 23, 2013
IBM Systems Technical Symposium
Melbourne, Australia
http://www.ibmsystemsmag.com/mainframe/events/IBM-Systems-Technical-Symposium-Melbourne/

OCTOBER 21 – OCTOBER 26, 2013
IBM Systems z Technical University
Orlando, Florida
http://www.ibmsystemsmag.com/mainframe/events/IBM-System-z-TechnicalUniversity%E2%80%94Washington,D-C-/

NOVEMBER 3 – NOVEMBER 7, 2013
Information on Demand
Las Vegas, Nevada
http://www.ibmsystemsmag.com/mainframe/events/Information-on-Demand-2013/

- Subscribe Now: EXTRA and Marketplace eNewsletters
- Attend Now: Live and On Demand Webinars
- View Now: Mainframe Videos
- Search Now: Find products and services in the online Buyer's Guide
Web Exclusive Content

IT Plays a Crucial Role in Continuous-Improvement Efforts
http://www.ibmsystemsmag.com/mainframe/Business-Strategy/Competitive-Advantage/continuous_improvement/

Enterprise COBOL 5.1—Where Tradition Meets Innovation
http://www.ibmsystemsmag.com/mainframe/trends/IBM-Announcements/enterprise_cobol/

EXTRA Exclusive Features

A Closer Look at the REXX Code for Processing SMF Data
http://ibmsystemsmag.com/mainframe/tipstechniques/applicationdevelopment/rexx_smf_part4/

DOME-inating Research Looks at the Big Bang
http://ibmsystemsmag.com/mainframe/trends/IBM-Research/big-bang/

Implementing Cloud Computing Takes a Complete Strategy
http://ibmsystemsmag.com/mainframe/trends/Cloud-Computing/cloud_project_management/

Checklists for Analyzing, Planning and Implementing Cloud
http://ibmsystemsmag.com/mainframe/trends/Cloud-Computing/cloud_checklist/
Webcast Event Center
1-HOUR SESSIONS

1. Thursday, July 11 | 9 PT / 11 CT / Noon ET
Transforming Workloads with Operational Analytics
from Terma Software Labs
Using predictive and prescriptive analytics, companies are transforming their antiquated job scheduling products into modern-day, supercharged automation solutions for the business.

2. Thursday, July 18 | 8 PT / 10 CT / 11 ET
Improve Availability and Productivity with Proactive Automation
from IBM
Gain new zEnterprise insights by integrating OMEGAMON and System Automation for z/OS.

3. Wednesday, July 31 | 8 PT / 10 CT / 11 ET
The New zEnterprise: Creating Success in Your Business With New Solutions in the Areas of Big Data, Analytics, Cloud, Mobile and Security
from IBM
Better business results with increased performance and flexibility in a lower-cost package.

4. Wednesday, August 28 | 9 PT / 11 CT / Noon ET
Modern Mainframes Have No ESCON! How Can I Keep My ESCON Device Portfolio?
from Optica
Find out how you can invest in the latest mainframe and leverage Prizm to retain access to key ESCON and Bus/Tag devices.

5. IBM DevOps Solution for System z Series
from IBM
Best practices and tools for continuous delivery of software-driven innovation on System z. Each webinar in this series begins at 8 PT / 10 CT / 11 ET:
- Wednesday, August 7: Accelerating the delivery of multiplatform applications
- Wednesday, August 14: Continuous business planning to get cost out and agility in
- Wednesday, September 4: Collaborative development to spark innovation and integration among teams
- Wednesday, September 11: Continuous testing to save costs and improve application quality
- Wednesday, September 18: Continuous release and deployment to compress delivery cycles

ON-DEMAND WEBINARS: 24/7 ACCESS, ANYWHERE
REGISTER TODAY!
July/August 2013
MAINFRAME
ibmsystemsmag.com
IBM Systems

THE FUTURE OF LINUX
Forward-looking organizations consolidate on System z (Page 22)

Linux helps the mainframe forge inroads in new markets (Page 28)

Software-defined environments add intelligence to IT infrastructure (Page 34)
MODERNIZE YOUR PRODUCTION CONTROL
EXTEND YOUR CA 7® GOALS TO z/OS
ThruPut Manager AE+ integrates with CA Workload Automation CA 7® Edition to modernize your Production Control environment.

SIMPLIFY PRODUCTION CONTROL
- Free up Production Control staff from constant monitoring and micro-managing of the production workload.
- Use consolidated displays for quick access to enhanced status information.
- Deploy customized alerts to anticipate problems.
- Reduce CA 7 database maintenance.

OPTIMIZE BATCH EXECUTION
- Prioritize z/OS processing based on critical path calculations and your CA 7 schedule.
- Eliminate interference from lower importance and non-production work.
- Favor more important work and automatically escalate when necessary.
- Increase throughput and complete your batch window earlier.

THE LEADER IN END-TO-END BATCH AUTOMATION
ThruPut Manager AE+, having obtained CA Technologies Validation, is a radical leap forward in datacenter automation technology. It simplifies the environment and enhances batch service to users, while delivering year-on-year datacenter savings.

SIMPLICITY • SERVICE • SAVINGS

8300 Woodbine Avenue, 4th Floor, Markham, ON Canada L3R 9Y7
Tel: (905) 940-9404  Fax: (905) 940-5308
[email protected] | www.mvssol.com
INSIDE
JULY/AUGUST 2013
Cover illustration by Viktor Koen

FEATURES
22 THE NEXT EVOLUTION OF LINUX ON SYSTEM Z: The benefits of this technological synergy continue to advance
28 MAKING A SPLASH: Linux consolidation helps System z forge inroads in new markets
34 SOFTWARE-DEFINED ENVIRONMENTS MAKE COMPUTING SMARTER: By adding intelligence to the IT infrastructure, enterprises become responsive and flexible; an interview with IBM's Arvind Krishna

DEPARTMENTS
6 PUBLISHER'S DESK: By design
8 IBM PERSPECTIVE: Ongoing innovation on IBM System z
10 IT TODAY: Economics and performance make Linux on System z the clear choice
14 PARTNER POV: Linux and open-source HA build on mainframe's strengths
18 TRENDS: New DataPower appliance for IMS rapidly transforms data for cloud and mobile apps
40 TECH CORNER: In addition to high performance, System z processors are designed to be reliable
44 ADMINISTRATOR: System z innovations automatically define configurations for greater availability
47 SOLUTIONS: Compuware Workbench, ThruPut Manager AE+
48 STOP RUN: Kochishan challenges misconceptions about polka—and the mainframe

2 // JULY/AUGUST 2013 ibmsystemsmag.com
Will your z/OS jobstreams make it downstream?

Smart/RESTART lets your applications restart from near the point of failure, after abends, recompiles, even system IPLs. Your applications can run restartably, often without source changes.

Smart/RESTART guarantees that your program's sequential file and cursor position, working storage and VSAM updates stay in sync with changes to DB2, MQ, IMS and other RRS-compliant resources. So you can restart fast with assured integrity.

Smart/RESTART is a robust, reliable and proven solution used by Global 2000 organizations worldwide to run their mission-critical z/OS batch applications. It's the standard for z/OS batch restart.

Restart made simple. Fully supports Enterprise COBOL for z/OS® V5.1.

Download our white paper: "Beyond Restart and Concurrency: z/OS System Extensions for Restartable Batch Applications"

For a free trial visit www.relarc.com, or call +1 201 420-0400

Relational Architects International
DB2, WebSphere MQ and IMS are registered trademarks of IBM Corp.
IBM Systems
MAINFRAME EDITION
MSP TechMedia, 220 S. 6th St., Suite 500, Minneapolis, MN 55402, (612) 339-7571
Direct editorial inquiries to [email protected]

EDITORIAL
EXECUTIVE PUBLISHER: Diane Rowell
PUBLISHER: Doug Rock
EXECUTIVE EDITOR: Evelyn Hoover
MANAGING EDITOR: Mike Westholder
COPY EDITOR: Lisa Stefan
SENIOR WRITER: Jim Utsler
TECHNICAL EDITORS: Bob Rogers, Jim Schesvold

DESIGN/PRODUCTION
DESIGN DIRECTOR: Chris Winn
ART DIRECTOR: Jill Adler
PRODUCTION MANAGER: Jonathan Benson

CIRCULATION
CIRCULATION DIRECTOR: Bea Jaeger
CIRCULATION MANAGER: Linda Holm
CIRCULATION COORDINATOR: Carin Russell
FULFILLMENT COORDINATOR: Carrie Schulze
PROJECT MANAGER: Elizabeth Reddall

ADVERTISING/SALES
ASSOCIATE PUBLISHER: Mari Adamson-Bray
ASSOCIATE SALES MANAGER: Lisa Kilwein
ACCOUNT EXECUTIVES: Kathy Ingulsrud, Darryl Rowell
MARKETING MANAGER: Elizabeth Sutliff
E-MARKETING SPECIALIST: Bryan Roberts
SALES TEAM CONTACT INFO: See page 47

DIGITAL MEDIA
VP, DIGITAL MEDIA: Kevin Dunn
SENIOR WEB DEVELOPER: David Waters
ASSISTANT WEB DEVELOPER: Shawn Little

IBM EDITORIAL BOARD
Scott Carlson, Marianne Carter, Michael Dickson, Paul DiMarzio, Willie Favero, Alex Gogh, Simon Hares, Juergen Holtz, Kurt Johnson, Greg Lotko, Allen Marin, Mary Moore, Don Resnik, Kelly Ryan, Amy Sammons-Vogt, Scott Searle, Jack Yuan

WE ASKED OUR CONTRIBUTORS: If you could have any band play in your backyard, who would it be?
Kelly Ryan: James Taylor
Bob Rogers: The Jimi Hendrix Experience
Scott Searle: Led Zeppelin
Diane Rowell: Wow—I have a pool in my backyard, so I think it would be the Beach Boys.
Amy Sammons: Widespread Panic
Don Resnick: The Band

CONNECT
@mainframemag
facebook.com/mainframemag
© Copyright 2013 by International Business Machines (IBM) Corporation. This magazine could
contain technical inaccuracies or typographical errors. Also, illustrations contained herein may show
prototype equipment. Your system configuration may differ slightly. This magazine contains small
programs that are furnished by IBM as simple examples to provide an illustration. These examples
have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply
reliability, serviceability, or function of these programs. All programs contained herein are provided to
you “AS IS.” IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT AND FITNESS
FOR A PARTICULAR PURPOSE ARE EXPRESSLY DISCLAIMED.
All customer examples cited represent the results achieved by some customers who used IBM
products. Actual environmental costs and performance characteristics will vary depending on individual
customer configurations and conditions. Information concerning non-IBM products was obtained from
the products’ suppliers. Questions on their capabilities should be addressed with the suppliers.
All statements regarding IBM’s future direction and intent are subject to change or withdrawal
without notice and represent goals and objectives only. The articles in this magazine
represent the views of the authors and are not necessarily those of IBM.
The following are trademarks (marked with an *) of the International Business Machines
Corporation in the United States and/or other countries. A complete list of IBM Trademarks is
available online (www.ibm.com/legal/copytrade.shtml).
The following (marked with an *) are trademarks or registered trademarks of other companies: Intel,
Itanium and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries
in the United States and other countries. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. Linear Tape-Open, LTO and Ultrium
are trademarks of HP, IBM Corp. and Quantum in the U.S. and other countries. Linux is a registered
trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows and
Windows NT are trademarks of Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries. Other
product and service names might be trademarks of IBM or other companies.
Articles appearing in IBM Systems Magazine, Mainframe edition may have been published in
previous IBM Systems Magazine editions.
IBM trademarks: AIX, DB2, ESCON, FICON, IBM, IBM logo, MVS, OS/390, POWER, S/390, System i, System p, System Storage, System x, System z, System z9, System z10, Tivoli, TotalStorage, VM/ESA, VSE/ESA, WebSphere, z/OS, z/VM
Reprints: To order reprints, contact Kelly Carver (612) 336-9280.
IBM Systems Magazine, Mainframe edition (ISSN# 1933-1312) is published bimonthly by MSP
TechMedia, 220 South Sixth St., Suite 500, Minneapolis, MN 55402. Publications Agreement
No. 40063731, Canadian Return Address, Pitney Bowes, Station A, PO Box 54, Windsor, Ontario
Canada N9A 6J5
Printed in the U.S.A.
PUBLISHER’S DESK
By Design
PHOTOGRAPH BY CRAIG BARES
Design strategist Robert L. Peters once said, "Design is the application of intent—the opposite of happenstance and an antidote to accident."
It’s no accident that
design is important at
IBM Systems Magazine.
It always has been. We
spend a significant
amount of time thinking
about the way our
products should appear. However, the objective of our design
is rarely “look at me.” When we sit down and think about
design, we’re almost always discussing how it can effectively
communicate a specific message.
When we’re designing a trade show booth, we know we have
about five seconds for a passing attendee to recognize who we
are, so the aesthetic design of the booth must “say” magazine.
When designing our website, we made sure the aesthetic not
only supported the design displayed in the magazine but also
assisted visitors in navigating to relevant content.
The July/August issue is the debut of our magazine redesign.
The pages of the magazine are cleaner, easier to read and more
navigable, while the updated approach does a more efficient
job of highlighting photos and graphics. I am proud of the
work our design and editorial departments have done to bring
this new design to the table. As a magazine team, we hope you
enjoy the new look and find it a more pleasurable read.
I am particularly proud of our design and edit teams
because in addition to the redesign, they’ve been working on
an iPad app for IBM Systems Magazine, Mainframe edition. It
will debut in September. I’ve flipped through the prototype—
it’s pretty cool and provides us one more way to deliver the
magazine to readers. Look for more details on the iPad app in
the coming months.
In terms of IBM System z* technology, the mainframe
is designed for reliability, availability, security and
virtualization, as this issue demonstrates. Those virtualization
capabilities have been exploited by countless organizations
that have implemented Linux* on System z to consolidate
workloads for greater efficiency. This issue focuses on the
benefits of Linux consolidation, citing real-world examples in
our cover story on page 22 and the “Making a Splash” feature
on page 28. The latter focuses on two Brazilian firms—Sicoob
and Algar Telecom—both of which gained improved business
performance and reduced IT costs by consolidating workloads
on the new Linux on System z platforms.
It all shows the essential role innovative design plays
in delivering results—whether it’s business results or
magazine content.
Doug Rock, Publisher
[email protected]
CONTRIBUTORS

To the nth degree
Viktor Koen, illustrator for the cover story on page 22, has a bachelor's degree from Jerusalem's Bezalel Academy of Arts & Design and a Master of Fine Arts degree with honors from New York's School of Visual Arts, where he currently teaches. He is a regular contributor to The New York Times, The Wall Street Journal and Nature magazine.

A cut above
The author of this issue's Partner PoV on page 14, Lars Marowsky-Brée has a sizable sword and knife collection. The centerpiece is a custom-forged matching set of Japanese swords, also known as a daisho. Like Marowsky-Brée himself, the swords are dressed all in black.
IBM PERSPECTIVE
Ongoing Innovation on IBM System z

GREG LOTKO
Vice President and Business Line Executive for System z
PHOTOGRAPH BY BOB MARTUS

Since the introduction of computing systems, the IT industry has evolved dramatically. It started with the advent of computer compatibility, allowing machines across a product line to work with each other. Later, computers moved from bipolar to complementary metal oxide semiconductors, reducing power usage and providing higher density of logic functions on a chip. Eventually, we saw the introduction of open-source technologies, lowering costs and promoting collaborative development.

These technology shifts might not appear remarkable while we're in the midst of them, but looking back, we understand the profound impact they've made.

Innovation has never been optional for IBM. We strive to lead these shifts, pioneering new technologies to deliver greater value to our clients. And for the IBM System z* mainframe, that's been our mission, with today's zEnterprise* System as proof.

Over time, IBM adapted System z technology to accommodate a wide range of workloads. The zEnterprise hybrid runs a host of OSs, including z/OS*, Linux*, Windows* and AIX*, and uses specialty processors for Java*, XML and Linux workloads. It effectively runs the classic CICS*, IMS* and DB2* workloads, while being very adaptive to WebSphere*, C and Java technology-coded workloads, web front-end serving and portal environments. It can also run a large number of Linux system images.

That Linux capability, along with a level of virtualization and optimization that's second to none, enables businesses to run tens of thousands of virtual images on System z with an efficient cloud-delivery model. Accordingly, organizations can save money through consolidation on Linux. System z technology can accomplish this while sustaining the highest levels of performance and throughput.

Consider Nationwide Insurance. The U.S.-based company's distributed server environment was inefficient and costly. In an effort to reduce expenses and increase business agility, Nationwide consolidated 3,000 distributed servers to Linux virtual servers running on System z mainframes. This multiplatform private cloud has been optimized for all of its different workloads. As a result, Nationwide was able to reduce power, cooling and floor space requirements by 80 percent. It also reversed the expenditure on its distributed server landscape, saving an estimated $15 million over the first three years of the implementation.

This is the path of innovation we're on; the path we're going to stay on to enable our clients' success, making the System z platform even more adaptive, efficient, open and capable than ever before.
IT TODAY
Run the NUMBERS
Economics and performance make Linux on System z the clear choice

Data centers run a range of business workloads, including batch and transaction processing, business applications, complex data analysis, collaboration and social business. It's easy to gravitate toward one particular server as being good for all of these workloads; however, they all have different requirements. For that reason, IBM offers different types of servers.

John Shedletsky is vice president of IBM System z Competitive Technology. Emily Farmer is a senior research consultant in the IBM Software Group Competitive Project Office.
It’s essential to understand what
hardware is best suited for which
applications and why. Determining
the best placement of business
workloads should focus on the
platform that delivers the best
performance at the best cost with
the best quality—a concept known
as “best fit.” Consider these three
workload classes and see why the
best fit is on IBM System z*:
1. Linux Consolidations
Several characteristics make
the mainframe a highly efficient
platform, especially for consolidating Linux* workloads. Its
core features—processing power,
large cache and a dedicated I/O
subsystem—provide superior
scalability and throughput.
In addition, exceptional workload management, when combined with the aforementioned
capabilities, makes the mainframe
especially good for consolidating
Linux workloads. Workload management refers to the effectiveness
and efficiency of the virtualization
layer to manage resources across
multiple workloads.
To realize the best cost per
workload, a platform must
maximize CPU utilization and
ensure high-priority workloads
always meet their service levels,
particularly when a mix of high- and low-priority workloads is
running simultaneously.
To demonstrate the superior
workload-management
capability of System z, IBM ran
tests comparing mainframe
virtualization with a common
distributed server hypervisor
running on an Intel* technology-based server. Stand-alone high-priority workloads were run on
both platforms to measure their
CPU utilization, throughput
levels and response times.
Then the high- and low-priority
workloads were run concurrently
to see how the virtualization
layers managed resources.
On System z, the high-priority
workloads maintained their CPU
utilization, throughput levels
and response times, with the
low-priority work consuming all
but 2 percent of remaining CPU
minutes. On the Intel technology-based server, utilization of
high-priority workloads dropped
28 percent, throughput dropped
31 percent, and average response
times increased 45 percent when
the low-priority workloads were
added. It also had 22 percent
unused CPU minutes.
These tests demonstrated
nearly perfect workload management on System z. Meanwhile, the Intel server and
hypervisor’s imperfect workload
management was unable to
handle the workloads effectively
without adversely affecting the
high-priority ones. Accordingly,
the Intel technology-based
platform requires segregating
workloads onto separate servers to correctly manage them,
leading to core proliferation and
increased costs.
Looking at the system
requirements for supporting the
same workloads at equivalent
throughput on both tested
platforms, the Intel environment
requires 3.75 times more cores
and costs 2.4 times more.
A similar Linux comparison
was conducted early last year in
an IBM benchmark center for a
South American bank. This test
involved running Oracle RAC on
Intel technology-based HP servers
and on Linux on System z. The
workload required seven times
more cores and cost twice as
much on the HP platform.
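The segregation penalty described above can be reduced to simple arithmetic. A toy sketch of the comparison: only the 3.75x core ratio and 2.4x cost ratio come from the IBM tests; the workload count and the System z cost figure are invented placeholders, not pricing from the article.

```python
# Toy cost-per-workload comparison built from the ratios reported above.
# Only the 3.75x core ratio and 2.4x cost ratio come from the IBM tests;
# the workload count and the System z cost figure are invented placeholders.

workloads = 20                 # hypothetical number of consolidated workloads
z_cores, z_cost = 4, 100_000   # hypothetical System z configuration and cost
intel_cores = z_cores * 3.75   # "3.75 times more cores" (stated)
intel_cost = z_cost * 2.4      # "costs 2.4 times more" (stated)

cost_ratio = (intel_cost / workloads) / (z_cost / workloads)
print(intel_cores)             # 15.0
print(round(cost_ratio, 2))    # 2.4
```

Because both platforms host the same number of workloads, the per-workload cost ratio simply equals the platform cost ratio; the extra cores are what drive it.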
WEBCAST
Transforming Workloads with Operational Analytics
Supercharge Your Antiquated Job Scheduling Tool!

WHO SHOULD ATTEND: Mainframers, IT Execs, IT Operations
WHEN: Thursday, July 11, 9 PT / 11 CT / Noon ET

AGENDA
Maximizing the performance of your business requires maximizing the efficiency and agility of IT. Workload (job scheduling) environments have not typically been the primary target of these initiatives, for unexplained reasons. However, this area is prime for gaining significant benefits by applying modern analytical approaches to managing both workload engineering and workload operations. Using predictive and prescriptive analytics, companies are transforming their antiquated job scheduling products into modern-day, supercharged automation solutions for the business. Join us to learn how customers are transforming their mainframe job schedulers using analytics.

Special Offer: Attend this webinar and receive Reg Harbeck's whitepaper to download, as well as Jim Anderson's response to Reg's whitepaper.

FEATURING
Reg Harbeck, Featured Speaker, Chief Strategist, Mainframe Analytics
Jim Anderson, Featured Speaker, VP of Product Strategy, Terma Software Labs

REGISTER TODAY! TO REGISTER: http://ibmsystemsmag.webex.com
IT TODAY
2. Co-Located Business Analytics
As much as IBM DB2* has
consistently been a secure and
solid repository for operational
data, System z technology hasn’t
always had the software to
transform this data and perform
rigorous and deep analytics. This
led to a “mainframe quarantine”
effect, which essentially isolated
it as an operational data store.
Meanwhile, large numbers of
extract, transform and load (ETL)
operations were performed to
move data to distributed servers
for further analysis.
Many mistakenly think
transferring data is free. IBM
lab measurements, however,
calculated the costs for extracting
data from a store, transferring
it over an Ethernet and loading
it into a receiver store. The
results were applied to a
typical mainframe quarantine
situation—in which 1 TB of data
was transferred from a System z
server to a distributed one and
then to three more distributed
servers. Assuming a four-core
IBM System z10* EC running at 85
percent and four-core distributed
servers running at 60 percent, this
kind of transfer would burn 557
MIPS and use 21 distributed cores
per day. Tallying such hardware
system and administrative labor
costs indicates this scenario would
cost more than $8 million when
amortized over a four-year period.
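The scale of those numbers can be sanity-checked with a back-of-envelope script. The MIPS and core figures come from the lab measurement above; the unit costs are assumptions invented for illustration, so the total is an order-of-magnitude placeholder rather than a reproduction of IBM's $8 million figure.

```python
# Back-of-envelope model of the "mainframe quarantine" ETL scenario above.
# 557 MIPS and 21 distributed cores per day come from the IBM lab
# measurement; the unit costs below are invented for illustration.

MIPS_BURNED = 557            # mainframe capacity consumed by ETL each day
DIST_CORES = 21              # distributed cores consumed each day
YEARS = 4                    # amortization period used in the article

cost_per_mips_year = 2_000   # assumed $/MIPS/year (hardware, software, labor)
cost_per_core_year = 5_000   # assumed $/core/year

total = YEARS * (MIPS_BURNED * cost_per_mips_year
                 + DIST_CORES * cost_per_core_year)
print(f"${total:,}")         # $4,876,000 under these assumed rates
```

Even with deliberately conservative assumed rates, a recurring 557-MIPS burn lands in the millions of dollars over a four-year amortization, which is the point of the quarantine argument.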
In addition, IBM recently
uncovered real-world cases
where this mainframe quarantine
effect has begun to consume
significant amounts of system
resources. One European
customer reported using 16
percent of total MIPS for ETL,
while an Asian bank reported
consuming 18 percent of total
MIPS for ETL.
Thankfully, the days of
mainframe quarantine may be
over—due in part to the IBM
zEnterprise* Analytics System
9700. It’s a set of software
and hardware packages that
provides data warehousing,
ETL, cubing services, business
intelligence (BI) and predictive
analysis—all running on z/OS*
or Linux on System z. IBM tests
involving simple, intermediate
and complex analytical queries
on both a DB2 V10 platform and
a competitor’s pre-integrated
quarter-unit system showed the
System z solution completed
queries 1.1 to 3.2 times faster.
For especially complex
analytical queries, IBM
offers the IBM DB2 Analytics
Accelerator, an appliance
shown to reduce query times
by factors of 10 to 1,000. Using
the internally developed IBM
BI Day Benchmark for Business
Analytics, IBM demonstrated
that the System z and analytics
accelerator combination ran
5.6 times more reports per
hour than a competitor’s most
recent pre-integrated solution—
yielding triple the price-performance ratio.
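The relationship between those two claims is simple division: price-performance is throughput divided by price. The implied price ratio below is derived from the stated figures, not stated in the article itself.

```python
# Price-performance = throughput / price. From the two stated ratios,
# the implied relative price of the System z configuration follows by
# division; this price ratio is derived, not stated in the article.

throughput_ratio = 5.6   # 5.6x more reports per hour (stated)
price_perf_ratio = 3.0   # "triple the price-performance ratio" (stated)

implied_price_ratio = throughput_ratio / price_perf_ratio
print(round(implied_price_ratio, 2))   # 1.87, i.e. at most ~1.9x the price
```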
3. Critical Data Workloads
The System z platform is known
for superior qualities of service.
For example, DB2 supports
complete top-to-bottom data
security through encryption. So
while DB2 has reported only 40
security patches over the past
29 years, Oracle reported 24
database security patches in the
past year alone. Coupled with
its extreme processing power,
I/O efficiency and workload-management capabilities,
the System z platform is ideal
for critical data workloads
such as transaction and batch
processing.
Mainframes drive some of
the world’s biggest banks. One
top 10 bank runs approximately
1 billion CICS* transactions,
345 million IMS* transactions
and 134 million financial
transactions each day on a
six-way Parallel Sysplex*. That
kind of processing capability
is unattainable with any other
commercially available server.
A few years ago, Kookmin
Bank in South Korea ran a
single workload on a System z
platform against a TCS* BaNCS
core banking benchmark
and demonstrated near-linear scalability, with the
highest throughput levels
exceeding 15,000 transactions
per second (tps). The best
published throughput for
the same benchmark on a
distributed server is 10,716 tps,
demonstrated by State Bank of
India on HP Superdome servers.
At that throughput level, a
zEnterprise EC12 (zEC12) would
require 32 processors, but the
HP servers required 448 Intel
processors. In other words, at
equivalent performance levels,
the System z platform showed
14 times better core density.
Pricing out a complete system—
production, development and
test systems included—showed
the System z solution to have
37 percent less total cost of
acquisition over five years.
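The core-density claim follows directly from the two benchmark configurations. A quick check of the arithmetic:

```python
# Core-density check for the TCS BaNCS comparison above: 448 Intel
# processors versus 32 zEC12 processors at comparable throughput.

hp_cores = 448
zec12_cores = 32

print(hp_cores / zec12_cores)   # 14.0, the stated "14 times better core density"
```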
Last year, at IBM’s benchmark
center in France, a European
bank ran an SAP core
banking benchmark, pitting a
competitor’s database solution
on Intel servers against DB2 for
z/OS. To drive the tests, the Intel
platform required 128 database
cores, but the mainframe
required only 44. The System z
platform drove 41 percent more
throughput—reaching a world
record of 59.1 million postings per
hour—yet was half the unit cost
(dollar per posting per hour) of
the competitor’s platform.
Many critical data workloads
drive very high I/O bandwidth,
so IBM ran a series of in-house multitenancy tests
(i.e., multiple workloads on
a platform sharing the same
database) to demonstrate best
fit on System z. Those tests
used both DB2 V10 and Oracle
databases, and compared
System z (both z/OS and Linux)
and a competitor's 16-core pre-integrated quarter-rack system.
When running either DB2 V10
or Oracle database on Linux, the
competitor’s system could only
support one workload, whereas
a zEC12 with four IFLs could
support five such workloads.
Running on z/OS generated
similar 5-to-1 workload results.
That’s effectively five times
the core density. Factoring in
expenses, results showed the
zEC12 platform had 25 percent
lower cost per workload.
Optimized, Efficient
and Cost-Effective
System z technology is perfect
for Linux consolidation, ideal
for business analytics, and the
best platform for critical data
workloads. It’s highly optimized,
efficient and cost-effective—
designed to deliver superior
economics, rapid query response
times and overall superior
performance for many of today’s
business workloads.
FOR MAINFRAME VTL EFFICIENCY AND PERFORMANCE USE... DLm CONTROL CENTER

DTS Software, Inc. offers unique efficiencies and performance in successfully migrating customers' data to virtual tape systems. DCC (DLm Control Center) provides a wide range of components that allow installations to more effectively install, manage, use and migrate to the DLm system.
- Robust command and monitoring interface
- Advanced intelligent device selection
- Migration of tape libraries
- Allocation and balancing

DTS Software: the leader in Storage Management, System Monitoring and Automation.
Visit us at SHARE in Boston.
Contact us at [email protected] or 770-922-2444 to learn more about our software.
www.DTSsoftware.com
PARTNER PoV
Doubly DEPENDABLE
Linux and open-source HA build on mainframe's strengths

Mainframes are renowned for their dependability, providing a stable and reliable platform for unmatched availability to mission-critical business services. Linux* and open-source software (OSS) in general also have a reputation for providing significantly above-average quality. For years, Linux has been widely trusted with mission-critical services, which is reflected in the worldwide deployments of Linux on System z*.

Lars Marowsky-Brée is a SUSE Distinguished Engineer and the founder of the Linux Foundation HA Working Group. He serves as the architect for High Availability and Storage at SUSE.
To increase the availability
of mission-critical workloads,
efforts have been made to
greatly reduce the likelihood
of component failure. Systems
have long been combined to form
clusters in which their capacity
beyond the immediate runtime
requirements of a workload is
used to compensate for individual
components’ faults. This is in
contrast to clustering for high-performance computing (HPC),
where added capacity is used to
boost workload performance.
While some overlap in the
technology exists, high availability
(HA) and HPC have different
priorities and goals.
For HA clusters, redundant
components must be added
intelligently so the redundancy
they provide improves the
availability of the workload
cluster hosts. The architecture
must include at least one
level of redundancy for every
potential component failure,
be it in hardware or software.
Components without redundancy
but mandatory for service delivery
are called single points of failure
(SPoF). It’s not always cost-
effective to remove all SPoFs.
Ultimately, the risk vs. cost tradeoff is a business decision.
Especially on System z, this
quality and redundancy have been
built deep into the architecture and
are transparent to the OS running
in a virtual instance. On other
architectures, it becomes the task of
the OS and its middleware to:
• Combine individual nodes into a more dependable cluster
• Identify faulty components
• Handle recovery by switching to a backup network interface or a different storage path, or migrating the service from one node to another
An HA cluster stack is a
software suite capable of such
management. Linux features
one of the most advanced,
comprehensive and fully OSS
implementations, provided
primarily through a combination
of the Corosync (membership
and messaging) and Pacemaker
(policy-driven resource
management) projects. Growing
from modest two-node heartbeat
clusters in the 1990s, this stack
became the de facto standard
for OSS HA and has been widely
adopted by Linux distributions.
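For a flavor of how Pacemaker's policy-driven resource management is expressed, a minimal resource definition in the crm shell might look like the following sketch (the resource names and address are invented for illustration):

```shell
# A virtual IP monitored every 10 seconds; on failure, Pacemaker
# restarts it or moves it (and anything grouped with it) to another node.
crm configure primitive p_vip ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.10 cidr_netmask=24 \
    op monitor interval=10s
crm configure primitive p_web ocf:heartbeat:apache \
    params configfile=/etc/apache2/apache2.conf \
    op monitor interval=30s
# Group the IP with the service so they run together and fail over as one.
crm configure group g_webstack p_vip p_web
```

The monitor operations are what let the stack detect software faults and initiate recovery, as described above.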
While mainframes have a
high degree of this functionality
built into their hardware and
firmware—making application-visible hardware faults extremely
rare—clustering the individual
instances still has advantages.
The cluster stack provides hard
consistency guarantees and
coordinates access to shared
resources. This is used by cluster-concurrent file systems such
as OCFS2 and GFS2. It not only
protects against hardware failures
but also monitors the software
and initiates recovery accordingly
through restarts of the services or
rebooting the instance.
The Bigger Picture
Even the most reliable hardware
and data center cannot possibly
cope with all eventualities when
disasters strike. Therefore, a global
organization must be able to
compensate. With such scenarios,
the focus shifts from single-component failures to a systemic
view of the infrastructure. The
commonly proposed mitigation
WEBINAR
Modern mainframes have no
ESCON! How can I keep my
ESCON device portfolio?
Find out how you can invest in the latest
mainframe and leverage Prizm
to retain access to key ESCON
and Bus/Tag devices.
WHO SHOULD ATTEND
Hardware and Capacity Planners and Architects,
Systems Programmers, and Storage and I/O Specialists
WHEN
Wednesday, August 28 | 9 PT / 11 CT / Noon ET
SPECIAL OFFER
Attend this webinar and complete a Prizm inquiry by
September 15 for your chance to win an Apple iPad!
AGENDA
The benefits of investing in the latest
IBM mainframe platform are compelling,
but on-going ESCON device and
application requirements require careful
consideration. Join Optica Technologies
to review the most current System
z ESCON roadmap and connectivity
landscape and learn about how Optica’s
Prizm makes the transition easy!
FEATURED SPEAKERS
Ray Newsom
System z HW Subsystem Strategist
IBM Corporation

Sean Seitz
VP of Technical Services
Optica Technologies, Inc.

Michael Dailey
COO, VP of Worldwide Sales
Optica Technologies, Inc.

We'll address your questions:
What is Prizm?
How will it fit in my datacenter?
How does it get designed and configured?

We'll also review real customer
examples and the key steps to
establishing a design that will
work for you.

REGISTER TODAY
http://ibmsystemsmag.webex.com
strategy is to build data centers
at geographically dispersed
locations. However, distance
translates to unavoidable latency
due to the laws of physics (speed
of light), and typically higher costs
for network bandwidth.
Traditional local clusters—able
to exploit low latency between
nodes—are coupled tightly with
synchronous coordination and
replication of data with coherency
guaranteed at multiple levels
in the stack.

[Diagram: Top-level managers allocate "tickets" on which resource hierarchies depend. Only one site is granted a specific ticket at a time.]

It isn't feasible to
maintain such tight coupling
across distant geographies.
Hence, geographic clustering is
most commonly implemented as
an asynchronous, loosely coupled
active/passive scenario, meaning
only one site is active for a given
workload. The other sites are
passive replication targets, only
becoming active after failure of
the primary site. Because data
replication is asynchronous,
such a failover will generally
incur a minimal loss of the most
recent transactions. This is
usually deemed preferable to not
providing any service at all.
The Next Level
Geographic clustering is available
as an OSS extension to Corosync/
Pacemaker clusters. It’s assumed
each site is a largely autonomous
cluster itself, taking care of
local storage, recovery, fencing,
failure detection and resource
hierarchy management. The new
component only coordinates
recovery at the next level.
To achieve this, two or three of
these sites are coupled together
in a “cluster of clusters.” The
top-level manager arbitrates the
allocation of so-called “tickets”
upon which the resource
hierarchies depend. This ensures
only one site is granted a specific
ticket at any given time, and thus
allowed to activate the resources
the ticket protects.
This arbitration is handled
via an implementation of the
Paxos algorithm, and the
project managing the tickets is
called the booth. All sites vote
on which site is to be granted
a ticket and the majority vote
wins. The system supports
arbitrator sites that aren’t true
clusters themselves but only
participate in the voting process.
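As an illustrative sketch of the topology described above, a booth configuration for two sites plus an arbitrator might look roughly like this (the addresses and ticket name are invented, and directives vary between booth versions):

```
# /etc/booth/booth.conf -- hypothetical two-site layout with an arbitrator
transport = "UDP"
port = "9929"
site = "192.0.2.10"        # cluster site A
site = "198.51.100.10"     # cluster site B
arbitrator = "203.0.113.5" # tiebreaker only; votes but hosts no resources
ticket = "ticketA"         # granted to at most one site at a time
```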
To allow automatic and safe
allocation to a given site, it must
first be safely established that
the previous owner has fully
relinquished a given ticket. This
is easy if all sites are up and
connected. Should a site become
permanently disconnected,
however, the booth process
running there will notice and
revoke the ticket locally by force.
To speed up the recovery
process, Pacemaker will
immediately fence all nodes
hosting resources that depend
on the ticket. This is necessary to
make the cleanup time predictable.
Because a disconnected
geographical cluster doesn’t allow
for fencing to be acknowledged by
remote sites, automated failover
is only possible via a timeout-based mechanism. The timeout
must allow for a disconnect to be
detected and the node-fencing
process to complete. Once this
time has expired, the surviving
majority can re-grant the ticket to
one of the remaining sites.
Some organizations implement
a policy where this isn’t deemed
adequate. For these scenarios,
or for those without a third
tiebreaker site, a manual process
of revoking and granting tickets
is provided.
The changes required to
Pacemaker’s core are minimal. The
tickets are represented as newly
added clusterwide attributes. A
new constraint type is added to
allow resources to depend on
them. This is fully supported
in Pacemaker and the cluster
resource manager shell. The Web
interface “hawk” also displays
ticket ownership and provides a
dashboard of multiple clusters.
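In crm shell terms, the new constraint type could be used roughly as follows (the resource and ticket names are hypothetical):

```shell
# Make a resource group depend on "ticketA". loss-policy=fence tells
# Pacemaker to fence the nodes running these resources if the ticket
# is revoked, matching the immediate-fencing behavior described above.
crm configure rsc_ticket db-dep-ticketA ticketA: g_database loss-policy=fence
```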
Ongoing Improvements
While this framework lays
the foundation for building
geographically distributed
clusters exclusively based on OSS
technology, the OSS community
continues to explore and
implement further enhancements.
Storage replication is one
such area. While the Distributed
Replicated Block Device included
with Linux can handle storage
replication on a per-host level,
resource agents are needed to
properly interface with third-party
storage arrays to tie these into
larger disaster recovery concepts.
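As a sketch of how the DRBD-based variant is typically wired into Pacemaker (resource names here are illustrative):

```shell
# DRBD resource r0 managed as a master/slave pair, so the cluster
# decides which node holds the primary (writable) copy of the data.
crm configure primitive p_drbd ocf:linbit:drbd \
    params drbd_resource=r0 \
    op monitor interval=15s role=Master \
    op monitor interval=30s role=Slave
crm configure ms ms_drbd p_drbd \
    meta master-max=1 clone-max=2 notify=true
```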
Network access is critical.
While a proof-of-concept
implementation to automatically update dynamic DNS
records exists, it’s desirable to
add integration with dynamic
routing protocols, such as OSPF
or BGP4, to make the network
layer itself aware of changes.
One potential solution for this
requirement is integrating with
routing software.
Synchronizing configuration
to avoid divergence between the
sites is another area of interest
because manual replication
is error-prone. The HA stack
could replicate cluster-relevant
configurations, such as the
cluster information base or
external configuration files,
as well as apply transparent
transformation to account for
differences between the sites.
A key focus of industry
contributors, including distribution vendors, ISVs, consulting
partners and key customers, is
to develop best-current-practice whitepapers to guide enterprise
implementation of the whole stack, from the OS and HA cluster
software to the application workload and operating procedures.
This feedback also
informs future development of
the OSS HA stack.
Open-Source HA
and Linux
Mainframes and Linux benefit
from leveraging each other’s
strengths. Mainframe-based
infrastructures can profit from
advanced OSS HA cluster
technologies available on Linux.
Open-source HA and Linux on
System z have come a long way
since their humble beginnings.
Today, Linux has earned its
place as a trusted enterprise OS.
Multinode HA clusters on Linux are
widely deployed and reliable with
a proven track record. Customers
and solution providers are actively
seeking to deploy Linux-based,
open-source HA solutions for even
their highest-tier infrastructure
deployments, and the architecture
to implement these now exists.
TRENDS
The IBM WebSphere DataPower
appliance offers new integration
solutions to leverage the
processing power of IMS.
Flexibility Meets POWER
New DataPower appliance for IMS rapidly transforms data for cloud and mobile apps
The IBM WebSphere* DataPower* Integration Appliance is a purpose-built
hardware platform designed to deliver rapid data transformations for cloud
and mobile applications, secured and scalable business integration, and an
edge-of-network security gateway in a single, drop-in appliance.
Jenny Hung is an
advisory software
engineer working on
IBM IMS OnDemand to
modernize IMS as the
integration focal point in
SOA environments.
Dario D’Angelo is
an advisory software
engineer at the
IBM Silicon Valley
Laboratory and chair
of the IBM Silicon
Valley TVC.
The most recent DataPower
firmware V6.0, announced in April,
enables a new level of integration
across an enterprise environment.
The synergy among DataPower,
System z*, Rational* Software and
common transformation tooling
positions the DataPower appliance
as the premier System z gateway.
With the most recent firmware
release, the product offers new
integration solutions to leverage
the processing power of IMS*—
one of the fastest transaction and
database management systems in
the world.
Saving Time and Money
The resilient, eminently
consumable and self-recovering
DataPower appliances provide
plug-in usability with little
to no changes to an existing
network or application software.
No proprietary schemas,
coding or APIs are required to
install or manage the device,
and the appliance supports
XML integrated development
environments (IDEs) to help
reduce the time needed to develop
and debug XML applications.
DataPower appliances simplify
deployment and speed time-to-market
for services by providing:
• The capability to quickly transform data among a wide variety of formats
• Core enterprise service bus (ESB) functionality, including routing, bridging, transformation and event handling
• Exceptional security from tampering as well as service-level runtime protection
• A drop-in integration point for heterogeneous environments, helping reduce the time and cost of integration
A cost-effective choice for non-production environments is the
DataPower Virtual Edition (VE),
which is available with models
XI52 and XG45. This virtual
appliance runs in the VMware
hypervisor environment and isn’t
tied to specific hardware factors.
DataPower flexibility can be
demonstrated by the capability to
connect not only the core business
of service oriented architecture
(SOA) but also to serve areas of
business-to-business connectivity
with enterprise system and
Web-application proxying. These
appliances also support:
• Advanced Web services standards
• Web 2.0 integration with JSON and REST
• Advanced application caching
• Rapid integration with cloud-based systems
With IMS, the DataPower
appliance now extends the
broad array of direct-to-database
connectivity beyond DB2, Oracle
and Sybase. The integration
with IMS will help process
SOA transactions in a faster,
more secure and simplified
way. By leveraging DataPower
architecture, IMS can be both a
service consumer and a provider
of any Web or HTTP services.
Supported IMS Features
In addition to its support for IMS
Connect, the DataPower for IMS
offering now includes support
for IMS Synchronous Callout and
IMS Database (IMS DB).
IMS DB support enables
direct connection to an IMS DB
through the IMS Universal JDBC
driver. With it, applications can
issue dynamic SQL calls, such as
basic CRUD operations, against
any IMS DB.
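As a hedged sketch of what such a dynamic SQL call might look like in Java, the following assumes the IMS Universal JDBC driver's URL format and uses a hypothetical host, PSB name and table; a real setup also requires IMS Connect and ODBM to be configured and the driver JAR on the classpath:

```java
// Sketch: a basic CRUD read through the IMS Universal JDBC driver.
// The URL format (jdbc:ims://<host>:<port>/<psb>), host, port, PSB and
// table names are assumptions for illustration, not taken from the article.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ImsDbQuery {

    // Build a driver URL of the assumed form jdbc:ims://<host>:<port>/<psb>.
    static String buildUrl(String host, int port, String psb) {
        return "jdbc:ims://" + host + ":" + port + "/" + psb;
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            // No host supplied: just show the URL we would use.
            System.out.println(buildUrl("imshost", 5555, "MYPSB"));
            return;
        }
        String url = buildUrl(args[0], 5555, "MYPSB");
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // Dynamic SELECT against a (hypothetical) segment the driver
             // exposes as a table via the PSB metadata.
             ResultSet rs = stmt.executeQuery("SELECT * FROM PCB01.CUSTOMER")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```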
[Figure 1: Inbound/Outbound DataPower Flow for IMS Callout. An IMS V12 application's callout request travels via a TPIPE and IMS Connect to the IMS Callout Front-Side Handler of a DataPower (XI52, XI50B, XB62) Multi-Protocol Gateway, where request and response rules (one or more actions) apply transformations between IMS and the external services. ©2012 IBM Corporation]

IMS Connect support—also known as IMS Connect Helper or IMS Provider—enables distributed services to drive an IMS transaction through DataPower. DataPower Multi-Protocol Gateway (MPG) services can be configured with an IMS Connect back-side handler to receive a request from a client, process it and send it to IMS Connect. A response will be sent back to the client after the message is processed by IMS. Typical uses include:
• An IMS Connect proxy to IMS Connect clients—Existing IMS Connect clients can use this to make in-flight modifications to headers and payloads without changing the client or IMS.
• Web service facade to IMS Connect transactions—Organizations can use the Web service features in DataPower to quickly enable Web service support for IMS Connect.

IMS Synchronous Callout support is the latest available feature for allowing IMS to consume an external service through DataPower. By defining an IMS Callout Front Side Handler to a DataPower MPG, an IMS application can initiate synchronous calls to an external service through DataPower following the IMS Call (ICAL) protocol. The ICAL protocol enables an application program running in an IMS dependent region to synchronously send outbound messages to request services or data, and receive responses.

Table 1: DataPower Support Matrix
Support Type                                | DataPower Models Supporting V6.0 Firmware
IMS Synchronous Callout (IMS V12 or beyond) | XI52, XI50B, XB62, XI52 VE
IMS Connect                                 | XI52, XI50B, XI50Z, XB62, XI52 VE
IMS DB (IMS V12 or beyond)                  | XG45, XI52, XI50B, XB62, XI52 VE
For synchronous callout
requests, an IMS application
program issues a DL/I ICAL
call and waits in the dependent
region to process the response.
DataPower retrieves the callout
request, processes it based on the
rules and actions defined in the
MPG policy, and sends it out to
the back-end service. In a similar
manner, the response is flown
back and processed through
the MPG. Figure 1 (page 20)
illustrates the callout inbound
and outbound flow through
DataPower.

ON THE WEB For more information about WebSphere DataPower SOA appliances,
visit ibm.com/software/integration/datapower/index.html
In addition, data transformation
can be optionally configured
within the MPG policy. IBM
WebSphere* Transformation
Extender (WTX) is the
recommended tooling, and it
provides the mapping between
different data formats. WTX can be
used to generate transformation
maps from resources such as
COBOL copybook, PL/I imports
and XML schema definitions.
IMS Connect support has been
available since DataPower V3.6.1.
The new IMS Synchronous Callout
support and IMS DB support will
be available in the DataPower
V6.0 release. Table 1 (page 20)
shows the support matrix.
Speedy Solution
IBM’s preliminary studies in a simplified environment indicate that
IBM WebSphere DataPower with
IMS Synchronous Callout support
is capable of processing significant
workloads. While performance
results can vary significantly
depending on an environment’s
configuration and the type of
workloads measured, these initial
studies confirm the DataPower
appliance’s capability to deliver
a reliable, performance-oriented
solution. Finally, an integration
that leverages the appliance’s purpose-built features to process XML
at nearly wire speed.
Implementation Guide
COMING SOON
IBM plans to release the "DataPower for IMS Implementation
Guide,” which will illustrate how to deploy the solution in a
basic form using only a few relatively simple steps.
The document is meant to be a single point of reference,
covering both IMS and DataPower concepts and enabling
clients to integrate IMS and the DataPower appliance with
their application environment. It will cover key functions of the
DataPower Web GUI; configuration of a multi-protocol gateway,
policies and rules; and IMS environment considerations and
customization, including IMS Connect, IMS OTMA and ODBM,
which are necessary to enable communication between the two.
THE NEXT EVOLUTION OF LINUX ON SYSTEM z
THE BENEFITS OF THIS TECHNOLOGICAL SYNERGY CONTINUE TO ADVANCE
By Jim Utsler
Illustration by Viktor Koen

When the Linux* OS made its initial appearance on the mainframe, the skeptics might have responded, "Really? That's what x86 boxes are for." And many of these people chose to run z/OS* on IBM System z* and Linux on Intel* technology-based servers, in the belief that these two OSs couldn't peacefully coexist on the same system.
Over time, however, virtualization became commonplace and perceptions changed as more organizations saw the benefits of workload consolidation. Larger organizations were the first to see the light. As they rolled out services supported on a multitude of boxes, their data centers began to bulge at the seams. Administrative overhead grew, licensing costs skyrocketed and energy consumption became more than a footnote on IT budgets.
Together, these and other
factors prompted IT managers
to reconsider their segregated
workload/hardware mindset.
They began with light Linux
workloads loaded on mainframe
partitions to test whether heavyduty mainframe workloads and
lighter-fare Linux operations
could, in fact, work well together.
As it turned out, they could,
and companies soon began
moving more Linux workloads
to the mainframe. Now, most
mainframe users expect to run
TAKEAWAY
Linux workload consolidation
on System z offers multiple
benefits in terms of energy efficiency, improved security and
administrative cost savings.
An insurance company that
consolidated 292 servers
to a single mainframe
running Linux on z/VM
reduced its floor space by
97 percent, heat production
by 93.6 percent and energy
consumption by 95 percent.
some instances of Linux on their
big-iron boxes—and for good
reason. Not only does this type
of consolidation help reduce IT
operating costs, but it also allows
organizations to take advantage
of the security, availability,
scalability and manageability of
the mainframe platform.
Decisive Factors
“Another decisive factor driving
Linux on System z deployments,
as noted by customers doing
so, is the optimization of
Linux to run with System z’s
z/VM* virtualization software
environment, which makes highly
efficient use of the hardware
resources available to it,"

[Sidebar: While about 100 documented security breaches occur in companies every week, during the past 10 years, only two such documented cases occurred on the mainframe. The normalized staffing levels for z/VM are as much as 13 times smaller than those for the competitive offerings.]
according to “Enterprise Linux
Consolidation and Optimization
on IBM System z,” a whitepaper
by Jean S. Bozman. “This means
that Linux workloads deployed
directly onto System z servers or
migrated to System z from other
platforms support these features,
which are important for mission-critical workloads that cannot be
interrupted without impacting
business continuity.”
Bill Reeder, worldwide sales
leader for IT Optimization and
Cloud for System z, cites an
example of a software as a service
(SaaS) vendor that was having
issues related to outages on x86
servers. “If the systems went offline,
so did its business,” Reeder notes.
“As a result, it moved its Linux
workloads to the mainframe. Now,
its more than 115,000 registered
users don’t have to worry about
downtime.”
Indeed, in a Forrester Research
report, “The Total Economic Impact
of IBM System z,” authors Michelle
S. Bishop and Jon Erickson remark:
"With the drive to maximize cost-efficiency came the need to maintain
high levels of availability in an
increasingly complex distributed
environment. Many organizations
realized their existing distributed
architecture could not provide
high levels of availability as the
environment grew.”
In another case, an online U.K.
art dealer is moving its workload
to the mainframe due to issues
related to uptime and software
costs. It, too, was running Linux
in an x86 environment. But after
doing some research, the art dealer
decided it might be wise to move
Linux instances to the mainframe.
Notably, IBM didn’t make a sales
call in this case. Instead, the
customer contacted IBM.
“The firm had decided to
expand its geographical footprint,
which would have meant adding
z/VM 6.3 Preview

Planned updates for the next iteration
of z/VM* will feature support for
1 TB of real memory to enable improved
efficiency of both horizontal and vertical
scalability. That improvement includes a
quadruple increase in memory scalability
while maintaining nearly 100 percent
resource utilization, according to an IBM
product preview.
Ultimately, that should mean better
performance for large VMs and a higher
server consolidation ratio with support
for more virtual servers than any other
platform in a single footprint, IBM predicts.
In addition to improved scalability,
better performance is planned with
support for z/VM HiperDispatch. This
support is designed to deliver higher and
more efficient utilization of CPU hardware
resources underneath multiple layers of
virtualization running multiple and diverse
workloads, especially memory-intensive
workloads running on a large number of
physical processors. With it, clients can
expect improved price performance with
more efficient dispatching of CPUs.
“Our intention is to keep leveraging
what’s coming with new System z*
hardware and z/VM versions,” says
Gerald Hosch, offering manager for
Linux* on System z. “While Linux for
System z is based on the general Linux
kernel, we develop Linux extensions to
take advantage of System z hardware
and new virtualization technologies.
In general, the System z platform is in
a unique position to deliver not only
outstanding consolidation capabilities,
but the highest levels of security and
performance as well, not available to
other platforms.”
For more details, visit www.vm.ibm.
com/zvm630/.
—J.U.
more, smaller servers. It instead
decided to go with one box that
fits all of its needs and can easily
grow as it does,” says Gerald
Hosch, offering manager for Linux
on System z. “We’re witnessing
that with bigger clients as well.
They’re now saying it’s a better
choice to select a centralized,
highly virtualized and optimally
shared environment, which is a
given with the z/VM virtualization
on System z.”
The success of this consolidation, running Linux servers as
virtual guests under z/VM, can be
attributed to several factors, including reliability, availability and
serviceability (RAS), scalability,
security and manageability. Thanks
to easily understood administrative
tools, for example, staffing can
be drastically reduced or optimized because of the relative ease
involved in both deploying and
maintaining z/VM installations.
According to the report
“Comparing Virtualization
Alternatives—What’s Best For
Your Business?” from Solitaire
Interglobal Ltd. (SIL), “The
noticeably lower staffing level
for z/VM deployment and use is
directly attributable to an efficient
unified workflow, as well as a
substantially different and fully
integrated mechanism to handle
the allocation of virtualized
resources. This is of special note
as the organization increases in
size or if an organization is on the
path to a cloud service delivery
model. The normalized staffing
levels for z/VM are smaller than
those for the competitive offerings
by as much as 13 times.”
Hosch clarifies that the effective
savings depends on the number
of servers consolidated to Linux
on System z. “If you consolidate
15 x86 servers, the savings are
manifested in smaller amounts.
However, if you look to the bigger
companies consolidating up to
100, 300 or even more servers,
the savings can result in a very
large number that’s visible not
only in IT, but to the company as
a whole.”
DRASTIC CUTS
An insurance company consolidated 292 SERVERS to a single mainframe, reducing floor space by 97 percent, heat production by 93.6 percent and energy consumption by 95 percent.

A Tactical Point of View
Moving ever-increasing Linux workloads to the mainframe consumes less energy than one-off servers and their test-and-backup boxes. "If you're running at 95 percent CPU utilization, you're only paying a 5 percent energy tax, which is consumed power that's not serving any useful purpose," Reeder explains. "If you're running at 55 percent utilization, you're paying a 45 percent tax related to energy consumption for powering your boxes, cooling the data center space, et cetera. That adds up very quickly."
The SIL report supports
this assertion: “The System z
platform coupled with the z/VM
mechanisms have a synergy that
significantly reduces the impact
on the environment. This impact
affects the square foot area
required within a data center, the
electrical power consumption
necessary to run the equipment,
the cooling necessary to handle
radiated heat within the physical
facility and also the overall
carbon footprint.”
Hosch points to an insurance
company that consolidated 292
servers to a single mainframe.
Previously, it had been running
HP and Sun servers with more
than 500 cores. The average
utilization for that architecture
was around 30 percent. After
moving to System z with
22 IFLs running Linux on z/VM,
it reduced its floor space by
97 percent, heat production
by 93.6 percent and energy
consumption by 95 percent.
From a tactical, business point
of view, having Linux workloads
sitting directly next to System z
workloads means faster response
times. This is particularly crucial
when it comes to business
intelligence, when real-time
data access is critical. Without
that, both decision makers and
automated processes might wait
for distributed batch processing to
occur before taking action.
“You can have, for example,
more than 11 different types of
databases living and supported on
the same platform that I can run
business intelligence against in
real time. To take any enterprise
forward, I can’t figure out why
anyone wouldn’t want to do that,”
Reeder adds.
The same applies to software
licensing fees. Many vendors
charge customers based on the
number of cores their applications
are running on. In a distributed
environment, such as the 500-plus cores previously deployed by
the insurance company, that can
quickly add up. But by moving to a
virtualized System z environment
with 22 IFLs, those fees dropped
dramatically for the insurer.
“Right now, core pricing is an
absolute advantage, and that’s in
part how I encourage customers
into moving to System z,” Reeder
says. He cautions, though,
that the pricing model could
change. Nonetheless, he and
Hosch still perceive this as a
distinct advantage over current
distributed-computing models.
System z security is another
advantage. The baked-in security
measures that protect core z/OS
applications and functionality
also protect Linux instances
running within this environment.
“Mainframe security is a
characteristic that is inherited
by the workloads running on
System z. That means that the IBM
RACF [Resource Access Control
Facility] security, or other security
software provided by a third-party ISV, will apply to the Linux
workloads running on top of the
System z hardware platform.
High levels of encryption (256-bit
security) are supported,
conferring the high levels of
security specified by federal
governments and international
standards for encryption,”
Bozman writes.
Reeder adds, “Statistically,
about 100 documented security
breaches occur in companies
every week. In the past 10 years,
as noted in the SIL study, only two
such documented cases occurred
on the mainframe, and they both
had the same source: employees
viewing records they weren’t
supposed to. They violated
rules, but no data was actually
transmitted. So if I’m thinking
secure Linux servers, I’m thinking
Linux on the System z.”
Optimized
Environments
The Linux OS has certainly
become a staple in many
computing environments. In
fact, it’s become so entrenched
in business operations that many
organizations would cease to
function without it.
That said, the only question that
remains is which platform to run it
on. In some cases, x86 servers are
probably well suited. But in others,
where the OS is used to host heavy-lifting workloads, the mainframe is
more appropriate. Especially when
running many Linux servers, the
System z environment represents
operational efficiency, high
performance and data throughput,
and fewer software licenses.
But as Reeder notes, there’s no
one-size-fits-all Linux environment. “Is Linux on the mainframe
the solution to all problems?
Absolutely not, but using it in
concert with everything else that’s
out there does help customers
optimize their environments to
deploy more cost-effective and
secure solutions.”
Jim Utsler is senior writer for IBM
Systems Magazine and has been covering technology for more than 20 years.
TAKEAWAY
➜ Following the zEnterprise
EC12 announcement in
August 2012, IBM has seen
strong demand and growth
in the market—particularly
in organizations running
Linux on System z.
➜ Many of these clients
are new mainframe shops
from emerging markets.
They’re joining the ranks
of those who recognize
the advantages of the
zEnterprise platform in
meeting business goals.
➜ Among these are Sicoob
and Algar Telecom of Brazil.
Both have reduced IT and
energy costs while boosting
business performance
by consolidating their IT
environments on Linux on
System z.
Linux consolidation helps System z
forge inroads in new markets
By Mike Westholder
Illustration by
Peter Crowther
Based on the data, it's
hard to argue the IBM
zEnterprise* EC12
(zEC12) didn’t make a big
splash. In the wake of last
year’s zEC12 announcement,
the vaunted mainframe closed
2012 on a high note. During
Q4, mainframe revenues were
up 56 percent from the 2011
fiscal year, while the number of
shipped MIPS grew 66 percent—
the highest amount ever.
Notably, it wasn’t only
longstanding mainframe clients
upgrading to the latest version of
System z*, but also more than 70
new installations were reported.
Of those, more than half were
first-in-enterprise mainframe
installations. This increase was
reflected in the top 30 growth
markets—including Brazil,
China, India and Russia—where
mainframes are traditionally
less common. System z revenue
was up 65 percent in those
growth markets, compared to an
increase of 50 percent in more
established markets, including
North America, Japan and
Western Europe.
Not bad for the mainframe,
which some critics contend
relies too heavily on a loyal but
shrinking, old-guard customer
base. So who are the recent
converts to mainframe, and
what’s motivating them to take
a closer look at System z? Two
recent examples from Brazil
illustrate who makes up the new
guard and their motivations.
In particular, the examples
show that many companies
are consolidating their IT
infrastructures and taking
advantage of z/VM* to run Linux*
on System z. Today, more than
3,000 applications worldwide
run on Linux on System z. And
the results are striking.
Sicoob Banks
on System z
Brazil’s largest cooperative
financial institution, Sicoob
offers banking and credit
services to more than 2.5
million people. In recent
years, as Brazil has grown
into a global economic
player, Sicoob has sought to
keep pace and become the
primary provider of financial
services to its members.
“Our challenge has been
to create an institution that
is more adaptable to the
national growth scenario
but with stronger social
appeal, unlike a traditional
bank,” says Marcos Vinicius,
the organization’s head of
technology infrastructure.
Ricardo Antonio, CIO
at Sicoob, explains: “Our
aim is to be the primary
financial institution for our
members. Increasingly, this
will mean offering a complex
set of products and services
through self-service mobile
channels, available 24-7.
Our members need to feel
that they can ‘take their
bank with them’ wherever
they go.”
ibmsystemsmag.com JULY/AUGUST 2013 // 29
In the past, however, meeting that challenge was
difficult in terms of Sicoob’s IT infrastructure, which
had limited processing power, Vinicius says. Before
consolidating onto the organization’s first mainframe
platform, that infrastructure was built on a multitude
of servers that couldn’t keep pace with the cooperative’s growing needs.
“The challenge was to maintain a growth model
for the servers, adding new servers one by one, which
turned out to be unsustainable financially,” Vinicius
says. “In addition, the infrastructure’s administration
became rather complex with so many servers, many
more technicians to manage them and the financial cost
of acquisition and maintenance. So we began analyzing
processing alternatives.”
Accordingly, Sicoob embarked on a technology
infrastructure project to better position it for economic
growth. Analysis of the alternatives led the organization
to IBM System z. The solution involved consolidating
400 Intel* processor-based servers onto a virtualized
Linux environment on two IBM zEnterprise 196 servers
and one System z10*. The cooperative also deployed
IBM DB2* to support 50 major databases, InfoSphere*
DataStage* software for data transformation and reporting, and Cognos* software for data analytics.
“Challenges and opportunities have led us to
restructure our technology infrastructure and adopt
IBM System z mainframe technology, which guarantees greater stability and performance for our products
and services,” says Denio Rodrigues, the cooperative’s
superintendent of technology infrastructure. “This facilitates our growth by lowering the cost of maintenance
and administration in the production environment and
by reducing power consumption in the data center. The
key benefits in adopting IBM System z are availability,
scalability, performance, security, lower licensing costs,
easier management, less use of space in the data center,
and in particular, reduced energy consumption.”
Sicoob’s business results have been remarkable, and
the adoption of System z has led to significant gains.
“The System z solution has effectively met all of the criteria evaluated with respect to availability, performance,
security, scalability, processing and storage capacity,”
Rodrigues notes. “This has enabled the growth of our
business products and our network in general. Over the
past year, through our self-service channels, we grew
by nearly 600 percent. Internet banking grew by 200
percent. For mobile solutions, growth was 600 percent.
It would not have been possible to support this growth
without IBM System z.”
Replacing 400 servers with just three mainframes (and
the remaining 30 legacy servers that are being migrated) has delivered enormous benefits. The organization
is saving more than 6 million kWh of electricity each
BENEFITS OF
System z
SICOOB
➜ Saves more than
6 MILLION kWh of
electricity annually,
avoiding 270 tons of
carbon dioxide emissions
➜ Saves about
$1.5 MILLION a year
➜ Reduced electricity
consumption by
23 PERCENT compared
to 2007, despite
significant growth in
transactional volumes
and account numbers
Denio Rodrigues,
superintendent of
technology infrastructure
Ricardo Antonio, CIO
Marcos Vinicius,
head of technology
infrastructure
year—avoiding 270 tons of carbon
dioxide emissions—along with saving
about $1.5 million a year. Electricity
consumption is 23 percent lower than
in 2007, despite significant growth in
transactional volumes and account
numbers. During that period, Sicoob
saw a 60 percent rise in in-branch
transactions, a 625 percent increase
in ATM transactions, and more than 1
million new current accounts opened.
“IBM System z gives us flexible
and robust processing capable of
handling extreme growth with very
well-integrated tools for extracting,
storing and manipulating data,”
Vinicius says. “We have reduced
the complexity of our technology
with fewer servers, less administration, lower software maintenance
costs and a significant reduction in
energy consumption.”
Algar Telecom
Transformation
Similarly, Brazil’s Algar Telecom
identified the need to achieve
better floor-space utilization and
greater operational and energy
efficiency for its IT infrastructure
back in 2011. At that time, its
x86 servers and HP Superdome
clusters were supporting core
business applications but reaching
their limits, resulting in rising
maintenance costs.
“In the past, we did not take a very
strategic approach when it came to
expanding our IT infrastructure—we
just added new servers as demand
increased,” explains Rogério Okada,
IT manager of Algar Telecom. “As
time went by, problems began
to add up. The environment was
simply not efficient or sustainable.
We suffered from poor performance
with frequent service interruptions
and the complexity and cost of
maintaining everything was starting
to get out of control.”
To address those problems, the
firm’s IT team turned to IBM to help
design and implement a solution
consisting of multiple platforms,
VISIT US AT: SHARE TECHNOLOGY EXCHANGE 9 AUGUST 12 - 14, 2013 9 BOSTON, MA 9 BOOTH 309
FDR/UPSTREAM
PROTECTING YOUR PENGUINS
FDR/UPSTREAM Linux on System z
If your organization is like many others, you are finding that the benefits of running Linux on System z
are plentiful. Our customers report that consolidating servers and moving open systems applications onto
Linux on System z results in ✔ Lower cost of operations ✔ Fewer servers ✔ Fewer software licenses ✔ Fewer
resources to manage ✔ The mainframe’s legendary dependability ✔ Less energy, cooling and space.
And given that the Linux data is now on the System z mainframe, who better than INNOVATION,
the makers of FDR, to provide the backup tool?
FDR/UPSTREAM provides reliable file- and system-level data protection for Linux on System z, leveraging high-performance
z/OS mainframe media, including support for HiperSockets. UPSTREAM provides integrated data
de-duplication features and online database agents for Oracle, DB2 and Domino. The easy-to-use Director GUI
provides access to all UPSTREAM system features. Automated UPSTREAM operations with email notification
and exception reporting allow Linux on System z data to be folded into existing disaster recovery plans.
FDR/UPSTREAM uses the tools and systems that you have been using for decades,
integrating with your existing z/OS tape management, security and scheduling systems.
For a FREE 90-Day Trial contact us at:
973.890.7300, [email protected] or visit www.fdr.com/penguins
CORPORATE HEADQUARTERS:
E-mail: [email protected] | [email protected] | http://www.innovationdp.fdr.com
EUROPEAN
OFFICES:
FRANCE
GERMANY
NETHERLANDS
UNITED KINGDOM
NORDIC COUNTRIES
software products and STG Lab
Services to achieve the technological refresh needed through
infrastructure simplification and
server consolidation. Algar Telecom
performed a large-scale server
consolidation to the zEnterprise
System, moving many workloads
from x86 servers to the platform.
The most important ones—the
x86 database instances—were
migrated to Red Hat Linux running on z/VM V6.2. Likewise, the
application servers and workloads
based on Microsoft* Windows*
were migrated to IBM BladeCenter*
HX5 blades on the zEnterprise
BladeCenter Extension (zBX).
Everything was managed from
a unique entity, the zEnterprise
Unified Resource Manager.
The migration to zEnterprise
resulted in many tangible and
intangible benefits for Algar,
including:
➜ With the migration of more
than 80 servers from x86 to the
zEnterprise platform, and the
deployment of new workloads
on the platform, Algar achieved
savings of 50 to 70 percent in
data center floor space, energy
and cooling.
➜ Operational efficiency increased
by at least 30 percent, and
recovery of the entire environment
after a massive outage
was cut from hours to minutes.
➜ The capability to scale up and
allocate resources dynamically
increased the IT environment’s
overall availability and
improved the time to market for
new applications and products.
➜ The infrastructure architecture was simplified
and operational risks were reduced due to fewer
servers and fewer physical connections. This
enabled greater resilience and availability.
Now, Algar Telecom has greater confidence that
its data will be safe and available when needed.
“We have completely transformed our infrastructure
and the way we manage it with the IBM zEnterprise
System,” Okada says. “With our core business
applications running on the most reliable and secure
platform in the marketplace, we can deliver better
service to more customers and focus on growing a
better business.”
ALGAR TELECOM
➜ Achieved savings of
50 TO 70 PERCENT in data
center floor space, energy
and cooling
➜ Increased operational
efficiency by at least
30 PERCENT
➜ Recovery of the entire
environment after a
massive outage was cut
from HOURS TO MINUTES
Sensible Approach
Sicoob and Algar are only two examples of organizations that have discovered and taken advantage of the
capabilities of the zEnterprise System. As it becomes
more widely adopted, more IT professionals are taking notice that consolidation with Linux on System z
reduces costs while improving performance, security
and availability.
More companies are rethinking the outdated
approach of simply adding yet another server or two
to the data center to meet capacity needs. Instead,
smarter organizations are realizing what longtime
mainframers have known for decades: that consolidating
critical workloads from myriad servers
to a single System z environment saves money and
enables growth.
“We’re experiencing a worldwide market shift
based on the convergence of mobile, analytics and
cloud,” says Doug Balog, general manager, IBM
System z. “IBM’s recent zEnterprise EC12 growth
demonstrates how System z is uniquely positioned
to deliver greater value in this new era as the secure
cloud for operational data.
“The bottom line is this: Implementing an IT strategy that is built on System z helps clients like Sicoob
and Algar realize an improved customer experience
while achieving a greater business advantage,”
Balog concludes.
Mike Westholder is managing editor of IBM Systems
Magazine, Mainframe edition.
Rogério Okada, IT manager
ONLINE
For video and
case studies, visit:
SICOOB
Video: http://youtu.be/O3x2Vhg-GtY
Case Study: http://ibm.co/11R3RCx
ALGAR TELECOM
Case Study: http://ibm.co/Zg0YuA
Enabling the infrastructure for smarter computing
2013 IBM Systems Technical Event Series
Enroll in one of IBM Systems and Technology Group’s (STG) premier technical training events,
available around the world. If you need to build skills in a short period of time, learn more about
the latest products, or attend ‘how-to’ technical sessions, hands-on demos, and labs/workshops taught
by product experts, these are the places to be.
Save the dates and enroll. Now, that’s smart!
North America Events
• October: IBM Power Systems Technical University at Enterprise Systems 2013, Orlando, Florida
• October: IBM System z Technical University at Enterprise Systems 2013, Orlando, Florida
Europe Events
• September: IBM Systems Technical Symposium, Moscow, Russian Federation
• October: IBM PureSystems and Storage Technical University (featuring System x), Berlin, Germany
• November: IBM Power Systems Technical University, Athens, Greece
• November: IBM System x Technical Symposium, Dubai, UAE
Asia Pacific Events
• August: IBM Systems Technical Symposium, Auckland, New Zealand
• August: IBM Systems Technical Symposium, Melbourne, Australia
South America Events
• IBM Systems Technical University, location and dates to be announced
Learn more about these events, enroll
and view sponsorship opportunities at
ibm.com/systems/conferenceseries.
Follow IBM Systems technical events
discussions on Twitter at
twitter.com/IBMTechConfs.
© International Business Machines Corporation 2013. Produced in the USA. IBM, the IBM logo, ibm.com, Power Systems, PureSystems, System Storage, System
x and System z are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at www.ibm.com/legal/copytrade.shtml
ARVIND KRISHNA,
IBM DEVELOPMENT
AND MANUFACTURING
GENERAL MANAGER
Software-defined environments make
SMARTER
By adding intelligence to the IT infrastructure,
enterprises become responsive and flexible:
an interview with IBM’s Arvind Krishna
By Kristin Lewotsky // Photography by Matt Carr
IBM is pioneering a new computing model called Smarter Computing. One of its core attributes is the
software-defined environment (SDE). Smarter Computing is IBM’s approach to IT innovation in an
information-centric era. It helps IT leaders seize the opportunities of a smarter planet by thinking
differently about the way organizations can leverage cloud, unlock the power of big data, and secure
critical information.
SDE is becoming an increasingly necessary component of Smarter Computing. To learn more about this,
IBM Systems Magazine tapped the expertise of IBM Development and Manufacturing General Manager
Arvind Krishna.
IBM Systems Magazine:
Let’s start with the basics.
What is SDE?
Arvind Krishna: A software-defined environment encompasses the
capability to manage and deploy
all workloads in the business onto
the underlying elements—compute,
storage and networking—in a way
that is responsive to the workload
and also allows it to be set up
within a few minutes or hours. With
SDE, we take the entire computing
infrastructure and make it programmable, in a sense, so that code
that runs somewhere outside this
infrastructure makes it behave in a
much more flexible way.
ISM: What has led to the
emergence or need to evolve
in this direction?
AK: Fifteen years ago, most
enterprise computing focused on
transactional activities like: What
did I sell to whom? What is my
internal accounting? How about
my human records, like payroll? All
of those transactions tended to be
bounded and the users were fairly
well-defined and jobs were weeks
or months in the planning. Rarely
would users be able to do something in 30 minutes or an hour.
Today, both the speed and
unpredictability of the business
environment have increased. Let’s
suppose you’re a financial services
provider. Every bank today has to
do things in response to the credit
environment in terms of deciding
whether to approve or not approve
a transaction. It’s about writing a
set of rules and then invoking those
TAKEAWAY
➜ A software-defined environment (SDE)
provides the capability to manage and
deploy all workloads in a business onto
the underlying elements (compute,
storage and networking) in a way
that is responsive to the workload and
also allows it to be set up within a few
minutes or hours.
➜ SDE doesn’t just do common tasks
better; it enables things that were never
in the realm of possibility before.
➜ With SDE, the same infrastructure
that brings resilience also can provide
migration down the road.
rules. You update them how often?
Once a week? What happens if you
get a flood of activity?
Maybe a new report comes out
or certain companies are hiring
or laying people off—now you
need to develop new rules and
invoke them in real time. To do
that, you would need to expand
your compute environment
perhaps by 10 times for that hour
but then take it back down for
the rest of the month. The problem is that you can’t set up 1,000
machines to do that analysis if
you only use those resources
once a month. If you had an SDE,
you could develop and invoke
new rules to address that urgent
business need, but the rest of the
time you could use those resources to do other things. SDE doesn’t
just do common tasks better; it
enables things that were never in
the realm of possibility before.
ISM: What does it mean for an
entire IT environment to be
programmable?
AK: I like to draw an analogy
from the Internet in 1995. Back
then, we had switched, voice-centric telephone networks. Carriers
worried about quality of service
and they worried about dropped
calls. As a result, they built a huge
amount of intelligence into the
network—big switches that did
echo cancellation and so on. The
big negative was that if the traffic
was different from what they’d
built the network around, it was
really hard to carry it.
Then the Internet came along
and we saw that the entire bulk of
the network could be dumb. Then
all it’s doing is carrying packets.
The Internet was built with logic
at the edges, which is where you
decide what kind of session you
want—a work session, video session, email or an application. You
decide what kind of bandwidth you
want, et cetera.
“SDE allows us to get very,
very precise and much more
targeted, and as a result, more
efficient at what we are doing.”
—Arvind Krishna, general manager,
IBM Development and Manufacturing
What we’re trying to do with
SDE is bring that same level of
flexibility into the IT infrastructure. Some people call it “wire
once and then forget”—you can
configure it to your application
need without ever worrying about
the wiring again.
ISM: How is cloud computing
related to the SDE?
AK: SDE is based on three components—the compute, network and
storage components—and you can
bring those together into how you
deploy workloads, et cetera. Cloud
computing comes together with
SDE as a way to deploy workloads.
I think the people who provide
cloud computing could leverage an
SDE to make cloud, itself, better. In
some sense, SDEs are going to be
where infrastructure as a service
really begins to both evolve and
deliver its complete value.
ISM: Virtualization gave way
to the cloud. Is the cloud now
giving way to SDEs or architectures? Does SDE truly represent
advancement in IT or is it cloud by another name?

Wire Once & Then Forget
With SDE, you can configure your IT infrastructure to
your application need without ever worrying about the
wiring again. Some people call it “wire once and then forget.”
AK: Things like virtualization and
cloud computing are necessary
for SDE, but they’re certainly not
sufficient. Virtualization tells me
that I can abstract the workload or
the application from the physical
infrastructure, and that’s necessary because I don’t know which
infrastructure I’m going to run
it on until the very last instant.
Virtualization doesn’t tell me how
I scale, though, or how I optimize
the workload. By the way, with
SDE, some elements may not be
virtualized, if the client chooses.
You may be running dedicated systems for specific business reasons.
However, in order to respond to
the needs of the applications for
speed, elasticity or flexibility—and
for being more efficient—there is a
need for an SDE.
Take our earlier banking
example with high variability
in transactions—if you put it on
a machine that’s too small, it’s
going to run painfully slow. If
you use a machine that’s too big,
you’re actually wasting capacity.
Today, a human might step in and
decide a certain workload needs a
machine with that much compute
capacity, that much memory
and that much storage, but how
do you go about optimizing that
entire placement? Virtualization is
not going to do that. You need the
next layer up that can look upon
all of the compute resources as a
pool and decide where to place
things; one that can determine
when a workload is going to be
very storage-intensive so it needs
to provision not just a virtual
machine but storage with enough
IOPS and bandwidth to service
it at the desired service level. To
make this a reality, you need all
of the capabilities I’m describing,
not just one of them.
ISM: What are some of the
ways clients would benefit
from an SDE?
AK: At the end of the day,
despite our best efforts, there
continues to be a risk of hardware and software failures.
Today, we spend a lot of energy
talking about disaster recovery
plans. If you were very naïve
today, you would say, “If I have
100 compute units worth of
workload, I need to have another
100 just in case it fails.” With
SDE, I might only need another
five. If one part fails, I can go to
the other. If I get really squeezed,
I can decide what’s critical and
I’ll recover that, but these other
workloads can actually wait. SDE
allows us to get very, very precise
and much more targeted, and as
a result, more efficient at what
we are doing. Remember, that’s
not just the compute cost but the
complete stack of expense—the
electricity, the data center space,
the humans to maintain it, all
of that.
Looking forward, what about
upgrades and migrations in hardware or software? Today, the only
case in which migration happens
really quickly is on the mainframe.
Everywhere else, it takes time—I
put up a parallel infrastructure,
then transfer the workloads, test,
make changes, test some more,
make more changes. If I’m lucky,
a few weeks later I may have a
migration done.
With SDE, the same infrastructure that gets you resilience also
will get you migration down the
road. You could actually move
that workload to a new instance.
Infrastructure migration becomes
a question of pushing buttons and
just a simple test. We’re not quite
there yet, but soon.
ISM: So do you think companies out there are ready
for an SDE?
AK: I would guess that about a third of them are ready,
a third of them are willing to explore or would like to
get ready, and probably a third of them are not yet
ready. That’s why I said SDE cannot be only about
virtualization. In some cases, you also have to run workloads
that are native and are not yet virtualized.
ISM: What capabilities does IBM offer to partner
with clients aspiring to deploy an SDE?
AK: At the compute level, SDE is about how to correctly
deploy virtualization, placement and all those
elements. For virtualization,
IBM has VMControl (for Power*
and KVM), zManager (for z/VM*
and PR/SM*), Flexible Resource
Manager for PureSystems*, et cetera—OpenStack-derived engines
that include the placement tools,
et cetera. For the networking
level, there’s Open Daylight,
which is the open-source version
of software-defined networking.
The management tools we do on
top of that are necessary because
our virtual machines, in turn,
connect to everything else, and
that’s where the networking piece
is. For storage, we’ve got offerings
like SmartCloud* Orchestration,
which help orchestrate across all
of these pieces. The IBM Storwize* family provides storage
virtualization, Easy Tier* data
optimization and real-time data
compression.
ISM: How does IBM stand
apart from the competition in
this area?
AK: We talked about compute,
network and storage, as well as
orchestrating how to make all of it
work together. I think most people
are either doing one or one-and-a-half of these four pieces. They’re
doing what they can do, because
they’re looking at it from their
narrow lens as opposed to from
the perspective of what their
clients need to succeed.
IBM has all of the parts. We can
also help with the execution. We
have folks in Global Technology
Services who can help people
deploy their private cloud as well
as SDEs. For all of these pieces,
if all you want is assessment and
help, our technical architects in
the Systems and Software group
will come do that for you. If you
want us to do it for you, then our
folks in services can come and do
that as well.
ISM: Why do enterprises need
to consider adopting Smarter
Computing and an SDE? What
pain points will it address?
AK: It’s all about attacking the
speed question and the cost question. Think about it, in any business
today can we truly respond to an
opportunity in minutes or hours? If
enterprises cannot solve the speed
problem, they won’t be competitive with their
peers. If they don’t attack the cost question,
they’re going to have a lower multiple than their
peers. I think SDE is becoming one of those
things that is a requirement, not a choice.
Kristin Lewotsky is a freelance technology writer
based in Amherst, N.H.
WEBINAR
The New zEnterprise: Creating success in your business with new solutions
in the areas of big data, analytics, cloud, mobile, and security.
WHO SHOULD ATTEND: C-level; IT Developers; IT Managers;
existing mainframe customers
WHEN: Wednesday, July | PT / CT / ET
SPECIAL OFFER: Attend to receive a free download of the book, “System z for Dummies.”
AGENDA: Organizations of all sizes are under intense competitive pressure to provide
new and improved services: To innovate, to differentiate, to deliver new value.
And, to do it all for less with fewer resources. There is a new solution in the
System z family. The new zEnterprise system enables better business
results with increased performance and flexibility in a lower-cost package,
extending the new mainframe technology to organizations of all sizes.
Greg Lotko, IBM Vice President, zEnterprise System, will discuss how
the new zEnterprise system will bring value to businesses large and
small. Ray Jones, IBM Vice President, System z Software Sales,
will discuss how the new zEnterprise software solutions will
expand the system in the areas of big data, analytics,
cloud, mobile, and security to provide a total system
for the future.
FEATURED SPEAKERS:
Greg Lotko, IBM Vice President, zEnterprise System
Ray Jones, IBM Vice President, System z Software Sales
REGISTER TODAY: http://ibmsystemsmag.webex.com
TECH CORNER
Architected for AVAILABILITY
In addition to high performance, System z processors are designed to be reliable
The IBM System z* design focuses on reliability, availability and serviceability
(RAS). To achieve an extremely dependable commercial system,
each component in its hierarchy must have good error detection and
recoverability. The microprocessors within each System z machine provide
significant performance improvements over their predecessors while retaining the
dependability expected from IBM mainframes.
C. Kevin Shum is a
Distinguished Engineer
in IBM Poughkeepsie’s
Systems and
Technology Group,
working in the
development
of System z
microprocessors.
Scott B. Swaney is a
senior technical staff
member working on
hardware and system
design and diagnostics
for IBM servers in
Poughkeepsie, NY.
Although fault-tolerant design
techniques are known to many,
their application is important.
System z processors are developed with diligent incorporation
of checking logic designed to
detect faults in the underlying
circuits, which can be transient
(due to charged particle strikes)
or permanent (due to circuit
failures). In addition to thorough
error-detection coverage, the designs strive for highly transparent
error handling.
Their capability to seamlessly
detect and correct faults while
applications run is essential to
maintaining near-zero downtime.
Meanwhile, system availability data
is collected and monitored so unforeseen problems can be identified
and timely updates provided.
Error Detection
The logic in typical processors
comprises arrays, dataflow and
control. Arrays are typically used to
hold large, structured sets of data,
such as caches. Error detection is
implemented by including check
bits with the data written to the
array. They’re used to validate the
data when it’s later read from the
array. The two categories of check
Figure 1: Instruction Retry. When any error is detected, the checkpoint is blocked and recoverability is determined. If sparing is required, core sparing is initiated. For a retry: any checkpointed storage updates are written through to L3; L3 is notified that this core is temporarily fenced off; array structures are re-initialized; hardware states are corrected/refreshed; once the refresh completes, L3 is notified that the core is online and instruction processing restarts.
bits are parity and error-correcting
code (ECC).
Parity bits indicate whether data
has an even or odd number of “one
bits.” If a single bit changes value—
or flips—in the array, the parity bit’s
setting will be inconsistent with the
number of one bits, thus indicating
an error. ECC provides stronger
detection than parity. Depending
on the coding schemes used, it can
identify one or more flipped bits
and can help correct the data.
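In software terms, the parity scheme just described can be sketched as follows. This is a minimal illustration of the idea only, not how the hardware computes check bits; real arrays generate parity in circuit logic as data is written:

```python
def parity_bit(bits):
    """Even-parity check bit: chosen so that data plus the check bit
    always contain an even number of one bits."""
    return sum(bits) % 2

# Write path: store the data word together with its parity bit.
word = [1, 0, 1, 1, 0, 0, 1, 0]
stored = word + [parity_bit(word)]

# Read path: recompute parity over data plus check bit; a nonzero
# result means an odd number of bits flipped while the word was held.
assert sum(stored) % 2 == 0   # clean read: no error detected

stored[3] ^= 1                # a particle strike flips one cell
assert sum(stored) % 2 == 1   # the parity check now flags the error
```

A single flipped bit always changes the one-count by one, so even parity catches it; an even number of flips within the same group would cancel out, which is one reason ECC and cell interleaving are also used.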
The best checking method depends on how each array is used.
For those with transient data,
such as store-through caches,
erroneous data can be discarded
and re-fetched or regenerated, so
simple parity is used. For arrays
such as store-in caches that
might contain the sole copy of
the data, ECC is used.
In dense array technologies,
normal background radiation
particles have enough energy to
flip multiple physically adjacent
cells in an array. Similarly, some
classes of latent defect mechanisms may affect pairs of adjacent cells. To ensure checking
effectiveness isn’t compromised
for multicell failures, data in the
arrays is organized in the current
generation of hardware such that
physically adjacent cells are in
different parity or ECC groups.
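A hypothetical two-group interleave shows why this organization helps; the row width and grouping here are invented for illustration, while real arrays use wider ECC groups chosen by physical-design rules:

```python
def parity(bits):
    return sum(bits) % 2

# Physically adjacent cells alternate between two parity groups.
row = [1, 0, 1, 1, 0, 1, 0, 0]
even_group = row[0::2]   # cells 0, 2, 4, 6
odd_group = row[1::2]    # cells 1, 3, 5, 7
stored_parity = (parity(even_group), parity(odd_group))

# A radiation event flips two physically adjacent cells (3 and 4).
row[3] ^= 1
row[4] ^= 1

# Each flipped cell lands in a different group, so both checks fire...
assert (parity(row[0::2]), parity(row[1::2])) != stored_parity
# ...whereas one undivided parity bit over the row sees no change.
assert parity(row) == parity([1, 0, 1, 1, 0, 1, 0, 0])
```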
Dataflow includes arithmetic
structures that perform such
operations as addition, subtraction and multiplication. Residue
checking is usually used for arithmetic functions. For example,
when checking the sum of two
numbers, each operand can be
factored by three, and multiples
of three discarded; the same is
done for the total. The sum of the
residues for the operands should
equal the residue for the correct
total. If not, an error is detected.
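The casting-out-threes check can be sketched like this (an illustrative model only; a real residue checker operates on the adder's internal signals, and the fault injection here is simulated):

```python
def residue3(n):
    # Casting out threes: keep only the value modulo 3.
    return n % 3

def checked_add(a, b, faulty=False):
    """Add two numbers and verify the sum with mod-3 residue checking.
    `faulty` simulates a single flipped bit in the adder's result."""
    total = a + b
    if faulty:
        total ^= 1 << 4  # inject a single-bit error into the dataflow
    # The residues of the operands, summed and reduced, must match
    # the residue of the result; otherwise an error is detected.
    ok = (residue3(a) + residue3(b)) % 3 == residue3(total)
    return total, ok

print(checked_add(1234, 5678))               # correct sum passes
print(checked_add(1234, 5678, faulty=True))  # injected fault is caught
```

Because a single flipped bit changes the result by a power of two, and no power of two is a multiple of three, any single-bit error shifts the mod-3 residue and is detected.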
Dataflow may also comprise simply passing and multiplexing data
and addresses. These are usually
checked by a check-bit scheme
similar to those used for arrays.
Control logic checking involves
many different techniques—including detecting illegal protocols, illegal states and state
transitions, and hangs, as well as
using local duplication of circuits whose outputs are compared to one another. All error checks of System z
processor cores are gathered in a centralized recovery controller. If an error is detected, instruction
processing is immediately halted before erroneous
values are used and an instruction retry sequence
commences, as shown in Figure 1 (page 40).
WEBINAR
Improve Availability
and Productivity with
Proactive Automation
Gain new zEnterprise insights integrating
OMEGAMON and System Automation for z/OS
WHO SHOULD ATTEND: IT and enterprise; IT managers;
application programmers and managers; system analysts;
operations managers; system administrators
WHEN: July 18 | 8 PT / 10 CT / 11 ET
AGENDA: Join us for a complimentary webcast as we
discuss how using Tivoli® OMEGAMON XE to improve
zEnterprise® monitoring – interfaced with System
Automation for z/OS® – can minimize the time it takes
to find and fix problems. Proactive automation enables
correlation of problems across applications and drives
automated problem resolution without manual overhead.
This webcast will give examples of zEnterprise situations
where having OMEGAMON® and System
Automation working together can proactively
resolve problems. Learn how these two
products together can save you time
and make it easier to achieve SLAs.
FEATURING:
Joachim Schmalzried
Certified IT Specialist,
IBM Software Group
REGISTER NOW
http://ibmsystemsmag.webex.com
ibmsystemsmag.com JULY/AUGUST 2013 // 41
TECH CORNER
Error Recovery With
Instruction Retry
Since the 1990s, each System z
microprocessor core has used a
similar instruction retry recovery
mechanism. In general, ECCprotected processor states, called
checkpoint states, are saved
while instructions are completed
without error. Upon detecting any
error, the checkpoint states are
frozen and a processor-recovery
sequence commences.
The processor first resets all
control states and purges local
array structures containing
transient data. Then, processor
state registers and any usage
copies are refreshed from a
checkpoint state. In some cases,
where an ECC-protected backup
isn’t feasible, two or more copies
of a particular processor state
are kept so a bad copy can
be refreshed from a good one
during the recovery sequence.
During the whole recovery
sequence, the multiprocessor
fabric controller is notified that
this processor is undergoing a
localized reset sequence that
includes invalidating its local
cache entries. After all states
are reset or refreshed, the
processor resumes instruction
processing, starting from the
restored checkpoint. This entire
recovery process takes less than
a millisecond and is typically
transparent to software. On very
rare occasions when instructions
cannot be retried, software will
be notified by a machine check.
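As a software analogy of the hardware sequence just described — commit a checkpoint after each error-free instruction; on error, purge transient state, refresh from the checkpoint, and retry; escalate to a machine check if retry fails — consider this sketch (class and exception names are invented for illustration):

```python
import copy

class HardwareError(Exception):
    """A detected (possibly transient) error."""

class MachineCheck(Exception):
    """Raised when an instruction cannot be retried."""

class Core:
    def __init__(self):
        self.regs = {}          # architected processor state
        self.local_cache = {}   # transient data, purged on recovery
        self.checkpoint = {}    # last known-good state

    def execute(self, instr, max_retries=1):
        for _ in range(max_retries + 1):
            try:
                instr(self)  # may raise HardwareError on a detected fault
                self.checkpoint = copy.deepcopy(self.regs)   # commit checkpoint
                return
            except HardwareError:
                self.local_cache.clear()                     # purge transient arrays
                self.regs = copy.deepcopy(self.checkpoint)   # refresh from checkpoint
        raise MachineCheck("instruction could not be retried")
```

A transient fault on the first attempt is invisible to the caller: the partial update is discarded and the retried instruction completes from the restored checkpoint.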
When restarting from a
checkpoint, special action must be
taken to allow forward progress to
continue past a permanent circuit
fault. When errors are detected
in static random-access
memory (SRAM) arrays, typically
used for caches and other history
buffers, processors remember the
compartment where the error was
detected. When a specific error
recurs within a predetermined
interval, the faulty compartment
is disabled. If a persistent fault
spans multiple compartments or
is in control of dataflow logic that
cannot be quarantined, processor
sparing is initiated.
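The recurrence-based quarantine can be modeled like so (the interval value and data structures are invented for illustration; the real tracking is done in recovery hardware):

```python
class CompartmentMonitor:
    def __init__(self, interval=60.0):
        self.interval = interval   # recurrence window in seconds (illustrative)
        self.last_seen = {}        # compartment -> time of last error
        self.disabled = set()      # quarantined compartments

    def report_error(self, compartment, now):
        """Log an error; disable the compartment if the same error
        recurs within the predetermined interval."""
        prev = self.last_seen.get(compartment)
        self.last_seen[compartment] = now
        if prev is not None and now - prev <= self.interval:
            self.disabled.add(compartment)
            return "disabled"
        return "logged"
```

A first error is merely logged; a repeat of the same error inside the window disables the faulty compartment.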
Outside the processor, the
large shared local L3 and L4
caches in zEnterprise* 196 and
zEnterprise EC12 are made up of
many embedded DRAM (eDRAM)
arrays containing billions of
dense transistors. Even at extremely
low defect densities of less than
one part per billion, multiple
defects would still be present in large
eDRAM caches, so the array
design incorporates redundant
rows and columns that can be
configured to steer the data
around any defects. Aggressive
array-test patterns are used to
identify defects and configure the
caches after chip fabrication.
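Conceptually, the redundant rows act as a remap table built at test time (a simplified model with invented names; real steering is configured in the array itself after fabrication):

```python
class RedundantArray:
    def __init__(self, spare_rows):
        self.cells = {}              # row -> stored value
        self.remap = {}              # defective row -> spare row
        self.spares = list(spare_rows)

    def mark_defective(self, row):
        """Steer a defective row to a spare, as array test would configure."""
        self.remap[row] = self.spares.pop()

    def write(self, row, value):
        self.cells[self.remap.get(row, row)] = value

    def read(self, row):
        return self.cells[self.remap.get(row, row)]
```

After `mark_defective(7)`, accesses to row 7 are transparently steered to the spare row.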
At run time, these eDRAM
caches are covered by ECC.
When correctable errors are
detected, trap registers capture
the address being read, the
compartment the data came
from and ECC data. Similar
to how core SRAM arrays are
reconfigured to quarantine
defects by avoiding further use
of a compartment, larger eDRAM
arrays quarantine defects by
avoiding further use of affected
cache lines. Boundaries are
more granular for these eDRAM
arrays, so multiple errors can be
quarantined without impacting
performance.

Figure 2: Retry vs. Sparing (diagram). Core X runs with no error, saving a checkpoint state (GP1=0x14343433, CT=-0x12344324, ...). On a fault, a soft error triggers instruction retry from the checkpoint; a hard error triggers processor sparing, after which Core Y resumes the running state with no error.

Scott B. Swaney holds multiple patents related to processor recovery and system availability.

C. Kevin Shum was named the Asian American Engineer of the Year in 2012.
Processor Sparing
Processor sparing uses the same
concepts as instruction retry, as
illustrated in Figure 2 (below).
Checkpoint states are transferred
from failed processors into healthy
ones, which resume operation
from the checkpoint.
Firmware assistance is required
to migrate checkpoint states.
When the recovery controller
determines that an operation on
a failed core cannot be restarted,
it signals a malfunction alert
to firmware. System firmware
examines state registers to
validate the failed processor’s
checkpoint. It extracts the
micro-architected state registers
from the failed processor using
a sideband access path designed
into each processor. It then
stores the checkpoint state into
a reserved memory location and
selects a healthy spare processor
to take over. Firmware running
on the spare processor loads the
checkpoint state and signals the
processor to resume processing
from the loaded checkpoint.
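The firmware-assisted migration amounts to a four-step flow — extract the checkpoint over the sideband path, stage it in reserved memory, load it into a spare, signal the spare to resume. A toy model (names invented; the sideband access is modeled as plain attribute reads):

```python
from types import SimpleNamespace

def spare_processor(failed, spare, reserved_memory):
    """Extract the failed core's checkpoint, stage it in reserved memory,
    load it into a healthy spare, and signal the spare to resume."""
    reserved_memory["checkpoint"] = dict(failed.checkpoint)  # sideband extract
    spare.regs = dict(reserved_memory["checkpoint"])         # load state
    spare.running = True                                     # resume from checkpoint
    return spare

failed = SimpleNamespace(checkpoint={"GP1": 0x14343433}, running=False)
spare = SimpleNamespace(regs={}, running=False)
spare_processor(failed, spare, reserved_memory={})
```

The spare resumes exactly where the failed core's last committed checkpoint left off, which is why the operation can be transparent to software.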
Most configurations include
one or more dormant spare
processors, so sparing operations
typically involve no loss of capacity and are completely transparent
to software. For configurations
without a dormant spare processor, the LPAR configuration affects whether capacity changes or
loss of a processor must be reported to software. Also, when failed
processors have been replaced,
very rare situations remain where
a failed process cannot be retried
and software must be notified by
a machine check.
A laser-sharp focus in microprocessor planning and design that incorporates the best error-detection
coverage and recovery mechanisms helps IBM attain its goal of
having the least downtime among
commercial servers.
Additionally, proactive system
monitoring in the field allows
engineers to watch for trends,
discover potential problems,
and prepare solutions before
customers are affected.
Proactive Monitoring
System z servers continuously log
system availability data that can
be sent to IBM for monitoring. This
data includes information about:
• Availability-related system events
• The system's environment, including power-off, power-on-reset, LPAR activation, configuration changes, and firmware update events
• Ambient temperature
• Relative humidity
• Recovery and service actions
IBM uses this data for statistical
analyses, validating predictive
failure-rate models and identifying
machine behaviors that fall
outside expected boundaries.
For example, multiple processors
going through instruction retry
recovery within a short time might
indicate a problem. So, firmware
levels can be compared against
known problems and patch
adoption accelerated. Also, the
data can reveal components that
are degrading outside expected
parameters and need to be
replaced. Often, the machine will
continue recovering transparently
from problems until repairs or
updates can be made.
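A statistical screen of this sort might, for instance, flag any machine whose retry recoveries cluster within a time window. The window and threshold below are invented for illustration and are not IBM's actual criteria:

```python
def flag_retry_clusters(events, window=3600.0, threshold=3):
    """events: machine id -> sorted list of instruction-retry timestamps.
    Returns machines with `threshold` or more recoveries inside any
    `window`-second span -- a possible sign of a systemic problem."""
    flagged = []
    for machine, times in events.items():
        start = 0
        for end in range(len(times)):
            # shrink the window until it spans at most `window` seconds
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 >= threshold:
                flagged.append(machine)
                break
    return flagged
```

A machine with three recoveries in five minutes is flagged; one with the same count spread over hours is not.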
WEBINAR
Presenting a webcast series on
IBM DevOps Solution for System z
Best practices and tools for continuous delivery of
software-driven innovation on System z
DevOps helps establish easier, quicker handoffs from planning
and development to deployment and other siloed areas. In this
complimentary series of webcasts, we’ll discuss this collaborative
approach for continuous software delivery, and how IBM integrated
development and operations tools and processes can help optimize
the entire lifecycle of your applications.
All one-hour sessions begin at
11 a.m. EDT / 3 p.m. GMT/UTC
August 7
Accelerating the Delivery of Multiplatform Applications
August 14
Continuous Business Planning to Get Cost Out and Agility In
September 4
Collaborative Development to Spark Innovation and Integration Among Teams
September 11
Continuous Testing to Save Costs and Improve Application Quality
September 18
Continuous Release and Deployment to Compress Delivery Cycles
For more information, or to register for
one or more of these webinars go to:
www.ibmsystemsmag.com/devops
Built for Dependability
Each System z processor is
designed not only to provide a
significant performance increase
over its predecessors but also to
maintain the same industry-leading RAS characteristics expected
from IBM mainframes.
ADMINISTRATOR
PLUG AND PLAY
for z/OS Storage Devices
System z innovations automatically define configurations for greater availability
Data centers run a range of business workloads, including batch and transaction
processing, business applications, complex data analysis, collaboration and
social business. It’s easy to gravitate toward one particular server as being
good for all of these workloads; however, they all have different requirements. For that
reason, IBM offers different types of servers.
Harry M.
Yudenfriend
is an IBM Fellow
with Systems and
Technology Group,
System z and
Power.
System z* Discovery and
Auto-Configuration (zDAC) is
the mainframe’s capability to
exploit and support incremental
additions to a running system in
a plug-and-play fashion. The history of how z/OS* and System z
evolved into this capability
starts with two configuration
definition paradigms.
Most distributed servers use
a host-discovery methodology
to define I/O configurations. OS
servers discover devices they’re
allowed to use over individual
host bus adapters and put them
into I/O configurations. Systems
administrators use storage-area
network (SAN)-based tools to
control which servers and host
bus adapters are allowed to see
which devices. Fabric zoning
and logical unit number (LUN)
masking are typical techniques
for controlling which servers
have access to which devices over
what paths. When new devices
are added to the SAN, distributed
servers dynamically discover new
resources and put them to use.
System z mainframes, however, use a host-based I/O definition methodology using the
Hardware Configuration Dialog
(HCD). Devices and channels
used to access them are defined
by configuration data contained
in host processors. The tooling
assigns device names, allocates
bandwidth by allotting sets of
channels used to access devices,
and enforces security policy by
controlling which LPARs (OS
images) can access devices.
These host-based tools provide
interactive consistency checking
between software and processor views of I/O configurations
while policing software and
hardware limits.
Host configuration definitions
are easily modified by defining
new configurations and dynamically selecting them. The OS determines the differences between
old and new configurations and
modifies the I/O configuration to
match the new definition.
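The dynamic-activate step amounts to a three-way diff between definitions. A sketch with configurations modeled as plain dicts of device id → definition (the ids and values are made up):

```python
def config_delta(old, new):
    """Changes needed to move a running I/O configuration from `old`
    to `new`: devices to add, delete, and modify."""
    added    = {d: new[d] for d in new.keys() - old.keys()}
    deleted  = {d: old[d] for d in old.keys() - new.keys()}
    modified = {d: new[d] for d in old.keys() & new.keys() if old[d] != new[d]}
    return added, deleted, modified
```

The OS applies only the delta, so devices unchanged between the two definitions keep running undisturbed.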
To simplify the process, z/OS
clients desired a plug-and-play
capability for adding devices to
I/O configurations. Discovery of
new devices and their channel
attachments eliminates mis-
matches between planned host
definitions and cable plugins. If
definitions and actual configurations don’t match, problems are
discovered when activating the
new configurations and trying to
format the new devices and bring
them online.
Enterprise-class I/O configurations are designed for high
availability, so single points of
failure and repair must be avoided
for critical devices, but planning
an I/O configuration to prevent
them is a complex activity. Such
single points of failure include
channel, switch port and control
unit port boundaries. A plug-and-play approach can automatically
define high-availability configurations, eliminate human error and
improve system availability.
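Checking a proposed definition for such single points of failure is mechanical once each device's access paths are modeled as (channel, switch port, control unit port) triples — roughly (component names invented):

```python
def single_points_of_failure(paths):
    """paths: list of (channel, switch_port, cu_port) tuples for one device.
    Any component shared by every path is a single point of failure."""
    spofs = []
    for i, name in enumerate(("channel", "switch port", "control unit port")):
        if len(paths) > 1 and len({p[i] for p in paths}) == 1:
            spofs.append(name)
    return spofs
```

Two paths that differ in switch and control unit ports but share one channel still expose the device to a channel failure.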
The Challenge
To automatically define efficient
I/O configurations that help
meet service-level agreements
for system performance and
availability requires some additional infrastructure.
When new devices are added,
clients might not know what data
will be placed on them. Also,
workloads constantly change,
some growing faster than others.
Dedicating enough resources to
meet all possible requirements
simply isn’t affordable. When
clients plan I/O configurations
they tend to over-configure I/O
to avoid constraining mainframe
CPU capabilities. So the first
step was enabling the system
to dynamically tune itself to
meet the workload requirements
as defined in z/OS Workload
Manager (WLM).
To manage I/O priorities,
WLM started by providing the
infrastructure needed to specify
goals, monitor workload against
them and dynamically adjust
resources and priorities to favor
important work that misses goals
at the expense of less important
work. When z/OS workloads
miss goals because of I/O delays
caused by resource contention,
for example, the OS raises the I/O
priorities of more important work.
The z/OS I/O Supervisor (IOS)
ensures I/O requests are queued
and executed in proper order.
Channel subsystems manage the
work queues in priority order,
both initially and when re-driving
requests after busy conditions
are encountered. System z I/O
has been built from the casters
up with instrumentation that
allows the construction of smart
algorithms to manage resources
and assign priorities.
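Priority-ordered dispatch of this kind can be sketched with a heap (illustrative only; the real queues live in the channel subsystem, and the priority values are WLM-derived):

```python
import heapq
import itertools

class IOQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # FIFO tie-break within equal priority

    def enqueue(self, priority, request):
        # Lower number = more important, matching heapq's min-heap order.
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def start_next(self):
        """Dispatch the highest-priority pending request; a re-driven
        (busy) request is simply enqueued again with its original priority."""
        return heapq.heappop(self._heap)[2]
```

An important OLTP write enqueued after a batch read is still dispatched first; equal-priority requests go in arrival order.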
FICON* channels also prioritize
the execution of I/O requests.
This capability was extended to
allow z/OS technology to pass
WLM-derived I/O priorities to
control units through the SAN in a
device-independent way. Control
units can honor I/O priority by
throttling link bandwidth to favor
higher-priority I/O requests,
optimize access to RAID ranks
and prioritize reconnections after
resolving cache misses.
Parallel access volumes (PAV)
technology was invented to
allow multiple simultaneous
I/O operations from a
single z/OS image to a DASD—
while maintaining the capability
to measure fine-grained I/O
service time components. This
reduced the time spent queued in
the OS while waiting for device
availability. The number of PAV
aliases assigned to a logical
volume controls the number of
simultaneous requests that can be
started. WLM could dynamically
move PAV aliases among logical
volumes to help workloads meet
goals when I/O queuing time is
the source of delays.
Enhanced virtualization
techniques were added to
make PAV technology much
more responsive to workload
demands and to more efficiently
utilize System z I/O addressing
constructs. This HyperPAV
technology can virtually eliminate
OS I/O queuing time. HyperPAV
assigns PAV aliases to I/O devices
as application and middleware
I/O requests need them, based
on I/O prioritizations assigned
by WLM. HyperPAV also provides
virtualization of System z I/O
addressing so OSs can more
effectively utilize the number of
alias device addresses available
across sharing systems, as every
OS image can use the same alias
address for a different base
device at the same time. Also with
HyperPAV, when an I/O request
finishes for a device, the next
request executed is the highest
priority request for the set of
devices for that control unit. This provides more comprehensive and effective I/O prioritization and improved efficiency.

COMPLEMENTARY PERFORMANCE TECHNOLOGIES

IBM System z* High Performance FICON* (zHPF) technology was created to allow more efficient execution of frequent I/O requests. It allows operations to remain queued in storage subsystems rather than retransmitted after busy conditions. Better workload management results from storage subsystems' management of I/O operation execution order after device reserves are released. The zHPF technology can also quadruple I/O rate capabilities without additional channel, control-unit or switch ports.

In the zEnterprise* EC12 servers, IBM has enhanced the channel path selection algorithm used to choose routes for executing storage I/O requests. It steers traffic away from congested channel paths and control-unit ports toward channel paths experiencing better initial command response times. Multisystem congestion is managed using the comprehensive I/O measurements.

The IBM DS8000* mainframe storage supports many autonomic I/O enhancements. The adaptive multistream prefetch (AMP) algorithm improves cache management efficiency, reducing the time required to satisfy cache misses. The DS8000 algorithm uses hints provided by middleware running under z/OS*, and itself decides how much data to pre-fetch asynchronously from disk. AMP adjusts pre-fetching based on the applications' need, to avoid overutilizing disks when the applications don't require the resources. DS8000 also implements Wise Ordering of Writes to minimize head movement at backend drives.

With Easy Tier* technology, the DS8000 subsystems learn data usage patterns over time to proactively promote "hot" data to faster devices such as SSD to help optimize application performance, and demote "cold" data to slower devices.

—H.M.Y.

Harry M. Yudenfriend was named an IBM Master Inventor in 2001 and has achieved his 33rd invention plateau.
The DS8000* I/O Priority
Manager extends WLM I/O
priority queuing to provide
more advanced techniques for
managing the response times and
throughput through the storage
subsystems when running mixed
workloads with different service-level requirements.
Dynamic Channel Path
Management (DCM) allows
I/O configurations to be
defined in a coarser fashion.
Specified policies indicate
how many channel paths can
be dynamically added to and
removed from control units, to
adjust bandwidth as needed.
DCM also provides the system
with the capability to recognize
when failing components expose
the system to degraded RAS
characteristics and dynamically
adjust the configuration to avoid
single points of failure or repair.
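The add/remove decision DCM makes can be caricatured as a bounded controller (thresholds and bounds below are invented for illustration, not the actual DCM policy):

```python
class DCMPolicy:
    """Keep a control unit's channel path count within policy bounds,
    adding bandwidth under contention and trimming it when idle."""
    def __init__(self, min_paths, max_paths):
        self.min_paths, self.max_paths = min_paths, max_paths

    def adjust(self, current_paths, utilization):
        if utilization > 0.7 and current_paths < self.max_paths:
            return current_paths + 1   # add a channel path
        if utilization < 0.2 and current_paths > self.min_paths:
            return current_paths - 1   # remove an idle path
        return current_paths
```

The policy bounds are the coarse definition the text describes: the system tunes within them without further operator involvement.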
Mainframe I/O
Configuration Definition
Collectively, these foundational
autonomic technologies
optimize I/O execution in host
and storage subsystems to
meet specified goals within
the bounds of existing I/O
configurations. With them
in place, organizations can
enhance the configuration
definition process to allow
incremental updates to running
systems to be more automated.
The new zEnterprise* zDAC
functions provide this capability.
Mainframes provide the
infrastructure zDAC needs. Host
software can interrogate the
SAN fabric name server and
SAN fabric N_Ports to discover
I/O configuration changes.
When the OS analyzes potential
updates, it can determine the I/O
configuration changes needed
to allow use of the devices
and control units. Meanwhile,
it maximizes I/O availability
characteristics and performance
capabilities within policy
limits specified by the client
in HCD. zDAC allows z/OS and
System z technology to discover
the information about new I/O
resources needed to make smart
I/O configuration proposals.
These proposals take all single
points of failure in the physical
host, SAN and target storage
subsystems into account.
zDAC simplifies I/O
configuration, eliminating
errors that occur when physical
configurations don’t match
information used to create host
I/O definitions. When they don’t
match, several symptoms will be
surfaced to the z/OS staff. These
include z/OS health checks
that periodically explore I/O
configurations for availability
risks. Other errors arise when
I/O device paths are found to
be nonoperational, or from
mis-cabling issues.
Although z/OS technology
helps ensure configuration errors won’t cause data
corruption, it cannot prevent
suboptimal availability characteristics when a configuration is
incorrectly defined. When this
happens, clients can optionally
run z/OS health checks that
will identify the single points of
failure when they occur. They
must carefully look for these
symptoms of configuration
errors and fix them before a
failing component jeopardizes
the system.
HCD and zDAC provide an
additional tool that helps
validate the physical security of
the host and storage subsystems
in the SAN. HCD presents a
list of available FICON storage
subsystems and hosts visible
from each system. This allows
clients to verify that devices and
hosts have been isolated from
each other using soft- and hard-zoning SAN functions.
Continued Leadership
With the highest degrees of
resource utilization, z/OS
technology offers the capability
to meet client goals while
running diverse workloads. The
I/O capabilities are designed to
provide predictable, repeatable
I/O performance when running
these workloads, including
running online transaction
processing concurrently with
batch and backups. FICON
technology and the DS8000
enterprise class storage
subsystem on System z provide
reliable access to data and
virtually guarantee the data is
correct. In addition, System z
features built-in, in-band I/O
instrumentation. The mainframe
uniquely measures components
of I/O service times, with
industry leadership in I/O delay
measurement, effective I/O
management algorithms and
data integrity.
Illustration by Craig Ward
Compuware Workbench
Compuware
The latest version of this modernized mainframe application development interface features a new File-AID Data Editor that supports browsing and editing IMS databases in addition to DB2 and other mainframe file systems. Enhancements also include:
• A simplified file- and data-management process
• Additional flexibility when debugging programs
• Advanced internal diagnostics and component-level checking
• Support for all major environments and formats of Compuware's developer productivity lines—Abend-AID, Hiperstation, Xpediter, Strobe and File-AID
R OS SUPPORT: z/OS* 1.10 and above
R PRICE: Variable
R URL: www.compuware.com

ThruPut Manager AE+
MVS Software
Designed to modernize and simplify batch service delivery in a z/OS* JES2 environment, this product extends the scheduling goals of CA Workload Automation CA 7 Edition (CA 7) to the z/OS execution arena. Features include:
• The capability to account for the dependencies specified in the CA 7 scheduling database and automatically identify critical path jobs
• Management of production execution according to the CA 7 schedule and due-out times
• Minimized execution delays
• Optimized resource utilization
R OS SUPPORT: z/OS 1.8-2.1
R PRICE: Variable
R URL: www.mvssol.com

ADVERTISER INDEX
2013 IBM Systems Technical Event Series // www.ibm.com/systems/conferenceseries — 33
Advanced Software Products Group Inc. // www.aspg.com — 19
ColeSoft Marketing Inc. // www.colesoft.com — 17
Compuware // www.compuware.com/mainframesolutions — 7
DTS Software Inc. // www.DTSsoftware.com — 13
Enterprise Systems 2013 IBM Conference // www.ibm.com/enterprise — C3
Fischer International Systems Corp // www.FISC.com — 21
IBM Webinars // http://ibmsystemsmag.webex.com — 39, 41, 43
Innovation Data Processing // www.fdr.com/penguins, www.fdr.com/IAM — 31, C4
Jolly Giant Software Inc. // [email protected] — 4
Maintec // www.maintec.com — 25
MVS Solutions Inc. // www.mvssol.com
Optica Technologies // http://ibmsystemsmag.webex.com — 15
Relational Architects International Inc. // www.relarc.com — 3
Rocket Software Inc. // www.rocketsoftware.com/opentech — C2
SUSE // www.suse.com/zslesconsolidate — 9
Terma Software Labs // http://ibmsystemsmag.webex.com — 11
Tributary Systems // www.tributary.com — 27
Trident Services Inc. // www.triserv.com — 5
Visara International // www.visara.com — 37
CONTACT
THE SALES TEAM
ASSOCIATE PUBLISHER
Mari Adamson-Bray
(612) 336-9241
[email protected]
ASSISTANT SALES MANAGER,
SOUTHEAST, SOUTHWEST AND
ASIA-PACIFIC
Lisa Kilwein
(623) 234-8014
[email protected]
ACCOUNT EXECUTIVE,
NORTHEAST, NORTHWEST
AND CANADA
Kathy Ingulsrud
(612) 313-1785
[email protected]
ACCOUNT EXECUTIVE,
MIDWEST AND EUROPE
Darryl Rowell
(612) 313-1781
[email protected]
STOP RUN
Music LESSONS
Kochishan challenges misconceptions about polka—and the mainframe
After nearly 35 years working in IT, Stefan Kochishan,
senior director of mainframe marketing at CA
Technologies, has deep roots in mainframe technology.
But his roots in music—particularly polka—run even deeper.
Mike Westholder
is managing editor
of IBM Systems
Magazine, Mainframe edition.
The son of German immigrants, Kochishan grew up with music: his father (also Stefan) played the button box accordion and harmonica, and his older brother, Karl-Heinz, is an accomplished accordionist.
Having taken up the saxophone
at age 8 and the clarinet at 11,
Kochishan, 54, has played with
various musical groups over the
years—including his brother’s
band, Europa, for nearly 43
years and Polka Power California for 20.
Q. Would you say polka music is a family tradition or a
profession?
A. Both! Though I wouldn’t really say it’s a profession for me—
more like a very deep passion
for being creative and creating a
fantastic customer experience.
To me, performing provides
great stress relief from my daily
marketing responsibilities.
Q. In addition to polka, have
you performed with other
types of bands?
A. I’ve had the great opportunity to play with several bands
and music styles from polkas
and waltzes to very traditional ethnic music—Croatian,
Serbian, Turkish, Bulgarian,
Greek—and more modern styles
like swing, Dixieland, jazz,
blues and some rock. I was also
a studio musician for a while,
playing for commercials. I’ve
played ocean cruises, the Lawrence Welk Theatre in Branson,
Mo., Las Vegas and countless
venues. One nonconventional
place I’ve played was on a plane
on the tarmac waiting for a
storm to pass. It was pretty cool
having the flight attendants and
passengers all singing along to
pass the time.
Q. What’s the most memorable event you’ve played?
A. That’s easy—the City of La Mesa
[Calif.] Festival with over 10,000
spectators. The place was packed
and jumping!
Q. What about polka
is most appealing to you?
A. Really, there are so many
types of polkas: German, Slovenian, Polish, Czech, Mexican,
Ukrainian, et cetera. It’s the variety of styles to emulate and the
upbeat, driving, happy sounds
that appeal to me.
Stefan Kochishan,
senior director of
mainframe marketing
at CA Technologies
Q. What’s the biggest misconception about polka?
A. Just like many misperceptions
about the mainframe—it’s old
technology, outdated, it won’t last,
it’s only for senior workers—newer
styles of polka “ain’t yer daddy’s
polkas.” There’s a resurgence of
popularity for the music among
younger generations. It’s totally
evolved and will be around—just
like the mainframe—for years and
years to come.
"ONNET #REEK #ONFERENCE #ENTER | Orlando, FL | /CTOBER n
IBM brings together its System z® Technical
University, Power Systems™ Technical University
and a new executive-focused Enterprise Systems
event, all delivered in four-and-a-half action packed
days, in one premier event.
Enterprise Systems2013
highlights include:
s
s
s
s
s
s
s
s
s
s
s
s
ATTENDEES EXPECTED
#LIENT CASE STUDIES
2OBUST EXECUTIVE STRATEGY SESSIONS
EXPERT TECHNICAL SESSIONS ACROSS TRACKS
)NDEPTH COVERAGE OF RECENT PRODUCT ANNOUNCEMENTS
0EEKS INTO )4 TRENDS AND DIRECTIONS
)"- AND INDUSTRYRENOWNED SPEAKERS
(ANDSON LABS CERTIlCATION TESTING AND
post-conference education
#OMPREHENSIVE 3HOWCASE 3OLUTION #ENTER
)"- "USINESS 0ARTNER EDUCATION
%XCLUSIVE NETWORKING OPPORTUNITIES
4OPNOTCH ENTERTAINMENT
Join us for this inaugural event
Register now at
ibm.com/enterprise
All in one premier event!
System z
Technical University
at Enterprise Systems2013
Power Systems
Technical University
at Enterprise Systems2013
Enterprise Systems
Executive Summit
at Enterprise Systems2013
Who Should Attend:
• Business and IT executives and leaders intent on improving financial performance, enhancing organizational effectiveness and achieving industry leadership through IT infrastructure investments.
• IT professionals and practitioners focused on sharpening expertise, discovering new innovations and learning industry best practices.
© International Business Machines Corporation 2013. Produced in the USA. IBM, the IBM logo, ibm.com, Power Systems and System z are trademarks of International
Business Machines Corp., registered in many jurisdictions worldwide. A current list of IBM trademarks is available on the Web at www.ibm.com/legal/copytrade.shtml
IAM – Improving Performance for
Batch and Online VSAM applications
Despite all the hardware and software changes that have occurred in the last 40 years,
IAM still outperforms VSAM. Even compared to enhancements like VSAM SMB, VSAM
LSR, hardware compression and extended format files, the IAM structure provides better
performance and takes less CPU time, DASD space and EXCPs than VSAM. In addition,
the IAM/PLEX (cost option) provides z/OS SYSPLEX Record Level Sharing (RLS) for
IAM datasets across multiple z/OS systems and LPARs.
Scan the QR code to request the latest White Paper on IAM.
For a FREE, No-Obligation 90-Day Trial call: 973-890-7300,
e-mail: [email protected] or visit: www.fdr.com/IAM
Visit us at: SHARE Technology Exchange August 12-14, 2013 Boston, MA Booth 309
CORPORATE HEADQUARTERS:
E-mail: [email protected] | [email protected] | http://www.innovationdp.fdr.com
EUROPEAN
OFFICES:
FRANCE
GERMANY
NETHERLANDS
UNITED KINGDOM
NORDIC COUNTRIES