TMap NEXT® Testing Clouds
Ewald Roodenrijs
Sogeti Netherlands
2011
Colophon
©2011 Sogeti Netherlands B.V.
Book and ePub production: LINE UP boek en media, Groningen
Cover design: Andréas Prins
Book design: Jan Faber
ISBN 978-90-75414-00-4 (book), 978-90-75414-36-3 (ePub)
All rights reserved. No part of this publication may be reproduced, stored in
a retrieval system, or transmitted in any form or by any means, electronic,
mechanical, photocopying, recording or otherwise, without the publisher’s
prior consent.
Contents
Preface
Management Summary
1 An Introduction to TMap NEXT Testing Clouds: Testing in the Cloud Era
A Move to the Cloud Era
TMap NEXT Testing Clouds: Testing in the Cloud Era
A Step Back: The History of TMap
Testing Evolves: TMap in Steps
Test Approach Guide
Reading Guide
2 Framework and Importance of Testing: Even in the Cloud
What is Testing?
Pitfalls: What Testing is Not
Why Test: The Risks and …
Benefits: What Does Testing Deliver?
The Role of Testing: Who Tests
What is Structured Testing?
3 The Business in Charge of IT: The Cloud
The Moment of Transformation for IT
Product to “Whatever as a Service”
Information Technology to Business Technology
The Cloud Era
The Creation of Business Technology
Business Technology = IT as a Commodity?
4 “Whatever” as a Service: Cloud-Enabled Software Testing as a Service
Cloud Service Models: “Whatever” as a Service
Drivers for STaaS Adoption
How Does That Work: The STaaS Process
STaaS Provider Process Model
Cloud-Enabled STaaS: The Conclusion
5 Testing Cloud Strategy: A Move to 3D
Creating a Cloud Test Strategy: A Move to 3D
Business in Control: Business Driven Test Management
Business in the Driver’s Seat: BDTM for Cloud
The Second Step: An Analysis of the Cloud Risks
The Next Steps: A Cloud Test Strategy
6 Testing the Cloud: In, On or With…
Testing the Cloud Itself: Cloud Infrastructure
Functional Testing of the Cloud Infrastructure
Quality of the Cloud Infrastructure Using “Agile”
Cloud Applications: Testing on the Cloud (SaaS)
Instant Deployable Test Infrastructure: Testing on the Cloud
7 Cloud Risks: Worth Testing for…
Compliance, Data Privacy and Security: A Need for Insight
Control: Private vs. Public Cloud in Security
Reliability: Cloud Recovery Testing
Performance of the Cloud: Test It?
All These Risks: Is the Business Even Ready for the Cloud?
References
Index
Preface
I’m one of “Generation Now”: I want my news, social contacts, phone calls, messages, mail and other information now! Not in a day, in 5 minutes or in a year: now. Services are available so that I can have it now. The cloud supports me in that, and it supports my ability to test that out. I can have testing services available now, with all the opportunities to get testing running now. So the cloud lets me be Generation Now.
So in November 2010 when the idea came up for a book about cloud testing, my first thought
was that I had to do it: now!
This book is different from its TMap predecessors. Whereas the books in the TMap series are handbooks with step-by-step information, this book is an innovation, about testing clouds for the early adopter. It describes the cloud business model for testing, Business Technology and the steps we took in cloud projects. It can be seen as a successor to the Seize the Cloud book that Sogeti published in late March 2010. Where that book lacks a discussion of testing, this book picks up the thread and focuses on testing.
Cloud, at its simplest, is Internet-based computing with the use of shared resources and
software provided on demand with reduced management effort. As a service it is still at
an early stage, but the growth of cloud-based computing is outstripping even the most
optimistic predictions. It’s early 2011 and almost all forecasts of “the” most important IT
technologies name cloud computing in their Top 3.
That growth is based on a compelling value proposition: speed to market, agility to bring forward or retire services, and the chance to move expenditure from CapEx into OpEx. Although
the cloud is still in its infancy, it is increasingly clear that the cloud model will supplement,
if not entirely replace, mainframe and client/server installations in the years to come.
In my view, the cloud is a business model or platform on which testing must be carried out
just like any other service. It enables convenient, on-demand network access to a shared
pool of configurable computing resources. Those resources, from networks, servers, storage, applications and services, can be rapidly provisioned and released—thereby drastically
reducing management effort and service provider interaction.
The cloud is not only an IT opportunity, but a strategic business opportunity; it creates
the ability to get the business in charge of IT and change from Information Technology
(IT) to Business Technology (BT). But the cloud market is still in a very early stage and will
continue to grow and evolve. And as the Cloud Era emerges, testing will change! Not only
for information systems, but also for testing the infrastructure, cloud-enabled applications, and the ability to have instant deployable test infrastructure. Testing applications
on the cloud is the same as testing applications on a traditional infrastructure. Only what
is tested is different.
Books like this are seldom written exclusively by the author. So I would like to take this opportunity to thank all of the people who helped me in creating the content of this book:
Andréas Prins
Mark Buenen
Nick Lloyd
Ramanathan Iyer
Dirkjan Kaper
Alfonso López de Arenosa
Rob Baarda
Richard Ammerlaan
Erik Smit
Flavien Boucher
Kanchan Apte
Anantharaman Iyer
Michiel Boreel
Pierre Bedard
Michiel Rigterink
Karl Snider
John Bloedjes
Dan Hannigan
A special thank you goes to my Sogeti colleagues Leo van der Aalst, Nicolas Claudon and
Clare Argent: Leo for having the vision for STaaS in 2008, Nicolas for helping me out with the security risks in the cloud, and Clare for her great help in improving the English.
This book could not have been published without the assistance of the Sogeti Nederland B.V. Software Control management team and the IBM Alliance; I am particularly grateful to
Nijs Blokland, Marc Valkier, Marco Kortman and Jean-Marc Gaultier.
I have written this book with great enthusiasm. And I feel that using the cloud for testing
has a lot of potential for how we use testing services in the future. It will give us testers the
ability to be the Generation Now. I believe this book will interest those people who are part
of that generation, and that it will help them in using the cloud for the benefit of testing.
Ewald Roodenrijs
“Somewhere in the Clouds,” April 2011
Management Summary
What is the cloud? The cloud, at its simplest, is internet-based computing, with the use of shared resources and software provided on demand with reduced management effort. The cloud is still at an early stage, but the growth of cloud-based computing is outstripping even the most optimistic predictions. It’s early 2011 and almost all forecasts of “the” most important IT technologies name cloud computing in their Top 3.
That growth is based on a compelling value proposition: speed to market, agility to bring forward or retire services, and the chance to move expenditure from Capital Expenditure
(CapEx) into Operational Expenditure (OpEx). Although the cloud is still in its infancy, it is
increasingly clear that the cloud model will supplement, if not entirely replace, mainframe
and client/server installations in the years to come.
Am I saying that on my own? No, all the major analysts have an opinion on the cloud and are looking into it. Gartner predicts that, as early as 2012, one in five businesses will
have no IT assets at all. They will simply leverage the capabilities of the cloud as computing
becomes available to businesses in much the same way as a utility like electricity [Pettey,
2010]. IDC expects the server revenues from the public and private cloud to increase by
29% and 62% respectively—a disparity explained by the heightened security considerations,
lower appetite for risk, and lower responsiveness to financial drivers in the public sector
[IDC, 2009].
In 2010 they forecast a growth in cloud service adoption, fuelled by the “Pay-as-you-Go” model, which means you pay only for what you use. Forrester goes further still, explicitly referring to cloud not as an adjunct, but as a successor to traditional approaches: “Cloud computing
is a sustainable, long-term IT paradigm, and the successor to previous mainframe, client/
server, and network computing eras” [Ried, 2010].
As cloud computing evolves, and cloud service adoption becomes ever more wide-ranging,
a new global infrastructure is being created; this infrastructure can easily be connected to
traditional infrastructure (including legacy systems). But it is not just for business IT assets that the cloud removes previous limitations. It does the same from a software or application testing perspective, removing the typical constraints presented by having to test on client-owned or internal resources. A cloud infrastructure creates significant new opportunities
for software quality assurance and testing.
This book describes the two aspects of the cloud: the business model and the cloud platform. On both, testing must be carried out just like for any other service. The cloud enables convenient, on-demand network access to a shared pool of configurable computing resources. Those resources, from networks, servers, storage, applications and services, can be rapidly provisioned and released—thereby drastically reducing management effort and service provider interaction.
Naturally, there is an understandable nervousness about this new approach, and questions are being asked around integration, security and implementation. But, in our view, these challenges are outweighed by the advantages: the cloud breaks down the limitations caused by testing on internal resources, gives test teams the opportunity to free themselves from issues relating to the internal availability of hardware, applications and services, and enables a more effective way to collaborate.
We have identified a number of test cloud models, simply based on cloud vendor solutions. These are:
•Private cloud – A cloud implemented on infrastructure owned by, or dedicated to, the client.
•Public cloud – A cloud that is publicly available to the client on demand.
•Hybrid cloud – A cloud composed of two or more clouds (private, community or public).
•Community cloud – A group of two or more public or hybrid clouds that together form a community.
The cloud hype is still too often perceived as an IT matter, a matter that is best left to infrastructure technicians. But the cloud is not merely an IT opportunity; it is a strategic business opportunity: it creates the ability to get the business in charge of IT and to change from Information Technology (IT) to Business Technology (BT). The cloud market, however, is still at a very early stage and will continue to grow and evolve.
As the Cloud Era emerges, testing will change! This change consists of several aspects: testing the infrastructure, testing cloud-enabled applications, and the ability to have instant deployable test infrastructure. All of this has its impact on the way we do testing in the future. As other types of applications will not disappear, the cloud doesn’t replace what we test, but provides another addition to software testing. Testing applications on the cloud is the same as testing applications on a traditional infrastructure. Only what is tested is different.
When testing clouds, a lot more parties are involved in testing: not only the client and the stakeholders (the business), but also third-party suppliers of standard or Software as a Service (SaaS) applications. On top of that come new quality attributes due to the cloud infrastructure and the growing importance of non-functional requirements.
These new items to test for make testing the cloud different, to say nothing of the consequences of the cloud business model. The on-demand use of resourcing, testing tools and infrastructure provides the opportunity to create a pay-per-use test service: Software Testing as a Service (STaaS). With STaaS the benefits of the cloud as a business model are used to provide a testing service to clients.
The cloud also brings us more risks around security, data integrity, privacy, data recovery and performance. Is the business even ready for the cloud? All these risks can be attended to by testing for them. But that would be a costly exercise, while the cloud is supposed to offer a reduction in costs. Other countermeasures therefore need to be taken to create a trusted cloud solution: measures that decrease the risks the cloud creates, but also increase the quality of the solution. These actions help create a better solution, better in use and more efficient.
We are not yet fully realizing the opportunities that exist today. It’s easy to be blinded by
the small everyday issues so that the larger goal stays out of reach. Now that businesses
and public organizations are “wired” and the cloud has emerged as a collection of Internet platforms and tools for connecting, integrating and sharing data and processes, it has become possible to think about the bigger issues that are going unaddressed: IT will
become a commodity that can be turned on and off when needed!
The cloud will keep the promise IT has made to the business: it will support the business and create Business Technology. But with all these risks, is the business even ready for the cloud? IT needs to support the business in the usage of the cloud. And by support, not only the creation of a cloud is meant, but also a change in mindset. Testers, too, can help the business by looking for actions and measures that can support the creation of a higher-quality solution.1
1 These actions can also help with traditional applications.
1 An Introduction to TMap NEXT Testing Clouds: Testing in the Cloud Era
The growth of the cloud-based market is outstripping even the most
optimistic predictions. That growth is based on a compelling value
proposition: speed to market, agility to bring forward or retire services, and the chance to move expenditure from CapEx into OpEx. For software testing, the cloud also offers a range of opportunities.
A Move to the Cloud Era
When the business asks, “What is the cloud, when should I consider a cloud service model, when not, and how can it be deployed?”, this is a great question for IT to answer thoroughly: an answer that can help the business understand the cloud better and make a sound decision on implementing it. When looking at cloud service and deployment models, the cloud comes back to its original starting point: cloud computing, as this paved the way for the cloud to be created.
Cloud computing has been an up-and-coming market since 2006, but the underlying concept of cloud computing is old, even “prehistoric” for IT services; it dates back to 1961. In that year, Prof. John McCarthy said, “Computation may someday be organized as a public utility.” He was the first to publicly suggest (in a speech given to celebrate MIT’s centennial) that computer time-sharing technology might lead to a future in which computing power and even specific applications could be sold through the utility business model (like water or electricity). Almost all the modern-day characteristics of cloud computing (elastic provision, as a utility, online and the illusion of an infinite supply), the comparison to the electricity industry, and the use of public, private, government and community forms, were thoroughly explored by Douglas Parkhill in 1966 [Parkhill, 1966]. This idea of a computer or information
utility was very popular in the late 1960s, but faded by the mid-1970s as it became clear
that the hardware, software and telecommunications technologies of the time were simply
not ready. However, since 2000 the idea has resurfaced in new forms.
Figure 1.1 Interest in the cloud throughout the years
The term “cloud” is borrowed from telephony in that telecommunications companies, who
until the 1990s primarily offered dedicated point-to-point data circuits, began offering Virtual
Private Network (VPN) services with comparable quality of service, but at a much lower
cost. By switching traffic to balance utilization as they saw fit, telecom companies were
able to utilize their overall network bandwidth more effectively. The cloud symbol was used
to denote the demarcation point between the responsibilities of the provider and those of the user. Cloud computing extends this boundary to cover servers as well as the network
infrastructure. The first scholarly use of the term “cloud computing” was in a 1997 lecture
by Ramnath Chellappa.
Amazon played a key role in the development of cloud computing by modernizing their data centers after the dot-com bubble; like most computer networks, these used only 10-20% of their capacity at any one time, while leaving room for seasonal spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements, whereby small, fast-moving “two-pizza teams” could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006. In 2007, Google, IBM and a number of universities embarked on a large-scale cloud computing research project.
TMap NEXT Testing Clouds: Testing in the Cloud Era
TMap is Sogeti’s prominent Test Management approach to the structured testing of information systems. The approach is described generically, since the specific makeup of the best-fitting test approach depends on the situation in which it is applied.
TMap can be summarized in four essentials (see Figure 1.2):
1. TMap is based on a business-driven test management approach.
2. TMap describes a structured test approach.
3. TMap contains a complete tool box.
4. TMap is an adaptive test method.
The first essential is directly related to the fact that the importance of the business in IT, and with that of the IT business case (the justification of a project), is continuously growing for organizations. For testing this means that the choices on what risks to cover with testing, what results are to be delivered and how much time and money to spend need to be made on rational and economic grounds. For the cloud this means testing is done on demand, according to the business’ needs. This is why TMap has developed the business-driven test management approach, which can be seen as the “leading thread” of the structured TMap test process. This essential is explained in this book from the following perspectives:
•From testing Information Technology to testing Business Technology (Chapter 3)
•The cloud business model for testing: Software Testing as a Service (Chapter 4)
•How to create a cloud test strategy (Chapter 5)
•Testing the cloud itself (Chapter 6)
•The cloud risks for testers (Chapter 7)
By describing a structured test process (essential 2) and giving a complete tool box, TMap answers the classic questions what/when, how, with what and who. In the description of the test process, use has been made of the TMap life cycle model: a development-cycle-related description of the test cycle. The life cycle model describes what should be carried out and when.
Figure 1.2 TMap NEXT model of essentials
Besides this, to be able to execute the test process properly, several matters in the field of infrastructure (with what), techniques (how) and organization (who) need to be arranged.
TMap provides a lot of applicable information in the shape of examples, checklists, technique descriptions, procedures, test organization structures and test environment/tools
(essential 3).
Furthermore, TMap has a flexible design, so that it can be applied in several system development situations: new development as well as maintenance of information systems, in-house development or a purchased package, and outsourcing of (parts of) the testing (essential 4).
In this chapter, a sequential overview is given of how TMap became a standard approach
to structured testing, the reasons for a new version of TMap, the key points of TMap and a
number of suggestions concerning which chapters are of interest to which target groups.
A Step Back: The History of TMap
In the international testing world, TMap is a familiar concept, as this approach to testing
has been in existence since 1995. While it is not necessary to know the history of TMap
in order to understand or apply the approach, this section invites you to take a glimpse
behind the scenes.
Standard Approach to Structured Testing
This book was preceded by the Dutch book Testen volgens TMap (= Testing according to
TMap) in 1995, Software Testing (a guide to the TMap approach) in 2002 (both books by Pol,
Teunissen and Van Veenendaal) and TMap NEXT for result-driven testing in 2006 (Koomen,
Van der Aalst, Broekman and Vroon).1 The books turned out to be, and still are, bestsellers.
TMap has evolved over recent years to become a standard for testing information systems.
It is currently applied in hundreds of companies and organizations, including many banks,
insurance companies, pension funds and government organizations. The fact that TMap is
seen as a prominent standard approach to structured testing is demonstrated by, for example, suppliers of test tools advertising with the words “applicable in combination with the TMap techniques”; test professionals who ensure that TMap experience is prominent on their CV or, increasingly, that they are TMap-certified; recruitment advertisements in which test professionals with knowledge of TMap are sought; and independent training institutions that provide TMap courses.
Figure 1.3 History of TMap
The Strength of TMap
The strength of TMap can largely be attributed to the considerable practical experience that
is the basis for the method. This experience comes from hundreds of professional testers
in as many projects over the last twenty years! The aim of this book is to be a valuable aid
in coping with most, if not all, challenges in the area of testing now and in the near future.
1 Up-to-date information about the translations in other languages can be found at www.tmap.net.
The disadvantage of a book is that the content, by definition, is static, while in the field of
IT new insights, system development methods, etc., are created with great regularity.
It would be commercially irresponsible to compile and publish a new version of the TMap book with every new development in IT. To enable TMap to keep up with current developments, an expansion mechanism has been created. An expansion describes the way TMap has to be applied to a new development. Recent expansions, in the form of white papers, are included in the book TMap Test Topics [TestTopics, 2005]. To keep TMap users updated
about the new expansions, different means of communication are used. For example, the
large number of presentations and workshops that are given at test conferences, the
popular TMap Test Topics sessions (in which current test themes are presented) and the
many articles in the various specialist publications. All of this makes TMap what it is now:
“A complete test approach, with which any organization can successfully take on any test
challenge, now and in the future!”
Tip
Take a look at www.tmap.net. You will find there, among other things:
––downloads (including white papers for expansions, checklists, test-design techniques and a glossary)
––published TMap articles
––TMap newsletters (if you wish to receive these automatically, you can register for
this on the site).
Testing Evolves: TMap in Steps
“Why all these new versions of TMap,” you may ask, “was there something wrong with the previous version?” No; however, since the first version appeared on the market, various new developments have taken place in both IT and testing. And in order to keep TMap as
complete and up to date as possible, these have been incorporated in the new version and
add-ons have been created. This concerns developments such as the increasing importance
of IT to organizations, a number of innovations in the area of system development and
the transformation from Information Technology to Business Technology. Besides this, the
tester appears to be better served by a test approach written as a guide, rather than as a
testing manual. An explanation of this is given in the following sections.
Increasing Importance of IT to Organizations
Since the end of the nineties, the use of software or Information Technology in general,
has become increasingly important to organizations, so that IT projects are more often
initiated and managed from within a user organization. This is prompted by the following
developments:
•Cost reductions of IT development and management. IT is required to be cheaper
and the business case (the “why” in combination with costs and profits) formulated
more clearly.
•Growing automation of business processes. More and more business processes
within organizations are either automated or strongly dependent on other automated
processes.
•Quicker deployment of automation. With the growth of automation, IT has grown
from being a company support resource to one that differentiates the company from
the competition. This means that the flexibility and speed with which this resource can
be deployed is of crucial importance in beating the competition.
•Quality of automation is becoming more important (see the practical example “Consequences of Software Failures”). The fact that IT end users currently make sure they have their say, combined with the fact that CEOs, CFOs and CIOs are held personally responsible for the accuracy of the company’s financial information, leads to (renewed) interest in the quality of IT.
These four aspects are summed up as “more for less, faster and better.” A consequence
of this is that IT projects are becoming increasingly dynamic and chaotic in nature. This
can put great pressure on the testers and increase the relative share of testing within IT. In
order to make the test process manageable and to keep it that way, the “business-driven
test management” (BDTM) approach was developed for TMap, and has been used in this
book. With this, the creation of a test strategy is directly related to the business risks. It
enables the client to make responsible, risk-based choices in the test process. By making
these choices, the client has significant influence on the timeline and the costs of the test
process.
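To make the idea of these risk-based choices a little more concrete, the sketch below shows one possible way of supporting such a prioritisation. It is a minimal illustration only, assuming the common TMap-style estimate of risk as chance of failure multiplied by damage; the part names, scales, weights and thresholds are hypothetical and not prescribed by TMap or by this book.

```python
# Minimal sketch of risk-based test prioritisation, assuming
# risk = chance of failure x damage, both estimated on a 1-5 scale.
# All names, scales and thresholds below are illustrative only.

from dataclasses import dataclass


@dataclass
class TestObjectPart:
    name: str
    chance_of_failure: int  # 1 (low) .. 5 (high)
    damage: int             # 1 (low) .. 5 (high)

    @property
    def risk(self) -> int:
        return self.chance_of_failure * self.damage


def test_intensity(part: TestObjectPart) -> str:
    """Translate a risk figure into a (hypothetical) test intensity."""
    if part.risk >= 15:
        return "thorough"   # formal test design techniques, high coverage
    if part.risk >= 8:
        return "average"
    return "light"          # e.g. only a sanity check


if __name__ == "__main__":
    parts = [
        TestObjectPart("payment processing", chance_of_failure=4, damage=5),
        TestObjectPart("report layout", chance_of_failure=3, damage=1),
    ]
    for part in sorted(parts, key=lambda p: p.risk, reverse=True):
        print(f"{part.name}: risk={part.risk}, intensity={test_intensity(part)}")
```

In practice such choices are made in dialogue with the client rather than by a script; the point is only that making chance of failure and damage explicit turns test intensity, timeline and cost into rational, economic decisions.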
In More Detail
Consequences of Software Failures
Because IT has become more important within organizations, the impact of any
software problems is increased.
Some examples are:
––revenue loss;
––brand/reputation loss;
––compensation claims, and productivity loss.
The fact that this can involve considerable losses is demonstrated by the following
real-world examples:
––A sporting goods manufacturer suffered a 24% drop in turnover (€100 million) in a
single quarter because of software failures in the stock administration. In the days
that followed, after the company had announced that they had problems with the
software, the share value decreased by more than 20%.
––Through a failure in their encryption software, a financial advisory organization
displayed their customers’ social security numbers and passwords in legible text
on their website. This caused distress among the customers and led to a sizeable
loss of business.
––After a pharmaceutical wholesale company had gone bankrupt, the parent
company submitted a claim for damages of €500 million to the software supplier,
claiming that the “enterprise software” had been faulty.
Innovations in the System Development Area
The test principles on which TMap is based came into being in the eighties and, of course, they relate to the system development methods of that time. Nowadays, these methods are often referred to as waterfall methods. Their most important characteristic is the purely sequential execution of the various development activities. Today there is a strongly growing interest in other methods. The most important characteristic of the new generation of system development methods is incremental or iterative development, with the test process increasingly being integrated into the development process, the Agile approach being the most prominent example of this. For more information on testing in Agile Software Development Environments (ASDE), please refer to “Testing in Agile Software Development Environments with TMap NEXT®” [Davis, 2010].
Since, in the course of various activities, testing touches on the chosen system development method, it is inevitable that a test approach like TMap should evolve in step with
changes in the area of system development. How to apply TMap in certain situations is
often already laid down in the previously mentioned expansions. The key points of the
expansions have now been integrated in this new version of TMap, so that the book again
provides a current and as complete as possible overview of how TMap can be applied in
a variety of situations.
Transformation from Information Technology to Business Technology
Today the cloud is the hype in IT. It tells us that it can fulfill the promise IT has been making to the business for years. It will move Information Technology over to Business Technology, and this has an impact on the business itself: business decisions are increasingly becoming technology decisions. Many business processes are so technologically enhanced that business and technology are interwoven. Today technological innovation and business innovation are synonymous. This brings a lot of opportunities to the business that we now just have to realize!
The cloud, among other things, makes the move to Business Technology possible. However, as the cloud is IT, it needs to be tested. With IT so interwoven with business and other parts of our everyday life, the risks of encountering problems are extremely high. TMap can be applied to test the cloud: it can help test the cloud infrastructure, create test infrastructures, test cloud-enabled applications, and it can be used for testing services.
Integrating Quality Assurance with PointZERO®
It is commonly known that our IT landscape is too expensive. Too much time and effort is
going into delivering new solutions and even more into maintaining the existing ones. The
landscape is becoming increasingly complex to manage and poor quality negatively impacts
the competitive position of enterprises. High quality, timely and cost efficient IT solutions
that are completely aligned with business demands are required to be able to compete in
today’s business world, and agility is crucial. This requires testing and quality assurance
to evolve as well. Aligning the quality of (IT) solutions with the business requirements and
reaching the desired level of quality faster and at a reduced cost requires continuous
change and improvement.
Figure 1.4 Testing and quality assurance are part of the development process at PointZERO®
We need to change the existing ways of working and move from classical functional and acceptance testing to more adequate and efficient solutions. Important elements
are:
•The effort to increase and improve the collaboration between different IT disciplines such
as developers, designers and testers to reach a better standard of quality jointly
•Taking quality measures much earlier in the development lifecycle
•And finally, the ever-increasing need to align all IT activities, including testing, with the demands of the business. We need to make sure that each euro that is invested adds value to the company.
To achieve these goals, it is necessary to start thinking of quality right from the start—by
testing ideas, concepts and documents. As soon as a new initiative arises, testing and
quality assurance need to become part of the total development process at PointZERO®.
However, our present way of working does not always accommodate this concept. In most
cases, it will require some time to develop and grow. The concept of PointZERO® provides a
framework and growth model to support the realization of better, cheaper and faster quality.
Some elements of this concept have been around for quite some time and are considered
“proven technology.” Other parts are relatively new or still in the development stages.
Test Approach Guide
The previously mentioned four developments, i.e. the increasing importance of IT within
organizations, the innovations in system development, the transformation from Information
Technology to Business Technology, and integrating quality assurance with PointZERO®,
indicate the increased dynamic of the various development and test environments. In a
situation like this, a manual is often seen as being too rigid. Furthermore, it appears that
a broader public with a wide range of competencies is increasingly carrying out testing,
so there is now a greater need for more comprehensive descriptions of how specific TMap
activities can be carried out.
This book, TMap NEXT Testing Clouds, has been revised in response to the transformation
from Information Technology to Business Technology. In short, TMap NEXT Testing Clouds
offers the tester a guide to delivering results to the client in Business Technology.
Testing the Cloud: What Can TMap NEXT Testing Clouds Offer?
But what exactly does TMap NEXT Testing Clouds offer—what can you do with it? You will
find the answer to these questions in this section, in which a general overview is given of
the assistance that TMap NEXT Testing Clouds can supply and where TMap NEXT Testing
Clouds can be applied. In the book TMap NEXT® [TMap NEXT, 2006], we go into this in more detail, and more attention is paid to the way in which TMap can be applied in various situations.
Where Can TMap NEXT Testing Clouds Help?
In order to assist the tester in his work, TMap NEXT Testing Clouds explains how to carry
out certain activities, or how these are supported by TMap. This concerns help with:
•the translation of the client’s (cloud) demands and requirements into a concrete test strategy and management of the execution;
•assisting the test manager, test coordinator and/or tester to deal with cloud infrastructure, cloud-enabled applications and software testing as a service;
•the execution of, among other things:
–– a cloud risk analysis;
–– a test cloud strategy, and
–– a (non-)functional test
•the setting up and the management of test infrastructure for the current and other
projects;
•the execution of the test activities with real-world examples, tips and also detailed explanations of certain aspects, and
•considering the test process as much as possible from an exterior vantage point, by
answering, for example, practical questions (what does testing actually deliver?) and
making use of general project information.
Where Can TMap NEXT Testing Clouds Be Applied?
TMap NEXT Testing Clouds addresses the following possibilities of applying TMap:
•where there is either a demand-supplier relationship (e.g. outsourcing) between client,
developer and tester (each with their own responsibilities), or a collective interactive
approach;
•with iterative, incremental, waterfall and agile approaches;
•with new development in the cloud and migration of information systems to the cloud;
•in situations with combinations of development approaches, such as in-house, reuse-based, use of Software as a Service and assembling of purchased modules, all within a single IT architecture;
•with coverage of non-functional requirements of the information system in the test
approach, and
•in situations where the cloud infrastructure needs to be tested.
Reading Guide
The book TMap NEXT Testing Clouds has been written to give answers to the following business problems:
•You need to have a test executed in a short period of time and only for a short time frame
(test manager, program manager, business manager or manager of IT department).
•You are requested to test a cloud infrastructure (test managers, test coordinators, infrastructure testers and testers).
•You are requested to test cloud applications and/or the integration of them in the enterprise architecture (test managers, test coordinators and testers).
The book contains a short introduction to “TMap Next for result-driven testing”: Chapter 1
“An Introduction to TMap NEXT Testing Clouds” and Chapter 2 “Framework and Importance
of Testing: Even in the Cloud.” These chapters are interesting for all the target groups.
Chapter 3 “The Business in Charge of IT: The Cloud” and Chapter 4 “‘Whatever’ as a Service: Cloud-Enabled Software Testing as a Service” are of interest to the test manager, program manager, business manager and manager of the IT department.
Chapter 5 “Testing Cloud Strategy: A Move to 3D,” Chapter 6 “Testing the Cloud: In, On
or With…” and Chapter 7 “Cloud Risks: Worth Testing for…” are mainly interesting for the
target groups of test managers, test coordinators, infrastructure testers and testers.
At various places in the book, definitions, practical examples, tips and more detailed explanations are provided. These can be recognized by title, box and a green background.
2 Framework and Importance of Testing: Even in the Cloud
This chapter provides an introduction to testing in general and focuses
on structured testing. No specific (prior) knowledge of TMap or clouds
is required in order to understand this. In sequence, an explanation is
given of: what is understood by testing, why testing is necessary (and
what it delivers), what the role of testing is and what structured testing
involves.
What is Testing?
While many definitions of the concept of testing exist, one way or another they all contain
comparable aspects. Each of the definitions centres on the comparison of the test object
against a standard (e.g. expectation, correct operation, requirement). With this, it is important to know exactly what you are going to test (the test object), what you are going to compare it against (the test basis) and how you are going to test it (the test methods
and techniques).
The International Organisation for Standardisation (ISO) and the International Electrotechnical
Commission (IEC) apply the following definition [ISO/IEC, 1991]:
Definition
Technical operation that consists of the determination of one or more characteristics of a given product, process or service according to a specified procedure.
This definition is more of an IT definition, but testing is not an IT concern anymore; it’s a business concern. Testing supplies insight into the difference between the actual and the required
status of an object. Where quality is roughly to be described as “meeting the requirements
and expectations,” testing delivers information on the quality. It provides insight into, for
example, the risks that are involved in accepting lesser quality. That, after all, is the main aim of testing. Testing is one of the means of detection used within a quality control system. It is
related to reviewing, simulation, inspection, auditing, desk-checking, walkthrough, etc. The
various instruments of detection are spread across the groups of evaluation and testing:1
•Evaluations: assessment of interim products.
•Testing: assessment of the end products.
Put bluntly, the main aim of testing is to find defects: testing aims to bring to light the lack of quality, which reveals itself in defects. Put formally: it aims to establish the difference
between the product and the previously set requirements. Put positively: it aims to create
faith in the product.
The level of product quality bears a relationship to the risks that an organisation takes
when these products are put into operation. Therefore, in this book we define testing as
follows:
Definition
Testing is a process that provides insight into, and advice on, quality and the related risks. This is to allow the business to appreciate and understand the risks of a software implementation.
Advice on the quality of what? Before an answer to this can be given, the concept of quality
requires further explanation. What, in fact, is quality?
1 The theory also refers to verification and validation. Verification involves the evaluation of (part of) a
system to determine whether the products of a development phase meet the conditions that were set
at the beginning of that phase. Validation is understood to mean determining whether the products
of the system development meet the user needs and requirements. [IEEE, 1998]
Definition
The totality of features and characteristics of a product or service that bear on its
ability to satisfy stated or implied needs [ISO, 1994].
In aiming to convert “implied needs” into “stated needs” we soon discover the difficulty of
subjecting the quality of an information system to discussion. The language for discussing
quality is lacking. However, since 1977, when McCall [McCall, 1977] came up with the proposal to divide the concept of quality into a number of different properties, the so-called
quality characteristics, much progress has been made in this area.
Definition
A quality characteristic describes a property of an information system.
A well-known set of quality characteristics was issued by the ISO and IEC [ISO 9126-1, 1999].
In addition, organisations often create their own variation of the above set. For TMap, a
set of quality characteristics specifically suited to testing has been compiled, and these are
listed and explained in TMap NEXT®. The cloud, and specifically the cloud infrastructure, has its own quality characteristics; these are explained in Chapter 6.
What, then, is the answer to the question: “Advice on the quality of what?”
Since, where quality is concerned, the issue is usually the correct operation of the software,
testing can be summed up as being seen by many to mean: establishing that the software
functions correctly. While this may be a good answer in certain cases, it should be realised
that testing is more than that. Apart from the software, other test objects exist, the quality
of which can be established. That which is tested, and upon which quality recommendations are subsequently given, is referred to as a test object.
Definition
The test object is the information system (or part thereof) to be tested.
A test object consists of hardware, system software, application software, organisation,
procedures, documentation or implementation. Advising on the quality of these can involve—
apart from functionality—quality characteristics such as security, user friendliness, performance, maintainability, portability and testability.
Pitfalls: What Testing is Not
In practice, it is by no means clear to everyone what testing is and what could or should
be tested. Here are a few examples of what testing is not:
•Testing is not a matter of releasing or accepting something. Testing supplies advice on
the quality. The decision as regards release is up to others (stakeholders), usually the
commissioner of the test.
•Testing is not a post-development phase. It covers a series of activities that should be
carried out in parallel to development.
•Testing is something other than the implementation of an information system. Test results
are rather more inclined to hinder the implementation plans. And it is important to have
these—often closely related—activities well accommodated organisationally.
•Testing is not intended initially to establish whether the requested functionality has
been implemented, but to play an important part in establishing whether the required
functionality has been implemented. While the test should of course not be discounted,
the judgement of whether the right solution has been specified is another issue.
•Testing is not cheap. However, a good, timely executed test will have a positive influence
on the development process and a qualitatively better system can be delivered, so that
fewer disruptions will occur during production. Boehm demonstrated long ago that the
reworking of defects costs increasing effort, time and money in proportion to the length
of time between the first moment of their existence and the moment of their detection
[Boehm, 1981]. See also “What does testing deliver?” in the next section.
•Testing is not training for operation and management. Because a test process generally
lends itself very well to this purpose, this aspect is often too easily included as a secondary request. Solid agreements should see to it that both the test and the training would
be qualitatively adequate. A budget and time should be made exclusively available for
the training, and agreements made as regards priorities, since at certain times choices
will have to be made.
It is the task of the test manager, among others, to see that these pitfalls are avoided and
to make it clear to the client exactly what testing involves.
Why Test: The Risks and …
In Chapter 1 it is explained that IT has been increasing in importance to organisations since
the end of the nineties. But with this, many organisations are plagued by projects getting
out of hand in terms of both budget and time, owing to software defects during the operation of the developed information systems. This shows that organisations are accepting,
or having to accept, systems without having real insight into their quality. In many cases,
too, there is a lack of good management information upon release. This often poses big risks to the company operations: high reworking costs, productivity loss, brand/reputation loss, compensation claims and loss of competitiveness through late availability of the new product (revenue loss) may be the consequences.
Figure 2.1 Cloud risks that can be tested for
Before an information system goes into production, the organisation will have to ask itself
explicitly whether all requirements are met. Have all the parts and aspects of the system
been explored in sufficient depth? Besides the functionality, have checks been carried out
on, for example, the effectiveness, performance and security aspects? Or, as ISO puts it: has it been established whether the product possesses the characteristics and features necessary to meet the stated or (even more difficult) implied needs? What is self-evidently implied to
one may be a revelation to another.
Have all the errors been reworked, and have any new errors been introduced in the course
of reworking them? Can the company operations depend on this system? Does the system
really provide the reliable solution to the information issue for which it was designed?
The real question is: what risks are we taking and what measures have been taken to reduce
those risks? In order to avoid obtaining answers to these crucial questions only at the operational phase, a good, reliable testing process is required. That demands a structured test
approach, organisation and infrastructure, with which continuous insight may be obtained
in a controlled manner into the status of the system and the risks involved.
Benefits: What Does Testing Deliver?
While a structured test approach is considered to be of great importance, the question
“What do you think of the test process?” is generally answered with “Expensive!” This
response can seldom be quantified, since it is often a gut reaction, unsupported by figures.
Testing is expensive. Yes, that is true if you only look at the test costs and disregard the test
benefits. Test costs are based on, among other things:
•the costs of the test infrastructure, and
•the hours worked by the testers and their fees.
Test benefits are [Black, 2002]:
•The prevention of (high) reworking costs and consequential damage to the production
situation, thanks to defects being found during testing and rectified within the system
development process. Examples of consequential damage are: revenue loss, brand/
reputation loss, compensation claims and productivity loss.
•The prevention of damage in production, thanks to errors being found during testing and, while not solved, being flagged as “known errors.”
•Having/gaining faith in the product.
•Facilitating good project management through the supply of (progress and quality)
information.
If there is a way of expressing test benefits in money, the answer to the question “What
does testing deliver” may be, from a test-economic perspective:
Test Yield = Test Benefits – Test Costs
Although this appears to be a simple calculation, in practice it is very difficult to estimate
how much damage would have been caused by failures that were found during testing,
had they occurred at the production stage. And anyway, how do you translate, for example,
potential loss of image into money? In the literature, some attempts have nevertheless been
made at making this calculation, for example [Aalst, 1999].
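As a purely illustrative aside, the back-of-the-envelope sketch below shows how the formula could be filled in once an organisation dares to put figures on the benefits; every number in it is hypothetical.

```python
# Illustrative only: a back-of-the-envelope test yield calculation,
# following Test Yield = Test Benefits - Test Costs. All figures are
# hypothetical estimates, not benchmarks.

# Test costs
test_infrastructure = 20_000          # euros
tester_hours = 800
hourly_fee = 90                       # euros per hour
test_costs = test_infrastructure + tester_hours * hourly_fee

# Test benefits: prevented rework and consequential damage (estimated)
defects_found = 120
prevented_damage_per_defect = 1_500   # euros of rework and production damage avoided
test_benefits = defects_found * prevented_damage_per_defect

test_yield = test_benefits - test_costs
print(f"Test costs:    {test_costs:,} euros")      # 92,000
print(f"Test benefits: {test_benefits:,} euros")   # 180,000
print(f"Test yield:    {test_yield:,} euros")      # 88,000
```

The arithmetic itself is trivial; as the surrounding text makes clear, the hard part is estimating the prevented damage per defect and putting a price on intangibles such as loss of image.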
However, it remains difficult to establish exactly what having faith in the quality of a product,
or gaining (progressive) information really delivers. Despite that, within the world of testing
there are more and more tips and tricks to be found that make it possible to observe one
or more of the following problems and to operate accordingly:
•too much testing is carried out, so that the costs of finding a defect during testing no longer
offset the damage that this defect would potentially cause if it occurred in production
•too little testing is done, so that more issues occur in production and the reworking
costs of these are proportionately higher than the test costs would have been to find
the defects during testing
•testing is carried out ineffectively, so that the test investment is not recouped.
The Role of Testing: Who Tests
This section explains both the significance and role of certain test concepts in their environment. Spread across the following subjects, the associated concepts are explained:
•Testing and quality management;
•Testing: how and by whom;
•Test and system development process;
•Test levels and responsibilities, and
•Test types.
Testing and Quality Management
Quality was, is and remains a challenge within IT (see also the examples in the section “Testing Evolves: TMap in Steps” in Chapter 1), and if IT wants to move to Business Technology, getting quality right is of the greatest importance! Testing is not the sole solution to this. After all,
quality has to be built in, not tested in! Testing is the instrument that can provide insight into
the quality of information systems, so that test results—provided that they are accurately
interpreted—deliver a contribution to the improvement of the quality of information systems.
Testing should be embedded in a system of measures in order to arrive at quality. In other
words, testing should be embedded in the quality management of the organisation.
The definition of quality as expressed by the ISO (see section “What is Testing?”) strongly
hints at its elusiveness. What is clearly implied to one is anything but to another. Implicitness is very much subjective. An important aspect of quality management is therefore
the minimisation of implied requirements, by converting them into specified requirements
and making visible the degree to which the specified requirements are met. The structural
improvement of quality should take place top-down. To this end, measures should be taken
to establish those requirements and to render the development process manageable.
Definition
Quality assurance covers all the planned and systematic activities necessary to
provide adequate confidence that a product or service meets the requirements for
quality [ISO, 1994].
These measures should lead to a situation whereby:
•There are measurement points and units that provide an indication of the quality of the
processes (standardisation);
•It is clear to the individual employee which requirements his work must meet and also
that he can evaluate them on the basis of the above-mentioned standards;
•It is possible for an independent party to evaluate the products/services on the basis of
the above-mentioned standards, and
•The management can trace the causes of weaknesses in products or services, and consider how they can be prevented in future.
These measures may be divided into preventive, detective and corrective measures:
•Preventive measures are aimed at preventing a lack in quality. They can be, for example,
documentation standards, methods, techniques, training, etc.;
•Detective measures are aimed at discovering a lack of quality, for example by evaluation
(including inspections, reviews, walkthroughs) and testing, and
•Corrective measures are aimed at rectifying the lack of quality, such as the reworking
of defects that have been exposed by means of testing.
It is of essential importance that the various measures are cohesive. Testing is not an
independent activity; it is only a small cog in the quality management wheel. It is only one
of the forms of quality control that can be employed. Quality control is in turn only one of
the activities aimed at guaranteeing quality. And quality assurance is, in the end, only one
dimension of quality management.
Testing: How and by Whom
Testing often attracts little attention until the moment the test execution is about to begin.
Then suddenly a large number of interested parties ask the test manager about the status.
This section demonstrates, however, that testing is more than just the execution of tests.
We then explain the ways of testing and who can carry out the testing.
There Is More to Testing
Testing is more than a matter of taking measurements—crucially, it involves the right planning and preparation. Testing is the tip of the iceberg, the bigger part of which is hidden
from view (see Figure 2.2).
Figure 2.2 Measurements are only the tip of the testing iceberg
In this analogy, the actual execution of the tests is the visible part, but on average, it only
covers 40% of the test activities. The other activities—planning and preparation—take up
on average 20% and 40% of the testing effort respectively. This part is not usually recognised as such by the organisation, while in fact it is where the biggest benefit, not least
regarding time, is to be gained. And, significantly, by carrying out these activities as much
as possible in advance of the actual test execution, the testing is on the critical path of
the system development programme as briefly as possible. Because of technical developments (test automation), the share of the effort spent on test execution, relative to preparation and planning, can even be expected to decrease further.
Ways of Testing
There are various ways of testing (in this case, executing tests). For example, is the testing
being done by running the software, or precisely by not running it? And is a characteristic
of the system being tested using test cases specially designed for it, or precisely not? A
number of ways of testing are:
•Dynamic explicit testing
•Dynamic implicit testing
•Static testing
Dynamic Explicit Testing
With dynamic explicit testing, the test cases are explicitly designed to obtain information on
the relevant quality characteristic. With the execution of the test, or the running of software,
the actual result is compared against the expected result in order to determine whether the
system is behaving according to requirements. This is the most usual way of testing.
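As a small, hedged illustration of dynamic explicit testing, the sketch below shows a test case designed in advance for one characteristic (the functionality of a discount rule), executed by running the software, and judged by comparing the actual result against the expected result taken from the test basis. The discount rule and its values are hypothetical and serve only to show the expected-versus-actual comparison.

```python
# Sketch of dynamic explicit testing: test cases explicitly designed for
# a quality characteristic (functionality), executed by running the code,
# with the actual result compared against the expected result.
# The discount rule below is hypothetical.

import unittest


def order_discount(order_total: float) -> float:
    """System under test: 10% discount for orders of 100 euros or more."""
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0


class DynamicExplicitTest(unittest.TestCase):
    def test_discount_at_boundary(self):
        # Expected result derived from the test basis (the specification).
        self.assertEqual(order_discount(100.00), 10.00)

    def test_no_discount_below_boundary(self):
        self.assertEqual(order_discount(99.99), 0.0)


if __name__ == "__main__":
    unittest.main()
```

Note that the test cases deliberately target the boundary of the rule, which is where defects typically hide; dynamic implicit testing, described next, instead uses the experience gained while running such tests to say something about characteristics for which no test cases were designed.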
Dynamic Implicit Testing
During dynamic testing, information can also be gleaned concerning other quality characteristics, for which no explicit test cases have been designed. This is called dynamic implicit
testing. Judgements can be made, for example, on the user-friendliness or performance of
a system based on experience gained during testing, without specific test cases being present. This can
be planned if there has been a prior agreement to provide findings on it, but it can also take
place unplanned: for example, if breakdowns occur regularly during the testing, a judgement
can be made concerning the operational reliability of the system.
Static Testing
With static testing, the end products are assessed without software being run. This test
usually consists of the inspection of documentation, such as security procedures, training,
manuals, etc., and is often supported by checklists.
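A minimal sketch of a checklist-supported static test, assuming a purely hypothetical checklist and document structure, might look like this in Python:

```python
# Illustrative checklist-supported static test of a document; the checklist
# items and the document dictionary are hypothetical examples.
checklist = [
    ("Are all security procedures described?", "security_procedures"),
    ("Is a training section present?",          "training"),
    ("Are the user manuals referenced?",        "manuals"),
]

document = {"security_procedures": True, "training": False, "manuals": True}

for question, key in checklist:
    status = "OK" if document.get(key) else "FINDING"
    print(f"{status:8} {question}")
```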
In More Detail
A Paper End-to-End Test
End-2-end testing is one of the most difficult tests there is. Functionally, everything
may work fine separately, but in the end-2-end test all the systems in the complete process
have to be linked with each other. A small mistake in one of the constituent systems and
everything goes wrong, so everything really has to be right. Unfortunately, some systems
(read "departments") sometimes speak a different language with each other: when "System X"
delivers something other than what "System Y" expects, there is a mismatch in the content.
In other words, the (functional) specifications have not been properly discussed with each
other; it all comes down to communication!
Now how do we solve this? One way is to run a paper end-2-end test as a static test
in preparation for your end-2-end test. What is a paper end-2-end test? All parties
sit down with each other and talk through the process that takes place within the
systems. Creating a transparent flow of the process with all parties present yields a number
of advantages, namely:
––Possible mismatches in the specifications are found;
––These mismatches are transparent and can be discussed directly;
––Specifying the end-2-end test becomes a lot simpler, because the actual process has been
made visible to all parties, and
––Everyone approaches the end-2-end test with the same mindset, making communication
better and easier.
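To make the System X / System Y mismatch described above concrete, the following illustrative Python sketch compares two hypothetical interface specifications; the field names and formats are invented for the example.

```python
# Illustrative only: comparing the (hypothetical) interface specifications of
# "System X" (what it delivers) and "System Y" (what it expects) to surface
# content mismatches before the real end-2-end test.
system_x_delivers = {"customer_id": "string", "order_date": "DD-MM-YYYY", "amount": "decimal"}
system_y_expects  = {"customer_id": "string", "order_date": "YYYY-MM-DD", "amount": "decimal",
                     "currency": "string"}

missing = set(system_y_expects) - set(system_x_delivers)
mismatched = {f for f in system_x_delivers
              if f in system_y_expects and system_x_delivers[f] != system_y_expects[f]}

print("Missing fields:   ", missing)      # {'currency'}
print("Format mismatches:", mismatched)   # {'order_date'}
```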
Who Tests?
Anyone can do testing. Who actually does the testing is partly determined by the role or
responsibility held by someone at a given time. This often concerns representatives from
development, users and/or management departments. Besides these, testing is carried out
by professional testers, who are trained in testing and who often bring a different perspective to testing. Where, for example, a developer wants to demonstrate that the software
works well (“Surely I’m capable of programming?”), the test professional will go in search
of defects in the software. Moreover, a test professional is involved full-time in testing,
while the aforementioned department representatives in many cases carry out the testing
as a side issue. In practice, the mix of well-trained test professionals and representatives
from the various departments leads to fruitful interaction, with one being strong in testing
knowledge and the other contributing much subject or system knowledge.
Test and System Development Process
The test and system development processes are closely intertwined. One delivers the products, which are evaluated and tested by the other. A common way of visualising the relationship between these processes is the so-called V model. A widely held misunderstanding is
that the V model is suited only for a waterfall method. But that misrepresents the intention
behind the model. It is also eminently usable with an iterative and incremental system development
method. Therefore, with such a method, a V model can be drawn, for example, for
each increment. Many situations are conceivable that influence the shape and the specific
parts of the V model. A few situations are shown in the box “Influences on the V Model.”
With the help of the V model, the correlation between test basis, evaluation and testing
(test levels) is explained in this and the following subsection.
In More Detail
Influences on the V Model
The form and specific parts of a V model can vary through, for example:
––The place of the testing within the system development approach.
–– Using a waterfall development method with characteristics including: construction of the system in one go, phased with clear transfer points, often a lengthy
cyclical process (SDM, among others).
–– Using an incremental and iterative development method with the following possible characteristics: constructing the system in parts, phased with clear transfer points; short cyclical process (DSDM and RUP, among others).
–– Using an agile development method characterised by the four values: individuals and
interactions over processes and tools, working software over comprehensive documentation,
customer collaboration over contract negotiation, and responding to change over following
a plan (eXtreme Programming and Scrum, among others).
––The place of testing within the life cycle of the information system.
–– Are we looking at new development or the maintenance of a system?
–– Does this involve the conversion or migration of a system?
––A self-developed system, a purchased package, purchased components, or distributed systems.
––The situation whereby (parts of) the system development and/or (parts of) the testing are outsourced (outsourcing and off-/near shoring, among other things).
Left Side of the V Model
In Figure 2.3, the left-hand side shows the phases in which the system is built or converted
from wish, legislation, policy, opportunity and/or problem into the realised solution. In
this case, the left-hand side shows the concepts of requirements, functional and technical
designs and realisation. The exact naming of these concepts depends on the selected
development method, but precise naming is not required in order to indicate the relationship between
the system development and test process at a general level.
Figure 2.3 The left side of the V model
Evaluations
During the system development process, various interim products are developed. Depending on the selected method, these take a particular form and content and have a particular relationship with
each other, and they can be evaluated on these aspects.
Definition
Evaluation is assessing the intermediary products in the system development process.
In the V model, the left-hand side shows which interim products can be evaluated (against
each other). In evaluation, the result can be compared with:
•The preceding interim product. For example, is the functional design consistent with the
technical design?
•The requirements from the succeeding phase. For example, can the builder realise the
given design unambiguously and are the specifications testable?
•Other interim products at the same level. For example, is the functional design consistent
internally and with functional designs related to it?
•The agreed product standard. For example, are there use cases present?
•The expectations of the client (see box “Realised Requirements”). Is the interim product
still consistent with the expectations of the acceptors?
Various techniques are available for the evaluation: reviews, inspections and
walkthroughs (see also "TMap NEXT" [TMapNEXT, 2006]).
In More Detail
Realised Requirements
What about the trajectory of wish, legislation, etc., to product? Will, for example,
all the requirements be realised, or will something be lost along the way? A survey
carried out by the Standish Group unfortunately shows a less than encouraging
picture. The findings of the survey (see Figure 2.4), in which the percentage of
realised requirements was determined, show that only 42% to 74% of the originally defined
requirements are actually realised by the project [The Standish Group, 2011].
Besides normal evaluation results (the finding of defects), a well-organised and well-executed
evaluation process can contribute to a higher realisation percentage of the originally
defined requirements.
Figure 2.4 Realized requirements
Test Levels and Responsibilities
In a system development phase, a separation can be made between the responsibilities of
client, user, manager and system administrator on the one hand and system developer and
supplier on the other. In the context of testing, the first group is collectively known as the
accepting (requesting) party and the second group as the supplying party. Other concepts
that are also mentioned in this connection are the demand and supply organisations. At
a general level, there are two possible aims in testing:
•The supplying party demonstrates that what should be supplied actually is supplied.
•The accepting party establishes whether what has been requested has actually been
received and whether they can do with the product what they want to/need to do.
Right Side of the V Model
In Figure 2.5, a horizontal dotted line indicates this (formal) separation. In practice, the separation is less concrete and system developers will be called in for the information analysis
and the setting up of the specifications, and will also provide support during the acceptance
test. The expertise of users and administrators will also be employed in the building activities. It is important to define the various responsibilities clearly. This certainly applies to the
testing. Who acts as the client of a test, who accepts it, who wants advice on the quality
and who will test what, and when?
Testing takes place at the right-hand side of the V model. Here, a distinction is often
made within the supplying party between testing by the developer and testing by the
project/supplier:
•Testing by the developer. For example, by a programmer, technical designer or engineer.
•Testing by the project/supplier. For example, by a project organisation, a software or
package supplier, or a maintenance organisation.
In practice, this distinction in (test) responsibilities is translated into the grouping of test
activities into test levels.
Definition
A test level is a group of test activities that are managed and executed collectively.
For every phase of construction, there are one or more test levels. A common misconception
is that the test level rectangles in the V model represent phases of the system development
process. In fact, they represent the test execution (the measuring phase) of a test level.
Figure 2.5 places the development and system tests under the responsibility of the supplying
party and the acceptance tests under the responsibility of the accepting party:
•Development tests: tests in which the developing party demonstrates that the product
meets the technical specifications, among other things.
•System tests: tests in which the supplying party demonstrates that the product meets
the functional and non-functional specifications and the technical design, among other
things.
•Acceptance tests: tests in which the accepting party establishes that the product meets
expectations, among other things.
Figure 2.5 The right side of the V model
Although the test level is a much-used concept, in practice people often have difficulty in substantiating it. It does not appear to be possible to designate a definitive set of test levels. Even within
one company, it is often impossible to define one set that should be used in every project.
In this book, we refer to the three test levels mentioned above. In specific cases, in order to
describe a certain case appropriately, these test levels could be further subdivided.
As mentioned previously, there is no such thing as one standard set of test levels. This is
simply because it strongly depends on the organisation, the project and even the individual. But of course, there are some indications available for arriving at a relevant test
level categorisation in a particular situation. You can see these indications in TMap NEXT®
[TMapNEXT, 2006].
Test Basis and Test Levels
A test level is aimed at demonstrating the degree to which the product meets certain
expectations, requirements, functional specifications or technical specifications. System
documentation is often used here for reference. In certain situations, usually in cases
of migration operations, the current production system may also serve for reference. If
there is little, no, or only obsolete system documentation available, the knowledge of, for
example, the end users and product owners may be used for reference. There are many
sources of information that can be used for reference in testing. The general term for this
is “test basis.”
Definition
The test basis is the information that defines the required system behaviour.
This is used as a basis for creating test cases. In the event that a test basis can only be
changed via the formal change procedure, this is referred to as a “fixed test basis.”
Figure 2.6 Test basis in the V model
Figure 2.6 uses arrows with the text "Input for" to indicate which information sources can
be used in which test level as a basis in order to derive test cases. From the model it also
appears that it is possible that the same test basis is being used in two test levels. This
often happens when there are two different parties carrying out a test according to their
individual responsibilities. In the illustrated model, a functional design is used as test basis
by the supplying party to demonstrate to the accepting party, for example, that the system
complies with this design. However, the accepting party uses the same functional design
as a test basis in order to check whether the system supplied actually complies with this
design.
It is obvious that in such a situation, there is a chance of duplicate testing being carried
out. This can be a conscious and perfectly valid decision, but it can be equally justifiable
to combine certain test levels.
Test Types
During the testing of a system, various types of properties can be looked at—the so-called
quality characteristics. Examples of these are functionality, performance and continuity.
At a detailed level, however, a certain quality characteristic may be too general, making
it difficult to work with in practice. The quality characteristic should then be cast in a form
that is easier for the tester to use. In the example of functionality, risks are conceivable in
the area of interfaces, in- and output validations, relationship checks, or just the processing.
With performance, we could look at load and/or stress risks. And in the example of continuity, there is the importance of backup, restore and/or failover facilities. This
commonly used form of quality characteristics is called the “test type.”
Definition
A test type is a group of test activities with the intention of checking the information system in respect of a number of correlated (part aspects of) quality characteristics.
On the website http://www.tmap.net, you will find a list of a number of common test types.
This list is not exhaustive and will vary from test project to test project, and from organisation to organisation.
An odd one out among these is the "regression" test type: a test type is, after all, intended
to provide detailed information on the specific risks with regard to the relevant quality
characteristics, while regression, on the contrary, is a rather general term and in fact names
a specific risk in itself.
Definition
Regression is the phenomenon that the quality of a system deteriorates as a whole
as a result of individual amendments.
Definition
A regression test is aimed at verifying that all the unchanged parts of a system still
function correctly after the implementation of a change.
Often, the establishment of whether a regression has taken place is an aim in itself. It is
therefore better to pay some attention to it here, where the distribution of quality characteristics across test levels is being considered in general terms.
When filling in the detail, thought could be given to what is meant by “correct functioning” in the above definition. Does this concern, for example, functionality, performance
or failover facilities? In fact, all the quality characteristics and test types can be part of a
regression test.
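As an illustration (not prescribed by TMap), a small regression suite in Python might look like this; the functions and their expected values are hypothetical, and the point is that the checks on the unchanged parts are re-run after a change elsewhere.

```python
import unittest

# Hypothetical system under test: one function was just changed (discount),
# the others are unchanged. The regression test re-runs the checks on the
# unchanged parts to verify they still function correctly after the change.
def vat_amount(net): return round(net * 0.21, 2)      # unchanged
def shipping_cost(weight): return 5.0 + 0.5 * weight  # unchanged
def discount(total): return total * 0.10 if total > 100 else 0.0  # changed

class RegressionSuite(unittest.TestCase):
    def test_vat_unchanged(self):
        self.assertEqual(vat_amount(100.0), 21.0)

    def test_shipping_unchanged(self):
        self.assertEqual(shipping_cost(10), 10.0)

if __name__ == "__main__":
    unittest.main()
```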
What is Structured Testing?
In practice, it seems that testing is still being carried out in an unstructured manner in
many projects. This section, besides citing a number of disadvantages of unstructured testing and advantages of structured testing, also cites a few characteristics of the structured
approach.
Disadvantages of Unstructured Testing
Unstructured testing is typified by a disorderly situation, in which it is impossible to predict the test effort, to execute tests feasibly or to measure results effectively. This is often
referred to as “ad hoc testing.” Such an approach employs no quality criteria in order
to, for example, determine and prioritise risks and test activities. Neither is a test-design
technique employed for the creation of test cases. Some of the findings that have resulted
from the various studies of structured and unstructured testing are:
•Time pressures owing to:
–– absence of a good test plan and budgeting method
–– absence of an approach in which it is stated which test activities are to be carried
out in which phase, and by whom
–– absence of solid agreements on terms and procedures for delivery and reworking
of the applications.
•No insight into, or ability to supply advice on, the quality of the system due to:
–– absence of a risk strategy
–– absence of a test strategy
–– test design techniques not being used, therefore both quality and quantity of the
test cases are inadequate.
•Inefficiency and ineffectiveness owing to:
–– lack of co-ordination between the various test parties, so that objects are potentially
tested more than once, or even worse: not tested at all
–– lack of agreements in the area of configuration and change management for both
test and system development products
–– the incorrect or non-use of the—often available—testing tools
–– lack of prioritisation, so that less important parts are often tested before more
risk-related parts.
Figure 2.7 A structured test approach
Advantages of a Structured Testing Approach
So what are the advantages, then, of structured testing? A simple, but correct, answer to
that is that in a structured approach, the aforementioned disadvantages are absent. Or,
put positively, a structured testing approach offers the following advantages:
•it can be used in any situation, regardless of who the client is or which system development approach is used;
•it delivers insight into, and advice on, any risks in respect of the quality of the tested
system;
•it finds defects at an early stage;
•it prevents defects;
•the testing is on the critical path of the total development as briefly as possible, so that
the total lead time of the development is shortened;
•the test products (e.g. test cases) are reusable, and
•the test process is comprehensible and manageable.
Features of the Structured Testing Approach
What does the structured testing approach look like? In general, it can be said that a
structured testing approach is typified by:
•Providing a structure, so that it is clear exactly what has to be done, by whom, when and
in what sequence.
•Covering the full scope and describing the complete range of relevant aspects.
•Providing concrete footholds, so that the wheel needn’t be reinvented repeatedly.
•Managing test activities in the context of time, money and quality.
3
The Business in Charge of IT:
The Cloud
The growth of cloud-based computing is outstripping even the most
optimistic predictions. It’s early 2011 and almost all forecasts of “the”
most important IT technologies name cloud computing in their Top 3.
These forecasts and studies have contributed to hype around cloud
computing that is reaching C-level executives. Regrettably, too many
times cloud is perceived as an IT matter, a matter that is best left to
infrastructure technicians. The truth is that the cloud is not an IT opportunity, but a strategic business opportunity. It creates the ability for the
business to be in complete charge of IT and change from Information
Technology (IT) to Business Technology (BT). But the cloud market is still
at very early stages and will continue to grow and evolve.
The Moment of Transformation for IT
The latest recession has shown that a crisis can hit on a global scale and that companies have
to react. Obtaining financing was an issue during this financial crisis [Hoenig, 2009]; it was
triggered by a liquidity shortfall in the United States' banking system and resulted in the
collapse of large financial institutions, the bailout of banks by national governments, and
downturns in stock markets around the world. It contributed to the failure of key businesses,
declines in consumer wealth, estimated in the trillions of Euros, substantial financial commitments incurred by governments, and a significant decline in economic activity.
Within this context, businesses needed to save on costs, wherever possible! And they had to
do this in the new world of business—characterized as international, connected and chaotic.
As a result, company CxOs are seeking alternatives to reduce costs, improve service, and
manage risks. In short, they are looking at the cloud.
Figure 3.1 From LAN applications to cloud services [Hoenig, 2009]
IT has changed over the last 20 years, as Figure 3.1 shows, transforming from LAN applications to cloud services. The cloud growth is based on a compelling value proposition: speed
to market, agility to bring forward or retire services, the chance to move expenditure from
capital expenditures (CapEx) to operational expenditures (OpEx), and a range of opportunities for reducing costs—the last mentioned is likely to be a key incentive for cloud growth!
In this context, the cloud is not only changing the way people do technology, it is changing
the way people do business. It is cloud computing that is changing business, changing
products and services, changing markets and even changing innovation itself. It all needs
to be flexible!
Product to “Whatever as a Service”
As a result of this flexibility, "Whatever as a Service" solutions are created: not just products,
but complete services. A service is based on continuity, recurrence and trust, not on a
"sell once" principle. As vendors want to keep their clients, they need to sell a service that
is of added value to them. These services are to be integrated into the currently available
enterprise architecture, and, as these services are systems by themselves, they become a
system of systems [SeizeTheCloud, 2011]. All systems are connected to each other to perform
a service to the business.
Information Technology to Business Technology
It must be clear that the cloud should not be a goal by itself; it enables IT to be part of the
business and do what it was created for—to support the business when needed, ensuring that technology aids the business. Business decisions are becoming more and more
dependent on technology. What happens when an email system goes down? Or the inquiry
database is corrupted? A lot of business processes, maybe even all of them, are so technology-enhanced that business and technology are interwoven.
In the future companies will move from Information Technology to Business Technology,
where technological innovation and business innovation are synonymous. Technology can
bring direct business opportunities. This is not a long-term view, it’s something that is
happening now. People and companies are becoming smarter in using technology; for
example using social media has completely changed marketing, advertising and even
news and public opinion. Other companies have outsourced part of their IT or have no
internal IT at all.
The Cloud Era
What is the Cloud Era? To understand this, it’s important first of all to know what the cloud
is. According to the National Institute of Standards and Technology (NIST), cloud computing is defined as:
Definition
Cloud computing is a model for enabling convenient, on-demand network access
to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with
minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service
models, and four deployment models [NIST, 2009b].
Cloud services are derived from cloud computing. Where cloud computing is broadly a
technology infrastructure model, cloud services are more of a utility business model. Cloud
services are enabled by cloud computing; cloud computing is hidden underneath the business or consumer service, and most of the time is maintained by the party that offers the
service.
IDC provides the following framework to distinguish between cloud computing and cloud
services:
Definition
Cloud computing is an emerging IT development, deployment and delivery model,
enabling real-time delivery of products, services and solutions over the Internet
(i.e., enabling cloud services) [IDC, 2010].
Definition
Cloud services are consumer and business products, services and solutions that
are delivered and consumed in real-time over the Internet [IDC, 2010].
In short, a cloud service is virtually any business or consumer service that is delivered and
consumed with the use of Internet technology in real-time. Cloud computing, an important,
but much narrower term, is the IT environment—encompassing all elements of the full
“stack” of IT and network products (and supporting services)—that enables the development, delivery and consumption of cloud services
In this book, we use the term "cloud services" when referring to the business model and the
term "cloud computing" when referring to the infrastructure model. The NIST definition of
cloud computing given above is also the one used in this book.
The Cloud Era will emerge over the coming years as a separate business model for how we
do business, not how we do IT. In the Cloud Era, IT will become more of a commodity, like tap water
that can be turned on and off when needed.
What Makes a Cloud a Cloud?
It is the characteristics of the cloud which make it possible to create Business Technology.
But what are these characteristics? In other words—what makes a cloud a cloud? To answer
this question, it’s first of all important to recognize that no one expert agrees on all the
characteristics. Figure 3.2 shows the most used characteristics, which are based on the five
characteristics of the National Institute of Standards and Technology [NIST, 2009b] and
three added characteristics. These are outlined below.
On-demand self-service. Cloud computing resources can be procured and disposed of by
the consumer without human interaction with the cloud service provider. This automated
process reduces the personnel overhead of the cloud provider, cutting costs and lowering
the price at which the services can be offered.
Resource pooling. By using a technique called “virtualization,” the cloud provider pools
his computing resources. This resource pool enables the sharing of virtual and physical
resources by multiple consumers, “dynamically assigning and releasing resources according to consumer demand” [NIST, 2009b]. The consumer has no explicit knowledge of the
physical location of the resources being used, except when the consumer requests to limit
the physical location of his data to meet legal requirements.
Broad network access. Cloud services are accessible over the network via standardized
interfaces, enabling access to the service not only by complex devices such as personal
computers, but also by light weight devices such as smart phones.
Rapid elasticity. The available cloud computing resources are rapidly matched to the actual
demand, quickly increasing the cloud capabilities for a service if the demand rises, and
quickly releasing the capabilities when the need drops. This automated process decreases
the procurement time for new computing capabilities when the need is there, while preventing an abundance of unused computing power when the need has subsided.
Measured service. Cloud computing enables the measuring of used resources, as is the
case in utility computing. The measurements can be used to provide resource efficiency
information to the cloud provider, and can be used to provide the consumer a payment
model based on “pay-per-use.” For example, the consumer may be billed for the data transfer volumes, the number of hours a service is running, or the volume of the data stored
per month.
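A small illustrative pay-per-use calculation could look like this; the tariffs and usage figures below are hypothetical and not taken from any provider's price list.

```python
# Illustrative pay-per-use calculation; tariffs and usage are hypothetical.
TARIFFS = {
    "data_transfer_gb": 0.12,   # per GB transferred
    "instance_hours":   0.08,   # per hour a service is running
    "storage_gb_month": 0.05,   # per GB stored per month
}

usage = {"data_transfer_gb": 250, "instance_hours": 720, "storage_gb_month": 100}

bill = sum(TARIFFS[item] * quantity for item, quantity in usage.items())
print(f"Monthly bill: EUR {bill:.2f}")  # EUR 92.60
```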
3rd party integration. As cloud services are created with a single focus and are highly
standardized, one single service doesn't cover the full service portfolio of any consumer.
Specialized services are created to cover that one part and to integrate with other providers.
This 3rd party integration through standardized methods leads to an integrated solution
based on multiple services.
Figure 3.2 Characteristics of the cloud and cloud services,
based on NIST definitions and Sogeti research
Multi-tenancy. Enables sharing of resources and costs across a large pool of users thus
allowing for centralization of infrastructure in locations with lower costs (such as real
estate, electricity, etc.), peak-load capacity increases (users need not engineer for highest
possible load-levels), and utilization and efficiency improvements for systems that are often
only 10–20% utilized.
Device and location independence. This feature enables users to access systems using a
web browser, regardless of their location or what device they are using (e.g., PC, mobile). As
infrastructure is off-site (typically provided by a third-party) and accessed via the Internet,
users can connect from anywhere.
What Is Not a Cloud?
So far, the hype surrounding the cloud has mostly stayed within IT, but it will soon jump over
to the wider business, bringing with it some concerns around the use of the cloud. These concerns
refer to "cloud washing" in the industry, whereby companies re-label their existing products
as cloud computing, resulting in a lot of marketing innovation on top of the real innovation.
The result: more overblown hype surrounding cloud. The cloud may be seen as over-hyped
and misunderstood in the short term, but it is revolutionary in the long term, representing
more of a gradual shift in our thinking about computer systems and not a sudden transformational change. This is explained by Forrester Research VP Frank Gillett in an interview
he did in 2008 [Gillett, 2008].
There is even a “Cloud Washing 101” [Glauser, 2010] available for “wannabe cloud washlings,” which shows the following playbook:
•Create a new consulting service to help cloud customers to feel assured.
•Dust off old CD boxes, apply white-out liberally, change words like “network”, “storage”
and “software” to “cloud” and “SaaS.” Give no thought to the actual meaning of any
terms.
•Form an industry consortium for the purpose of gathering luddites in the spirit of creating standards that will save the dinosaurs.
•Announce cloudy partnerships simply because a fusion of synergy always means
1 + 1 = 3.
•Call everything a cloud service just for effect, even if it is simply a basic IT service. Cloud
is a much bigger tent than any of us ever realized.
•Convince customers they need cloud management tools in addition to the management tools they already have. Apparently the cloud broke the old tools, or was that the
Internet’s fault?
Adopting the Cloud: A New Global Infrastructure
As the cloud evolves, and cloud service adoption becomes ever more wide-ranging, a new
global infrastructure is being created. This infrastructure can easily be connected to traditional infrastructure (including legacy systems), as shown in Figure 3.3. But it is not just
for business IT assets that the cloud removes previous limitations. It does the same from a
software or application testing perspective, removing the typical constraints presented by
having to test on client-owned or internal resources. A cloud infrastructure creates significant new opportunities for software quality assurance and testing. It enables convenient,
on-demand network access to a shared pool of configurable computing resources. Its greatest
benefits lie in the scalability, the ability to reduce costs, greener IT, location-independent
access and an instantly deployable infrastructure.
Figure 3.3 Connecting cloud environments with traditional environments
How is a cloud scalable? It’s a bit technical, but clouds utilize the option to distribute the
usage over multiple servers (evenly). This enables the infrastructure, platforms and software to be distributed over these different servers, thus enabling the opportunity to create
scalable services. And businesses are waking up to the fact that scalability is as important
to their success as access to capital and markets. They usually don't know how important
scalability is until their systems fail. When customers and revenue are lost, they then
look into the underlying causes and find their systems don’t really have the performance
capability they thought they did. Unfortunately cloud platforms are not a magic solution
to scalability, as they are not infinite, but the cloud makes it feasible and affordable to
rescale by powers of ten rather than multiples of two, which obviously helps to overcome
performance bottlenecks for applications.
Lowering Total Cost of Ownership: No More Maintenance
The cloud promises cost reductions by lowering maintenance costs and the Total Cost of
Ownership (TCO). But how? The use of standardized services can create lower maintenance
costs. Multiple versions or types of software generate the need for maintenance for all
of these versions or types. When only one standard application or service is used to provide
the needed functionality, the amount of maintenance will decrease significantly. The installation of cloud applications is easier, since they don’t have to be installed on each user’s
computer. And as only one version of the application is used, it is also easier to support
and to improve, since the changes reach the clients instantly.
Another opportunity to reduce the maintenance costs is a more effective utilization of
resources. Most systems are only utilized at 10-20%, and when multi-tenancy is applied to
resources, their efficiency rate can go up. As Figure 3.4 shows, instead of using a separate
environment for each project, all projects can be run on one environment, with utilization peaking
at around 13 and bottoming out at around 5, rather than each project having more (or less) capacity available than it needs.
Multi-tenancy can also provide centralization for infrastructure in locations where the costs,
such as real estate, electricity, etc, are lower.
Figure 3.4 Environment utilization rate ‘traditional’ vs. cloud
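The effect can be illustrated with a small calculation; the load figures and capacities below are invented for the example and are not the figures behind Figure 3.4.

```python
# Illustrative only: comparing average utilization of separate, dedicated
# environments with one shared (multi-tenant) environment. The load figures
# are made up for the sake of the calculation.
project_loads = [2, 1, 3, 2, 1]        # average load per project (arbitrary units)
dedicated_capacity = 10                # each project gets its own environment of size 10

dedicated_util = sum(project_loads) / (dedicated_capacity * len(project_loads))
shared_capacity = 15                   # one shared environment sized for the combined peak
shared_util = sum(project_loads) / shared_capacity

print(f"Dedicated environments: {dedicated_util:.0%} average utilization")  # 18%
print(f"Shared environment:     {shared_util:.0%} average utilization")     # 60%
```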
When all costs for the service, used on a per-need basis, are shared with many other subscribers in the cloud, this creates the flexibility to lower TCO of the services compared to
traditional usage of a product.
The higher efficiency ratio of resource utilization with cloud computing offers another benefit: it makes a greater contribution to the environment than normal data centers. By sharing
cloud resources for their test infrastructure, businesses will use IT resources “on demand”
and therefore eliminate waste. In addition, clients using cloud data centers can minimize
energy use and deliver environmental savings in CO2 of up to 55%.
As cloud services are delivered online, they provide location-independent
access to these services, for example the use of cloud-based storage providers like Dropbox
or "Office" applications such as Google Apps and Microsoft Office 365. This enables working
with "The New World of Work." You can log in to these services from wherever and, as they are
available 24/7, whenever. Just make sure you have an Internet connection!
Cloud computing is Internet-based computing. This generates an infrastructure in the cloud,
which can look like the one in Figure 3.5. All the major cloud service providers, such as Amazon, Rackspace Cloud, Salesforce, Skytap, IBM, Microsoft and Google, offer
this as a service to clients (see Figure 3.5). This provides us with the opportunity to instantly
deploy the needed (test) environments. When virtualized parts of the infrastructure are incorporated in the cloud, they can be used on demand. This not only makes the implementation
easier but also the execution of the different instances in the cloud. The virtualized parts of
the infrastructure can be added or removed from the cloud whenever needed, creating a
flexible option to create infrastructure (including the needed configuration).
Figure 3.5 Who is offering the cloud?
Naturally, there is an understandable nervousness about this new approach and questions
are being asked about integration, security and implementation. But these challenges are
outweighed by the advantages, as the cloud gives businesses the opportunity to free themselves
from issues relating to the limitations caused by the internal availability of IT resources (hardware, applications and services), and enables a more effective way to collaborate.
The Creation of Business Technology
Cloud computing has evolved to cloud services and into “the Cloud.” But why has an IT development led to a business service? This is because IT has not lived up to its role in supporting
the business. Over the last decade it has mostly been a cost to the business, slow to react to the
business needs and seemingly trying to shape technology execution to what IT wanted or
what was good enough, rather than what the business needed. Today the characteristics of
the cloud can make cloud an aid to business and turn IT into BT. Business Technology is the
conjunction between IT, business models and the consumers, see Figure 3.6. The technology
aids the business to manage its relationship with the market.
Figure 3.6 Business Technology: where IT, business and consumers join
Business Technology = IT as a Commodity?
Although the cloud promises in part that technology will become more business-driven, it is still
based on the interaction between business and technology. The cloud shifts IT from a
separate entity to something embedded in everyday business.
Technology is part of everyday life, in business as much as in our personal life. Take away
software, networks and the Internet from any organization and it would come to a standstill
almost immediately. This is more so now than ever before. In the recent past, we could probably
manage without technology for a day or two. But today it's more than just a greater
dependency: we are no longer doing the same things. Supported by technology, we are
starting to do new things that we simply could not do without that technology.
But we have also got used to software as a normal and even standard part of daily life.
Some types of software are so intrinsic to everyday life that we cannot function without it,
like email. As these software products are standardized applications and have found their
way into all parts of our organization, the minute they fail, they show us how much we
need them, and CIOs can get fired as a result. Looking closer at what we now demand of
Information Technology, it becomes clear that cloud is a good model for provisioning and
paying for technology. It will give a boost to whatever we are doing.
However despite all these developments, we are not yet fully realizing the opportunities
that exist today. It is easy to be blinded by the small everyday issues, so that the larger
goal stays out of reach. Now that businesses and public organizations are “wired” and the
cloud has emerged as a collection of Internet platforms and tools for connecting, integration and sharing of data and processes, it has become possible to think about the bigger
issues that are going unaddressed. IT will become a commodity that can be turned on and
off when needed!
4
“Whatever” as a Service:
Cloud-Enabled Software Testing
as a Service
As the importance of IT testing is growing, demand for services is also
growing. Businesses want a service to provide them with on-demand,
pay per use testing, but still against the correct quality. Some important drivers for this are:
•Business demands are higher and expectations are for “first time right”
software launches;
•Legislation and regulations (e.g. SOX, SAS 70, Basel II/III and the Clinger-Cohen Act) place
stronger demands on quality assurance and test processes;
•Mergers, chain integrations, globalization and technological developments lead to more complex IT chains, and
•The business demands swift, flexible, high-quality and cost-effective IT services that contribute to the business processes.
IT has become a utility. The business departments demand guarantees from IT services that
IT implementations will not threaten business continuity. The business department demands
a test process that clearly demonstrates that requirements have been sufficiently met, and
that risks for deployment are acceptable. Testing will become a utility also.
Test service providers who can offer the test process as demanded by the business departments will be very successful, especially if these providers always make the client's objectives their highest priority and commit to focusing primarily on the success of the client's business,
moving the testing of IT towards the testing of BT. A robust and successful collaboration
with the client is founded on the skills of the providers’ test professionals, highly industrialized test processes, open communication, flexible use of tooling and infrastructure, and full
transparency regarding objectives, measurable results, responsibility, operation procedures
and costs. The model to support this all is called: Software Testing as a Service.
The cloud is divided into three types of layers, named IaaS, PaaS and SaaS. Software Testing as a Service is executed as part of all three of these layers as it can be used to test the
infrastructure, platform and software, or all combined, see Figure 4.1.
Figure 4.1 STaaS and its effect on the three different cloud layers
Cloud Service Models: “Whatever” as a Service
Infrastructure as a Service (IaaS)
This is the foundation of cloud computing as we now understand it—the remote provision
of processing capacity, storage, networking and other basic computing resources. An organization simply contracts for the necessary amount of virtual infrastructure and installs
its operating system, applications and data. Depending on the cloud provider and commercial model, these resources can be highly flexible and scalable, scaling and billing
according to the client's demand and load. On one hand, this approach gives the client an
extraordinary amount of freedom, but on the other hand, the client needs to exercise close
management. Availability and reliability of the rented infrastructure, including failover
and data protection, are the responsibility of the cloud provider.
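A minimal sketch of such on-demand provisioning, assuming a purely hypothetical provider API (the endpoint, fields and key are placeholders, not a real interface), could look like this:

```python
import requests

# Illustrative sketch of on-demand IaaS provisioning. The endpoint, API key
# and payload fields are hypothetical placeholders, not a real provider API.
API = "https://api.example-cloud-provider.com/v1"
HEADERS = {"Authorization": "Bearer <api-key>"}

def provision_server(cpu_cores: int, memory_gb: int, image: str) -> str:
    """Request a virtual server and return its (hypothetical) resource id."""
    payload = {"cpu_cores": cpu_cores, "memory_gb": memory_gb, "image": image}
    response = requests.post(f"{API}/servers", json=payload, headers=HEADERS)
    response.raise_for_status()
    return response.json()["server_id"]

def release_server(server_id: str) -> None:
    """Dispose of the server again when it is no longer needed (pay-per-use)."""
    requests.delete(f"{API}/servers/{server_id}", headers=HEADERS).raise_for_status()
```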
Platform as a Service (PaaS)
For this service, the cloud provider offers a programming platform and tools for the client
to develop its own applications. This development platform is mostly based on .NET or
Java, extended with cloud-specific services. As such, it has great appeal for organizations
who want to build their own software, without the need or costs of building and running
their own data centers. In addition, they benefit from effectively outsourcing the management and maintenance of the underlying cloud infrastructure (network, servers, operating
systems and storage).
Software as a Service (SaaS)
Here the cloud provider offers complete applications for end users, created on PaaS. Only
the service is paid for, normally on a unit cost basis. The software does not run locally on
a machine but in the cloud and is delivered via a browser. The most well-known examples
of this are Google Apps and Salesforce.com. The client only needs to very lightly
configure the application before its immediate use.
The downside of this simplicity is that the client has very little influence on how the application works or on its future development life cycle. The amount of control on the development
life cycle is governed by the SaaS provider, acting on behalf of a massive client base, passing
on to those clients the economies of scale and investment efficiencies. Here, as elsewhere,
clients benefit from moving spend out of CapEx and into OpEx, while retaining an ability
to move as fast as the marketplace in which the SaaS sits.
Software Testing as a Service (STaaS)
Software Testing as a Service (STaaS) is a model for software testing used to test an application as a service provided to clients across the Internet. By eliminating the need to test
the application on the client’s own infrastructure and equipment with testers on site, STaaS
alleviates the client’s burden of installing and maintaining test infrastructure, test tooling,
sourcing, and (test) support. Using STaaS can also reduce the costs of testing through less
costly, on-demand pricing.
From the STaaS provider’s standpoint, STaaS has the attraction of providing stronger protection of its test approach and establishing an ongoing revenue stream. The STaaS provider
can test the application on its own server or, even better, on a cloud infrastructure. This
way, the client can reduce their investment in server hardware too.
Drivers for STaaS Adoption
The traditional rationale for test outsourcing is that by applying economies of scale to the
testing of applications, a test service provider can test better, cheaper and faster than companies can themselves. STaaS could be the next step in test outsourcing. Several important
changes made to the way we work could make a rapid acceptance of STaaS possible:
•The testing industry has matured into a standard practice. In the past, executives
viewed corporate test centers as strategic investments. Today, people consider testing
to be a cost center and, as such, it is suitable for cost reduction and outsourcing. As IT has
become a commodity, so has testing!
•Testing by companies themselves is expensive. In-house testing activities require
expensive overhead including, for example, salaries, health care, liability, and physical
building space.
•Standard test approaches are available. With some exceptions, testers can use a
standard test approach to test any application. For instance, TMap NEXT® [TMapNEXT,
2006] can be used for multiple types of applications.
•A specialized testing provider can target global markets. A testing provider specialized in testing widespread applications (packages) can more easily reach the entire
user base.
•Security is sufficiently well trusted and transparent. With the broad adoption of SSL,
VPN and Citrix, testing providers have a secure way of accessing the applications under
test. This still allows the environments to remain isolated from each other.
•Wide Area Network’s bandwidth has grown drastically. In addition to network quality of service improvement, it is now possible for testing providers to successfully and
consistently access remote locations and applications with low latencies and acceptable
speeds.
•The cloud makes computing power available when needed.
Cloud computing enables us to create on-demand test infrastructure. The client is reassured that (test) infrastructure is no longer scarce and is not a mysterious environment
that must be carefully managed. Test infrastructure has the capacity and flexibility to
be quickly increased and decreased without upfront investment, moving expenditure
from CapEx to OpEx.
•Testing tools become available on an on-demand, pay-per-use basis. Cloud provides
the opportunity and flexibility to use those tools as and when needed. Tests that need
specific tools can be executed at the moment that they are needed and only paid for
when being used.
How Does That Work: The STaaS Process
Software Testing as a Service is a model of software testing in which an application is
tested as a service provided to clients across the Internet. The process is that a client has
a test demand, which is sent through the Internet to a STaaS provider. After the testing is
completed, the STaaS provider sends the client a test report (see Figure 4.2).
Figure 4.2 The interaction between the client and STaaS provider: the STaaS process
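As an illustration of what travels between client and provider in Figure 4.2, the following sketch models a test demand and a test report as simple data structures; the fields are hypothetical and only indicate the kind of information exchanged.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative data structures for the client/provider exchange; the field
# names are hypothetical examples, not a prescribed STaaS interface.
@dataclass
class TestDemand:
    application: str
    test_basis: List[str]          # e.g. requirements, use cases
    service_items: List[str]       # e.g. ["Performance Testing", "Security Scanning"]
    deadline: str

@dataclass
class TestReport:
    application: str
    verdict: str                   # e.g. "release advice: go"
    defects_found: int
    details: List[str] = field(default_factory=list)

demand = TestDemand("WebShop 2.1", ["use cases v3"], ["Regression Testing"], "2011-06-01")
report = TestReport("WebShop 2.1", "release advice: go", defects_found=4)
```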
What happened in between? How did the provider deal with the test demand? Did the
provider use a “real-time STaaS” or a “real-enough-time STaaS”? In addition, how did the
provider deal with other challenges like test infrastructure, 24/7 availability and the client/
provider communication?
Real-Time and Real-Enough-Time STaaS: Using a Web Interface
In a real-time STaaS (Figure 4.3), the test demand is implemented without human intervention by the provider. In the ultimate form of the real-time STaaS, a test object (e.g. application software), including test basis (e.g. requirements, use cases, set of heuristics), design
and architecture model, is offered to the STaaS provider. Without human intervention this
is implemented in a (cloud-based) test environment. All the testing is performed by human
simulators against the model and neural network forecasting, and a test report is sent to
the client. Is the scenario above real-time STaaS science fiction? No; with the enablement
of cloud environments and the use of pay-per-use tooling, real-time STaaS is a reality.
Figure 4.3 The real-time STaaS interface
Optional services that are already available are:
•Performance Testing as a Service (PTaaS): The periodical execution of performance measurements or tests of applications from various worldwide locations;
•Security Scanning as a Service (SSaaS): Self-service scanning of applications with various
forms of reporting available;
•Test Cloud Load Testing: The periodical execution of usage loads for various applications
around the world, using the cloud as a load model;
•Regression subscription to periodically check the external and internal links on a web
site. Are the links for instance still working correctly and not broken?
•Regression subscription for application interfaces in a suite of applications. Monitoring
the health and functionality of the application landscape.
•Test Infrastructure in the Cloud: The provisioning of underutilized test infrastructure in
a cloud environment, enabling the creation of multiple environments as needed.
In a real-enough-time STaaS (Figure 4.4), the test demand often requires human intervention in the workflow. The demand is carried out behind the scenes by many humans,
although it appears as if the test demand is carried out by computers. By its very nature,
this introduces a latency and unpredictability to the STaaS process.
Figure 4.4 The real-enough-time STaaS interface
Available services of existing real-enough-time STaaS:
•Work Package broker. Through a formal test demand mechanism, everything about the
assignment is specified in a work package. This includes items such as: what should be
tested, how and when, what criteria should be used, and what knowledge is necessary.
The work packages are stored in a kind of virtual (digital) cupboard. The work package
can be pushed by the Work Package broker to, or pulled by, a tester or work package
team that possesses sufficient means to carry out the work package. The work package
serves as the contract between client and provider (a minimal sketch of such a work package follows this list).
•Managed Testing Services (MTS). MTS is the structured form of a Work Package broker
that is specialized for a particular client or application. Through MTS, the provider takes
full responsibility for test assignments, with clear commitments expressed in KPIs on quality, cost level and time to market. MTS is organized in so-called test lines—an operational
organization to provide test services to one or more clients. A test line has a fixed team
of testers, infrastructure, test tools and standardized work procedures. Every test line
has a permanent key team of testers who ensure continuity and knowledge retention.
There is also a flex team; when the work available on their test line is insufficient, the flex
team is temporarily assigned to other test lines. It is a flexible pool of testers deployed
to test lines with the most work pressure.
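A minimal sketch of such a work package, following the items listed in the Work Package broker description above, could look like this; the field names and values are hypothetical.

```python
# Illustrative work package structure, covering the items named above (what
# should be tested, how and when, which criteria, which knowledge). The field
# names and values are hypothetical.
work_package = {
    "id": "WP-2011-042",
    "what": "Order entry module of WebShop 2.1",
    "how": ["Regression Testing", "End-2-End Testing"],
    "when": {"start": "2011-05-02", "end": "2011-05-13"},
    "criteria": {"entry": "build passes smoke test", "exit": "no open blocking defects"},
    "knowledge_needed": ["TMap NEXT", "order process domain knowledge"],
    "status": "available",   # the broker pushes it, or a test team pulls it
}
```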
Test Infrastructure: In the Cloud
With STaaS it should be possible to test an application from anywhere in the world, regardless of the location of the tester and the client. This requires special attention to the test
infrastructure. Figure 4.5 contains an existing and operational infrastructure used by a
provider.
Figure 4.5 An example of Test infrastructure in the cloud
The tester, either from home or the provider's office, has a remote connection tool on his
or her computer with which to establish a connection to a public, private or hybrid
cloud infrastructure where the System under Test (SuT) is hosted. Based on a certain classification, the tester can log onto the cloud environment just like any other environment. If
a connection with the client’s intranet (or private cloud) is needed, a secure tunnel is set
up. For example, cloud-based virtual workstations connect via a VPN connection to access
the client’s intranet or private cloud. In this way, multiple clients can be connected on the
cloud, each with its own virtual machines, and security is guaranteed.
Other possibilities are:
•hosting the test infrastructure by the STaaS provider, and
•outsourcing the test infrastructure to a third party hosting provider.
Availability: 24/7
When it is possible to test an application from all over the world through the Internet, the
provider and its testers should be available 24/7. In this situation, a test demand is not
rejected just because it is night-time where the provider is located. The provider just needs
a broad network of testers spread across the different time zones, or needs testers available 24/7 in a specific time zone. Because the demand for testing services will fluctuate,
it is recommended that the provider has a fixed pool of testers and a pool of flex testers
(Figure 4.6). In practice, students have proven to be very suitable as flex testers; they like
to work in virtual environments, are time-independent and location-independent and can
be paid per assignment.
Figure 4.6 STaaS resourcing model in a fixed pool of testers and flex testers
In addition to the capacity of the fixed pool, the flex testers provide the required flexibility
to cope with peaks and variation of required workload. Assignment of the flex testers is
based on the planned test capacity demand and agreed reaction times. The learning time
of flex testers is usually relatively short due to the provider’s standard working practices.
In principle the flex testers leave the fixed pool at the end of the demand peak.
Of course, using test tools and test automation could also support a STaaS provider’s 24/7
availability.
The STaaS Governance Model: Test Demand vs. Test Supply
The STaaS provider has to distinguish the various interaction points where the client and
provider can interact and communicate. Figure 4.7 gives an overview of the generic governance model used by a STaaS provider.
•Client contract manager vs. Provider delivery manager. Agreement on a strategic
level regarding contracts and SLA. At this level there is a responsibility for setting up
the contracts and SLA.
•Client manager test services vs. Provider test line manager. Set up and maintain the
standard procedures and KPIs. Initiate and start work packages.
•Client project manager vs. Provider test manager. Planning and monitoring progress
of test activities. Progress reporting, defect reporting and management. Delivering the
conclusive test report after finishing the test.
•Client development teams vs. Provider test coordinator. Findings after the testability
review are reported. If required, testers and designers meet in a session to clarify the
findings. The result is a clear unambiguous test basis.
•Client development teams vs. Provider test coordinator. Intake of delivered software.
The initial test is performed on the basis of agreed entry criteria. Issues regarding the
initial test are reported to project management and development. If required, a meeting is set up to clarify issues, and the result is a system with sufficient quality to start
test execution.
•Client development teams vs. Provider test coordinator/test engineer. Re-test of
resolved defects. Through a delivery document, the developer lists which defects are
resolved in the new build. This document is the basis of the re-tests.
Figure 4.7 Governance model of a test line
•3rd party SaaS supplier vs. Provider test coordinator/test manager. Integration of
the SaaS software in the SuT. Through a delivery document the supplier lists how to
install and integrate the SaaS software in the system. This document is the basis of the
system integration testing.
STaaS Provider Services
A service item is an element of the test process that is offered to the client and for which the
STaaS provider is responsible; service items can vary widely. Moreover, the established service
offering can be modified when new services are proposed or existing ones are eliminated.
The STaaS provider must deliver a result based on the demand. The delivery must occur
within the pre-defined timeframe, at pre-defined costs, and at a pre-defined quality level.
The provider is responsible for guaranteeing continuity in delivering the result.
Some typical service items are, as shown in Figure 4.8:
•Usability Testing;
•End-2-End Testing;
•Performance Testing;
•Load Testing;
•Assessments, like TPI NEXT® or other scans;
•Test Infrastructure Management;
•Security Scanning;
•Test Data Management;
•Cloud Quality Assurance or Cloud Assessment Testing;
•Structured Evaluations;
•Regression Testing, and
•Test Script creation, using Model Driven Quality Engineering.
But also:
•Security Testing;
•Defect Management, and
•Testability Reviews.
Figure 4.8 Cloud-enabled Testing Services that can be delivered by a STaaS provider
In More Detail
Performance Testing on the Clouds
A performance test is usually done at the end of the test phase, because it requires an
environment that performs well enough. The environment that is most production-like is the
acceptance environment. This creates a risk for applications that are highly dependent on
high performance, because defects are found late in the process and are therefore expensive
to fix. But how can we address this? With a test cloud it can be done in every environment.
Whenever the infrastructure needs to be upgraded to a production-like infrastructure, this
can be achieved in the cloud. After the needed performance tests, the environment can be
scaled down again.
By using the needed infrastructure from a cloud, a more "real" load can be generated than
the simulated load from cumbersome tools. A cloud-enabled performance test tool, such as
CloudTest from SOASTA (see Figure 4.9), works with the cloud to generate the load and
stress needed to test an application (whether or not that application is in a cloud environment).
Figure 4.9 SOASTA architecture model for a cloud-based performance test using an IBM Cloud
STaaS Provider Process Model
A number of processes have to be set up by the STaaS provider to offer these services. The
STaaS provider process model (Figure 4.10) consists of two parallel primary processes:
•The process for the actual execution of the service in an assignment;
•The process that supports and monitors the execution.
The processes support the collaboration between the assigned employees that is needed to
accomplish the contracted services. The processes are described in detail below.
Figure 4.10 STaaS provider process model
Initiation
This is the first phase for execution of the assignment. The assignment always comprises
one or more services tailored to the client's specific demand. The initiation phase serves to
describe the scope of the assignment accurately. This can be done by creating a so-called
assignment description and, optionally, asking the client to approve it. An assignment
description concretely describes:
•the STaaS service asked for, including preconditions and basic assumptions;
•clear commitments expressed in KPIs on quality, cost level and time to market;
•the infrastructure to be tested on/where the SuT is installed;
•the agreements on monitoring by the STaaS provider in relation to communication lines,
progress reporting and consultation, and
•the deliverables.
Furthermore, the initiation phase is used to identify what is available in the provider’s organization for (re)use on behalf of the assignment. This may include templates, standards,
existing test scripts, test models or test design patterns from previous assignments, test
environments and tooling.
In More Detail
Test Design Pattern
A test design pattern is a generic set-up of a test structure and/or test strategy that
solves a specific, common type of test design problem. Test design patterns are
generically described, offering the advantage of a recognizable solution pattern,
regardless of the implementation details. Using test design patterns accelerates
the communication of a test assignment because the solution of a common test
design problem has, in fact, been given a “name.”
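As an illustration of the idea (the pattern and the field limits below are assumptions, not part of a provider's actual catalogue), a very small test design pattern captured as reusable code might look like this:

def boundary_values(lower: int, upper: int) -> list[int]:
    """'Boundary value' test design pattern: for a valid range [lower, upper],
    test just outside, on and just inside each boundary."""
    return [lower - 1, lower, lower + 1, upper - 1, upper, upper + 1]

# The same named pattern reused for two different, hypothetical input fields:
print(boundary_values(18, 65))    # e.g. age of the insured person
print(boundary_values(1, 999))    # e.g. number of items in an order

Because the pattern has a name and a fixed shape, client and provider can agree on, for instance, "boundary value coverage for all numeric input fields" without discussing every individual test case.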
Execution
In this phase, the assignment is executed, conforming with the agreements with the client
as described in the assignment description. Furthermore, the parties communicate via the
agreed communication lines on the results, progress, risks and bottlenecks in the execution
of the assignment.
Completion
Reuse of resources is one of the success factors of the provider. In this phase, the assignment
is assessed and a satisfaction measurement is agreed with the client. The lessons learned
from the assessment are fed back into the provider’s organization and incorporated into
the new version of the service. This results in formal process improvement embedded into
the processes of the provider’s organization.
Support and Monitoring
The provider continuously supports and monitors the assignment process as described
above. The progress, risks and bottlenecks involved in the execution of the assignment are
monitored. Where and when necessary, the involved parties reach new agreements on
the assignment.
Delivery Management
This process covers activities that aim to acquire assignments for the provider and manage (long-term) relationships. Examples are maintaining the test environment,1 repeated
testing of releases, and live monitoring of applications. A contract is created to govern
how both parties will handle the assignment. It specifies agreements on the service level
provided by the provider.
1 When the test environment is located in the cloud, this can be turned off to save costs.
Security and Compliancy
The test environment is created on a cloud-based infrastructure. The well-publicized nervousness around security in the cloud is an issue that all service providers are working hard to
answer. Fundamentally the nature of cloud computing means the data of one consumer
is often stored alongside the data of another. To some extent that challenge is being met
through encryption, which is often used to segregate data-at-rest, but this is not a cure-all
and a thorough evaluation of the encryption systems used should always be undertaken.
Test data is frequently sensitive and its location is therefore important, since data entering
or exiting national borders can contravene national and international regulations, such as
the EU Data Protection Directive. To address this, the STaaS solution is transparent about
the geographic location where data and services are stored, and it allows clients to keep
data on their own servers, using a VPN connection.
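Below is a minimal sketch of the data-at-rest segregation mentioned above, assuming the Python cryptography package is available; in a real STaaS set-up, key management would of course be handled by a dedicated key management service rather than in code.

from cryptography.fernet import Fernet

# One key per client, so one tenant's test data cannot be read with another tenant's key.
client_keys = {"client_a": Fernet.generate_key(), "client_b": Fernet.generate_key()}

def store_test_record(client: str, record: bytes) -> bytes:
    """Encrypt a test data record with the client's own key before writing it to shared cloud storage."""
    return Fernet(client_keys[client]).encrypt(record)

def read_test_record(client: str, blob: bytes) -> bytes:
    """Decrypt a record; this fails if another client's key is used."""
    return Fernet(client_keys[client]).decrypt(blob)

blob = store_test_record("client_a", b"policy 123, premium 45.60")
print(read_test_record("client_a", blob))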
Planning
The planning process ensures that the “right” tester is deployed to each assignment. “Right”
in this context means that the knowledge and competencies of the tester match the knowledge and competencies required for the assignment. Other planning aspects are:
•required availability of the tester (during working hours, weekends, 24/7);
•required or available location (office, home, offshore, nearshore, onshore);
•required or available (cloud) infrastructure (public, private or hybrid cloud);
•required or available test tooling, and
•technology (bandwidth and processing power availability).
Service Management
The range of services provided by the STaaS provider is not set in stone—it may grow or
recede. To this end, it must be determined periodically whether the current service offering
is in line with the requested services. In addition, services must be known (to the client,
assignment management and tester) and the products for the services must be up-to-date
and in line with the latest developments.
Human Resource Management
The process of human resource management aims, among other things, to continuously
develop the skills and career of the provider’s testers. This requires matters like defined job
positions with associated competencies, continued training and remuneration levels.
Financial and Operational Management
Financial management is a continuous process based on budgeting (what are the expected
costs and benefits) and monitoring (what are the actual costs and benefits). Operational
management can be executed based on many factors. Examples of these factors are:
•the percentage of assignments completed within the agreed key performance indicators
(KPIs) on quality, cost level and time to market;
•pay-per-use of the needed infrastructure, tools and resources per agreed timeframe,
and
•the percentage of test services acquired as compared to test services acquired by competitors.
The test cost reduction was achieved through the following measures:
•Resource rationalization: assign tasks to employees with matching seniority level; focus on
a healthy ratio for test management vs. test coordination vs. senior test engineers vs. junior
test engineers.
•Sufficiently lean core team: the size of the key team is adjusted to the highs and lows of the
mid-term forecast in such a way that the average level of occupation for the key team is > 95%.
•Alternatives for idle time: within testing, idle time is a common phenomenon. Idle time for a
test team can rise to 20% of the test effort. By using the economies of scale of the test line, a
very flexible process is set up that allows prompt re-assignment of testers to other projects in
the case of idle time. In practice the test line has proven to reduce idle time to a level below 5%.
Combined, these three measures have led in practice to cost reductions of 5-15%.
•Uniform process: install a uniform test process, with standardized test products and procedures.
Maintain a key team to use and re-use the process and test deliverables in multiple projects. By
installing a uniform test process, cost reductions of up to 5% of original test costs have been
achieved.
•Test automation: proper use of test automation contributes to test cost reduction.
•Test Tools (as a Service): the test tools were only used when needed, resulting in a more flexible
use of the tools and even the integration of new test tooling to accelerate the testing process.
The use of test automation and of the needed test tools on demand has resulted in a combined
cost reduction of up to 25% of original test costs.
•Near shoring and off shoring: through off shoring and near shoring, testing activities are
transferred to regions with lower cost rates. In practice the amount of off/near shoring depends
on certain conditions and varies from 0% to 70% of all testing activities. Taking into account
the investment cost (translations, remote connections, extra QA and communication effort), test
off shoring has resulted in cost reductions varying from 10% to 30%.
•Cloud infrastructure: by moving the existing test infrastructure to a (public) cloud infrastructure, the costs for stand-by time and management were reduced. Only when the environment was
needed was it turned "on." No CapEx was needed and only OpEx for the use of the cloud infrastructure; this resulted in a cost reduction varying from 25% to 50%.
Table 4.1 Measures taken by the STaaS provider to enable cost reduction
Cloud-Enabled STaaS: The Conclusion
The Benefits
Thanks to survival of the fittest, and increasing competition, STaaS providers are constantly
looking to upgrade their services to ensure that they provide their clients with the best full
test service solution available. Therefore, these providers have to make sure they:
•Use the scarce expertise on structured testing, infrastructure and tools optimally to
create a complete service;
•Work on improving the test processes continuously, by using new insights, automated
test case execution and design, and reusing the available information;
•Use the different qualities and skills of an international and professional test capacity
from multiple locations around the world to provide a constantly available workforce to
prepare and execute the testing;
•Create an on-demand test service, either as a single service or embedded in a long-term
service contract;
•Industrialize the test services ("test factory"), by using tools not only for test management,
but also for test design and test execution;
•Produce reliable test product quality to provide thorough insight into the application's
risks;
•Have an available cloud-based test infrastructure with on-demand tooling to perform
the testing;
•Have a competitive pricing model on a pay-per-use and fixed-price basis;
•Give advance insight into costs and lead time.
With all this, the STaaS provider takes full responsibility for test assignments, with clear
commitments expressed in KPIs on quality, cost level and time to market. A solution should
be available for single applications, full projects, and portfolios. STaaS leads to cost optimization as well as demonstrable improvements in quality of testing, test process, test
deliverables, test results and flexibility of test operations.
The Challenges
A STaaS provider must devote continuous attention to a number of challenges. Managing
these challenges is critical for the provider’s long-term success:
•The provider must determine on a continuous basis whether the services offered still
match the client’s demand. The client, not the provider, determines the required quality
level;
•The professionalism of the provider is based on the knowledge and competency of the
testers on the one hand, and the stability of the tester population on the other. If there
is a continuous inflow and outflow of people in the “pool of testers,” there is no stability
and no solid basis for knowledge building;
•Often an important client objective is cost savings. One way to achieve this is by archiving
test ware, test data, and (cloud-based) test infrastructure for reuse;
•Continuous attention to optimizing, organizing and refining test objects as an intellectual
property is critical;
•As a provider, it is important to render an objective assessment of the delivered software
or hardware, independently of the client. On the other hand, the client may have other
interests (lower costs, short time-to-market). This is an important challenge that may pose
contradictions;
•The test basis (e.g. requirements, use cases, design specifications, heuristics) should be
available in English or translatable into a language that is understood by the testers,
preferably in a clear and simple form using tools such as HP QualityCenter Requirements
or IBM Rational Requisite Pro;
•If the quality of the test basis is inadequate, an alternative way to gain domain knowledge is needed;
•The test infrastructure must comply with rules and regulations stated by the client, but
also by those of the country or union in which the test data resides. The use of a cloud deployment
model must take security and local compliance into account;
•And last but not least, test environments should be accessible from various locations.
5
Testing Cloud Strategy:
A Move to 3D
Is testing in the Cloud Era so different from how we have been doing
our testing? Yes and no. Testing applications on the cloud is the same
as testing applications on a traditional infrastructure. Only what is
tested is different. With cloud there are a lot more parties involved in
testing: not only the client and the stakeholders, the business, but also
3rd party suppliers of standard or SaaS applications. There are also
additional quality attributes due to the cloud infrastructure, and nonfunctional requirements become more important.
These new items to test make testing the cloud different, not even taking into account the
consequences of the cloud business model. The on-demand use of resources, testing tools
and infrastructure provides the opportunity to create a pay-per-use test service: Software
Testing as a Service (STaaS). With STaaS the benefits of the cloud as a business model are
used to provide a testing service to clients. There will be more on STaaS in Chapter 6.
Creating a Cloud Test Strategy: A Move to 3D
Cloud is not only technology and infrastructure that drives change and provides opportunities for change, but also a business model that thrives on change. When discussing clouds
it is therefore important to start on a strategic level, as it is for testing clouds.
Important questions that we need to ask ourselves include: do we have a test
strategy that helps us evaluate the potential of the cloud? Does the cloud provide opportunities to change the test strategy? How can we assure the compliance of the cloud? What are
the strategic areas where the cloud could be an enabler, and how do we utilize the opportunities the cloud gives us? All of these questions are valid and need to be answered, while
keeping the business aspect in mind as we move to Business Technology.
Business in Control: Business Driven Test Management
The exploration of the cloud has to start with the business strategy. Business driven test
management [BDTM, 2008] puts the business, or the business case, in the driver's seat of a
testing project. With that in mind, together with the business interests (client, service owners,
users of services and other stakeholders), a (cloud) test strategy can be created. As with the
testing of traditional applications, the testing of cloud infrastructure and applications is done
by covering the most important risks, but with deeper attention to the three cloud service types
of Infrastructure, Platform and Software.
Figure 5.1 The cloud test strategy: leveraging the business, BDTM aspects and the cloud layers
Why not test all the cloud services as thoroughly as possible? If an organization has an
infinite number of resources available, then this is an option. But in real life, no organization
has unlimited resources, and as the time-to-market with cloud is important, no organization
has the time for that, either. Therefore choices have to be made concerning what must
be tested and how thoroughly it must be tested. The core of the test strategy is that what
really needs to be tested is tested, to cover the product risks. Neither too little testing, nor
too much testing. The business must be certain that the testing covers all potential risks
and what might go wrong if the requirements are not met. This means that a risk analysis
must be made before a single test case is considered, specified or executed.
Keep in mind that the business also wants to keep the costs of the tests as low as possible
and wants to understand the quality of the product as soon as possible. The theory of IT
governance classifies project control into four aspects: result, risk, time and costs. Business Driven Test Management (BDTM) uses these four aspects as the basis for the test
strategy. Business driven test management therefore leads the way to the test strategy of
cloud applications; it helps create the cloud test strategy by leveraging the business, the
three cloud layers in IaaS, PaaS and SaaS, and the four BDTM aspects of result, risk, time
and costs (see Figure 5.1).
Business in the Driver’s Seat: BDTM for Cloud
The cloud starts and finishes with the goals and identity of the business. Business driven
test management connects seamlessly with this approach. All investments and choices with
the cloud, therefore also with cloud testing, are derived from the interests of the business:
the business case.
The business driven test management approach of Sogeti has four aspects [BDTM, 2008].
First BDTM sets the client, the business, at the core. The test manager gives the business
influence over the four control aspects: result, risk, time and costs. Second, the test manager
communicates the test goals in the language of the business. Third, with BDTM all tests are
based on product risks; the cloud test strategy is based on cloud risk analysis. The fourth
and final aspect of BDTM is making the results of tests visible for the business. The steps
of BDTM aim to achieve this (see Figure 5.2).
Figure 5.2 Summary of the BDTM steps
1 Establishing the assignment and collecting test goals
The business formulates the assignment, taking into account the four BDTM aspects: result,
risk, time and cost. Establishing the test goals aims to determine what, in the eyes of the
business, are the desired results of the testing activities for the cloud services. A test goal
is a success criterion for the test assignment as specified in the language of the business.
The result of this step is the formulation of the assignment and a test goal table.
2 Determining the risk class
Based on a risk analysis, it is established what must be tested (object parts), on what
cloud layer and what must be examined (characteristics): this is the cloud risk
analysis (CRA). The risk class related to the test goals and the relevant characteristics
is established for each object part. If multiple test levels are involved, the test levels that must be set up are determined
in a plan that covers all test levels (the Master Test Plan). The cloud test strategy also
defines, for each combination of a characteristic/cloud layer/object part, an indication
of the relative intensity of testing in a specific test level. This step results in a cloud risk
table.
What follows is an iterative process—formulating the test strategy:1
3 Determining the test intensity
Deciding whether a combination of characteristics and object parts must be tested
lightly, moderately or intensively determines the test intensity. The risk class for each
object part, as defined in the previous step, is used as a starting point to determine the
test intensity. The initial principle followed here is the greater the risk, the more thorough
the required testing.
4 Estimation, planning and feedback
An overall budget is established for the test, which is plotted in a planning stage. This
budget is confirmed with the business and, depending on their needs, adjusted if necessary. In the latter case, steps 3 and 4 are repeated. This gives the business ongoing
direct control of the test process, enabling the business to manage it on the basis of the
balance between result and risk versus time and cost.
End of iteration
These steps result in a strategy table.
5 Allocating test design techniques
If the business agrees to the budget and planning, the test intensity is translated into
concrete statements regarding the desired coverage. This involves allocating test design
techniques to the combinations of characteristic/cloud layer/object part, taking circumstances into account such as the available test basis, projections and experience of the
test team. The techniques are used to design the test cases at a later stage (a brief sketch of
this allocation follows step 6 below).
1 From this point onwards, the cloud is like other software and is no different from what is described in
Business Driven Test Management. These steps will not be elaborated further.
Figure 5.3 Allocating test design techniques: from test goals to test cases
Figure 5.4 BDTM tables for the cloud
This step results in a test design table for each test level.
6 Providing insight and control options
Throughout the test process, the test manager provides the client and other stakeholders
with adequate insight into and control options for the IT Governance aspects:
•result: the test goals achieved
•risk: the risks covered
•time: whether the end date—or deadline—is realized
•cost: whether the test project remains within the agreed budget.
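As referred to in step 5, here is a minimal sketch of how the allocation in steps 3 and 5 can be captured; the risk-class labels, intensities and technique names are illustrative assumptions, not a prescribed mapping.

# Step 3: the greater the risk, the more thorough the required testing.
intensity_per_risk_class = {"A": "thorough", "B": "average", "C": "light"}

# Step 5: the test intensity is translated into candidate test design techniques
# (the technique names below are examples only).
techniques_per_intensity = {
    "thorough": ["decision table test", "boundary value analysis", "process cycle test"],
    "average": ["equivalence partitioning", "use case test"],
    "light": ["exploratory checklist"],
}

def allocate(characteristic: str, cloud_layer: str, object_part: str, risk_class: str) -> dict:
    """Return the strategy-table entry for one characteristic/cloud layer/object part combination."""
    intensity = intensity_per_risk_class[risk_class]
    return {
        "combination": (characteristic, cloud_layer, object_part),
        "risk class": risk_class,
        "test intensity": intensity,
        "candidate techniques": techniques_per_intensity[intensity],
    }

print(allocate("functionality", "SaaS", "business process A", "A"))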
The First Step: Establishing the Assignment and Collecting Test Goals
The testing activities do not always easily correspond to the result that the business wants
to achieve. This is why there needs to be a translation of the desired business result into the
test goals. Examples include result definitions expressed in terms of realized/operational
critical success factors, business processes, enterprise architecture, time-to-market, agility
of the service; other examples involve the contracts, the service level agreement (SLA) and
so on. The term “test goal” is used as a collective term.
Definition
A test goal is a success criterion for the test assignment formulated in the language of the business (or other stakeholders).
Establishing the assignment and identifying test goals covers step 1 of the BDTM steps.
It creates a clear understanding for all stakeholders and the business alike of what needs
to be done and what does not. Establishing the assignment requires an understanding
of the objective of the project, the (project) organization, the setup of the development
process, the (cloud) system to be tested, and the requirements with which the cloud must
comply. By clarifying the goals and expectations at the start of the process, uncertainty,
miscommunication and disappointment at a later stage are prevented. These objectives
and expectations are derived directly from the business case.
Definition
The business case provides the justification for the project and answers the questions: Why do we need to do this project? What investments are needed? What
objectives does the customer want to achieve by using the result of the project?
A (cloud) service can exist on all cloud service types or layers, or two of them, or even just
one. See Figure 5.5. It’s important to define the test goals for the correct cloud service type.
A SaaS implementation will have different test goals from a PaaS implementation.
Keeping in mind the connection between the three service types and their mutual use of each
other's services, the test goals will overlap across the different layers. Naming all of
them separately creates a logical list of all the test goals for IT, but for the business people
they need to be combined, as these stakeholders have no knowledge of, nor interest in, that
part of the cloud.
Figure 5.5 Cloud services can exist over all cloud layers
Whether they are explicit or not, a few things stand out when looking at the importance
of the business case for testing. The why is wholly or mainly irrelevant when testing
traditional applications, but for the cloud it matters. Not only in the cloud are the business
and IT becoming more interwoven with each other; IT is expected to participate in
the creation of a cloud service, a cloud application or an infrastructure in the cloud.
Good test goals are as important as good communication—and formulating them is just as
hard! In addition to this important role in communication, the test goals are an important
input for the cloud risk analysis (CRA).
The following parts of the definition require additional attention and clarification:
•Success criterion of the test assignment implies that as long as the reports show that
“all is well” with the success criteria, execution of the test assignment can proceed as
planned. However, as soon as there is a risk that a success criterion might not be realized, adjustment by management is necessary.
•In the language of the business implies that a test goal must reflect the perception of
the business and is therefore by preference not formulated in IT terminology.
Collecting the test goals is not a clearly delineated, separate activity; it occurs simultaneously with establishing the assignment. The test goals can only be established correctly
if the test manager is able to consult with the business. Clarifying the test goals correctly
at the start of the process prevents uncertainty, miscommunication and disappointment
at a later stage. The result of collecting the test goals is an inventory of the test goals in
a test goal table.
Example
A test goal is a success factor of the test process, specified in the language of the
business. Test goals are presented in the test goal table, like this example. The
indicated cloud test goals and associated relevant characteristics are based on
experience.
No. | Test Goal | Service Type
1 | Business process A and service 1 work according to specifications | PaaS, SaaS
1.1 | Business process A is a new process | SaaS
1.2 | Service 1 is a new service to develop with very complex business rules (for example, a calculation module to calculate output and cost deductions of insurance) | PaaS
1.3 | After a fault in a financial service, a rollback must be possible (maximum independence between services) | SaaS
2 | Business process B and service 2 work according to specifications | IaaS, PaaS, SaaS
2.1 | Business process B is an existing process and is not modified | SaaS
2.2 | Service 2 does not disrupt the existing and new business processes, as it is an existing, much used service that is not modified | IaaS, PaaS
3 | Non-cloud architecture can connect to cloud architecture (for example, out-of-the-box applications such as SAP and Siebel, legacy applications on mainframe, existing databases) | IaaS, PaaS
4 | Standardization of services | IaaS, PaaS, SaaS
4.1 | All services and business processes are re-usable, currently and in the future | PaaS, SaaS
4.2 | Standards are correctly applied to new services (for example XML, SOAP, technical services such as security, authorizations, transformations and logging) | IaaS, PaaS
5 | No unauthorized access is allowed to the infrastructure, platform and services | IaaS, PaaS, SaaS
5.1 | Unauthorized access to commercial information is not possible; the security is guaranteed at server level (for example LDAP), service level (error running) and transport level (for example https) | IaaS, PaaS
5.2 | Architecture and software are implemented according to the requirements of the Project Start Architecture | IaaS, SaaS
6 | Existing legacy data is correctly accessed by services; opening documents up in the cloud has no consequences for the existing legacy functionality | IaaS, PaaS
7 | The performance is scalable and non-cloud systems perform as usual | IaaS, PaaS, SaaS
7.1 | The performance of the existing legacy material is not affected by being accessed through the cloud | IaaS, PaaS
7.2 | Performance is scalable for up to 72 times standard production performance levels | IaaS
7.3 | The client has an offer for a car insurance policy on the screen within 4 seconds | SaaS
7.4 | High volumes of requests over the Internet to the client database do not have an adverse influence on the performance of the database responding to requests by employees of the business | IaaS, PaaS, SaaS
Table 5.1 An example test goal table
Services play a major role in the test goal table. The business will not frequently designate
a specific service as a test goal, because the business is more concerned about the business process as a whole. A stakeholder that does want to make services test goals is the
enterprise architect. It is the responsibility of the enterprise architect to standardize services so they are reusable in the future and to ensure that services and architecture in general
are implemented according to the imposed standards.
The Second Step: An Analysis of the Cloud Risks
Establishing the risks of a cloud service, application or infrastructure covers step 2 of the
BDTM steps. It creates a cloud risk analysis from the formulated assignment and test goals.
With that in place, the focus in the product risk analysis is on the product risks; in other
words, the risk for the organization if the product does not have the expected quality.
Definition
A cloud risk is the chance that the cloud will fail as measured in relation to the
expected damage if it does.
Cloud risk = Chance of failure * Damage
where Chance of failure = Chance of faults * Use frequency
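A small worked example of this formula, with purely illustrative numbers:

def cloud_risk(chance_of_faults: float, use_frequency: float, damage: float) -> float:
    """Cloud risk = chance of failure x damage, where chance of failure = chance of faults x use frequency."""
    return (chance_of_faults * use_frequency) * damage

# A heavily used payment service with a modest chance of faults but high damage...
print(cloud_risk(chance_of_faults=0.02, use_frequency=10_000, damage=5))   # 1000.0
# ...carries a far higher cloud risk than a rarely used report service with more faults.
print(cloud_risk(chance_of_faults=0.10, use_frequency=100, damage=2))      # 20.0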
What are the risks of clouds? The first thing to know is that with clouds, testing should be
applied on three layers: the infrastructure, platform and software layer. These layers are
aligned with the three service models of cloud computing (see Figure 5.6). For more information on the cloud computing service models, see Chapter 6. Additionally, the testing
has to be performed by two different parties: namely the cloud provider, offering Software
as a Service (SaaS), and the cloud consumer, consuming and developing cloud-enabled
applications inside the cloud environments. Not only the software layer, but
also Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) have different test
goals and therefore need to be tested (separately) by both parties, as the cloud provider
cannot assure the performance of the developed applications.
Figure 5.6 Service types of the cloud and their suppliers
The cloud risks can partly be derived from what a cloud is made up of (as
Service Oriented Architecture (SOA) is composed of layers), the characteristics of the cloud
(on-demand self-service, resource pooling, broad network access, rapid elasticity, measured
service and multi-tenancy) and security. As every service type and every application has
a specific source, they often introduce their own risks, which have to be targeted by the
developers and testers alike. See Figure 5.7.
Figure 5.7 The cloud with its service models, deployment models and characteristics
All this should be considered as a motivating argument for aggressively testing and consequently mitigating all the risks to the infrastructure itself and the internal interactions it
is affected by (like operating system calls, network communication, file system access, and
the like), to guarantee the correct and secure operation of applications. No matter what
segment or layer of a cloud a developer is testing, assuring its quality through sophisticated
testing plays a crucial role.
As a helping hand a non-exhaustive list of risks of failure and damage relevant to the cloud
has been established. The list in Table 5.2 can help when determining the product risks of
the cloud.
1. The damage that a service can cause increases if this service is related to various other services and is part of various business processes.
2. Damage to reputation and loss of client confidence increase if errors arise in the cloud affecting external cloud consumers. This includes the use of web interfaces for consumers, such as Internet banking, but also online interfaces, such as for intermediaries who want to have an offer approved in real time.
3. The chance of failure of future cloud services increases if development standards are not taken into account when developing these services. It is then difficult to reuse the cloud service in the future, meaning that more software adjustments are necessary.
4. The chance of failure of a service increases if non-functional system requirements are not implemented at the service level. Think in particular of performance and security requirements.
5. The chance of failure of a business process increases if the underlying services have not been aligned with the business process.
6. The chance of failure of a business process increases as new technology is used more. For many companies, the passage to SOA means that a new administration organization must be arranged to be able to support the new technology.
7. The chance of failure of a business process increases if the required changes in chain applications (ERP application, legacy applications) for SOA are relatively large.
8. The chance of failure of a business process increases if data errors can be attributed to external service providers. The effect of faulty information provision by external service providers is more difficult to control.
9. The chance of failure of a business process increases with the implementation of a rollback.
10. The chance of failure on authorization increases if the release of authorizations is regulated by a central security service for a large number of users and a large number of data collections.
11. The multiple layers of the cloud and the use of application-specific code often introduce their own risks, especially for security testing. Each layer could have a security breach.
12. The myth of infinite scalability: the scaling ability is solely determined by the application (the technologies it uses and its internal architecture), not the infrastructure.
13. As cloud computing delivers us "hosted" infrastructure, the Service Level Agreement (SLA) determines its continuity. Looking into these boundaries is the ultimate test.
14. System boundaries within the cloud should be clearly stated. When these boundaries are not clear or not enforced, the system can behave differently than expected.
15. Cloud environments and applications are hosted using a variety of methods and locations. Different laws and regulations around the hosting of data can affect these different locations.
Table 5.2 List of potential chances of failure and damages of the cloud
Knowledge of the cloud infrastructure, platforms, services, applications, organization, possible damage and chance of failure is required to execute the cloud risk analysis (CRA). Such
knowledge is nearly always distributed across multiple parties and people inside and outside
the organization. In practice, the test manager is often the facilitator and organizer of the
CRA, approaching various people who can contribute knowledge about the cloud risks.
When executing the CRA, the test manager must keep the purpose of the CRA in mind: an
understanding, shared by all stakeholders, of the cloud risks corresponding to the characteristics and object parts of the cloud. He must devote his attention to the fact that, in
addition to the cloud risks, the CRA also brings process risks (relating to the test process),
product risks (related to the applications in the cloud), new product requirements and test
goals to light.
The cloud risk analysis (CRA) consists of the seven steps as shown in Figure 5.8.
Figure 5.8 The seven steps of the cloud risk analysis (CRA)
1 Preparation. In the first step, the test manager creates an overview of damage and
chance-of-failure elements that may be relevant to the CRA. This is done on the basis
of existing information, such as the requirements for standard SaaS solutions, designs,
enterprise architecture design, or similar documents.
2 Determining relevant elements (per cloud service type). Based on the test goals, the
participants of the CRA determine the set of damage and chance-of-failure elements on
which the CRA must focus. These elements are divided into the three cloud service types,
although every element can be on every service type or layer. The CRA is then executed
for the subset of the damage and chance-of-failure elements collected in step 1.
3 Agreement on standard services. All participants establish a mutual agreement on which
levels of the cloud or its applications will employ standard services. These services
are then integrated into several statements of work for the supplier of these standard
services.
4 Determining damage. The participants determine the damage level for each part of
the cloud based on the cloud architecture and service types. These are separated by the
processes within the cloud and not standard services.
5 Determining chance of failure. For those parts that are customized or part of the end-to-end tests (chain risks), the participants establish the chance of failure per characteristic
based on the object parts that constitute the cloud system (test object).
6 Determining risk class. The participants determine the risk class for the combination
of characteristics, service types and object parts on the basis of the risk of damage and
chance of failure (a small sketch of this determination follows the list of steps).
7 Completeness check. A completeness check is the last step in this process.
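As mentioned at step 6, here is a minimal sketch of the 3D result of steps 4 to 6: damage and chance of failure are recorded per characteristic, service type and object part, and combined into a risk class. The scales, thresholds and example entries are illustrative assumptions only.

# (characteristic, service type, object part) -> (damage, chance of failure), both on a 1-3 scale.
cra = {
    ("functionality", "SaaS", "business process A"): (3, 3),
    ("performance", "IaaS", "all services"): (3, 2),
    ("security", "PaaS", "all apps"): (2, 2),
    ("reusability", "SaaS", "non-legacy services"): (1, 2),
}

def risk_class(damage: int, chance_of_failure: int) -> str:
    """Combine damage and chance of failure into a risk class (thresholds are assumptions)."""
    score = damage * chance_of_failure
    return "A" if score >= 6 else "B" if score >= 3 else "C"

for combination, (damage, chance) in cra.items():
    print(combination, "->", risk_class(damage, chance))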
The Product Risk Analysis vs. the Cloud Risk Analysis
What is the difference between a cloud risk analysis and a product risk analysis? The differences are around:
•The result of a cloud risk analysis is a 3D model of the risks (see Figure 5.9). It gives insight
in the damages and chance of failure per characteristic, object part and service type.
•The larger number of stakeholders (step 1): for IT, for example, the enterprise architects, the
owners of the cloud layers and 3rd party service suppliers; for the business, marketing and the
end users of services.
•Within clouds, a service is the relevant object part as a part of a business process
(step 2). Functionality, for example, is no longer formed by a number of subsystems but
by services. The characteristic functionality can be subdivided into the various services
and the totality of the object parts is the business process. The same reasoning applies
to the remaining characteristics. To get a complete overview of all services and business
processes, which fall within the scope of the cloud project, the object parts are arranged
by characteristic in a table, see Table 5.3.
•Agreements on what are and what aren’t standard services (step 3). These standard
services are not tested separately, but only in the end-to-end test. The 3rd party service
supplier can be required to comply with a Statement of Work (SoW) in which the expected
quality of the service is agreed upon. The use of Quality Gates can help in gaining transparency about the quality of the service.
•Functional testing is of lesser importance. As the supplier approves the functional
requirements of the standard services, functionality poses less risk. But non-functional
requirements are not sufficiently covered in the supplier's tests. Integration of the
standard services in the cloud therefore has test priority, for example performance, security and integration testing. Non-functional requirements should get a higher risk class
compared with functional requirements.
•Chain risks are always determined in a cloud project (step 5); as a cloud consists of
multiple service types, they should always be tested at least once in an end-to-end test.
•Because of the greater complexity of, and dependence on, standard services, the risk classes
High, Medium and Low are not always sufficient. A more fine-grained method of risk
classification, for example numeric classes, is preferred.
Figure 5.9 CRA: a 3D end result
In More Detail
Besides a traditional performance test, performance and continuity can be tested
per service, as can security. The (suitability of the) infrastructure is related to the cloud infrastructure components, a large number of which are standard software components that get
their specific configuration for the business.
Characteristic | Service Type | Objects
Functionality | SaaS | Screen app 1/GUI; Service A; Business process 1
Functionality | PaaS | Screen app 1/GUI; IIS; –
Functionality | IaaS | ESB; Private interface; –
Performance | SaaS | Service A, B, C, E; All business processes; –
Performance | PaaS | Screen app 3/GUI; WAP; –
Performance | IaaS | ESB; –; –
Security | SaaS | All services; All business processes; –
Security | PaaS | All apps; All Windows; All ERP
Security | IaaS | All VPN; ESB; Private interface
Continuity | SaaS | Screen app 5/GUI; All business processes; –
Continuity | PaaS | Windows 3; ERP 2; Legacy 1
Continuity | IaaS | VPN to Legacy 1; ESB; –
Suitability | SaaS | Business process 2; Business process 5; –
Suitability | PaaS | All Windows, ESB; IIS; WAS
Suitability | IaaS | –; –; –
Reusability | SaaS | Screen app 1/GUI; Screen app 2/GUI; Non-legacy services
Reusability | PaaS | –; –; –
Reusability | IaaS | ESB; –; –
Infrastructure | SaaS | All business processes; ERP; –
Infrastructure | PaaS | ESB; WAP; WAS
Infrastructure | IaaS | –; VPN's; –
Scalability | SaaS | Non-legacy apps; Non-legacy services; –
Scalability | PaaS | IIS, ESB; Windows 1, 2 & 5; WAP
Scalability | IaaS | –; VPN's; –
Table 5.3 An example for determining object parts for clouds
Which services as object parts are involved in which characteristic follows from
the analysis of the test goal table, the cloud design and the PSA. If, for example,
a test goal is connected with making an offer, and relevant characteristics such as functionality and performance are mentioned, then the services that are
involved in making an offer are classified under both the characteristic functionality
and the characteristic performance.
For cloud projects, it is important that the mode of classification is standardized. The risk
class allocated to a (part of the) cloud can be found in the service registry, with an
explanation of why the relevant risk class has been added. With a standardized mode of
classification, it is possible later to understand why the specific risk class is provided. Moreover the risk class for the cloud can easily be reconsidered with the standardized mode of
classification if the relevant service is to be reused.
The Next Steps: A Cloud Test Strategy
The last step of the cloud risk analysis is verification that the result is as complete as possible. For this purpose, the separate risk tables are merged into a single overview showing
the risk class for each combination of characteristic and object part.
The test goals are also incorporated into the overview to make executing the completeness
check easier for the business. Based on the risk table, as shown in Figure 5.4, the cloud test
strategy is formed. This is transmitted to the participants and business. They are asked to
approve the result. The business grants definitive approval and makes a decision in the
event any points of discussion arise.
The final result obtained is an overview showing the risk class per combination of characteristic and object part, like the example shown in Table 5.4. It is the basis for selecting the
test intensity for each combination of characteristic and object part in the cloud testing
strategy. The test strategy around cloud implementations gives insight into the quality and
compliance of the cloud by setting up the correct tests, checks and acceptance criteria.
Functionality
•Application 1 (PaaS, SaaS): Business process A and service 1 work according to specifications (risk class C)
•Application 2 (IaaS, PaaS, SaaS): Business process B and service 2 work according to specifications (risk class C)
•Application 3 (PaaS, SaaS): Business process A and service 1 work according to specifications (risk class C)
•Business processes (IaaS, PaaS, SaaS): Business process A and service 1 work according to specifications (risk class B); Standardization of services (risk class A)
Security
•All services (IaaS, PaaS, SaaS): No unauthorized access is allowed to the infrastructure, platform and services; Standardization of services (risk class A)
•Business processes (IaaS, PaaS, SaaS): Business process A and service 1 work according to specifications (risk class A); No unauthorized access is allowed to the infrastructure, platform and services (risk class B)
Performance
•Applications (IaaS, PaaS, SaaS): The performance is scalable and non-cloud systems perform as usual (risk class A)
•All services (IaaS, PaaS, SaaS): The performance is scalable and non-cloud systems perform as usual; Existing legacy is correctly opened up by services, the opening-up in the cloud has no consequences for the existing legacy functionality (risk class C)
•Business processes (IaaS, PaaS, SaaS): Business process A and service 1 work according to specifications (risk class B); Business process B and service 2 work according to specifications (risk class C)
Stability
•Non-legacy applications (IaaS, PaaS): Existing legacy is correctly opened up by services, the opening-up in the cloud has no consequences for the existing legacy functionality (risk class C); The performance is scalable and non-cloud systems perform as usual (risk class A)
Continuity
•Application 5 (SaaS): Standardization of services (risk class B)
•Business processes (IaaS, PaaS): Business process A and service 1 work according to specifications (risk class A); Business process B and service 2 work according to specifications (risk class B)
Suitability
•Business process (PaaS): Business process A and service 1 work according to specifications (risk class C); Business process B and service 2 work according to specifications (risk class B)
Reusability
•Non-legacy services (SaaS): Standardization of services (risk class C)
Infrastructure
•All infrastructure (IaaS, PaaS): Existing legacy is correctly opened up by services, the opening-up in the cloud has no consequences for the existing legacy functionality (risk class B); Non-cloud architecture can connect to cloud architecture (for example, out-of-the-box applications such as SAP and Siebel, legacy applications on mainframe, existing databases) (risk class B)
Table 5.4 An example of a risk table for a cloud implementation
6
Testing the Cloud:
In, On or With…
As the Cloud Era emerges, testing will change! Changes are ahead
not only for testing information systems, but also for testing the infrastructure and cloud-enabled applications, even with the ability to have
instant deployable test infrastructure. What are the changes that
come along with the cloud? It enables the move to testing the Business
Technology. This all has an impact on the way we will do testing in the
future. As the current types of applications will not disappear because of
cloud applications, the cloud does not replace what we test; it provides an
addition to software testing.
Testing the Cloud Itself: Cloud Infrastructure
Why test the (cloud) infrastructure? Nowadays, no one cares about where his or her text
messages and e-mails are stored. There is little or no realization that there is a whole world
that lies behind these everyday things. Everyone just assumes that they are there when
they are requested. In order to facilitate and especially to guarantee this, a cloud provider
needs to take significant measures relating to the cloud infrastructure. The temporary
unavailability of these services or the loss of those messages would undoubtedly cost the
cloud provider existing and potential customers.
Infrastructure solutions often consist of hardware and/or software components. They are
generally the standard products of the larger parties and suppliers. In addition, the solutions belong
to the category of proven technology. More and more infrastructure components are commercial off-the-shelf (COTS) products such as appliances. Cloud solutions, however, consist
of software components only.
In More Detail
Traditional Infrastructure
Unfortunately traditional infrastructure is tested with minimal or no attention to
the testing process. When starting an infrastructure project, the standard requirements
are a number of general ones, like 99.9% availability, at least the same
performance as in the present situation, et cetera. Rarely are these requirements
made SMART (Specific, Measurable, Achievable, Realistic and Timely).
During the construction phase the infrastructure is configured, migrated or directly
deployed. Normally, testing should commence after the design and construction phase, but as the testing is conducted under time pressure, nobody tests the (test) infrastructure. What happens is that the infrastructure is implemented like a pilot, and
after successful completion the pilot (the infrastructure) goes into production.
Regarding the implementation of cloud infrastructure, as in cloud computing solutions,
it cannot be sufficient to just verify that it works. In addition to a utility function there are
other important aspects, like availability, security, performance, scalability and adaptability.
These also must be tested before the whole cloud system can go into production.
Business Demands a Working Infrastructure, Even in the Cloud
As not only infrastructure, but all of IT, becomes more and more a commodity, the various
infrastructure components become “invisible” to the users. Only workstations, laptops
and printers are visible. And as these have become normal they doesn’t remind people
of infrastructure, only appliances. With the cloud, on-demand and “Anywhere, Anytime,
Anyplace” come to mind. Although this is already closer to reality than some think, it still
needs some work. For example, whereas traditional infrastructure has a maximum of 99.9%
availability, cloud infrastructure has the promise of 100% availability.1
1 Google Apps makes a new promise: No downtime.
The underlying cloud infrastructure forms the foundation for cloud solutions. For instance,
there is an enormous variety of (mobile) devices that needs to be supported so that information can be ported to the underlying systems. Such collaboration,
"The New World of Work," Voice over IP (VoIP) and the use of smart phones and apps set high
demands on the infrastructure. In the past, e-mail was an application with moderate use.
Nowadays e-mail is inextricably linked to the business processes of an organization. And
if the e-mail system fails the CIO can be fired, as e-mail is a standard part of business and
facilitates other applications and solutions. It’s already a commodity (see Figure 6.1). The
infrastructure, like any utility, has become less visible while the world around it has become
more dynamic and change is a given.
Figure 6.1 E-mail is already a commodity to the business: it’s Business Technology
Cloud Solutions Not Only “Need to Work,” But Work “Well Enough” for
Business
With the passing of time the IT infrastructure has become increasingly complex. Fifteen
years ago, an e-mail solution was a PC on a table in the corner of the room, connected to
the Internet via a modem. Today, IT infrastructure items are end-to-end solutions consisting
of integrated (and therefore) complex components. A typical e-mail architecture consists of
clustered and/or virtualized servers, dual front-end servers, webmail functionality, spam
filters, anti-virus solutions, firewalls in a demilitarized zone2 environment, and a huge variety
of devices (fixed and mobile). The necessary network facilities are still not fully available.
Changing one of these components can have far-reaching consequences for the entire chain,
resulting in production disruptions of the mail application, but also of business applications
that use the underlying infrastructure.
Now with cloud the infrastructure becomes even more complex. Not only do we have virtualized parts of the infrastructure in the cloud, but also the platforms are virtualized in
2 In computer security, a DMZ, or demilitarized zone is a physical or logical subnetwork that contains
and exposes an organization’s external services to a larger untrusted network, usually the Internet
[Wikipedia].
the cloud. And to add to this, third parties can even host the infrastructure and platforms.
As organizations are increasingly dependent on technology, the number and complexity
of infrastructure changes is increasing. Often enough something about failing systems
appears in the press. Although not all of these problems can be traced to infrastructure
components, it is important to realize that (cloud) infrastructure is a link in the IT chain and
a chain is only as strong as its weakest link. So it needs to be tested, but how thoroughly,
against what, and what to look for?
How thorough the test is depends on the test strategy: the higher the risk, the more thorough
the test. Regarding the implementation of cloud services, it’s not sufficient to verify that “it
works,” it also needs to be available, scalable, reliable, adaptable, secure and accountable.
All of these are non-functional quality attributes of the cloud (see Figure 6.2). But not all
of them need to work perfectly correctly; they only need to work in the way the business
would like them to work. IT moves into the business!
Figure 6.2 The Cloud infrastructure quality attributes
Non-Functional Quality Attributes of the Cloud Infrastructure
These non-functional quality attributes are all very nice, but what do they mean and how
can they be tested? Note that when testing cloud infrastructure there are always three layers to test for, and these non-functional quality attributes only target the IaaS.
Scalability
A cloud should, first of all, be sufficiently scalable to meet the variable demand of the
business. Scalability focuses on the dynamic capacity of the cloud infrastructure, the supporting organization, and the processes of growing and shrinking the volume of services.
It’s a required property of the cloud, which indicates its ability to handle growing amounts
of work in a graceful manner. It also means the ease with which a system can be expanded/
upgraded when there is an increase in users and the need for more speed, processing and
storage capacity, and downgraded when there is a decrease in users and less need for
speed, processing and storage capacity.
Testing for scalability is a difficult task, as you need to know what to test for. That is why
the cloud provider mostly does this, but a cloud consumer might want to test it in their
integration tests. With the cloud the scalability can be enormous. It’s possible to use a
huge amount of computing power and storage space if needed. An agile load test can test
the boundaries of the scalability. It creates a (high) volume of load when needed to check
for the maximum load allowance of the infrastructure.3 For example, the scalability can be
tested with an increasing performance demand on the infrastructure; more and more virtualized machines are added to the infrastructure to see if it can cope with the performance
demands. Testing the shrinkage of the infrastructure cannot be done, but this quality can
be tested within the PaaS or SaaS layer of the system as a whole.
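As a small illustration of such an agile load test, the sketch below ramps up the load step by step against a hypothetical service endpoint and reports how response times develop; the URL, step sizes and thresholds are invented for the example and are not prescribed by TMap NEXT.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

SERVICE_URL = "https://service.example.com/health"  # hypothetical endpoint under test

def single_request() -> float:
    """Send one request and return the response time in seconds."""
    start = time.time()
    urllib.request.urlopen(SERVICE_URL, timeout=30).read()
    return time.time() - start

def ramp_up_load(steps=(10, 50, 100, 200), requests_per_user=5):
    """Increase the number of concurrent 'users' step by step and report
    the average response time per step."""
    for users in steps:
        with ThreadPoolExecutor(max_workers=users) as pool:
            timings = list(pool.map(lambda _: single_request(),
                                    range(users * requests_per_user)))
        avg = sum(timings) / len(timings)
        print(f"{users} concurrent users: average response {avg:.2f}s")
        # A sharp increase in average response time between two steps marks
        # the practical scalability boundary of the infrastructure.

if __name__ == "__main__":
    ramp_up_load()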
Availability
The cloud infrastructure is available in sufficient quantities to meet the (often implicit)
demands of the organization. The availability is partly determined by business requirements,
both explicit and implicit, such as confidence and expectation on the part of the organization (which is only made explicit in the event of a disruption). Availability in this context
relates primarily to the continuous operation of systems in accordance with SLA, and it is
the degree to which a system is available for the users (at the desired times).
For IaaS it’s important to test the availability, as it is one of the pillars of the cloud. If a
cloud interface isn’t available, its benefits will disappear. Public clouds should have at
least 99.99%4 availability, as the service of the cloud provider enables the movement of
data to an online server if another one fails. Currently Google has an availability of 100%
for its enterprise mail solution. The cloud availability can be tested like normal availability of infrastructure systems. A test therefore should be conducted to test the ability of the system to recover from an individual hardware component failure, such as a network or disk problem.
3 Theoretically the cloud has an unlimited amount of computing power and storage space. But in practice there are always boundaries.
4 Private clouds can have lower availability, but that’s determined in its design.
In More Detail
Availability Testing vs. Reliability Testing
Traditional testing for availability means running an application for a planned
period of time, collecting failure events and repair times, and comparing the availability percentage to the original service level agreement (SLA).
Where reliability testing is about finding defects and reducing the number of failures, availability testing is primarily concerned with measuring and minimizing the
actual repair time. That may seem odd at first, but take another look at the formula
for calculating percentage availability [Wikipedia]:
(Mean Time between Failures / (Mean Time between Failures + Mean Time to
Recovery)) × 100
Notice that as Mean Time to Recovery (MTTR) trends towards zero, the percentage
availability trends towards 100%. This idea becomes the essential focus of availability testing: reduce and eliminate downtime.
The closer the testing is to real-world situations, the better the test confidence.
Some organizations are reluctant to allocate fully configured server machines and
isolated network environments to a long battery of availability testing. Just remember that a software defect found after deployment costs ten times more to fix than
if found before deployment.
Availability is typically specified in nines notation. For example, 3-nines availability
corresponds to 99.9% availability. A 5-nines availability corresponds to 99.999%
availability. Downtime per year is a more intuitive way of understanding the availability. See Table 6.1, which compares the availability and the corresponding downtime.
Availability            Downtime
90% (1-nine)            36.5 days/year
99% (2-nines)           3.65 days/year
99.9% (3-nines)         8.76 hours/year
99.99% (4-nines)        52 minutes/year
99.999% (5-nines)       5 minutes/year
99.9999% (6-nines)      31 seconds/year
Table 6.1 The availability vs. the corresponding downtime
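As a small illustration of the formula and the table above, the sketch below computes the availability percentage from MTBF and MTTR and converts an availability percentage into downtime per year; the MTBF/MTTR numbers used are examples only.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Availability % = (MTBF / (MTBF + MTTR)) x 100."""
    return mtbf_hours / (mtbf_hours + mttr_hours) * 100

def downtime_per_year(availability_pct: float) -> float:
    """Return the allowed downtime in seconds per year for a given availability."""
    return (1 - availability_pct / 100) * SECONDS_PER_YEAR

# Example: a component that fails on average every 1000 hours and
# takes 0.5 hours to recover.
print(f"Availability: {availability(1000, 0.5):.4f}%")

# The 'nines' from Table 6.1 expressed as downtime per year.
for pct in (90, 99, 99.9, 99.99, 99.999, 99.9999):
    print(f"{pct}% -> {downtime_per_year(pct) / 3600:.2f} hours/year")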
Reliability
The cloud supports the business while protecting against damage from unwanted interruptions to supply facilities or theft/loss of data. It’s a set of attributes that bears on the
capability of software to maintain its level of performance under stated conditions for a
stated period of time [ISO 9126-1, 1999], and it consists of the following sub-characteristics:
Maturity, Fault Tolerance, Recoverability, and Reliability Compliance. If one of these sub-characteristics fails, the business will notice it immediately and the system will not respond
as expected or as required! A cloud solution needs to be reliable, as the business will
depend on it.
All these sub-characteristics need to be tested for, especially with a cloud infrastructure, as
an “as a service” infrastructure solution must be reliable under the required circumstances.
Testing is done by using a security-like test approach. The test cases need to be set up using negative testing: not only do the requirements need to be checked, they also need to be challenged by approaching them negatively. Negative testing, or abuse cases, can determine whether the requirements are met. A test mindset of “is the infrastructure reliable?” will determine the quality of the Infrastructure as a Service. See the boxed text above on “Availability Testing vs. Reliability Testing.”
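A minimal sketch of what such negative test cases could look like, assuming a hypothetical storage client under test; the client, its functions and the expected error behavior are illustrative assumptions, not a prescribed implementation.

import pytest  # assumes the tests are run with pytest

from storage_client import StorageClient, StorageError  # hypothetical client under test

client = StorageClient(endpoint="https://iaas.example.com")  # hypothetical endpoint

def test_write_is_rejected_when_quota_is_exceeded():
    # Abuse case: try to store more data than the agreed quota
    # (assume the test tenant has a 1 MB quota).
    with pytest.raises(StorageError):
        client.put("bucket-a", "huge-object", b"x" * (10 * 1024 * 1024))

def test_tenant_cannot_read_another_tenants_data():
    # Abuse case: a request for another tenant's bucket must be refused,
    # not silently answered.
    with pytest.raises(StorageError):
        client.get("bucket-of-other-tenant", "confidential.doc")

def test_service_recovers_after_forced_node_failure():
    # Challenge recoverability: after a simulated node failure the
    # service should still answer.
    client.simulate_node_failure("node-03")  # hypothetical test hook
    assert client.get("bucket-a", "known-object") is not None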
Adaptability
A cloud is easily adaptable to changing organizational needs. These include the ability
for updates/upgrades to take effect and for new applications to be made available, but
also for connections with other clouds and even organizations (community clouds). The
infrastructure needs to have the ability to work in a different environment without having
to execute extra actions to enable this. A cloud is characteristically always available and
able to work on and with different platforms. The PaaS layer is built on the IaaS layer and
needs to support this. Standardization of the different versions of the infrastructure improves
the adaptability of any piece of hardware and software. A cloud infrastructure needs to
support the business processes that the service provides.
When testing adaptability, timeliness is usually an important factor. Timeliness is defined
as the degree to which the information is available in time in order to take the measures
the information was meant for. Adaptability can be tested dynamically and explicitly with
a real-life test (RLT) [TMapNEXT, 2006].
In More Detail
Real-Life Test [TMapNext, 2006]
With the real-life test (RLT), it is not the intention to test the system behavior in separate situations, but to simulate the realistic usage of the system in a statistically
responsible way. This test mainly focuses on characteristics such as effectiveness,
connectivity, continuity and performance of the system being tested. Many defects
that are found with a real-life test are connected with a system’s use of resources:
––crashing of transactions following lengthy use;
––crashing of transactions that are carried out in a particular sequence;
––inadequate response times and speed of processing;
––insufficient memory or storage space available;
––insufficient capacity of peripherals and data-communication network, and
––unavailability of system components after an update.
To be able to test whether a system can handle realistic usage of it, that usage
should be somehow specified. This also serves as the basis for the test basis and, in
this context, is often referred to as the profile. The two most common types are:
––Operational profile. Simulation of the realistic usage of the system, by carrying out
a sequence of transactions, which is compiled in a statistically responsible way.
––Load profile. Simulation of a realistic loading of the system in terms of numbers of
users and/or transactions.
Security
A reliable cloud should provide adequate security for the required level of integrity and
confidentiality. Security is a very broad term, but at a minimum it should involve user
identification and the screening of systems and data residing within the cloud. Security is
important in the cloud infrastructure, with a direct link to the multi-tenancy characteristic
of the cloud. This multi-tenancy and the elasticity of cloud solutions create a security risk,
which can be a great threat and isn’t effectively solved by the cloud providers.
In More Detail
Multi-Tenancy and Elasticity
The elasticity characteristic of the cloud enables the opportunity to grow and
shrink in infrastructure performance and capacity. The allocated computing power
is enlarged (or reduced) when needed. However, when the infrastructure is located
in a certain part of the cloud, it’s unknown what part of the cloud will be allocated.
This could be in certain “risk areas” within the cloud.
These risk areas are allocated to infrastructure managed by unknown or obscure
parties. One of the best-known risks of multi-tenancy solutions is the unknown
“neighbor”: it might be a criminal or hacker that wants to have access to the stored
information.
When the elasticity creates an enlarged allocation of the infrastructure next to a
risk area, that can generate a security risk. A mutual arrangement with the cloud provider could determine what will happen as the allocation grows. For example, an
IP range could be set, or the physical server that the enlargement is executed on
could be determined.
The security of the infrastructure is also an attribute that, partly, needs to be addressed in
a negative manner, using abuse cases. How is the infrastructure secured and how “easy”
is it to get access to this? Using special companies to execute hacker-like tests as part of
an integrated test solution for the infrastructure is of added value.
Accountability
Calculating the resources, tooling and infrastructure costs associated with cloud usage is maybe the most important reason to move to a cloud, so accountability is a main characteristic of the cloud infrastructure. This covers audit trails and specific accountant demands (e.g. Sarbanes-Oxley). The pay-per-use model requires optimal accountability of cloud services
and applications.
Accountability can be tested statically with a checklist composed of the setup of certain
measures. The realization of these measures in the system can be tested in a dynamic and
explicit way.
The qualities and characteristics shown above are based on generic quality attributes for
cloud infrastructure. These quality attributes are not all critical for every IaaS solution, and
in some cases more may be needed for testing purposes. It is possible to deviate from this list if the situation requires. But it is recommended to use this fixed set of quality attributes as a basis, and to decide on the inclusion and/or removal of certain quality attributes only after all risks have been identified.
Functional Testing of the Cloud Infrastructure
The non-functional characteristics above apply to the infrastructure layer of the cloud and are tested only at that level. The functional requirements, however, need to be tested while integrating the different pieces with each other. In other words, all three layers need to be tested
and the integration between these layers is also part of the test. An integration test over
the complete infrastructure is the way to start this.
All different parts of the IaaS need to be connected with each other, either physically or
using simulation techniques, with data used to communicate between these parts.
All data should be able to go from A to B. To test this, a Program Interface Test5 is the best
test technique to use [TMap, 2002].
In More Detail
Program Interface Test: For the Integration of IaaS [TMap, 2002]
The program interface test (PIT) is a test design technique that is used to test the
interfaces between the various programs, infrastructure components and/or modules. After testing the independent components of the cloud, this technique is used
to verify whether the components still function correctly after integration with real
data flows.
When two components in an infrastructure work with each other, the integration
of these two needs to be tested. Executing test cases created with PIT permits
verification of whether component 1 and 2 function correctly together. If there is a
component 3 (or more), these are replaced with stubs and drivers. The integration
between component 1 and 2 is done with real data.
The goal of the PIT is to test the interfaces between the various components,
programs and/or modules, and thus check if the various components interpret the
data flows in the same way. This is because at these points in the programs many
defects occur, produced by differences between the two components. These types
of misinterpretations and incorrect execution are dealt with during this test. The
program interface test focuses especially on the interfaces and not on the correctness and completeness of processing. Its end result is a test script, like, for
example, that in Table 6.2.
Test case          1              2             3
Object             Site 1         Other 3       Number 8
Geo                Sketch         Final         Sketch
Mutation           GIDS           BAG Total     BAG Total
Status             Not measured   Demolition    Not measured
Geo note           Y              Y             N
Date               Today + 4      Today - 5     Today
Document#          34576          23            7685
Document date      Today + 2      Today - 7     Today + 2
Expected Result    Measured       Deleted       Not measured
Table 6.2 The result: an example of a PIT test script
5 For more information see the explanation on Test Design Technique – Program Interface Test at http://
www.testingthefuture.net/2009/08/test-design-technique-%E2%80%93-program-interface-test/.
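A minimal sketch of a program interface test in which two components exchange real data while a third component is replaced by a stub; the component names and interfaces shown are hypothetical and only illustrate the PIT idea.

import unittest
from unittest.mock import MagicMock

from component_one import Registration   # hypothetical component 1
from component_two import GeoProcessor   # hypothetical component 2

class ProgramInterfaceTest(unittest.TestCase):
    """Verify that component 1 and component 2 interpret the data flow in
    the same way; component 3 (the document archive) is stubbed."""

    def setUp(self):
        self.archive_stub = MagicMock()          # replaces component 3
        self.registration = Registration()
        self.processor = GeoProcessor(archive=self.archive_stub)

    def test_sketch_object_is_passed_and_interpreted_identically(self):
        # Real data flows from component 1 to component 2 (cf. test case 1 in Table 6.2).
        message = self.registration.export_object(
            object_id="Site 1", geo="Sketch", status="Not measured")
        result = self.processor.process(message)
        # Both components must agree on the meaning of the exchanged fields.
        self.assertEqual(result.status, "Measured")
        # The stubbed archive only records that it was called; its internals
        # are out of scope for this interface test.
        self.archive_stub.store.assert_called_once()

if __name__ == "__main__":
    unittest.main()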
Quality of the Cloud Infrastructure Using “Agile”
It is important to know what quality level the cloud infrastructure has to meet. The mostly implicit
demands should be made explicit in the cloud requirements. The business is in the driver’s
seat, but it needs to specify clear requirements or use an agile approach in creating and
checking the cloud infrastructure.
An infrastructure is set up with the usage of virtual infrastructure components; virtual
machines emulate the physical infrastructure. When these components are created from
standard components, such as IaaS, it’s possible to quickly connect the components and
establish the correct high-level functioning.
An infrastructure engineer can set up part of the cloud infrastructure and check within
minutes to see if it is functioning correctly. The system is built like a LEGO® building where
every component, either a system, an interface or any other infrastructure component, is
checked to see if it works. When it doesn’t function as needed, a direct change can correct
the situation. When it is shown to work, another component can be added to the infrastructure; every component is integrated as needed in the infrastructure. This agile approach is
based on iterative deployment in which requirements and solutions evolve in combination
and change is embraced, which would therefore appear to offer a potential solution to the
problems in a traditional approach. The setup of an agile framework consists of designing
the infrastructure, defining the process and assigning people to run the process.
Figure 6.3 The deployment of an agile framework consists of infrastructure, process and people
The incremental deployment processes have been specifically developed to increase both
speed and flexibility. The use of highly iterative, frequently repeated and incremental process
steps and the focus on business involvement and interaction theoretically supports early
delivery of value to the business. But the infrastructure basis is in defining the requirements
based on Availability, Scalability, Reliability, Adaptability, Security, and Accountability, as
described earlier.
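The LEGO-style build-up described above can be sketched as a loop that adds one virtual component at a time and runs a smoke check before the next one is added; the component list and the placeholder functions are assumptions for illustration only.

# Assumed build order of virtualized infrastructure components.
COMPONENTS = ["load balancer", "web server", "application server", "database"]

def deploy(component: str) -> None:
    """Placeholder: provision the virtual component (IaaS/PaaS API call)."""
    print(f"deploying {component} ...")

def smoke_check(components: list) -> bool:
    """Placeholder: verify the high-level functioning of everything deployed so far."""
    print(f"checking integration of: {', '.join(components)}")
    return True

def build_infrastructure_iteratively() -> None:
    deployed = []
    for component in COMPONENTS:
        deploy(component)
        deployed.append(component)
        if not smoke_check(deployed):
            # Fail fast: correct the component before adding the next one,
            # instead of debugging the whole chain at the end.
            raise RuntimeError(f"integration check failed after adding {component}")

build_infrastructure_iteratively()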
Cloud Applications: Testing on the Cloud (SaaS)
Cloud applications are still few compared to traditional applications, but they are the
future. But what are cloud applications? When the question is asked, “can you name a few
cloud applications?” most people answer Salesforce.com, Facebook, Google Apps and even
Microsoft Azure.6 Four hits (one of which isn’t actually a cloud application) as the best-known examples. Are there more? Yes, and they are growing in numbers!
But how do we test these cloud applications? What’s so special about them that they need
a different type of testing than traditional applications? Cloud applications are applications
that are created to leverage the opportunities the cloud gives them, but they also have to work with the disadvantages the cloud brings, like, for example, standardization.
Figure 6.4 The cloud is defined by its service model, deployment model and usage
6 Microsoft Azure is not a cloud application, but an infrastructure and platform.
Mostly cloud applications are based in the third cloud layer: SaaS. They are Software as
a Service solutions that run completely on cloud infrastructure and platforms. And that
is exactly the reason the testing of SaaS applications is different from traditional applications. When they are integrated in the current architecture they need to be tested on three
levels: namely the infrastructure, the platform and the application itself (see Figure 6.4).
The usage of standard services of applications also means a change for system testing.
Functional testing will be executed at a minimum, as the standard applications are already
tested and approved by the supplier. But that doesn’t say anything about how it integrates
into the client’s cloud.
Testing the Business: Testing Business Technology
As we said in Chapter 3, the cloud moves Information Technology to Business Technology.
Testing cloud applications is in line with that. With software testing we are not testing the
IT anymore, but the BT. But how do you test BT? As the “B” in BT stands for business, a
business approach should be used with testing. See the section “Creating a Cloud Test Strategy” in Chapter 5: the business is put in the driver’s seat with BDTM. But is that enough
for testing SaaS solutions?
In More Detail
Is IT Already a Commodity?
Technology is part of everyday life, in business as much as in our personal lives.
Take away the applications, e-mail, networks and internet from any organization
and it would come to a standstill almost immediately. This is more so now than
ever before: in the days of phone and fax, we could probably do fine without technology for a day or two.
The difference today is more than just greater dependency: we are no longer doing
the same things. Supported with technology we are starting to do new things that
we simply could not do without that technology. And now that technology has
found its way into all parts of our organization, we are asking new things of it.
Looking closer at what we now demand of Information Technology, it becomes
clear that cloud computing is a good model for provisioning and paying for technology. It will give a boost to whatever we are doing.
When testing for BT, the business scenarios are the most important part of what to test
with. The application should perfectly fit the business needs and demands in doing its work.
So setting up the test using the user scenarios or using the Real Life Test technique is what
is needed while testing SaaS applications. The user scenarios are worked into test scripts,
which are easy to read for the business and are, preferably, executed automatically.
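One way to work a user scenario into a test script that the business can read and that can still run automatically is to keep the scenario as plain data and map each business step to a small function; the scenario, its steps and the system client below are invented examples, not part of TMap NEXT.

# A business-readable user scenario, kept as plain data.
ORDER_SCENARIO = [
    ("log in as",        "sales employee"),
    ("create order for", "customer 1042"),
    ("add product",      "laptop model X"),
    ("submit order",     None),
    ("expect status",    "Order confirmed"),
]

def _assert_status(system, expected):
    actual = system.current_status()
    assert actual == expected, f"expected '{expected}' but got '{actual}'"

# Each business step is mapped to one call on the system under test.
STEP_HANDLERS = {
    "log in as":        lambda sys, role: sys.login(role),
    "create order for": lambda sys, customer: sys.create_order(customer),
    "add product":      lambda sys, product: sys.add_product(product),
    "submit order":     lambda sys, _: sys.submit(),
    "expect status":    _assert_status,
}

def run_scenario(scenario, system):
    """Execute each business step of the scenario against the system under test."""
    for step, argument in scenario:
        STEP_HANDLERS[step](system, argument)

# `system` would be a thin client around the SaaS application's interface
# (for example its web UI or API); that client is not shown here.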
To create these scripts multiple options are valid, but the two most widely available are using the test techniques Real-Life Test (RLT) and Process-Cycle Test (PCT), and creating a (test)
model that derives test scripts from user scenarios using Model Driven Quality Improvement (MDQI). As the Business Technology is tested, the testing should connect with the
business and connect with the business processes.
Example
Process-Cycle Test (PCT) [TMapNEXT, 2006]
The process-cycle test is a technique that is applied in particular to testing the
characteristic of Suitability (integration between the administrative organization
and the automated information system). The basis of the testing should include
structured information on the required system behavior in the form of paths and
decision points. The process-cycle test digresses on a number of points from most
other test design techniques:
––The process-cycle test is not a design test, but a structure test: the test cases issue
from the structure of the procedure flow and not from the design specifications.
––The predicted result in the process-cycle test is simple: the physical test case
should be executable. This checks implicitly that the individual actions can be carried out. In contrast to other test design techniques, no explicit prediction is made
of the result, and so this does not have to be checked.
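A small sketch of how PCT test cases with test depth 1 could be derived from a procedure flow: the flow is modeled as decision points with outgoing paths, and every path is covered at least once. The flow itself is an invented example, and the greedy walk used here is sufficient for simple flows like this one.

# A procedure flow modeled as: decision point -> list of (path label, next point).
# "end" marks the end of the procedure.
PROCESS_FLOW = {
    "receive order":       [("order complete", "check credit"),
                            ("order incomplete", "request information")],
    "request information": [("information received", "check credit")],
    "check credit":        [("credit ok", "deliver"),
                            ("credit not ok", "reject order")],
    "deliver":             [("delivered", "end")],
    "reject order":        [("rejected", "end")],
}

def pct_test_cases(flow, start="receive order"):
    """Generate test cases (paths from start to end) until every path out of
    every decision point has been covered at least once: test depth 1."""
    all_edges = {(point, label) for point, paths in flow.items() for label, _ in paths}
    covered, test_cases = set(), []
    while covered != all_edges:
        point, case = start, []
        while point != "end":
            paths = flow[point]
            # Prefer a path that has not been covered yet, otherwise take the first.
            label, nxt = next(((l, n) for l, n in paths if (point, l) not in covered),
                              paths[0])
            covered.add((point, label))
            case.append(label)
            point = nxt
        test_cases.append(case)
    return test_cases

for i, case in enumerate(pct_test_cases(PROCESS_FLOW), start=1):
    print(f"Test case {i}: " + " -> ".join(case))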
In testing the Business Technology, the business plays a key role in determining how effective the testing is. If the business doesn’t take the time to look at a system and help with
setting up and executing the test scripts, it’ll be very difficult for testers to do an adequate assessment of the system’s quality. Even though the business may only help with checking and approving the system, it plays a vital part in the final outcome of the tests.
To help the business, testers also should focus on the business aspects of the system. That
can be done by using test design techniques that are appropriate for creating test scripts
from a business standpoint. Test design techniques like the Process-Cycle Test (PCT), Use
Case Test (UCT) and Real-Life Test (RLT) [TMapNEXT, 2006] are of great use when creating
the test scripts. These test scripts focus more on the business use of the system than others. This focus provides the testers with the tools they need to execute tests of the Business
Technology because they are in line with the needs of the business.
Therefore the testing can start at the moment the project starts. When the business requirements or scenarios are defined, the testing can start creating the test scripts from these
scenarios; the testers don’t have to wait until the code is written. The code is already written and tested functionally by the supplier, but the application needs to be successfully
integrated with the system. That integration starts by determining how business is planning
to use the system. As that is defined at the start of the project, testing can also start at
that moment. It is even necessary to start thinking of quality right from the start by testing ideas, concepts and documents. The principle is to start at PointZERO®. However, our
current way of working does not always accommodate this concept, and in most cases it
will require some time to grow into it.
Non-Functionals of SaaS Don’t Need to Be Tested?!?
As stated earlier, the SaaS functional requirements are covered by the supplier’s tests, but
the non-functional requirements are only tested by the supplier in the supplier’s own environment: an environment that can differ from the eventual environment.7 Cloud applications
run over three layers and a change from the supplier’s environment is fundamental.
Example
Testing Salesforce.com
At a delivery company they are testing a Salesforce.com implementation. To test
the application, the test team is using the business requirements and user experiences to build the required test scenarios. The business is consulted and “used”
throughout the process in both an iterative and an incremental manner.
These test scenarios are tested in the beginning of each (two week) sprint. After
the last sprint, the test scenarios are complete, and they can be used for testing
the system, testing system integration and acceptance testing.
What differs in testing a SaaS implementation from an on-premise or bespoke
development project is the reduced testing that is done of the functional aspects.
The supplier has a well-documented set of functional limitations. Therefore it’s a
matter of revealing and clarifying these limitations to the business. A lot more testing is done on the business-related functions.
With traditional applications, whichever way you look at the (functional) design documents you will
notice the lack of non-functionals. They are hardly mentioned. The design describes what the
application needs to do to work and why this is needed, but nothing further… For instance,
take an online store. What would we like to do in an online store? We order an item, shop
around, then before buying, we log in. This is a simple but accurate scenario for an online
store. The business expects that it is properly constructed and working. A functional tester
will use the design document and use a system test to ensure that the product meets the
specifications, most of the time. The tester will make recommendations and report on the quality of the product. This can be positive when the specifications are met, but even after a positive report from this tester a customer might still not want to shop.
7 With cloud infrastructure, even the IaaS is always a separate part of the test object.
Why? The reason is that some things, like security, usability or performance, are not covered. What if you have a purchase in the cart and when you click on it, to see it, it takes
30 seconds to show up? What if you have to click on an item 5 times to put that item into
your cart because you have to go through 4 screens? What if you can order products for
free? Good for you, but maybe not for the store owner, and maybe you are even breaking
a law?
A lot of people call these things non-functionals. But are they really non-functional? Consider the reverse: does a shop function when customers stand in line at the cashier for 3 hours? If products
are for free? Or if they have to look for toothpaste under “make-up”? No! The shop will close
down! So, part of the function of a shop is how the shop operates.
“Within a cloud application, functional requirements are met!” Why? Because they are
tested and confirmed by the supplier. But the non-functional requirements are not. For
testing a cloud application it is necessary to look at the operation of the software. Testing
from a user or a business perspective!
Some non-functional specifications will be in the design documentation, for example, about
security and the loading times of screens. If they are in the design documentation, then test
cases can be created to cover them. But if not, then test cases have to be defined to cover
the full functioning of the application. Together with the business needs, these requirements
can be integrated into the test cases. And don’t forget to put them in the documentation! The
quality of the application will be greatly enhanced because these requirements are met.
Here are a few questions that can help uncover the non-functional requirements. See also
TMap NEXT [TMapNEXT, 2006]:
•What requirements are listed in the field of security, performance and usability?
•How are the requirements translated into the design documentation?
•Are the requirements created with the SMART criteria [SMART, 1981] in mind (Specific,
Measurable, Attainable, Relevant and Time-bound), for example “screen should appear
within 5 seconds” instead of “the screen must be displayed within a reasonable time”?
•Are the requirements explained in only one way?
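A SMART requirement such as “screen should appear within 5 seconds” translates almost directly into an automated check. The sketch below measures the load time of a hypothetical page and asserts the agreed limit; the URL and the limit are assumed examples.

import time
import urllib.request

PAGE_URL = "https://webshop.example.com/cart"  # hypothetical page under test
MAX_SECONDS = 5.0                              # the SMART limit from the requirement

def measured_load_time(url: str) -> float:
    """Return the time needed to retrieve the page, in seconds."""
    start = time.time()
    urllib.request.urlopen(url, timeout=30).read()
    return time.time() - start

def test_cart_page_appears_within_five_seconds():
    elapsed = measured_load_time(PAGE_URL)
    assert elapsed <= MAX_SECONDS, (
        f"cart page took {elapsed:.1f}s, requirement is {MAX_SECONDS}s")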
In More Detail
A checklist and a real-life test are excellent test design techniques for this. An alternative is to arrange with the supplier that specific software is delivered that can help to cover the non-functional requirements.
Using Models for Business Approval: Model Driven Quality Improvement
The functional test cases for cloud applications have to be derived from the business, so
that the application is created/bought to match the need of the business. One of the ways
to create those functional test scripts is by using Model-Based Testing (MBT). With MBT a
(test) model is used, derived from the business requirements, to automatically generate
test cases. Most of the time the challenge with this approach is to include the creation of a
formal model in which the operation of (part of) the application is represented. Creating this
model is done by hand. When this model is finished, it can be read by a tool that enables the
creation of test cases. An additional advantage is that MBT allows the creation of standard
test cases, something that is an added value for the ability to offshore or outsource test
execution. MBT also claims to offer the opportunity to check the quality of the requirements at an early stage of the development process.
Unfortunately this is where implementing Model-Based Test design will fail in a lot of projects. If a team has never learned to use modeling, and a test team is not able to create a complex model, how can they do it? And besides, the organization around them isn’t ready to work with these types of complex test models. The acceptance of these models isn’t easy when they cannot be verified because the business doesn’t understand the complexity.
MBT only covers one part of the possibilities models have for testing. See the boxed text,
“Model Driven Quality Improvement or Model-Based Testing?” MDQI doesn’t always need a
complex model. A simple and understandable model can be used, a model that testers can
understand and work with, and that represents a part of the behavior of the application.
And start with simple parts the business can also understand and approve.
For example, using a simple process flow can be the start of a project. Maybe this flow
doesn’t hold all the information, but it is complex enough to generate the test cases8
that cover the total business process. By starting test activities with Model Driven Quality
Improvement, it’s possible to make models that are understandable for the business part
of the organization.
In More Detail
Model-Based Testing Experience
When testers are asked about their experience with Model-Based Testing, the
answers are often not very clear or the testers don’t have any experience with this
type of testing. When you ask the same question to the business, you hear, “That
is an old approach that doesn’t work for us.” And that is correct. MBT has already
been very promising for at least 15 years.
8 The test cases generated from a process flow via the test design technique Process-Cycle Test with
test-depth 1 or 2.
Business and testers who have tried in the past to use MBT have become a bit
skeptical. And that is one reason why Model Driven Quality Improvement is a better name for this type of testing. It just takes away the concerns the business and
testers have about MBT and encourages them to look at something that can really
improve the quality of software.
Model Driven Quality Improvement is not just a simple new term, it’s a combination of
techniques that offer a type of testing to the business that not only generates test cases
but, more importantly, delivers added value to improve the quality of the final result.
Definition
“Model Driven Quality Improvement is the acceleration of the development process and the improvement of the quality of the software, by using, validating and verifying models.”
With MDQI, specific test models are still needed in high-risk environments with a high complexity. These test-specific models can be successful where a formal approach is needed. The
business can specify the detail level, and if a high-level model is approved by the business,
from there the detail levels are filled in.
MDQI offers testers the opportunity to create functional test cases in line with the business
by using a simple model, but sometimes more functional test cases are needed, even for
cloud-based applications. This depends on the coverage, risks and the type of model, and
what types of tests are needed.
In More Detail
Model Driven Quality Improvement or Model-Based Testing?
The last couple of years people have been talking about integrating testing with
other parts of the development lifecycle (ALM). A good thing, but if testers strive for a perfect test approach without involving the other parties in the ALM, this is just a sub-optimization.
If you look at the definition of Model-Based Testing (MBT) the one most used is:
“Automation of test design,” by Utting and Legeard [MBT, 2006]. This definition
is quite “small” when you compare it with the MDQI definition by Andréas Prins:
“Model Driven Quality Improvement is the acceleration of the development process and the improvement of the quality of the software, by using, validating and verifying models.”
Model-Based Testing is only focused on the testing part of the ALM and not on the other disciplines; it’s not focused on improving the designs, doing evaluations of the designs, giving feedback, and helping the business with old legacy systems to create test sets, like for example regression test sets. Model Driven Quality Improvement is a better approach: an approach that really gives answers to the business, project and testing problems.
Model Driven Quality Improvement is the complete set of doing different things
with models. Model-Based Test Design (according to the definitions above) is just
one of them.
Figure 6.5 MBT as part of MDQI
With the MDQI approach testers are not only focused on making the model for testing purposes, but also on making models to improve the total quality of the product and process.
To be successful and integrate with the other parties you should use a model used
by the other parties. This doesn’t mean by definition that you should use exactly
the same models, but there should be one model that everybody in the project
agrees upon, especially the business.
Model Driven Quality Improvement can only be effective if all parties in the ALM
work together! For example a business analyst who makes the requirements can
do this in MS Excel, or in a tool for designing requirements. But if this is only text it
isn’t very clear and there is room for assumptions. If the business analyst creates a
model that is in a format good enough to be checked by the (test) team, there is a first quick win: aligning the tasks.
Example
An Example of MDQI
At a telecom company testers received documentation or information from
business analysts that was not good enough to create sufficiently detailed and
structured test cases. This was because there was still room for assumptions and
interpretations. To compensate for this, they used models (like flow diagrams) to
verify the designs. In making these models, they found a lot of inaccuracy and
incompleteness in the documentation.
Modeling was the first step to discovering these flaws. And in making these models
the assumptions and interpretations disappeared. By doing this at the very start of
the project, they made improvements at the beginning that led to higher quality testing in the next step (development). In short, they did an evaluation focused on completeness by making a model of the design!
Integrating SaaS Services: Using the Supplier
When creating a test strategy for the cloud, standard SaaS services are used to integrate
into the application landscape of the business. These SaaS services or applications are a
direct result of the rationalization behind the application portfolio: use standardized tools
when possible. But how to handle these services when testing? Do they need to be tested?
Or what do you or don’t you test?
For testing purposes, these SaaS applications make it somewhat easy. Because it’s a SaaS
application, the software the supplier offers is a complete application for end users. The
software doesn’t run on a local machine, but in the cloud. These SaaS applications can
also be “apps” that are installed locally but run on another service, like a cloud. Only light
configuration is needed on the business side. These SaaS applications are created to support
a business service and only that. For example, within Google Apps the Agenda only offers
agenda functions like a calendar and tasks, unlike Microsoft Exchange that has a whole lot
more to offer. The downside of course is that this standardization and simplicity offers the
client very little influence over how the application works and its future development.
Integrating these SaaS applications into the business architecture requires some skills. But
more and more SaaS applications come with multiple integration options. As standards
develop around this, better integration will be available with multiple other applications.
But these applications do need to be tested to confirm that the integration is done correctly and the
software can be accepted. Only integration and acceptance testing, not system testing? As
it’s a complete application delivered by its supplier, it’s an application that should function
correctly. It’s a full product.
If the SaaS application is (newly) developed to be integrated into the current architecture,
some checks on the quality could be appropriate. Regular checks may be agreed upon
with the supplier to confirm that the quality is as expected, so that there are no surprises
when the application is delivered. Quality Gates can help with that. Traditionally, a Quality
Gate could consist of the sign off by all stakeholders of a finalized functional design before
starting the technical design.
In cloud or SaaS projects the “gates” consist of checks to confirm the project is still on
schedule and delivering the expected quality. These checks are agreed upon by the supplier and the business and integrated in the cloud test strategy. In this cloud test strategy,
determine who will conduct the acceptance test. In general it’s agreed upon that a third
party, for instance the test team, will execute this acceptance test for the business. The
SaaS supplier is responsible for the QA of its software, the business for the acceptance
and integration.
In More Detail
Checking, Exploring and Accepting
The business needs to accept that a SaaS application is suitable. But how does
“accepting” correspond to testing? Testing is made up of three things, namely:
1 “Check” is the determination of whether a created software product complies with the specifications.
2 “Explore” is an (in-depth) investigation of non-specified parts of a (created) software product, looking for errors.
3 “Accept” is the determination of whether the described specification or product is the correct solution to the (user’s) problem.
Instant Deployable Test Infrastructure: Testing on the Cloud
A fitting test infrastructure is required for (dynamic) testing of a test object. And so, testers are dependent on the test infrastructure, because without a test infrastructure no test can be executed! To ensure a good quality test object and as preparation for production, it’s recommended to have an infrastructure that makes this available. The DTAP model is a technical aid to support testing; DTAP stands for Development, Test, Acceptance and Production.
Figure 6.6 The best practice in test infrastructure:
separate environments for development, test, acceptance and production
The principle of this model is that every user of the infrastructure has a type of environment safe from intrusion by others. The environment types are equal to the four stages in
software development: development, test, acceptance and production. This model is also
of high value in the Cloud Era.
Development Environment
A development environment is the environment where the application will be developed
and/or modified. During development, the application is tested for any errors by the developer in the development environment. The unit test is also performed in this type of environment. The design and related test activities of this environment are conducted as part
of the development process. When it’s necessary to do a test, the developer uses a part of the environment for these tests. Ideally the development platform offers standard
facilities for testing, such as test data, test tools and procedures for versioning, transfer,
defects and error recovery. When this is the case, this provides the developers with enough
opportunities to manage their testing process.
An important aspect that developers face is the manageability of their environment. In
practice, all too often a programmer has five or more versions of a program on their hands.
Preserving the relationship between test cases, test results and the test object then demands
a lot of attention. Once the developer agrees on the application, it is transferred to the next
environment: the test environment.
The applicability of cloud solutions is ideal for development environments. A development
environment is often relatively small in size and thus lends itself to relatively easy introduction in cloud solutions (IaaS and/or PaaS).
Test Environment
The test environment exists to test (parts of) the whole system in both technical and functional testing. These tests should be performed in a manageable environment. Manageable
means that resources like included software, documentation, test files and test ware are
available and can be managed. The transfer of new or modified software must be verifiable and tests must be reproducible.
The individual tests from one (sub) system should be able to take place separately from the
tests of other (sub) systems. The simultaneous use of the same test data, especially in this
context, can be responsible for many problems.
In this type of environment tools can be used to give an insight on a technical level about
different events to test. Examples are the use of SQL to watch the database directly, having
direct access to the system logs and the possibility to start and stop batches.
The use of cloud solutions for test environments strongly depends on the nature of the
applications, the information system (administrative or technical), and the implementation of test levels and test types. But criteria like the (sensitivity of) test data, dependencies, and interfaces determine whether a test environment can be offered using a cloud solution. The diversity of a test environment is larger than that of the development environment, but it generally stays at the IaaS level.
Acceptance Environment
The acceptance environment offers the future users and administrators the possibility of
testing the test object in an environment that is as much as possible production-like. It
happens that this type of environment is divided into a user-acceptance test (UAT) and a production-acceptance test (PAT). The user-acceptance test is also regularly executed in the test environment.
Often organizations find the acceptance environment (for the PAT) expensive. This is not
surprising, since for the PAT, it is important that the environment is not only functional but
also technically the same as the production environment. This means, logically, that a PAT
environment must have the same hardware as the production environment. A PAT environment is actually a second production environment: it’s production-like.
All levels of testing can be executed in cloud environments, but for some types of tests a
test environment in the cloud delivers even more added value. Separate test levels can now
be executed on separate environments, as desired. Testers don’t have to wait until the end
of the testing phase to move to a “production-like” environment to do performance, load
and stress tests. A production-like environment can be created as needed.
An end-to-end test might be set up in the cloud. All the necessary servers and images can
be added to the cloud to create that end-to-end environment. When all the different parts
are integrated with each other, a full end-to-end test can be executed. Even end-to-end
tests that transcend a client can be done in the cloud. All necessary components can be
published in the cloud to create the whole chain of systems. Thus the business processes
can also be tested.
The cloud gives (testers) the opportunity to instantly deploy needed (test) infrastructure.
When virtualized parts of the infrastructure are incorporated into the cloud they can be used
on-demand. This enables us to create and delete infrastructure with the click of a mouse!
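What “with the click of a mouse” can look like in practice is a call to the cloud provider’s provisioning interface. The sketch below uses a hypothetical REST API (the endpoint, token, payload and fields are all invented) to create and later delete a complete, production-like test environment.

import json
import urllib.request

API = "https://cloud.example.com/v1"   # hypothetical provisioning API
TOKEN = "..."                          # credential placeholder, left out on purpose

def _call(method, path, payload=None):
    data = json.dumps(payload).encode() if payload is not None else None
    request = urllib.request.Request(
        f"{API}{path}", data=data, method=method,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(request, timeout=60) as response:
        return json.load(response)

def create_test_environment():
    """Deploy a complete, production-like test environment on demand."""
    environment = _call("POST", "/environments", {
        "name": "end-to-end-test",
        "images": ["web-server", "application-server", "database"],
        "size": "production-like",
    })
    return environment["id"]

def delete_test_environment(environment_id):
    """Remove the environment again once the test run is finished,
    so that costs stop the moment testing stops."""
    _call("DELETE", f"/environments/{environment_id}")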
7 Cloud Risks: Worth Testing for…
Clouds can help IT. The cloud is scalable, less expensive to maintain,
enables Green IT, provides instantly deployable (test) infrastructure,
reduces Total Cost of Ownership (TCO), utilizes servers more effectively, uses independent locations, and promotes Business Technology.
But this comes with a price tag. The cloud also delivers more risks for
security, data integrity, privacy issues, data recovery and performance.
Besides, is the business even ready for IT?
All of these risks can be addressed by testing for them. But that’ll be a costly exercise, and
the attraction of the cloud is the reduction in costs. Other countermeasures need to be taken to create a trusted cloud solution: measures that decrease the risks the cloud creates, but
also increase the quality of the solution. These actions help create a better solution, better
in practice and more efficient.
Testers can help the business by looking for functions and measures that can increase the
quality of the cloud solution.1 These risks are some of the most common for cloud solutions,
but some are more likely to occur than others in different cases where cloud solutions are used. Major concerns2 where testers can support the business to integrate the cloud can be categorized as follows:
1 These actions can also help in traditional applications.
•Compliance: regulations may prohibit the use of clouds for certain workloads and
data.
•Data privacy: a shared, multi-tenant infrastructure increases the potential for unauthorized exposure, especially in the case of public clouds.
•Less control: some businesses are uncomfortable with the idea of their information on
systems they do not own in-house.
•Security management: how will today’s enterprise security controls be represented in
the cloud?
•Reliability: there are worries about service disruptions affecting the business.
Figure 7.1 Cloud risks where testers can help the business
2 Other concerns are the performance and the ability of the cloud to scale up and down.
Even though the cloud enables the creation of Business Technology, the business still needs
to be ready to approve and, more importantly, to use the cloud solution.
Compliance, Data Privacy and Security: A Need for Insight
Although the cloud provides tangible business benefits for organizations, the security, compliance (data integrity) and privacy challenges associated with the cloud require special
consideration. Security is always named as the biggest issue for cloud adoption. And that
is, of course, correct. But why is this so important in the cloud? Isn’t it important with traditional systems? Yes, it’s just as important, but as the cloud is the hot topic of the moment,
its deficiencies are receiving part of that attention.
Cloud Security Issues: All around Us
The cloud also increases risks for security and compliance, so it poses a question that needs
to be asked: “is the business even ready for IT to use a cloud?” Of course it is, but the business cannot just sit back and relax. The business has to proactively work with IT to create
a cloud solution that will enable the move to BT.
The cloud allows organizations to use services and store data outside their own control.
This development raises security questions and should produce a degree of wariness about
using cloud services. Think about these risks [Brodkin, 2008]:
•Privileged user access. Data stored and processed outside the enterprise’s direct control
brings with it an inherent level of risk, because outsourced services bypass the physical,
logical and personnel controls IT exerts over in-house programs.
•Regulatory compliance. Data owners are responsible for the integrity and confidentiality of their data, even when the data is outside their direct control, which is the
case with external service providers such as cloud providers. Where traditional service providers are forced to comply with external audits and obtain security certifications, so should cloud providers. Cloud providers who refuse to undergo this scrutiny are signaling that customers can only use them for the most trivial functions. Most, if not all, of the leading cloud providers do not support on-site external audits at
the request of customers. As a result, some compliance cannot be achieved because
on-site auditing is a requirement that cannot be satisfied: for example, the Payment Card
Industry level 1 compliance.
•Data location. The exact location of data in the cloud is often unknown. Data may be
located in systems in other countries, which may conflict with regulations prohibiting data
from leaving the country or state. It is advisable to investigate whether cloud providers will
commit to keeping data in specific jurisdictions and whether the providers will make contractual commitments to obey local privacy requirements on behalf of their clients.
For example, the EU Data Protection Directive places restrictions on the export of personal data from the EU to countries whose data protection laws are not judged as
“adequate” by EU standards [EuropeanCommission, 1995]. Without proper attention,
European personal data might be moved outside the EU in violation of the directive.
Figure 7.2 The data is subject to the laws of the country where the data is stored
Using the Data Protection Directive or the Safe Harbor Principles [EuropeanCommission, 1995] can help in regulating the processing of personal data within and outside of the European Union. These Safe Harbor Principles3 allow personal data to be stored outside the EU when the receiving party upholds these principles. The Safe Harbor Privacy Principles allow US companies to register
if they meet the European Union requirements and obtain proper certification.
These Principles provide:
•Notice. Individuals must be informed that their data is being collected and told how it
will be used.
•Choice. Individuals must have the ability to opt out of any collection and transfer of the
data to third parties.
•Onward Transfer. Transfers of data to third parties may only occur where the third
parties are organizations that follow adequate data-protection principles.
•Security. Reasonable efforts must be made to prevent loss of collected information.
•Data Integrity. Data must be relevant and reliable for the purpose it was collected for.
•Access. Individuals must be able to access information held about them, and correct
or delete it if it is inaccurate.
•Enforcement. There must be an effective means of enforcing these rules.
3 Safe Harbor Arrangement Official Site.
In More Detail
Transfer of Personal Data to Third Countries under the Data Protection Directive
Third countries is the term used in EU legislation to designate countries outside
the European Union. Personal data may only be transferred to third countries if
that country provides an adequate level of protection. Some exceptions to this rule
are provided: for instance, when the controller can personally guarantee that the
recipient will comply with the data protection rules.
A European “Working Party” negotiated with the US about the protection of personal data, and as a result the Safe Harbor Principles were crafted.
The United States prefers a “sectoral” approach to data protection legislation,
which relies on a combination of legislation, regulation, and self-regulation, rather
than governmental regulation alone. The US recommended in the Framework for
Global Electronic Commerce that this approach should be led by the private sector,
and companies should implement self-regulation in reaction to issues brought on
by Internet technology.
To date, the US has no single data protection law comparable to the EU’s Data
Protection Directive. Privacy legislation in the United States tends to be adopted on
an ad hoc basis, with legislation arising when certain sectors and circumstances
require (e.g., the Video Privacy Protection Act of 1988, the Cable Television Protection and Competition Act of 1992, the Fair Credit Reporting Act, and the 2010 Massachusetts Data Privacy Regulations). Therefore, while certain sectors may already
satisfy the EU Directive (at least in part), most do not.
•Data segregation. The shared nature and massive scale characteristic of the cloud make
it likely that one’s data is stored alongside data of other consumers. Encryption is often
used to segregate “data-at-rest,” but it’s not a cure-all. It’s advisable to do a thorough
evaluation of the encryption systems used by the cloud provider. A properly built but
poorly managed encryption scheme may be just as devastating as no encryption at all,
because although the confidentiality of data may be preserved, availability of data may
be at risk if it is not guaranteed.
•Recovery. Cloud providers should have recovery mechanisms in place in case of a disaster. According to Gartner: “Any offering that does not replicate the data and application
infrastructure across multiple sites is vulnerable to a total failure.” Cloud providers should
have guidelines concerning business continuity planning, detailing how long it will take
for services to be fully restored. For example, Zmanda, an open source backup provider,
offers a cloud backup solution named Amanda Enterprise.4 Figure 7.3 shows that the
cloud provides an excellent means to radically simplify the data recovery process.
Figure 7.3 The cloud also offers excellent ways to help in data recovery
•Investigative support. Gartner warns that “investigating inappropriate or illegal activity
may be impossible in cloud computing, because logging and data may be co-located and
spread across ever-changing sets of hosts and data centers.” If cloud providers cannot
provide customers with a contractual statement specifying support for incorruptible logging and investigation, Gartner says that “the only safe assumption is that investigation
and discovery requests will be impossible” [Gartner, 2008]. Clients need to have trust in
the service, and audits or third-party investigations are not always an option.
•Data lock-in. Availability of client data may be at risk if a cloud provider becomes
insolvent or is acquired by another organization. Providers should have and disclose
procedures whereby customers can retrieve their data as needed, and as importantly,
in the data format of their choice. If the data is presented in a format proprietary to the
cloud provider, it may be unusable by any other provider. The use of open standards by
providers to prevent data lock-in is recommended, but not always supported.
4 Amanda Enterprise for Cloud Based Data Recovery.
In his paper, Guido Kok proposes a Cloud Computing Confidentiality Framework (CCCF) that will enable companies to review the possibilities for engaging in cloud-based services, based on the confidentiality of the data used within the company [Kok, 2010]. The goal of the
Framework is to explain the differences between security in cloud environments in the various cloud deployment models (private, public, hybrid and collaboration), and the security
in present-day information security practices. As it is a good practice for every enterprise
to follow such a risk management strategy to secure their data and information systems,
the framework we present here will be relevant to every entity interested in working with
cloud-based information systems.
In More Detail
Cloud Deployment Models
The different deployment models of the cloud are mainly Private, Hybrid and Public
Clouds, as well as Community and Hosted Private Clouds.
––Private clouds. Private clouds run in the service of a single organization, where
resources are not shared by other entities. “The physical infrastructure may be
owned by and/or physically located in the organization’s datacenters (on-premise)
or that of a designated service provider (off-premise) with an extension of management and security control planes controlled by the organization or designated
service provider respectively” [Bardin, 2009]. Private cloud users are considered to be trusted by the organization: they are either employees or have contractual agreements with it.
––Public clouds. Public clouds are based on massive-scale offerings to the general
public. The infrastructure is located on the premises of the provider, who also owns
and manages the cloud infrastructure. Public cloud users are considered to be untrusted, which means they are not tied to the organization as employees and have no contractual agreements with the provider.
––Hybrid clouds. Hybrid clouds are a combination of public, private, and community
clouds. Hybrid clouds leverage the capabilities of each cloud deployment model.
Each part of a hybrid cloud is connected to the other by a gateway, controlling the
applications and data that flow from each part to the other. Where private and community clouds are characteristically managed, owned and located on either the organization’s or a third-party provider’s site, hybrid clouds span both the organization’s and the third-party provider’s sites. The users of hybrid clouds can be considered both trusted and untrusted. Untrusted users are prevented from accessing the resources of the private and community parts of the hybrid cloud.
––Community clouds. Community clouds run in the service of a community of organizations and have the same deployment characteristics as private clouds. Community users are also considered to be trusted by the organizations that are part of the community.
Regardless of which delivery model is utilized, cloud offerings can be deployed in four primary ways, each with its own characteristics. The characteristics of the deployment models are:
1 who owns the infrastructure;
2 who manages the infrastructure;
3 where the infrastructure is located; and
4 who accesses the cloud services.
Figure 7.4 The basic cloud model: public, private and hybrid (including a community cloud)
Responsibility for Security
A big issue surrounding security in the cloud is “who has what responsibility?” The question applies at every level of the cloud: IaaS, PaaS or SaaS. The breakdown of this responsibility is shown in the Responsibility Matrices below (Tables 7.1 to 7.3). For each cloud layer, the responsibility differs between the cloud provider and the client.
Security Type
IaaS Provider
Client
Infrastructure Security Network level
Redundancy of the network layer
Implement strict default security groups
IDS
Logs, audits
Implement DOS & DDOS filters
Secure connection (firewall + encryption)
Security Type
IaaS Provider
Client
Host level
Ensure prevention and detection
controls
Virtualization security
Restrict physical & logical access
to the VM hypervisor
Data storage separation
Control reports (login activities)
Responsible for what data is in
the cloud
Should ask for security information under NDA
Virtualization Software Security
OS image should be hardened
and running only minimum services for your application
Assume your virtual server will
be available to anyone online,
restrict access, or keep it current
Run a host firewall and limit access
Install a hosted IDS
Install a log server to centralize
each log with a higher security
protection
Protect access to the hardened
image and verify the integrity
Require private key to access the
hosts and safeguard them
Restrict physical & logical access
to the VM hypervisor
Do not keep decryption key in the
cloud
Do not allow password-based
authentication for shell access
Require a Sudo password for
super-user privileges
If security is compromised, shut
down the affected sector and
make a snapshot for forensics
Data security
Data in-house
Classify data
Be aware of compliance regulation
Data transfer
Encrypted connection
Encrypted connection
Data storage
Encrypted data
Provide integrity file
Track the data
Encrypted data for storage
Key encryption management
Data processing
Data separation
Application security
As an application cannot use encrypted data, ensure data could
be used unencrypted in the cloud
Data deletion
Clear and sanitize with respect to
NIST recommendations
Meta data
Collect Audit and Archive infrastructure logs
Security Type
IaaS Provider
Client
Identity
­provisioning
Add, Modify, Delete user capabilities
Support for file provisioning
Ideally SPML support
Publish account management
policies, provisioning methods, role
definitions
Support for developers, testers,
end users and administrators
API support for provisioning
Provisioning and deprovisioning
Definition of virtual infrastructure
roles and associated privileges
Automated provisioning via an
organization-wide standard such
as SPML based on roles (Business
and privileged)
API support for provisioning users
Configure virtual machines to use
the LDAP/ AD when possible over
an encrypted connection
Configure virtual machine images
with pre-populated users and
groups of people who need to access the VM
At first login, credentials should
be changed
Audit VM and remove unnecessary users
Federated identity
SSO
SAML support and how to connect
it to the service identity provider
(use case examples…)
Authentication
management
Login and static password
SSL support Delegated authentication
For IT personnel, establish a VPN
connection
If possible, use the LDAP or active
directory or SSO via VPN
Whenever possible, a VPN should
be used for an application user,
otherwise the application should
accept authentication requests
in a standard format (SAML, WS
Federation, etc.)
Compliance
­management
Custom profile abilities
Define personal user attributes
Respect provisioning & deprovisioning policies to deny revoked
authorization
IAM
Security Type
IaaS Provider
Client
Security ­Management
Availability
­management
Provide service health dashboard
Redundant architecture
Measure availability
Set & explain maximum quota triggers (access to API, % of CPU, % memory …)
Include features to let administrators set some quota trigger
Be aware of SLA
Monitoring tools to check virtual
server health
Redundant and reliable Internet
connection & network services
DNS routing services and authentication services
Availability of the virtual servers
and attached storage (persistent
and ephemeral)
Availability of virtual storage
Access control
Provide network filtering
Federation support
Responsible for access to the
network servers and application
platform infrastructure
Managing the access control to
administrative processes (backup,
hosts, hypervisors, network maintenance, firewall…) through a strong
authentication and role-based
process to support provisioning
and revocation of administrative
privileges
Network access with virtual firewall
Virtual server access
Inside customer’s office, access
to computers to manage cloud
services should be restricted
VPC ­management
Responsible for measuring the
vulnerabilities, patching them and
configuring systems owned by the
CSP
The scope is:
•networks,
•hardware,
•hosts
•hypervisors,
•applications
•management console used by the customer to manage their virtual infrastructure
Responsible for personal computers and in particular the browser
Applications or services interfacing with the SaaS
Responsible for the VPC of the virtual infrastructure, which includes
active VM and host images
hardening the standardized image, with a minimal approach to
privilege configuration
virtual network configuration and
virtual firewall policies
Intrusion
­detection
Monitoring intrusion of the cloud
layer: hypervisor, application, log
events, DOS & EDOS and web
management console
Monitoring virtual network interfaces
Monitor log activities (VM, databases, applications)
Monitor services delivered by
third parties that you use (data
encryption, storage usage…)
Security Type
IaaS Provider
Client
Incident response
Track incident regarding host availability
Track the incident
Inform the user
Help the CSP to remediate
Data Privacy
Collection
Custom SLA capabilities
Usage
Be aware of compliance & regulation and decide what can be
stored online
Specify data will be stored on a
cloud
Data governance to ensure data
collected is only used in the context for which it was collected
Retention &
­destruction
Erase and sanitize when space is
reallocated
Destruction of the media when it is
replaced
Destroy encryption key for encrypted data
Destroy data according to compliance & regulation
Location &
­transfer
Need to specify the data location
Data cannot be transferred to
third parties without notice to the
data owner
Table 7.1 IaaS Responsibility Matrix, showing the differentiation between the IaaS provider and the client
Security Type
PaaS Provider
Client
Infrastructure Security
Network level
Redundancy of the network layer
Implement strict default security
groups
IDS
Logs, audits
Implement DOS & DDOS filters
Monitor & measure
Redundant Internet connection
Secure connection (firewall +
encryption)
Monitor & measure
Host level
Ensure prevention and detection
controls
Virtualization security
Restrict physical & logical access to
the VM hypervisor
Data storage separation
Provide activity reports
Responsible for what data is in
the cloud
Should ask for security information under NDA
Security Type
PaaS Provider
Client
Application level
Availability
Data confidentiality & integrity
Assess the platform software
Provide activity reports
Secure the runtime engine
Sandbox architecture
Publish API for authentication and
authorization control
SSL & federation support
Responsible for operation security management:
•authentication and authorization management
•strong password policies
•implement strong authentication if supported by CSP
•encrypt data if possible
•audit reports
Understand applications upon
which third parties depend and
assess that they are secured, if
applicable
Ask for containment and isolation architecture information
Data Security
Data in-house
Classify data
Be aware of compliance regulation
Data transfer
Encrypted connection
Encrypted connection
Data storage
Encrypted data
Provide integrity file
Track the data
Encrypted data for storage
key encryption management
Data processing
Data separation
Application security
As an application cannot use
encrypted data, ensure data
could be used unencrypted in
the cloud
Data deletion
Clear and sanitize with respect to
NIST recommendations
Meta data
Collect Audit and Archive infrastructure logs
Security Type
PaaS Provider
Client
Identity
­provisioning
Add, Modify, Delete user capabilities
Support for file provisioning
Ideally SPML support
Publish account management
policies, provisioning methods, role
definitions
Support for developers, testers, end
users and administrators
API support for provisioning
Provisioning
Deprovisioning
Profile management
Administrative management
User provisioning should be
done over an encrypted channel
If the CSP does not support
SPML, try to look for an SPML
gateway to connect to the CSP
Audit accounts and deauthorize
unnecessary accounts
Federated identity
SSO
SAML support and how to connect
it to the service identity provider
(use case examples…)
Authentication
management
Login & static password
SSL support
Delegated authentication
Authorization
management
Support administrator and user roles (developers, testers, end users)
Assignment of user privileges
Logs and Audit management
Support trusted networks (to connect to)
Give access to logs & audits
Ideally XACML & Oauth support
Monitor creation and removal of
users and who performed the action (to monitor rogue acts by cloud
employees)
Control access to the customer’s
code repository and development
environment
Special rights for developers to
access database, directory services,
file repository …
Control privileges for moving
applications from development
environment to test environment to
production
Compliance
­management
Custom profile abilities
IAM
Password policies
Define personal user attributes
Respect provisioning & deprovisioning policies to deny revoked
authorization
Security Type
PaaS Provider
Client
Security Management
Availability
­management
Provide service health dashboard
Redundant architecture
Measure availability
Set & explain maximum quota triggers (access to API, % of CPU, %
memory,…)
Include features to let administrators set some quota trigger
Be aware of SLA
Measure availability / month
Redundant and reliable Internet
connection
Access control
Provide network filtering
Federation support
Provisioning support
Responsible for the access to the
network servers and application
platform infrastructure
Provisioning
Deprovisioning
Profile management
Administrative management
Responsible for access to the application deployed, provisioning
and authentication of end users
VPC management
Responsible for measuring the
vulnerabilities, patching them and
configuring systems owned by the CSP. The scope is:
•networks,
•hardware,
•hosts,
•hypervisors,
•applications,
•management console used by the customers to manage their virtual infrastructure
Responsible for personal
computers and in particular the
browser
Applications or services interfacing with the SaaS
Responsible for the deployed application. VPC domains cover:
•analyze source code
•black box testing
•penetration testing
Intrusion
­detection
Monitoring intrusion of the cloud
layer: hypervisor, application, log
events, DOS & EDOS, web management console and privilege escalation attack
Monitoring intrusion of deployed
application using the PaaS
platform
Incident response
Track the incident
Inform the user
Help the CSP to remediate
Data Privacy
Collection
Custom SLA capabilities
Be aware of compliance & regulation and decide what can be
stored online
Security Type
PaaS Provider
Usage
Client
Specify data will be stored on a
cloud
Data governance to ensure data
collected is only used in the context for which it was collected
Retention &
­destruction
Erase and sanitize when space is
reallocated
Destruction of the media when it is
replaced
Destroy encryption key for encrypted data
Table 7.2 PaaS Responsibility Matrix, showing the differentiation between the PaaS provider and the client
Security Type
SaaS Provider
Client
Infrastructure Security
Network level
Redundancy of the network layer
Implement strict default security
groups
IDS
Logs, audits
Implement DOS & DDOS filters
Monitor & measure
Redundant Internet connection
Secure connection (firewall +
encryption)
Monitor & measure
Host level
Ensure prevention and detection
controls
Virtualization security
Restrict physical & logical access to
the VM hypervisor Provide activity reports
Responsible for what data is in
the cloud
Should ask for security information under NDA
Monitor & measure
Application level
Availability
Data confidentiality & integrity
Assess the platform software
Provide activity reports
Responsible for operation security management:
•authentication and authorization management
•strong password policies
•implement strong authentication if supported by CSP
•encrypt data if possible
•audit reports
Data Security
Data in-house
Classify data
Be aware of compliance regulation
Security Type
SaaS Provider
Client
Data transfer
Encrypted connection
Encrypted connection
Data storage
Encrypted data
Provide integrity file
Track the data
Data storage separation
Encrypted data for storage
key encryption management
Data processing
Data separation
Application security
As an application cannot use
encrypted data, ensure data
could be used unencrypted in
the cloud
Data deletion
Clear and sanitize with respect to
NIST recommendations
Verify that data has correctly
been erased with provider
Meta data
Collect audit and archive infrastructure logs
Log access
IAM
Identity
­provisioning
Add, Modify, Delete user capabilities
Support for file provisioning
Ideally SPML support
Publish account management policies, provisioning methods and role
definitions
User provisioning should be done
over an encrypted channel
Federated identity
SSO
SAML support and how to connect
it to the service identity provider
(use case examples…)
Authentication
management
Credential management
SSL support
Delegated authentication
Password policies
Authorization
­management
Support administrator and user
roles
Support trusted networks (to connect to)
Give access to logs & audits
Ideally XACML & Oauth support
Assignments of user privileges
Logs and audit management
Compliance
­management
Custom profile feature
Define personal user attributes
Respect provisioning & deprovisioning processes to deny
revoked authorization
User provisioning and deprovisioning
Role awarding
User provisioning should be
done over an encrypted channel
If the CSP does not support
SPML, try to look for an SPML
gateway to connect to the CSP
Audit accounts and deauthorize
unnecessary accounts
Security Type
SaaS Provider
Client
Security Management
Availability
­management
Provide service health dashboard
Redundant architecture
Measure availability
Be aware of SLA
Measure availability / month to
report
Redundant and reliable Internet
connection
Access control
Provide network filtering
Federation support
Provisioning support
Provisioning & deprovisioning
processes
Profile management
Administrative management
processes
VPC management
Responsible for networks, hardware, hosts, applications and storage owned by the CSP
Responsible for personal
computers and in particular the
browser
Applications or services interfacing with the SaaS
Intrusion detection
Monitoring intrusion of the cloud
layer: hypervisor, application, log
events, DOS & EDOS, web management console and privilege escalation attack
NA
Incident response
Track incident
Troubleshoot
Keep customer informed
Track the incident
Inform the user
Help the CSP to remediate
Custom SLA capabilities
Be aware of compliance &
regulation and decide what can
be stored online
Data Privacy
Collection
Usage
Retention &
­destruction
Specify data will be stored on a
cloud
Data governance to ensure
data collected is only used in
the context for which it was
collected
Erase and sanitize when space is
reallocated
Destruction of the media when it is
replaced
Destroy encryption key for
encrypted data
Table 7.3 SaaS Responsibility Matrix, showing the differentiation between the SaaS provider and the client
By using these tables, a matrix is created that enables the business, the cloud provider and
the (security) test team to understand who does what to maintain security in the cloud.
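For teams that want to work with such a matrix in a machine-readable form, a minimal sketch is shown below. The few entries are an illustrative subset, and the provider/client split chosen here is an assumption made for the sake of the example; the authoritative split is the one agreed with the provider and reflected in Tables 7.1 to 7.3.

    # A tiny, illustrative slice of an IaaS-style responsibility matrix.
    RESPONSIBILITY_MATRIX = {
        "Network level": {
            "provider": ["Redundancy of the network layer", "IDS", "Logs, audits"],
            "client": ["Secure connection (firewall + encryption)"],
        },
        "Data storage": {
            "provider": ["Encrypted data", "Provide integrity file"],
            "client": ["Encrypted data for storage", "Key encryption management"],
        },
    }

    def duties_of(party):
        # Collect everything one party is responsible for, for example to seed
        # a checklist for the (security) test team.
        return {area: items[party] for area, items in RESPONSIBILITY_MATRIX.items()}

    print(duties_of("client"))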
Static or Dynamic Security Testing: Do Both!
To determine how secure the cloud is, the Responsibility Matrix will help, but another option is to directly test the security of the cloud. This can be done with a static security test, which examines the application or service from the inside out: its source code, byte code or application binaries are analyzed for conditions indicative of a security vulnerability. Another option is a dynamic security test, which examines the application from the outside in: the application is exercised in its running state and poked and prodded in unexpected ways to discover security vulnerabilities.
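As an illustration of the outside-in approach only, the sketch below probes a web application with a handful of unexpected inputs and flags suspicious responses. The target URL, the parameter name and the payload list are hypothetical; a real dynamic security test would use a dedicated scanner, a much larger attack surface and, above all, the owner’s permission.

    import requests

    # Hypothetical target; only probe services you are allowed to test.
    TARGET = "https://example.test/search"

    # A few classic "unexpected" inputs: quote characters, script fragments, oversized data.
    PAYLOADS = ["'", "' OR '1'='1", "<script>alert(1)</script>", "A" * 10000]

    def probe(url, param="q"):
        findings = []
        for payload in PAYLOADS:
            try:
                response = requests.get(url, params={param: payload}, timeout=10)
            except requests.RequestException as exc:
                findings.append((payload, f"request failed: {exc}"))
                continue
            # Server errors or reflected payloads deserve a closer, manual look.
            if response.status_code >= 500:
                findings.append((payload, f"server error {response.status_code}"))
            elif payload in response.text:
                findings.append((payload, "payload reflected in the response"))
        return findings

    if __name__ == "__main__":
        for payload, issue in probe(TARGET):
            print(f"suspect behaviour for {payload!r}: {issue}")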
Both static and dynamic security testing look at the application by itself in the SaaS layer.
In a cloud environment, security testing should be applied on three layers, namely the
service, the infrastructure and the platform layers. Additionally, security testing has to be
performed by two different parties, on the one side by the cloud providers, offering Software
as a Service, and on the other side, by cloud consumers, developing custom applications
to be executed inside cloud environments. As the latter make use of Platform and Infrastructure as a Service, the cloud provider itself cannot assure the security of a client’s application: application-specific code often introduces its own risks, which have to be addressed by the developers or a dedicated security testing team. However, this should not be an excuse for cloud providers to disregard testing the security of the cloud’s infrastructure (IaaS and PaaS). Moreover, this highlights a good reason to aggressively test, and consequently mitigate, risks to the infrastructure itself and its internal interactions (such as operating system calls, network communication and file system access), to guarantee that applications run smoothly. A secure cloud infrastructure boosts the overall security of the environment and results in even more secure systems. Hence, no matter which side or layer of a cloud a developer or tester is operating from, assuring security through sophisticated testing plays a crucial role.
Many vendors of security testing tools provide static and dynamic (application) security
testing capabilities. The ability to test an application both statically and dynamically will
become increasingly important, for these reasons:
•Some vulnerabilities can be found only with static security testing, others only with dynamic security testing; testing in both ways yields the most comprehensive coverage.
•Many web applications that would traditionally be scanned with dynamic security testing
tools also use a significant amount of client-side code, in the form of JavaScript, Flash,
Flex and Silverlight. This code must also be analyzed for security vulnerabilities, typically
using static analysis.
This is true for traditional applications, and even more so for cloud applications, where there are three different layers with security issues to test.
Control: Private vs. Public Cloud in Security
The common view of private versus public cloud computing suggests that it is prudent
to permit only public information to enter public cloud environments. With the process
described in the cloud computing confidentiality framework, an organization can assess
security controls that need to be in place to protect the information in question.
When an organization considers a cloud service offering an operational environment for
the information in question, both parties can perform a gap analysis to determine which
security controls are required to ensure the integrity of the information, and which security
controls the cloud service provider supports. The difference between the required controls
and the supported controls is called the security gap. To reduce the organizational risk that
the security gap imposes, the NIST recommends the following three options to close the
gap between what security is needed and what security external service providers offer:
•“Use the existing contractual vehicle to require the external provider to meet the addi-
tional security control requirements established by the organization” [NIST, 2009a].
•“Negotiate with the provider for additional security controls (including compensating
controls) if the existing contractual vehicle does not provide for such added requirements” [NIST, 2009a].
•“Employ alternative risk mitigation measures within the organizational information system when a contract either does not exist or the contract does not provide the necessary
leverage for the organization to obtain needed security controls” [NIST, 2009a].
If the cloud provider can implement the additional controls demanded by the organization,
the public cloud environment of the provider meets the security requirements set by the
organization.
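In its simplest form the security gap described above is a set difference between the controls the organization requires and the controls the provider supports. The sketch below illustrates this; the control identifiers are placeholders chosen for the example (a real assessment would work from a control catalogue such as the NIST SP 800-53 families).

    # Controls the organization requires versus controls the provider supports.
    required_controls = {"AC-2", "AU-6", "CP-9", "IR-4", "SC-13"}
    supported_controls = {"AC-2", "AU-6", "SC-13"}

    # The security gap: required, but not supported by the cloud provider.
    security_gap = required_controls - supported_controls
    print("Security gap:", sorted(security_gap))

    # For every control in the gap, the organization chooses one of the three
    # NIST options: require it contractually, negotiate additional controls,
    # or mitigate the risk with its own measures.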
The issue here is that giving up control makes it hard to keep overall complexity under
control. IT groups end up purchasing services, business users provision their own solutions,
and so forth (see Figure 7.5). Redundancy and cost explosion are real risks, and these risks are often not truly acknowledged by business users who just want quick solutions.
A joint business and IT approach, such as enterprise architecture, provides guidelines
on how to use cloud services and ensure an active role for the IT department in brokering
services. Business and IT governance processes should include strong controls that focus
on continuously reducing complexity [SeizeTheCloud, 2011].
Figure 7.5 A risk for the cloud: sharing control
Reliability: Cloud Recovery Testing
The cloud is often considered unreliable, but is that accurate? Fortunately, the reliability of
the cloud can be tested, as testing is an essential part of disaster recovery planning and a
key component of the Disaster Recovery Plan (DRP). The DRP shows the processes, policies
and procedures involved in preparing for recovery or continuation of technological infrastructure critical to an organization after a natural or human-induced disaster. Disaster
recovery is a subset of business continuity. While business continuity involves planning to
keep all aspects of a business functioning in the midst of disruptive events, disaster recovery
focuses on the IT or cloud systems that support business functions.
Frequently the creation of a DRP results in a false sense of security. Management has the
tendency to relax and think they have that part of the system in order. But if you don’t test
your disaster recovery plan, there’s a risk that it won’t work as expected when it’s really
needed. The Disaster Recovery Test, as part of business continuity testing, is becoming an annual event for most IT departments, but with the cloud it should become an even more regular event, so that the business can check whether its services are sufficiently protected. As more trust is placed in the cloud this is a very healthy thing to do, and it is mandated by a lot of regulators and almost insisted upon by internal audit teams.
Testing business continuity reveals the implicit risks in the business processes. The most mission-critical processes need to be tested more thoroughly and the task-critical processes less thoroughly. As Figure 7.6 shows, the criticality of a business process is calculated from the chance of failure and the cost of the potential damage if, in this case, up-time and business capability are lost.
Figure 7.6 Business continuity: the importance of maintaining the business processes
Whenever possible, choose a remote site recovery strategy that can be tested as frequently
as possible. Testing the disaster recovery process has the following benefits:
•showing that the disaster recovery plan works;
•discovering problems, mistakes and errors, and resolving them before the plan must be relied on in a real disaster;
•educating staff in executing tests and managing disaster recovery situations;
•reminding the members of the organization, IT and business, of the necessity of such a
disaster recovery plan and the importance of planning accordingly; and
•increasing awareness of the disaster recovery strategy.
After each test, use the detailed logs and schedules to identify any errors in your procedures,
and eliminate them. Retest the changed procedures, and then incorporate them into your
recovery plan. After changing the recovery plan, completely revise all existing disaster
recovery documents.
Steps to Take in a DRP Test
First, performing a DRP test without proper risk management can put an organization at
significant risk. To put things into perspective, let’s analyze the steps, risks and (counter)
measures of a disaster recovery test; see Table 7.4 [Spirovski, 2010].
Step 1: Failure of primary systems
Activity: In order to create a disaster situation, the primary systems need to be caused to fail on some level.
Risks: Databases damaged or not saved properly due to forced shutdown or forced power failure; hardware components failing due to forced shutdown or power failure; split-brain cluster due to an uncontrolled sequence of server and storage failures.
Measures: Perform a full backup prior to the initiation of the DRP test; backup components and vendor support are on hand during the entire test; do not perform a direct forced shutdown but force a network-level isolation at the routers.

Step 2: Activation of Disaster Recovery systems
Activity: Severing any relation between the DR and the primary systems and running the DR systems as primary, temporarily.
Risks: Actual failure of the primary system during the test; failure of the primary system while the DR system is found to be non-functional.
Measures: Every interested party must be fully aware of the test, including business custodians, directors of divisions and top management who would initiate the real Business Continuity Plan; full backup prior to the initiation of the DRP test at the DRP site, and full vendor support.

Step 3: Reconfiguring the user environment
Activity: Intervening in the end-user environment in a way that will make the end users use the DR system.
Risks: Configuration error that might cause the end user to input test data into the primary systems; configuration error that might cause the primary system to stop functioning.
Measures: Scripted and documented steps of reconfiguration, where all steps should be performed by two persons: one observing the other’s actions.

Step 4: Reverting to the primary systems
Activity: Resuming the primary systems at some level and re-establishing the relation between the DR and the primary systems.
Risks: Configuration error that might cause the primary system to stop functioning; copying of test data that was input into the DR test system back into the primary location; failure of the primary systems during resumption.
Measures: Scripted and documented steps of reconfiguration, where two persons should perform all steps: one observing the other’s actions; fully controlled and documented process of resumption, which guarantees that only the primary system is data master; full backup prior to the initiation of the DRP test, with backup components and vendor support on hand during the entire test.
Table 7.4 Steps, risks and measures of a disaster recovery test
With all these risks, is it more prudent not to perform a DRP test? Absolutely not! Performing
the DRP test actually confirms that things are running. And if something breaks, you can be
better prepared for it next time. Not performing the test will just make the business think
everything is great, until a problem occurs. And that problem is certain to arise sometime.
Execute frequent tests early during cloud implementation or when the disaster recovery
plan is prepared. Once all the major problems have been removed, less frequent testing is
possible. The test frequency will depend on:
•the interval between major changes in the cloud infrastructure, platform or software;
•how current the business wants to keep the recovery plan; and
•how critical and sensitive the business processes are (which is an issue with all testing,
but the more critical the processes are, the more frequently testing may be required).
So, perform the DRP test regularly, but with a whole set of countermeasures for the possible
problems that could arise during the test.
In More Detail
Cloud Cracks: Break Down or Support
In April 2011 the Eastern US data center of Amazon Web Services (AWS) failed. This could be the start of some negative talk about the cloud: the cloud is certainly starting to show its cracks.
But is this a bad thing? Is it bad for the cloud’s image that multiple sites and services, like Reddit and Foursquare, went down with it? No, it’s not! For the first time we’ve seen what can happen when the cloud fails. We can now keep that in mind when creating these services, and we’ll know what to do when it happens again.
To anticipate this, cloud users should “design with failure in mind.” A cloud is also software, and software can fail. But in early 2011 everybody was on “Cloud 9” with the cloud, and only the pessimists were talking about cloud failures. Now they’ve seen what can happen and they can prepare for it: not only the cloud providers like AWS, but also the owners of those cloud services.
Cloud providers and brokers should have recovery mechanisms in place in case
of a disaster. According to Gartner: “Any offering that does not replicate the data
and application infrastructure across multiple sites is vulnerable to a total failure.”
Cloud providers should have guidelines concerning business continuity planning,
detailing how long it will take for services to be fully restored.
According to the research firm Gartner, the cloud market will grow to €71 billion
next year. Companies will continue to weigh the benefits of the cloud, its massive
cost savings and easy scalability, against the relatively small risk and annoyance
of outages—risks and annoyances that could also have happened with a failure
in their own non-cloud data centers. The level of cost savings gained from moving
to the cloud depends on all kinds of variables. The study by Booz Allen Hamilton
“The Economics of Cloud Computing” [BoozAllenHamilton, 2009] found that “the
benefit-to-cost ratio of a non-virtualized 1,000-server data center could reach 15.4:1
after implementation, and total life cycle cost may be 66% lower than maintaining
a traditional data center.”
In the AWS case, luckily nothing catastrophic happened; no private data was lost
or “left on the street.” So everybody can sleep well. However, this failure helps turn the hype around the cloud into a more grounded use of its services. The failure doesn’t crack the cloud so badly that it will break; rather, it will make cloud construction better, and we will be able to enjoy the benefits even more in the near future. Amazon, other cloud providers and their cloud customers will learn from this outage, and because the savings from the cloud are simply too great to dismiss, the customers will keep coming.
Specifying a Disaster Recovery Test
In order to determine which business processes to test, testers must keep the core of the
planning process in mind. Each business-critical process defined in the DRP should be
completely reassessed and prioritized based on the Cloud Risk Analysis (see Chapter 5)
of threats, vulnerabilities and safeguards. Performing mandatory recovery testing on processes with a high cloud risk is a no-brainer and easily defensible to the business. It is the less obviously critical processes that will require decisions from the business as to what levels they deem acceptable.
This process can be simplified by using the damage ratio of the CRA. The CRA is then used as
input for a ranking system with which the business can make decisions based on empirical
data as opposed to subjective evaluations. The damage level that was determined in step four of the Cloud Risk Analysis is the input for this ranking system; when numbers are used to determine the damage level, those numbers can be used directly to specify the Disaster Recovery Test (DRT).
In More Detail
When another method is used to classify the damage level, one that isn’t easily
quantifiable, those values can be transformed into numerical data.
For example, standard TMap NEXT® uses the values High, Medium and Low. These values need to be translated into numerical values. This can be done easily by substituting “3” for High, “2” for Medium, “1” for Low and “0” for None. But a method that provides for better distinctions is preferred.
I prefer to use the numerical range “0, 1, 3, 5, 9.” The differentiation in the numbers
reflects a wider difference between the risks. The “0” and “1” are directly exchangeable with None and Low, but Medium can now be dispersed between “3” and “5,”
and High between “5” and “9.”
When reassessing the risk values, the business should also keep in mind the levels
of damage. By doing so, it will be clear what effects risk can have on different levels
of damage.
Characteristic: Functionality

Process: Sales; Subprocess: ----
Product requirement: Compliance with the functional requirements
Damage: 9
Arguments: Loss of revenue if the sales process breaks down

Process: Sales; Subprocess: Advice
Product requirement: With an eye to the legal duty of care, the advice given and how the client decides to deviate from the advice must be recorded
Damage: 5
Arguments: High fines and negative press will result for the company if this functionality does not work (correctly)

Process: Sales; Subprocess: Offer
Product requirement: The offer must contain the correct price
Damage: 3
Arguments: An incorrect price may result in loss of revenue

Process: Booking; Subprocess: Offer
Product requirement: The accepted offer must be filed with the date in the filename
Damage: 1
Arguments: The date is automatically stored in the data of the offer
Table 7.5 Damage table for the characteristic Functionality
In the example provided in Table 7.5, processes with a damage level score of 9, 5 and 3 will
be tested, but a process with a score of 1 will not.
The key to this method is that the business has provided information based on current
analysis, and therefore has set the boundaries for testing; the business is in charge of the
testing.
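A minimal sketch of this selection step is shown below. The damage levels and the cut-off value of 3 come from the example in Table 7.5; the data structure and the helper function are illustrative only.

    # Damage levels per (process, subprocess), as in Table 7.5.
    DAMAGE_LEVELS = {
        ("Sales", "----"): 9,
        ("Sales", "Advice"): 5,
        ("Sales", "Offer"): 3,
        ("Booking", "Offer"): 1,
    }

    def select_for_drt(levels, threshold=3):
        # Processes at or above the threshold agreed with the business are
        # included in the Disaster Recovery Test; the rest are left out.
        selected = [process for process, level in levels.items() if level >= threshold]
        return sorted(selected, key=lambda process: -levels[process])

    for process in select_for_drt(DAMAGE_LEVELS):
        print(process, DAMAGE_LEVELS[process])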
Performance of the Cloud: Test It?
Another common concern about the cloud is its performance. Does it meet the enormous expectations held out, including the hype that its performance is scalable to infinity? The answer is “no.” There is no infinite number of servers in cloud centers, but there are enough to serve the scale of most performance needs. There are not many applications (including websites) that need a load of more than 100,000 virtual users, and even fewer need a load of more than 1,000,000.
Using traditional tooling to create a load for these applications can be a very expensive experience. A license for 100,000 virtual users with the two most often-used performance test tools (HP Performance Center and IBM Rational Performance Tester) costs a lot of money. The cloud can generate this load, which is only actually needed a few times a year: it can create the load for the few minutes that it is needed, reducing the overall costs!
But the cloud also provides performance. It may not be infinite, but it is large. What is large, and what is enough? There are two ways to look at this. First, one could benchmark the performance of the application and reproduce that performance in a cloud. The problem is that this only shows the application performing as it does today, while performance requests can also exceed the current business capacity: seasonal variations at Christmas, for example, might generate requests far in excess of the maximum capacity. Not every system has the same pattern of performance usage. Cloud centers are equipped to support the need for performance on an overall average rating; they can handle the performance requests as needed, giving the appearance of infinite capacity.
But cloud providers cannot have a multitude of services running to satisfy the same performance requests. For example, if a cloud has multiple clients all with salary calculations,
that creates a performance problem. That performance problem will be shown at the end
of the month, when the calculations are generally done. When all of those services are
executed at the same time, they put a larger demand on the performance capacity, which
will lead to the same issues that exist today when computing power is needed.
A cloud client needs to have insight into the cloud’s usage volumes and practices, either
through a Service Level Agreement (SLA) or through guarantees by the cloud provider. When
selecting a cloud provider, this risk needs to be taken into account. Cloud providers are
reluctant to give information about their usage volumes, but if a client wants to be sure the cloud can handle its performance requests, this information is going to be necessary.
Elasticity of the Cloud Service: Use the Cloud for Load Testing
The same principles that apply to testing traditional applications also apply to cloud services: these services can also be tested for performance issues. However, performance tests of cloud services are more concerned with the elasticity of the cloud, or, in other words, the on-demand scaling up of the service. The use of (automatic) load patterns helps test cloud performance.
For example, IBM Rational Performance Tester and SOASTA Cloud Testing have the option to test an application with standardized load patterns. These patterns represent several ways in which cloud applications are used. Figure 7.7 shows the different patterns that emerge (a minimal sketch of generating such patterns follows the list):
•Unpredictable burst. When the usage volume of a service is not yet known, or the
service is known to fluctuate in an unpredictable manner, this pattern can test whether
the service can handle that fluctuation.
•Predictable burst. When the usage of a service is predictable, like for example seasonal
variations, the load patterns can be adjusted to provide the necessary elasticity of the
service.
Figure 7.7 Automatic load patterns to help test the elasticity of the cloud service
•Periodic usage. Services that are used at specific times in a certain period have special
load patterns. When, for example, a service runs twice a month and the rest of the month
the usage rate is nearly nil, it only needs the computing power at those moments. The
load pattern should be adapted to test this situation.
•Hyped usage. Sometimes there is a lot of hype creating frenzied requests for information. Information requests or news feeds can be subject to enormous demand. Testing
the maximum load of these services can provide insight into the risks that are associated with these issues.
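To drive such tests, the expected number of virtual users over time has to be modelled and then fed into a load-generation tool such as the ones mentioned above. The sketch below is one possible way to model the four patterns; all base values, peaks and periods are illustrative.

    import math
    import random

    def predictable_burst(t, base=100, peak=1000, period=24):
        # A recurring peak, for example a daily or seasonal burst.
        return base + (peak - base) * max(0.0, math.sin(2 * math.pi * t / period))

    def unpredictable_burst(t, base=100, peak=1000):
        # Random spikes, to check that the service scales with unknown fluctuation.
        return peak if random.random() < 0.05 else base

    def periodic_usage(t, quiet=10, busy=800, period=360, window=12):
        # Almost no load, except during a short window (e.g. twice a month).
        return busy if t % period < window else quiet

    def hyped_usage(t, start=50, growth=1.3, cap=20000):
        # Exponentially growing requests after something goes viral, up to a cap.
        return min(cap, start * growth ** t)

    if __name__ == "__main__":
        for hour in range(0, 48, 6):
            print(hour,
                  int(predictable_burst(hour)),
                  int(unpredictable_burst(hour)),
                  int(periodic_usage(hour)),
                  int(hyped_usage(hour)))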
In More Detail
At the end of 2010 WikiLeaks was in the news over the unveiling of US diplomatic documents and a criminal investigation of Julian Assange, the founder of WikiLeaks. After the news broke that PayPal, MasterCard and Visa were
not allowing any money transfers to WikiLeaks, these organizations were hit by
hackers with DDoS attacks. DDoS stands for distributed denial-of-service, and it is
an attack on a website to make it unavailable to its potential users. As a result, the
sites of MasterCard, Visa and PayPal were unavailable. They went “down.”
Commonly a DDoS attack saturates the target machine (the website) with external
communication requests. As a result, the target cannot respond to the traffic, or
responds so slowly as to be rendered effectively unavailable. It consumes the target’s resources so that it can no longer provide its intended service, or obstructs
the communication media between the intended users and the victim so that they
can no longer communicate adequately.
WikiLeaks also gets hit a lot with DDoS attacks, but they have thought of an
answer to these attacks: they moved their website into Amazon’s EC2 cloud. As
only the website itself has been transferred to Amazon, for now the data is hosted
in France, so the US government cannot make any claim to the data. With this solution, WikiLeaks hopes to fend off any attacks on its website. As the cloud is elastic,
it can raise its saturation point by adding extra resources from the cloud. As a result, the site won’t be totally swamped and will keep functioning normally. The cloud helps in security,
instead of being a security risk!
Figure 7.8 A graphical explanation of a DDoS attack [Wikipedia]
In More Detail
A Winter Cloud
In the last few days the Netherlands was seized by Father Winter. Global warming at its best? Or just a cold end to 2010? The fact is that the transport systems had problems with the snow and ice. I myself got stuck in Dublin at the start of this month because of the winter weather. Not only were flights cancelled; public transport came to a halt and we all got stuck on the freeway for hours. The websites of these transport organizations were unavailable due to heavy traffic. As a result
people were unable to get information about their travel and didn’t know what to
do.
The websites were overloaded with traffic from people looking to see if their flight
was still going, rebooking their cancelled flight, checking the times of delayed
trains, looking for traffic jams before they left or seeing what roads were closed
down due to ice and snow. All this information became unavailable. But what could
help find a solution to this problem?
The websites were unavailable because they were not prepared for that amount of traffic. They didn’t have enough bandwidth and servers available; they were not scaled for that large a number of requests. If the websites had been in a cloud environment they could have had that scalability, which would have allowed them to add more resources when the available capacity reached a critical low.
This not only would have helped this past winter, but also during the ash cloud in
Western Europe when airline web traffic was overloaded due to the eruption of the
Eyjafjallajökull volcano. A cloud offers websites the needed scalability!
All These Risks: Is the Business Even Ready for the Cloud?
The cloud will keep the promise IT has made to the business. It will support the business
and create Business Technology. But with all these risks, is the business even ready for the
cloud? The cloud is not a magic solution with which IT can achieve this BT on its own. The business needs to support the move to using the cloud. And support means not only permitting
cloud use, but also actively embracing it as a change in the business mindset.
Services will be standardized with fewer unnecessary or less commonly used features. There
should be a strategy as to which data and services are stored in the cloud, what type of cloud deployment model is used for which services or data, and what the non-functional requirements of the services are. As a business manager you need to know what the requirements and usage capacity of the cloud are. You’re not going to the cloud just because there’s a lot
of hype about it, but because it serves a need: a need that is greater than merely reducing
the Total Cost of Ownership. The cloud serves the larger demand for greater flexibility and
faster time-to-market!
References
[Aalst, 1999] L. van der Aalst and C. de Koning, Testing expensive? Not testing is more expensive!, Informatie, October 1999, www.tmap.net
[Bardin, 2009] J. Bardin, J. Callas, S. Chaput, P. Fusco, F. Gilbert, C. Hoff, D. Hurst, S. Kumaraswamy, L. Lynch, S. Matsumoto, B. O’Higgins, J. Pawluk, G. Reese, J. Reich, J. Ritter, J. Spivey, and J. Viega. Security guidance for critical areas of focus in cloud computing. Technical
report, Cloud Security Alliance, April 2009
[BDTM, 2008] TMap NEXT: Business Driven Test Management, L. van der Aalst, R. Baarda,
E. Roodenrijs, J. Vink and B. Visser, 2008
[Black, 2002] R. Black, Keynote at Eurostar2002, Copenhagen
[Boehm, 1981] B.W. Boehm, (1981), Software Engineering Economics, Prentice-Hall Inc., Englewood Cliffs, isbn 0-13-822122-7
[BoozAllenHamilton, 2009] T. Alford and G. Morton. The Economics of Cloud Computing, 2009,
http://www.boozallen.com/media/file/Economics-of-Cloud-Computing.pdf
[Brodkin, 2008] J. Brodkin, Gartner: Seven cloud-computing security risks, Retrieved September
23, 2009, from Network World, from http://www.networkworld.com/news/2008/070208cloud.html
[Davis, 2010] Testing in Agile Software Development Environments with TMap NEXT®, C. Davis
and L. van der Aalst, 2010 (http://www.tmap.net/Images/Testing_in_Agile_Software_Development_Environments_with_TMap_NEXT_sept%202010_tcm8-59700.pdf)
[EuropeanCommission, 1995] 95/46/EC of the European Parliament and of the Council of 24
October 1995 on the protection of individuals with regard to the processing of personal data
and on the free movement of such data. Official Journal of the EC. 23
[Gartner, 2008] Assessing the Security Risks of Cloud Computing, Retrieved December 5, 2009,
http://www.gartner.com/DisplayDocument?id=685308
[Gillett, 2008] Cloud Computing is Hyped and Overblown, Forrester’s Frank Gillett.....Big Tech
Companies Have “Cloud Envy”, 2008
[Glauser, 2010] Cloud Washing 101 - The cloud marketing playbook, R. Glauser, http://servicenow.web5.hubspot.com/Blog/bid/42570/Cloud-Washing-101-The-cloud-marketing-playbook
[Hoenig, 2009] Connecting to the Cloud, G. Hoenig, 2009 (http://blogs.i365.com/cloud-connected-recovery/connecting-to-the-cloud/)
[IDC, 2009] IDC Cloud Services Forecast, F. Gens, October 5, 2009
[IDC, 2010] Defining “Cloud Services” and “Cloud Computing,” F. Gens, 2008
[IEEE, 1998] The Institute of Electrical and Electronics Engineers, Inc. (1998), IEEE Std 1028-1997
Standard for Software Reviews, 345 East 47th Street, New York, NY 10017-2394, USA, isbn
1-55937-987-1
[ISO, 1994] ISO 8402 (1994), Quality Management and Quality Assurance: Vocabulary, International Organization of Standardization
[ISO 9126-1, 1999] ISO/IEC 9126 part 1 (1999), Information Technology: Software Product Quality
– Part 1: Quality Model, International Organization of Standardization
[ISO/IEC, 1991] ISO/IEC Guide 2 (1991), General terms and definitions concerning standardization and related activities, International Organization of Standardization
[Kok, 2010] G. Kok, Cloud Computing & Confidentiality (2010), home.student.utwente.nl/
g.r.kok/.../CloudComputingAndConfidentiality.pdf
[MBT, 2006] Practical Model-Based Testing: A Tools Approach, M. Utting and B. Legeard, 2006,
Morgan Kaufmann; isbn-13: 978-0123725011
[McCall, 1977] J.A. McCall, P.K. Richards and G.F. Walters (1977), Factors in software quality,
RADC-TR-77-363 Rome Air Development Center, Griffis Air Force, Rome (New York, USA)
[NIST, 2009a] NIST/SEMATECH, e-Handbook of Statistical Methods, 2009, http://www.itl.nist.
gov/div898/handbook/eda/section2/eda23.htm
[NIST, 2009b] The NIST Definition of Cloud Computing, P. Mell and T. Grance, 2009
[Parkhill, 1966] The Challenge of the Computer Utility, D. Parkhill, 1966
[Pettey, 2010] Gartner Highlights Key Predictions for IT Organizations and Users in 2010 and
Beyond, Christy Pettey and Holly Stevens, January 23, 2010
[Ried, 2010] The Evolution Of Cloud Computing Markets, Stefan Ried, Ph.D., Holger Kisker,
Ph.D., Pascal Matzke, July 6, 2010
[SeizeTheCloud, 2011] Seize the Cloud, M. van den Berg and E. van Ommeren, March 2011,
Sogeti Group
[SMART, 1981] There’s a S.M.A.R.T. way to write management’s goals and objectives, G.T. Doran,
(1981). Management Review, Volume 70, Issue 11 (AMA FORUM), pp. 35-36
[Spirovski, 2010] Mitigating Risks of the IT Disaster Recovery Test, Retrieved April 1, 2010, from
http://www.shortinfosec.net/2010/03/mitigating-risks-of-it-disaster.html
[TestTopics, 2005] TMap Test Topics, T. Koomen and R. Baarda, 2005
[The Standish Group, 2011] The Standish Group (2011), CHAOS Manifesto: The Laws of CHAOS
and the CHAOS 100 Best PM Practices, The Standish Group International, Boston, MA
[TMap, 2002] Software testing: a guide to the TMap approach, M. Pol, R. Teunissen and E. van
Veenendaal; isbn 0-201-74571-2
[TMapNEXT, 2006] T. Koomen, L. van der Aalst, B. Broekman and M. Vroon (2006), TMap Next:
for resultdriven testing, ’s-Hertogenbosch: Tutein Nolthenius Publishers, isbn 90-72194-80-2
[TMapSOA, 2008] TMap SOA Model, S. Hoevenaars, J. van Lieshout, J. Berends and J. Doorman, 2008
[Wikipedia] Wikipedia, http://en.wikipedia.org/wiki/Main_Page
[Zech, 2010] Risk-Based Security Testing in Cloud Computing Environments, P. Zech, 2010
Index
A
acceptance environment 119
acceptance test 38
accountability 105
adaptability 103
agile 18, 107
availability 101
availability testing 102
B
BDTM 17, 78, 79
business case 13, 83
business driven test management (BDTM) 78, 79
Business Technology 18, 47, 55, 109
C
cloud:
characteristics 48
cost reductions 52
layers 58
performance 147
cloud applications 108
cloud business model 48, 78
cloud computing:
definition 47, 48
history 12
cloud deployment models 127
Cloud Era 11, 13, 47
cloud infrastructure 51, 97
accountability 105
adaptability 103
agile approach 107
availability 101
functional testing 105
non-functional quality attributes 101
reliability 103
scalability 101
security 104
cloud provider 59, 87, 97, 101, 123, 139, 147
cloud risk 87, 121, 123
cloud risk analysis (CRA) 86
product risk analysis vs. ~ 91
steps 90
cloud service models 59
cloud services 48
definition 48
cloud test strategy 78, 94
cloud washing 50
community cloud 127
compliance 123
D
data privacy 123
Data Protection Directive 124, 125
development environment 118
development test 38
Disaster Recovery Plan (DRP) 141, 142
Disaster Recovery Test 145
DTAP model 118
E
elasticity 49, 104, 148
environment 52
evaluation 24, 35
G
Green IT 52, 121
H
hybrid cloud 127
I
IaaS 58, 59
infrastructure 51, 63, 97
traditional 51, 98
Infrastructure as a Service (IaaS) 59
L
load testing 148
location-independent access 52, 54, 65
M
Managed Testing Services (MTS) 63
Model Driven Quality Improvement 113, 114
N
non-functional requirements 111, 112
O
on-demand 49, 51, 54, 61, 77
P
PaaS 58, 59
pay-per-use 49, 61, 77, 105
performance test 68, 92, 147
Platform as a Service (PaaS) 59
PointZERO® 19, 111
private cloud 127, 140
process-cycle test 110
program interface test 106
public cloud 127, 140
Q
quality assurance 29
quality attributes:
non-functional 101
Quality Gate 117
quality management 29
R
real-life test 103
recovery testing 141
regression 40
regression test 40, 41
reliability 103, 141
reliability testing 102
requirements:
non-functional 111, 112
resource pooling 49
Responsibility Matrix 128
risks 26, 86, 121, 123
S
SaaS 58, 59
integrating SaaS services 116
Safe Harbor Principles 124
scalability 101
security 104, 123
responsibility for ~ 128
security issues 123
security testing 139
self-service 49
service 47, 57
Software as a Service (SaaS) 59
Software Testing as a Service (STaaS) 58, 60, 61
STaaS 58, 60
benefits 74
challenges 75
drivers for adoption 60
governance model 65
process 61
real-enough-time 62, 63
real-time 62
STaaS provider:
process model 69
services 67
system test 38
T
test basis 39
test design pattern 71
test environment 119
test goal 83
test goal table 85
test infrastructure 63, 117
testing:
~ and quality management 29
~ and system development process 33
benefits 28
definition 23
execution 30
levels and responsibilities 37
reasons for ~ 26
role of ~ 29
structured ~ 41
ways of ~ 32
test level 37, 39
test strategy 78, 94
test type 40
TMap 13, 15, 16
essentials 13
history 15
traditional application 84, 111
traditional infrastructure 51, 98
V
virtualization 49
V model 33, 34, 37
W
Work Package broker 63