Transcription
The Evolution of the System
of Radiological Protection:
The Justification for New ICRP
Recommendations
Roger H Clarke
Chairman ICRP
Chilton, Didcot
Oxon
OX11 0RQ
Abstract
The present ICRP recommendations were initiated by Publication 60 in 1990 and
have been complemented by additional publications over the last twelve years. It
is now clear that the Commission needs to summarise the totality of the numerical
values that it has recommended in some ten reports. This has been done in this
paper and, from these values, a way forward is indicated towards a simplified and
more coherent statement of protection philosophy for the start of the 21st century.
A radical revision is not envisaged; rather, a coherent statement of current
policy and a simplification in its application. The Commission plans to publish
the new Recommendations in 2005.
1. Introduction
The 1990 system of protection, set out in Publication 60, was developed over
some 30 years. During this period, the system became increasingly complex as the
Commission sought to reflect the many situations to which the system applied. This
complexity involved the justification of a practice, the optimization of protection,
including the use of dose constraints, and the use of individual dose limits. It has
also been necessary to deal separately with endeavours prospectively involving
radiation exposure, ‘practices’, for which unrestricted planning was feasible for
reducing the expected increase in doses, and existing situations for which the only
feasible protection action was some kind of ‘intervention’ to reduce the doses. The
Commission also considered it necessary to apply the recommendations in different
ways to occupational, medical, and public exposures. This complexity is logical, but it
has not always been easy to explain the variations between different applications.
The Commission now strives to make its system more coherent and comprehensible,
while recognising the need for stability in international and national regulations, many
of which have relatively recently implemented the 1990 Recommendations. However,
new scientific data have been produced since 1990 and there are developments in
societal expectations, both of which will inevitably lead to some changes in the
formulation of the Recommendations.
The previous 1977 Recommendations were made in Publication 26, which established
the three principles of the system of dose limitation as Justification, Optimization and
Limitation. Assessments of the effectiveness of protection can be related to the source
that gives rise to the individual doses (source-related) or related to the individual
dose received by a person from all the sources under control (individual-related).
Optimization of protection is a source-related procedure, while the individual-related
dose limits provide the required degree of protection from all the controlled sources
(see Figure 1).
Optimization of protection was to be applied to a source in order to ensure that
doses are ‘as low as reasonably achievable, economic and social factors
being taken into account’, and decision-aiding techniques were proposed. In particular,
the Commission recommended cost-benefit analysis as a procedure to address the
question, ‘How much does it cost and how many lives are saved?’ The Commission
recommended that the quantity Collective Dose should be used in applying those
optimization techniques to take account of the radiation detriment attributable to the
source in question. This quantity was unable to take account of the distribution of
the individual doses attributable to the source. Attempts were made to address this
problem in Publications 37 and 55 by suggesting a costing of unit collective dose that
increased with the individual dose received, but the procedure was essentially never
adopted internationally.
The individual is protected from ALL
regulated sources by the DOSE LIMIT
All individuals are protected from a
SINGLE source by the DOSE CONSTRAINT
Figure 1. Source-related and Individual-related criteria
2. The 1990 and Subsequent Recommendations
The issue was partially resolved in the 1990 Recommendations: while it was still
stated, as in 1977, that in relation to any particular source within a practice, the doses
should be as low as reasonably achievable, social and economic factors being taken
into account, it then continued:
‘This procedure should be constrained by restrictions on the doses to individuals
(dose constraints), or the risks to individuals in the case of potential exposures
(risk constraints), so as to limit the inequity likely to result from the inherent
economic and social judgements’ (Paragraph 112).
The concept of the constraint has not been clearly explained by the Main Commission
in its subsequent publications. It has not been understood and, although it has been
the subject of debate by international bodies, it has not been sufficiently utilised nor
has it been implemented widely. The Commission now aims to clarify the meaning
and use of the constraint.
The dose constraint was introduced because of the need to restrict the inequity of any
collective process for offsetting costs and benefits when this balancing is not the same
for all the individuals affected by a source. Before 1990, the dose limit provided this
restriction, but in Publication 60 the definition of a dose limit was changed to mean
the boundary above which the consequential risk would be deemed unacceptable. A
limit so defined was then considered inadequate as the restriction on the optimization
of protection, and lower-valued constraints were required to achieve this restriction.
This introduction of the constraint recognised the importance of restricting the
optimization process with a requirement to provide a basic minimum standard of
protection for the individual from the source under consideration.
The principles for intervention set out in Publication 60 are expressed in terms
of a level of dose or exposure at which intervention is almost certainly warranted
(i.e., justified), which is followed by a requirement to maximise the benefit of the
intervention (i.e., the protection level should be optimized). This is effectively an
optimization process and therefore it may be seen in exactly the same terms as for
practices, i.e. there is a restriction on the maximum individual dose and then the
application of the optimization process that is itself expected to lead to lower doses
to individuals.
It can be seen then that all of the Commission Recommendations since 1990, both
for practices and for interventions, have been made in terms of an initial restriction
on the maximum individual dose in the situation being considered, followed by a
requirement to optimize protection. This underlines the shift in emphasis to include
the recognition of the need for individual protection from a source.
Since the 1990 recommendations there have been nine publications that have provided
additional recommendations for what are effectively to be regarded as ‘constraints’ in
the control of exposures from radiation sources. When ICRP 60 is included, there
exist nearly 30 different numerical values for ‘Constraints’, listed in Table 1, in the
ten reports that define current ICRP recommendations. Further, the numerical values
are justified in some six different ways, which include:

• Individual annual fatal risk,
• Upper end of an existing range of naturally occurring values,
• Multiples or fractions of natural background,
• Formal cost-benefit analysis,
• Qualitative, non-quantitative, reasons, and
• Avoidance of deterministic effects.
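The first of these bases, individual annual fatal risk, ties a dose criterion to risk through a nominal fatality coefficient. The following is only a rough sketch of that arithmetic: the coefficient value (taken here as the ICRP Publication 60 nominal fatal cancer coefficient of about 5 x 10^-2 per Sv for public exposure) and the function name are assumptions of this example, not values stated in this paper.

```python
# Illustrative sketch only: relate an annual effective dose to an annual
# fatal risk via a nominal fatality coefficient. The value used here,
# ~5e-2 per Sv (ICRP Publication 60, public exposure), is an assumption
# of this sketch.

NOMINAL_FATAL_RISK_PER_SV = 5.0e-2  # nominal fatal cancer risk coefficient

def annual_fatal_risk(dose_msv_per_year):
    """Annual fatal risk implied by an annual effective dose (mSv/a)."""
    return (dose_msv_per_year / 1000.0) * NOMINAL_FATAL_RISK_PER_SV

# The 1 mSv/a public dose limit then corresponds to a risk of ~5e-5 per year,
# and a 1e-5 per year risk constraint to ~0.2 mSv/a.
print(annual_fatal_risk(1.0))
print(1.0e-5 / NOMINAL_FATAL_RISK_PER_SV * 1000.0)
```

On this reading, criteria justified by basis (a) and criteria quoted directly in mSv/a are two expressions of the same scale.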
The new recommendations should be seen, therefore, as extending the
recommendations in Publication 60 and those published subsequently, to give a
single unified set that can be simply and coherently expressed. The opportunity is also
being taken to include a coherent philosophy for natural radiation exposures and to
introduce a clear policy for radiological protection of the environment.
The question to be addressed is whether, for the future, fewer constraints may be
recommended that are sufficient to encompass the needs of radiological protection,
and whether they can be established on a more uniform and consistent basis.
Table 1. Compilation of the Existing ICRP ‘Constraints’ to Optimization

Effective Dose* | Basis+ | Situation to which it applies | Publication

NORMAL OPERATION OF A PRACTICE
~0.01 mSv/a | a, c | Exemption level, protection optimized |
0.1 mSv/a | e | Constraint for long-lived nuclides | 82
0.3 mSv/a | e | Maximum public constraint | 77
20 mSv/a | a | Maximum worker constraint | 60, 68
10 mSv/a (1500 Bq m-3) | b | Worker constraint for Rn-222 – optimized level between 500-1500 Bq m-3 | 65
2 mSv | e | Surface of the abdomen of pregnant worker | 60
1 mSv | a, c | Foetal dose over remaining term of pregnancy | 75
1 mSv/a | a, c | Dose limit for the public | 60
 | | Potential exposures | 64, 76

PROLONGED EXPOSURE
~10 mSv/a | c | Below this, intervention is optional but not likely to be justifiable | 82
~100 mSv/a | c, f | Intervention almost always warranted | 82
10 mSv/a (600 Bq m-3) | b | Constraint for Rn-222 at home – optimized level 200-600 Bq m-3 | 65
~1 mSv/a | c | Intervention exemption level, protection optimized | 82
10-5 /a | a | Risk constraint | 81

BIOMEDICAL RESEARCH
0.1 mSv | a | Minor level of societal benefit | 62
1.0 mSv | a | Intermediate level of societal benefit | 62
10.0 mSv | a | Moderate level of societal benefit | 62
> 10.0 mSv | a | Substantial level of societal benefit | 62

SINGLE EVENTS AND ACCIDENTS (Effective Dose* Averted)
50 mSv | e, c | Sheltering warranted – optimized 5-50 mSv | 63
500 mSv (5000 mSv skin) | e, c | Evacuation warranted – optimized 50-500 mSv | 63
5000 mGy thyroid | e, c | Issue stable iodine – optimized 50-500 mSv | 63
1000 mSv | d, a | Arrange relocation (~10s of mSv per month) | 63, 82
1000 mSv (5000 mSv skin) | f | Constraint for planned emergency work | 63
10 mSv; 100 Bq/g (α), 10000 Bq/g (β/γ) | c, d | Optimized value for foodstuffs 10-100 Bq/g (α), 1000-10000 Bq/g (β/γ) | 63

* Unless otherwise stated.
+ a. individual annual fatal risk; b. upper end of an existing range of naturally occurring values; c. multiples or fractions of natural background; d. formal cost-benefit analysis; e. qualitative, non-quantitative, reasons; f. avoidance of deterministic effects.
3. The 2005 System of Protection
The primary aim of the Commission continues to be contributing to the establishment
and application of an appropriate standard of protection for human beings and
now explicitly for other species. This is to be achieved without unduly limiting
those desirable human actions and lifestyles that give rise to, or increase, radiation
exposures.
This aim cannot be achieved solely on the basis of scientific data, such as those
concerning health risks, but must include consideration of the social sciences.
Ethical and economic aspects have also to be considered. All those concerned with
radiological protection have to make value judgements about the relative importance
of different kinds of risk and about the balancing of risks and benefits. In this, they are
no different from those working in other fields concerned with the control of hazards.
The restated recommendations will recognise this explicitly.
The Commission now recognises that there is a distribution of responsibilities for
introducing a new source leading to exposures, which lies primarily with society at
large, but is enforced by the appropriate authorities. This requires application of the
principle of JUSTIFICATION, so as to ensure an overall net benefit from the source.
Decisions are made for reasons that are based on economic, strategic, medical, and
defence, as well as scientific, considerations. Radiological protection input, while
present, is not always the determining feature of the decision and in some cases plays
only a minor role. The Commission now intends to apply the system of protection to
practices only when they have been declared justified, and to natural sources that are
controllable.
The justification of patient diagnostic exposures is included, but has to be treated
separately in the recommendations, because it involves two stages of decision-making.
Firstly, the generic procedure must be justified for use in medicine and,
secondly, the referring physician must justify the exposure of the individual patient
in terms of the benefit to that patient. It is then followed by a requirement to optimize
patient protection and the Commission has advocated the specification of Diagnostic
Reference Levels as indicators of good practice.
Where exposures can be avoided, or controlled by human action, there is a
requirement to provide an appropriate minimum, or basic, standard of protection both
for the exposed individuals and for society as a whole. There is a further duty, even
from small radiation exposures with small risk, to take steps to provide higher levels
of protection when these steps are effective and reasonably practicable. Thus, while
the primary emphasis is now on protection of individuals from single sources, it is
then followed by the requirement to optimize protection to achieve doses that are as
low as reasonably achievable.
In order to achieve this, it is proposed that the existing concept of a constraint
be extended to embrace a range of situations, giving the levels that bound the
optimization process for a single source. The optimization of protection from the
source may involve either, or both, the design of the source or modification of the
pathways leading from the source to the doses in individuals. Since there would then
be no need to distinguish intervention situations separately, the extended constraints
would replace a range of terms that includes intervention levels, action levels,
constraints, clearance levels and exemption levels, as well as the dose limits for
workers and the public.
The system of protection being developed by the Commission is based upon the
following principles, which are to be seen as a natural evolution of, and as a further
clarification of, the principles set out in Publication 60. Once the source is justified by
those appropriate authorities, the radiological principles may be expressed as,
For each individual, there is a limit to the acceptable risk imposed by all
regulated practices
- LIMITS
For each source within a practice, basic standards of protection are applied
for the most exposed individuals, which also protect society
- CONSTRAINTS
If the individual is sufficiently protected from a source, then society is also protected
from that source.
However, there is a further duty to reduce doses, so as to achieve a higher
level of protection when feasible and practicable. This leads to authorised
levels
- OPTIMIZATION
These constraints or basic levels of protection can be recommended by ICRP and
accepted internationally. The responsibility for optimization then rests with the
operators and the appropriate national authority. The operator is responsible for
day-to-day optimization and also for providing input to the optimization that will
establish Authorised Levels for the operation of licensed practices. These levels will,
of necessity, be site and facility dependent and beyond the scope of ICRP.
4. The Choice of New Constraints
The Commission now considers the starting point for selecting the levels at which
any revised constraints are set is the concern that can reasonably be felt about the
annual effective dose from natural sources. The existence of natural background
radiation provides no justification for additional exposures, but it can be a benchmark
for judgement about their relative importance. The worldwide average annual
effective dose from all natural sources, including radon, as reported by UNSCEAR
is 2.4 mSv.
The challenge is whether fewer numbers could replace the 20-30 numerical values
for constraints currently recommended. Further, could they also be more coherently
explained in terms of multiples and fractions of natural background?
A smaller number of constraints can be chosen from amongst the current numerical
values so as to ensure continuity. They can be explained in terms of multiples or
fractions of background, which achieves simplicity. Finally they are to be seen as a
necessary, but not sufficient condition for protection, which requires optimization.
The resulting suggested scheme is shown in Figure 2.
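The benchmarking idea sketched above is simple arithmetic: express a candidate constraint as a multiple or fraction of the UNSCEAR worldwide average background of 2.4 mSv/a. A minimal sketch follows; the function name and the choice of example values are assumptions of this illustration, not part of the proposed scheme.

```python
# Illustrative sketch: express dose criteria as multiples or fractions of the
# UNSCEAR worldwide average natural background of 2.4 mSv/a (all sources,
# including radon). The function and example values are assumptions here.

NATURAL_BACKGROUND_MSV_PER_A = 2.4  # UNSCEAR worldwide average annual dose

def background_multiple(dose_msv_per_a):
    """Annual effective dose expressed as a multiple of natural background."""
    return dose_msv_per_a / NATURAL_BACKGROUND_MSV_PER_A

# A few of the existing constraint values from Table 1 (mSv/a):
for dose in (0.01, 0.3, 1.0, 20.0):
    print(f"{dose:6.2f} mSv/a is {background_multiple(dose):7.4f} x background")
```

The exercise shows why the existing values span several orders of magnitude on this scale, from well under a hundredth of background up to roughly ten times it.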
5. Optimization of Protection
The Commission wishes to retain the phrase ‘Optimization of protection’ and applies
it both to single individuals and to groups. However, it is applied only after
meeting the restrictions on individual dose defined by the relevant constraint. It is
now used as a short description of the process of obtaining a level of protection from
a single source that is as low as reasonably achievable.
Figure 2. Proposed constraints to the process of optimization of protection.
The Commission stated in Publication 77 that the previous procedure had become
too closely linked to formal cost-benefit analysis. The product of the mean dose and
the number of individuals in a group, the collective dose, is a legitimate arithmetic
quantity, but is of limited utility since it aggregates information excessively. For
making decisions, the necessary information should be presented in the form of a
matrix, specifying the numbers of individuals exposed to a given level of dose and
when it is received. This matrix should be seen as a ‘decision-aiding’ technique that
allows different weightings of their importance to be assigned to individual elements
of the matrix. The Commission intends that this will avoid the misinterpretation of
collective dose that has led to seriously misleading predictions of deaths.
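The arithmetic of collective dose, and the kind of dose-distribution matrix the Commission describes, can be sketched as follows. The matrix layout (dose bands by time of receipt) and all of the numbers are invented for illustration; nothing here is a prescribed ICRP format.

```python
# Illustrative sketch: collective dose and a simple dose-distribution matrix.
# The matrix layout (dose bands x time periods) and all values are invented
# for this example.

def collective_dose(mean_dose_msv, n_people):
    """Collective dose (person-mSv) = mean individual dose x number exposed."""
    return mean_dose_msv * n_people

# Rows: representative individual doses (mSv); columns: when the dose is received.
dose_bands = [0.1, 1.0, 10.0]
periods = ["0-1 a", "1-10 a", ">10 a"]
people = [                 # numbers of individuals per (dose band, period) cell
    [10000, 5000, 1000],
    [500,   200,  50],
    [10,    5,    1],
]

# Aggregating the matrix into one number reproduces the collective dose,
# but discards the distribution information that the matrix preserves.
total = sum(
    collective_dose(dose, n)
    for dose, row in zip(dose_bands, people)
    for n in row
)
print(f"aggregated collective dose: {total:.0f} person-mSv")
```

The point of the matrix form is visible in the example: the single aggregate hides whether the detriment comes from many people at trivial doses or a few people at substantial ones.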
The concept of collective dose was also previously used as a means of restricting
the uncontrolled build-up of exposure to long-lived radionuclides in the environment
at a time when it was envisaged that there would be a global expansion of nuclear
power reactors and associated reprocessing plants. Restriction of the collective dose
per unit of practice can set a maximum future global per caput annual effective dose
from all sources under control. If, at some point in the future, a major expansion of
nuclear power were to occur, then some re-introduction of a procedure may have to be
considered to restrict a global build-up of per-caput dose.
The process of Optimization may now be expressed in a more qualitative manner.
On a day-to-day basis, the operator is responsible for ensuring the optimum level of
protection and this can be achieved by all those involved, workers and professionals,
always challenging themselves as to whether protection can be improved. Optimization
is a frame of mind, always questioning whether the best has been done in the prevailing
circumstances. For the more formal authorisations, which are decided by the regulator
in conjunction with the operator, they may in future best be carried out by involving
all the bodies most directly concerned, including representatives of those exposed, in
determining, or in negotiating, the best level of protection in the circumstances. It is
to be decided how the Commission’s recommendations will deal with this degree of
societal process. However, the result of this process will lead to the authorised levels
applied by the Regulator to the source under review.
6. Exclusion of Sources and Exposures
The Commission intends its system of protection to apply to the deliberate introduction
of a new controllable source or the continued operation of a controllable source that
has deliberately been introduced, i.e. a practice, and to controllable natural sources.
Its Recommendations can then be applied to reduce doses, when either the source or
the pathways from the source to the exposed individuals can be controlled by some
reasonable means. Sources that do not fall within this definition of controllable are
excluded from regulatory control. There are sources for which the resulting levels of
annual effective dose are very low, or for which applying controls would be so
difficult and expensive, that protection is already optimized and the sources are
therefore excluded.
In its restated policy the Commission defines what sources and exposures are to
be excluded from the system of protection and will not use the term ‘exemption’.
Exemption or clearance is seen as a regulatory decision that is applied to
non-excluded sources by the appropriate regulatory body. That body has the responsibility
for deciding when radioactive material is to be released from its control, which is in
effect an ‘Authorised Release’ no different from that specified for effluent discharges
after application of the optimization process.
Apart from these exclusions, the Commission has aimed to make its recommendations
applicable as widely and as consistently as is possible, irrespective of the origin of the
sources. The Commission’s recommendations thus will now cover exposures to both
natural and artificial sources, so far as they are controllable.
7. Radiological Protection of the Living Environment
The current ICRP position regarding protection of the environment is set out in
Publication 60: “The Commission believes that the standards of environmental
control needed to protect man to the degree currently thought desirable will ensure
that other species are not put at risk.” Up until now, the ICRP has not published any
recommendations as to how protection of the environment should be carried out. The
Commission has recently adopted a report dealing with environmental protection.
This report addresses the role that ICRP could play in this important and developing
area, building on the approach that has been developed for human protection and on
the Commission’s specific area of expertise, namely radiological protection.
The Commission has decided that a systematic approach for radiological assessment
of non-human species is needed in order to provide the scientific basis to support
the management of radiation effects in the environment. The proposed system
does not intend to set regulatory standards. The Commission rather recommends a
framework that can be a practical tool to provide high-level advice and guidance and
help regulators and operators demonstrate compliance with existing legislation. The
system does not preclude the derivation of standards; on the contrary, it provides a
basis for such derivation.
8. Some Outstanding Issues and Proposed Timescales
The Main Commission is preparing a number of supporting documents on which the
main recommendations will draw. These include summaries of the health effects of
radiation at low doses and the review of RBE values, which together will lead to a
document on the decision for revised radiation and tissue weighting factors. Other
major issues which are under development and need further discussion are:

• Exploration of the possibility of specifying fewer numerical constraints
than presently exist, and whether they can be more coherently explained;
• Clarification of the Exclusion concept and further elaboration of the
observation that all releases from regulatory control are ‘Authorised
releases’;
• A review of the ‘critical group’ concept as used to represent the
hypothetical individual, which ICRP has not addressed since well before
the 1990 recommendations; and
• Development of methods by which the optimization of protection can
realistically be achieved.
The intention is to have draft recommendations prepared for discussion with the four
Committees late in 2003 so that a well-developed draft is available for the IRPA 11
Congress in May 2004. It is planned to produce the final version in 2005.
Bibliography
ICRP Publication 26. Recommendations of the ICRP, Annals of the ICRP, Vol. 1 No
3 (1977)
ICRP Publication 37. Cost-Benefit Analysis in the Optimization of Radiation
Protection, Annals of the ICRP, Vol. 10 No 2-3 (1983)
ICRP Publication 55. Optimization and Decision-Making in Radiological Protection,
Annals of the ICRP, Vol. 20 No 1 (1989)
ICRP Publication 60. 1990 Recommendations of the ICRP, Annals of the ICRP, Vol.
21 No 1-3 (1991)
ICRP Publication 62. Radiological Protection in Biomedical Research, Annals of the
ICRP, Vol. 22 No 3 (1993)
ICRP Publication 63. Principles for Intervention for Protection of the Public in a
Radiological Emergency, Annals of the ICRP, Vol. 22 No 4 (1993)
ICRP Publication 64. Protection from Potential Exposure: A Conceptual Framework,
Annals of the ICRP, Vol. 23 No 1 (1993)
ICRP Publication 65. Protection Against Radon-222 at Home and at Work, Annals of
the ICRP, Vol. 23 No 2 (1994)
ICRP Publication 75. General Principles for the Radiation Protection of Workers,
Annals of the ICRP, Vol. 27 No 1 (1997)
ICRP Publication 76. Protection from Potential Exposures: An Application to
Selected Radiation Sources, Annals of the ICRP, Vol. 27 No 2 (1997)
ICRP Publication 77. Radiological Protection Policy for the Disposal of Radioactive
Waste, Annals of the ICRP, Vol. 27 Supplement (1998)
ICRP Publication 81. Radiation Protection Recommendations as Applied to the
Disposal of Long-lived Solid Radioactive Waste, Annals of the ICRP, Vol. 28 No 4
(1998)
ICRP Publication 82. Protection of the Public in Situations of Prolonged Radiation
Exposure, Annals of the ICRP, Vol. 29 No 1-2 (1999)
ICRP Publication zz. Relative Biological Effectiveness (RBE), Radiation Weighting
Factor (wR), and Quality Factor (Q), Annals of the ICRP (2003a) In press.
ICRP Publication xx. A Framework for Assessing the Impact of Ionizing Radiation on
Non-Human Species. Annals of the ICRP. (2003b) In press.
Risk as continuum: a redefinition of risk
for governing the post-genome world
Denise Caruso
The Hybrid Vigor Institute
San Francisco
UNITED STATES
Abstract
The achievements of the past 30 years in molecular biology have produced an
unprecedented volume of genetic material, information and experimental activity.
During that period, the products of genomic biology writ large have gradually
become an integral part of the Zeitgeist - genes themselves, genetically modified and
engineered organisms in drugs and food, databases filled with the genetic identities
of millions of people, and ever-cheaper and more powerful technology to deconstruct
and analyze them. Not only do these achievements stand to permanently alter our
notions of human autonomy, the natural environment and health but, perhaps most
fundamentally, they force us to reconsider our definition and perception of public
risk.
Some of the potential scientific risks are being considered for the first time in history,
such as the exposure of humans and the environment to genetic pollution from
modified living organisms, and engineering of the human germline cells which pass
along our hereditary traits to future generations. Some of the social and economic risks
may have equally far-reaching consequences: the theft of genetic resources, ongoing
controversies over the patenting of genes and other living materials, the privacy and
civil liberties risks of compiling DNA databanks, the economic risks of commodifying
living organisms.
Protecting the public from undue risk is the job of governance. The degree to which
the scientific risks posed by the products of genomic biology are still largely unknown
- and the social and cultural risks mostly unacknowledged - has magnified many of
the shortcomings of present-day government oversight, laws and regulations in areas
where science and technology meet public interest. At the heart of these shortcomings
is the fact that the post-genome world lacks a transparent framework for risk and its
regulation that includes the input of all knowledgeable stakeholders affected by these
decisions, while simultaneously encouraging responsible technological and economic
development.
This argument for redefining risk includes a discussion of the risk of reliance on the
data and agendas of scientists and the risk of oversimplifying this debate. We then
present a series of perspectives from various disciplines on possible ways to mitigate
these risks, closing with a discussion of some new and potentially useful approaches
and methods for opening the process of governance to include the full complement of
stakeholders who are affected by decisions made on their behalf.
Risk Perception is not what it seems:
the Psychometric Paradigm Revisited1
Lennart Sjöberg
Center for Risk Research
Stockholm School of Economics
Box 6501
113 83 STOCKHOLM
SWEDEN
E-mail: [email protected]
Abstract
Risk perception has become an important topic to policy makers concerned with
technology and the environment, and the psychological analysis of this construct has
attracted much interest. Psychological research on risk perception has been dominated
by the psychometric paradigm which has been fruitful in bringing up important issues
in research. Yet, most of the conclusions reached in the paradigm are not sufficiently
well based on empirical data and appropriate analyses. Results are presented here
which show the prevalence of risk for energy attitudes, the importance of Tampering
with Nature as a new risk dimension accounting for much of the perceived risk of
nuclear waste, widely different levels but similar correlational structures of risk
perception data for experts and the public, moderately strong correlations between
perceived risk and trust (especially specific trust rather than general trust), and
demand for risk mitigation being related most strongly to seriousness of consequences
of a hazard, not the risk of an accident or the riskiness of the activity. Risk perception
is related to conceptions of knowledge which stress the limits of science and different,
New Age type, ways of knowing. Finally, interest emerged as an important predictor
of demand for risk mitigation. A conceptualization of the risk perceiver, based on
these results, is briefly discussed.
Introduction
Risk is an important aspect of attitudes to technology. Consider the following results,
obtained in a recent study. Respondents (N=294) judged their global attitude, on a
7-step good-bad scale, to 18 technologies. These ratings were regressed on 5 independent
variables, also rated: risk, benefit, voluntariness, possibility to protect
oneself against the risk, and whether the technology was replaceable. Results are
provided in Table 1. The table shows that these simple models fitted fairly well to the
attitude data, with a mean proportion of explained variance of 0.407. The standardized
regression weights show that risk was the most important explanatory variable, with
benefit as the second most important variable. Voluntariness and protection possibility
played a very marginal role.
1. Paper supported by a grant from the Bank of Sweden Tercentenary Fund, project
”Neglected Risks”.
Table 1. Results from regression analyses of global attitude (reversed
scoring) towards technologies. Entries are standardized regression
coefficients (in the original, the largest value in each row was set in
bold face); the last column is the adjusted squared multiple correlation.

Technology                            Volunt.   Protect.  Replace.   Benefit     Risk      R²adj
 1. Commercial passenger aviation     -0.089    -0.082     0.165**   -0.377***   0.197**   0.274
 2. Car ferries                        0.032    -0.067     0.036     -0.300***   0.385***  0.306
 3. Micro wave ovens                   0.036     0.007     0.051     -0.336***   0.383***  0.404
 4. Alcohol                            0.076    -0.125**   0.231***  -0.270***   0.344***  0.416
 5. Genetically modified food         -0.033    -0.037     0.152**   -0.260***   0.478***  0.489
 6. Pesticides                        -0.020     0.029     0.261***  -0.171***   0.500***  0.553
 7. X-ray diagnostics                  0.000    -0.013     0.194***  -0.021      0.447***  0.276
 8. High speed train                  -0.046    -0.110*    0.068     -0.302***   0.456***  0.394
 9. The Internet                      -0.056    -0.108*    0.189**   -0.324***   0.281***  0.369
10. Cellular telephones               -0.051    -0.002     0.117*    -0.421***   0.269***  0.373
11. e-mail                            -0.113*   -0.078     0.108*    -0.499***   0.264***  0.456
12. Nuclear power                      0.109**   0.063     0.246***  -0.291***   0.378***  0.647
13. Wind power                         0.015     0.019     0.242***  -0.467***   0.248***  0.549
14. Hypertension medicine              0.045     0.062     0.345***  -0.186***   0.252***  0.325
15. Private automobile
    satellite navigation               0.048     0.052     0.040     -0.455***   0.310***  0.399
16. Personal ID numbers                0.013     0.010     0.324***  -0.212***   0.398***  0.518
17. Heart transplants                 -0.043    -0.021     0.185**   -0.369***   0.107*    0.228
18. Television                         0.067     0.050     0.288***  -0.266***   0.229***  0.341

Volunt. = Voluntariness; Protect. = Protection possibility; Replace. = Replaceability.
* p<0.05, ** p<0.01, *** p<0.001
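The standardized regression weights reported in Table 1 are ordinary least-squares coefficients computed after z-scoring all variables. A minimal sketch of that computation on made-up data; the variable names and effect sizes below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 294  # respondents, as in the study

# Hypothetical ratings for ONE technology (the real data were 7-point scales)
risk = rng.normal(size=n)
benefit = rng.normal(size=n)
voluntariness = rng.normal(size=n)
protection = rng.normal(size=n)
replaceability = rng.normal(size=n)
attitude = 0.4 * risk - 0.3 * benefit + rng.normal(size=n)  # reversed scoring

def standardize(v):
    return (v - v.mean()) / v.std()

# Standardizing all variables first makes the OLS coefficients the
# standardized regression weights (beta weights) of Table 1; no intercept
# is needed because every standardized variable has mean zero.
X = np.column_stack([standardize(v) for v in
                     (voluntariness, protection, replaceability, benefit, risk)])
y = standardize(attitude)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(betas, 2))
```

Because every variable is on the same standardized scale, the coefficients are directly comparable in magnitude, which is what licenses statements such as "risk was the most important explanatory variable".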
Results like these establish risk perception as a variable of focal interest in socially and politically important attitudes, such as attitude to a technology. Several decades of psychological work have been devoted to the understanding of perceived risk. The received message from this work is called the psychometric paradigm. It has gained wide credibility and popularity. But what is the recipe for success in psychology and social science? History teaches us that success sometimes seems to have only a weak relationship to the actual empirical power of a model or a theory, and much more to its apparent power, which in turn can be created in many ways. A necessary condition is probably that there is a credible observational base. However, what is credible is in the eye of the beholder. In the present paper I will discuss a current case of a well-known approach, or paradigm to borrow Slovic's phrase, in risk perception research: the psychometric paradigm². I will show how its apparent credibility has been created, and present results that refute essential elements of the purported validity of the psychometric paradigm. First I give a brief historical exposé to provide the wider context in which this approach was established.
2. I will also use the term the psychometric model, referring to an essential element of the paradigm, viz. the set of qualitative risk characteristics and their relation to perceived risk.
Development of risk perception research
Risk perception appeared on the policy stage as a very important concept in the 1960s. It was implicated as a main determinant of public opposition to technology, most notably to nuclear technology, but other early examples can be given as well. In Sweden, parliamentarians now devote about three times as much attention to risk issues as they did in the first half of the 1960s, as reflected in their submitted private bills.
Attempts were made to handle the difficult situation that had arisen due to unexpected opposition to technology. Sowby suggested that risk comparisons be made. The risk of smoking is very much greater than the risk of living close to a nuclear power plant, according to nuclear experts, to take one example. People smoke, so why not accept nuclear power? Perhaps not very surprisingly, this approach had little immediate effect in making people willing to accept technology risks. Starr then investigated risks in some detail and found that society seemed to accept risks to the extent that they were associated with benefits, and were what he termed voluntary. Starr's work gave rise to much interest in risk management and an awakening of interest in the question of how people perceive, tolerate and accept risks. Risk perception came to be seen as an obstacle to rational decision making, because people tended to see risks where there were none, according to the experts. The conflict between expert and public risk perception is at the basis of the social dilemmas of risk management.
The Rasmussen report on nuclear risk in 1975 further stimulated the debate and polarized the camps, since the report claimed that the risks of nuclear power could be quantified scientifically and that they had been assessed as being very small, but not zero. Then the Three Mile Island (TMI) accident happened (1979), and later Chernobyl (1986). The political turmoil surrounding nuclear power and nuclear waste is still intense in many countries.
Meanwhile, in the 1970s, a small group of cognitive psychologists with a background in the experimental study of decision making became very interested in investigating how people react with regard to risks. One of the inputs to this work was experimental studies of lotteries and other forms of gambles; in this field, attempts have been made to define an abstract concept of risk and to measure it by means of psychological scaling procedures. This work says something about how people react to lotteries and similar gambles, but probably little or nothing about the risk policy questions that bother decision makers. Preferences regarding lottery gambles appear to be unrelated to just about everything else.
Another part of the input to the process came from work on subjective probability by Kahneman and Tversky. They found great differences between probabilities as given by the probability calculus and the intuitions people had about probabilities. Assuming that risk is about probability, it was tempting to conclude that this work had much to say about how people perceived and acted upon risk. However, no such evidence has ever been presented. Risk perception in a policy context is probably not a question of cognitive biases.
Returning to Starr and his concept of voluntary risk, a number of authors suggested variations on the theme in the 1970s. The need to do so was obvious, both because Starr seemed to have hit upon a goldmine full of interesting and important problems, and because his own solution was obviously in need of improvement. Surely, both a train passenger and a mountain climber are doing something which is "voluntary", more or less, and more obvious distinctions would concern concepts such as control. Several books on risk topics, such as the acceptability of risks, were published in the 1970s, suggesting interpretations of the Starr findings other than voluntariness. This work led to the development of the psychometric paradigm, to which I now turn.
The psychometric paradigm
In 1978, an important paper was published by Fischhoff et al. In this paper the authors had compiled 9 dimensions from the literature, including voluntariness, and asked people to rate the "risk" of a large number of activities on each of the dimensions. They computed mean ratings for each activity, or hazard, and then intercorrelated these means. The intercorrelations were factor analyzed and it was found that two major factors explained much of the variance: Dread and Novelty. Nuclear power was high on both, so here, apparently, was an explanation of why people were opposed to this technology! Fischhoff et al. also showed that perceived level of risk, and risk tolerance as rated by their respondents, could be well explained by the Dread and Novelty of the risks. Multiple correlations were of the order of 0.8. Later work with larger groups of subjects and more rating scales and hazards, and by several researchers, essentially replicated these initial findings. Some of the later work has been carried out in different countries, usually with student groups, and, again, on the whole confirmed the findings; see Boholm for a review.
Hence, the basic work in the psychometric paradigm has been replicated many times, and it has virtually always been possible to demonstrate that the factor structure is fairly invariant and that perceived risk is well accounted for by the factors. It is probably for these reasons, and their intuitively appealing contents, that the paradigm has been claimed to be "enormously successful". Many citations can be found, among them one in a book published by a Justice of the U.S. Supreme Court. These citations usually take the validity of the psychometric model for granted.
There are grounds for a cautious assessment of the paradigm and the model, however. The claim that Dread and Novelty account for virtually all of the variance of perceived risk is based on the analysis of means, with regressions calculated across hazards. However, when perceived risk is regressed on the psychometric factors across respondents, for one hazard at a time, the level of explained variance is typically about 20%, not 70-80%. Marris et al. have published intra-individual correlations among psychometric variables, i.e. correlations computed for each respondent across hazards. On the basis of their published results, the mean of the intra-individual correlations can be related to the correlations between the means. When this is done, it is seen that the mean intra-individual correlation was linearly related to the corresponding correlation between means, but was only about half of it. In terms of proportion of explained variance, there would therefore be a drop to about 1/4 when changing from correlations between means to mean intra-individual correlations, i.e. from 80% to 20%.
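The aggregation effect described here is easy to reproduce in a toy simulation: give every respondent the same weak hazard-level signal plus large individual noise, and the correlation between hazard means far exceeds the mean intra-individual correlation. All numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_resp, n_haz = 200, 18

# Hazard-level signal shared by all respondents
haz_x = rng.normal(0, 1, n_haz)
haz_y = 0.9 * haz_x + rng.normal(0, 0.3, n_haz)

# Individual ratings = shared hazard signal + large idiosyncratic noise
x = haz_x + rng.normal(0, 2.0, (n_resp, n_haz))
y = haz_y + rng.normal(0, 2.0, (n_resp, n_haz))

# Correlation between hazard MEANS (the 1978-style, aggregated analysis)
r_means = np.corrcoef(x.mean(axis=0), y.mean(axis=0))[0, 1]

# Mean intra-individual correlation across hazards (Marris-style analysis)
r_within = np.mean([np.corrcoef(x[i], y[i])[0, 1] for i in range(n_resp)])

print(round(r_means, 2), round(r_within, 2))
```

Averaging over respondents cancels the individual noise, so the means-based analysis can report a near-perfect correlation even when the model explains little of any single person's ratings.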
One can debate which is the most appropriate level of analysis. If, however, we are
interested in the processes of individual perception or in individual differences it is
clear that the appropriate analyses are those that give the lower levels of explained
variance. The correlation between means may be high, but that says nothing about
how well we can account for individual differences or the individual risk perception
process. When the latter are in focus, the psychometric paradigm clearly leaves
most of the variance unexplained. This is true for level of perceived risk, and even
more so for more policy-related variables such as demand for risk mitigation or risk
tolerance. Misleading conclusions from analyses at the aggregated level have long
been discussed in the social sciences and are termed the "ecological fallacy".
The fact that the factor analyses of the psychometric model show that the two major
factors account for about 70% of the variance is of course no evidence for those factors
accounting for a large share of the variance of perceived risk. This misunderstanding
occurs in the literature, however. Also, the fact that the factors are replicated (and that
they are so few) is probably mostly due to shared semantics of these words, and to the
fact that relatively few words are used, and sampled from a very restricted domain.
Even cross-cultural validation with mostly highly educated samples of respondents
does not prove that these are very important factors of perceived risk. Overlapping
semantics of basic vocabulary is commonly found.
In other papers, it was claimed that the model explained only lay risk perception, not the risk perception of experts. The latter were said to take into account only the facts, annual fatalities and the like. This, too, is a finding which is quite attractive and matches the preconceptions of many, perhaps especially of experts. However, there are several problems with the statement. The finding is based on a very small sample of expert respondents, only 15, and they made judgments of a very broad range of hazards, much broader than anyone can be a topical expert in. At any rate,
the concept of expert usually refers to someone who has advanced topical knowledge
in a given field, and it is clear that the group studied by Slovic et al. did not qualify
in this sense.
Data from a fairly large group of nuclear waste experts were recently re-analyzed.
It was found that, in this group, risk perception was related to qualitative risk
characteristics. The correlations were lower than for the public, probably due to the much smaller standard deviations of the experts' data.
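The point about smaller standard deviations is the classical restriction-of-range effect: sampling a subgroup with a narrow spread on one variable mechanically attenuates its correlations. A toy simulation (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Two variables correlated about 0.6 in the full population
x = rng.normal(size=n)
y = 0.6 * x + 0.8 * rng.normal(size=n)

r_full = np.corrcoef(x, y)[0, 1]

# A subgroup with restricted spread on x, analogous to a homogeneous
# expert sample: keep only cases with |x| < 0.5
mask = np.abs(x) < 0.5
r_restricted = np.corrcoef(x[mask], y[mask])[0, 1]

print(round(r_full, 2), round(r_restricted, 2))
```

The same underlying relationship yields a much weaker observed correlation in the restricted subgroup, consistent with the explanation offered above.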
A further development was the introduction of trust as yet another determinant of risk perception, said to be very powerful. But trust correlates only about 0.3 with perceived risk. Only a few studies, out of about 20 published, have provided higher levels of explanation. Siegrist has claimed that there are strong correlations between trust and perceived risk, but the claim is only supported by data where both risk and trust are measured by means of Likert-type attitude scale items, not by the usual ratings of both constructs. Hence, a methodological factor seems to be the explanation for his result.
A well-known study by Slovic involved asking people how their trust would be affected by various events. The conclusion was that it is easy to destroy trust, but hard to re-establish it. This may be true, but the data say little, since they merely reflect the beliefs that people had about how their trust would change. Such beliefs are notoriously invalid as indicators of what psychological events would in fact take place, cf. Nisbett and Ross.
A still further methodological twist was the introduction of associations. In studies of public opinion on a high-level nuclear waste (HLNW) repository in Nevada, people were asked to report their associations to a nuclear waste repository. As expected, these were quite negative. "Nuclear" and "waste" are both words which have negative connotations, and their concatenation even more so. However, it is unclear just what such associations mean in terms of action readiness or further consequences. Attitude by itself has well-known ties to action, and in a comparison it was found that attitude was a more informative dimension than that of associations.
Risk perception data are presumably collected because there is a belief that perceived
risks are, and should be, of importance to policy makers. The larger the risk, the more
should people demand that the risk be mitigated. That assumption sounds very likely
to be true. However, many other factors enter the picture and it has been found in
several studies that demand for risk mitigation, when rated separately, is most strongly
related to severity of consequences of a hazard, not to its ”risk”. Perceived risk is
most strongly related to the probability of an accident or some other type of unwanted outcome. These results go against a central tenet of the psychometric paradigm, which is centered on "risk" as people perceive it.
Risk is usually also undifferentiated with regard to target, but it has been found in
many investigations that personal and general risk differ. In relation to risk policy,
personal and general risk have different implications depending on what kind of risk
is under consideration. Results from a study by Sjöberg show that general risk is more
important than personal risk with regard to risk policy attitude when there is a large
protection possibility. These are lifestyle risks such as risk from alcohol consumption.
It is the risk to others which is driving policy attitude. In the other extreme, personal
risk is more important. These are hazards people feel they cannot protect themselves
from, such as nuclear risks.
The psychometric paradigm work usually operates with an undifferentiated concept
of risk without a specified target. It has been found that such non-specific risk ratings
most closely resemble ratings of general risk, thus making them more relevant for
lifestyle risk policy than for environment and technology policy attitudes. This is
unfortunate since the paradigm has been motivated foremost by the need to inform
the latter type of policy making.
Another thesis of the psychometric paradigm is that the media have strong effects on the public's risk perception; see af Wåhlberg and Sjöberg for a review. The media are the main scapegoat in experts' debates of risk policy dilemmas, and it is frequently asserted that media have a very large effect on risk perception, see e.g. Okrent. The media are often seen as irresponsible and only interested in disseminating negative information, with a special inclination to cover low-probability/high-consequence risks. But typically, few references are given to support such statements. A classical study is the one by Combs and Slovic, published over 20 years ago. This was a preliminary study of very limited scope, oriented towards accidents and illnesses rather than risks proper. Okrent also cites Kone and Mullet as support for the thesis of media effects. However, that paper describes a study of risk perception in Africa, finds it similar to Western risk perception, and concludes that media must be the reason. This is clearly only indirect evidence for a media effect.
Combs and Slovic reported that perceived causes of death correlated more strongly
with media report frequency than with actual mortality statistics. As they pointed
out, this is no proof of a causal effect, of course. Many alternative explanations are
possible. Perhaps more important, the study pertained to illnesses and accidents rather
than technologies, which are the usual topic of risk policy issues. Suppose the study had relied on data from, e.g., nuclear waste and genetically modified food. No deaths have been reported in the media, and none are known in statistical databases. Yet people have strong views about these hazards and perceive risks of varying intensity.
Why? Perhaps because media cover them. But there are no data showing strong
relations between perceived risk and amount of media coverage.
In our recent project concerned with risk perception and media in five countries in
Western Europe (project RISKPERCOM, with partners in Sweden, Norway, France,
the UK and Spain) we found no support for the simple media image common in
the debate. We found that perceived risk correlated only weakly (across hazards)
with media coverage, whereas self-rated knowledge about the risks correlated more
strongly. The former correlations were only of the size 0.2, the latter about 0.6. Demand for risk mitigation was not correlated with media coverage.
In our study, the media gave a fairly balanced picture of risks and accidents. It was also not true that the media gave priority to risks of the small-probability/large-consequence type. The media reported, for example, many traffic accidents, which are fairly common and typically do not have catastrophic consequences; only a few people are involved in any one accident. What was written in Swedish newspapers about Chernobyl in the spring of 1996, commemorating the Chernobyl accident in 1986, was indeed negative (how could it be otherwise?), but Swedish nuclear power was given a balanced treatment. To take another example, some researchers have complained that the media did not inform sufficiently about the facts in connection with the local referendum about a high-level nuclear waste repository in Malå in Northern Sweden in 1997, but there are many other ways to get information than through the local newspaper. The local newspaper does not even seem to be the most likely source of scientific and technical information. The web is today a very important alternative.
The media often seem to be the only channel of information seriously considered when risk perception is to be accounted for. However, there are several alternatives worthy of consideration. One is movies³ and television dramas; another is rumor, or personal contacts. It is also possible that the mere connotations of the terms carry strong suggestive power for eliciting risk perceptions. "Nuclear waste" sounds very unpleasant even if you know nothing more about it than the verbal label and have only a vague idea about the concepts of waste and nuclear.
The psychometric paradigm is also open to the possibility that Cultural Theory carries important explanatory power with regard to perceived risk. This theory is discussed elsewhere and will not be treated here in detail. In a quantitative form, as operationalized by Dake and Wildavsky, it has been found to explain only some 5% of the variance of perceived risk in European studies, and somewhat more in US data. Peters and Slovic, with results fully in accordance with such a very modest level of explanatory power, still claimed that Cultural Theory dimensions were powerful determinants of perceived risk. They investigated correlations between Cultural Theory scales and various risk judgments and found a number of mostly very weak, but often statistically significant, correlations, which they describe in the text in a somewhat optimistic manner. For example, the correlations between the egalitarian subscale and technology concerns were -0.22, -0.10, -0.01 and 0.02, a not very impressive set of correlations. Nonetheless the authors wrote that "these data confirm the hypothesis that the Egalitarian factor will be strongly related to concerns about technology... " (p. 1439, emphasis added). Other examples of optimistic bias in interpretations could be given.
Slovic and Peters have responded to this criticism with arguments pertaining to the use of squared correlations as measures of explanatory power. Although this is an interesting debate, they did not succeed in demonstrating the futility of correlations in the current contexts. They cited authors who had shown that correlations between independent and dependent variables in an experimental context may be quite misleading as to the practical value of the experimental interventions. However, the present discussion is about the explanatory value of a dimension as an indicator of the value of a theory. The squared correlation is the first statistic suggested in the APA manual as a measure of effect size. It seems strange to see this standard statistical measure be written off as misleading and even obsolete.

3. For example, the movie "The China Syndrome", which was released shortly before the TMI accident in 1979.
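The arithmetic at stake is small: squaring a correlation gives the proportion of variance it explains. Applied to the egalitarian-subscale correlations quoted above:

```python
# Squared correlations (proportion of variance explained) for the
# egalitarian-subscale correlations quoted above
for r in (-0.22, -0.10, -0.01, 0.02):
    print(f"r = {r:+.2f}  ->  r^2 = {r * r:.4f}")
```

Even the largest of the four, r = -0.22, corresponds to under 5% of explained variance, which is the basis of the criticism above.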
Finally, what is driving policy attitudes and demand for risk mitigation? Several studies, including the present one, have shown that it is the seriousness of consequences, not risk per se. Here, it was also found that the risk-inducing activity was important in the sense of commonness, or frequency. This makes sense: if people are often exposed to a risk, it would be more important to mitigate it. On the other hand, the riskiness of the activity had little importance for demand for mitigation, and accident risk also turned out to be less powerful as an explanatory variable. The psychometric paradigm is based on the assumption that the risk of an activity is the most important variable to investigate; a wealth of data has by now shown this belief to be false, and that it does make a difference which risk variable is selected for study. It has also been found, in a very consistent set of analyses, that interest in a risk was a relatively powerful predictor of demand for risk mitigation, clearly more so than perceived risk. The risk perceiver seems to be willing to pursue risk themes in a positive mood, perhaps because such an active attitude creates a feeling of empowerment and heightened self-esteem. The results were surprising, because we had earlier found that risk perception was negatively related to interest in groups of adolescents. Risk perception should clearly not be analyzed only with reference to negative thoughts and feelings.
The psychometric paradigm is an interesting and fruitful pioneering effort, and it has without doubt done much to create interest in important issues. Yet, like so many pioneering efforts, it has raised more questions than it has been able to provide well-founded answers to.
The psychometric paradigm: conclusions
As pointed out in the introduction, the shortcomings of the psychometric paradigm are not hard to see. When data are analyzed in an appropriate manner, it is clear that the model accounts for, at most, 20-25% of the variance of perceived risk and risk tolerance. The strong explanatory power claimed for the model is based on misleading data analysis using means. The same method that was used in the 1978 paper is still used for current claims about the powerful properties of a similar model of ecological risk.
The explanatory power of the psychometric model is largely due to the inclusion of Dread items among the explanatory variables. However, Dread is probably a consequence of perceived risk, not a cause of it, and therefore it should not be used as an explanatory variable. It seems unlikely that we first feel fear, then perceive a risk; at the least, the assumption must be substantiated in some way. The risk characteristics form a logically coherent group of variables denoting properties of the hazard, not how the respondent reacts emotionally to the hazard. The fact that there is a moderately strong and consistent relationship between emotional reactions and risk perception is documented in our work as reported elsewhere. It can be added that Dread was a misnomer for the set of items summarized under that heading; they denote severity of consequences in various respects. The one or two items which do attempt to measure emotional reactions appear not to carry any explanatory power.
The data on experts cited so frequently turn out, on examination, to be very weak as evidence. Only 15 experts were studied, and their claim to expertise is most dubious. When real experts in a chosen field were analyzed, as I recently did for the case of nuclear waste, they turned out to give risk ratings which were explicable by the psychometric model dimensions in a manner similar to those of other respondents. The present study gave similar results: experts' risk perception was correlated with risk characteristics in a manner similar to that of the public. This was true also for policy attitudes, but that aspect of risk attitudes needs to be developed beyond what was done in the present study.
Trust is a fairly weak explanatory variable with regard to perceived risk. The present
results support that statement and suggest that trust measures should be made
specific, not general. A specific risk measure turned out to give moderately important
contributions to the explanation of perceived risk.
The psychometric paradigm also misses out by neglecting important distinctions and variables. People make quite different judgments of personal and general risk, for example. Personal and general risk differ both in level (personal risk is usually much lower than general risk) and in correlates. General risk is a better predictor of policy attitudes for lifestyle risks such as alcohol, while personal risk is more or equally important for technology and environment risks perceived as being outside of our individual control, such as nuclear power. The model does not take into account the powerful dimension we have called Tampering with Nature, nor does it account for the rather loose relationship between rated "risk" and policy attitudes, such as demand for risk mitigation. The latter dimension is hard to explain with risk data, but so far it has been found to be most strongly related to expected consequences rather than probabilities or risks. The latter finding is true whether hazards are defined as activities or consequences, as shown in the present paper and elsewhere. The psychometric paradigm is founded on the presumption that activities are important for risk policy attitudes. The present data, and data in the cited references, strongly support the notion that consequences are much more important.
Despite its important shortcomings, the psychometric paradigm has become an attractive basis for most current work on risk perception, both research and consulting. The question I pose here is why. A few suggestions are offered as answers:

The model is very simple. It is very easy to understand and is close to "common sense".
The model provides answers which are politically desirable. The public is depicted
as emotional and ignorant, just as policy makers have always suspected. In contrast,
experts are said to make the objectively correct risk judgments.
The model seems to supply a final answer. As we have seen above, the model has been popularized with the help of a kind of data analysis which can hardly fail to give the impression that risk perception has been explained. Furthermore, it gives replicable data, probably because it profits from the common semantics of risk and related concepts in various groups and even nations or cultures. Similar semantic overlap has been noted before, e.g. with Osgood's semantic differential.
Perhaps other aspects are also involved, but those mentioned above appear to carry the psychometric paradigm a long way.
A conception of the risk perceiver
Myths based on the psychometric paradigm have indeed penetrated very far. Jasanoff
puts the matter quite well in the following citation:
”The psychometric paradigm, in particular, has been instrumental in preserving
a sharp dualism between lay and expert perceptions of risk, together with an
asymmetrical emphasis on investigating and correcting distortions in lay people’s
assessments of environmental and health hazards. In policy settings, psychometric
research has provided the scientific basis for a realist model of decision making
that seeks to insulate supposedly rational expert judgments from contamination by
irrational public fears”. (p. 98).
The simple diagram, reproduced many times, of the factors Dread and Novelty seemed to show convincingly that people opposed nuclear power because they were emotional and ignorant. And, as Jasanoff points out, experts were widely held to be rational and objective in their risk perceptions. The widespread notion that policy decisions should ignore the public's perceived risk probably has some of its roots, and at least considerable support, in these conclusions from the psychometric model. It disregards the simple fact that the views of the public cannot and should not be ignored in a democracy. Public opinion is also becoming a more salient factor in sociological theory.
In our research we have found a quite different picture of the risk perceiver, be he or
she an expert or not. The dominating factors in risk perception are of an ideological
character, e.g. Tampering with Nature. The belief that the development of technology
involves unknown dangers is very important, and it seems no less rational than not
believing that such is the case. Science develops all the time, and risks we do not know about today will be discovered tomorrow. If experts say "there is no risk", this usually means "no risks have yet been discovered". In studies especially oriented towards New Age beliefs and world views, it was found that such beliefs account for some 10-15 percent of the variance of risk perception.
Maybe we have here a clue to the relatively moderate importance of trust. I may trust that experts tell the truth as they know it, but still believe that they do not know the whole truth, and that nobody does.
Why should risk be so important? In the data presented in this paper we have seen risk to be a dominating factor in accounting for attitude, benefits being much less important. In related work, we found that people are more easily sensitized to risk than to safety. Mood states have been found to be more influenced by negative expectations than by positive ones. People seem more eager to avoid risks than to pursue chances. Maybe biological factors are responsible for human risk aversion.
References
af Wåhlberg, A., & Sjöberg, L. (2000). Risk perception and the media. Journal of Risk
Research, 3, 31-50.
American Psychological Association. (1994). Publication manual of the American
Psychological Association, 4th Ed. Washington, DC: American Psychological
Association.
Boholm, Å. (1996). The cultural theory of risk: an anthropological critique. Ethnos,
61, 64-84.
Boholm, Å. (1998). Comparative studies of risk perception: A review of twenty years
of research. Journal of Risk Research, 1, 135-164.
Breyer, S. (1993). Breaking the vicious circle: Toward effective risk regulation.
Cambridge, MA: Harvard University Press.
Cohen, B. L. (1985). Criteria for technology acceptability. Risk Analysis, 5, 1-2.
Combs, B., & Slovic, P. (1979). Newspaper coverage of causes of death. Journalism
Quarterly, 56, 837-843,849.
Dake, K. (1990). Technology on trial: Orienting dispositions toward environmental
and health hazards. Unpublished Ph. D. thesis, University of California, Berkeley.
Douglas, M., & Wildavsky, A. (1982). Risk and culture. Berkeley, CA: University of
California Press.
Drottz-Sjöberg, B.-M. (1993). Risk perceptions related to varied frames of reference.
Paper presented at the SRA Europe Third Conference. Risk analysis: underlying
rationales, Paris.
Drottz-Sjöberg, B.-M., & Sjöberg, L. (1990). Risk perception and worries after the
Chernobyl accident. Journal of Environmental Psychology, 10, 135-149.
Drottz-Sjöberg, B.-M., & Sjöberg, L. (1991). Attitudes and conceptions of adolescents
with regard to nuclear power and radioactive wastes. Journal of Applied Social
Psychology, 21, 2007-2035.
Dunlap, R. E., Kraft, M. E., & Rosa, E. A. (Eds.). (1993). Public reactions to nuclear
waste. Durham: Duke University Press.
Findahl, O. (1998). Media som folkbildare. Malå och kärnavfallet. In R. Lidskog
(Ed.), Kommunen och kärnavfallet. Svensk kärnavfallspolitik på 1990-talet (pp.
211-242). Stockholm: Carlsson Bokförlag.
Fischhoff, B., Slovic, P., Lichtenstein, S., Read, S., & Combs, B. (1978). How safe
is safe enough? A psychometric study of attitudes towards technological risks and
benefits. Policy Sciences, 9, 127-152.
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An
introduction to theory and research. Reading, MA: Addison-Wesley.
Gardner, G. T., & Gould, L. C. (1989). Public perceptions of the risk and benefits of
technology. Risk Analysis, 9, 225-242.
Graham. (2001). Technological danger without stigma: The case of automobile
airbags. In J. Flynn & P. Slovic & H. Kunreuther (Eds.), Risk, media, and stigma.
Understanding public challenges to modern science and technology (pp. 241-256).
London: Earthscan.
Jasanoff, S. (1998). The political science of risk perception. Reliability Engineering
and System Safety, 59, 91-99.
Kapferer, J. N. (1989). A mass poisoning rumor in Europe. Public Opinion Quarterly,
53, 467-481.
Kone, D., & Mullet, E. (1994). Societal risk perception and media coverage. Risk
Analysis, 14, 21-24.
Lopes, L. (1983). Some thoughts on the psychological concept of risk. Journal of
Experimental Psychology: Human Perception and Performance, 9, 137-144.
Lopes, L. L. (1995). Algebra and process in the modelling of risky choice. In J.
Busemeyer & R. Hastie & D. L. Medin (Eds.), Decision making from a cognitive
perspective. San Diego CA: Academic Press.
Marris, C., Langford, I., Saunderson, T., & O’Riordan, T. (1997). Exploring the
”psychometric paradigm”: Comparisons between aggregate and individual analyses.
Risk Analysis, 17, 303-312.
Marris, C., Langford, I. H., & O’Riordan, T. (1998). A quantitative test of the cultural
theory of risk perceptions: Comparison with the psychometric paradigm. Risk
Analysis, 18, 635-647.
Martin, B. (1989). The sociology of the fluoridation controversy: A reexamination.
The Sociology Quarterly, 30, 59-76.
McDaniels, T. L., Axelrod, L. J., Cavanagh, N. S., & Slovic, P. (1997). Perception of
ecological risk to water environments. Risk Analysis, 17, 341-352.
Nilsson, Å., Sjöberg, L., & af Wåhlberg, A. (1997). Ten years after Chernobyl: The
reporting of nuclear and other hazards in six Swedish newspapers (Rhizikon: Risk
Research Report 28). Stockholm: Center for Risk Research.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports
on mental processes. Psychological Review, 84, 231-259.
Nuclear Regulatory Commission (NRC). (1975). The reactor safety study: An
assessment of accident risks in US commercial nuclear power plants (WASH-1400, NUREG-75/014). Washington, DC: Government Printing Office.
Okrent, D. (1998). Risk perception and risk management: on knowledge, resource
allocation and equity. Reliability Engineering and System Safety, 59, 17-25.
Osgood, C. E., Suci, G. J., & Tannenbaum, P. H. (1957). The measurement of meaning.
Urbana: University of Illinois Press.
Peters, E., & Slovic, P. (1996). The role of affect and worldviews as orienting
dispositions in the perception and acceptance of nuclear power. Journal of Applied
Social Psychology, 26, 1427-1453.
Robinson, W. S. (1950). Ecological correlations and the behavior of individuals.
American Sociological Review, 15, 351-357.
Rowe, G., & Wright, G. (2001). Differences in expert and lay judgments of risk: myth
or reality? Risk Analysis, 21, 341-356.
Sandman, P. M. (1993). Responding to community outrage: Strategies for effective
risk communication. Fairfax, Va.: American Industrial Hygiene Association.
Siegrist, M. (1999). A causal model explaining the perception and acceptance of gene
technology. Journal of Applied Social Psychology, 29, 2093-2106.
Siegrist, M. (2000). The influence of trust and perceptions of risks and benefits on the
acceptance of gene technology. Risk Analysis, 20, 195-204.
Sjöberg, L. (1979). Strength of belief and risk. Policy Sciences, 11, 39-57.
Sjöberg, L. (1989). Mood and expectation. In A. F. Bennett & K. M. McConkey (Eds.),
Cognition in individual and social contexts (pp. 337-348). Amsterdam: Elsevier.
Sjöberg, L. (1993). Perceived risk vs demand for risk reduction. Paper presented at
the Consequences of Risk Perceptions, Center for Risk Research, Stockholm School
of Economics.
Sjöberg, L. (1994). Attityder till svenskt medlemskap i EU och riskperception.
(Attitudes to Swedish membership in the EU and risk perception).
Sjöberg, L. (1996a). A discussion of the limitations of the psychometric and Cultural
Theory approaches to risk perception. Radiation Protection Dosimetry, 68, 219-225.
Sjöberg, L. (1996b). Riskuppfattning och inställning till svenskt medlemskap i EU.
(Risk perception and attitude to Swedish membership in the EU). Stockholm:
Styrelsen för Psykologiskt Försvar.
Sjöberg, L. (1997a). Explaining risk perception: An empirical and quantitative
evaluation of cultural theory. Risk Decision and Policy, 2, 113-130.
Sjöberg, L. (1997b). Perceived risk and tampering with nature: An
application of the Extended Psychometric Model to nuclear disaster risk.
Sjöberg, L. (1998a). Why do people demand risk reduction? In S. Lydersen & G. K.
Hansen & H. A. Sandtorv (Eds.), ESREL-98: Safety and reliability (pp. 751-758).
Trondheim: A. A. Balkema.
Sjöberg, L. (1998b). World views, political attitudes and risk perception. Risk: Health, Safety and Environment, 9, 137-152.
Sjöberg, L. (1998c). Worry and risk perception. Risk Analysis, 18, 85-93.
Sjöberg, L. (1999a). Consequences of perceived risk: Demand for mitigation. Journal
of Risk Research, 2, 129-149.
Sjöberg, L. (1999b). Perceived competence and motivation in industry and government
as factors in risk perception. In G. Cvetkovich & R. E. Löfstedt (Eds.), Social trust
and the management of risk (pp. 89-99). London: Earthscan.
Sjöberg, L. (1999c). Risk perception by the public and by experts: A dilemma in risk
management. Human Ecology Review, 6, 1-9.
Sjöberg, L. (1999d). Risk perception in Western Europe. Ambio, 28, 543-549.
Sjöberg, L. (1999, July). The psychometric paradigm revisited. Paper presented at the
Annual Meeting, Royal Statistical Society, University of Warwick, Warwick, UK.
Sjöberg, L. (2000a). Consequences matter, ”risk” is marginal. Journal of Risk
Research, 3, 287-295.
Sjöberg, L. (2000b). The different dynamics of personal and general risk. In M. P.
Cottam & D. W. Harvey & R. P. Pape & J. Tait (Eds.), Foresight and precaution.
Volume 1 (pp. 1149-1155). Rotterdam: A. A. Balkema.
Sjöberg, L. (2000c). Factors in risk perception. Risk Analysis, 20, 1-11.
Sjöberg, L. (2001a). Limits of knowledge and the limited importance of trust. Risk
Analysis, 21, 189-198.
Sjöberg, L. (2001b). Political decisions and public risk perception. Reliability
Engineering and Systems Safety, 72, 115-124.
Sjöberg, L. (2001c). Whose risk perception should influence decisions? Reliability
Engineering and Systems Safety, 72, 149-152.
Sjöberg, L. (2002a). The allegedly simple structure of experts’ risk perception: An
urban legend in risk research. Science, Technology and Human Values, 27, 443-459.
Sjöberg, L. (2002b). Are received risk perception models alive and well? Risk
Analysis, 22, 665-670.
Sjöberg, L. (2002c). Attitudes to technology and risk: Going beyond what is
immediately given. Policy Sciences, 35, 379-400.
Sjöberg, L. (2002d). New Age and risk perception. Risk Analysis, 22, 751-764.
Sjöberg, L. (in press-a). Distal factors in risk perception. Journal of Risk Research.
Sjöberg, L. (in press-b). Risk perception, emotion, and policy: The case of nuclear
technology. European Review.
Sjöberg, L., af Wåhlberg, A., & Kvist, P. (1998). The rise of risk: Risk related bills
submitted to the Swedish parliament in 1964-65 and 1993-95. Journal of Risk
Research, 1, 191-195.
Sjöberg, L., & Drottz-Sjöberg, B.-M. (1993). Attitudes to nuclear waste (Rhizikon:
Risk Research Report 12). Stockholm: Center for Risk Research.
Sjöberg, L., Hansson, S.-O., Boholm, Å., Peterson, M., & Fromm, J. (2002). Attitudes
toward technology and risk. Unpublished manuscript.
Sjöberg, L., Jansson, B., Brenot, J., Frewer, L., Prades, A., & Tönnesen, A. (2000).
Radiation risk perception in commemoration of Chernobyl: A cross-national study
in three waves (Rhizikon: Risk Research Report 33). Stockholm: Center for Risk
Research.
Slovic, P. (1992). Perception of risk: reflections on the psychometric paradigm. In S.
Krimsky & D. Golding (Eds.), Social theories of risk (pp. 117-152). Westport, CT:
Praeger.
Slovic, P. (1993). Perceived risk, trust, and democracy. Risk Analysis, 13, 675-682.
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1979). Rating the risks. Environment,
21(3), 14-20,36-39.
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1980). Facts and fears: Understanding
perceived risk. In R. Schwing & J. Albers (Eds.), Societal risk assessment: How safe
is safe enough? New York: Plenum.
Slovic, P., Flynn, J. H., & Layman, M. (1991). Perceived risk, trust, and the politics of
nuclear waste. Science, 254, 1603-1607.
Slovic, P., Layman, M., & Flynn, J. H. (1993). Perceived risk, trust, and nuclear waste:
Lessons from Yucca Mountain. In R. E. Dunlap & M. E. Kraft & E. A. Rosa (Eds.),
Public reactions to nuclear waste (pp. 64-86). Durham: Duke University Press.
Slovic, P., & Peters, E. (1998). The importance of worldviews in risk perception. Risk
Decision and Policy, 3(2), 165-170.
Sowby, F. D. (1965). Radiation and other risks. Health Physics, 11, 879-887.
Starr, C. (1969). Social benefit versus technological risk. Science, 165, 1232-1238.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: heuristics and
biases. Science, 185, 1124-1131.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of
choice. Science, 211, 453-458.
Waerneryd, K.-E. (1996). Risk attitudes and risky behavior. Journal of Economic
Psychology, 17, 749-770.
Weakliem, D. (in press). Public opinion research and political sociology. Research in
Political Sociology.
Wildavsky, A., & Dake, K. (1990). Theories of risk perception: Who fears what and
why? Daedalus, 119(4), 41-60.
Strategic Questions Regarding the Patenting
System – Global Market Access Demands
IPR Protection
Eskil Ullberg
Service Management Group
SWEDEN
[email protected]
Introduction
The role patenting plays in corporate strategy, and more generally in the global
economy at large, is changing. During the last two decades it has become a key
instrument for market access and trade, strongly associated with economic
development in a more “co-productive” fashion. The previous “monopoly right”
is converging, in usage, towards an instrument for securing intellectual assets, or
property, as the economy moves from a “product” to a “service” economy.
Management of risk1
The sole purpose of patenting, from a corporate perspective, is then to manage the
risk and uncertainty in the (global) market. This view of patenting best explains the
usage of the patenting system. It also gives a way of thinking about changing the system
to become more efficient in absorbing risks and uncertainties in today’s more complex
and uncertain world. The patent right is transferable with mutual consent and thereby
provides a basis for a “market in ideas”. Not only end customers but
also strategic partners become customers of the technology you have developed. This
transferability of the right puts competition closer to the customer.
Management of intellectual resources
The general legal structure of assets can be divided into two categories: physical
assets and intellectual property rights.
Physical assets are defined as “value in possession”. This means that the value is there
whether you do something or not. You must use it to create value, but the right, and the
value, is in the possession of the object. Here a portfolio strategy can be developed to
spread risks effectively.
Intellectual property rights, IPR, on the other hand, are “value in action”. You don’t
possess any asset; the only value lies in doing something with the right. These rights
are therefore much more connected to the management of this opportunity. A portfolio
of IPRs has long been used to spread risks here, but since the action part demands
knowledgeable people (typically working in companies), a more “systemic” view of
the value creation is needed to get the most out of the asset. This is true both for
the creation of these assets and for their exploitation. Trading these rights
therefore becomes a basic tool, a commodity, of the new, more knowledge-driven,
service economy. Several constraints are present today due to the rather recent
development of this trade. Developing countries, and small countries, are
challenged the most here in being able to participate in the “global trade” of IPR. Of these
rights, patents are the most “aggressive” and therefore also the most contested2. The usage
of patents for market access then becomes most challenging for economic actors in
securing their intellectual property worldwide.

1. Risk management issues discussed in: Risk Management – from portfolio strategy to value creating systems strategy, Ullberg et al., Geneva Papers, July 2002.
Capitalization
The patenting system challenged
This development towards the usage of patents and IPR is moving the whole patent
industry “from patent to patenting”. A system of value creation is developing, rather
than “technology monopolies” of competing actors. This is particularly true in
complex technology industries like IT, where no single actor has a “monopoly” on all
the technologies used in a product, for example a single computer, that are needed to be
competitive in the market place.
This challenges traditional ways of doing business and managing risk. The “obvious”
right to market access is not so obvious anymore. The need for securing IPRs
increases if one wants to have direct customer relations and not merely become a
“supplier” in the system.
This development is, however, not only of a technological nature but of a business-logic
nature. The previously dominating “production logic”, where the production of the
product is at the center, is being replaced by a “service logic” type of value creation
where knowledge is at the center. This shift in logic challenges the whole patenting
system. New ways of doing business, in close collaboration with the customer,
innovating “business concepts” rather than innovating technology, also pose a need
for increased risk taking. However, the patenting systems were not made for this
“non-technology” aspect of innovation.
This has led to different positions worldwide. In the US, “business method” patents
are in fact granted. These patents are not technology patents but “schemas” for
doing business. “Abstract schemas” are traditionally not patentable. In Europe the
technology position is firm – no business method patents. Japan follows the US
approach. The effect of this is that other, less efficient, IPRs are used to manage risk,
like trademarks, copyright, design, undisclosed information, geographical indication,
and licensing agreements (for software) (WTO/TRIPS IPRs).
The patenting system is therefore challenged as an instrument to absorb risk in the “new”
global service economy.

2. For example during the TRIPS negotiations and current implementation.
At the same time it is the “service sector” or service activities of the economy that
dominate and grow. This has created an increasingly complex situation for policy
makers, investors and inventors.
Valuation of these assets also poses challenges. Since they are very uncertain on
average when it comes to value – only 2% of all patents lead to any business – a more
“innovation system” approach is necessary.
A “risk management” approach to the patenting system is therefore a key to
understanding the strategy of patenting systems and their usage. The system becomes
an “infrastructure” to the economy.
This has policy implications for patenting institutions. A more “active” role in the
economy is demanded, being part of the risk management of companies.
There is also a “chicken and egg” problem related to adopting patenting systems for
developing countries. The European Patent Office, EPO, is an interesting case with
respect to strategic moves towards an “infrastructure for growth”.
1. From Patent to Patenting – how business capitalizes on patents
The usage of patents has changed since they were first conceived.
During the 18th century, “protection” was on the agenda. The development of
national industry required trade barriers. Manufacturing monopolies in the UK
helped boost the industrial revolution. This protection was the fundament of the
system: the industrial manufacturing logic – “the product”.
During the 19th and 20th centuries, the formula changed somewhat to “innovation
and information for monopoly”. The idea was to enable companies to recover R&D
expenses through monopoly pricing of products.
Towards the end of the 20th century, and now in the 21st, it seems that the
usage is changing again, towards “market access”. Patenting becomes “a business”
(through extensive licensing/cross-licensing). With more complex products that
turn into systems, no single supplier can make all the R&D investments. A multitude
of patents is needed to make a “system”. This development moves into services, as
indicated above. This means that portfolios of patents are interesting for all actors just
to get market access – something new for many industries. These portfolios thus give
competitors incentives to cross-license, since they too would like access
to all the best technology. In a global market, building a national industry is no
longer relevant, nor is recovering cost the only way to capitalize on knowledge. The
capitalization reaches far beyond that and increasingly becomes a source of income. New
risk management strategies are developing with these new perspectives.
a) Usage of patents on “everyone’s” strategic agenda
Different strategies are developing in this more competitive, global, innovation-driven
service economy. IBM, for example, “the inventor” of modern patent portfolio
management, only patents in the large patenting markets of the world. There, large
patent portfolios are built up. These are used to cross-license with actors from small
countries. In that way both actors get wider market access – but at lower cost for
IBM. Apart from these cross-licensing activities, IBM licenses patents for more than
$1 billion per year. This equals 1% of world licensing revenues!
Telecom companies used to have “gentlemen’s agreements” on innovation and
patenting. Towards the end of the 1980s, this lack of patenting strategy “stopped”
European manufacturers from entering the US market. Now these “gentlemen’s agreements”
are replaced by a global competitive market where IPR plays a role. Companies also
tend to include patented technologies in world standards. This gives the right to
licensing revenues (under RAND conditions), replacing the original “monopoly idea”
of patenting with a market-access idea where standards play the most important role
for market access: for example GSM, GPRS, etc. for mobile telephony.
This change in patenting strategy (the usage of patents as an enabler of new, more
competitive corporate strategy) reflects a shift in competitive focus towards a service
logic and economy.
b) The challenge for the patenting system – the service economy is where the
value is created
The development of the service economy has changed the driver of the economy. It
is now largely driven by customer-oriented usage concepts. The West has a lead of some 30
years over the rest of the world in this development. Intellectual property and innovation
play a critical role in a country’s knowledge strategy for the future3.
Service sector in % of GDP4

Year    West    World
1900    25%
1970    50%
2000    >70%    ~50%

3. Ref. to presentation at UNECE IP Advisory Group in Belgrade, 2004-03-27, “Management of IP Assets for Strategic Advantage and Development of IP-driven growth strategies”, E. Ullberg.
4. Geneva Association and other sources.
c) Patenting system competition
Innovation now takes place on a global basis. Experiences from South-East Asia
are shared with those of the US, Sweden, Australia, etc. This “global innovation” needs
global protection. The protection can today only be given nationally (even the EPC
is a “bundle” of national patents), thus putting patenting systems in competition with
each other. Their respective risk management capabilities play an essential role in
attracting IPR and then enabling, for example, cross-licensing and opening “global”
market access5. The competition is national versus “regional” in Europe, and US – Europe
– Japan when it comes to where to patent first. This is guided by language, market
size, presumption of validity, enforcement, etc6. With a high presumption of validity,
more risk can be assumed, stimulating investment in innovation. Competing systems
then gain increased importance for market access. Today, when many international
customers choose, they increasingly use the PCT route since it manages risk better:
faster, more uniform (one standard), etc.
2. Complex issue for …
The issue of competing patenting systems and the importance of IPR is complex for:
a) policy makers, b) investors and other stakeholders, as well as c) the inventors.
a) Policy makers
Policy makers face a choice of system. This has to be made in
coordination with other overall economic and international trade policy decisions.
The USPTO, with its “internet patents”, has extraterritorial ambitions: if you use certain
technology on the Internet, you may be infringing US patents.
The EPO has an “extension system” which, as the only system in the world, allows it to
engage in bilateral agreements. A nation can choose to opt in or out of a validation
system, which grants patents on a state-of-the-art basis (all major technology is patented
in the US, Europe and Japan). The concept of “back-yard” (the Wilson doctrine) does not
hold any longer. New approaches to the intellectual property field can be made after
WTO/TRIPS. The WIPO standard, PCT, is the “preferred choice” of international
users.
Standards, which are set by WTO/TRIPS, are not easy to fulfill and not obvious for
all economic development levels. IPR depends on a certain innovation activity. Since
TRIPS standards are “maximum” standards rather than “minimum” standards it is
difficult to set standards at will. It is also very challenging, if not almost impossible,
to build a database of all the state-of-the-art in the world if one does not get all applications
of relevance to the art. This economic equation then favors centralized handling of
patents.

5. Much additional research is needed here to get empirical facts on the relationship between IPR and FDI (Foreign Direct Investment).
6. In the US, since the early 1980s, a change in principle was made regarding the presumption of validity. The view used to be that the courts made a fresh investigation upon a challenge of a patent; now the courts presume that it is valid.
Developing a patenting system – a “chicken and egg” problem
For nations, the patenting system can be a “chicken and egg” problem. All European
patenting systems came into place after extensive copying of technology
from abroad. German and Swiss chemistry is an example. The copying led to growth
of a national industry in the field. The national industry then demanded protection
from national competition – and got it. Some recent examples of this are Taiwan and
India.
Taiwan started out as a high-tech US manufacturing facility. This spurred local
innovation. Demand from local inventors – and investors – for protection in the local
market brought the patenting system into place.
India is a big (“illegal”) medicine manufacturer, which is spurring local innovation.
Now voices are being raised by local inventors for protection in the local market. Soon,
maybe, an efficient patenting system will be in place.
Now it is the quality of patents that is the predominant issue: EPO “quality” of
search, etc., versus the US “registration” policy with extraterritorial claims.
b) Investors and other stake holders
From the investor perspective, the risk management issue dominates. Here, financing
issues are driven by the uncertainty of investments in R&D activities. A strong right
reduces the uncertainty. Other measures of the value come from groundbreaking research
by Prof. B. Lev, based on citations: if a certain patent is cited often in comparison
with other new applications, this has been shown to indicate future earnings from that
patent/technology very well. The quality issue is a real issue here: quality of search
is crucial for any judgment of the value of a patent.
c) Inventor
For the inventor, today’s system lacks much. It is heavy, cumbersome, slow and not
very transparent (the timeliness of decisions is easily manipulated, etc.). The system is
predominantly used – and therefore also built – for the industrial policy it should
support. That policy has primarily served large companies, not small ones. Although IPR
has an “equalizing” effect between companies and countries, the procedures and efforts
needed to secure one’s assets are far from “efficient” for smaller companies.
A democratization of IPR seems to be a good idea for the global economy, when the
economic differences are exposed in free trade.
3. Summary Capitalization
Summarizing this section on capitalization: patents (or IPR) become part of every
business as the only way to secure freedom to market.
This is not a simple granting issue any more but a complex business issue with strong
global undertones (innovation, patent systems, etc.), in fierce competition, with multiple
laws and possibilities to cheat.
This development differs from industry to industry, but globalization drives even
medical companies to rethink their market policy7.
Valuation
Valuation – a system’s issue
Valuing a patent or IPR is typically not a “single patent” issue but a much more
systemic one. It is the “technological capabilities” of the company that are valued.
Biotech and Citations
Valuation of hi-tech biomedical companies in the US shows a direct relationship
between citations of researchers/patents and the market value of companies8.
This indicates that it is not an individual patent issue but more a “technological
capability” issue.
IT and Patents
IBM states patents in its annual report for the purpose of showing consistency in
technological capability and R&D investments.
Citations/patent
According to B. Lev’s research, consistency in the performance of new technology is a
good differentiator of the relation between investments in R&D activity and market value.
For example, Dow Chemical has low citations/patent but DuPont high citations/patent.
Consistency in patenting is the value. The “R&D activities” are given a number by the
fact that a lot of things are going on in relation to a specific technical area.
From a policy point of view, a good idea would then be to provide for “valuation
system” actors (in other words, not to preempt that market with, for example,
government subvention statistics).
7. Ref. to Economist article on pharmaceutical companies, July 13th 2002, pp. 51-52.
8. B. Lev, et al., Stern, N.Y. Univ.
Valuation – corporate side
Risk in R&D investments
Risk in R&D investments can be managed by classifying patents according to
citations, which gives investors and management a sharp instrument. The patenting system
can thereby help create great institutions.
The typical average patent has a 2% success rate, but taking “high quality” patents
and innovation systems (with a high number of citations) into account, the volatility of
future earnings with respect to investment in R&D activities goes down by a factor
of 4 (B. Lev, et al., Stern Univ.). The risk is then at a level comparable to physical assets.
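The volatility-reduction claim can be illustrated with a toy calculation (the portfolio below is invented; the citation threshold and returns are assumptions, not Lev's actual data): classify patents by citation count and compare the spread of realized returns in the high-citation subset with that of the portfolio as a whole.

```python
import statistics

# Hypothetical portfolio: (citation count, realized return on R&D invested).
# Invented numbers -- most patents return nothing, a few high-citation
# patents carry nearly all the value, as the 2% success rate suggests.
portfolio = [
    (0, 0.0), (1, 0.0), (0, 0.0), (2, 0.1), (1, 0.0),
    (3, 0.0), (0, 0.0), (25, 1.8), (40, 2.2), (2, 0.0),
    (1, 0.0), (30, 2.0), (0, 0.0), (35, 1.9), (2, 0.1),
]

CITATION_THRESHOLD = 10  # arbitrary cut-off for "high quality"

all_returns = [ret for _, ret in portfolio]
high_cited = [ret for cites, ret in portfolio if cites >= CITATION_THRESHOLD]

vol_all = statistics.stdev(all_returns)   # volatility of the whole portfolio
vol_high = statistics.stdev(high_cited)   # volatility of the high-citation set

print(f"whole portfolio: {vol_all:.2f}")
print(f"high-citation:   {vol_high:.2f}")
print(f"reduction:       {vol_all / vol_high:.1f}x")
```

With these invented numbers the high-citation subset is several times less volatile than the portfolio as a whole, which is the qualitative point behind classifying patents by citations.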
Generally, valuation is then related to the “innovation system” and the “inventor”. Valuation
is also linked to access to global experiences. Actors outside the innovator’s and
company management’s control hold the citation information in this case. This provides
for a neutral, transparent position, rather than overstated company “innovation
reports”.
A great institution can then be built not on a “bright idea”, nor on a “great company”,
but on the management of that organization’s capacity to consistently produce
high-quality high tech.
The system
EPO Case
The European Patent Office, EPO, has adopted this risk management view of the
patenting system, for the benefit of the economy as a whole. It focuses on the
role of creating an “infrastructure for the economy” – managing economic risk and
uncertainty. The ultimate potential of this is a more efficient patenting system. More
on this can be found on the EPO website.
Rethinking the patenting system
Strategic issues related to the patenting system from a policy and corporate perspective
then arise from several sources:
• From patent to patenting – adopting a systemic view and risk management focus
• The usage of the system has gone from a simple monopoly right to global market access. The success and challenge lie more in business concept innovation.
• Global patenting system competition creates efficiencies through the customer’s choice
• Introducing a patenting system successfully – a “chicken and egg” problem. Yesterday local innovation was the driver; today we have global innovation. New institutions are needed to absorb this risk more effectively.
• Valuation of patents – a system’s issue
• From the individual investor – “democratization” of patenting
The “survival” of the patenting system is challenged. The local protection
argument is gone in a global economy with TRIPS agreements. The usage is changing
with “global innovation” and the service economy – standards must also change!
The patenting system then gets a new goal: generating growth for the economy.
This goal supersedes national industry protection and R&D cost recovery, and focuses on
a new dimension demanding:
• Global standards/competition
• Independence from government policy
• Service economy
• Private inventor usage (democratization)
The new goal can be achieved by giving the patenting system a global (economic)
risk management focus.
Recommendations for discussions
There are several issues of interest to discuss further. First the “systemic view” of
patenting based on the risk management aspects of the system.
Secondly, there is a multi-agency issue. Typically several government agencies are
involved in the patenting system: legal, economic and financial bodies are the most
common “hosts” of the system. They have conflicting interests, and in order to create an
efficient patenting policy these initiatives must be coordinated in some way.
Thirdly the “chicken and egg discussion” needs much empirical evidence.
Fourth, there are public issues here: efficiency of the patenting system, quality of the
system, scope of patenting (value driven), transparency (sharing information), etc.
Fifth, there are private challenges in creating global protection of intellectual property,
enabling a global market economy. This is very central from an economic policy point of
view.
Taking the role to manage risk for growth combines the private and public interests
– not simply “granting monopolies”. How patents (both the innovation and the right)
are used to manage risk then becomes the key issue to understand.
Participation Under Uncertainty
Moses A. Boudourides
Department of Mathematics
University of Patras
265 00 Rio-Patras
Greece
[email protected]
http://www.math.upatras.gr/~mboudour/
Abstract
This essay reviews a number of theoretical perspectives about uncertainty and
participation in the present-day knowledge-based society. After discussing the
on-going reconfigurations of science, technology and society, we examine how
appropriate for policy studies are various theories of social complexity. Post-normal
science is such an example of a complexity-motivated approach, which justifies civic
participation as a policy response to an increasing uncertainty. But there are different
categories and models of uncertainties implying a variety of configurations of policy
processes. A particular role in all of them is played by expertise whose democratization
is an often claimed imperative nowadays. Moreover, we discuss how different
participatory arrangements are shaped into instruments of policy-making and framing
regulatory processes. As participation necessitates and triggers deliberation, we
proceed to examine the role and the barriers of deliberativeness. Finally, we conclude
by referring to some critical views about the ultimate assumptions of recent European
policy frameworks and the conceptions of civic participation and politicization that
they invoke.
Science, Technology & Society
Many have already discerned that something is changing in science and the world at
the turn of the millennium: “business as usual in science will no longer suffice, that
the world at the close of the 20th century is a fundamentally different world from the
one in which the current scientific enterprise has developed” (Gallopin et al., 2001, p.
219). But almost everybody acknowledges that the main changes have been occurring
in science with respect to the world, i.e., in the relationship between science and
society. In the early arrangements between science and society, there used to exist
clear distinctions between the two fields such that norms and principles in each one
of them were harmoniously cohabiting and co-ruling a deterministic, regular (and
linear) cosmos. Of course, this coexistence did not exclude any combination between
science and society; the trajectories of facts, discoveries, ideas, beliefs were possibly
molded by passing through a multiplicity of different modalities but in each one of
them they were capitulated into exact non-contradicting rules of reason governing
both science or/and society. However, since the late 1980s (and with the end of the
Cold War), the distinctions between science and society were suddenly blurred and
their boundaries became no longer self-evident. Since then, we have been witnessing
a “reconfiguration, one that in the eyes of many researchers has narrowed the confines
of academic freedom, while giving free play to commodification of research and
commercial stakeholder interests (market governance) and other players, including
social movements and activists or NGOs” (Elzinga, 2002, p. 2). In other words, the
witnessed reconfiguration is not just a mere rearrangement by a two-way mixing
or mutual permeation of science and society; it is something more since structural
transformations are occurring in both science and society. In particular, science, the
authority of which used to be the foundation of legitimacy of the modern state, “is
now increasingly seen as an instrument of corporate profit and unaccountable power”
(Ravetz, 2001, p. 7). The way Michel Callon sees it, science in the present-day society
is just a ‘quasi-public good,’ because it has become to a certain degree socially,
economically or politically ‘appropriable’ (Callon, 1994, p. 400).
During the last two decades, there has been an increasing number of different
conceptualizations of the ways such recombination of science and society is shaped
and many different names, ideas, theories and models have been coined in order to
grasp the new relationship between science and society.1 Nevertheless, one should
remark that it is not completely clear whether these conceptualizations are always just
opinions or descriptions of how things are developing or whether they also constitute
normative prescriptions of how things have to develop according to the insight of the
promoters of such concepts and theories. In this sense, Mark Elam and Margareta
Bertilsson (2003) are cautioning that “behind every authoritative account of major
changes of science and society relations stands a more or less explicit vision of how
the future ‘knowledge-based society’ should be organized. The work of accounting for
change is never innocent of a desire to make a difference to change” (p. 2).
Having stated this caveat, it remains to remark that – independently of the rhetoric
and the inmost intentions of the coiners of all such descriptive or normative concepts
or theories – some elementary but very essential premise is required to be satisfied
so that these re-imaginations of science and society might be practically viable and
contextually sustainable. This is the idea that a new ‘social contract’ or a ‘New Deal’
(Lubchenco, 1997; Latour, 1998; Gibbons, 1999) is needed to be established among
all involved parties in the attempted drawing of science closer to society. Such a new
social contract is strongly needed not just as a procedural means to ratify the new
arrangements but also as a necessary requirement to settle with a concomitant crisis
linked to the changing character of science and society in our times. As stated in
a recent working document of the Commission of the European Communities (2000),
there is a paradox in the existing relationship between science and society. On the
one side, science and technology constitute the “heart of the economy and society,”
attracting an increasingly high amount of expectations towards the belief that they
can bring a growing positive impact on society. On the other side, advances in science
and technology have been happening in such ways as to end up greeted with equally
growing skepticism or even alarming hostility. The reason is that sometimes
techno-scientific expertise might either fail to cope with social expectations or tend to
neglect public concerns about the outcomes of techno-scientific developments, which
are vitally important for the whole society.
For this purpose, in a concrete effort to establish a new contract between science and
society, the European Union (EU) has recently adopted a number of extremely urgent
priorities, which aim to improve the quality of policy-making by establishing an
inclusive and participatory policy framework. Thus, the Commission of the European
Communities is currently setting up the processes to implement the strategic goal to
make Europe the most competitive and dynamic knowledge-based economy in the
world by 2010, based upon the creation of a strongly participatory and democratic
European Research Area (EC, 2000, 2001b). It is believed that such a sound institution
is needed in order to improve the interactions between policy-making, expertise and
public debate and, thus, to produce sustainable relationships between science and
society (EC, 2001a).
The attempted policy reorientation, which in particular strives to deal with the new
situation in science-technology relationships, is commonly epitomized as the turn
from traditional governing or political steering towards a new process of governing,
usually described as ‘governance.’ Although there exist multiple definitions of this
concept (cf., Kooiman, 2002; Rhodes, 1996; Boudourides, 2002), the general tendency
is to understand governance as an analytical framework for collective action resulting
from multiple interdependent state and public actors, private institutions, voluntary
organizations etc. with blurring boundaries and responsibilities (Stoker, 1998).
Although – at least in the context of Europe – the governance perspective is usually
considered to emanate from the so-called ‘democratic deficit’ and the concomitant
predominance of a strong ‘input-oriented’ legitimacy of policy-making over a weak
‘output-oriented’ democratic perspective (Scharpf, 1999), there are many further
theoretical investigations of the sources of the concept. According to Philippe
Schmitter (2002, p. 54), governance emerges as a result of a double failure: “state
failures and/or market failures.” State failures are created when something goes
wrong with “calling upon the government, backed by the hierarchical authority of
the state, to impose a solution.” Market failures arise from “relying upon firms to
allocate resources by market competition and, thereby, generate a voluntary, mutually
satisfactory outcome.” In the words of Luigi Pellizzoni (2001b, p. 11), state failures
manifest the “failure of pluralism,” because of the increasing marginalization of the
political institutions of parliamentary democracy from processes of competition
among civil society groups. On the other side, Pellizzoni argues, market failures are
just the “failures of neo-corporatism,” because of its inability to cope with complexity,
when “in a post-fordist, globalised economy, interests and concerns are too diversified
to be represented and managed by a handful of monopolistic actors” (Pellizzoni,
ibid.). In other words, what the proponents of the governance paradigm are arguing
is that “network governance” comes to fill up the empty space that hierarchies and
markets are increasingly producing in regulatory practices.
Complexity & Policy-Making
Therefore, as is typically acknowledged nowadays, there is a need for an
administrative re-organization because of the observed tensions between the
functional approach usually followed by governments and the increasing complexity
that modern societies are exhibiting (Lebessis & Paterson, 1997). For instance, in the
EU Science & Society Action Plan, the rationale for the new research and foresight
must be sought and thought “in view of the complexity of relations between science
and society” (EC, 2001b, p. 19). Thus, the key concept is that of ‘complexity,’ a
concept paradigmatically inscribing most of the claims in the ongoing discussions
about the new recombination of science with society. This is why in this section we
are going to be concerned with how complexity (or, better said, complexities) is put
in action in the context of policy studies.
However, to define complexity is beyond the scope of the present essay (not to say
that it still appears to be an open, unresolved, theoretical question). Just to give some
hints, we can say that complexity is a property of things or processes (or systems),
which distinguishes them from those which are simple (or ‘complicated,’ on the other
end) in terms of a number of various attributes, such as: relational structure, holistic
orientation, multiplicity of scales (either in space or in time), irreducibility (in its
parts or in other references), multiplicity of legitimate perspectives, contextualized
signification, representational incompletion, emergence, reflexivity etc. (Funtowicz &
Ravetz, 1994; Gallopin et al., 2001; Law & Mol, 2002).
Let us give an example from political philosophy drawing on the work of Walzer. In
his book Spheres of Justice (1983), Michael Walzer argues against the singularity of
John Rawls’ theory of justice. For this purpose, Walzer builds his version2 of justice
on a set of multiple orders that he calls ‘spheres of justice.’ Each of these domains
has its own way of distributing justice, wealth and power, but also any other social
good. Furthermore, Walzer calls ‘complex egalitarian’ a society that is sustaining
these spheres on the basis of certain autonomous distributive mechanisms, which
are imposing strong limitations to the conversion of one social good into another.
Apparently, Walzer’s scheme of a ‘complex equality’ is an elegant conceptualization
of a social complexity on defense of pluralism and equality, which is exhibiting
some of the defining characteristics of the concept like rich multiplicities and rigid
irreducibilities.
Thus, when complexity theories enter the arena of policy studies, it should not be overlooked
that policy studies themselves – but also society (and science, of course) – are by
their own constitution highly complex fields. As a result, one could wonder – slightly
paraphrasing Peter Stewart’s (2001, p. 323) “question of social complexity” – do
complexity theories give answers to all the hot political and social issues that policy-makers are trying to answer, or are some of them too complex and always elusive
to be tamed by these theories? In fact, P. Stewart (critically examining N. Luhmann
and other proponents of system theory, critical realism etc.) argues that this is the
case with society, in the study of which complexity theories have many weaknesses
and rather fail to rigorously model social processes in complexity terms. Thus, when a
policy-maker comes to examine the practical utility of complexity theories in policy
considerations, she might always bear in mind that not only the object of complexity
theories is a complex and stringent entity but (necessarily – from the reflexive point of
view) similarly complex and contested are the means and strategies (either cognitive-intellectual or pragmatic-political) typically used to approach this entity.
Nevertheless, the previous criticism does not intend to diminish the success that
modern complexity theories have in the context of systems-theoretical or cybernetic,
mathematical-computational, physical and biological sciences. There are many
examples from the latter sciences, in which remarkable advances by complexity theories
are observed, such as in the cases of chaos and nonlinear dynamical systems, fractals,
algorithmic complexity, self-organization, autopoiesis, organizational complexity etc.
As a matter of fact, it is the rich analytical insight gained from these fields that policy
theorists can use in their investigations in order to confront the real complexities of the
contemporary world (Boudourides, 2000). After all, would it be possible for policy
studies to tackle the complexities of the new reconfigurations of science and society
without drawing upon the ‘new’ language of the science(s) of complexity aiming to
harness social complexities? There is nothing wrong with such an appropriation for
a field in which reflexivity on the acceptance of made policies and efficiency in their
implementation are virtues that count most. Furthermore, up to the degree that the
language of complexity can also reflect and sustain the fundamental principles of the
Western democratic tradition (as participation-inclusivity, deliberation-responsivity,
pluralism-accountability etc.), then it is a pressing challenge for policy studies (in
particular) and political theory (in general) to cope with the increasing complexities
of modern societies (Zolo, 1992). For this purpose, in the next section, we are going
to discuss a certain trope of science policy, which attempts to appropriate3 fruitfully
and to ‘mobilize,’ for a good cause in the policy context, concepts and ideas shared
with complexity theories.
Post-Normal Science
This is a science policy trope, which seems to be taking seriously into account the lesson
of complexity. Its fundamental thesis is based on the idea of ‘post-normal science,’
a thesis, which has been proposed and developed by Silvio Funtowicz and Jerome
Ravetz (1991, 1992, 1993). To see how close to complexity this idea claims to be,
it suffices to recall a very popular slogan: “post-normal science as a bridge between
complex systems and environmental policy” (Funtowicz et al., 1996, p. 8).
By its name, it becomes clear that this trope intends to go beyond the Kuhnian regime
of ‘normal science,’ argued to have been ruling the past arrangements of a routine
puzzle-solving scientific practice, administered in a similarly simplistic, self-assured
and certain policy environment. According to the post-normal science thesis, current
problems (mainly originating from environmental debates) demand another style
of both science and policy: one which is typically characterized by uncertain facts,
values in dispute, high stakes and the need to take urgent decisions (Ravetz, 1999,
p. 649). Thus, what is seen to happen under post-normality is nicely described by
an inversion of the “previous distinction between ‘hard,’ objective scientific facts
and ‘soft,’ subjective value-judgments,” because now “we must make hard policy
decisions where our only scientific inputs are irremediably soft” (ibid.).
This is why, when the “textbook analogy fails” and the demand for quality in decision-making (both in terms of processes and outcomes) becomes increasingly crucial, the
proponents of post-normal sciences resort to two concepts – ‘systems uncertainty’ and
‘decision stakes’ – in order to seek the locus of the new science policy in relation to
more traditional problem-solving strategies (Funtowicz & Ravetz, 1991, 1992, 1993).
Uncertainty signifies the limitations or the unavailability of confident knowledge
about the ways systems function; stakes indicate the magnitude of consequences
when adopting one view over another, i.e. the value-sensitivities or the value-ladenness of policy decisions. When both uncertainty and stakes are small, traditional
research and expertise do the job, without having to pay any attention to value-laden
considerations. When either one (or both) of them is medium, routine techniques are
not enough, making necessary the appeal to ‘professional consultancy’ (as are the
services provided by surgeons or senior engineers). Post-normal science emerges
when either (or both) of these key issues is high. Then the accentuated intensity of
these issues unavoidably raises inferences, which necessarily have to be conditioned
by the values held by the involved stakeholders (in contrast with the ‘value-free’
character of traditional science).
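The three problem-solving strategies can be read as a simple decision rule over the two dimensions. A minimal sketch (the function name and the low/medium/high encoding are illustrative assumptions, not part of Funtowicz and Ravetz's formulation):

```python
# Sketch of the Funtowicz-Ravetz scheme: map qualitative levels of
# 'systems uncertainty' and 'decision stakes' to a problem-solving strategy.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def problem_solving_strategy(uncertainty: str, stakes: str) -> str:
    """Return the strategy associated with the higher of the two levels."""
    worst = max(LEVELS[uncertainty], LEVELS[stakes])
    if worst == 0:
        # both small: traditional research and expertise suffice
        return "applied science"
    if worst == 1:
        # either (or both) medium: routine techniques are not enough
        return "professional consultancy"
    # either (or both) high: an extended peer community is mobilized
    return "post-normal science"
```

Under this reading, post-normal science emerges as soon as either dimension is high, exactly as the text describes.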
One of the main implications derived from this thesis refers to the claim of the particular
cognitive demands set up by post-normal science. These are special needs in relation
to “direction, quality assurance, and also the means for a consensual solution of policy
problems in spite of their inherent uncertainties” (Funtowicz & Ravetz, 1991, p. 145).
What these needs necessitate – so the argument goes – is the mobilization of an
‘extended peer community’ (p. 149) in order to extend the quality assurance character
of conventional (normal) science policy mechanisms. The post-normal extended
peer community consists of all stakeholders, potentially affected by policy decisions
at issue, who are expected to offer their ‘extended facts,’ which are covering, for
instance, all sorts of local knowledge, anecdotes, community based research, personal
and communal value commitments etc. (Ravetz, 2001, p. 6). Therefore, broadening4
the framing of an issue – at the level of admission of uncertainties, value-loadings
and recognition of the right of all interested parties to speak out and be heard by
policy-makers – is adding a tone of ‘openness’ and (democratic) ‘participation’ in
policy processes of decision-making, otherwise condemned to failure if they were
based solely on the purely technical dimensions of the issue (ibid.). However, the
implicated further extension of democracy by citizen mobilization is not the main
concern here – it is merely believed that the involvement of a larger constituency of
peers, covering different kinds of knowledge, will contribute to the accomplishment
of high-quality policy outcomes. Nevertheless, as Ravetz remarks, such opening
in public participation does not necessarily result in a restoration of trust in official
scientific expertise, because through these processes the public might become “even
more suspicious of government assurances of safety” (pp. 6-7). In any case, these
are the stakes of the “‘post-normal’ world of science policy, in which scientific
demonstrations are complemented by stakeholder dialogues … all sides come to the
table with full awareness that their special commitments and perspectives are only a
part of the story, and with a readiness to learn from each other” (p. 15).
The post-normal science approach has already proved attractive enough as a policy
framework to deal with global environmental issues (including global warming) – for
instance, as adopted by O’Riordan & Rayner (1991) – and it has provided concrete
tools for the management of uncertainties – like the NUSAP (Numerical, Unit,
Spread, Assessment, Pedigree) scheme (Funtowicz & Ravetz, 1990). However, this
approach has also attracted a number of severe criticisms (Jasanoff & Wynne, 1998;
Yearley, 2000). These criticisms mainly focus on the following points (for the first two
points see Wynne [1992] and the next section):
• Negligence of qualitative differences among forms of not-knowing or incertitude.
• Questionable ‘objective’ measurability and comparability of the two principal dimensions of uncertainty and stakes.
• Disregard of the interdependencies between uncertainty and stakes, two dimensions which neither in principle nor in practice can be considered to constitute truly discrete entities: contests over the stakes are always inseparable from claims about the uncertainties (Yearley, 2000, p. 110).
• Insufficient justification of why and how the claim on the advisability of extended peer review follows from the theoretical premises of the post-normal condition.
Steve Yearley (2000, pp. 109-110) has assessed that the argument of the post-normal science thesis looks less convincing in cases near the axes of the two
principal dimensions (uncertainty and stakes), such as, for example, the case of
cosmology (high uncertainty, low stakes) and the case of various disasters (high
stakes, low uncertainty). Yearley has argued that the problem with post-normal
science in such cases is an incomplete explanation of the rationale and scope
of public involvement in knowledge production and certification, sometimes
together with a false identification of the required qualification for participation,
i.e., the elements and constituencies of the extended peer review involvement.
Furthermore, given the frequent public mistrust of expertise, Yearley doubts
whether the publics might be eager to adopt the role only of peer-reviewers (p.
120).
Types of Uncertainty
According to Jerome Ravetz (1997, p. 6), uncertainty is a non-quantified form of
risk and it may unfold in many versions ranging from sources of statistical errors
(and notions of confidence intervals) up to various considerations of ignorance. In
particular, Ravetz considers that two forms of ignorance are very important. One is
what he calls “policy-critical ignorance,” referring to the case “when there are no
parameters, no theories, hardly any guesses, to supply a necessary foundation for
policy decisions. Such was the case in the early days of the BSE/neo-CJD crisis”
(ibid.). But there is another notion of ignorance that Ravetz calls ‘ignorance of
ignorance’ and he considers it to be the most dangerous of all. By this, Ravetz means
a social construction of false beliefs on the possession of the necessary knowledge
or simply unawareness that “there is something out there getting ready to impact on
us” (ibid.). Ravetz holds that, based on the Cartesian legacy of the “suppression of
the tradition of awareness of ignorance” (Ravetz, 2001, p.13), our previous scientific
methods were “designed around the achievement of positive knowledge and the
fostering of ignorance” (p. 7), i.e., they were built upon the notion of ‘ignorance of
ignorance’ (p. 13). Therefore, to come to grips with our ignorance is not a trivial task
and Ravetz believes that “this will be a major task of philosophical reconstruction,”
which will need to take us back to the thought of Socrates, through a “wholesale
reform of philosophy, pedagogy and practice” (p. 13).
Brian Wynne (1992) has developed an alternative approach to conceptualize
uncertainty from a more diagnostic than prescriptive point of view (Yearley, 2000,
p. 111). Instead of locating uncertainty on an “objective scale,” as Wynne considers
that Funtowicz and Ravetz have done, he is trying to compose a list of different forms
of not-knowing, “overlaid one on the other,” depending on the breadth of “social
commitments (‘decision stakes’) which are bet on the knowledge being correct”
(Wynne, 1992, p. 116). Thus, Wynne’s taxonomy (1992, 2001), drawn upon a number
of previous models of uncertainty – such as the work of Smithson (1989, 1991)
– is an attempt to make some key distinctions on the following seven categories of
uncertainty:
• Risk: system behavior is basically known and chances of different outcomes can be quantified probabilistically – “know the odds.”
• Uncertainty: important system parameters are known but not the probability distributions – “don’t know the odds.”
• Ignorance: knowledge about the system and likelihoods of its outcomes escape recognition – “don’t know what we don’t know” or “unknown unknowns.”
• Indeterminacy: issue, conditions and causal chains are all open-ended and “outcomes depend on how intermediate actors will behave” in non-determinate behavioral processes.
• Complexity: open behavioral systems and emergent, multiplex, ‘non-linear’ and irreducible processes.
• Disagreement: divergence over observation, framing and interpretation of issues.
• Ambiguity: contested or unclear meanings of the issues and, hence, of the process key-elements.
To depict the first four types of uncertainty, we are going to use a schematization of
Brian Wynne’s (1992) categories according to a slightly reworked version of a model
proposed by Andy Stirling (1998). In this scheme there are two salient dimensions: (i)
public knowledge about outcomes and (ii) technical knowledge about likelihoods of
outcomes. The first dimension varies between two forms of knowledge of outcomes,
uncontested (and well-defined) and contested (or/and poorly-defined, even not
defined). The second dimension is expressed by the level of existing or available
statistical estimations of outcomes, which may vary between situations of complete
lack of any causal inferences and determinations (non-determinacy) and cases when a
stochastic (probabilistic) analysis is feasible. Thus, Fig. 1 distributes these four types
of uncertainty as follows:
Fig. 1. Types of uncertainty (Wynne, 1992; Stirling, 1998).
Note that the above schema is nothing more than an indicative representation, in the
sense that the types of uncertainty might either fit somewhere in between the poles of
social/technical knowledge or even overlap and share possibly common characteristics.
This is how Wynne describes the situation: Although “conventional risk assessment
methods tend to treat all uncertainties as if they were due to incomplete definition of
an essentially determinate cause-effect system … many risk systems embody genuine
indeterminacies which are misrepresented by this approach.” Because “the scientific
knowledge which we construct … is also pervaded by tacit social judgement which
cover indeterminacies in that knowledge itself” (Wynne, 1992, p. 116).
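Flattening the caveats above into crisp quadrants, the two-dimensional scheme of Fig. 1 can be sketched as follows. This is a hedged illustration only: the figure itself is not reproduced here, the types may overlap, and the exact quadrant placement is an assumption based on the usual Stirling-style reading of Wynne's categories.

```python
# Sketch of the Wynne/Stirling scheme: classify a situation by
# (i) whether knowledge of outcomes is well-defined and uncontested, and
# (ii) whether likelihoods of outcomes can be estimated probabilistically.
def uncertainty_type(outcomes_well_defined: bool,
                     likelihoods_quantifiable: bool) -> str:
    if outcomes_well_defined and likelihoods_quantifiable:
        return "risk"            # "know the odds"
    if outcomes_well_defined and not likelihoods_quantifiable:
        return "uncertainty"     # "don't know the odds"
    if not outcomes_well_defined and likelihoods_quantifiable:
        return "indeterminacy"   # open-ended causal chains (assumed placement)
    return "ignorance"           # "unknown unknowns"
```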
Failings as the above of traditional risk management techniques have been elaborated
by Wynne (1996a, 1996b) in the context of his case study of how scientists and the
government have attempted to deal with the Chernobyl fallout on upland Cumbrian
sheep farms. The way Yearley has epitomized Wynne’s main arguments, Wynne
appears to claim two things in relation to how experts and the public are dealing
with all these types of uncertainty. On the one hand, the weakness of experts is their
tendency to treat all forms of not-knowing as quantifiable risk, while the public
is deciding whom to trust through its lay contextual insights. On the other hand,
in relation to matters of ignorance and indeterminacy, the public is proven to be
significantly more expert than the scientists (Yearley, 2000, p. 111).
In fact, Wynne’s diagnosis locates an insurmountable predicament on traditional
practices of expert-based policy-making: “Through their rationalist discourses,
modern expert institutions … tacitly and furtively impose prescriptive models of the
human and the social upon lay people, and these are implicitly found wanting in
human terms” (Wynne, 1996b, p. 57). As Alan Irwin and Brian Wynne have argued,
“science … offers a framework which is unavoidably social as well as technical since
in public domains scientific knowledge embodies implicit models or assumptions
about the social world” (Irwin & Wynne, 1996, pp. 2-3). Therefore, Wynne expects
that sustainable policy decisions, based on broader sociopolitical commitments,
may arrive when the public resists this ‘furtive imposition’ and raises questions
about the limits of applicability of scientific models or about its underlying social
assumptions. In other words, Wynne’s plea is that the broader uncertainty – ignorance
and indeterminacy – should be embraced and debated, in open-ended and case-by-case forms of deliberations, by all involved stakeholders, rather than being arbitrarily
banished or ignored or routinized/institutionalized, as the dominant tradition of expert
policy-making would recommend. Similarly, in an attempt to formulate a theoretical
approach to the operationalization of ‘post-normal’ extended peer communities,
Stephen Healy (1999) has rediscovered the crucial role that might be played by
uncertainty, ignorance and indeterminacy: “If we are to clarify how extended peer
communities may be institutionalised we must first understand how they might be
configured and extended facts enrolled. A particular opportunity to achieve both these
aims is granted by uncertainty … [by] extended multidimensional conceptions that
embrace both ignorance … and indeterminacy” (Healy, 1999, p. 656).
Uncertainty in Policy Processes
We are going now to reflect on another uncertainty-based model of policy-making
logic proposed by Claudio Radaelli (1999, pp. 47-49) and also discussed by Luigi
Pellizzoni (2001a, pp. 206-208). In this model, policy processes are understood in
terms of two dimensions: the salience of the issue at stake and its level of uncertainty
(Fig. 2). When both salience and uncertainty are low, the policy process follows a
bureaucratic logic, by which the involved actors are competing and bargaining for the
control of the issue. When salience is high but uncertainty is low, traditional political
conflicts in decision-making set in.
Fig. 2. Uncertainty and salience in policy processes
(Radaelli, 1999, p. 48; Pellizzoni, 2001a, p. 208).
When uncertainty is high, then experts turn out to play a prominent role, because
in these situations knowledge might be either a scarce and costly resource, which is
highly contested, or it might be even controversial (Pellizzoni, 2001a, p. 207). In the
former case, only the means to solve a problem are ill-defined, not the structure or
the goals of the problem: new knowledge is needed to fit into an existing theoretical
scheme so that a solution might be found to the problem and this knowledge is
uncontested in its description and its rationale. In other words, in this case, uncertainty
is reduced to lack of proper scientific knowledge, which should be (or is expected to
be) recovered in the future by further scientific research. Pellizzoni identifies this type
of uncertainty with the one typically addressed by rational choice theory. Apparently,
this is an issue, which can be entrusted to expert advisory bodies, and, thus, its
salience remains low. Consequently, when uncertainty is high but salience is low, then
the policy process typically follows a technocratic logic.
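The three policy logics named so far can be sketched as a lookup over Radaelli's two dimensions. An illustrative sketch only: the fourth combination (high salience and high uncertainty) is treated later in the text, so it is deliberately left open here rather than guessed.

```python
# Sketch of Radaelli's model (Fig. 2): policy logic as a function of the
# salience of an issue and its level of uncertainty.
POLICY_LOGICS = {
    ("low", "low"): "bureaucratic logic",
    ("high", "low"): "political conflict",
    ("low", "high"): "technocratic logic",
}

def policy_logic(salience: str, uncertainty: str) -> str:
    """Return the logic for a (salience, uncertainty) pair named so far."""
    return POLICY_LOGICS.get((salience, uncertainty),
                             "high salience / high uncertainty: see text")
```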
However, high uncertainty of an issue may occur because of other reasons too, such
as: (i) when existing knowledge is as yet unsettled (either cognitively or normatively)
and subject to alternative interpretations or different scientific methods, theories or
disciplines; (ii) when there exist persistent disagreement and controversies among
experts; (iii) in cases of indeterminacy or ignorance. Note that the first and the last
sources of uncertainty are connected, as Silvio Funtowicz and co-workers have so
clearly explained: “The framing of the problem also frames our future ignorance, for
what is excluded from enquiry also frames our future ignorance” (Funtowicz et al.,
2000, p. 333).
This is a different type of uncertainty from the previous one of rational choice theory, because now not only the means but also the structure and the ends (goals) of the issue are contested. Pellizzoni calls it 'radical' (or deep) uncertainty (ibid.). Note that radical uncertainty is a typical case of an 'intractable controversy' (Schoen & Rein, 1994), as
in a plethora of examples in environmental and techno-scientific issues (for instance,
GMOs, BSE, electromagnetic fields, climate change etc.). Intractable controversies
differ from simple routine political disagreements in that the latter can be resolved
by appealing to facts and using a standard techno-scientific (rational) argumentation.
On the contrary, in the former, the parties in a dispute tend to use or emphasize
different facts, interpret them differently and resort to conflicting theories. There is no
consensus either on the relevant knowledge or on the disputed principles: “facts and
values overlap” (Pellizzoni, 2001b, p. 6). Otherwise said, a controversy is intractable
when the usual strategies of conflict management cannot apply (Hisschemoeller &
Hoppe, 1996).
In the case of radical uncertainty, the structural disagreement on an issue automatically triggers a generalized conflict around it, which in turn raises the issue's salience. Then, in situations of high uncertainty and
high salience, Radaelli (1999, p. 48) and Pellizzoni (2001a, p. 208) argue that
two (heterogeneous) sets of involved stakeholders may take the lead: epistemic
communities and policy entrepreneurs. Epistemic communities, which play a key role in international environmental policy, are networks of "professionals with recognized expertise and competence in a particular domain and an authoritative
recognized expertise and competence in a particular domain and an authoritative
claim to policy-relevant knowledge within the domain or issue-area” (Haas, 1992, p.
3). They are defined by shared beliefs, common knowledge and policy experiences
and their influence comes from their institutionalization and from the support of
organized interests that they receive. Policy entrepreneurs are individual or collective
actors who have the power to influence a policy area (King & Roberts, 1987). They
are clustered around concrete policy discourses (Hajer, 1995), in which they reframe
and process policy issues through ‘narrations’ shared by different actors.
Turning now to the policy outcomes of processes involving different levels of uncertainty, Luigi Pellizzoni (2001a, p. 209) suggests that these outcomes can be mapped in terms of two dimensions: knowledge (which essentially amounts to uncertainty) and policy discourse. Knowledge can be assumed to be either uncontested
or contested. Policy discourse can be framed either (i) on a unitary or consensual basis
or (ii) on a pluralist or dissenting setting; typically, a technocratic framing would
favor the former discourse, while a traditional political framing the latter. Then the
following four kinds of policy outcomes can result (see Fig. 3).
Fig. 3. Uncertainty and discourse in policy processes (Pellizzoni, 2001a, p. 209).
When the discourse is unitary, then the outcome is either an established bureaucratic
policy, if the knowledge is uncontested, or an expertise-based arrangement, if the
knowledge is contested. When the discourse is pluralist but the knowledge is kept
uncontested, then the discourse framing of traditional politics is the expected policy
outcome. However, the situation is more complicated when the discourse is pluralist and the knowledge is contested; much then depends on how much attention is given to the issue outside specialist circles, i.e., on the degree of public coverage given to the debated controversy. Typically, in this case, the policy outcome is the inclusion of an extended number of involved actors (in accordance with the policy setting of post-normal science) and an orientation toward mutual understanding among these actors. Such an outcome is apparently more flexible than what might have resulted from traditional technocratic authoritarianism or from strategic bargaining and negotiation (as suggested by rational choice theory).
Questioning Expertise
From the discussion of the previous section on how uncertainty might structure various
configurations of policy processes, it is evident that expertise plays a prominent
role in many cases. But how is expertise defined and usually conceived? Following Claudio Radaelli (2002), it is more convincing to associate experts less with specific intentional or behavioral features of the actors and more with the structure in which they are enacted and operate. In this sense, expertise usually comes from
a multiplicity of structures or domains of epistemic, bureaucratic and technocratic
policy-making, such as (cf., Radaelli, 2002; EC, 2001a): (i) communities of scientists,
ad hoc or ‘independent’ experts, consultancies and scientific advisory committees;
(ii) ‘stakeholders’ or various ‘holders’ who possess some quality of resource that
entitles them to participate in a governance arrangement (Schmitter, 2002, pp. 62-63;
more in the next section); (iii) ‘in-house’ experts, ‘fonctionnaires’ of the European
Commission (EC) or other European agencies, including expertise developed through
the research policy of the EC, or working within national ministries, diplomats, etc.
As Luigi Pellizzoni (2001b, pp. 2 and 7) has argued, there are many instances in which the two fundamental commitments of the social contract between science and society – disinterestedness and objectivity – become problematic in the way expertise tries to implement them. In fact, when policy- or decision-makers ask experts to provide them with disinterested and objective factual considerations, the validity of two ideal principles, which should inspire the work of experts, is generally assumed. On the one hand, experts' assessments are expected to be value-free or, at least, based on value differences that can be measured univocally (e.g., in economic terms). On the other hand, experts' conclusions are expected to be unanimous or, at least, in a state of temporary disagreement that could later be reconciled by closer scrutiny or deeper deliberation among a wider circle of experts. In reality, however, as Pellizzoni argues, things turn out quite differently in many instances, when these ideal premises are falsified.
Examples are abundant in cases of:
(i) safety, threatened by hazards or accidents as, for instance, in circumstances of environmental risk (Irwin, 1995, 2001) or even risks provoked by business, in particular by multinational enterprises (Ravetz, 2001, p. 2);
(ii) unintended consequences produced by 'side effects' of such developments, as, for instance, in GMOs and GM food, xeno-transplantation, the human genome etc. (Ravetz, 2001, p. 3);
(iii) intractable disagreement or unsettled controversies among experts, who strive to preserve their authoritative power over science and technology by all means, as, for instance, in the case of the regulatory conflicts in the EU over GM crops (Levidow et al., 2000).
What results from all these cases is a growing conviction about the contextual and socially constructed character of knowledge production; this is the perception of what Pellizzoni (ibid.) has called the 'plasticity' of scientific knowledge, a term meant to highlight its subordination to political and economic interests as well as its social and cultural malleability.
But why do experts possess such a privileged position among institutions and the
public in the European conception and architecture of policy-making? According
to Andersen and Burns (1996), experts constitute one of the three main forms
of representation in the EU (together with interest groups and single countries).
However, the dominant position given to experts is explained by the fact that, from its creation, the EU has focused on regulatory (rather than distributive or re-distributive) policies, which aim at efficiency and consider knowledge to be their main asset.
As many have remarked (cf., Pellizzoni, 2001a), this orientation has some negative
consequences in EU policy-making, such as an increasing de-politicization, lack of
transparency and low public accountability.
Furthermore, expertise might be considered to suffer from a lack of public legitimacy and thus to be problematic in at least two aspects: representation and knowledge (Radaelli, 2002). According to Claudio Radaelli (pp. 198-199), when
expertise is represented in a process of public policy, what is in action is a different
notion of representation than, say, when a person or an organization is entitled to
represent a certain political, social or economic entity. In fact, what is claimed to be represented in the former case is just knowledge as a highly valued resource in the policy-making process. But the concept of knowledge is qualitatively different from that of a political, social or economic identity, which would be expected to participate or to be represented in the latter case. This is what Radaelli calls the 'anthropomorphic' view of knowledge, i.e., a tendency to define knowledge in relation to certain actors (e.g., experts or knowledge-holders or 'guardians' in Schmitter's terminology – see below). But then the crucial question to answer is: what makes knowledge a more critical resource in policy-making than, say, money or votes, and what gives advantages to knowledge 'guardians'? According to Radaelli, it is what he calls the 'cognitive structure' of the policy process that confers such prestige on experts.
As Radaelli argues, the ‘cognitive structure’ is a social and cultural construct, which
“provides interpretation, representations of political events, learning opportunities
and ‘lenses’ that give focus to interests and behavioral codes” (p. 202). This is why
Radaelli suggests that to democratize expertise should involve multiple strategies in
different dimensions, including civic education, the media and culture.5
Participation & Policy-Making
We have seen that the mandate to democratize expertise through an ever-expanding set of participatory processes is now a commonplace among policy- and decision-makers, owing to the dramatic nature of the problems raised in the context of the present-day knowledge-based society and economy. Thus, the prevailing issue now becomes that of participation, which urgently calls for resolution if viable and sustainable policies are to be found for all these problems.
To approach the issue of participation, we are going to revolve around five main
questions about participation that Philippe Schmitter (2002, pp. 58-67) and Luigi
Pellizzoni (2001b, pp. 3-6) have recently raised and discussed: (i) Why and for what
purpose are participatory arrangements appropriate (chartering)? (ii) Who is entitled
to participate and how is she or he selected (composition)? (iii) How is participation
implemented (procedures)? (iv) How are participatory processes connected with
decision-making (decision-rules)? (v) How are they connected with the problem
setting (agenda-setting)?
On the chartering question, Philippe Schmitter has based his analysis on Eleanor Ostrom's (1990) research on issues of 'governing the commons'
and the management of ‘self-organized, common-pool resource regimes.’ In this
sense, Schmitter (2002, pp. 59-60) highlights the following attributes (originally
suggested by Ostrom), which might assure a sustainable performance of participatory
arrangements: (1) Feasible improvement, aiming to enhance only resource settings
that are not at a point of deterioration (or underutilized). (2) Development of common
indicators to measure the flow of information and to establish operational benchmarks
of ‘best practice.’ (3) Guarantee of predictability so that resource units do not differ
spatially and are not subject to 'local' interpretation. (4) Flexible adaptation of the resource system to particular spatial contexts (national and regional).6
On the composition question, Schmitter (2002, p. 60) suggests the following three attributes that participants ('appropriators of the resource system' in Ostrom's terminology) should exhibit: (1) Salience, in the sense that they should possess a significant interest in the issue and be capable of convincing their 'followers-employees-clients' to comply with the decisions derived in the context of the participatory process. (2) Common understanding of how the resource system
operates and how participants’ interactions affect it. (3) Low discount rate in relation
to future benefits to be achieved from the resource (something which depends on the
development of sufficiently high levels of trust so that actors would tend to abstain
from short-term ‘opportunistic’ benefits).
Moreover, Schmitter (2002, pp. 62-63) identifies seven qualities or resources, which
define the entitlement to participate (and the identity of participants): (1) rights
(citizens); (2) spatial location (residents); (3) knowledge (experts or guardians); (4)
share (owners); (5) stake (beneficiaries-victims); (6) interests (spokespersons); and
(7) status (representatives).7
On the procedural question, Schmitter (2002, p. 67) proposes five very general guidelines that participatory arrangements should follow: (1) The precautionary principle, for cases where the available knowledge is incomplete or uncertain, which would demand calculations according to worst-case scenarios, aiming to avoid potential costs rather than to maximize potential benefits. (2) The forward-regarding principle,
which would demand calculations on the basis of the furthest future projection and thus necessitate that some participants develop a long-term perspective. (3) The
subsidiarity principle, which would restrict dealing only with issues that could not
be resolved at a lower level decision body. (4) The principle of (partial) transparency,
which would allow public dissemination of the proceedings of a process only once a
decision has or has not been made but never during the period the process takes place.
(5) The principle of proportional externalities, according to which no decision should be taken when its effects in social, economic and political terms are disproportionate either to the original expectations of the charter of the participatory arrangement or to
the standards of fairness in society.
On the decision-rules question, Schmitter (2002, pp. 65-66) discusses the following eight principles of fair decision-making: (1) The principle of 'putative' equality, which would exclude discrimination of any kind in the treatment of participants. (2)
The principle of horizontal interaction, which would regulate smoothly and flexibly
internal organizational arrangements in participatory processes. (3) The consensus
principle, which would necessitate that decisions are taken through deliberation by
consensus rather than by vote or imposition. (4) The ‘open door’ principle, which
would allow participants’ exit without the imposition of any sanctions against the
withdrawing members. (5) The proportionality or reciprocity principle, which would
roughly weigh attained outcomes with respect to the assets that the participants are
contributing. (6) The principle of shifting alliances, which would guarantee mobility
among participants’ positions during the process of consensus formation. (7) The
principle of ‘checks and balances,’ which would assure the consistency of decisions
taken among decision-making processes at different levels. (8) The reversibility
principle, according to which all decisions could potentially be annulled or reversed
by citizens either directly through referenda or indirectly by their representatives in
higher level decision-making bodies.
From a different perspective, Luigi Pellizzoni (2001b) approaches the entitlement issue on the basis of a distinction between two fundamental types of participatory schemes: opinion-oriented and position-oriented. An opinion-oriented participatory arrangement is composed of public participants concerned with "highlighting, confronting, and clarifying a constellation of general opinions and ideas, principles and values" (Pellizzoni, 2001b, p. 4). A position-oriented participatory arrangement, by contrast, is composed of stakeholder participants, who aim at "addressing in a cooperative and dialogical way a dispute among well-defined social positions having a direct stake in an issue" (ibid.).
Thus, drawing upon concrete case studies of European experiences (EUROpTA, 2000) of Participatory Technology Assessment (PTA), Pellizzoni notes the following differences with regard to the entitlement issue of participatory arrangements (pp. 4-5): In a public PTA, participants are defined through their normative competence, as this is expressed by their opinions, preferences, principles and values. No cognitive or intellectual competence is expected in a public PTA. In a stakeholder PTA, although the normative competence is expressed by participants' interests and stakes, these stakeholders are also expected to possess a cognitive competence. They participate not only in order to secure their legitimacy but also in order to contribute to processes of both problem-setting and problem-solving, using the privileges of their professional, social, economic or spatial (territorial) positioning. The latter privileges are what constitute the 'positional insight' of such participants (stakeholders).8
With respect to procedural issues of PTA, Pellizzoni (2001b, p. 5) again distinguishes differences according to the type of participatory arrangement. For opinion-oriented PTA, he argues that the dilemma is whether to form a general consensus or to follow the majority rule. For position-oriented PTA, on the other hand, he claims that the dilemma is whether to reach consensus through discussion or through strategic bargaining.
On the decision-rules issue, David Guston (1999) has proposed a general framework for evaluating the impact of participatory arrangements along the following three dimensions: (1) Categories of impact (actual impact, impact on general thinking, on the training of knowledgeable personnel, on interaction with lay knowledge). (2) Target of impact (policy, politics and people). (3) Type of impact (substantive, procedural, reflexive).
On the decision-rules question, Pellizzoni (2001b, p. 5) is again interested in highlighting differences in decision-making between the two types of PTA. In public PTA, participants are more concerned with producing indirect effects (for instance, by stimulating political changes for further investigation) than with the direct impact of decisions. In stakeholder PTA, participants are characterized by a reluctance, within the decision process, to abandon their usual fixed ideas, routines and habits, to adopt an open-minded attitude, and to criticize existing power relations in favor of a more balanced representation of positions.
With regard to the agenda-setting question, Pellizzoni (2001b, pp. 3, 5-6), using ideal-typical terms, distinguishes between top-down and bottom-up approaches according to whether the agenda is set up and controlled by the promoters or by the participants in the process. In public PTA, Pellizzoni claims that a truly bottom-up approach is rather problematic, since the average participant is not prepared cognitively, normatively or politically to affect and reorganize the agenda. Thus, in such situations the 'threat of manipulation' is present. In stakeholder PTA, Pellizzoni thinks, a top-down approach raises not so much the problem of manipulation as the possible expulsion of some participants from the process. Moreover, in such situations, Pellizzoni argues, the main problem is the 'threat of strategy,' depending on how stakeholders behave in terms of the usual interest negotiation.
In a further attempt to formalize a classification of participatory approaches to policy-making, Luigi Pellizzoni (2001a, pp. 215-218) has singled out two dimensions: the purpose of participation and agenda-setting. In the first dimension, the purpose of participation is distinguished according to whether it is deliberation- or decision-oriented. In the second dimension, agenda-setting (or issue-definition) is distinguished according to whether it is top-down or bottom-up. The following four policy approaches then result (Fig. 4): (1) Referenda, which may be taken as an example of a decision-oriented, top-down approach. (2) 'Citizen bills,' which constitute decision-oriented, bottom-up citizen initiatives. (3) Consensus conferences, typical examples of deliberation-oriented, bottom-up approaches. (4) Citizen advisory committees, small groups of participant-discussants (i.e., discussion-oriented) selected by promoters (thus obeying a top-down logic). Of course, the last two approaches are just two characteristic examples among many others (Rowe & Frewer, 2000).
Fig. 4. Approaches to participatory policy-making (Pellizzoni, 2001a, p. 216).
Participation & Deliberative Democracy
To be able to discern the characteristic attributes of concrete participatory forms
and arrangements, one needs to reflect further on the composition question. In
particular, from the selection criteria used to answer this question, one could derive
some analytical typologies of the inclusive or exclusionary character of participatory
arrangements. To do this, we will again follow Luigi Pellizzoni's (2001a, pp. 212-213) distinction between two dimensions of participation: (i) a normative and (ii) a cognitive dimension. Depending on which issues – related to concerns or to knowledge – participation is focused on, the resulting arrangements can be either inclusive or exclusionary. Taking these dimensions together, four models of
policy-making may be distinguished (Fig. 5): (1) The technocratic model, based on a double exclusion: experts define both the problem and the solution, allowing no political process of interaction, negotiation or deliberation with laypersons. (2) The model of traditional politics, which is inclusive with respect to values and concerns but exclusionary with respect to the cognitive dimension: experts frame the technical content of the issue unquestioned, but all other stakes may be debated, negotiated and shared among all the participants. (3) The neo-corporatist model, which allows debate and negotiation among a limited number of involved stakeholders representing concrete interest groups and organizations: only these participants can contribute their insights to the cognitive processes of problem formulation and solution, and the concerns of any other participants are excluded from the normative process of decision-making as well. (4) The model of deliberative democracy, which is inclusive in both dimensions: it acknowledges the relevant insight of every participant and incorporates the whole range of concerns expressed across society.
Fig. 5. Models of participatory policy-making (Pellizzoni, 2001a, p. 213).
Clearly, on the composition question, the tradition of deliberative democracy shares
the value of 'popular inclusion' with participatory liberalism. But there is a difference in the way this inclusiveness is understood. For participatory liberal theorists, inclusion is an end in itself (Barber, 1984). For deliberative democracy, "the ultimate
goal is a public sphere in which better ideas prevail over weaker ones because of the
strength of these ideas rather than the strength of their proponents” (Ferree et al., 2002,
p. 301). In particular, a well-functioning public sphere should include actors from the 'periphery,' i.e., the autonomous ('autochthonous') actors that Jürgen Habermas (1984) associated with the 'life-world' of citizens. These actors, through their dialogue with the center, might subscribe to an 'ideal speech situation' and thus contribute to the fostering of more deliberative speech.
However, deliberative theory challenges the ideals of democracy supported by
representative liberal theorists, which are based on preference-aggregative political
processes or on bargaining among conflicting interests or on the elitist restriction
of the discussion (March & Olsen, 1989). In the words of Luigi Pellizzoni, “the
deliberative perspective admits that political preferences conflict, that modern society
is pluralist and cannot be viewed as a community with shared goals and principles …
[and] that conflicts can be resolved by means of unconstrained discussion intended to
achieve the common good” (Pellizzoni, 2001b, p. 8).
Therefore, “deliberativeness is the core value of this perspective, and it involves
recognizing, incorporating, and rebutting the arguments of others – dialogue and
mutual respect – as well as justifying one’s own” (Ferree et al., 2002, p. 306). This
is grounded on the possibility of reaching consensus on the ‘best argument,’ thanks
to the fundamental unity of reason and the invariant structure of language, according
to Habermas (1996). Since “mutual understanding is the outcome of a process of
abstraction or generalization,” the proponents of deliberative democracy try “to find,
behind different positions, a common set of principles from which the solution to the
problem at stake can be derived” (Pellizzoni, 2001b, p. 9).
However, the main criticisms of the arguments of deliberative democracy concern the problem of inequality and the problem of incommensurability (ibid.). Inequalities and
differences in status, power and the possession of other resources manifest the practical
difficulties of the reconciliation between the deliberative democratic ideal and the
actual reality of political conflicts. On the other side, incommensurability manifests
the difficulties of the rapprochement between different cognitive and axiological
positions. Although reason might be a shared quality among humans, its exercise
does not always lead to agreement. When there are acute differences in worldviews
or problem definition or factual assertions or value assumptions, agreement on the
'best argument' cannot always be reached (Pellizzoni, 2001c). In these cases, what is missing is a conceptual framework providing a common measure by which to compare different opinions – an elementary and necessary condition for deliberation to start. Moreover, beneath the claim of the universalist-rationalist
dominant approach, Chantal Mouffe (1999, 2000) has argued, hegemonic pretensions
are often hidden.9 Similarly, Lynn Sanders argues that appeals to deliberation “have
often been fraught with connotations of rationality, reserve, cautiousness, quietude,
community, selflessness, and universalism, connotations which in fact probably
undermine deliberation’s democratic claims” (Sanders, 1997, p. 348).
Finally, incommensurability also raises the issue of the social situatedness of positions,
i.e., the issue of positional objectivity (Sen, 1993). Thus, trans-positional, trans-group
and trans-cultural assessments may provide a way out of incommensurability and
total relativism (Pellizzoni, 2001b, p. 9). Moreover, the possibility of trans-positional
assessments should be in principle open to everybody, who may perform critical,
local and contextual observations, taking into account different assumptions and
descriptions and trying to filter out the false or senseless ones. Hence, Pellizzoni argues, "the aim of deliberation becomes to confront contextual knowledge, different position insights, looking at similarities, isomorphisms, common features among differently framed descriptions of portions of reality, in order to find a shared 'local' solution to a problem" (ibid.).
Concluding Remarks
We started our investigation by pointing out the complexities of modern governance, which today – in the knowledge-based society – must necessarily deal with the new ways in which science, technology and society are related to each other. In this framework,
uncertainty becomes the central policy issue and we saw two approaches dealing
with it: post-normal science and Wynne's diagnostic model. However, uncertainty necessitates participation and deliberation, and so we discussed the ways participatory arrangements are enacted through modern policy discourses in order to enhance democratic governance.
So far, so good, in theory. But what's happening in practice? Is participation solving problems in the way it is implemented in modern governance and policy-making? Some would be reluctant to endorse the effectiveness of participatory policies in concrete contexts and would rather question the underlying structural and normative assumptions of such policies. To give some characteristic examples, let us discuss two recent criticisms of relevant EU initiatives, which focus (i) on how the latter pursue a relegitimization of decision-making in science and governance and (ii) on the conceptions of civic participation and political engagement on which these policies are based.
Examining the case of EU policies on agricultural biotechnology, Les Levidow and Claire Marris (2001) try to identify serious problems and misconceptions in the orientation of the EU policy frameworks for science and governance. Their argument is that, in the dominant models of science and technology, a discourse of technological determinism (stressing particular technological imperatives, closely related to competitiveness) is often invoked in order to "promote and conceal social-political agendas, while pre-empting debate on alternative futures" (Levidow & Marris, 2001, p. 345). They argue that, during the 1980s and early 1990s, the official policy discourses had adopted a 'deficit model' of how the publics understand science and technology, according to which lay people were assumed to base their opposition on fear and ignorance – hence the perceived need for 'better communication' to be disseminated from experts into society.
However, Levidow and Marris hold that subsequently the problem “shifted away
from public ignorance to mistrust” and, thereby, the observed crisis of confidence
increasingly necessitated ‘better debate’ or ‘stakeholder dialogue’ rather than ‘better
communication’ (p. 348). But, they argue, by shifting the issue from ‘how to educate
the ignorant people’ to ‘how to regain trust in science,’ essentially the new rhetoric
continued to mis-diagnose the problem and to propose inadequate solutions, still
drawing upon the same deficient public understanding of science. This explains why,
for Levidow and Marris, the proposed ‘new forms of dialogue’ were “sought mainly
as a means to restore the legitimacy of science and technology, not as a means to
reconsider innovation processes" (ibid.).
Furthermore, the official discourses and regulatory procedures, Levidow and Marris
claim, were often attributing public distrust to science and technology to ‘extrascientific’ concerns – for instance ‘ethical’ issues – “as if value-free scientific
knowledge was readily available, as if scientific evidence were separable from values,
and as if expert advice could thereby stand separate from ‘other concerns’” (p. 349).
At the same time, this systematic reinforcement of the notion of a value-free and
neutral science was accompanied by a relegation of the ‘extra-scientific’ concerns to
a subjective realm, where they would be evaluated differently by actors according to
their vested interests and values (ibid.).
Levidow and Marris give an example of such a concealment of value-frameworks
in regulatory policies in the way the official language and institutional practices
distinguish a value-free ‘risk assessment’ from a socially-, politically- or ethically-laden
‘risk management’ (p. 350). But, from our previous discussion of uncertainty
and ignorance, we have seen how ill-founded such a distinction is. Therefore, by
continuing “to entertain an artificial boundary between (supposedly objective)
‘science’ and (supposedly subjective) ‘other factors’, which are labeled as ‘societal’
or ‘ethical’, or ‘political’” (ibid.), Levidow and Marris argue, “rhetorics of openness
have tagged onto the dominant models, rather than superseding them” (p. 345).
Thus, Levidow and Marris conclude: “If the aim is to relegitimise decision-making,
government will need to ‘un-learn’ many institutional assumptions and to redefine the
problem at stake. Rather than seeking ways to change the public, it is necessary to
change the institutions responsible for promoting innovation and regulating risks. In
particular they need to change their preconceptions of science, technology and public
concerns. In such a process, public concerns offer a useful starting point and social
resource for organisational learning” (p. 357).
In a similar fashion, but this time targeting the official conceptions of participation
and politicization, Paul Magnette (2001) has addressed a critical appraisal of the
European Commission White Paper on Governance (EC, 2001a). In spite of ambitious
objectives, Magnette discerns a limited conception of participation throughout the
policies stimulated by this document. He claims that an elitist conception of citizenship
restricts participation to already organized groups, just favoring sectoral groups and
interested parties, who are in a position to use such procedures as petitions, lobbying,
appeals to the European Court of Justice etc. Thus, the white paper fails to encourage
ordinary citizens and other general members of civil society to become more active.
Given also the low levels of civic participation in western democracies, which are
lower at the supranational level, Magnette holds that the mere stimulation of citizens’
involvement in specific procedures does not suffice to raise the general level
of civic consciousness and participation. Furthermore, these policies, Magnette
argues, confine participation to non-binding preliminary procedures during the
consultative, pre-decision stage.
Magnette explains the weakness of civic participation in the European Union in terms
of two sets of factors: (i) the clarity of the institutional structures and (ii) the polarity of
the party system. With respect to the former, which in principle increases participation,
Magnette claims that the existing “large set of channels of representation (Council,
EP, CoR, ESC, Cosac) fragments deliberation” (p. 7). With respect to the polarity
of the party system, which “simplifies the electoral choice,” he believes that the
“Community method hides political conflicts” through the production of consensus-oriented
decision-making. In fact, Magnette holds that “the Community method,
based on a long process of informal negotiation and the elaboration of compromise
before political discussions take place, is a very powerful disincentive for political
deliberation” (ibid.). After investigating a variety of possible theoretical solutions to
the problem, such as institutional reforms and federal constitutionalism, Magnette
proclaims for a politicization of the EU, which could generate public interest and
broaden public participation. However, he realizes that this can only happen if and
when the Commission changes political attitudes and no longer “considers itself to be
a body designed to bypass political conflicts and forge compromise before political
deliberation takes place” (p. 12).
Thus we see that, though widely contested in its theoretical, normative and institutional
dimensions, the concept of participation has already expanded into the realm of
‘political symbolism,’ which plays such an important role in the processes of European
integration (Wallace, 2000, p. 56). But this is another territory of uncertainty, as the
present-day international conflicts and geopolitical stakes of the globalized world
make manifest.
Acknowledgement
This essay has been motivated by discussions held during the ongoing project
STAGE (Science, Technology and Governance in Europe) funded by the European
Commission under the Framework Programme V. I have benefited enormously
from what I have learned from my partners in this project: Peter Healey, Alan Irwin,
João Arriscado Nunes, Marisa Matias, Rob Hagendijk, Margareta Bertilsson, Mark
Elam, Hans Glimell, Marja Haeyrinen-Alestalo, Karoliina Snell, Egil Kallerud, Vera
Schwach and Dimitris Kalamaras.
Notes
1.
Michael Gibbons, Helga Nowotny, Peter Scott and their co-workers are talking
about the emergence of a mode-2 science and society (Gibbons et al., 1994),
taking place within the public arena, which, subsequently, the core of these
theorists have preferred to call agora (Nowotny et al., 2001). Bruno Latour
(1998, 1999) proclaims the reconfiguration of the world of science into the
world of research. Karin Knorr-Cetina (1999) by looking at epistemic cultures
epitomizes what she considers to be the basis of the present-day knowledge-based
society. Silvio Funtowicz and Jerry Ravetz (1992, 1993) preach for the advent
of post-normal science. Arie Rip and Barend van der Meulen (1996) even talk
about a post-modern research system and other “higher forms of nonsense” (Rip,
2000). Henry Etzkowitz and Loet Leydesdorff (1997) envisage the co-evolution
of science and society through the triple helix paradigm. This new science is
what has been already called mandated science (Salter, 1988) or post-academic
science (Ziman, 1996) or even academic capitalism (Slaughter & Leslie, 1997).
2.
Of course, there are other ways too of defining the multiplicities and complexities
of justice. In their book Les Économies de la Grandeur (1987), Luc Boltanski
and Laurent Thévenot prefer to distinguish among ‘styles’ rather than spheres
and, thus, to justify actions by the mobilization of multiple styles in every
specific situation. In this way, they are able to investigate empirically all kinds of
justifications, which are convincing for different people in different settings.
3.
It would be more surprising if sometimes we were witnessing the opposite
direction of appropriation of representations: ideas, concepts, theories or
methodologies first elaborated in the context of the social sciences and subsequently
migrating or diffusing into the natural or mathematical sciences. Of course, in the
present-day conjuncture (the ‘two cultures’ divide, the domination of the hard
sciences, etc.), this is rather unlikely to happen.
4.
In the words of Brian Wynne, “science’s greater public legitimation and uptake
… would involve, inter alia, recognition of new, socially extended peer groups
legitimated to offer criticism of scientific bodies of knowledge from beyond the
confines of the immediate exclusive specialist scientific peer group” (Wynne,
1996a, p. 39).
5.
Besides all these, the fact is that the democratization of expertise entails finding
solutions to some thorny issues and even committing to certain compromises or
trade-offs. From the point of view of policy-making, Radaelli (2002, p. 203)
describes the decisions which have to be taken in terms of a trilemma among
three poles: political legitimacy, policy effectiveness and scientific accuracy.
Radaelli argues that all the elements of this trilemma are problematic, although
they should not always be considered antithetic. Finally, as the working
group of the European Commission has suggested, democratizing expertise
necessitates some potential trade-offs, such as between legitimacy and efficiency,
simplification and participation (EC, 2001a, p. 7) and between democracy and
time (Radaelli, 2002, p. 204).
6.
Furthermore, Schmitter (2002, pp. 61-62) considers that the initial establishment of
participatory arrangements should be guided by the following six general norms:
(1) The principle of ‘mandated authority’, demanding the existence of a clear and
well-defined mandate as a necessary precondition for a participatory arrangement
to be established. (2) The ‘sunset’ principle, imposing an expiration date – known
in advance – for any participatory scheme. (3) The principle of ‘functional
separability’, defining the borders between tasks accomplished by different
participatory arrangements. (4) The principle of ‘supplementarity’, determining
that any participatory body should not compete with existing political institutions
but just supplement them. (5) The principle of ‘request variety’, according to
which it is up to the participatory arrangement to establish its internal procedures
and develop its distinctive format, appropriate for accomplishing its tasks and
goals. (6) The ‘high rim’ or ‘anti-spill-over’ principle, which requires that
changes in tasks are only permitted if the appropriate changes in mandates are
issued by the supervising institution.
7.
In addition, Schmitter (2002, pp. 63-64) outlines the following four principles that
might provide a basis for the justification of the selection of participants: (1) The
minimum threshold principle, according to which no participatory arrangement
should have more active participants than is necessary. (2) The stake-holding
principle, which would exclude participants without a significant stake in
the issue – except knowledge-holders (experts). (3) The principle of ‘European
privilege’ – of course, with respect to European governance arrangements – in
the sense that all participants should represent Europe-wide constituencies. (4)
The adversarial principle, favoring the selection of participants who represent
constituencies with diverse and opposing interests (including knowledge-holders
or experts supporting different or opposing theories and paradigms), so that
there is no preponderance of representatives holding a similar position or
forming an alliance for a common purpose.
8.
Concerning the selection question, Pellizzoni (2001b, pp. 4-5) again discerns
differences between opinion-oriented and position-oriented participatory
schemes. In public PTA, the fundamental dilemma is whether selected participants
should act as representatives of others with similar characteristics, interests and
values or as citizens, who might decide to transform their characteristics through
their interaction and deliberations with others. By contrast, the basic problem
of stakeholder PTA arises in circumstances when certain stakeholders’ interests
and positions have low visibility, or when the organization level of these stakeholders
is too poor to permit them to convince their constituencies to comply with the
decisions taken in the process.
9.
This is why, in her criticisms of deliberative democracy, Chantal Mouffe
envisages politics through a model of radical or ‘agonistic pluralist’ democracy,
which would replace the ‘consensualistic’ approach to public deliberation (1999,
pp. 755-756). For her, “pluralist politics should be envisaged as a ‘mixed-game,’
i.e., in part collaborative and in part conflictual and not as a wholly co-operative
game as most liberal pluralists would have it” (ibid.).
References
Andersen, S.S., & Burns, T. (1996). The European Union and the erosion of
parliamentary democracy. In S.S. Andersen & K.A. Eliassen (eds.), The European
Union: How Democratic Is It? pp. 227-251. London: Sage.
Barber, B. (1984). Strong Democracy: Participatory Politics for a New Age. Berkeley:
University of California Press.
Boltanski, L., & Thévenot, L. (1987). Les Économies de la Grandeur. Paris: Presses
Universitaires de France.
Boudourides, M.A. (2000). The real complexities of International Relations (in
Greek). Paper presented at the 13th Summer School – Pan-Hellenic Conference
“Nonlinear Dynamics: Complexity and Chaos,” Chania, Greece, July 17-28, 2000.
Available online at: http://www.math.upatras.gr/~mboudour/articles/pds.html
Boudourides, M.A. (2002). Governance in science and technology. Paper presented at
the EASST 2002 Conference “Responsibility Under Uncertainty,” York, UK, July
31 – August 3, 2002. Available online at: http://www.math.upatras.gr/~mboudour/articles/gst.pdf
Callon, M. (1994). Is science a public good? Science, Technology, & Human Values,
vol. 19, no. 4, pp. 395-424.
Elam, M., & Bertilsson, M. (2003). Consuming, engaging and confronting science:
The emerging dimensions of scientific citizenship. European Journal of Social
Theory (to appear). Available online at: http://www.spsg.org/scisoc/stage/
StageDiscussPaper.pdf
Elzinga, A. (2002). The new production of reductionism in models relating to research
policy. Paper presented at the Nobel Symposium of the Royal Swedish Academy
of Sciences, “Science and Industry in the 20th Century,” Stockholm, Sweden,
November 21-23, 2002. Available online at: http://www.center.kva.se/NS123/Paper%20PDF/ElzingaPaper.pdf
Etzkowitz, H., & Leydesdorff, L. (eds.) (1997). Universities and the Global
Knowledge Economy. Herndon, VA: Pinter.
European Commission (2000). Science, Society and the Citizen in Europe. Working
Document, SEC(2000) 1973. Available online at: http://europa.eu.int/comm/research/area/science-society-en.pdf
European Commission (2001a). White Paper on Governance: Report of the
Working Group “Democratising Expertise and Establishing Scientific Reference
Systems” (Pilot: R. Gerold; Rapporteur: A. Liberatore). Available online at:
http://europa.eu.int/comm/governance/areas/group2/report_en.pdf
European Commission (2001b). Science and Society Action Plan. Communication
from the Commission, COM(2001) 714 final. Available online at:
ftp://ftp.cordis.lu/pub/rtd2002/docs/ss_ap_en.pdf
EUROPTA (2000). European Participatory Technology Assessment. Project Report.
EC TSER Programme. Copenhagen: Danish Board of Technology. Available
online: http://www.tekno.dk/pdf/projekter/europta_Report.pdf
Ferree, M.M., Gamson, W.A., Gerhards, J., & Rucht, D. (2002). Four models of the
public sphere in modern democracies. Theory and Society, vol. 31, pp. 289-324.
Funtowicz, S., Martinez-Alier, J., & Ravetz, J.R. (1996). Information tools for
environmental policy under conditions of complexity. European Environment
Agency. Available online at: http://reports.eea.eu.int/ISSUE09/en/envissue09.pdf
Funtowicz, S.O., & Ravetz, J.R. (1990). Uncertainty and Quality in Science Policy.
Dordrecht, the Netherlands: Kluwer.
Funtowicz, S.O., & Ravetz, J.R. (1991). A new scientific methodology for global
environmental issues. In R. Costanza (ed.), Ecological Economics, pp. 137-152.
New York: Columbia University Press.
Funtowicz, S., & Ravetz, J.R. (1992). Three types of risk assessment and the
emergence of post-normal science. In S. Krimsky & D. Golding (eds.), Social
Theories of Risk, pp. 251-273. Westport, CN: Praeger.
Funtowicz, S., & Ravetz, J.R. (1993). Science for a post-normal age. Futures, vol. 25,
no. 7, pp. 739-755.
Funtowicz, S., & Ravetz, J.R. (1994). Emergent complex systems. Futures, vol. 26,
no. 4, pp. 568-572.
Funtowicz, S., Shepherd, I., Wilkinson, D., & Ravetz, J. (2000). Science and
governance in the European Union: A contribution to the debate. Science and
Public Policy, vol. 27, no. 5, pp. 327-336.
Gallopin, G., O’Connor, M., Funtowicz, S., & Ravetz, J. (2001). Science for the 21st
century: From social contract to the scientific core. International Journal of Social
Science, vol. 168, pp. 219-229.
Gibbons, M. (1999). Science’s new contract with society. Nature, vol. 402, pp. C81-C84.
Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P., & Trow, M.
(1994). The New Production of Knowledge: The Dynamics of Science and
Research in Contemporary Societies. London: Sage Publications.
Guston, D. (1999). Evaluating the first U.S. consensus conference: The impact of
the citizens’ panel on telecommunications and the future of democracy. Science,
Technology, and Human Values, vol. 24, no. 4, pp. 451-482.
Haas, P.M. (1992). Introduction: Epistemic communities and international policy
coordination. International Organization, vol. 46, no. 1, pp. 1-35.
Habermas, J. (1984). The Theory of Communicative Action. Boston: Beacon Press.
Habermas, J. (1996). Between Facts and Norms. Cambridge: Polity.
Hajer, M.A. (1995). The Politics of Environmental Discourse. Ecological
Modernization and the Policy Process. Oxford: Clarendon Press.
Healy, S. (1999). Extended peer communities and the ascendance of post-normal
politics. Futures, vol. 31, pp. 655-669.
Hisschemoeller, M., & Hoppe, R. (1996). Coping with intractable controversies: The
case for problem structuring in policy design and analysis. Knowledge and Policy,
vol. 8, no. 4, pp. 40-60.
Irwin, A. (1995). Citizen Science. A Study of People, Expertise and Sustainable
Development. London & New York: Routledge.
Irwin, A. (2001). Sociology and the Environment. A Critical Introduction to Society,
Nature and Knowledge. Cambridge, UK: Polity.
Irwin, A., & Wynne, B. (1996). Introduction. In A. Irwin & B. Wynne (eds.),
Misunderstanding Science? The Public Reconstruction of Science and Technology,
pp. 1-17. Cambridge: Cambridge University Press.
Jasanoff, S., & Wynne, B. (1998). Science and decisionmaking. In S. Rayner & E.L.
Malone (eds.), Human Choice and Climate Change, Vol. 1, pp. 1-87. Columbus,
OH: Battelle Press.
King, P.J., & Roberts, N.C. (1987). Policy entrepreneurs: Catalysts for policy
innovation. The Journal of State Government, vol. 60, no. 4, pp. 172-179.
Knorr-Cetina, K. (1999). Epistemic Cultures. How the Sciences Make Knowledge.
Cambridge: Harvard University Press.
Kooiman, J. (2002). Governance. A socio-political perspective. In J.R. Grote & B.
Gbikpi (eds.), Participatory Governance. Political and Societal Implications
(pp. 71-96). Opladen: Leske + Budrich. Available online at:
http://www.ifs.tu-darmstadt.de/pg/heinelt/p_eu_2000-2002-kooiman.pdf
Latour, B. (1998). From the world of science to the world of research? Science, vol.
280, no. 5361, pp. 208-209.
Latour, B. (1999). Pandora’s Hope: Essays on the Reality of Science Studies.
Cambridge: Harvard University Press.
Law, J., & Mol, A. (eds.) (2002). Complexities. Social Studies of Knowledge Practices.
Durham & London: Duke University Press.
Lebessis, N., & Paterson, J. (1997). Evolutions in governance: What lessons for the
Commission? A first assessment. Forward Studies Unit, European Commission.
Available online at: http://europa.eu.int/comm/cdp/working-paper/evolution_in_governance.pdf
Levidow, L., Carr, S., & Wield, D. (2000). Genetically modified crops in the European
Union: Regulatory conflicts as precautionary opportunities. Journal of Risk
Research, vol. 3, no. 3, pp. 189-208.
Levidow, L., & Marris, C. (2001). Science and governance in Europe: Lessons from
the case of agricultural biotechnology. Science and Public Policy, vol. 28, no. 5,
pp. 345-360.
Lubchenco, J. (1997). Entering the century of the environment: A new social contract
for science. Science, vol. 279, pp. 491-497.
Magnette, P. (2001). European governance and civic participation: Can the European
Union be politicised? Paper presented at the Symposium “Mountain or Molehill?
A Critical Appraisal of the Commission White Paper on Governance,” European
University Institute, Florence. Available online at:
http://www.jeanmonnetprogram.org/papers/01/010901.html
March, J.G., & Olsen, J.P. (1989). Rediscovering Institutions. New York: The Free
Press.
Mouffe, C. (1999). Deliberative democracy or agonistic pluralism? Social Research,
vol. 66, no. 3, pp. 745-758.
Mouffe, C. (2000). The Democratic Paradox. London: Verso.
Nowotny, H., Scott, P., & Gibbons, M. (2001). Re-Thinking Science: Knowledge and
the Public in an Age of Uncertainty. Cambridge: Polity.
O’Riordan, T., & Rayner, S. (1991). Risk management for global environmental
change. Global Environmental Change, vol. 1, no. 2, pp. 91-108.
Ostrom, E. (1990). Governing the Commons. The Evolution of Institutions for
Collective Action. Cambridge: Cambridge University Press.
Pellizzoni, L. (2001a). Democracy and the governance of uncertainty. The case of
agricultural gene technologies. Journal of Hazardous Materials, vol. 86, pp. 205-222.
Pellizzoni, L. (2001b). Public opinion and positional insight: Uncertainty and
participatory democracy. Paper presented at the Conference of the Research
Committee on Environment & Society of the International Sociological
Association (ISA) “New Natures, New Cultures, New Technologies,” Fitzwilliam
College, University of Cambridge, UK, July 5-7, 2001. Available online at: http:
//www-cies.geog.cam.ac.uk/www-cies/isa/3Pellizzoni.html
Pellizzoni, L. (2001c). The myth of the best argument: Power, deliberation and reason.
British Journal of Sociology, vol. 52, no. 1, pp. 59-86.
Radaelli, C.M. (1999). Technocracy in the European Union. London & New York:
Longman.
Radaelli, C.M. (2002). Democratising expertise? In J.R. Grote & B. Gbikpi (eds.),
Participatory Governance. Political and Societal Implications (pp. 197-212).
Opladen: Leske + Budrich. Available online at: http://www.brad.ac.uk/acad/eurostudies/STAFF/Demexp.html
Ravetz, J.R. (1997). Integrated environmental assessment forum: Developing
guidelines for “good practice.” ULYSSES Working Paper (WP-97-1). Available
online at: http://www.zit.tu-darmstadt.de/ulysses/eWP97-1.pdf
Ravetz, J.R. (1999). What is post-normal science. Futures, vol. 31, pp. 647-653.
Ravetz, J.R. (2001). Safety in the globalising knowledge economy: An analysis by
paradoxes. Journal of Hazardous Materials, vol. 86, pp. 1-16.
Rhodes, R.A.W. (1996). The new governance: Governing without government.
Political Studies, vol. 44, pp. 652-667.
Rip, A., & van der Meulen, B.J.R. (1996). The post-modern research system. Science
and Public Policy, vol. 23, no. 5, pp. 343-352.
Rip, A. (2000). Higher forms of nonsense. European Review, vol. 8, no. 4, 467-486.
Rowe, G., & Frewer, L.J. (2000). Public participation methods: A framework for
evaluation. Science, Technology, & Human Values, vol. 25, no. 1, pp. 3-29.
Salter, L. (1988). Mandated Science: Science and Scientists in the Making of
Standards. Boston: Kluwer.
Sanders, L. (1997). Against deliberation. Political Theory, vol. 25, no. 3, pp. 347-376.
Scharpf, F.W. (1999). Governing in Europe: Effective and Democratic? Oxford:
Oxford University Press.
Schmitter, P.C. (2002). Participation in governance arrangements: Is there any reason
to expect it will achieve “sustainable and innovative policies in a multi-level
context”? In J.R. Grote & B. Gbikpi (eds.), Participatory Governance. Political
and Societal Implications (pp. 51-69). Opladen: Leske + Budrich. Available online
at: http://www.ifs.tu-darmstadt.de/pg/heinelt/p_eu_2000-2002-schmitter2.pdf
Schoen, D., & Rein, M. (eds.) (1994). Frame Reflection. Toward the Resolution of
Intractable Policy Controversies. New York: Basic Books.
Sen, A. (1993). Positional objectivity. Philosophy and Public Affairs, vol. 22, no. 2,
pp. 126-145.
Slaughter, S., & Leslie, L.L. (1997). Academic Capitalism: Politics, Policies, and the
Entrepreneurial University. Baltimore: The Johns Hopkins University Press.
Smithson, M. (1989). Ignorance and Uncertainty: Emerging Paradigms. New York:
Springer.
Smithson, M. (1991). ‘The changing nature of ignorance’ and ‘managing in an age
of ignorance.’ In J. Handmer, B. Dutton, B. Guerin & M. Smithson (eds.), New
Perspectives on Uncertainty and Risk. Canberra: CRES, ANU and Mt. Macedon.
Stewart, P. (2001). Complexity theories, social theory, and the question of social
complexity. Philosophy of the Social Sciences, vol. 31, no. 3, pp. 323-360.
Stirling, A. (1998). Risk at a turning point? Journal of Risk Research, vol. 1, no. 2,
pp. 97-110.
Stoker, G. (1998). Governance as theory: Five propositions. International Social
Sciences Journal, vol. 155, pp. 17-28.
Wallace, H. (2000). The policy process: A moving pendulum. In H. Wallace & W.
Wallace (eds.), Policy-Making in the European Union, pp. 39-64. Oxford: Oxford
University Press.
Walzer, M. (1983). Spheres of Justice. A Defense of Pluralism and Equality. Basic
Books.
Wynne, B. (1992). Uncertainty and environmental learning. Global Environmental
Change, vol. 2, pp. 111-127.
Wynne, B. (1996a). Misunderstood misunderstandings: Social identities and public
uptake of science. In A. Irwin & B. Wynne (eds.), Misunderstanding Science?
The Public Reconstruction of Science and Technology, pp. 19-46. Cambridge:
Cambridge University Press.
Wynne, B. (1996b). May the sheep safely graze? A reflexive view of the expert-lay
knowledge divide. In S. Lash, B. Szerszynski & B. Wynne (eds.), Risk,
Environment & Modernity. Towards a New Ecology, pp. 44-83. London: Sage.
Wynne, B. (2001). Managing scientific uncertainty in public policy. Paper presented at
the Conference “Biotechnology and Global Governance: Crisis and Opportunity,”
Cambridge, MA, April 26-28, 2001. Available online at:
http://www.wcfia.harvard.edu/biotech/wynnepaper1.doc
Yearley, S. (2000). Making systematic sense of public discontents with expert
knowledge: Two analytical approaches and a case study. Public Understanding of
Science, vol. 9, pp. 105-122.
Ziman, J. (1996). Postacademic science: Constructing knowledge with networks and
norms. Science Studies, vol. 9, no. 1, pp. 67-80.
Zolo, D. (1992). Democracy and Complexity. A Realist Approach. University Park,
PA: The Pennsylvania State University Press.
Participation in Risk Management Decisions:
Theoretical, Practical, and Strategic Difficulties
in the Evaluation of Public Participation
Initiatives
Gene Rowe
Institute of Food Research
NORWICH
UK
Lynn Frewer
University of Wageningen
WAGENINGEN
NETHERLANDS
Abstract
A current trend in risk management, and policy setting more generally, is the
involvement of members of the public, or other significant stakeholders, in the
decision-making process. Such involvement has been argued to have the advantage of
increasing the democratic legitimacy of decisions, and allowing the incorporation of
lay insight into problems that have a degree of scientific uncertainty (and hence that
are based to some extent on value judgments). One significant issue is the quality or
validity of such processes, namely, the issue of evaluation. Evaluation is important,
not only from a quality control perspective, but because it may indicate potential
improvements for the conduct of further exercises, and importantly, may help to
assure participants (and the public more widely) that the exercise is more than just
a public relations exercise. However, evaluation of public involvement initiatives is
relatively rare, and little discussed in the academic literature. It is also beset with a
large number of potential problems and uncertainties.
In this paper, we will discuss a variety of problems with conducting
evaluations of participation initiatives. These problems range from the theoretical
(how one defines effectiveness, how one measures this, how one confirms the
validity, reliability and utility of one’s measures), to the practical (how one conducts
evaluations given limitations in time, space, resources, and possible sources of
data), to the strategic/political (how one deals with sponsor/organiser resistance to
evaluation). These problems will be discussed from a theoretical point of view, and
with reference to practical evaluations that we have conducted with a large variety of
governmental and non-governmental organisations, predominantly in the UK. The
paper will conclude with a number of recommendations regarding best practice in
conducting evaluations.
1. The Trend for Increased Participation in
Institutional Decision Making
One trend that is readily apparent in contemporary democratic societies is the growth
in participatory decision making, in which the public (and/or other stakeholders) are
consulted and involved in some manner in the policy setting and decision-making of
institutional (e.g. governmental) bodies. This ‘model’ of decision-making contrasts
with the predominant model in democracies in which the involvement of the public
is limited to voting at election time, with subsequent decisions left to elected
representatives in Government, often with recourse to advice from unelected experts
(either individual advisors or members of expert committees) [1].
Public or stakeholder ‘participation’ is achieved through various means or
mechanisms. In some cases, involvement is enacted through changed institutional
forms - such as co-opting public members or stakeholders into advisory committees.
In many cases, however, involvement is achieved through one-off events rather than
continuous processes. The use of referenda and consultation documents on which
interested parties can comment are fairly traditional means of involving the public;
more contemporary means include the use of activities such as ‘citizen’s juries’,
‘consensus conferences’ and other innovative forms of conference or meeting. Indeed,
the number of mechanisms seems to grow continually - Rowe and Frewer [2] give
over one hundred named mechanisms in a list that is certainly not exhaustive.
There are a variety of practical and ethical reasons for policy-making bodies
to involve lay people in decision-making. Political theorists and ethicists discuss
concepts such as democracy, procedural justice, and human rights, in providing the
moral basis for involvement. Beyond this issue of ‘fairness’, it is also important to
recognise that in many policy setting or risk management situations, a high degree of
scientific uncertainty exists, such that decisions are based to a significant extent upon
the values of the involved experts, which may have no greater validity than, and which
may overlook important insights held by, the relevant public. In any case, making
decisions without public support is liable to lead to a number of practical difficulties,
such as confrontation, disruption, boycott, and public distrust. Indeed, a decline in
trust in policy makers has been widely noted, and is regarded as having compromised
the perceived legitimacy of governance in some areas of policy development [3]. The
rationale behind the commissioning and enacting of public participation exercises is
important, for this indicates aims that may be taken as targets or benchmarks against
which the quality of such exercises may be evaluated - the difficulties of which
process are the subject of this paper.
2. The Issue of Evaluation
The evaluation of participation exercises is important for all parties involved: the
sponsors of the exercise, the organisers that run it, the participants that take part,
and the uninvolved-yet-potentially affected public. Evaluation is important for
financial reasons (e.g. to ensure the proper use of public or institutional money),
practical reasons (e.g. to learn from past mistakes to allow exercises to be run better
in future), ethical/moral reasons (e.g. to establish fair representation and ensure that
those involved aren’t deceived as to the impact of their contribution), and research/
theoretical reasons (e.g. to increase our understanding of human behaviour) [4]. As
such, few would deny that evaluation should be done when possible. In the next section
we discuss various difficulties that exist when trying to conduct such evaluations.
3. Difficulties in Conducting Evaluations
Although evaluation is desirable, it is also difficult. In the bulk of this paper, we
discuss a number of these difficulties, which we categorise as ‘theoretical’, ‘practical’
and ‘strategic’, raising issues of significance to both researchers and practitioners.
The issues we identify have emerged from our academic considerations of the topic,
and from our own practical experiences of conducting evaluations of participation
exercises for a large number of UK institutions, of both national and local type.
3.1 Theoretical Difficulties
In simple terms, we consider evaluation to entail the measurement of the quality or
effectiveness of something - here, participation exercises. By ‘theoretical’ difficulties
we broadly mean academic difficulties concerned with defining this concept of
‘effectiveness’, and the subsequent difficulties related to the measurement of
effectiveness as defined (measurement being a foundation of academic research).
Defining ‘effectiveness’ is not a straightforward process. In the first instance,
there is a school of thought that doing so - given complexities that will shortly be
discussed - is impossible or unwise, at least prior to an exercise taking place. The inductive
research tradition seeks to first collect data and then to induce hypotheses/theories
(and indeed, definitions). Here, stating what is meant by ‘effectiveness’ a priori
constrains the data one will collect and imposes a framework upon the data that
might not be appropriate. This research tradition, which typically favours qualitative
over quantitative research techniques, is particularly apt in new environments where
little is previously known, where ‘hard’ (e.g. quantitative) data is difficult to attain
[5], and where hypotheses are difficult to generate. We propose that this process,
with regard to participation exercises, should be termed assessment as opposed to
evaluation. Although in some respects the participation environment seems suited
to assessment, this process has difficulties exaggerated by the often political nature
of participation exercises, perhaps the greatest of which is the different values and
perspectives of those involved (from the sponsors and organisers to the various
participants themselves) each of whom may have different rationales for involvement.
Though differing perspectives make defining effectiveness a priori problematic, they
make ascertaining it after the event a hugely fraught exercise, in which any party
disagreeing with the assessment may (perhaps justifiably) question the conclusions.
An analogy would be a game of football in which invisible goalposts are only
revealed at the final whistle. If effectiveness is defined beforehand (the goalposts are
evident) then there is less cause for complaint.
Unless there is a clear definition of what it means for a participation exercise
to be ‘effective’, there will be no theoretical benchmark against which performance
may be compared. The difficulty lies in the fact that ‘effectiveness’ in this domain is
not an obvious, uni-dimensional and objective quality (such as ‘speed’ or ‘distance’)
that can be easily identified, described, and then measured. Indeed, there are clearly
many aspects to the concept of ‘participation exercise effectiveness’ (as there are aims
for running such an exercise) that are open to contention [6]. This leads to the question:
is it possible to state a single definition of effectiveness for all participation exercises,
or is each exercise unique, with specific aims and hence a need for a specific definition
of effectiveness in each case? In a review of evaluations in the academic literature,
we [6] found that answers to this question have varied. Many evaluations - at least
implicitly - have adopted a definition of effectiveness that is ‘universal’ and intended
to be relevant for participation exercises generally (or at least a specified subgroup
of these). Definitions have varied on a number of dimensions. One concerns whether
effectiveness is a subjective aspect (and if so, subjective according to whom) or an
objective aspect (that either incorporates no subjective opinions from the involved
parties, or perhaps combines all of these in some manner). A second concerns whether
effectiveness should relate to aspects of the quality of the exercise process (how the
exercise was conducted) or aspects of the outcome (such as whether the exercise has
a positive consequence), or indeed, both.
Though we found that ‘universal’ definitions of effectiveness have varied
[6], certain themes have tended to recur, such as the need for the participants to
be fairly representative of the affected stakeholders or public, and the need for the
exercise to have some genuine impact upon the policy or behaviour of the sponsors of
the exercise. Though a number of universalist ‘evaluation frameworks’ (definitions)
have been described [7] [8], there is little evidence as yet of any of these being taken
up more generally and used, assessed, and improved. We have also argued [6] that,
though any particular exercise may have very precise aims, these should be expressible
in terms of more general effectiveness criteria – for example, a specific aim
to ‘affect policy in a specific way’ may be cast as the general criterion ‘to have an
impact on policy’.
Following specification of a definition of effectiveness, there arises a need
to develop an instrument to measure this quality. If a theoretically justified definition
of ‘effectiveness of (all) participation exercises’ could be derived, then the evaluation
process could be eased through development and use of a single tested, validated set
of measures (if unique definitions were derived, then clearly a unique set of measures
would be needed for each case). Rosener [9] noted, however, that among the problems
inherent in conducting evaluations is that there are no agreed-upon evaluation
methods and few reliable measurement tools - and the situation appears to have
changed little since she wrote this. Certainly, a variety of social science approaches
exist that may be used to gather data for this purpose, from the use of questionnaires
(e.g. to ascertain the opinions of participants), to the use of observation (by evaluators,
to assess the quality of process), to the stipulation of objectively verifiable outputs that
may be logged or counted. Unfortunately, in our review [6] we found little evidence
of standardised or validated instruments (indeed, we identified only three cases
where this issue was addressed). By validated we mean that the instruments have
been assessed to determine whether they measure the concept they are intended to
measure, and also that they are reliable, in the sense that they give similar outcomes
when used on different occasions. In the absence of such measures, the meaning of
one’s evaluation will not only be in doubt; its generalisability to other situations and
exercises will also be uncertain. The practical difficulty of conducting evaluations,
using measures, and testing the qualities of one’s measures, is addressed in the next
section.
3.2 Practical Difficulties
In our past research, we have encountered a number of difficulties in developing
and using instruments during the conduct of evaluations. The main difficulty is
that the environment in this area is far from conducive to developing instruments
according to standard procedures and criteria (of reliability and validity). That is,
methods of assessing reliability (that the instrument will give similar results when
used on separate occasions to measure the same thing) rely upon the use of
statistical procedures (e.g. correlations) that need relatively large amounts of data
– e.g., large numbers of respondents completing large numbers of questions, over a
number of occasions. The typical participation exercise, however, uses relatively few
participants. We have also found that restrictions in the collection of adequate data may
arise from an unwillingness of sponsors to allow distribution of instruments on more
than one occasion - sometimes because of a desire not to over-burden participants, and
sometimes because they do not understand the need to reiterate the process.
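The statistical point above can be sketched concretely. The snippet below is an illustrative example, not from the paper: it estimates test-retest reliability as the Pearson correlation between two administrations of the same questionnaire, and shows (via Fisher's z-transformation) how wide the confidence interval around that correlation remains at the small sample sizes typical of participation exercises. The scores are hypothetical.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def r_confidence_interval(r, n, z=1.96):
    """Approximate 95% confidence interval for r via Fisher's z-transformation."""
    fz = 0.5 * math.log((1 + r) / (1 - r))
    se = 1 / math.sqrt(n - 3)
    lo, hi = fz - z * se, fz + z * se
    # transform back from z-space to correlation space
    back = lambda v: (math.exp(2 * v) - 1) / (math.exp(2 * v) + 1)
    return back(lo), back(hi)

# Hypothetical 1-5 questionnaire scores from two occasions, n = 10 participants
t1 = [4, 5, 3, 4, 2, 5, 3, 4, 4, 3]
t2 = [4, 4, 3, 5, 2, 5, 2, 4, 5, 3]
r = pearson_r(t1, t2)
lo, hi = r_confidence_interval(r, len(t1))
print(f"r = {r:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # interval stays wide at n = 10
```

With ten respondents the interval around r spans a large part of the scale, which is exactly why the small groups used in most participation exercises frustrate conventional reliability testing.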
Standard ways of testing validity (i.e. that an instrument actually measures
what it is supposed to measure) also tend to require large amounts of data. An
additional difficulty arises, however, which is the lack of alternative benchmarks
against which validity may be determined. That is, the validity of an instrument is
often judged by comparing the results obtained from using it, with results obtained
from other instruments that are accepted as having demonstrated validity, or by
comparing the results to what might be predicted from some current theory. In public
participation research, unfortunately, there are no such instruments or theories to
provide alternative benchmarks (yet). We have also experienced an understandable
lack of co-operation from sponsors/organisers of exercises that are perceived to have
failed, and this also provides a problem, in preventing the instruments from being
validated in important circumstances (i.e. of negative evaluation or lack of successful
implementation) such that their limits are uncertain. A simple example will illustrate
this point. Consider the development of a new type of scale that measures weight, and
is found to be highly accurate in measuring the weight of children. It might be that
when an adult stands on the scale the untested weight causes the scale to malfunction
(or even break!); one cannot be certain of the scale’s limits until one tests it in the
range of circumstances in which it might be employed.
Aside from difficulties in assessing the quality of one’s instruments, there
are practical difficulties in the actual application of such instruments (if we set
aside, or assume, the validity/reliability of these). One is the practical difficulty
of identifying an end-point to a participation exercise, as institutional and societal
responses to a particular exercise may be manifest months or even years after an
exercise has finished. This is particularly problematic for instruments that aim to
measure outcomes of exercises, as distinct from the quality of the processes within it.
Hence, outcome measures may be difficult to ascertain in a timely manner, and in any
case, the outcomes themselves may be due to some extent to other factors, such as the
occurrence of simultaneous events or externally mediated pressures influencing policy
processes [10]. Relatedly, sponsors of exercises that do involve evaluation generally
desire rapid appraisal so that this might be included in some report of the activities
- again, limiting the scope of the evaluation. Though the evaluation of outcomes is
perhaps preferable to that of processes, because outcomes correspond more directly to the
desired aims of the exercise, evaluation of exercise processes must often serve as
a surrogate for the outcomes of the exercise [6]. That is, if the exercise process is good
(conducted well according to one’s definition) then it would seem more likely that the
outcomes will be good than if the process is bad.
A further practical problem (which might also be considered ‘strategic’)
concerns the timing of an evaluation. Properly, the evaluation process should be
initiated at the outset of the exercise. For example, if an evaluator wished to establish
the ‘fairness’ of an exercise, they would need access to the decision making body
that sets the terms of the exercise and details who will be involved, how they will
be recruited, what will be expected of them, and how their involvement will be
operationalised. In many cases, however, evaluation is an afterthought, and the
exercise may be well underway before the issue is even raised (this is our experience
in one major participation exercise in the UK that is currently in progress). If accurate
documentation of initial decisions is available (e.g. minutes of steering committee
meetings), this problem may be partially overcome – but if not, substantial important
activities that might have bearing upon the ultimate effectiveness of the exercise
will have already passed out of the reach of the evaluator, to the detriment of the
evaluation [4].
Restricted access to information and to the various involved parties is a
further difficulty for those conducting evaluations. This problem ranges from
the desire of sponsors and organisers to keep various aspects of their activities hidden,
to a simple unwillingness of participants to complete questionnaires, leading to bias
in sampling. Efforts (both political and methodological) need to be made in order
to maximise the data available for analysis, and in any evaluation, data gaps need to be
properly noted [4].
3.3 Strategic Difficulties
The evaluation of public or stakeholder participation exercises is relatively rare.
While a number of the problems previously discussed ensure that the process is a
difficult one, and may deter researchers, practitioners, organisers and sponsors from
conducting evaluations, it must also be recognised that there are further strategic, or
perhaps, political, difficulties that also militate against the conduct of evaluations. At
the heart of the problem is this: no one likes to be evaluated. This is particularly the
case with sponsors whose motives for conducting exercises are perhaps not as genuine
as they would like the public to believe (such tokenism has been much condemned in
the past, e.g. [11]). It is also true for organisers who are contracted by sponsors to run
such exercises: our experience is that, in this domain, consultancies that undertake
participation exercises have in the past been given free rein in their activities, hence
the sudden implication that their work might now need to be checked for quality
can cause resentment. After all, from the perspective of such sponsors and organisers
there is relatively little to gain from an evaluation, and their recognition of the often
experimental and uncertain nature of the participation processes they are using may
cause them to expect criticism. This fearful culture militates against the conduct of
research, but also against learning by the institutions or organisations running the
exercises - a benefit to sponsors often overlooked [4].
Unfortunately, any framework used in an evaluation may seem sub-optimal
to such sponsors - and almost certainly will be (for the reasons discussed concerning
the lack of universally accepted and validated measures and evaluation criteria). For
the unconvinced sponsor (e.g. one who is compelled, perhaps by statute, to conduct
a participatory exercise) this realisation might even come as a boon. If the evaluation
process is somehow flawed, then need it be conducted, or if conducted, need the
results be heeded? Any answers produced may be open to criticism by those against
whom criticism is levelled. For example, if the participation process were found to be
flawed in some way, the organiser might challenge the evaluation process, and if the
process is deemed effective, then those who do not like the outcome might challenge
these conclusions based on evaluation difficulties. The sponsor or organiser might
even find a way to influence the manner of evaluation in order to pre-empt discovery
of some expected flaw. In several of our evaluations using measurement instruments
we have developed [12], sponsors have either attempted to have one or more of the
instruments changed because it did not relate to their ‘unique’ situation [13], or
placed restrictions in the way of the evaluation process. Ultimately, it is the sponsor
who wields most power, and the evaluator, in order to gain a commission or conduct
research, may need to concede and conduct their evaluation in a way they would not
wish. We [4] have argued that, though unfettered evaluator control may not be a good
thing for a sponsor, excessive sponsor interference risks biasing the evaluation, and,
if discovered by a competing stakeholder, might lead to charges that undermine the
whole exercise and lead to more rancour than if the exercise and evaluation had not
been conducted in the first place.
It is also important to note that, even though a sponsor might resist this, it
is difficult to stop an evaluation being conducted on an exercise (particularly after
the event) by academics or evaluators sponsored by participants or other interested
bodies. If conducted outside the control or influence of the sponsor, then there may be
bias in the evaluation towards the positions of the other parties, and bias may also arise
from the evaluators’ incomplete knowledge of the motives of sponsors and other
information related to the exercise. Although such evaluation is unlikely to cost the
sponsor financially, it may prove costly in other senses. As such, it is probably best for
the sponsor to provide for an evaluation to be conducted at the outset, and ensure that
the process will be fair from all perspectives, particularly its own [4].
4. Discussion: Some Solutions to the Difficulties of
Evaluation
The purpose of this paper has been to raise some of the main difficulties in evaluation
of participation exercises - not necessarily to solve them. Nevertheless, in this final
section we propose a number of solutions to some of the problems discussed here, in
the form of a checklist. In the conduct of an evaluation, we therefore propose:
• Plan the evaluation early, so that it is integral to the exercise and not a delayed
bolt-on or afterthought. This should ensure full data is collected from the outset.
• Produce a clear definition of evaluation criteria. What is the aim of the exercise?
Is this aim unique, or is it typical of all participation exercises, or perhaps a
sub-class of these? (That is, consider how generalisable your results might be.)
Are you assessing the achievement of particular outcomes, or are you focussing
upon the process of the exercise (e.g. how well it is run), or aspects of both
these things? A clear definition reduces scope for subsequent arguments from the
parties involved.
• If possible, ensure the parties involved understand on what basis the exercise is
being evaluated, and even get them to ‘sign up’ to the process [4]. This should
also reduce the potential for criticism and dispute after the evaluation.
• Develop (and if possible, test or trial) instruments to measure effectiveness
according to your criteria. Be aware of good social science methodology, e.g. on
questionnaire design, observational processes, and document analysis. This will
increase the chance that the instruments will be valid and the data meaningful.
• Be aware of the limitations of the evaluation, particularly the likelihood that the
instruments used may not be ‘valid’ or ‘reliable’ in a strict research sense. If the
evaluation is the first of several planned, it may be appropriate to become engaged
with the evaluator in developing, testing, and improving the instruments.
• Produce a clear evaluation schedule: what data will be collected, and when
will it be collected? Be aware of difficulties in relying upon rapid impacts of
the exercise (which often do not take place), and consider analysis of process
indicators instead.
• If possible, record or document meetings concerning the evaluation, so that it is
clear to what extent (if any) there is undue influence on the evaluators and what
is being evaluated. This may indicate sources of bias in the evaluation.
Finally, we would like to reiterate our belief that the evaluation of
participation exercises is a crucial process, particularly in the current environment
in which the popularity of participation is increasing, along with the number of
mechanisms that might be used. Rigorous evaluation is not easy, but through trial and
error, by following sound research methodology, and also by recognising the nature
of this difficult research environment, including the strategic and political imperatives
of those involved, we may hope to create more solid foundations for this discipline
and for the researchers and practitioners of the future. One important consequence of
this, we hope, will be improved participation, in terms of the appropriate choice of
mechanism to use in any particular situation, and its appropriate implementation.
The alternative - shunning evaluation in the face of its many difficulties - will only
leave this domain in a dark age, where mysticism overshadows science, and is liable
to result more generally in cynicism on the part of the public as to the merits and
utility of the participation process.
References
1. Jasanoff S. The Fifth Branch. Scientific Advisors as Policy Makers, Harvard
University Press, Harvard, 1993
2. Rowe G. Frewer L.J. A Typology of Public Engagement Mechanisms, submitted
3. Frewer L.J. Risk Perception, social trust, and public participation into strategic
decision-making - implications for emerging technologies, Ambio 28, 569-574,
1999
4. Frewer L.J. Rowe G. Commissioning and Conducting the Evaluation of
Participation Exercises: Strategic and Practical Issues, Report to the OECD,
2003
5. Joss S. Evaluating Consensus Conferences: Necessity or Luxury? Public
Participation in Science: The Role of Consensus Conferences in Europe, edited
by S. Joss and J. Durant, 89-108, The Science Museum, London, 1995
6. Rowe G. Frewer L.J. Evaluating Public Participation Exercises: A Research
Agenda, Science, Technology, & Human Values, in press
7. Rowe G. Frewer L.J. Public Participation Methods: A Framework for Evaluation,
Science, Technology, & Human Values 25:1, 3-29, 2000
8. Webler T. ‘Right’ Discourse in Citizen Participation: An Evaluative Yardstick,
Fairness and Competence in Citizen Participation: Evaluating Models for
Environmental Discourse, edited by O. Renn, T. Webler, and P. Wiedemann, 35-86,
Kluwer Academic Publishers, Dordrecht, Netherlands, 1995
9. Rosener J.B. User-Oriented Evaluation: A New Way to View Citizen Participation,
Journal of Applied Behavioral Science 17:4, 583-596, 1981
10. Chess C. Purcell K. Public Participation and the Environment: Do We Know
what Works?, Environmental Science and Technology 33:16, 2685-2692, 1999
11. Arnstein S.R. A ladder of citizen participation, Journal American Institute of
Planners 35, 215-224, 1969
12. Marsh R. Rowe G. Frewer L. An Evaluative Toolkit for assessing Public
Participation Exercises, Report to the Department of Health and Health and
Safety Executive, Institute of Food Research, Norwich, 2001
13. Rowe G. Marsh R. Frewer L.J. Evaluation of a Deliberative Conference, Science,
Technology, & Human Values, in press
Robust Decision Making in Radioactive Waste
Management is Process-Oriented
Thomas Flüeler
Chair of Environmental Sciences:
Natural and Social Science Interface (UNS)
Federal Institute of Technology ETH Zurich
Contact: Umweltrecherchen & -gutachten
Münzentalstr. 3
CH–5212 HAUSEN
SWITZERLAND
1 Introduction
Dealing with a complex sociotechnical system such as the disposition of radioactive
waste needs an integrated perspective [26:265]. Much of the widespread blockage
faced in this sensitive policy area for decades may be ascribed to the neglect of
looking at the various dimensions involved.
In the course of the 1990s the international nuclear community recognised that it is not
up to them alone to decide on such complex issues; as the Nuclear Energy Agency
put it in 1999: “Rather, an informed societal judgement is necessary” [21:23]. A year later,
a special NEA committee conceded that “radioactive waste management institutions
have become more and more aware that technical expertise and expert confidence
in the safety of geologic disposal of radioactive waste are insufficient, on their own,
to justify to a wider audience geologic disposal … the decisions, whether, when and
how to implement [it] will need a thorough public examination and involvement of
all relevant stakeholders” [23:3]. Accordingly, in line with efforts of the International
Atomic Energy Agency (IAEA) [12][13], the NEA established a so-called “Forum on
Stakeholder Confidence” whose format is currently being developed [23][16].
2 The Concept of Integral Robustness: …
In 1998, the IAEA stated that “[t]he key principle … is the concept of defence in
depth …. One of the important aspects is the evaluation of the robustness of repository
systems …. One of the main issues is obtaining a better understanding of the meaning
of principles such as defence in depth in the context of waste disposal” [11:244].
Regarding the long-lived hazard potential of radioactive waste and society’s requirement
for control, the concept of “integral robustness”–technical and societal–has been proposed
for radioactive waste management (RWM) [4][5][6][7][8]. This amplification of the
decision model is, in fact, an integration of societal aspects into the defence-in-depth
strategy familiar to the nuclear community (see Figure 1): The system calls
for technical barriers against releases of radioactivity, as well as societal checks to
achieve and sustain confidence in technical assessments and, hence, acceptance or at
least tolerability in society.
Figure 1: Societal robustness. Stakeholders are to act according to their respective
responsibilities. Dependent on their mutual (mis-)trust, their activities serve as institutional
barriers and potentially lead to a consistent, i.e., robust decision, backed up by incremental
building of confidence in the overall disposal system. Attention to special activities is given in
various phases. After Flüeler 2003 [8].
Such an approach attempts to minimise the negative side effects resulting
from a long-term disposal system. In general, a system is robust if it is not sensitive
to significant parameter changes; according to Rip 1987 it is “socially robust” if
most arguments, evidence, social alignments, interests, and cultural values lead to a
consistent option [28:359]. Therefore, the concerned and deciding stakeholders have
to eventually achieve consent on some common interests.
The need for involvement of all main stakeholders in the radwaste arena has lately,
as shown in Section 1, been realised by the international nuclear community.
In the course of this seeming paradigm shift–from a purely technocratic “Decide–
Announce–Defend” to a more participatory approach–process aspects have indeed
become fashionable.
3 … Inherent Emphasis on Processes
This contribution argues that “process-orientation” is not only an issue of involving
formerly excluded stakeholders but a productive approach to integrate different
aspects and perspectives and to expound the associated problems (see research
potential within the European 6th Framework Programme [2]).
3.1 Long-term Safety
As a matter of course, robust procedures as defined in a narrow sense can only be
achieved if the parameters are clearly defined and if it is assured that the system rests
within well-set boundaries [35:33]. Yet, the system characteristics of radioactive–and
chemically toxic–waste are unique and technically complex. Once defined, the waste
is to be stored in a safe way since it emits hazardous ionising radiation. Depending on
the hazard potential of the waste in question, its isolation period from the biosphere
ranges from hundreds to hundreds of thousands of years. The main mechanism in an
underground (geological) site is a low-level but long-term, chronic release into the
environment; it is a slow degradation of an open system with concurrent large
uncertainties. Such potential impacts are hard to detect with respect to location and
time (except for some scenarios of human intrusion). These system characteristics
lead to the admission that the required long-term safety “is not intended to imply a
rigorous proof of safety, in a mathematical sense, but rather a convincing set of
arguments that support a case for safety” [22:11]. Nevertheless, it is precisely the
robust control systems that are designed to manage the mentioned uncertainties (an
important aspect not dealt with here). “Confidence” in performance assessment is
only attained step by step (see also [22]).
3.2 Long-term Project
RWM is not only confronted with the “objective” long-term dimension but also with
institutional longevity. From site selection via characterisation, design and operation
to surveillance and closure, the various phases will last for decades. And so will the
corresponding project management, which–necessarily–entails entwined and tangled
processes. A (disposal) project laid out for long time periods has to be backed up over
decades by the technical community, the political decision makers and the general
public [22:11]. It is to be implemented in a phased approach, with feedback for
recourse, and interim decisions [19].
3.3 Sustainability
The multi-dimensionality of the issue calls for applying the principle of sustainability.
Normatively it seems to suggest itself as a reference system: On the one hand it
facilitates a stepwise analysis according to various dimensions (see Figure 2), on
the other hand it forces us to integrally examine these dimensions–and consequently
perspectives, needs and knowledge systems.
According to the Brundtland Commission 1987 “sustainable development is
development that meets the needs of the present without compromising the ability of
future generations to meet their own needs” [36]. Sustainability, therefore, is based
on two pillars: protection, i.e., safety, and intervention potential, i.e., the ability of
today’s and future generations to control; both objectives have to be weighed against
each other [5:789 passim]. At any rate, examining the dimensions of sustainability is
inherently system- but also process-based.
Figure 2: Sustainability of disposal systems. Eight dimensions–not “only” the three of the
“magic” triangle Environment–Society–Economy–have to be respected: An (ethical) trade-off
takes place via the facility design (technical dimension), along the ecological dimension
(protection of the environment), the social and political dimensions (society and balance of
power determine acceptance/tolerance) as well as the economic dimension (cost of disposition
and institutional control). This decision has eminent spatial and temporal dimensions (location/
situation as well as periods of isolation and concern, respectively). After Flüeler 2003 [8] in
developing [5:790].
In view of the objective longevity of the hazard potential, the primary goal of the entire
undertaking has to be the long-term safety of humans and the environment, whereas
the secondary goal is flexibility, defined as intervention potential (controllability,
retrievability) (for an extended discussion of the reasoning see [5][7][8]). Be that as
it may: The decision has to be taken by society.
3.4 Decisional Issue
A decision is more than the preference for an option–it involves the decision making
process with problem definition, judgement, choice and implementation [15:388 passim]
(so it is not “only process” that counts ...). “Good” decisions are always
goal-related decisions (“good” with respect to what?, see 3.3). “Good” decisions
imply good processes (which do not necessarily result in good decisions though). A
multitude of stakeholders and perspectives are involved. Therefore, particularly due
to the complexity of the issue, the process and procedure, not only the result, are vital
for the decisions to be taken. It is a stunning interaction to face.
The following are attributes of a “good” decision making process:
– Stepwise: Planning phases with milestones
– Periodic orientation, reviewing and interim decisions: For technical and political back-up
– Open and comprehensive option analysis
– Iterative, with opportunities for recourse (and mutual learning ...)
– Reliable, accountable: Unambiguous rules to be complied with (only modifiable by prior
consent)
– Consistent, minimising conflicts: Technical and non-technical sets of criteria
– Coherent, continuous: For sufficient trust in “the system” (see below)
– Traceable: Arguments and reasoning have to be fully comprehensible to interested parties
– Transparent: In broad discussion fora, aspects may be put up for discussion at early stages
– “Fair” procedure and treatment of the intra- and intergenerational equity issues (taking into account the twofold, spatial and temporal, asymmetry: the benefit of nuclear electricity is distributed whereas the cost/risk of waste disposal is locally concentrated and transferred to future generations)
To reach well-supported and stable decisions, “informed consent” is needed, which, in turn, requires a demonstration of (all, most) possible tracks and consequences of actions [9].
3.5 Risk Debate
The discourse on risk issues is factual and political bargaining about what is conceived as “risks” by the actors engaged: “Risks … are to be understood as consequences of decisions under conditions of uncertainty, social action and situations of interest”, as the sociologists Nowotny & Eisikovic 1990 describe the negotiation of interests [25, transl. supplied]. It is not assumed that the stakeholders’ belief systems can be changed, certainly not in their core principles, but modifications might be made in their secondary aspects [29:671], in so far as the actors identify a “common ground”, as Carter put it in 1987 [1:427]. If some common basis can be found, we may speak not only of negotiation but of deliberation.
3.6 Technology and Progress Debate
Scope and orientation of technology policy are publicly and politically contentious issues that are continuously debated [3:247 passim]. Assumptions about advances or re-directions of technology directly impact the choice of disposition (disposal, storage and transmutation) (see [20]).
3.7 Integration of Technical and Political Aspects, Learning Process
Credible and sustained compromise can only be achieved, if at all, if collective learning takes place, if authorities turn their backs on technocratic planning, stop separating technical from political issues and involve the concerned stakeholders in all relevant planning phases [32]. All stakeholders have to realise that, in the end, effectively
sustainable radioactive waste management only results from transdisciplinary “mutual learning”, learning from each other. “Transdisciplinarity aspires to make the change from research for society to research with society”, as Scholz 2000 puts it; “... mutual learning sessions ... should be regarded as a tool to establish an efficient transfer of knowledge both from science to society and from problem owners (i.e. from science, industry, politics etc.) to science” [30:13].
According to Gibbons et al. 1994, problem-driven, dynamic transdisciplinarity is one attribute of the so-called “mode 2” of knowledge production. On top of, or beside, “mode 1” of conventional disciplinary scientific knowledge acquisition, knowledge under this newer strategy is produced in the process-oriented context of
application; it is marked by new, often transient forms of organisation with members
of heterogeneous experience: “The experience gathered in this process creates a
competence which becomes highly valued and which is transferred to new contexts”
[10:6]. By integrating a number of interests, including so-called concerned groups, mode 2 knowledge gains more social accountability and makes all participants more
reflexive. Gibbons et al. even argue that “the individuals themselves cannot function
effectively without reflecting–trying to operate from the standpoint of–all the actors
involved” (ibid.:6).
3.8 Responsibility, Project Implementation and Mediation
Often authorities put off crucial decisions and break them down into a number of partial decisions; political opposition is forced to obstruct each step, which in turn leads authorities to see themselves in the role of defenders [33:216]. Mediation builds on the assumption that consensus is possible and, in the long run, of some use to all involved [34:232].
3.9 Trust in the System via Procedure
Procedures symbolise the continuity of similar experience and may help actors retain and gain trust in the political system [17:199]. In the context of laypeople and probabilistic analysis, Sowden refers to “process benefits” [31].
3.10 Performance via Networks
Organisational theory assumes that performance in companies is attributed to their
system characteristics rather than to interests or intentions of individual actors [27].
Cyert & March 1995 even hold that achievements are generated through intricate
networks within and between companies. Networks are characterised largely by
interactions and processes (see [18]).
3.11 Modern Management Principles
The current ISO 9000:2000 quality management standards are guided by dynamics,
stakeholder involvement (from worker to client), system thinking and continual
improvement [14]. The insight into the need to involve hitherto excluded stakeholders goes as far as to recognise “Concerned Action groups”, i.e., opponents, as “clients” in the NEA 2001 proposal of a “Quality Management Model” [24:15]. As to comprehensive quality control, quality is determined by a wider set of criteria than, e.g., peer review in Gibbons et al.’s mode 1 [10:6 passim], since additional
an approach enhances the openness of the process and the reliability and robustness
of the decisions.
4 Conclusions
Interdisciplinary convergences regarding processes in complex issues such as radioactive waste management (RWM) allow the conclusion that for the overall decision not only the results are relevant but also the procedure by which they are reached. In this way, RWM may, and indeed has to, develop a satisfactory degree of maturity. In view of the sustainability goal and the relations “protection vs. control” and process- vs. outcome-orientation, it is understood that the RWM system has to be dynamic, adaptive, and even experimental in its instruments, but not in its ultimate goal, i.e., the passive protection of present and future generations and environments.
References
[1] Carter, L. J.: Nuclear Imperatives and Public Trust: Dealing with Radioactive
Waste, Resources of the Future, Washington, DC, 1987
[2] European Council: Council Decision of 30 September 2002 Adopting a Specific
Programme (Euratom) for Research and Training on Nuclear Energy (2002–
2006). Off. Journal of the Europ. Comm., L294/74-85. 29.10.2002.
[3] Evers, A. & H. Nowotny: Über den Umgang mit Unsicherheit. Die Entdeckung
der Gestaltbarkeit von Gesellschaft [On How to Deal with Uncertainty. The
Discovery of the Designability of Society], Suhrkamp, Frankfurt a. M., 1987
[4] Flüeler, T.: Decision Anomalies and Institutional Learning in Radioactive Waste
Management, 8th Intern. Conf. on High-Level Rad. Waste Mgt., Las Vegas, May
11–14, Am. Nucl. Soc., La Grange Park, IL, 796-799, 1998
[5] Flüeler, T.: Options in Radioactive Waste Management Revisited: A Framework
for Robust Decision-Making, Risk Analysis, Vol. 21, No. 4, Aug. 2001, 787-799,
2001a
[6] Flüeler, T.: Robustness in Radioactive Waste Management. A Contribution
to Decision-Making in Complex Socio-technical Systems. In: E. Zio, M.
Demichela & N. Piccinini (Eds.): Safety & Reliability. Towards a Safer World,
Proc. European Conf. on Safety and Reliability, ESREL 2001, Torino, Sep 16
– 20, Vol. 1, Politecnico di Torino, Torino, Italy, 317-325, 2001b
[7] Flüeler, T.: Radioaktive Abfälle in der Schweiz. Muster der Entscheidungsfindung
in komplexen soziotechnischen Systemen [Radioactive Waste Management in
Switzerland. Patterns of Decision Making in Complex Socio-technical Systems],
Doctoral dissertation, ETHZ, dissertation.de, Berlin, 2002
[8] Flüeler, T.: Robust Long-term Radioactive Waste Management. Decision
Making in Complex Socio-technical Systems. Lessons Learnt from a Case Study
and International Comparison (tentative title), Series Environment & Policy,
Kluwer Acad. Publishers, Dordrecht NL, 2003
[9] Fischhoff, B.: Cognitive and Institutional Barriers to “Informed Consent”. In: M.
Gibson (Ed.): To Breathe Freely. Risk, Consent, and Air, Rowman & Allanheld
Publishers, Totowa, NJ, 169-232, 1985
[10] Gibbons, M., et al.: The New Production of Knowledge. The Dynamics of
Science and Research in Contemporary Societies, Sage, London, 1994
[11] IAEA, International Atomic Energy Agency: Proceedings of the International
Conference on Topical Issues on Nuclear, Radiation and Radioactive Waste
Safety, Vienna, 31 Aug – 4 Sep 1998, IAEA, Vienna, 233-255, 1999
[12] IAEA: Measures to Strengthen International Cooperation in Nuclear, Radiation
and Waste Safety, Intern. Conf. on the Safety of Radioactive Waste Management,
Córdoba, 31 May, IAEA, Vienna, 2000
[13] IAEA: Measures to Strengthen International Co-operation in Nuclear, Radiation,
Transport and Waste Safety, Waste Safety (Secr. responses to waste safety issues
of Member States), Attachm. 1, Rep. on the Safety of RWM, Board of Govs.,
Gen. Conf., GOV/2001/31-GC(45)/14, 19 July, 2001
[14] ISO, International Organization for Standardization: Quality Management
and Quality Assurance, Quality Management Systems – Fundamentals and
Vocabulary (ISO 9000:2000), – Requirements (ISO 9001:2000), Guidelines for
Performance (ISO 9004:2000), 2000.
[15] Kleindorfer, P. R., H. C. Kunreuther & P. J. H. Schoemaker: Decision Sciences.
An Integrative Perspective, Cambridge Univ. Press, Cambridge, UK, 1993, ³1998
[16] LeBars, Y., C. Pescatore & H. Riotte: The NEA/RWMC Forum on Stakeholder
Confidence: Overview of 1st Meeting and Workshop. In: NEA (Ed.): Better
Integration of Radiation Protection in Modern Society, Workshop Proc.,
Villigen, 23 – 25 Jan 2001, OECD, Paris, 63-67, 2002
[17] Luhmann, N.: Legitimation durch Verfahren [Legitimacy via Procedures],
Suhrkamp, Frankfurt a. M., [1969], 1975, 1983, ²1997
[18] March, J. G. (Ed.): Decisions and Organisations, Basil Blackwell, Oxford, 1991
(from R. M. Cyert & J. G. March: Eine verhaltenswissenschaftliche Theorie der
Unternehmung, Schäffer-Poeschel, Stuttgart, ²1995)
[19] National Research Council: Principles and Operational Strategies for Staged
Repository Systems, Progress Report, 20 March, Committee appointed by the
National Research Council, Washington, DC, 2002
[20] NEA, Nuclear Energy Agency: The Environmental and Ethical Basis of
Geological Disposal. A Collective Opinion, OECD, Paris, 1995
[21] NEA: Progress Towards Geologic Disposal of Radioactive Waste: Where Do We
Stand? An International Assessment, OECD, Paris, 1999a
[22] NEA: Confidence in the Long-term Safety of Deep Geological Repositories. Its
Development and Communication, OECD, Paris, 1999b
[23] NEA: Stakeholder Confidence and Radioactive Waste Disposal, Workshop
Proc., Paris, 28 – 31 Aug, OECD, Paris, 2000
[24] NEA: Improving Regulatory Effectiveness, NEA/CNRA/R(2001)3, OECD,
Paris, 2001
[25] Nowotny, H. & R. Eisikovic: Entstehung, Wahrnehmung und Umgang mit
Risiken [Generation, Perception and Management of Risks]. Forschungspol.
Früherkennung B/34. Schweiz. Wissenschaftsrat, Bern, 1990
[26] Pearce, D. W.: Social Cost-Benefit Analysis and Nuclear Futures. In: G. T.
Goodman & W. D. Rowe (Eds.): Energy Risk Management, Acad. Press,
London, 253-267, 1979
[27] Perrow, C.: Normal Accidents. Living with High-risk Technologies, Basic
Books, New York, 1984
[28] Rip, A.: Controversies as Informal Technology Assessment, Knowledge:
Creation, Diffusion, Utilization, No. 2, Vol. 8, 349-371, 1987
[29] Sabatier, P.: Knowledge, Policy-oriented Learning, and Policy Change,
Knowledge: Creation, Diffusion, Utilization, Vol. 8, No. 4, 649-692, 1987
[30] Scholz, R. W.: Mutual Learning as a Basic Principle of Transdisciplinarity. In:
R. W. Scholz et al. (Eds.): Transdisciplinarity: Joint Problem-solving among
Science, Technology and Society, Proc. Intern. Transdisciplinarity 2000 Conf.,
Workbook II: Mutual learning sessions, Vol. 2, Haffmanns Sachbuch, Zurich,
2000
[31] Sowden, L.: The Inadequacy of Bayesian Decision Theory, Philosophical
Studies, Vol. 45, 293-313, 1984
[32] Vatter, A.: Qualifiziertes Referendumsrecht für Betroffene. Zum Vorgehen bei
der Standortsuche für ein Atommüllager [Qualified Right of Referendum for
Concerned Parties. On the Procedure in Siting a Radwaste Repository], Neue
Zürcher Zeitung, 21.12., 15, 1995
[33] Wälti, S.: Neue Problemlösungsstrategien in der nuklearen Entsorgung [New
Problem-solving Strategies in Nuclear Disposal]. In: Vollzugsprobleme.
Schweiz. Jahrbuch für Politische Wissenschaft, Vol. 33, 205-224, 1993
[34] Weidner, H.: Der verhandelnde Staat. Minderung von Vollzugskonflikten
durch Mediationsverfahren [The State as a Negotiator. Attenuating Conflicts of
Enforcement by Mediation Techniques]. Ibid., 225-244, 1993
[35] Weinmann, A.: Uncertain Models and Robust Control, Springer, Wien, 1991
[36] WCED, World Commission on Environment and Development: Our Common
Future (Brundtland Report), Oxford Univ. Press, Oxford, 1987
The Meaning of Green:
Values and Energy Policy in Sweden
Måns Nilsson
Stockholm Environment Institute
Box 2142
103 14 Stockholm
Sweden
Email: [email protected]
1. Introduction
Values, and the way they are represented, have strong implications for environmental
policy. Mapping of how values, perceptions, scientific facts and professional
judgments interact and function in the process will contribute to understanding the
policy process. However, although a systematic, reasoned and critical examination
of values underpinning policy choices and expressions is an essential part of
policy analysis, values are rarely scrutinised (Dunn 1994). As a result, their role in
environmental policy making is not well understood. This paper addresses this gap by
asking: How are values represented in the policy arena and the processes within it?
What changes do values undergo over time and as they go through the policy-making
process?
Value concepts are discussed as a basis to understand and describe the values at
play in a process. The paper also draws on theories of the policy process to support the
analysis of the empirical material. The analysis is then delimited to a selected set of
value issues. The empirical material includes public debate documents such as bills,
hearings, motions, presentation, statements and their commentaries. Semi-structured
interviews, designed as in-depth interviews with open-ended probing, were held to
complement the material. Respondents were identified among the policy committees, career professionals and representatives of important interest groups.
2. Values as an object of analysis
Values tend to be an elusive object of analysis, representing both a generic concept
and a specific set of issues. Broad and somewhat floating definitions are abundant in
the environmental ethics literature (Rolston 1988; Carnegie Council on Ethics and
International Affairs 2000). Frequently, expressions of values need to be broken down
and analysed to reach the core. Values are also dynamic and constantly changing over
time and in social processes. When studying environmental policy, the boundary
between environmental values and other societal values cannot be sharply drawn
(Jasanoff 2000). Environmental policy is not solely or even primarily based on values
towards nature or the environment. More important in the policy arena are probably
values about how we organise ourselves, how we allocate resources and rights, and
how we make decisions. Values underlie different diagnoses of issues at stake and
different prescriptions on how to solve them. Values influencing or being of relevance
to environmental policy might include: distributive justice, inter-generational equity,
freedom, nature’s integrity, absolute precaution, competitiveness, economic welfare,
trust, safety, quality of life, internationalism, stewardship, truth, spiritual life, and
ecological co-dependence. It is difficult to draw boundaries between these values.
Ultimately, they affect and interact with each other.
A simple classification facilitates the analysis of values in policy. (Dunn
1994) presents a hierarchy of three levels that depict the values and the stages
of transformation they undergo when entering policy. At the fundamental level, individuals or groups hold certain value systems that underpin world-views and preferences. Value systems are based on general beliefs about the ways that society and nature work and interact, as well as ethical beliefs. On the basis of these value systems there are value premises: assumptions that, at least in principle, can be shown to support a certain value system. The value premises, in turn, determine
preferences about policy objectives, the more concrete expressions in the discourse.
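This three-level hierarchy can be pictured as a simple nested mapping. The sketch below is only illustrative: the labels are examples taken from the value systems analysed later in this paper, not Dunn's own notation.

```python
# Illustrative nesting of the three levels (value system -> value premises
# -> policy objectives). The labels are examples drawn from this paper's
# analysis, not an authoritative taxonomy.
value_hierarchy = {
    "ecologism": {  # value system (fundamental level)
        "premises": ["decentralisation", "small-is-beautiful"],
        "objectives": ["minimise disturbances on natural systems"],
    },
    "growth": {
        "premises": ["technology will fix problems", "ability to adjust"],
        "objectives": ["solve problems through growth and welfare creation"],
    },
}

# Each step down the hierarchy is a more concrete expression in the discourse.
for system, levels in value_hierarchy.items():
    print(system, "->", levels["premises"], "->", levels["objectives"])
```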
A limited set of value systems judged to be of particular relevance for environmental policy has been selected for analysis: state-market, ecologism-growth, and science-judgment. Each of these value systems has value premises attached to it. These value premises, which might also be called policy principles, can be connected to a wide range of policy objectives. There are several overlaps between the
different types of value systems and value premises. For instance, values concerning
economic growth and markets are often linked. It is not always conceptually clear
what constitutes systems and what constitutes premises. Some are perhaps closer to
decision-making principles than values.
Ecologism and growth values concern our relationship with nature: are we
acknowledging man as intertwined with nature or is man largely detached from
nature? Ecologism tends to see the environment as a restriction on economic activities
and growth, and policies should seek to minimise disturbances on the natural systems.
This value system is also closely linked to premises such as decentralisation, small-is-beautiful and eco-communalism. In highly simplified terms, nature is normally,
and without human interference, in balance. Human activities are seen to disrupt
this balance. Ecologism was popularised through the “limits to growth” debate in
the 1970s (Meadows and Club of Rome 1972) and survived in the policy arena in
new forms. Opposing ecologism is the growth value system, which borrows much of its perspective from economics. According to this value system, the exact physical properties of the relationship between man and nature are of no interest per se. Through growth and welfare creation, problems will be solved. One emphasises the ability to adjust and trusts that technology will fix problems along the way. Economic value is seen as
a natural instrument in political mediation, and then becomes the expression of the
environmental, social or economic objective (Vincent and Ali 1997).
State and market values concern how we organise ourselves to solve environmental
issues. Do we prefer political intervention and regulatory measures for protection
of the environment rather than market solutions or soft (economic) instruments?
Environmentalists are commonly associated with a state value system. However, the
relationship between the political views on state and markets and environmentalism is
rather complex. (Connelly and Smith 1999) show that environmental values and green
movements have relationships with many kinds of political views.
Science and judgment values concern the basis on which policy is formulated. One
can find ambiguous and at times contradictory attitudes. Governments are influenced,
but often indirectly and not always rationally, by scientific findings (Caldwell 1990).
(Kuhn 1970) demonstrated that science in itself certainly is open to political and
sociological analysis and the interdependence between science and policy is quite
strong. This is particularly true in environmental sciences, where uncertainties are
high and values are at the forefront of the scientific perspective. In many cases,
judgment takes over as the overriding premise for decision-making (Cothern 1996).
Related to science and judgment are value premises on formal and informal decision
making; and risk assessment versus a precautionary approach.
Concepts and frameworks for studying values in policy making
In this study, social relationships rather than individual preferences constitute the
basis for values and define the expression of values (Rayner and Malone 1998).
This is linked to new institutionalism perspectives where formal and informal rules,
routines and procedures bring order to and shape politics (Peters 2000; Goodin 1996). The normative school in new institutionalism emphasises values as the basis
of this construction process (March and Olsen 1989). In this perspective, institutions
shape politics through the construction and elaboration of meaning, values and belief
systems, which then become central analytical variables. How then do we find these
value systems? (Roe 1994) holds we should analyse the different stories, or policy
narratives, that policy makers use to articulate their beliefs and proposals. A similar
concept is policy discourses, or the ‘informal logic’ of institutions (Dryzek 1990;
Dryzek 1996) and policy frames; i.e. the perspectives that organise, interpret and make
sense of a complex situation (Fischer and Forrester 1993). There are stories as well as
counter-stories that underwrite the assumptions for policy making. A balance between
stories and counter-stories can sometimes paralyse political debate, in which case the
search for a meta-narrative takes place. Closely related to narratives, policy positions frequently have myths associated with them that define the ‘reality’ of a problem characterisation (Thompson et al. 1990). A myth can become a hegemonic myth as it
becomes dominant in the policy discourse. The dominance of certain ‘policy frames’
has also been explored in other policy settings (Stone 2002). Hegemonic myths are
those myths and associated rhetorical frameworks that dominate the policy discourse
(Thompson and Rayner 1998).
Political cultures, formed and driven by values, are a strong influence on policy
choices (O’Riordan et al. 1998; Rayner and Malone 1998; Jasanoff 2000). There is
usually a distinct dominating culture in a particular organisation or policy sector,
but it might differ from sector to sector and from country to country. Among the
criteria for a policy proposal to survive is the degree to which it resonates with the
dominant political culture. One might also talk about an institutional preference for
certain policy instruments (O’Riordan et al. 1998). The political culture does not only
affect the opportunities for certain policy proposals, but also the process itself. Is the
political culture consensus-based or impositional, anticipatory or reactive, top-down,
rationalistic and formal or bottom-up, actor-oriented and informal, or perhaps top-down and informal (Dale and English 1998)?
In order to understand the actors, their interactions and constraints, the policy
network approach assumes that policy actors can be aggregated into networks that
engage in coordinated activities. It also holds that these networks are fundamentally
glued through values and beliefs concerning issues such as the relative importance
of various issues, understanding of causalities involved, and policy preferences
(Sabatier 1988). (Marsh 1998) distinguishes between policy networks, closely tied
communities with shared values and problem perceptions and frequent interactions,
and issue networks, typically larger communities, with less communication and
frequently a lack of value consensus.
The policy streams model sheds light on the dynamics of the agenda setting
process (Kingdon 1995). This model is an adaptation of the Garbage Can model that
describes policy-making as a rather chaotic process: “a collection of choices looking
for problems, issues and feelings looking for decision situations in which they might
be aired, solutions looking for issues to which they might be the answer, and decision makers looking for work” (Cohen et al. 1972). The policy streams model focuses on
three parallel streams, which develop independently: problems, policies, and politics.
At critical junctures, policy windows, the three streams come together, and solutions
become joined to problems, and then these are joined with political forces.
3. Values and environment in energy policy
The energy policy sector in Sweden highlights how values have played out in and
shaped the policy process. Energy is intimately linked to many of the most severe
environmental problems, it displays plenty of goal conflicts and trade-offs, and has
been a central theme in Swedish political history, coloured by political lock-ins and
value conflicts. Many see the sector as a success story of environmental integration in
recent years, but the implementation has also been criticised for deviating significantly
from its original intentions.
In the 1960s, the energy issue started to grow in complexity. Higher prices for imported fuels, the newly established decision not to develop more of the undeveloped rivers for hydropower, and an emerging nuclear resistance led to the need for major political initiatives (Silveira 2001). The energy sector had not previously been a discrete field of intervention, but now it became a central part of Swedish politics.
Swedish dependency on oil imports had increased rapidly during the remarkable
economic growth period of the 1950s-1960s. In order to decrease dependence, the
expansion of nuclear power was given high political priority. The first collective
energy policy agreement was made in 1975 and focussed on decreasing dependency
on imported fuels.
In the 1950s and 1960s, hydropower was the major issue (Vedung and Brandel
2001). Since then, the nuclear issue has been at the centre of Swedish politics. Nuclear
resistance emerged on a large scale when the first commercial plant opened in 1971.
In fact, environmentalists had promoted nuclear power earlier on, as an alternative
to hydropower expansion (Anselm 2000; Kaijser 2001). But inspired by increasing
resistance and public sentiment, new movements began to pop up. In 1976, the
nuclear issue was powerful enough to have a decisive influence on the governmental
elections. The Centre Party, which had brought the issue onto the political agenda, could form the first non-Social Democrat government in over forty years. When the Three Mile Island
plant outside of Harrisburg nearly suffered a full meltdown, it became evident that
a referendum was the only way to resolve the issue. Because the Social Democrats
were divided over the issue, the voters got not two but three options to choose from.
Alternative 2 won the referendum, with the recommendation to decommission nuclear
power eventually, but only as far as is possible considering the demand for power,
employment and welfare (Statens Offentliga Utredningar 1995:139). Note that
all three alternatives argued for a phase-out of nuclear power, an example of the
importance of agenda setting. Nevertheless, the interpretation of the 1980 referendum
is still an issue of debate.
In the 1980s, oil and electricity prices fell. The energy and nuclear issues cooled
down and lost their rank on the political agenda. This led to inadequate political action
for energy efficiency and renewable energy technology development (Kaijser 2001).
But the 1980s also saw an explosion in environmental consciousness internationally
and in Sweden, fuelled by massive evidence of dying forests, chemical accidents such
as in Bhopal and in the Rhine river, the first major nuclear accident in Chernobyl, and
more locally, dying seals along the coasts of Scandinavia. This public awareness led
to immediate repercussions on the political arena in Sweden as well as in many other
countries. Massive attention provided the necessary leverage for the Greens to grow
strong and enter the Swedish Parliament in 1988.
Public concern, often said to set the boundaries for environmental policy,
is constantly changing. First, there has been a declining public interest for
environmental issues in general in the 1990s. Second, the resistance against nuclear
power is declining. A recent poll show that only 44% want to phase out nuclear power,
compared with 77% in 1986 (Michelsen 2001). Third, new issues arise on the public
agenda. As governmental plans for wind power expansions along the coasts have
received increasing attention, public voices on local and place-based values have been
raised. New NGOs have formed to protect landscapes from wind power exploitation.
In response, the latest energy bill proclaims that the consequences of wind power must
be more carefully examined before major developments can take place (Regeringens
proposition 2002).
State and market
Sweden is by tradition a government-intervention country, with the highest net tax pressure in the world, and internationally renowned for the ‘welfare state’. However, from the early 1990s we can see a value shift in the mainstream of politics, with a growing acceptance of the market, rather than politicians, as the key decision maker. The Social Democrat government has moved to the right both on economic
policy and foreign policy matters. International deregulation processes within the EU
have been a major driver behind this. This trend is reflected in the 1991 energy bill
that abolished the previous energy policy decision and its conflicting goals of nuclear
phase-out, non-expansion of hydropower, and reduction of greenhouse gases. On
nuclear power, it was decided to postpone the closure of the first nuclear reactor until
1999. The bill establishes that the phase-out is contingent on conservation measures,
competitive prices and a supply of alternative technologies (Nordhaus 1997).
The market value system also influences the choice of policy instruments. In most
bills nowadays we find economics textbook arguments about advantages of market
instruments. The principal standpoint that economic instruments are good for the environment and effective measures for the economy as a whole penetrates much of the policy discourse, but this was not always the case. Already the British economist (Pigou 1920) wrote about the benefits of using economic instruments, but traditions and values in Swedish society inhibited their use. “No one should be able to buy themselves the right to pollute” was the common political argument. Not until the late 1960s can we see any real acknowledgement in Sweden
for this type of thinking (Dahmén 1968).
The 1997 programme for the “greening” of the energy sector carried with it many
features of traditional large-scale programming. The Prime Minister’s personal advisor at the time stated: “Up to around 150,000 new jobs and better public finances in the next economic downturn. In the long run, we will have a more competitive industry and a sustainable and at the same time modern welfare state. This is the result if the ideas are realised.” (Eriksson 1996)
Conservatives and industrialists argue that the liberal values that penetrate the policy discourse are not reflected in the energy policy, and that the internal value inconsistencies in the bills, with increased political control and a lack of a long-term principle in energy policy, go against the market-oriented “spirit” of the bill. Governmental spending on subsidising alternative fuels and energy efficiency measures is seen as a waste of taxpayers’ money.
In the 2002 energy bill, the use of subsidies for renewable energy and efficiency is de-emphasised again, as a result of the rather negative evaluations of these schemes (Regeringens proposition). Instead, a new quota-based system of tradeable green certificates has been announced. In this scheme, renewable energy producers
are allowed to issue and sell certificates. The distributors are obliged to buy a certain
amount of these certificates that will be available on an exchange market. Prices
will depend on demand and supply, and so a new type of environmental market is
created.
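The mechanism described above can be sketched as a toy market-clearing calculation. This is purely illustrative: the figures, the `clearing_price` function and the cap-based pricing rule are my own assumptions, not features of the actual Swedish certificate scheme.

```python
# Toy sketch of the quota-based green certificate scheme described above.
# All numbers and the pricing rule are invented for illustration only;
# the real scheme involves penalty fees, banking rules and annual quotas.

def clearing_price(supply_certificates, obligation, max_price=50.0):
    """Price rises with scarcity: if the distributors' obligation exceeds
    the certificates issued by renewable producers, the price climbs toward
    a cap (standing in here for a penalty fee); with surplus supply it
    falls toward zero."""
    if supply_certificates <= 0:
        return max_price
    scarcity = obligation / supply_certificates
    return round(min(max_price, max_price * scarcity), 2)

renewable_output = 8_000        # MWh certified -> certificates issued
total_deliveries = 100_000      # MWh sold by distributors
quota = 0.10                    # share of deliveries to be covered
obligation = total_deliveries * quota   # certificates distributors must buy

print(clearing_price(renewable_output, obligation))   # scarce supply -> 50.0
print(clearing_price(20_000, obligation))             # ample supply -> 25.0
```

The point of the sketch is simply that the state sets the quota while the price emerges from demand and supply, which is what makes this an environmental market rather than a subsidy.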
Ecologism and growth
When the Social Democrats became more market-oriented and de-emphasised traditional
values, they turned to criteria such as “ecological” and “renewable”, thus to some
extent adopting the value system of the Greens. The Social Democratic government
launched the “ecological transition” as the new main challenge for the labour
movement in the mid 1990s. Prime Minister Göran Persson appointed two rather
radical environmentalists, Stefan Edman and Olof Eriksson, as his close personal
advisors. Hence, at the time, there was a strong value-driven policy position at the
highest possible level. This coincided with the processes of formulating the 1997
energy bill. The bill maintained the ‘traditional’ objectives, but put a larger emphasis
on ecologism. The agreement also included closing the nuclear reactors in Barsebäck,
with the first one before 1 July 1998, and the second one before 1 July 2001.
That ecologism is a powerful value system in Swedish policy is evidenced by
the “eco-cycle society” concept (“Kretsloppsamhället”). Based on the principle of
having closed material loops in society, it is arguably the most prominent concept
in Swedish environmental policy. In particular the Ministry for the Environment
and the Environmental Protection Agency assimilated much of the environmentalist
value systems in the 1970s and 1980s. Since then prominent environmentalists have
moved in and out of public office positions. Arguably, growth-orientation and market
values have had a weak basis among Swedish environmental policy networks. While
most repeat the growth and market myth, it is not difficult to detect strong
planning-economy perspectives on policy in general and a bias towards the use of absolute
levels and regulatory strategies in environmental policy in particular. The opposite
is true of the Ministry of Finance and the Ministry of Industry, which are much more
growth-oriented. The Ministry of Finance performed a macro-analysis of the Swedish
economy in terms of a Green Net National Product and concluded that Sweden
is on a sustainable development path. The Swedish EPA was, of course, critical of this
conclusion (Statens Offentliga Utredningar 2000:7 2000).
However, the strategic change declared by the Prime Minister and expressed in
the bill was largely neglected in implementation and Eriksson and Edman left the
Prime Minister’s office after a short while. In later years we can see that ecologism
has been sliding in prominence. The existence of goal conflicts has made it difficult
to agree on and implement policy measures (Nordhaus 1997). One important reason
was the lack of political support from the labour unions, which act as a restriction on
the ecological aspirations. Their influence on policy in Sweden through a close and
symbiotic relationship with the Social Democrats can hardly be overestimated. When
it comes to environmental policy in general, one usually finds the labour unions
backing a growth orientation to 'save the jobs'. As their power depends on centralised
negotiations, they tend to be more in favour of large-scale production solutions, such
as nuclear power.
Science and judgment
The scientific knowledge input to the policy process typically takes place in the
commissions that prepare the basis for the bills. There have been several major
commissions in the energy sector in the 1990s, the most important one being the
1995 commission that preceded the 1997 bill. These are appointed to prepare for the
policy proposals and they can be constituted by political representatives or by experts.
The government issues the mandate of the commission. Then the commission does
its investigations and the results are presented in a report. Most of the analytical
work and discussions in these types of commissions are based on serious analysis
and broad-based deliberations but the final recommendations are usually subject
to political negotiations. The report is circulated for comment among government
authorities, various relevant and interested organisations and municipalities. After
this, a government bill is prepared for the parliament. This bill is processed in the
parliamentary committee. The parliament then adopts the final suggestion.
However, this is not adopted in isolation but is part of the total budget. Since the
early 1990s, no single government has held a majority in parliament. This means that
in order to gain a majority in parliament, governments must form a coalition with
one or several opposition parties, and that the bill is only a partial input. Anything
can happen in the pressing negotiations in the days and hours before the budget
is presented. ‘Horse-trading’ of political programmes between sectors is frequent.
Small coalition parties are able to inject certain issues that they feel strongly about, in
exchange for their support to the overall budget. This treatment signals that decisions
on the long-term development of our energy system are made more on the basis of
political tactics than of scientific knowledge and consideration of all alternatives and
their impacts (Löfstedt 2001).
It is very rare to see any elaborated cost-benefit analyses or environmental
assessment processes in Swedish policy preparations. A systematic treatment of the
dangers of different energy systems has been avoided. Mainstream views among
environmentalists of what is good for the environment are rarely challenged, although
they are constantly challenged in science. Commonly, arguments are not about the
relative risks but about right and wrong. Nuclear opponents feel that nuclear energy
violates fundamental laws of nature, and therefore accept treating nuclear risk
differently from other risks. Scientific evidence that does not support the common
environmentalist perspectives seems to have difficulties penetrating the work. Instead
of scientific facts, the ‘Ecocycle Society’ discourse is used to rule out unwanted
options.
4. Discussion
Long-established networks play key roles in the conception and implementation
of energy policy. There seem to be at least two major policy networks operating
at the national level, each with relatively clear and distinct value systems, and
supporting premises. A green network with a value system based in ecologism and
state interventionism was very influential throughout the 1990s, culminating with the
1997 energy bill. This network has a political party basis in the Greens, the Socialists
and the Centre, nodes in bureaucracies such as the Ministry for the Environment, the
Swedish National Energy Administration and the Swedish EPA, and strong supporters
among leading environmental academicians. The network argues for more political
action and less market influence, and is in favour of public investment in order to
support biomass energy, wind energy and solar power. As a result industrial groups
associated with these technologies are part of a wider issue network. It seems to prefer
judgment-based decision making to scientific rationalism to get done ‘what has to
be done’. A growth network is visible with a political basis among Conservatives,
Liberals and, to some extent, Social Democrats, with supporters in bureaucracies
of Ministry of Finance and Ministry of Industry. The wider issue network includes
industry associations and the labour movements, normally antagonists but here
forming a coalition around a particular issue where interests coincide. However, the
networks cannot be very sharply distinguished. It is difficult to detect pure expressions
of value systems both among stakeholders and political parties. Also, the prominence
of the Social Democrats is blurring the picture as they attempt to accommodate both
the green and growth networks in the rhetoric.
Values underpinning the energy policy in Sweden change over time. The major
policy objectives have been rather constant during the last decades; safeguarding
industrial competitiveness through low prices, maintaining a secure supply and
protecting the environment and natural resources (Silveira 2001). However, their
relative importance changes. Supply security and national self-sufficiency were
important objectives in the 1970s, whereas environmental and health concerns
dominated in later years (Kaijser 2001). Later on, competitiveness and price objectives
motivated the deregulation of the electricity market in the 1990s (Midttun 1996). In
1996-1997, the environmental objective became overriding: to "facilitate the transition to
an ecologically sustainable society". Today, these values are losing their bite, and in
recent discussions the emphasis is on growth and internationalism.
The growing international concern, in particular regarding climate change, led
to international action, media attention and public interest in environmental policy.
At the same time, the nuclear phase-out had not been dealt with and was becoming
a liability for the Social Democrats. In the mid 1990s, problems, solutions and
politics converged and the ecological transition became the energy policy agenda.
In fact, it also became, for a short while, the overall political agenda of the Swedish
government. Prime Minister Göran Persson was quick to pick up the signals and
launched the ecological transition campaign. However, moving beyond the agenda
setting and into the policy action, this programme gradually faced constraints in the
market and among the public. The popular response turned out, perhaps unexpectedly,
to be meagre. Göran Persson seemed to overestimate the importance of this policy
window. The media and the public as well as key constituencies, such as the labour
movement, became alienated. Sweden was not ready for this agenda.
References
1. Anselm, J. 2000. Mellan Frälsning Och Domedag: Om Kärnkraftens Politiska
Idéhistoria I Sverige 1945-1999 (Between Salvation and Armageddon: On
Nuclear Political History in Sweden 1945-1999). Stockholm: Brutus Östlings
Bokförlag.
2. Caldwell, L. K. 1990. Between Two Worlds: Science, the Environmental
Movement and Policy Choice. Cambridge: Cambridge University Press.
3. Carnegie Council on Ethics and International Affairs 2000. Consolidated
Guidelines: Understanding Values. New York: unpublished, available from the
Carnegie Council.
4. Cohen, M., J. March and J. Olsen 1972. 'A Garbage Can Model of Organisational
Choice' Administrative Science Quarterly 17: 1-25.
5. Connelly, J. and G. Smith 1999. Politics and the Environment: From Theory to
Practice. London: Routledge.
6. Cothern, R. 1996. Handbook for Environmental Risk Decision Making: Values,
Perceptions, and Ethics. Boca Raton, FL: CRC Press.
7. Dahmén, E. 1968. Sätt Pris På Miljön. Stockholm: Studieförbundet Näringsliv
och Samhälle.
8. Dale, V. H. and M. R. English, Eds. 1998. Tools to Aid Environmental
Decision-Making. New York, NY: Springer.
9. Dryzek, J. 1996. 'The Informal Logic of Institutional Design', in R. E. Goodin,
Ed. The Theory of Institutional Design. Cambridge: Cambridge University Press.
10. Dryzek, J. S. 1990. Discursive Democracy: Politics, Policy and Political Science.
Cambridge: Cambridge University Press.
11. Dunn, W. N. 1994. Public Policy Analysis - an Introduction. Englewood Cliffs,
NJ: Prentice-Hall.
12. Eriksson, O. 1996. Bygg Om Sverige Till Bärkraft! Stockholm: ABF's Idé och
Faktabibliotek.
13. Fischer, F. and J. Forrester, Eds. 1993. The Argumentative Turn in Policy Analysis
and Planning. London: Duke University Press.
14. Goodin, R. E., Ed. 1996. The Theory of Institutional Design. Cambridge:
Cambridge University Press.
15. Jasanoff, S. 2000. Risk, Precaution and Environmental Values. Workshop paper.
New York: Carnegie Council on Ethics and International Affairs.
16. Kaijser, A. 2001. 'From Tile Stoves to Nuclear Plants - the History of Swedish
Energy Systems', in S. Silveira, Ed. Building Sustainable Energy Systems: Swedish
Experiences. Stockholm: AB Svensk Byggtjänst and Swedish National Energy
Administration.
17. Kingdon, J. W. 1995. Agendas, Alternatives, and Public Policies. New York:
HarperCollins College Publishers.
18. Kuhn, T. 1970. The Structure of Scientific Revolutions. Chicago: University of
Chicago Press.
19. Löfstedt, R. 2001. 'Playing Politics with Energy Policy: The Phase-out of Nuclear
Power in Sweden' Environment 43: 20-33.
20. March, J. G. and J. P. Olsen 1989. Rediscovering Institutions: The Organizational
Basis of Politics. New York: Free Press.
21. Marsh, D., Ed. 1998. Comparing Policy Networks. Buckingham: Open University
Press.
22. Meadows, D. and Club of Rome 1972. The Limits to Growth. London: Earth
Island.
23. Michelsen, T. 2001. Avveckling Som Ständigt Skjuts Upp. Dagens Nyheter.
Stockholm: 11.
24. Midttun, A. 1996. 'Electricity Liberalization Policies in Norway and Sweden:
Political Trade-Offs under Cognitive Limitations' Energy Policy 24(1): 53-65.
25. Nordhaus, W. D. 1997. The Swedish Nuclear Dilemma: Energy and the
Environment. Washington DC: Resources for the Future.
26. O'Riordan, T., C. L. Cooper, A. Jordan, S. Rayner, K. R. Richards, P. Runci and
S. Yoffe 1998. 'Chapter 5: Institutional Frameworks for Political Action', in S.
Rayner and E. L. Malone, Eds. Human Choice and Climate Change Volume 1: The
Societal Framework. Columbus, Ohio: Batelle Press.
27. Peters, G. 2000. Institutional Theory in Political Sciences: The New
Institutionalism. ??: Cassell Academic.
28. Pigou, A. C. 1920. The Economics of Welfare. London: MacMillan.
29. Rayner, S. and E. L. Malone, Eds. 1998. Human Choice and Climate Change
Volume 1: The Societal Framework. Columbus, Ohio: Batelle Press.
30. Regeringens proposition 2002. 2001/02:143. Samverkan För En Trygg, Effektiv
Och Miljövänlig Energiförsörjning. Stockholm: Regeringskansliet.
31. Roe, E. 1994. Narrative Policy Analysis: Theory and Practice. Durham: Duke
University Press.
32. Rolston, H. 1988. Environmental Ethics: Duties to and Values in the Natural
World. Philadelphia: Temple University Press.
33. Sabatier, P. 1988. ‘An Advocacy Coalition Framework of Policy Change and the
Role of Policy-Oriented Learning Therein’ Policy Sciences 21: 129-168.
34. Silveira, S. 2001. 'Transformation of the Energy Sector', in S. Silveira, Ed.
Building Sustainable Energy Systems: Swedish Experiences. Stockholm: AB Svensk
Byggtjänst and Swedish National Energy Administration.
35. Statens Offentliga Utredningar 1995:139 1995. Omställning Av Energisystemet.
Stockholm: Swedish Government.
36. Statens Offentliga Utredningar 2000:7 2000. Långtidsutredningen 1999/2000.
Stockholm: Swedish Government.
37. Stone, D. 2002. Policy Paradox: The Art of Political Decision-Making. New
York: Norton.
38. Thompson, M., R. Ellis and A. Wildawsky 1990. Cultural Theory. Oxford:
Westview Press.
39. Thompson, M. and S. Rayner 1998. 'Chapter 4: Cultural Discourses', in S.
Rayner and E. L. Malone, Eds. Human Choice and Climate Change Volume 1: The
Societal Framework. Columbus, Ohio: Batelle Press.
40. Vedung, E. and M. Brandel 2001. Vattenkraften, Staten Och De Politiska
Partierna. Dora: Nya Doxa.
41. Vincent, J. R. and R. M. Ali 1997. Environment and Development in a
Resource-Rich Economy: Malaysia under the New Economic Policy. Cambridge: Harvard
University Press.
Predicting Public Acceptability
in Controversial Technologies
Donald M. Bruce
Society, Religion and Technology Project
Church of Scotland, John Knox House
45 High Street, Edinburgh EH1 1SR
SCOTLAND, UK
E-mail : [email protected]
Keywords: risk, biotechnology, GM food, GM crops, ethics, values, genetic
modification, nutraceuticals, safety, policy, environmental risk, consumer perceptions,
trust, social contract, shared vision, GM animals, power, control, accountability,
consultation.
Introduction
Technology and society have a synergic relationship, but in recent years the
relationship has come increasingly under strain. In the UK and in some other parts
of Europe the generally favourable climate of public opinion towards technological
progress in the decades following the second world war has been challenged by
a more sceptical attitude. As public disquiet about controversial technologies has
grown, the scientific community can no longer take for granted, in the way that it had
been inclined to do, the acceptance by the public of its projects. New technologies
may now be greeted with as much concern for their possible risks as enthusiasm for
their projected benefits. While the products of information technologies generally
attract widespread support, developments in more sensitive areas like agriculture and
food biotechnology or environmentally sensitive energy production may have to earn
the right to be exploited.[1] Even some medical developments, for example involving
embryos, permanent genetic change or genetic information, may be questioned.
Broadly speaking, new technologies are a product of the values and aspirations of the
culture in which they emerge. Technology arises from its scientific, economic and
political life, and also the assumptions the society holds in religious, philosophical and
moral spheres, expressed by and embedded in its artefacts and systems. In turn, each
new technology shapes and alters these values and aspirations, to a greater or lesser
degree. When it is widely accepted, technology permeates economic and political
systems, cultural values, attitudes and aspirations, and reshapes them, often largely
unnoticed. For example, transportation, preservative and refrigeration technologies
have markedly changed our systems of food production and our expectations of
variety of diet and quality of food.
Modern technology has been encouraged, as long as it produced things which the
society regarded as desirable, and did not upset basic values or major interests. The
widespread public objection to the use of unlabelled genetically modified foods in
Britain in 1999 revealed a situation where this correspondence had broken down.
A disjunction had emerged between the aims of the technologists and the wishes of
society. This reflected a gap in the public legitimation of non-human biotechnology,
in so far as many of its developments had been occurring remote from the population
at large. The values which some biotechnologies have expressed did not necessarily
have a wide public acceptance, but rather represented the values of certain groups
within the society. Another example was the practice of feeding herbivorous animals
with bone meal, sometimes even derived from their own species, which came to light
through the BSE crisis.
The acceptance by society of certain types of technology would now seem to
depend on how far the values embodied in the technology reflect those of the wider
society, rather than reflecting only those of some privileged sector like a ruling elite,
a group of academic researchers, a commercial company, or a special interest group.
It is thus now becoming more important to evaluate in advance the degree of likely
mismatch between the aspirations of the technologists and the values of society.
Biotechnology as a Social Contract
This paper explores one approach to making this evaluation, based on the notion of
a conditional social contract between technology and society. [2] A given society
may be prepared to embrace a new technology to deliver certain benefits, and may
also accept a degree of risk and adaptation of its systems, institutions and life styles,
provided certain basic conditions are fulfilled. It is a largely invisible contract as
long as the aims correspond. It becomes more apparent when a significant mismatch
emerges. When the roots of the mismatch are examined, the acceptance or rejection
of novelty and innovation in technology are found to be dependent on a complex mix
of factors, which constitute the unwritten terms of the contract. These features are
gathered from wider social science insights in the field of risk perception and from
experience of engagement with public groups on a range of risky technologies.
• Values - does it uphold or challenge our values?
• Familiarity - is it something familiar; is it embedded in the society?
• Comparison - if it is unfamiliar, with what can we compare it? Did that
technology go wrong or prove reliable?
• Control - how much do we feel in control of the technology?
• Trust - if we are not in control, how much do we trust those who are?
• Vision - do we share their motives, vision and goals?
• Risks - what sort of risks are involved? Are they novel or within existing
experience?
• Choice - do I have any choice over either the technology or over its hazards?
If there are significant risks involved, are these voluntary or imposed?
• Benefits - are there tangible benefits to consumers?
• Portrayal - how is it portrayed in the mass media?
A primary condition for acceptance is whether the technology in question upholds or
challenges basic moral values within contemporary society. These are the underlying
values we hold, for instance in medical applications, about the nature of human beings
and human well-being, about family relationships and reproduction, embryos and
foetuses, genes, cells and body parts. Important factors in non-human biotechnology
include basic attitudes and values held about nature, naturalness and innovation, and
about particular aspects of the natural world. Which sorts of species are considered
of especial value? What distinctions are made amongst humans, animals and plants?
What are seen as appropriate and inappropriate uses of different types of animals,
or of plants, insects and microbes? Social values may also
be relevant in relation to ownership, consent, confidentiality, justice, equity, collective
responsibility and individual choice.
Secondly, how familiar do people feel with the technology? Coal, were it discovered
today, might well be considered too risky and polluting to be a principal source
of energy. Yet it has been socially embedded in numerous communities for many
generations. It is something people feel they know and understand, notwithstanding
the substantial risks which the community is often thoroughly aware that it carries. If
a technology is unfamiliar, however, comparison is an important factor. What existing
technology is it to be compared with, and has that proved reliable or something that
has gone wrong?
We are more likely to accept a technology over which we feel a measure of control.
For example, we may feel we have a sense of control over using cars, a coal fire or
personal computers or having IVF treatment but not about aircraft, nuclear power
stations, genetic databases or crop genetic modification. If we are not in control, our
acceptance then depends on how far we trust those who are, and whether they are seen
to share our values, motivations and goals. Thus most people tend to trust the
aircraft pilot, crew and technical systems, because high safety is in everyone’s interest,
and the medical staff and technicians dedicated to providing fertility treatment and
advice. We might be less certain of the aims and motives of those holding our genetic
information or promoting GM crops. Is there a shared overall vision for the way we
see the future developing, or a mismatch? In the current climate, there is especial
scepticism about commercial or political motives. Capitalism for its own sake is seen
as a dubious value when biotechnology is involved, because if there were a conflict of
priorities, it would tend to override safety, environmental or ethical factors.
A closely related condition is whether the technology and its risks are voluntary or
imposed. If there are uncertainties over health, safety or environment, a technology
is less likely to be accepted if we cannot exercise choice over using it or avoiding its
risks. Numerous studies on risk perception have identified key attitudes.[3] Certain
sorts of risks are also more likely to arouse aversion than others. Typically, people
tend to be more averse to rare events which carry very high potential consequences
than to frequent events with smaller harms. A technology is more feared if its risks
are of this nature, and especially if the potential harmful effects are delayed and would
creep up insidiously, rather than being immediate and noticeable. Ionising radiation,
radioactive waste and genetic modification are examples of the latter.
A technology must offer realistic, tangible benefits to the members of society in
general. Repair and enhancement are two qualitatively different elements. There
is greater preparedness to favour medical genetic interventions, because disease is
considered an aberration from the human condition. It is putting right something that is
generally agreed to have gone wrong. The notion of improving or adding something
new to the human condition, the environment, or any other aspect of creation may
also be desirable. This is problematical in relation to food, however, which is both
sensitive and perceived to be satisfactory as it is. Genetic improvements to food may
be viewed with suspicion, as if to say “since it isn’t broken, don’t try to mend it.”
Finally, acceptance increasingly depends on the context in which one first heard
about the technology, and whether it has been given a positive or negative image
in the media, or in one’s won peer group. Once Greenpeace’s slogan “Frankenstein
foods” became the favoured description by which the media described GM foods, its
repetition created and perpetuated a strong negative image. Even though it denoted
a gross exaggeration, the connotations of danger, irresponsibility and science out of
control were all that was needed, in a climate of plausible food risk, to undermine the
credibility of GM foods. On the other hand a generally positive media presentation
has been given to GM vitamin-A rice and pharmaceutical products in sheep’s milk.
A feature of all innovation is that the full range of outcomes is seldom knowable
in advance. Once a dependence on a technology has been established, the more
central the role it plays in society, the more damaging are any failures or unforeseen
adverse effects. Values may alter in the light of this experience, and this may work
both ways: it may lead to a loss of the trust that was previously given; alternatively,
if good results are seen, previously expressed fears may be discounted.
We examine the application of this rationale to the historic cases of the importing of
GM soya and maize, and the UK marketing of GM tomato paste, and assess the likely
public reaction to a number of future biotechnological innovations and also some
energy technologies and radioactive waste disposal.
GM Soya and the Failure of the Social Contract
If several of these factors are not fulfilled, the technology is unlikely to be accepted.
This was dramatically illustrated in the UK public reaction to food products derived
from imported US GM soya and maize. These applications failed most of the above
conditions, and can be said to have broken the invisible social contract, such that
public rejection should have been seen as a foregone conclusion.
It was an unfamiliar technology which brought together two very sensitive issues
- genetics and food - which challenged basic values, as already shown. Although its
risks were largely unproven, they were high consequence, low probability hazards,
insidious in nature and probably irreversible, which typically evoke an aversion much
greater than their calculated risk. The immediate points of comparison were the
recent experience of BSE/CJD, a very remote health risk which nonetheless proved to
be real, or the various cases of the unintended spread of an introduced species within
a local ecology. The failure to segregate or label meant that no choice was offered
for those wishing to avoid GM foods, and therefore any risk, however remote, was
unavoidable.
The ordinary public had played no role in the introduction of the technology and had
little or no trust in those who controlled it. Their goals and values were not shared.
The UK Government’s motivations - in reducing input costs in farming and promoting
a competitive UK biotechnology industry - were not seen as relevant to consumers.
The main benefits were agronomic and accrued to foreign growers, and especially to
US-based multi-national corporations. Their perceived arrogance in promoting their
own interests by forcing GM soya and maize on to the European market, oblivious
of consumer values or choice, provoked a deep antipathy to commercial motivations.
Very reasonably, people asked why they should accept foods which might carry an
imposed risk, which they did not need, which offered no tangible benefits to them, and
which served only the corporate ambitions of powerful foreign companies.
Could GM ever Satisfy the Social Contract?
It is important now to examine whether the failure of soya and maize products
is indicative of all GM technologies involving food, or is related to particular
applications and the circumstances of their introduction. Significantly, from 1996
to 1999, Zeneca’s GM tomato paste had sold fairly well in the UK without undue
opposition. This case shows certain critical differences over media profile, risk
awareness, choice and benefit which are illuminating. In 1996, GM was unfamiliar
and potentially problematic, but it had not received exhaustive media coverage and
an issue had not been made of its risks. Supermarkets promoted the paste as slightly
cheaper and tasting better than the non-GM variety. They only stopped selling it once
they saw a market interest in promoting themselves as ‘GM free’. The consumer had
a clear choice, because the paste was a single product sold in a GM-labelled tin, whose
tomatoes were separately sourced. In contrast GM soya and maize were invisible.
They were commodities sold primarily as ingredients to the food processing industry,
and no attempt was made to segregate GM from non-GM sources. It was not clear to
the consumer which of a wide variety of food products would have had GM-derived
ingredients and which would not. As Zeneca’s market research had shown, a near
absolute condition for accepting any GM food is that there is visible choice through
labelling and segregation.
It remains to be seen whether providing choice alone, as in the current draft EC
proposals, would now be sufficient to redeem GM food. Alternatively, would
the stigma now attached to GM food as a concept irredeemably taint any future
applications, regardless of how well they might satisfy the social contract? The
question must be seen in the context of strong vested and political interests promoting
different images of the technology. The ordinary citizen is caught in the middle, often
not knowing who to believe. If GM crop and food applications have any future it lies
in focusing on aims which find common cause with these consumers.
Three possible GM applications are evaluated below in terms of how well they might
or might not fulfil the different conditions of the social contract - nutraceuticals,
vaccines produced in crops by GM viruses and extracted chemically, and the possible
use of genetic modification in environmentally sustainable crop production. The
evaluation is purely qualitative and personal, and is summarised in the table.
Notional Acceptance Scores for Some Real and Potential GM Applications

                     Soya/    Tomato   Nutra-     Plant     Enhanced
                     Maize    Paste    ceutical   Vaccine   Ecology
Values               No       ?        Yes        Yes       No
Familiarity          No       No       No         Yes       No
Comparison           No       Yes      Yes?       Yes/No    No
Control              No       No       No         No        No
Trust                No       Yes      ?          Yes       No
Vision               ?        No       Yes        No        ?
Choice               No       Yes      Yes        Yes       ?
Risk                 No       ?        ?          ?         Yes/No
Benefits             No       Yes      Yes        Yes       Yes?
Media Profile +/-    No       No       ?          ?         No
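As a rough quantification of such a scorecard, the qualitative answers can be tallied numerically. The scoring convention below ('Yes' = 1, any qualified or uncertain answer = 0.5, 'No' = 0) is my own illustrative assumption, not part of the paper, and only two of the columns are transcribed as examples.

```python
# Illustrative tally of the notional acceptance scores; the numeric
# weights are an invented convention, not the author's.

SCORE = {"Yes": 1.0, "Yes?": 0.5, "Yes/No": 0.5, "?": 0.5, "No": 0.0}

applications = {
    # Ten answers per column, in row order: Values, Familiarity, Comparison,
    # Control, Trust, Vision, Choice, Risk, Benefits, Media Profile.
    "Soya/Maize":       ["No", "No", "No", "No", "No", "?", "No", "No", "No", "No"],
    "Enhanced Ecology": ["No", "No", "No", "No", "No", "?", "?", "Yes/No", "Yes?", "No"],
}

for name, answers in applications.items():
    total = sum(SCORE[a] for a in answers)
    print(f"{name}: {total}/10")
```

On this convention soya/maize scores 0.5 out of 10, consistent with the claim in the text that it failed almost every condition of the social contract.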
Foods genetically engineered to add ingredients which give a nutritional or health
enhancement compared with non-GM varieties would significantly alter the
perception of benefit. The advantages would need to be enough to overcome fears
of health risks. Nutrients alone might not suffice, but modifications which enhanced
resistance to cancers or heart disease, or which removed allergenic compounds might
have a particular attraction, such as a GM wheat which eliminated the source of the
gluten reaction. Environmental risks would figure less in this type of application,
unless the crops also incorporated GM-based herbicide or insect resistance.
Suspicions of commercial motivation and control would remain. Vaccines produced
in plants using GM viruses would satisfy most of the criteria, provided vaccines
were extracted rather than eaten in situ in the plant. The main doubts would be over
risk comparison, with the association of viruses with human disease, and the fear of
unforeseen transformations.
The application of GM to crops for targeted environmental improvements is
supported by some wildlife conservation bodies, but the consumer benefit would
be second order. It is not clear if this would overcome perceptions that GM crops
are by definition ecologically dangerous. It would also face being written off
by environmental groups as a technical fix. Gene flow might be avoided by making
GM crops sterile, but ‘terminator’ technology has gained an ethical stigma. Its use
would indeed be a deep injustice in those developing countries where seed is normally
resown. This ethical objection would not apply in the UK, where most conventional
seeds are hybrid varieties bought fresh each year, but the terminator media image
may still discourage acceptance. To overcome perceptual barriers, GM developments
would have to offer environmental solutions that were difficult to achieve in other
ways, and without posing new risks.
Renewable Energy Technologies
If the same approach is applied to newer technologies in the energy sector, some
different patterns emerge. Apart from hydroelectricity, wind and biomass, they may
not be familiar in detail, but in general, renewable forms of energy uphold rather than
challenge prevailing beliefs about care for the environment, and would be a goal and
vision commanding general support. The main issues in a social contract are less
about the technologies as a whole and more about local impacts.
In the case of wind farms, which currently may present considerable controversy over
siting in the UK, comparisons may be with other sitings which are, or are not, seen
as a blight on the landscape. Questions of control, trust and choice would depend on
whether local issues had been taken into account adequately. If someone’s objections
were overruled, there would be no choice but to live with the installation, and it might
be difficult to avoid using the energy it generated. Local media portrayal would also
have an impact if it weighed in on one side or the other.
Benefits would be in the energy produced, and the environmental gains relative to
more polluting technologies, but risks would be rather varied. On-shore wind has
presented particular concerns over intrusion on visual amenity, and some questions
about turbine blades becoming detached, bird disruption and local noise. Large hydro
has often been associated with major disruption of the landscape, communities and
habitats, but the more likely future application of small installations and run-of-river schemes would cause much less impact. Altered habitats have been a significant
objection to some large tidal barrage proposals, like the Severn Estuary. Not all these
concerns would necessarily be shared in the local population, however. For example,
attitudes may vary to visual effects or the importance of wildlife. Off-shore wind,
wave and tidal all carry the personal risks of off-shore installations, but hardly
impact the general consumer. Biomass used to generate heat, gas or electricity from
waste carries fears of contaminants in the flue gases, and the perceived health risks.
Comparison could be made with chemical waste incinerators. Energy crops such as
novel fuel oils would be seen more in terms of their biological impact.
Radioactive Waste Disposal
Radioactive waste disposal has two communities to consider - those in the vicinity of
the planned facility and those far away in the general population. For both it could
be seen to challenge basic values in the handing on of a problem that arose from our
activities today, whose burden future generations would bear over unimaginable time
scales. The uncertainty of what we were passing on might also be an objection. For
the local population, it would be seen as more or less familiar, but as a source of controversy
rather than something with which they were comfortable. The wider population may well
have heard about the issue of radwaste disposal, but feel it was something basically
unfamiliar and not well understood. In this case the comparison would probably be
with the indiscriminate radioactive contamination of land and population arising from
Chernobyl. It is perhaps less likely that it would be compared with the long term
management of permanently toxic wastes from heavy metals, and this might not be
seen as a point in its favour.
In the local context, control and trust would again depend on the extent to which
the utilities, regulators and policy makers had established a rapport with the local
community, or whether the community felt alienated. This would also affect vision
and choice. This could be a major difference with the general population who would
feel less in control and would have little relationship on which to base a sense of
trust. Vision and choice would be influenced by basic attitudes to nuclear power and
by the local state of affairs. Those opposed would see it as a nuisance imposed on
them. Those favouring it might see waste disposal as a necessary consequence, and
share the vision of the developers, while probably not wanting it in their back yard.
In the locality, much would depend on how involved the community felt with the
development and whether they were partners in the decision-making process.
Risk is likely to be the biggest single factor, and forms the backdrop to all the other
issues. Radioactive waste presents a classic example of a remote insidious risk with
perceived large consequences, were the containment to be breached, affecting the
local water table and land for centuries to come. The overwhelmingly negative aspect
of this will be offset for some by the recognition that the risk is already there and has
to be addressed somehow, but with the hope that someone else may be able to carry
the risk. NIMBY will continue to be a major attitude in the issue which popularised
the expression.
Waste, seen as the tail end of a process which has already happened, has no obvious
benefit to anyone. A ‘double negative’ benefit lies in putting hazardous material away
in a safe form, somewhere secure and well managed. Strictly, the user has
already obtained the benefit in the electricity provided, but this may not be
perceived as such. Some might argue that the benefit is nil, because it was imposed
and because it could have been better supplied by another technology. The media
portrayal, generally presumed as negative, is perhaps more mixed towards nuclear
power in general. On radwaste, much depends on which viewpoint is seen to be
providing the best story or spin on the issues.
Changes needed for a Social Contract to Work
These are speculative evaluations, but they suggest there are some applications which
might succeed. In either biotechnology or future energy technologies, an acceptance
profile would need to contain a 'yes' to most of the questions seen as central.
That alone is not sufficient, however. Where trust has been lost, it requires bridging
a gulf of perceptions, values and visions, to find common cause with the wider
population. For GM and radwaste, this has been far from achieved. Industry,
scientists or governments only have a true mandate if they put a lot of work into the
basic job of listening and are prepared to change their ways, by adapting their goals
to society’s views. This is a two-way process, not the ‘information deficit’ model.
This will also require imaginative ways of engagement with the wider public on
technology issues, as well as contact with the more familiar stakeholders. This also
means opening the process of research prioritisation to some level of public scrutiny,
as part of strategic planning and management, involving the public to find ways to
identify commonly agreed aims for a technology.
Each technology needs to regard its ethical dimension as intrinsic, not extrinsic, to its
task. It also needs to identify where its own ethical limits lie, and demonstrate this
publicly. This requires major changes in how technologists are educated to encompass
an understanding of ethics, and trained on the job to continue their awareness. This
should include regular engagement with rationalities and disciplines outside the
scientific peer group, involving philosophy, theology and social science disciplines,
and also non-specialist lay people.
Lastly, a common vision for a technology also includes a corresponding transparency
by government and regulators. Cultures unused to open discussion need to develop
it. The long trend in most industrial countries to switch from public sector funding
to private has produced a corresponding decrease in public trust in research. In
today’s climate of deep scepticism of private sector biotechnology, the mentality
of ‘commercial-in-confidence’ needs rethinking. It is more important to regain the
confidence of the public than to keep commercial confidentiality for a product which
no one trusts.
This paper has analysed the relationship between public acceptance and values in
terms of a social contract for biotechnology, identifying a set of conditions necessary
to achieve a general acceptance for a novel technology. Evaluation of current and
future technologies in biotechnology and energy indicates a complex situation in
which some applications might be accepted and others not. A more publicly responsive
model of technology planning and outworking is suggested, seeking to find common
cause between sensitive technologies and the wider population.
Abstract
Technology and society have a synergic relationship. New technologies are a product
of the values and aspirations of the culture in which they emerge. In turn, each
new technology shapes and alters these values and aspirations, to a greater or lesser
degree. The acceptance by society of any particular technology depends, however,
on how far the values embodied in the technology reflect those of the wider society,
or only those of some privileged sector - perhaps a ruling elite, a group of academic
researchers, a commercial company, or even a special interest group. As public
disquiet about controversial technologies has grown, their acceptance can no longer
be taken for granted. It is now becoming more important to evaluate in advance the
degree of likely mismatch between the aspirations of the technologists and the values
of society.
This paper explores one approach to making this evaluation, based on the notion of a
conditional social contract between technology and society. A given society may be
prepared to embrace a new technology to deliver certain benefits, and may accept a
certain degree of risk and adaptation of life styles, provided certain basic conditions
are fulfilled. These conditions include the upholding of basic values, familiarity, how
it compares with similar technologies, the degree of control and choice, trust in those
in control, the nature of any risks, the tangible benefits, and the media profile given
to the new area.
If several of these factors are not fulfilled, the technology is unlikely to be accepted.
This was dramatically illustrated in the UK public reaction to food products derived
from imported US GM soya and maize. These failed nearly all the conditions, so
that public rejection should have been seen as a foregone conclusion. In the light of
this, the likely public reaction to a number of future biotechnological innovations is
assessed, based on the same conditions. Some examples taken from the energy sector
are also compared. The notion of an acceptance profile is derived, and suggestions are
made how this could be used to find collaborative modes of technology planning.
Interests, Values and Biotechnological Risk
Ann Bruce & Joyce Tait
Centre for Social and Economic Research on Innovation in Genomics
(InnoGen)
EDINBURGH
UNITED KINGDOM
Abstract
Interviews with stakeholders in the GM (Genetically Modified) crops debate have
been used to develop a framework identifying aspects of the conflicts which are
interest-based or value-based.
We now report on the first stages of a research project which aims to explore in more
detail the criteria which are relevant to distinguishing between these underlying
motivations and to develop a more formal methodology for identifying the extent
to which stakeholders are making their case on the basis of interests or values. As a
first step to this we are looking at debates in a number of genomics-related issues to
identify the parameters underlying the interests/values dimension. This paper reports
on work-in-progress and outlines our initial ideas on how we might evaluate these
complex interacting issues.
Introduction
The confusion among European policy makers about how to handle the issue of
genetically modified (GM) crops has highlighted the difficulties experienced in trying
to find an acceptable way forward in a conflict over risks in biotechnology. Earlier
research [1] has postulated a model of risk-based conflict which distinguishes between
interest-based conflicts and value-based conflicts. The features of these two are shown
in table 1. These two types of conflict have very different characteristics.
Interest-based conflicts are likely to be restricted to specific developments and to
specific localities. They can usually be resolved by the provision of information or
compensation, or through a process of negotiation. Value-based conflicts, however,
are likely to spread across all related developments and are likely to be organised
on a national and increasingly international basis. These value-based conflicts are
difficult to resolve as information is viewed as propaganda, compensation as bribery
and negotiation as betrayal. Whereas providing concessions is likely to lead to
accommodation if the conflict is interest-based, concessions are more likely to lead to
escalation of demand if the conflict is value-based. In practice, of course, individual
and group motivations are likely to be a mixture of the two, although rarely in equal
proportions.
Corresponding author: Ann Bruce, InnoGen, University of Edinburgh, Old
Surgeons’ Hall, High School Yards, Edinburgh EH1 1LZ. Tel: +44 (0)131 650 6397;
Fax: +44 (0)131 650 6399
Table 1

Interest-based conflicts:
- Likely to be restricted to specific developments
- Likely to be location-specific, locally organised
- Can usually be resolved by the provision of: information, compensation, negotiation
- Giving concessions leads to mutual accommodation

Value-based conflicts:
- Likely to spread across all related developments
- Likely to be organised on a national or international basis
- Very difficult to resolve: information is viewed as propaganda, compensation as bribery, negotiation as betrayal
- Giving concessions leads to an escalation of demand
We have chosen to use the terminology ‘values’ and ‘interests’ which are defined
later in the paper. However, we are aware that considerable debate can be had over
whether this is the most appropriate terminology and whether ‘ideology’ for example,
should be substituted for ‘values’. Ideology refers to a system of thinking (an ‘ism’)
rather than individual values. The same values may be present in different ideologies.
We find that the term ‘ideology’ has negative connotations and as we would want to
affirm values as important aspects of social systems, we have chosen to use the term
‘values’ in preference to ‘ideology’. The terminology may have different technical
meanings for people coming from different disciplines such as philosophy and ethics
and we acknowledge that we may need to change the terminology in future to reflect
the concepts which are in our minds but which may not be accurately communicated
by use of this terminology.
An example of how these interest-based and value-based disputes can influence the
overall trajectory of a conflict is found in the UK debate on GM crops, contrasting the
situation in the early 1990s with that in 1998. Tait [1] considered where various stakeholders
(public and consumer organisations, environmental pressure groups, organic and
conventional farmers, food companies, multinational chemical companies and
scientists) were located on an interests vs. values matrix of motivations and how these
positions moved during the debate. For example, a survey carried out in 1990 showed
that most members of the public were neutral in their attitude to biotechnology and
it did not seem to affect their interests directly, nor did it seem relevant to any of
their strongly-held values. However, this survey also indicated that they wanted
more information and would be most likely to trust consumer and environmental
organisations for this information rather than government or industry. The campaign
stance of environmental groups since the early 1990s appears to reflect a value-based
opposition to GM crops. Consumer organisations, which initially took a neutral,
interest-based stance on the issue of GM crops, announced in 1998 that they were
now opposed in principle to intensive farming systems and to GM crops. Whilst it
could be argued that the consumer organisations were following rather than leading
public opinion, this change in stance may have encouraged the environmental groups
to raise the profile of their campaigns and may also have encouraged the press to
take a stronger interest in the issue. The combined effect was that both of the major
sources of information for the public now held value-based opposition to GM
crops. Furthermore, the environmental and consumer groups were joined by non-governmental organisations (NGOs) with a concern for development issues. Together
these formed what has been termed an Advocacy Coalition [2] to oppose GM crops.
Whilst this analysis obviously over-simplifies the various factors in the debate, it
illustrates the way in which an appreciation of the difference between value-based
and interest-based disputes can be helpful in understanding the processes involved in
decision making about technological risk.
The approach of distinguishing between values and interests in disputes is attracting
interest from a range of practitioners and our objective is to develop formal methods
of exploring these dimensions with the aim of enabling better decision making. The
work reported here is the initial stage of this process and as such the discussion should
be considered ‘work in progress’.
Definitions of Values and Interests
As noted earlier, considerable debate can be had over the definition of ‘values’ and
‘interests’ which we are using.
Working definitions of ‘values’ and ‘interests’ are given by Burton [4].
Values
Ideas, habits, customs and beliefs that are a characteristic of particular social
communities. They are features that lead to separate cultures and identity groups.
Values are not for trading and changing values is a long process.
Interests
The occupational, social, political and economic aspirations of the individual, and
of identity groups of individuals within a social system. Interests are held in common
within groups in a society, but are less likely to be held in common nationally. Interests
are transitory, altering with circumstances. They are not in any way an inherent part of
the individuals as are needs and as values might be. They typically relate to material
goods or role occupancy. Interests are negotiable.
Values and Interests may be congruent or they may be in conflict. For example, in the
area of Embryonic Stem (ES) cells it may be that a particular person suffers from a
disease such as Multiple Sclerosis and might potentially benefit from therapies based
on ES cells but has a fundamental value position against using embryos in this way.
Thus, the person may act against their own interests in objecting to the development
of ES cell-based therapies or by refusing to accept such therapy for themselves or
for their children. Another example is environmentalists who may have an interest in
flying around the world to promote their environmental values, but in doing so cause
damage to the environment in the cause of which they are travelling. It is likely that
the promotion of environmental values is seen as a greater imperative and such groups
often attempt to initiate behaviour to ameliorate the environmental damage they are
causing (e.g. by funding environmental schemes as a kind of ‘tax’ on their travel).
A lack of communication in disputes may arise when one party considers an issue as
an interest and the other considers the same issue as a value e.g. environmentalism
may be perceived as an interest to be negotiated with other interests or it can be felt as
an absolute, non-negotiable value [5]. When one party treats environmentalism as an
interest and the other as a fundamental value, it is very difficult to achieve consensus.
Once a fundamental value is recognised, then it may be possible to negotiate means of
satisfying this which are acceptable to others.
A further concept which quickly becomes important is that of ‘needs’. Needs can be
understood as including the objectively definable conditions necessary for the physical
survival of the organism [3] or may also include more social aspects such as the need
for identity and recognition [4]. Needs can be repressed or suppressed but they never
go away. Needs that are frustrated or denied may give rise to behaviour that is against
the interests of the individual, for example in the rise of terrorist activity [4].
Values in disputes
Values in disputes are often hidden. Sometimes this may be because some values are
so deeply held that they appear to require no articulation; the presumption is that they
are commonly shared and acknowledged. Society also often lacks formal frameworks
for discussing values. Political and regulatory frameworks have actively discouraged
discussion of values, partly because they do not have the mechanisms by which to
deal with them. Values and intrinsic ethical objections are often seen as ‘irrational’
and therefore discounted. Weber [6] distinguished between ‘value rationality’
and ‘instrumental rationality’ where value rationality is behaviour consistent with
a particular value position and instrumental or scientific rationality looks at the
consequences of various actions and carries out cost-benefit type calculations. Both
are forms of rationality but each uses reason on a different basis [7].
In practice there has been little recognition of value rationality in risk regulation.
This distinction can be seen to come to a head in the Precautionary Principle. This
can be understood to consist of a risk: benefit calculation (where the precaution is
proportionate to the risk) or it can be used as a tool to indicate a value position about
which risks are acceptable and which are not. Value positions can be seen as only
lying in the domain of environmentalists, NGOs and religious groups. Scientists are
often held up as exemplars of rationality whose decisions are based on ‘facts’ rather
than ‘values’. But it is clear that scientists themselves hold values which influence
their behaviour. For example scientists may make assumptions about the nature of
reality and the acceptability of human action within the natural world [7] or they
may make assumptions about the way in which science is adopted in society and the
acceptability of this.
Failure to recognise the different aspects of value-based arguments and interest-based arguments may mean that the root of a dispute and the factors that are really
important to protagonists are not uncovered. An example might be taken from
farmland biodiversity. A desire to increase the biodiversity of farmland may lead to a
push for organic agriculture. However, further discussion on alternative management
practices may uncover practices which can improve particularly desirable aspects of
biodiversity and which may apparently achieve similar ends. Yet these may
be less acceptable because the driver for organic agriculture may in fact originate
in a desire for the use of a supposedly holistic, systemic paradigm. Many of the
processes in mediated disputes are intended to facilitate the discovery of underlying,
not immediately obvious motivations, as a precursor for dialogue about issues that
really matter.
Interests in disputes
People who are funded by a vested interest are seen to reflect that interest. However,
interests can also encompass factors other than financial interests e.g. where people
have a particular health issue in their family or where natural or social scientists use
their position as apparently independent academic researchers to promote a particular
cause.
Individual interests may include aspects such as having more money, more ‘success’,
or avoiding damage to their property. Company interests are likely
to encompass making a profit and surviving as a successful business venture. NGO
groups may have interests in increasing their membership, increasing their power
and influence, or maintaining their image.
Interests can be tied to our social roles (e.g. teacher, scientist, etc.). When we take on
a role we take on different social perspectives [3] and this in turn may affect what
risks we consider and what attributes of risk we attend to.
Tidwell [8] notes that one of the main impediments to effective dispute resolution
is articulating a rationale for the resolution. People have to decide when a conflict is
constructive and consistent with their aims and when it is destructive. If parties do
not wish to engage in resolution then there is little that can be done. Maintaining a
conflict may become an interest in itself, for example Tidwell uses the term ‘struggle
groups’ to describe groups of people who have shared interests and who begin to
identify themselves as a group apart from other groups, whose common aspirations
become group norms and pursuit of these aspirations become manifestations of group
loyalty. Eventually, the particular group’s identity may reside in the use of adversarial
tactics so that it is in the struggle group’s interest to maintain conflict rather than to
reach settlement.
Democracy in Decision-making
The failure so far to achieve public acceptance for GM crops and the distrust of the
regulatory systems have resulted in calls for increased public participation in risk-related
decision making. The question of how it is possible to have informed consultation with
a diverse population of several million people is being extensively researched and a
number of practical trials are being carried out, such as the UK public consultation on
GM crop trials. This type of consultation may be able to achieve social legitimation
for a particular course of action, but it is unlikely to satisfy everyone involved in the
debate. It should also be remembered that social legitimation is not the same as ethical
acceptability, although it may serve well for pragmatic purposes. In ethics ‘what is’ is
not necessarily ‘what ought to be’. The interaction between the normative perspective
of philosophers and ethicists with the empirical work of social scientists is one of the
dimensions we hope to explore further during the course of this work.
Values are not necessarily shared in society, which makes any democratic process for
decision making in the area of technological risk problematical. Bruce [9] argues that
the biotechnology industry made a mistake in the 1990s in assuming a shared value
in technological progress whereas the public reaction (in Europe) was an expression
of scepticism at technological interventions. The public mood in the USA still seems
to be one predominantly of trust in technology. The issues then relate to ways of
democratic governance which respect minority views and of international regulatory
systems which can take into account different value-bases of different societies.
How evidence is used in value-based conflicts
One of the defining features of a value-based conflict is the way in which ‘evidence’
is used by protagonists. Arthur Miller [10], in the historical context of witch hunts,
described: “…the spectacle of perfectly intelligent people giving themselves over to
a rapture of such murderous credulity. It was as though the absence of real evidence
was itself a release from the burdens of this world…Evidence, in contrast is effort;
leaping to conclusions is a wonderful pleasure…”
Evidence can be ignored on both sides of the GM debate, although the justification for
doing this is different. It is noticeable that ‘seminal’ works demonstrating underlying
problems with GM technology are apparently ignored by the scientific community on
the basis that these are examples of poor science. They may indeed be poor science,
but it is interesting to note that the response is to ignore the evidence rather than
to repeat the experiment. Examples are the evidence from Pusztai [11] suggesting
health problems with consumption of GM crops and Chapela [12] suggesting that
GM constructs are inherently unstable. Similarly, evidence suggesting that GM
technology can offer environmental benefits when managed in particular ways, is
rejected on the basis that the scientists have received industry funding and therefore
their evidence is ‘tainted’ because the scientists have a vested interest in the GM
industry. The destruction of GM crop trials by anti-GM activists could be viewed as
a deliberate way of making sure there is no potentially positive evidence about the
impact of GM crops.
Giving concessions can result in further uncertainty about the interpretation of
evidence. For example, during an investigation into the health impacts of GM crops
carried out by the Scottish Parliament (details of which are given below), the issue
of antibiotic resistance genes was raised. One proponent of GM crops asked the
Committee to note that the use of antibiotic resistance genes as markers in food crops
is now banned. The response was not ‘how sensible to be precautionary’, but a concern
that this ban meant that the previous product had been unsafe and hence invited
questions about the safety of the current products.
“I am concerned about the fact that the technology was previously considered to be
safe, but is now causing enough concern for a government department to advise that
we should not be producing genes conferring that marker...why does ACRE want
those genes to be phased out if it thinks that they are safe?”
We have begun some research into how evidence is used in risk debates. Our first
example is from a debate in a Scottish Parliament committee concerning the risks
from field trials on GM crops. The Scottish Parliament has a system which allows
individuals and groups to send in petitions. The UK government has been running a
series of farm-scale evaluations (‘crop trials’) during 2000-2003. Herbicide tolerant
oilseed rape, sugar beet and maize have been grown on a field-scale to evaluate the
impact of the herbicide management regime associated with these crops on farmland
biodiversity. A petition was received by the Scottish Parliament to immediately
end these farm-scale evaluations for reasons of safety. As a result of this petition,
the Health and Community Care Committee of the Scottish Parliament undertook
an inquiry into the health impacts of GM crops between October 2002 and
January 2003 (other aspects being reviewed elsewhere; for example, the Transport and
Environment Committee reported on the environmental implications of GMOs). The
Committee invited written evidence from interested parties and oral evidence was
taken from selected interested parties. We have started to look at transcripts of the oral
evidence presented in order to identify characteristics of the debate.
Examination of these transcripts identified some ways in which evidence is used
in this debate as outlined below. We should note that the evidence presented is
influenced by the questions asked by Committee members and the evidence presented
was limited by the time given to the parties to give their evidence. The people giving
evidence did not necessarily present themselves as being on one side of the argument
or the other. We have therefore tried to distinguish the argument from the person,
i.e. where an argument supported the crop trials it was counted as supporting the GM
2 ACRE – the Advisory Committee on Releases to the Environment – is the UK
Government's advisory body on risks to human health and the environment from the
release and marketing of Genetically Modified Organisms (GMOs).
trials protagonists, and where the argument supported stopping the crop trials it was
counted as supporting the GM trials antagonists. It would therefore be possible for
evidence from one person to appear on both sides of the debate (although this did not
happen in practice).
What information is left out when presenting evidence
Parties on both sides of the debate appeared to be selective in their use of evidence.
For example, the crop trials protagonists would generally not refer to uncertainties
in knowledge. They failed to note, for example, that the way in which ‘Substantial
Equivalence’ is interpreted varies between different regulatory regimes. The crop
trials antagonists failed to mention any of the safety testing currently being
carried out on GM crops or, when raising the dangers of antibiotic resistance gene
markers, failed to mention that the crops on trial do not carry the antibiotic resistance
gene and that these markers are being phased out. Both interests and values may
dispose protagonists to be selective about the information they use.
What predictions are being made about the future
The crop trials protagonists argued, for example, that the risks to health from GMOs
are tiny: GM crops are as risky as (or less risky than) the status quo. Appeal is made to
statements by authorities, such as the Food Standards Agency, about the lack of risk of
spread of antibiotic resistance from GM crops and ACRE’s conclusions that growing
these GM crops does not pose a threat to human health. The crop trials antagonists
refer to a high potential for the spread of antibiotic resistance. Appeal is made to
evidence based on observations of local people. Evidence from one local person
referred to concern about water running from the trial sites into the river where the
GM polluted water may be consumed by fish and hence by humans. This recognises
the complexities of what happens in practice as compared to what ought to happen in
theory.
The balance between risks and benefits is perceived differently by the two parties.
For people whose values are consistent with the acceptance of GM crops the question
is of ‘balancing risks’. For antibiotic resistance the claim was not that there were
no risks, but that the risk was minute compared to that from treatment of humans or
animals by antibiotics. This also demonstrates an example of where interests may be
tied to social roles. For politicians, whose interests should include protection of their
constituents, the ‘balancing risks’ argument may mean that more rigorous evidence of
safety is needed than is available at the moment. For example, the issue
of the extent of toxicological testing was raised and it was clear that the politician
concerned was by no means convinced of the adequacy of the testing regime
(although it was accepted practice) and the appropriateness of the balance of risks:
“[feeding trials had been carried out on] tens of birds and ten rabbits and the ten
rats. What benefit do you think the people of Scotland, who must live next to those
experimental fields, will get in the long term? What financial benefits will Bayer get,
as a company, in the future?”
It may be that people with value-based positions reject comparative risks altogether
and argue that if there is any risk, then we should be precautionary and not proceed.
Interestingly, one of the GM-trials antagonists was reluctant to say what levels of
testing would be satisfactory. The arguments over the adequacy of the concept of
‘substantial equivalence’ may have at root these different concepts of comparative
risks.
Use of connotations
Tidwell [8] describes how perceptions influence decision making, particularly in the
presence of uncertainty. He notes that the perceived probability of the occurrence of an
event is influenced by the event’s similarity to another event and the ease with which
people can recall instances of occurrence of the event. This is clearly understood and
appreciated by both sides in these debates in the Committee. The comparators used
by GM-trials protagonists are for example, classical breeding in agriculture and the
connotation is children (e.g. children playing and ingesting soil which will contain the
constituents of the GM construct used in the crops). The GM-trials antagonists used the
comparator of pharmaceuticals (for purposes of safety testing) and used connotations
of BSE, CJD, Tobacco and use of pesticides in agriculture. In using connotations, each
side will choose comparators that favour their position and arguments, although the
same will be true for either interest-based or value-based arguments.
Interests
That interests influence positions in disputes was well recognised
by the Health and Community Care Committee. Scientists giving evidence were
extensively interrogated about their involvement in the biotechnology industry and
any funding they may have received from it. An interesting dimension in these
debates was disagreement among the medical community about the health aspects of
GM crops. The Chief Medical Officer for Scotland (who was a former Secretary of the
British Medical Association - BMA) was far less concerned about the safety of GM
crops than representatives of the BMA. The Chief Medical Officer was intensively
questioned by the Committee as to his interests and whether being a Government
employee and adviser he was therefore toeing the Government line, rather than being
an independent member of the BMA.
Conclusion
We are at the very early stages of seeking to understand more about how values and
interests interact in disputes and how science interacts with values. Our aim over the
next few years is to identify the parameters underlying the interests/values dimension
and to understand how these influence how knowledge is used in risk debates and how
this understanding can further conflict resolution.
References
[1] Tait, Joyce, ‘More Faust than Frankenstein: the European debate about the precautionary principle and risk regulation for genetically modified crops’, Journal of Risk Research, 4(2), 175-189, 2001
[2] Sabatier
[3] Blomkvist, Anna-Christina, ‘Psychological aspects of values and risks’, in Sjöberg, Lennart (ed.) Risk and Society, Allen & Unwin, London, pp. 89-112, 1987
[4] Burton, John, Conflict: Resolution and Provention, The Macmillan Press Ltd, Basingstoke & London, 1990
[5] Environment Council, personal communication
[6] Weber, Max, Economy and Society, University of California Press, Berkeley, 1978
[7] Bruce, Donald & Bruce, Ann, Engineering Genesis, Earthscan, London, 1998
[8] Tidwell, Alan C., Conflict Resolved?, Continuum, London, 2001
[9] Bruce, Donald, ‘A social contract for biotechnology: shared visions for risky technologies?’, Journal of Agricultural and Environmental Ethics, xx, 1-1, 2002
[10] Miller, A., The Crucible in History and Other Essays, Methuen, London, p. 47, 2000
[11] Ewen, SWB and Pusztai, A., Lancet, 354, 1353-1354, 1999
[12] Quist, D. and Chapela, IH, Nature, 416, 602, 2002
Ethics of Safety Decisions
in the Railway Industry1
Dr Chris Elliott
Pitchill Consulting Ltd
Magalee, Moon Hall Road
Ewhurst GU6 7NP
Surrey, UK
Tony Taig
TTAC Ltd
10 The Avenue
Marston CW9 6EU
Cheshire, UK
Railway managers are faced with a continuing ethical challenge. Society demands
that railways are fast, cheap and safe. It is not possible simultaneously to achieve all
three. The media attach great significance to railway accidents, leading to demands
for massive investment or wide-ranging changes in operational practices that offer
little safety benefit. Railways have traditionally used some form of optimising safety
for a given budget, although in some countries it is unwise or even illegal to admit it.
What ethical principles should railway companies apply when deciding what is “safe
enough”?
1. Context
Rail is only a small part of the nation’s activities and there is a regulatory framework
to manage it. However, in UK rail:
• the railway has changed from being an arm of the State to a mixture of public and private ventures, albeit with increasing State involvement;
• European law is coming and will bring about change;
• the UK legal principle that every employer has a duty to reduce risks to a level that is As Low As is Reasonably Practicable (ALARP) does not sit easily with complex hazardous systems;
so there will be change. Also, the issue of the safety of the railway has attracted a
great deal of public comment and hostility to the rail industry that can be considered
to be out of all proportion to the level of risk. There are many unresolved debates that
might be but rarely are affected by ethical considerations, to which different types of
experts offer conflicting views. Regulators and dutyholders need a better rationale for
deciding which expert recommendations to accept.
1 The authors are grateful to Railway Safety for supporting this research and for permission to publish this paper.
More parochially, the rail industry needs decision criteria for standards, to know how
to respond to the European threat/opportunity, and to influence any future changes to
legislation to allow for systems. Questions include:
• what is the railway expected to deliver?
• where in the railway, and/or in government, should decisions be taken?
• how ‘good’ are the decisions made in rail when compared with other distributed industries such as food?
We usually want to make our own decisions about our own safety, and about our own
activities that give rise to risk. This is fine so long as we are the only ones at risk, and
when our actions to control our risks do not affect others.
However, many people and organisations, both public and private sector, are involved
in decisions about railways, with different responsibilities to different groups of
people. Organisations cooperate to deliver a transport service to the end customer;
no one organisation is responsible for delivering the whole of the service. Decisions
about railway safety thus involve risks to many people, and actions to mitigate risk
that affect many people – not least through the money spent on safety measures,
which can come only from increased fares, additional subsidy or reduced profits.
The railway industry in the UK has been particularly active since the King’s Cross and
Clapham accidents of the late 1980s in assessing risks, and developing policies and
decision processes which more directly target resources at risk reduction. This process
began before, and has carried on through and beyond, railway privatisation, and has
succeeded in producing sustained reductions in several important aspects of railway
safety risk. Yet many people and politicians remain, or have become more, dissatisfied
with railway safety.
The aim of this research was to stand back from current railway arrangements and
explore the fundamental ethics of decisions about safety. How does current railway
policy and practice compare with the expectations reasonable people have of it? Is
there a more solid ethical foundation on which railway safety decisions could be
built? Would this affect public and political satisfaction with railway safety?
2. Approach
We were not expecting to find “the answer” by consulting experts or published
works. Rather, we sought ideas and thoughts that might inspire a deliberative process
involving the study team and the sponsors of the work (Railway Safety), then bringing
in a widening circle of others as the debate develops.
The main tool was a series of interviews with a wide cross-section of prominent
individuals, chosen because they were believed to have well-formed views on how
society should operate and sufficient standing to lend weight to those views, but
not necessarily because of any great expertise on the immediate issues. Each was
provided with a briefing paper in advance (attached as Annex) and assured that the
interview would be under “Chatham House” rules, so that nothing that was said would
be attributed to the person being interviewed.
3. Responsible safety decisions
Three main themes emerged from this work:
1. There are circumstances in which an organisation may, morally, put others at
risk. Society constrains those circumstances, and either explicitly or implicitly
licences organisations whose activities give rise to risk for others.
2. The nature and degree of responsibility of an organisation towards people put at
risk by its activities depends critically on the relationship between the individual
at risk and the organisation creating or managing that risk.
3. Ethics will not help us establish what levels of risk are tolerable, or what values
should be applied in weighing safety against other factors, but does point towards
principles for organisations whose activities expose others to risk.
3.1. When may we expose others to risk?
The first point we wish to make here is that no person or organisation has a moral
right to expose another to risk. “Thou shalt not kill” is a principle that extends no
matter how far the risk is diluted. It cannot be acceptable for someone to be allowed
to impose a risk on another simply because the risk is small.
There are however many circumstances in which it is morally justifiable for
organisations to do things that involve risk for others. The most basic principles
identifiable for justification of such risks are that there must be some benefit or
legitimacy to the activity giving rise to the risk, that those at risk should where
possible give their informed consent, and that those creating or managing risks should
be competent in minimising them within reason. We rely on law and regulation to
constrain the freedom of people and organisations to create and accept risks, and also
to enable people and organisations to get on with legitimate activity without having
repeatedly to seek people’s specific consent.
3.2. Context and relationship factors
Having identified the broad nature of circumstances in which it is morally acceptable
for organisations to do things involving risks for others, we now consider the nature
of the obligation organisations have towards people legitimately put at risk by their
activities. These range from very limited (e.g. for soldiers expected to kill their
enemies in a war) through to extremely high (e.g. for organisations creating risk for
people who do not consent to and will not benefit more than anyone else from the
associated activity – such as the neighbours of a chlorine plant or a nuclear power
station). The context, and in particular the relationship between the organisation
creating or managing the risk and the person facing it, is critical. Some particular
relationships we have discussed which are relevant to railways are those of:
• an organisation whose activities put at risk people who are not direct beneficiaries of those activities and have not consented to the risk;
• a supplier organisation to its customers;
• an employing organisation to its employees; and
• an organisation whose assets may create risk for anyone abusing them, to those who practise such abuse.
3.3. General principles
The discussions about the nature and extent of responsibility of organisations towards
individuals identified some general principles that run across the whole range of
relationships and contexts above. These include:
• ensuring openness and transparency about risks and how they are controlled;
• continually searching for ways to reduce risk, unless they compromise other desirable outcomes;
• using resources competently and effectively to control risks;
• involving people in decisions that affect them as far as practicable; and
• working across the spectrum of areas for which the organisation is responsible, those for which it shares responsibility with others, and those it can influence but not control.
What our exploration of ethics and morality of safety decisions has NOT done is
indicate any absolute levels of risk that are more or less tolerable, or weightings and
values to be applied to safety in relation to other factors in decisions.
Once it is accepted that people should be involved in decisions that affect them,
it becomes immediately apparent that how they feel will weigh in any input they
make to decisions (or the frameworks and principles on which decisions are to be
based), regardless of any fundamental rights, risks or reasons. The marketing rule that
“perception is reality” will apply here too.
The process by which safety decisions are made is thus critical to the question of
ethical acceptability, rather than the specific criteria or values that underlie it. It is
the process that determines whether or not the safety decisions are consistent with
the requirements implied by the three “general themes” identified at the start of this
section:
• legitimisation by society;
• taking due account of relationships and context; and
• following the general principles of corporate responsibility outlined above.
3.4. Involving people in decisions
Involving people in decisions and gaining their consent is one thing when an individual
doctor is talking to an individual patient about a risky treatment to remedy an even
riskier medical complaint. It is quite another in a context like that of the railways
where there are millions of users, millions of non-users living near a railway line, and
an entire population with strong views about how the system should be run.
There is no suggestion that there should be widespread involvement of the populace
in every safety decision in a large and complex industry, or that uninformed whim and
opinion should take the place of considered professional expertise, or that democratic
institutions should be over-ridden by unelected groups of the citizenry. What has
happened in recent decades is that there has been a shift of public expectations on
their place in important decisions about public policy, from:
• “They” will take care of it and I don’t need to know, via
• “Tell me” what’s going on, to
• “Show me” the evidence this is what’s best, and now
• “Involve me” and let me have a say.
Relying on elected representatives to do this for us is a very blunt instrument; it buries
specific issues on which people have strong views and concerns in among many
others, and risks simple political polemic over-riding people’s real concerns and
preferences. Increasingly, both in government and corporate contexts, samples of the
people are involved in processes of active consultation. The participants are selected
statistically to be representative of general opinion but are not held to represent
anybody but themselves. They are thus different from “pressure groups” and other
supposed representatives that may have little or no democratic mandate but claim
representative authority.
4. What does this mean for the railway?
The messages are:
• ethical theory is not very helpful when considering how to take safety decisions – “right” and “wrong” lie in the context of the decision;
• there is little criticism of the principles by which the railway takes safety decisions; but
• the practice of safety decision taking is unsatisfactory.
The overall conclusion is that it is not correct to think in terms solely of a set of ethical
principles that may be applied by the railway to take decisions. Rather, the railway
has to involve rail users and society in deciding what sort of railway they want. The
goal is to establish some form of “social contract” on the basis of which the industry
and its funding and regulating organisations can decide how to strike the balance
between performance, cost and safety, and distinguish between a) situations requiring
explicit consultation and involvement by a wider audience, and b) situations where
the industry and its funding and regulatory bodies can make decisions within the
established framework, but without requiring specific wider consultation.
The historical democratic approach to such issues has been to assume democratically
elected representatives (Members of Parliament) were uniquely authorised to speak on
behalf of society. Parliament (or Ministers answerable to Parliament) also control the
regulatory framework and the funding of the railway and, before privatisation, they
also controlled the rail company as shareholders. The railway could thus discharge its
duty by responding to what Parliament demanded.
The growth of what is often called “civil society” has undermined those certainties.
Parliament is still supreme but democracy acts at many levels. The constituency to
which the railway must answer is much broader, encompassing passengers; pressure
groups; the Press; local, regional, devolved and European government; QUANGOs;
unions; and investors and shareholders. Building a social contract with this wider
constituency requires more subtle and sensitive techniques than are sufficient to deal
with Ministers and Parliament.
The UK railway is composed of private companies (or quasi-private in the case of
Network Rail). Nothing here differentiates public from private enterprises. The duties
are the same. Both have to earn the trust of society – there are examples of public
bodies that have lost trust and of private ones that retain it. Furthermore, the costs of
accidents to the industry are very high – there is no incentive to take excessive risks.
There is no conflict in principle between profit and safety.
There is however a strong ethical requirement to be competent. The rail industry
owes a duty of care to society not only to seek the balance of safety, performance
and cost demanded by society but also to be competent in achieving that balance. The
view that has emerged from evidence at accident inquiries, albeit greatly amplified
by sensationalist and inaccurate reporting, is that the industry has not always been
competent. Not only is this an ethical failure; it contributes strongly to the loss of
trust.
5. Findings & Conclusion
• There is nothing much wrong with the principles currently used by the rail industry and its regulators to make safety decisions, within the limited framing of “industry and government should decide such matters”.
• The framing of decisions, as matters for industry and government, is not ethically sound. There needs to be a wider process of active consultation and engagement of rail users and of society more widely, to establish a broader based “social contract” whereby the balance between safety, performance and cost can be established.
• The industry has no ethical duty or moral right to make judgements about what society wants. The industry DOES have a moral duty to ensure that the wider debate about such matters is well informed, and also to act competently to ensure that the mix of safety, performance and cost is the best that can be delivered within the agreed “social contract”.
We conclude that the railways and their regulators and government funding bodies
have failed to recognise the importance of building wider social consent into decisions
within the current legal framework. Their only response to safety concerns has been
to try harder to reduce safety risks, which has not addressed people’s concerns about
industry safety. The industry and government are in effect busy digging a hole; their
only response to concerns that it is the wrong sort of hole has been to dig harder.
Thomas Jefferson wrote:
“I know of no safe depository of the ultimate powers of the society but the people
themselves, and if we think them not enlightened enough to exercise control with
a wholesome discretion, the remedy is not to take it from them, but to inform their
discretion by education.”
Annex: Briefing paper “The ethics of technological risk”
What is risk?
Ever since a caveman decided to bring fire into the cave, we’ve been living with
risk. That caveman knew that fire was dangerous, but he decided that the benefits
of a warm home and cooked food more than compensated for the risk that his home
might catch fire. Since then, it is hard to think of any beneficial innovation, social or
technical, that didn’t bring with it the possibility of harm.
If society were to forbid any activity that might cause harm, we would lose the
benefit of electric lighting, surgery, transport, chemotherapy and much else, as well
as virtually all sports and entertainment. But equally we cannot allow total freedom
to do anything, whatever and whomever it might harm. Society has to find a way of
deciding what is acceptable and managing it to prevent catastrophe.
It is helpful to distinguish hazard (anything that can cause harm) and risk (the chance
that a hazard will cause harm, and the extent of that harm). The objective is then to
manage the risk, not the hazard. The caveman knew that fire was a hazard, but he
realised that, if he kept it in the hearth and made his children stand back, the risk was
low enough to be worth taking to have a warm cave.
That argument is enough when the person taking a decision will suffer the harm if
the hazard materialises and will receive the benefits if it does not. A serious ethical
challenge arises where individuals cannot decide for themselves whether to take a
risk, either because they do not have sufficient information or because they do not
have sufficient control. This is made even harder when the benefits and potential harm
do not fall to the same people, especially if the benefits occur now and the potential
harm is to future generations.
Someone has to take these risk decisions on behalf of society as a whole (including as
yet unborn members of society). “Someone” may mean elected politicians and public
servants, or it may mean industry. Whichever it is, they need a process which pays due
regard to the interests of everyone, allows for uncertain or inconsistent knowledge of
the risks, and reaches a decision – “do nothing” is itself a decision if it denies society
the benefits of an innovation.
Risk and the railways
Risks arising from railways have, especially in the UK, been the object of intense
public scrutiny. The facts, that rail is among the safest forms of land transport and that
the UK’s railways are safer now than at any time in their history, are sometimes lost
in political and journalistic statements. However, railway managers, political leaders
and safety regulators still have to make balanced decisions, on behalf of society, to
reconcile its demand for fast and affordable transport with the possibility of a train
crash. They need to ensure that those decisions reflect proper ethical values.
There is a limited amount of money to spend on improving safety. More from public
funds means less is available for hospitals, emergency services or education. More
from the passenger means that some will switch to using cars, increasing safety risks,
congestion and environmental harm. More from the shareholders will make it harder
for the companies to raise finance to fund improvements. How should the “safety
budget” be set?
The rail industry, in line with broader regulatory policy on management of hazardous
activities, currently adopts a hybrid approach aiming to minimise overall harms while
protecting at-risk individuals. Very high risks to identifiable groups of people are
considered intolerable, and activities must be stopped or changed to reduce the risks.
So long as the individual risks are small, safety initiatives are evaluated in terms of
their effectiveness (the deaths and injuries expected to be avoided) and cost. This
enables actions to be prioritised to get the most safety benefit for the budget. It also
helps decide how far is far enough in terms of safety; actions and initiatives may be
rejected if they involve disproportionate expenditure for only small safety benefit. The
railway industry has been particularly open and active in researching and publishing
the effective “Value of preventing a fatality” which it uses to underpin decisions about
which safety investments to pursue.
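The screening logic just described can be sketched numerically. All figures below are notional assumptions for illustration only (the VPF value, the "gross disproportion" multiplier, and both candidate initiatives are invented); they are not actual railway industry data, and the helper functions are hypothetical, not part of any published methodology.

```python
# Illustrative sketch of cost-effectiveness screening against a notional
# "Value of preventing a fatality" (VPF). All numbers are invented for
# the example, not real railway figures.

NOTIONAL_VPF = 2_000_000  # assumed VPF, in pounds

def cost_per_fatality_prevented(cost, fatalities_avoided):
    """Cost of an initiative divided by the statistical fatalities it avoids."""
    return cost / fatalities_avoided

def is_justified(cost, fatalities_avoided, vpf=NOTIONAL_VPF, disproportion=3):
    """A measure passes the screen unless its cost is grossly disproportionate
    to the safety benefit (here: more than `disproportion` times the VPF
    per statistical fatality prevented)."""
    return cost_per_fatality_prevented(cost, fatalities_avoided) <= disproportion * vpf

# Two notional initiatives competing for the same safety budget:
# lineside fencing: 1.5m pounds, expected to avoid 1.2 statistical fatalities
assert is_justified(1_500_000, 1.2)        # 1.25m per fatality -> pursue
# a resignalling scheme: 90m pounds, expected to avoid 4 statistical fatalities
assert not is_justified(90_000_000, 4.0)   # 22.5m per fatality -> rejected
```

The point of the sketch is only that a quantitative screen of this shape lets initiatives be ranked and cut off consistently, which is the practice the text describes as lawful yet publicly contested.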
This approach is endorsed by the law and upheld by the courts as a reasonable and
socially acceptable way to operate. But some people object to the use of any explicit
“Value of preventing a fatality”, and to the structured and quantitative evaluation
of risks and costs, equating it to putting a price on life. The very people who have
struggled hardest to maximise safety may then be vilified, or even be found guilty by
the press of having failed in their duty, while advocates of ineffective or unaffordable
safety solutions are lauded.
The railway industry would welcome a wide debate on the ethical basis of its safety
decisions, to help it to understand where its moral duties lie.
Nuclear Waste and Ethics
Herman Damveld
Selwerderdwarsstraat 18
9717 GN Groningen
Nederland
[email protected]
1. Summary
In recent years, at almost all conferences on the storage of nuclear waste, ethics has
been considered an important theme. But what is ethics? We will first give a sketch of
this branch of philosophy. We will then give a short explanation of the three principal
ethical theories. In the discussion about storage of nuclear waste, the ethical theory of
utilitarianism is often implicitly invoked. In this system future generations weigh less
heavily than the present generation, so that people of the future are not considered as
much as those now living. We reject this form of reasoning.
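The mechanism by which utilitarian cost-benefit analysis down-weights future generations can be made concrete. The 3% rate below is an arbitrary illustrative assumption; the point is only that any positive discount rate drives harms to distant generations toward zero.

```python
# Illustration of utilitarian discounting: a harm occurring T years in the
# future is valued at harm / (1 + r)**T today. The 3% rate is an arbitrary
# assumption for the example.

def present_value(harm, years_ahead, rate=0.03):
    """Present value of a fixed harm occurring `years_ahead` years from now."""
    return harm / (1 + rate) ** years_ahead

harm = 1_000_000  # some fixed harm, in arbitrary units

assert present_value(harm, 0) == harm            # today: counted in full
assert present_value(harm, 100) < 0.06 * harm    # next century: under 6%
assert present_value(harm, 1000) < 1e-12 * harm  # distant future: effectively zero
```

On this arithmetic, a harm to people a thousand years hence, on the timescale of long-lived nuclear waste, counts for practically nothing, which is the reasoning the text rejects.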
The discussion about nuclear waste is also sometimes pursued from ethical
points of departure such as equality and justice. But many loose ends remain in these
arguments, which gives rise to the question of whether the production and storage of
nuclear waste is responsible.
2. Dictionary definition
Ethical considerations pervade discussions of the storage of nuclear waste. In the past
few years, the discussion has come to involve not only technical but also ethical
questions. The Nuclear Energy Agency (NEA) of the
OECD, a cooperative venture of the 25 richest countries in the world, dedicated a
two-day workshop to ethics in 1994.[1] Following the workshop, the NEA published
a joint opinion on ethical and environmental aspects of storage of nuclear waste.[2] A
NEA bulletin from 1996 about information to the public about nuclear waste storage
covers, among other things, ethical topics.[3] It is noteworthy that ethicists played no
part in the discussion.
In these NEA publications the question, “What is ethics?” is answered by looking
up a definition in a dictionary. The influential American ethicist James Rachels
suggests in his book, “The Elements of Moral Philosophy”[4], that it would be
convenient to begin with a simple, uncontroversial definition of what ethics is. But,
according to him, that is impossible. An overview of a number of ethical theories
can, however, be given. In this article we will begin to do this, and we will apply the
theories to the storage of nuclear waste.
3. Ethics, morals, norms, and values
Ethics is the field that occupies itself with the study of morals. This study can be of
a descriptive or of a prescriptive nature. Ethics can also be described as a systematic
consideration of responsible action, the systematic study of moral behavior[5]. To be
responsible means to give arguments for one’s behavior, to enter into a debate with
others as to the reasons for and against a specified way of acting. At that moment
ethics is in full stride.[6]
The concept ‘moral’ points to the entirety of the rules of behavior that are taken
for granted in a society or in a community; morals are an expression of a certain
lifestyle, in which the norms and values of a community are reflected. Morals indicate
what should be done if one wishes to become or remain a member of a society.[7] In
general, with morals we mean the things, material as well as immaterial, which people
find truly important: it makes a difference if they are present or not. It has to do with
the good life and the good society. The good life, of course, means much more than
merely financial well-being.
A norm is a rule that says something about the manner in which individuals should
behave. Norms are the concrete rules and rights that function as practical guidelines
for behavior, and that give substance to values. They are rules to give direction and to
make judgements. Norms are mainly formulated negatively; they limit our behavior;
they are more likely to show what is forbidden than what is permitted.
A value is a point of departure, a view of what someone finds important. Values are
abstract ideals or goals towards which people strive by acting in a certain way. Values
are in a certain sense vague. They are mainly positively formulated: they indicate
something that is important and worth striving for.
In order to realize certain values as much as possible, specific norms are
formulated. Thus, values in ethics are more fundamental than norms. Only when we know
what is worthwhile, do all the norms and rules in daily life take on meaning for us.
Norms must be formulated from values, not values from norms. Norms change; they
are dependent on a specific time and culture. Values have a more permanent character.
Norms and values hang closely together: from a value, for instance friendship, there
arises a norm, for instance honesty. Norms and values form the subject matter of the
study of ethics.[8,9]
A virtue is a good quality in human character. The moral virtues are the virtues that
it would be good for everyone to have.[10] Some examples of virtues are: solidarity,
trustworthiness, wisdom, honesty, thriftiness, justice, responsibility, courage, and
care.[11]
Virtues and important values are closely related. The ethicist Van Tongeren
describes it thus: “Virtue is a position in which a value relationship has become
concrete and in which a norm has become internalized and related to an orientation
of the good life.”[12] An illustration: the 189 member states of the United Nations
concluded the Millennium Summit on September 8th, 2000 by adopting six ‘basic values’:
freedom, equality of individuals and countries, solidarity, tolerance, respect for
nature, and shared responsibility.[13]
Rachels has developed the concept of a minimal ethics, made up of the common
core of the different ethical theories. According to Rachels, ethics is, at least, the
attempt to act according to the best reasons, in which the point of departure is that
the interests of all people affected by the actions weigh equally. We must be able
to support our judgements with good reasons, and explain why these reasons apply.
Rachels shows that some moral rules are commonly held in all societies. Without
these rules, a society cannot exist. As an example he cites not lying, and not killing.
These are rules that are in force in all viable societies.[14]
In everyday language, ‘ethics’ stands for something that is good, as in ‘ethical
investments’. That is another meaning of the word ‘ethics’. Here the word ‘ethical’
points towards criteria for good business management. We could think, for instance,
of norms such as good working conditions, and no environmental pollution. ‘Ethical’
has here a good connotation, but implicitly it makes a judgement about the policies of
businesses. ‘Ethics’ serves here as a selling point.
Opposite ‘ethical’ stands ‘unethical’, which has a stronger meaning: it indicates that
accepted norms are not met. Suppose that someone knows
that a certain company makes use of child labor in the Third World, and that this person
nonetheless buys from that company. Some people call this ‘unethical’, because it is
in opposition to norms and values that are considered to be generally valid.
4. Three ethical theories
In all sorts of problems, the ethicist is called in. But ‘the’ ethicist does not exist,
as the following classification shows. There is the subject of meta-ethics, in which
ethics as a field is considered. The question is then whether a moral expression is true
or can be true. Next we distinguish normative ethics, which theorizes about norms and
values, and includes judgements about which norms and values are better. This is the
largest sub-division of ethics. The third area is applied ethics. Here existing ethical
theories are applied to concrete questions.
4.1 Utilitarianism
With a question about nuclear energy, one does not need the help of the meta-ethicist,
but someone from one of the other two disciplines. The answer that one receives
depends on the theory to which the ethicist adheres. Utilitarianism is an important
branch of ethics.
Utilitarianism was first elaborated into a full theory by the Englishman Jeremy
Bentham (1748-1832). He posits that people take account of the benefit or harm that
their actions produce. In the first instance, that means benefit or harm for oneself. From
experience, a person knows what is beneficial or harmful. According to Bentham,
what produces good is beneficial for us, while what results in bad is harmful. Good,
then, is what makes us happy and provides pleasure.
Utilitarianism is met with, among other places, in the cost-benefit analyses made
by governments, or in debates about the usefulness and necessity of expanding an
airport or laying a new rail line. The utilitarian will lay emphasis on economic yields
(the degree of increase of happiness) versus damage to nature (deterioration of the
perceived value of nature means a decline in well-being, and thereby a decline in
happiness). The ethicist makes a sort of cost-benefit analysis.
4.2 Kantian ethics
A second important branch of ethics was developed by the German philosopher
Immanuel Kant (1724-1804). The central question is what people should do, what
the highest principle of morality is. Much behavior is directed by the terms ‘should’
or ‘ought’. We have a certain wish, and in order to accomplish it, we must perform a
certain action. Kant called this the ‘hypothetical imperative’, because it tells us what
we must do on condition that we have certain desires.[15] Moral laws, on the other
hand, are, according to Kant, not dependent on certain desires. The moral law is an
objective principle, valid for everyone. It is the principle on the basis of which people
ought to act: Kant calls this the ‘categorical imperative’. It is described thus: act only
according to the principle you would like to be established as a universal law.[16]
The categorical imperative is a recipe for action that is unconditional and binding for
everyone, and for all time. Kant calls this a duty. For this reason, Kantian ethics is also
called duty ethics.
4.3 Ethics of virtue
There is an enormous revival of the ethics of virtue, the third important branch of
ethics. In the ethics of virtue Aristotle (384-322 B.C.) is an important figure. He asked
himself what the good life was, what made life have meaning. His answer was that it
is a question of finding the proper balance between rational and irrational impulses.
The community plays an important role in this context. Only in a community can
people arrive at full development of their talents. The good life is formed, therefore,
in relation with others. The term ‘the good life’ in the ethics of virtue, has nothing to
do with a rich individual who spends his time fishing and drinking champagne.
Dr. Martin van Hees, professor of ethics in the philosophy department of the
University of Groningen, has an explanation for the revival of the ethics of virtue:
“I often use the example of a visit to a sick person. Suppose that you are lying in the
hospital and your best friend comes to visit. You thank him for his visit and he says:
“As a friend I do my moral duty”. You understand then that your friend is acting under
the influence of Kantian ethics, and you are deeply disappointed. This makes clear
the limitations of acting according to duty. The qualities that make living valuable are
missing.”[17] The philosopher Paul van Tongeren writes[18]: “For a long time the
idea of ‘virtue’ has been thought almost suspect. It seemed a symptom of a moral idea
especially associated with middle-class respectability... Not until our time could a true
revival of the ethics of virtue be discussed. An important book for this renewal was
MacIntyre’s “After Virtue” (1981)”.
4.4 Choice of approach
After this exposition of some important ethical systems the question remains which
one is of relevance for the issue of nuclear waste. We encounter chiefly utilitarianism
and Kantian ethics.
In Kantian ethics justice is of great importance. Likewise, it is a central virtue
in the ethics of virtue, as philosopher Ad Verbrugge writes: “Justice is called the
perfect virtue, which contains all the other virtues, and above all in which the rights
of one’s fellow men are recognized... This perfect virtue of justice contains, then, not
only one’s proper well-being, but that of others as a criterion for action.”[19] In the
following we will emphasize justice in our treatment of nuclear waste. This is partly
an ethics-of-virtue line of approach. A fuller treatment of the ethics of virtue and its
application to nuclear waste must wait for another occasion.
5. Ethics and nuclear waste: utilitarianism
The environmental ethicist Constantine Hadjilambrinos published an analysis of
the debate over the ethical aspects of storage of nuclear waste.[20] Hadjilambrinos
notices a recurring phenomenon: the analyses are usually performed not by ethicists,
but by representatives of the government or people in the nuclear industry. These people
often adhere tacitly to the ethical system of utilitarianism. The reason for this is that
utilitarianism offers an apparently reasonable social goal, namely the maximization of
social benefit. This goal can be approached in a rational, methodical, and quantitative
manner.
There are a number of objections to utilitarianism. Calculations for the distant
future assume that people in, for example, 10,000 or one million years will be
as sensitive to radioactivity as the people of today. But there is no basis for this
supposition. In fact, it is assumed that future people will show the same behavior as
people of today, but this cannot be known. The consequences of the risks for social
benefit therefore cannot be calculated with any degree of certainty. This is as far as
Hadjilambrinos's analysis takes us.
In the Netherlands the environmental philosopher Wim Zweers has criticized the
utilitarian manner of thinking about nuclear waste. He analyses the question of what
sacrifices (for instance, death from cancer caused by radioactivity) we ought to make
for material wealth and economic growth.
Zweers is extremely doubtful about utilitarianism. The main problem with
utilitarianism is that it does not include attention to human rights. The theory is
contrary to the idea of inalienable individual rights concerning life and health and
concerning equal protection for all.[21] With this argument Zweers aligns with the
analysis of the American ethicist Kristin Schrader-Frechette.[22] Because future
generations have not profited from the production of nuclear waste, so Schrader-Frechette argues, they would probably not agree to be exposed to its risks. With
the choice for underground storage this generation is making choices for future
generations. In a democracy choices are indeed made for people, but the question is
whether that may be also done for future generations. Schrader-Frechette denies this,
and gives three reasons why not. First, it is unclear if a majority (figured over a longer
time-scale) supports underground storage, and even less clear that such a majority can
agree to the requirements which such storage entails. Second, she points out that even
in this generation a majority does not support underground storage. The third reason
that this generation does not have the right to decide for those to come is the unequal
distribution of risks, which fall more heavily on those to come. It is, according to
Schrader-Frechette, doubtful whether this generation can act as a representative of
future generations.[23]
5.1 The future counts less
Utilitarianism is, as we have indicated, an important branch of ethics. It aims at
a maximization of benefit, a summing up of good minus bad. Utilitarianism
is a universal theory: everyone should be counted, thus also people not yet born.
But a study by the ethicist Hilhorst shows that utilitarianism does not give
everyone the same weight: determining the happiness and suffering of future
generations is more problematic than that of people now living. We are not as sure
about future generations as we are about ourselves. This is a reason to give full weight
to the people of today and not to those of the future: the importance of future people
receives successively less weight as they become more distant in time.[24]
With a discount rate of five percent, the victims in the coming year count more than
a thousand times as heavily as those 200 years from now. Still it is not clear that the
moral consequences of future happenings, such as the death of people, diminish by x
percent per year. Hilhorst concludes that there are limits to what is morally acceptable:
“But on this basis to write off the distant future, only because it is far from the present,
is unfair.”[25,26] All the more reason, then, to reject utilitarianism as an ethical
system when it comes to storage of nuclear waste.
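The compounding behind this discounting argument can be checked with a few lines of arithmetic. The sketch below is our own; only the five percent rate comes from the text, and the exact factors are not Hilhorst's.

```python
# Discounting arithmetic behind section 5.1. The five percent rate is
# from the text; the exact factors below are our own check.

def weight_ratio(rate: float, years: int) -> float:
    """How many times more heavily a victim today counts than a victim
    `years` from now, under annual discounting at `rate`."""
    return (1.0 + rate) ** years

# At five percent the factor passes 1000 after roughly 142 years ...
assert weight_ratio(0.05, 141) < 1000 < weight_ratio(0.05, 142)
# ... and over 200 years it grows to roughly 17,000-fold.
print(round(weight_ratio(0.05, 200)))
```

Whatever the exact factor, the moral point stands: under annual discounting, harms in the distant future all but vanish from the calculation.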
6. Nuclear waste, equality, and justice
The NEA considers equality within and between generations an important ethical basis
for storage of nuclear waste.[28] By this is meant a just division, which is not further
elaborated. The question now is what the NEA's reasoning is worth. Equality and
justice find a place in the Kantian ethics which we considered here as the second main
branch of ethics. What they have in common is the so-called golden rule, according to
the German ethicist Ernst Tugendhat.[29] This is found in the saying: “What you do not want
done to you, do not do to others.” In the Bible we find: “Always treat others as you
would like them to treat you” (Matthew 7,12). The golden rule is not only found in the
New Testament, but also, among other places, in Buddhism, Hinduism, Islam, and the
writings of Confucius.[30] This rule does not by itself guarantee moral behavior, but
tries to establish minimal conditions that make possible the performance of concrete
actions in the long term.[31]
The following three points are central: 1. Do not harm others (this is called
‘negative duty’, the duty not to do certain things); 2. Help others (positive duty); and
3. Arrive at rules for working together, such as: Do not lie and Keep promises.
Tugendhat describes his (Kantian) outlook on ethics thus: “Act towards people,
including your own person, in such a way that the person is seen always as an end
and never merely as a means.” This means you should never use others merely as a
means to achieve your ends. Another way of saying this is: “Act, towards everyone, as
you would like, from the perspective of a random person, everyone to act.” The ethic described here is
universal and egalitarian, and considers everyone as having equal value. Everyone has
equal rights, and at the same time equal duties, towards everyone else.
In this context, Tugendhat refers to human rights. These human rights are set down
by the United Nations. These are rights people extend to themselves and to all others.
Human rights are not ‘from nature’ or ‘God-given’ but come from people themselves.
People give themselves rights. That means that it is possible to go to a judge if your
rights are overstepped.
6.1 Theory of justice
No ethical vision of justice can fail to mention John Rawls’ important book, “A
Theory of Justice”[32]; since its first publication more than 3000 books and articles
relating to it have appeared.[33] Because the NEA does not discuss it, we will give
here some of Rawls’ main arguments.
Rawls calls his theory of justice “justice as fairness”. Justice means that the loss
of freedom for some cannot be made good by gains for others. Justice does not permit
that the sacrifices a small number of people make be compensated by increased
advantages for a larger group. In a just society, freedom and equality for everyone is
a prerequisite.
A society is well-ordered when:
1. Everyone knows and assumes that the others accept the same principles of
justice; and
2. The important social institutions fulfil the principles.
Rawls distinguishes two principles of justice.
According to the first principle, everyone has the same right to as wide as possible
a system of basic freedoms. The basic freedoms are political freedom, freedom of
speech and assembly, freedom of conscience and thought, integrity of the person, and
private property. All these freedoms fall under the first principle. The second principle
applies to the division of income and wealth. Social and economic inequality can only
be justified if a system exists with equal chances for everyone and the inequalities are
to the advantage of the least-advantaged members of the society.
The two principles are emphatically in this order: the first principle comes before
the second. Limitations of the basic freedoms cannot be compensated by greater
economic advantages.
The principles are universal. Everyone can understand the principles and take
part in applying them. A principle will be discarded if an internal contradiction
results when everyone acts according to that principle. The principles of justice
mean that people do not consider each other as means but as ends in themselves. This
is a Kantian interpretation of justice. Kant begins with the proposition that moral
principles are the object of rational choice. They define the moral laws that a rational
person will use as his or her guideline for behavior. The moral law must be acceptable
and available to everyone. People are free and equal. These things hold also for the
principles of Rawls.
6.2 Nuclear waste and justice: the present generation
The NEA talks about equality and justice within the present generation. This raises
many questions, as the work of Schrader-Frechette shows.[34] Nuclear waste storage
sites are established in rural areas, far from population centers. Is it fair that someone
should be made to run a risk just because he or she lives in the country? Should
the local government or the local population be able to exercise a veto, even when
it appears that the site chosen is the best one in the whole country? Or should the
government be able to designate a site?
The third dilemma has to do with the level of protection. When are risks
acceptable? For this the government has calculated an average risk. But an average
risk for the whole population does not necessarily mean that the individual risk is
acceptable. In fact, this is a case of a limitation of the basic freedoms in the theory
of Rawls. In consequence, application of the theory of Rawls to the discussion about
nuclear waste leads to the conclusion that storage of waste is contrary to the first
principle, and therefore is unjust.
6.3 Nuclear waste and justice between generations
Storage of nuclear waste brings a risk for future generations. Already in 1980 Robert
Spaemann made the following analysis: [35]
It is in the nature of human actions that they have side effects. Actions are aimed
towards goals. All of the other effects of actions, then, are lowered to the level of
secondary effects, means, and costs. The difference between means and side effects
lies in the fact that means are still wanted, while side effects are unwanted, thrown
into the bargain, so to speak. The secondary effects of human actions can concern people
who could not participate in the making of the original decision, because at the time
of the decision they were under age or not yet born.
With nuclear energy and nuclear waste there is the problem of irreversibility of
released radioactivity. In order to make use of nuclear energy for perhaps the next
30 years, we create radioactive waste that will remain dangerous for thousands of
generations.
The releasing of radioactivity creates a situation that cannot be undone by any later
decision. The coming generations must accept this situation as an unchangeable and as
such unfruitful part of their lives. This means that a minority (the present generation)
makes a decision for which the majority (the future generations) must pay the costs.
Therefore the use of nuclear power is not ethically responsible. The state is the
body that is responsible for a judgement over long-term results of human actions. And
therefore the state must prevent the opening of nuclear power plants, claims Robert
Spaemann.
7. Nuclear waste and the greenhouse effect
Nuclear energy has been called the solution for the greenhouse effect. That
nuclear energy also creates nuclear waste is taken for granted. But is nuclear energy
the solution for the greenhouse effect? We give here three arguments:
1. Worldwide, 441 nuclear power plants furnish 6% of energy production.[36]
Nuclear power plants produced 2544 billion kilowatt hours in 2002. Combating the
greenhouse effect demands a steep increase in the number of power plants, but that
is not taking place. Between 1988 and January 2003, the number of nuclear power
plants worldwide has only increased by 12.[37] The International Energy Agency
expects that in the next ten to twenty years, except in South Korea and Japan, few or
no nuclear plants will be built in OECD countries.[38]
Strong growth of nuclear energy would use up the limited supply of uranium.
Had the expansion predicted in the seventies come true, uranium supplies would
already be running out within the next five years.[39,40] According to a report
from the Dutch government from 2002, the proved uranium supplies are perhaps just
enough to fill the expected worldwide demand until 2050.[41] Precisely because of
the limited supply of uranium, the nuclear industry wanted to convert to breeder
reactors, but that has proved a failure.[42]
2. Nuclear energy is not CO2-free, as the Nuclear Energy Agency wrote in 1998.[43]
What is meant is the indirect CO2 emission caused by the building of the plant itself
and by the first step of the fission process, the extraction and refining of the uranium
ore. In the future, the indirect emission will increase, because recourse must be had
to poorer ores; the poorer the ore, the more of it must be turned over and processed
in order to yield a given amount of uranium, and the more energy is needed.
A study by the Dutch writers Storm van Leeuwen and Smith, which appeared in
September 2002, shows that the CO2 emission related to nuclear energy is not
even much less than that from a gas-fired power plant. They conclude:[44] “The
use of nuclear power causes, at the end of the road and under the most favourable
conditions, approximately one-third as much CO2-emission as gas-fired electricity
production. The rich uranium ores required to achieve this reduction are, however,
so limited that if the entire present world electricity demand were to be provided by
nuclear power, they would be exhausted within three years. Use of the remaining
poorer ores in nuclear reactors would produce more CO2 emission than burning fossil
fuels directly.
The remarkable conclusion of this research might prompt one to ask, “How is it
possible that an entire energy industry was built up when in fact, using all available
resources, it could only provide such a small amount of electrical power?” The
full energy content of the 0.71% 235U in natural uranium could be converted into
electricity (with essentially no losses, except for the unavoidable loss when heat
energy is converted into electrical energy). As our calculations show, this is a far
cry from reality. The magnitude of the losses become clear when the energy costs of
energy production are taken into account... The largest unavoidable energy cost is that
of mining and milling the uranium ores. To calculate this we use only the data on these
processes provided by the industry itself. The rich ores that are at present exploited
need very little energy for exploitation, but the useful energy content of the known
rich ores is quite small (under the assumption that only the 235U is “burned”). When
they are exhausted the energy needed for the exploitation of leaner ores will require
more energy from fossil fuels to be able to function than the nuclear power-plant will
provide, so that a nuclear power-plant would become an inefficient gasburner.” So
much for the study of Storm van Leeuwen and Smith.
3. The nuclear industry counts on a negligible CO2 emission through nuclear energy.
The same holds for the Nuclear Energy Agency in Paris. The NEA has a scenario
in which by the year 2100, 18 times as much power as now would be produced by
nuclear energy. That would require the establishment of, on average, 60 large nuclear
power plants per year. But even with the planned expansion of nuclear power plants,
the CO2 emissions for the year 2100 (in fact a low estimate) are reckoned at only
4% less than in 1990.[45]
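The build rate in the NEA scenario can be roughly reproduced from figures elsewhere in this article. In the sketch below, the 357 GW of installed capacity is taken from footnote 37; the 1 GW size of a “large” plant and the 2003-2100 time span are our own assumptions.

```python
# Rough check of the NEA scenario: an 18-fold expansion of nuclear
# capacity by 2100. The 357 GW starting capacity is from footnote 37;
# the 1 GW "large plant" size and the 2003-2100 span are assumptions.

current_gw = 357                 # installed capacity, January 2003
target_gw = 18 * current_gw      # NEA scenario for 2100
years = 2100 - 2003

# New 1 GW plants needed per year, ignoring the replacement of
# retiring plants (which would push the figure higher still).
plants_per_year = (target_gw - current_gw) / years
print(round(plants_per_year))
```

This yields roughly 60 large plants per year, consistent with the figure in the text, and it understates the required effort, since plants retiring before 2100 would also have to be replaced.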
8. Conclusion
In conclusion, we propose that justice requires that future generations should not
begin life worse off than we do. Storage of nuclear waste can result in harm for future
generations, while they can obtain no advantage from it. That makes the application of
the principle of justice difficult. Justice includes responsibility for the consequences
of our actions. With nuclear power, that calls for responsibility for hundreds of
thousands of years. This goes beyond our power of reckoning.
Justice could be the ethical basis for storage of nuclear waste if it averted a still
greater harm to future generations, such as the greenhouse effect.
However, in the last section we have shown that nuclear energy is no solution to the
greenhouse effect. On the basis of the principle of justice, then, dealing with nuclear
waste is problematic.
References:
1. Nuclear Energy Agency, “Environmental and ethical aspects of long-lived
radioactive waste disposal”, Proceedings of an International Workshop organized
by the Nuclear Energy Agency in co-operation with the Environment Directorate,
Paris, 1-2 September 1994.
2. “The Environmental and Ethical Basis of Geological Disposal”, A Collective
Opinion of the Radioactive Waste Management Committee of the OECD Nuclear
Energy Agency, Paris 1995.
3. Nuclear Energy Agency, “Informing the Public about Radioactive Waste
Management”, Proceedings of an International Seminar, Rauma, Finland, 13-15
June 1995, NEA, Paris 1995.
4. James Rachels, “The Elements of Moral Philosophy”, McGraw-Hill, Third
Edition, 1999, p.1.
5. Joosje Buiter-Hamel, “Ethiek basisboek”, Wolters-Noordhoff, Groningen, third
printing, 1998, p.18.
6. H.A.M.J. ten Have et al., “Medische ethiek”, Bohn Stafleu Van Loghum, Houten/
Diegem, 1998, p.8.
7. H.A.M.J. ten Have et al., op.cit., p.9.
8. H.A.M.J. ten Have et al., op. cit., p. 16, 17, and 18.
9. Joosje Buiter-Hamel, op.cit., p. 35.
10. James Rachels, op. cit., p. 178.
11. Joosje Buiter-Hamel, op. cit., p. 35.
12. Paul van Tongeren, “Waarom deugdenethiek”, in: Wijsgerig Perspectief, 42, 2002,
number 1, p.3-10.
13. General Assembly Plenary -1a- Press Release GA/9758, 8th Meeting (P.M.),
September 8th, 2000.
14. James Rachels, op. cit., p. 19, 30, and 49.
15. James Rachels, op. cit., p.122-125.
16. Immanuel Kant, “Fundering voor de metafysica van de zeden”, Dutch translation
by Th. Mertens, Amsterdam, 1997.
17. Interview with Van Hees, November 21, 1999.
18. Paul van Tongeren, op. cit., p.3-10.
19. Ad Verbrugge, “Denken om het eigen goede leven”, in: Filosofisch Perspectief,
42, 2002, number 1, p.22-37.
20. Constantine Hadjilambrinos, “An Egalitarian Response to Utilitarian Analysis of
Long-Lived Pollution: The Case of High Level Waste”, in: Environmental Ethics,
vol.22, Spring 2000, p.43-62.
21. Wim Zweers, “Reactie op samenvatting en vragen van de Stuurgroep”, publication
of the Stichting Stichtse Milieufederatie, De Bilt, July 11, 1982.
22. K.S. Schrader-Frechette, “Nuclear Power and Public Policy: The Social and
Ethical Problems of Fission Technology”, Dordrecht, Reidel, 1980.
23. K.S. Schrader-Frechette, “Burying Uncertainty: Risk and the Case Against
Geological Disposal of Nuclear Waste”, University of California Press, Berkeley/
Los Angeles/ London, 1993, p. 195-199.
24. M. Hilhorst, “Verantwoordelijk voor toekomstige generaties?”, Kampen, 1987,
29-44.
25. M. Hilhorst, op. cit., p. 45.
26. M. Hilhorst, op. cit., p.43-45.
27. K.S. Schrader-Frechette, “Ethical Dilemmas and Radioactive Waste: A Survey of
the Issues”, in: Environmental Ethics, vol.13, Winter 1991, p. 327-343.
28. see footnote 2.
29. Ernst Tugendhat, “Vorlesungen über Ethik”, Suhrkamp, Frankfurt am Main,
1995.
30. Hans Küng, “Weltethos für Weltpolitik und Weltwirtschaft”, Piper Verlag,
München, third printing, 1998, p. 140.
31. Christof Hubig, “Technik- und Wissenschaftsethik: Ein Leitfaden”, second
edition, Springer, Berlin, 1995, p. 116.
32. John Rawls, “A Theory of Justice”, revised edition, Cambridge, Mass., 1999.
33. M.J. Trappenburg, “John Rawls”, in: P.B. Cliteur and G. A. Van der List (eds.),
“Filosofen van het hedendaagse liberalisme”, Kampen, 1990, p.91-105.
34. K.S. Schrader-Frechette, “Risk and Rationality”, University of California Press,
1991; same author, “Research in Philosophy and Technology”, in: Frederick Ferre
(ed), Technology and the Environment, 12, p. 147-155, University of Georgia, JAI
Press Inc., 1992; same author, “Nuclear Energy and Ethics”, Geneva, 1991, World
Council of Churches.
35. Robert Spaemann, “Technische Eingriffe in die Natur als Problem der politischen
Ethik”, in: Karl Otto Apel et al., “Praktische Philosophie/Ethik”, Fischer
Taschenbuch Verlag, Frankfurt am Main, October 1980, p. 229-248.
36. Nuclear Energy Agency, “Nuclear Energy and the Kyoto Protocol”, Paris, July
2002, p.19.
37. At the end of 1988, there were 429 nuclear power plants worldwide, and 105
under construction. By the end of 1994, the number of nuclear plants reached 432,
with a total capacity of 340 gigawatts (GW; 1 GW = 1000 megawatts (MW); 1
MW = 1000 kilowatts); by the end of 1995 there were 437, with a capacity of 343
GW; at the end of 1996, there were 442, producing 350 GW. In January 2003,
there were 441 nuclear power plants in operation, with a capacity of 357 GW,
and 28 nuclear plants under construction (information from the World Nuclear
Association, London).
38. Caroline Varley and John Paffenbarger, “Electricity Market Competition and
Nuclear Power”, reading given at The Uranium Institute, 23rd Annual Symposium,
September 10-11, 1998, London.
39. IAEA, 1974 Annual Report, Vienna; Joop Boer, Steef van Duin, Jan Pieter
Wind and Herman Damveld, “Atoomafval in beweging: Een overzicht van de
problematiek van het radioactief afval”, De Haktol, Arnhem, 1982, p. 7.
40. In 1975 the IAEA estimated that by the year 2000 about 2,300,000 MW would be
at the disposal of nuclear plants. In 1997, IAEA made another estimate for 2000,
of 368,000 MW, a reduction by a factor of 6.5. Had the estimate from 1975 been
correct, the uranium supply would now be sufficient for only about 6.5 years instead of
40 years.
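The reduction cited in this note can be checked with a short back-of-the-envelope calculation; this is only an illustrative sketch, with all figures taken from the note itself.

```python
# Back-of-the-envelope check of the figures in note 40.
# All numbers come from the note; nothing else is assumed.
estimate_1975_mw = 2_300_000  # IAEA 1975 projection of nuclear capacity in 2000
estimate_1997_mw = 368_000    # IAEA 1997 projection of nuclear capacity in 2000

factor = estimate_1975_mw / estimate_1997_mw
print(f"reduction factor: {factor:.2f}")  # 6.25, i.e. roughly the cited 6.5

# If capacity had grown as projected in 1975, a 40-year uranium supply
# at the lower consumption level would shrink proportionally.
supply_years = 40 / factor
print(f"supply would last about {supply_years:.1f} years")  # 6.4 years
```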
41. Dutch Parliament, 2001-2002 session, 28241, n.2, p.10.
42. Herman Damveld, “De onvervulde belofte van kweekreactoren”, Technisch
Weekblad, March 1, 2002.
43. Nuclear Energy Agency, “Nuclear Power and Climate Change”, April 1998,
p.11.
44. J.W. Storm van Leeuwen and P.Smith, “Can nuclear power provide electricity for
the future; would it solve the CO2 emission problem?”, revised edition, March 28,
2002, on: www.oprit.rug.nl/deenen/
45. Nuclear Energy Agency, “Nuclear Energy and the Kyoto Protocol”, Paris, July
2002, p.42-44.
Questions and Advice
to the Swedish Radiation Authority
in their Current Work on Radiation Safety
from Participants in Focus Group Discussions
in the Municipalities of Östhammar
and Oskarshamn
Britt-Marie Drottz-Sjöberg
Department of Psychology
Norwegian University of Science and Technology (NTNU),
N-7491 Trondheim, Norway. [email protected]
ABSTRACT
In connection with its work on developing a “general advice document”, based
on the radiation protection law, the Swedish Radiation Authority (SSI) initiated a
process in 2002 that welcomed comments and suggestions from the general public,
specifically from representatives and interested parties involved in the work related to a
Swedish repository for high-level nuclear waste. The authority held a seminar in
September and presented the forthcoming task. The present paper summarises and
exemplifies discussions in focus groups in October 2002, when participants from
the municipalities of Oskarshamn and Östhammar met to give their input to the
authority’s ongoing work. The questions and suggestions emerging from the focus
groups are classified into three major areas in this presentation: I. Issues related
specifically to radiation and radioactivity. II. Issues of comprehension of terminology,
measurements, risk, and safety. III. Issues concerning the information process and
the transfer of knowledge. The discussion highlights that issues and comments raised
by the public are not constrained to specific knowledge questions, e.g. on radiation
or risk, but may relate to legal, strategic and political considerations, as well as the
basics of the performed analyses and the related assumptions and evaluations. Ideas
for improving public knowledge and for facilitating an exchange of information are
outlined below.
Keywords Radiation, Radioactive waste, Focus groups, Public participation
1. Introduction
In line with previous work related to an intensified dialogue with the public [1,2,3] the
Swedish Radiation Authority (SSI) held a seminar in September 2002 and initiated a
process in 2002 that welcomed comments and suggestions from the general public. It
was connected to its work on developing a “general advice document”, related to
the authority’s 1998 Instructions on protection of human health and the environment
regarding the final disposal of used nuclear fuel and nuclear waste [4]. In particular,
representatives of the concerned municipalities, people living in the vicinity of
the recently started site investigations, and other interested parties involved in the
work regarding a Swedish repository for high level nuclear wastes were invited to
participate in the discussions.
SSI presented the central themes of the forthcoming work at the seminar in September
[5,6], and gathered comments from the participants at that time. The project and
results presented here relate to discussions in focus groups in October 2002, when
interested parties from the municipalities of Oskarshamn and Östhammar met to give
their input to the authority’s ongoing work.
2. Method
2.1 Work method, design and participants
A focus group usually involves a rather small number of invited persons, e.g. 10±4,
and at least one leader of the discussions. The central topic is selected and announced
in advance, and often presented with some kind of introductory, written material. The
focus group is put together in such a way that it involves the broadest possible range
of experience or interests in relation to the selected theme. This work method does
not aim at achieving consensus, nor at encouraging heated debates. The method often
aims solely at clarifying what comments and ideas the participants have in relation
to the selected topic. It collects the perspectives the participants want to contribute to
the exchange of experiences for achieving a more comprehensive picture of the issue
at hand.
The design of the present project was simple, and involved two group meetings in
each of the two municipalities of Oskarshamn and Östhammar. Each group comprised
between 8 and 12 participants (more than 40 persons in total), and each discussion lasted
2.5 hours, excluding breaks. The participants were persons active in political or administrative
bodies in the municipalities, e.g. in the local security council. Others had been
involved in reference groups, or similar bodies connected to the municipality council,
when the Swedish Nuclear Fuel and Waste Management Co. (SKB) conducted the
feasibility studies. Others were interested landowners, or property owners, who had
a specific interest in the newly started site investigations due to the closeness to their
properties. All participants took an active part in the discussions. One representative
from SSI participated in each meeting. This expert was available for giving immediate
responses to questions and for collecting impressions for the forthcoming work at the
authority.
2.2 Data collection, organisation and presentation
Short, specific comments and questions in the discussions were written down
verbatim, whereas longer presentations of points of views were summarised. The
written material constituted the information from the focus group and is here
considered the data. This material was sent (by e-mail or postal service) to all
participants for checking and comments before it was categorised and summarised
in a report to the authority [7]. The contents of the discussions were categorised into
three major themes, within which several topics were identified, as can be seen in the
following presentation.
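The thematic sorting described above can be illustrated with a small sketch. The keyword lists, the example comments' assignments, and the automated matching itself are invented for illustration only; in the study, the categorisation was done by hand.

```python
# Hypothetical illustration of sorting focus-group comments into the
# three major themes of the report. Keywords are invented; the real
# categorisation was qualitative and done manually.
THEMES = {
    "I. Radiation and radioactivity": ["radiation", "radioactivity", "dose"],
    "II. Terminology, measurements, risk and safety": ["risk", "concept", "estimate"],
    "III. Information process and transfer of knowledge": ["information", "knowledge"],
}

def categorise(comment: str) -> str:
    """Assign a comment to the first theme whose keywords it mentions."""
    text = comment.lower()
    for theme, keywords in THEMES.items():
        if any(keyword in text for keyword in keywords):
            return theme
    return "Uncategorised"

print(categorise("How does radioactivity spread from the repository?"))
# prints "I. Radiation and radioactivity"
```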
3. Results
3.1 Issues specifically related to radiation and radioactivity
Comments and suggestions specifically related to radiation and radioactivity are
presented within this category. The respondents asked basic
questions about, e.g., the nature of radiation and about related risks and health effects.
Examples of questions and discussed themes are summarised here under the headings
of (a) General and complex issues, and issues related to (b) the repository, (c) to
human beings, (d) to the environment, and (e) to time perspectives.
(a) General and complex issues. Some respondents asked for explanations of how to
understand radiation and radioactivity, the physical properties and the effects on human
health and on the natural environment. For example: What is radiation? When does
a hazard become dangerous? When does a leakage, a dose or a concentration result
in negative health consequences or detrimental environmental effects? Participants
also wanted to understand radiation risks in relation to “the total risk picture”, e.g. the
construction of a high-level waste repository, and what part of that total risk could
be attributed to various sources. There was an interest in knowing whether current knowledge
in the scientific field was sufficient for the assessment, and the management, of
radiation risks. Some wanted to get an idea of what to expect of future research, e.g.
the likelihood of theoretical breakthroughs or major achievements influencing the
construction of the repository, or altering risk assessment. Examples of questions and
comments:
• How does radioactivity spread from the repository?
• How is the danger transmitted and how does it affect man and the environment?
• Please describe substances, particles and gamma radiation and explain the respective
degrees of risk.
(b) Issues related to the repository. Exactly what will be placed in the repository? Is it
possible that there will be a change over time regarding what could finally be disposed
of in the bedrock in relation to what is discussed now? Does the size of the repository,
or its depth, have any importance for the radiological situation of the surrounding area?
What importance could it have, or what problems could be caused, if there are other
nuclear installations in the vicinity of the high-level waste repository? The relationship
between the design and the safety level of the repository, especially in relation to
radiation risk, was thus of interest to the group discussion participants. Questions
were also related to the issue of possible future retrievability of the nuclear material.
Often the concern was focused on whether one should expect a trade-off between
safety standard and one or several of the mentioned factors related to the repository’s
design or localisation. A central issue was whether the safety of the repository could
be compromised by retrievability.
The Swedish radiation authority’s focus regarding the “general advice document”
was intended to be on the time after the closure of the high-level waste repository.
However, several discussants raised questions and asked for information about
radiation risks in and around the repository during its construction and filling, i.e.
before its sealing. The discussions highlighted transportation risks, work within and
around the repository, as well as the fact that the repository would be open for some
time. What about ventilation of the repository during its construction and filling?
How would the ventilation affect the surroundings? It was also of interest to know
how the authority, specifically during the time of an open (or rather, not yet closed)
repository, had considered the handling of intentional intrusions or attacks. Examples
of questions and comments from the group discussions:
• How to explain the relationship (the trade-off) between risk and the size of a repository,
and possible (additional) new future repositories?
• Are there different risks given different depths of the repository?
• Radiation protection issues and radiation risks related to retrievability ought to be
highlighted more.
(c) Issues related to human beings. The questions that directly related to human
beings in the discussions concerned the authority’s view on the size of acceptable
risk, what effects a radiation dose has, and how radiation accidents during work in the
repository were to be managed. The participants were also interested in getting some
frames of reference to enable them to better understand risks to health and safety. A
number of rather concrete questions had been raised by persons residing
in the vicinity of the possible future repository. These concerned the future radiation
situation, the availability of measurement instruments, and radiation risks in the
current site investigation. It was also of interest to gain knowledge about the potential
effects of the work on the planned facility on personal finances. The discussions
showed clearly expressed wishes to receive continuously updated
information on the radiological situation in the local environment and, in addition,
to have such historic information made available for comparison purposes. Examples of
questions and comments from the group discussions:
• What exactly can get out of the bedrock from a depth of 500 meters? What happens with
those living in the vicinity?
• What would happen (radiation dose) if human beings in some way change their consumption
patterns?
• The closeness to the repository area (e.g. Misterhult) leads to greater involvement.
• From the perspective of a landowner, a number of issues can arise, among them (the change
of) ground water levels and other water issues.
(d) Issues related to the environment. The questions on how radioactivity and radiation
affect the environment were similar to the above mentioned knowledge questions.
They concerned what phenomena are involved, why, and how effects are detected.
What effects would radioactivity have on animals and plants? Some participants even
asked about specific animals and plants. There were, however, also more elaborated
questions, e.g. on how the changed environment over long times in or around the
repository would, or could possibly, have effects on the repository or its functions, and
thus in the end influence the radiation situation. One example concerned the change
of the ground water level, the effect of this change on the various buffers around the
spent nuclear fuel, and the potential radiological effects due to this situation. Selected
examples:
• How is radiation or leakage detected in water wells?
• How would berries and mushrooms be affected in the vicinity (of the repository)?
• When the ground water changes, and the level could be going up and down, how is the
repository affected?
(e) Issues related to time perspectives. The time perspectives of radiation risks and
the spread of radioactivity in future times are issues that are very hard to grasp and
to discuss. The time perspectives are also among the most fascinating aspects of the
nuclear waste management issue. The participants urged the authority to try to elucidate
the extreme time perspectives involved, to do this in a comprehensive manner, and
to explain the risks and the changes of risk over time. Some people argued that the
shorter, and more comprehensible, time perspectives ought to be considered the more
important ones. Others underlined that it is the extreme time perspectives that make
the authority’s work so important. Administrative issues were also considered across
long time perspectives, e.g. how can knowledge be maintained over time? Examples
from the group discussions:
• How does the radiation situation change across 10 years, to 100 and to 100,000, etc.?
• It is hard to come to grips with the long time perspectives. What can happen in this total time
frame, during this whole process? And if there is an accident during this time, what effect
would that have?
3.2 Issues of comprehension of terminology, measurements, risk and safety
The second major category emerging from the group discussions presented in some
depth the questions and problems that are difficult to understand for non-experts
on radiation or nuclear waste issues. It exemplified what type of information and
knowledge people asked for in the group discussions. Some of the discussed themes
were related to perennial research questions, such as the quality and validity of
current knowledge and of research findings. A number of questions asked what
specified phenomena actually are, and where or why demarcation lines are drawn
between phenomena, how certain effects can be differentiated, etc. Although
not immediately relevant to the authority’s task of publishing the general advice
document, the provision of some kind of information related to fundamental research
issues could help guide non-experts to a better understanding of the research process
and the establishing of relevant facts. Such answers could facilitate a non-expert’s
orientation in the world of science and, by enhancing comprehension and feelings
of personal control, also provide new perspectives on risks. This section summarises
questions and comments of the second major category resulting from the discussions,
divided into five themes: (a) terminology and definitions of concepts, (b) estimations
and the bases of estimations, (c) about safety, risk and danger, (d) knowledge and
values in the safety assessment work, and (e) status of regulations, responsibilities,
roles and interests.
(a) Terminology and definitions of concepts. Words and concepts used e.g. by the
authorities were sometimes unknown to the participants or hard to comprehend.
This could even be the case with persons who had quite some experience with
nuclear waste issues, i.e. those involved in the previous feasibility studies in the
municipalities. Among the difficult concepts are “dose” and “collective dose”, and
mathematical expressions of risk. Questions were also asked about the definitions of,
and differentiation of, “risk” and “uncertainty”, “biosphere” and “geosphere”, as well
as how “final” in final repository should be understood. A few examples:
• Please explain and clarify the concept of “collective dose”, and how it is related to other
similar concepts. What is “collective dose”, is it everybody?
• What is the difference between risk and uncertainty?
• Make concrete and simple comparisons of risks. Give information so that one can take a
personal stand.
(b) Estimations and the bases of estimations. How is a risk assessment made? What is
included in the estimation? How can one test large systems (of data)? And, how does
one know that very complex estimations are correct? These are some of the questions
the participants wanted to have answered. The questions varied considerably in
complexity. Some asked for concrete examples of how experts proceed when they provide
a measure, or an estimate, e.g. the dose to a human being or to a specified natural
environment. Other questions had to do with which parameters, components or
systems constitute the basis for risk estimations or safety assessments. It was of
interest to know how different kinds of data are combined into an overall assessment,
and, furthermore, how one estimates the validity of such combinations and of the overall
assessment. People were also interested in knowing if there are, or might soon be,
better or more elaborate estimation models available. It was also of importance to the
participants to know how the authority evaluates the correctness of the
implementer’s analyses and results. Examples:
• Dose over time, how is that estimated? Could it vary over various time periods? How will it
be reported by SKB, and during later time periods (after closure of the repository)?
• Which parameters are involved in the “danger-risk equation”? How are inherent relations
(trade-offs, amplifications, etc.) estimated?
• Compare different methods and briefly explain their pros and cons.
• It would be interesting to see a mathematical model, with all the included parameters, and
to understand how one calculates the resulting figures.
(c) About safety, risk and danger. A number of questions were related to what should
be protected, what safety margins and controls are available or used, and which
risks are related to the canisters. Further topics were the risk of a concentration of nuclear
installations in one geographic area, e.g. the capacity of CLAB, and how
long the nuclear waste could be stored there. There were also discussions of which
scenarios best represent the worst possible cases or outcomes. Most questions
in the discussion sprang from radiation risk aspects. Generally speaking, they
reflected a keen interest in the points of departure in analyses work, the relevance of
the used scenarios, the concrete procedures in estimations, as well as the reliability
and validity of the results. A few selected examples:
• The stipulated margins of safety, are they sufficient?
• What is it that will be protected? Those working, residents in the neighbourhood, the
environment?
• What happens when the ground water fills the repository, after the closure?
• With respect to “safety level”, it should be proven (not only estimated) to be valid for those
living close by (the repository).
• Put the damages over time in relation to each other; show comparisons, concretise.
(d) Knowledge and values in the safety assessment work. This category relates to what
one knows with certainty and what must be assumed or inferred in the assessments
or the analytic work. Several questions were concerned with how the weighing of
various aspects in a model, or into a final result, is conducted, and especially if, and
in that case how, economic aspects, costs and benefits, are weighed relative to safety
aspects. An example was the relationship between the cost and the safety level in
the design of the copper canister. The participants asked for explanations of the
“optimisation process” in this context, and wanted to know if there were any absolute
limits, or criteria, for safety. Selected examples follow:
• How does one differentiate between values, estimates and facts? What is known with
certainty, and where must various hypotheses etc. be used? It would be desirable if it was
outlined when one bases the conclusions on one or the other, so that it would be clear if that
which is stated is something that one knows or something that one believes. Is it possible to
require some form of quality assurance in this context?
• With respect to assumptions, which ones has SKB made? Are they trustworthy?
• Is there any trade-off between safety and economy? For example, with respect to the
thickness of the canister?
• Describe how the weighing (or optimisation) of influences from various sources is done.
(e) Status of regulations, responsibilities, roles and interests. There were a large
number of questions about how the authority (SSI), the authority’s regulations,
and the general advice document, should be understood within a legal framework.
What status and function has the general advice document? What about the legal
relations vis-a-vis the environmental court, the county administrative board, and the
municipality? It was also of interest to get specifications of which facilities, and which
waste products, were to be included in the authority’s regulations and general
advice document. Furthermore, the participants wanted to know if the shaping of the
latter document could possibly come to influence the future process, for example with
respect to guiding the design of the final repository (the KBS-3 method), the process
of consultations, or issues related to the closure of the repository. Questions on
responsibilities were also expressed as e.g. “Who owns the problem?” Especially the
responsibility after the closure of the repository was of interest. It is suggested that the
exemplified questions and issues could provide an input to the authority’s presentation
of its role and mandate, as well as the legal and administrative framework. A few
examples from the discussions:
• What status do SSI’s regulations have? And what role does the authority, or (the nuclear)
authorities, have in the overall decision process?
• What is the relationship between the county board and the municipality? Is the
environmental court free to select a consultant other than SSI?
• How is the situation in other countries regarding regulations of the kind that SSI develops?
• Who will be responsible (for it) when the repository is closed?
• It is the authorities that should respond to the decision makers’ questions. What resources do
the authorities have to manage this task?
3.3 Issues concerning the information process and the transfer of knowledge
The discussions offered quite a lot of suggestions and advice regarding which facts
from the area are important and interesting for non-experts, and how to shape and
spread the information. Participants often had extensive experience of information
materials related to earlier work with the nuclear waste issue, and thus of which
questions are hard to tackle. Information materials, and other kinds of presentations
to the public, should use simple language and pedagogically explain key concepts
and facts. The use of examples, comparisons and good descriptions of the relevant
phenomena was seen as essential. A dictionary or appendix explaining central terms
or phenomena could be attached to a report, etc. Ordinary citizens could be invited
to proofread the final versions of materials aimed at the general public. The group
discussions detected a keen interest in the municipalities to utilise authority experts
and other experts in local seminars and study circles.
Information to future generations, and maintenance of expertise over time, were other
themes discussed. Could young people and children already be reached today with relevant
information? What could be successful and adequate strategies in this respect? And
how would information be stored over time, across generations, etc.? Dialogue and
various other forms of exchange of knowledge, experience and thoughts were
mentioned, as well as the municipalities’ own utilisation of experts in the future work.
Suggestions furthermore included considerations on how information programmes
and decision processes could be developed in the future, which geographic regions were
considered relevant in decision processes, the future responsibility for the repository,
and the future role of nuclear power in Sweden. It was underlined that well presented
responses to the public’s questions could successfully inspire local efforts of achieving
an insightful public response in the next phase of the siting process, e.g. if there are
local referenda on the possible localisation of the final repository. A question that
summarised many concerns was “Can we (politicians) be trustworthy if we say that it
is completely safe?” A few other examples follow below.
• It is important that information reaches all persons in the neighbourhood. And that
safety comes first, not the economy.
• How can one make it easier for people to understand the consequences of various decisions
and various decision alternatives?
• It is important to clarify the issue of how information will be preserved over time.
• A good and open attitude is important. It is a matter of long time periods, and reflection
before important decisions is important.
4. Summary and discussion
The questions and comments presented here illuminated what public representatives
and other participants believed could guide and advance the authority’s general advice
document. The discussed contents stretched from specific knowledge questions to the
validity of safety assessment models, and further on to guiding principles of radiation
protection work, its relevance to the larger contemporary society and into the future.
Figure 1 gives an overview of the different types of considerations that evolved from
the discussions.
Figure 1. Illustration of main categories of questions emerging from the focus group
discussions, their relevance in relation to enhanced understanding of radiation
protection criteria, and relevance to information and knowledge.
There were questions on risks to humans and to the environment, on risk development
over long times, on how safety and the radiation situation could be affected
by various designs of the canister and the repository, including its localisation, and
on the reliability of future scenarios. Some questions specifically concerned the
radiation situation in relation to retrieval of the spent nuclear fuel, and radiation safety
measures in the period before and until the repository is closed. It was suggested that
SSI, in the work on developing the general advice document, inform on the current
radiation protection regulation for the general population and for radiation-related
work, and give information on the causes and effects of radiation doses, as well as
their measurement. It was considered desirable that the forthcoming document would
explain, and exemplify, how flora and fauna were to be protected. It was of importance
to the participants to know more about the basic assumptions in risk analysis and
safety assessments. The discussions clearly indicated an interest in a higher degree
of transparency regarding what is known with certainty and what is assumed or
estimated. Some wanted to know how the authority evaluates the correctness of the
implementer’s analyses and results. There was an interest in being informed about
the weighing of components utilised in data models, procedures in the optimisation
work, as well as in the process of setting priorities, and in the legal framework for
decision-making.
The participants were furthermore interested in learning if there were pedagogical
methods available to better describe the phenomena at hand. They underlined the
importance of providing more knowledge to interested non-experts, and that the
authorities clarify and exemplify the intended meaning in information texts.
In conclusion, it can be stated that there is still an active interest in the site
investigation municipalities in participating in, and contributing to, the work related
to creating a Swedish repository for spent nuclear fuel and high-level radioactive
wastes. The discussions in the focus groups showed (a) that people have substantial
comments with relevance to the general advice document, and that (b) the involved
persons’ requirements for knowledge, as well as their comments, stretched far beyond
the scope of the planned document. The emerging picture is not one where simple
answers will satisfy those who involve themselves in the subject area. In fact, the participants
also provided a number of suggestions of methods and means by which to improve
the processes of information distribution, communication and knowledge acquisition.
The focus group method allows time for dialogue and can result in well articulated
questions and points of view.
References
1. Andersson, K., Espejo, R., and Wene, C.-O. 1998. Building channels for
transparent risk assessment, SKI Report 98:5. RISCOM pilot study, Stockholm:
SKI.
2. Drottz-Sjöberg, B.-M. (2001). Communication in practice. In K. Andersson (Ed.),
VALDOR 2001 (419-427). Stockholm: SKI.
3. Drottz-Sjöberg, B.-M. (2001). Utvärdering av utfrågningar. Resultat från genomlysningar av kärnavfallsfrågan i de svenska förstudiekommunerna {Evaluation of
hearings. Results from an illumination of the nuclear waste issue in the Swedish
feasibility study municipalities}. SKI Rapport Nr 01:39. Stockholm: SKI.
4. The Swedish Radiation Authority’s instructions on protection of human health
and the environment regarding the final disposal of used nuclear fuel and nuclear
waste. SSI FS 1998:1. Stockholm: SSI.
5. Hedberg, B. (2002). Kort sammandrag av gruppdiskussionerna vid SSI:s
riskseminarium, den 21/8 2002 {Short summary of the group discussions at SSI’s
risk seminar, August 21st, 2002}. Stockholm: SSI.
6. Blix, A. (2002). SSI-seminarium om risk och slutförvar {SSI-seminar on risk and
final repository}. Strålskyddsnytt, nr 3. Stockholm: SSI.
7. Drottz-Sjöberg, B.-M. (in press). Med fokus på SSI:s risk och strålskyddskriterier.
En rapport baserad på diskussioner i fokusgrupper i Östhammars och Oskarshamns
kommuner, under oktober 2002. Stockholm: SSI.
Value Judgements and Trade-Offs
in Management of Nuclear Accidents:
Using an Ethical Matrix
in Practical Decision-Making
Deborah Oughton & Ingrid Bay
Agricultural University of Norway
Postbox 5026
1432 Ås
NORWAY
Ellen-Marie Forsberg & Matthias Kaiser.
The National Committee for Research
Ethics in Science and Technology (NENT)
Oslo
NORWAY
Experience after the Chernobyl accident has shown that restoration strategies need
to consider a wide range of different issues to ensure the long-term sustainability
of large and varied contaminated areas. Thus, the criteria by which we evaluate
countermeasures need to be extended from simple cost-benefit effectiveness and
radiological protection standards to a more integrated, holistic approach, including
social and ethical aspects. Within the EU STRATEGY project, the applicability of
many countermeasures is being critically assessed using a wide range of criteria,
such as practicability, environmental side-effects, public perceptions of risk,
communication and dialogue, and ethical aspects such as informed consent and the
fair distribution of costs and doses. Although such socio-ethical factors are now the
subject of a substantial field of research, there has been little attempt to integrate them
in a practical context for decision makers. Within this paper, we suggest practical
means by which these can be taken into account in the decision making process,
proposing use of an ethical matrix to ensure transparent and systematic consideration
of values in selection of a restoration strategy.
1. Introduction
Over the past decade, people involved in radiation protection policy have shown
an increased awareness that risk management needs to take account of social and
ethical issues. With respect to nuclear accidents, it is generally accepted that the
costs of remediation can be greater than the financial investment required to carry
out countermeasures, that the benefits of some actions may extend beyond dose
reduction, and that the acceptability of countermeasures can depend on a variety of
social and ethical factors [e.g., 1-5]. Acknowledgement of these developments has
been an important part of the STRATEGY 5th Framework EU project, Sustainable
Restoration and Long-Term Management of Contaminated Rural, Urban and
Industrial Ecosystems [www.strategy-eu.org.uk], which includes a number of
countermeasure evaluation criteria such as practicality and acceptability, socio-ethical
aspects, environmental consequences and indirect side-effect costs [6]. In practice,
however, there is a danger that inclusion of these additional assessment criteria will
prove problematic. For example, difficulties may arise due to:
• the many dimensions to the criteria;
• the fact that different people will be affected in different ways;
• the complexity of the issues (many countermeasures have both positive and negative social and ethical consequences);
• the various “trade-offs” that may be required when making choices;
• a possible lack of agreement within society on what is practical or acceptable, let alone on how to “put a price on” such non-monetary side-effects; and
• the lack of established procedures, and experience, in systematically incorporating these dimensions in decision-making.
If decisions about countermeasure selection are going to be ethically and rationally
defendable, decision-makers require advice on what criteria are important to consider
and why, and also a methodology to ensure a systematic and transparent procedure
for balancing these criteria. The aim of this paper is to offer such advice and to give
suggestions for such a methodology (i.e., an ethical matrix).
2. Options, Constraints and Trade-offs
Current ICRP recommendations state that, for an action to be justified, the benefits
from dose reduction or averted dose should outweigh the costs of implementing
the countermeasure [7]. More recent recommendations acknowledge that the costs
of a countermeasure may be both social and economic, and that there may be
benefits other than dose reduction [8]. Investing money to reduce exposure to
radiation is a trade-off in itself, and even this relatively simple trade-off is not without
controversy in radiation protection. Even based on the two criteria of monetary cost
and dose, a decision on how to go about reducing exposure to radiation will be an
ethical judgement: we are, theoretically, making choices about which health risks to
reduce and at what cost. Deciding on a remediation strategy based on a multi-criteria
evaluation is going to require a whole suite of trade-offs and value judgements.
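Even this basic monetary-versus-dose trade-off can be made explicit. The sketch below is purely illustrative, not a recommended method or figure: it applies the conventional cost-benefit test in which a countermeasure is justified when the monetised value of the averted collective dose exceeds its implementation cost. The function name and the alpha value (EUR per man-sievert) are hypothetical placeholders; choosing alpha is itself one of the value judgements discussed above.

```python
# Illustrative sketch of the basic cost-benefit justification test:
# a countermeasure is "justified" when the monetised benefit of the
# averted collective dose exceeds its implementation cost.
# The alpha value (EUR per man-sievert) is a hypothetical placeholder;
# in practice its choice is itself an ethical judgement.

def is_justified(averted_dose_man_sv: float,
                 cost_eur: float,
                 alpha_eur_per_man_sv: float = 50_000.0) -> bool:
    """Return True if the monetised dose benefit outweighs the cost."""
    benefit_eur = alpha_eur_per_man_sv * averted_dose_man_sv
    return benefit_eur > cost_eur

# Example: averting 10 man-Sv at a cost of EUR 400,000
print(is_justified(10.0, 400_000.0))   # benefit 500,000 > cost -> True
print(is_justified(10.0, 600_000.0))   # benefit 500,000 < cost -> False
```

The point of writing the test out is that every term hides a choice: whose doses count in the collective dose, and whose money counts in the cost.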
There are obviously a number of actions or interventions that might be implemented
in contaminated areas, ranging from highly disruptive measures such as banning
all food production from an agricultural area, to relatively benign actions such as
ploughing fields, washing roofs or turning over paving stones. In radiation protection,
the primary objective of countermeasure implementation is usually dose reduction.
Most countermeasures have ethical, legal and social factors that will bear on their
acceptability and effectiveness. These factors may be issues that should be taken
into account in the overall balancing and judgement of the negative and positive
attributes of a countermeasure; in other instances they act as constraints that veto the
action outright, irrespective of its potential cost-effectiveness. Previous evaluations
of countermeasures have mainly concentrated on technical or economic constraints.
For example, the slope of a hill may be too steep to allow ploughing, there may be
insufficient labour, or the countermeasure may simply be too expensive. However,
legal and ethical constraints may also apply, particularly for those that have an
environmental impact. For example, interventions may violate conventions or
directives protecting the habitat of wild flora and fauna, or cultural heritage. Other
aspects may be associated with implications for labour and human rights (such as
worker consent and proper training), animal welfare issues, or social factors such as
whether or not the public will comply with the countermeasure, or whether or not the
farm will be able to continue to market its produce as “organic”.
Stakeholder evaluation of countermeasure templates in STRATEGY suggested
that many countermeasures (especially in the UK) were as likely to be rejected on
socio-ethical grounds as on technical and economic grounds [9]. Examples included a
strong aversion to any measure that would bring about contamination of previously
uncontaminated foods (e.g., mixing milk from different sources) or environments,
and an awareness of the problems of contaminated foodstuffs appearing on the black
market. Of course, a rejection of specific countermeasures would be expected to
show site, context and national differences. Whatever the reason for deciding that a
particular countermeasure is impracticable or unacceptable, it is important that the
reasons for rejection are made explicit, transparent and (if necessary) open to revision
at a later date. This process in itself needs to be able to survive public scrutiny.
3. Comparison and Selection of Countermeasures for
Holistic Strategies
Although a countermeasure may be expected to provide a net benefit to society, it
can also influence the distribution of costs, risks and benefits. This distribution has
obvious links to the fundamental ethical values of equity, justice and fairness. Costs,
benefits and risks may vary over both space and time, and between different members
of a community. Countermeasures that reduce collective dose (manSv) may change
the distribution of dose, for example, from consumers/users/farmers to workers/
consumers/populations around waste facilities. The question of who is paying for
the countermeasure and who will receive the benefits must also be addressed. Some
countermeasures, and sets of countermeasures, result in an equitable distribution
of cost and dose reduction, such as investment by tax payers to reduce activity
concentrations in a common food product; others are less equitable, for example,
when a reduction of dose to the majority is only possible at the expense of a higher
dose, cost or welfare burden, on a minority, e.g., banning all farm production in a
small community. In relation to the question of distribution, one needs to consider:
Who is being affected? Who is paying? Does the countermeasure have implications
for vulnerable or already disadvantaged members of society (children, ethnic or
cultural minorities)?
When countermeasures are combined into a holistic strategy, the overall impact on
a region’s population and environment also needs to be evaluated from an ethical
and social viewpoint. Whilst individual countermeasures may be satisfactory, their
combined use within a restoration strategy may lead to an unfair distribution of costs,
doses and benefits across the population. On the other hand, countermeasures can
be combined to ensure a more even distribution, or adjusted to ensure that minority
populations have either consented to, and/or been compensated for, any prior
inequity.
Finally, whether a countermeasure is rejected or accepted in practice will also
depend on the available alternatives: a viable countermeasure may become
inappropriate if there is a less ethically problematic or socially intrusive alternative.
Hence, the eventual selection of a holistic strategy (i.e., the group of countermeasures)
will require a holistic evaluation, including a systematic comparison of the available
countermeasures. This can, in turn, raise a new set of ethical and social issues, related
specifically to the way in which criteria are balanced, the judgements and trade-offs
that will be needed when making choices between alternatives, and the process by
which this selection is carried out.
Both the evaluation and the selection of countermeasures will need to be based on
site and context specific data, including guidelines for collecting relevant information
on both the site itself and effects of individual countermeasures, procedures for
evaluating the decision-making process and recommendations for communication
and participation [10]. The way in which countermeasure evaluation and selection
is carried out is of utmost relevance for ethical evaluation of a holistic remediation
strategy, and will need to build on the assessment of individual countermeasures
and their interactions. As a procedure for ensuring transparent and systematic
consideration of ethical issues, we suggest the use of an ethical matrix.
4. Making Ethical Decisions on Restoration Strategies
Good practical ethical decision-making is built upon three conditions: high-quality and
relevant information (facts); ethical argument informed by relevant ethical theories
(values); and moral judgement. In pluralistic societies, views on both facts and values
typically differ, which makes it difficult to ascertain that all relevant information has
been collected and that there is no bias towards particular ethical approaches. An
ethical matrix is a tool to ensure the systematic consideration of all affected values and
to indicate what facts are needed for making the decision [11].
The matrix derives from principle based ethics. Rather than relying on an overarching
ethical theory to evaluate actions (usually resorting to an appeal to either utilitarian
or deontological doctrines), a principle-based approach holds that some general moral
norms are central in moral reasoning, and that these may be constructed as principles
or rules. There is a huge variety of principle-based approaches and, arguably, the
most successful in practical ethics are those that adopt a pluralistic view, and start
with a selection of principles that are generally acknowledged in society and that
can find a broad degree of support from different ethical theories or cultural beliefs.
One of the most famous and widely used examples of principle-based ethics is that
of Beauchamp and Childress [12], who identified four basic principles that form the
foundation of modern medical ethics:
1. Respect for autonomy (a norm of respecting the free will and decision-making capacities of self-governing persons)
2. Nonmaleficence (a norm of avoiding the causation of harm)
3. Beneficence (a group of norms for providing benefits and balancing benefits against risks and costs)
4. Justice (a group of norms for distributing benefits, risks and costs fairly)
Inspired by medical ethics, Ben Mepham transferred Beauchamp and Childress’
approach to a practical scheme for addressing broader policy-related problems
[13-14]. While Beauchamp and Childress addressed issues concerning primarily the
relationship between doctor and patient (or the public and the health service), there
is a greater complexity in using the approach in societal decision-making, which has
to cover a much wider area of concern, often including animals and the environment.
Mepham suggested that the various principles, values and stakeholders can be best
summarized in an ethical matrix. In this way, a bias towards certain kinds of values
may be avoided, and the matrix can be used to address conflicts between values in a
systematic way, without, necessarily, having to invoke full-fledged theories.
The ethical matrix method we propose takes as its starting point three fundamental
principles, namely:
1. To promote well-being and minimise health risks, welfare burdens and other detriments to affected stakeholders;
2. To respect the dignity of affected stakeholders;
3. To recognise the norm of justice and aim to treat everybody fairly and ensure an equitable distribution of goods among affected stakeholders.
The preparatory matrix shown in Table 1 represents a detailed specification of
these three general principles as derived by a number of co-operating researchers in
STRATEGY. It encompasses all the values that were considered relevant and it may
be used as an example by an actual decision-making group. Many of the issues can be
traced to those identified as relevant for the ethical and social evaluation of individual
countermeasures, and will have several relevant further specifications according to the
particular case in question [15].
In practice, the matrix can aid a decision-making group by giving an overall picture
of the ethical status of the issue at stake. The consequences of both the accident itself
and countermeasures can affect different groups in different ways, and the matrix can
be used to help identify the relevant information required for decision-making (i.e.,
the facts, values and stakeholders affected). However, a method for getting a grip
on facts and values will not be sufficient. Even with all the relevant information on
the table and with a systematic representation of different values, moral judgement
must be exercised. A central question for any decision-making process is that of who
is to judge? For decisions that concern the whole of society, and for STRATEGY,
stakeholder involvement is a central element for ensuring a justified and publicly
acceptable conclusion.
It follows that matrix evaluation should be performed in a participatory process with
stakeholders. Public involvement in decisions is important as different groups will
contribute local knowledge in addition to technical, expert-based knowledge
[18]. This will, in turn, enhance public and social acceptability of decisions
[e.g., 19, 20]. Selection between different countermeasures will require trade-offs
and value judgements. The matrix can be used to help in weighting and/or ranking
the importance of those values by the affected stakeholders, and making the ethical
dimension of decision-making transparent.
Neither the final identification of principles and stakeholders nor a final specified
matrix can be given here, as this is both case-dependent and the task of a participatory
decision-making process. Well-being, dignity and justice have been selected as
possible points of departure, but these principles need to be confirmed by the
decision-making bodies and specified according to case and context. Table 1 illustrates a
range of values that might be considered important when making judgements on
countermeasures. Any selection of a restoration strategy will have to rank and weight
the importance of those values, with or without the aid of an ethical matrix.
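To make the weighting and ranking step concrete, the stakeholder-by-principle structure of such a matrix can be sketched as a simple table that a participatory group fills in and scores. Everything below, the stakeholder rows, cell scores and weights alike, is a hypothetical placeholder chosen for illustration, not an output of the STRATEGY project:

```python
# Minimal sketch of an ethical matrix as a data structure: rows are
# stakeholder groups, columns are the three principles, and each cell
# holds a judgement score agreed by the participatory group (here on a
# -2..+2 scale, where negative means the countermeasure harms that
# value). All scores and weights are hypothetical placeholders.

PRINCIPLES = ("well-being", "dignity", "justice")

matrix = {
    "Owners/Employers":  {"well-being": -1, "dignity": 0, "justice": -1},
    "Workers/Employees": {"well-being":  1, "dignity": 1, "justice":  0},
    "Consumers":         {"well-being":  2, "dignity": 1, "justice":  1},
    "Environment":       {"well-being": -1, "dignity": 0, "justice":  0},
}

# Stakeholder-agreed weights for the principles (they sum to 1 here, but
# the weighting itself is a value judgement for the group, not the tool).
weights = {"well-being": 0.5, "dignity": 0.25, "justice": 0.25}

def weighted_score(row):
    """Combine one stakeholder's cell scores using the agreed weights."""
    return sum(weights[p] * row[p] for p in PRINCIPLES)

for stakeholder, row in matrix.items():
    print(f"{stakeholder}: {weighted_score(row):+.2f}")
```

The aggregation deliberately stays visible: changing a weight or a cell score changes the ranking, which is exactly the trade-off the matrix is meant to expose rather than hide.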
Table 1. An ethical matrix developed for use in a radiation accident situation. For each stakeholder group the cells give Well-being (e.g. health and economic welfare), Dignity (e.g. choice, consent and (legal) rights) and Justice/Distribution (is any sub-group of stakeholders worst-off?).

Owners/Employers; Government
Examples: farmer; house dweller; land owner; hotel owner; shop owner; business proprietor; factory owner; local authority.
Well-being: doses to humans; loss/gain in income; loss of property; damage to, or reduction in value of, property; loss of taxes; compensation; self-help.
Dignity: consent; property rights; being allowed to pay their duties; contract fulfilment; no disruption; no insecurity; liberty.
Justice/Distribution: possibility for conflict between different industries or projects.

Workers/Employees
Examples: tenant farmer; farm workers; factory workers; contractors; immigrant workers; other employees.
Well-being: doses to humans; fear of job loss; gain/loss of income; insecurity; family relationships; compensation.
Dignity: traditional skills and practices; trust and loyalty to local farmers; consent; training.
Justice/Distribution: possibility for disputes and social inequity.

Users/Community
Examples: neighbours; recreational users; tourists; public amenities (library, town hall, playground, park); local community.
Well-being: access; aesthetic value; empathy; isolation; community values; tourism; compensation.
Dignity: respect for public heritage and footpaths; community sense; personal control; self-help; liberty.
Justice/Distribution: potential conflict of age/sex/cultural minorities; availability of alternative amenities.

Consumers
Examples: consumers; secondary food producers; other secondary producers (e.g. timber).
Well-being: doses to humans; more expensive goods; loss of jobs; insecurity.
Dignity: information; choice; self-help; intervention limits.
Justice/Distribution: potential conflict concerning diet and the possibility of self-gathering.

Future generations
Examples: future food production; future clean air and water; future users of recreational areas, etc.
Well-being: loss of opportunities to use areas, resources, common goods, etc.
Dignity: respect for the right to keep living according to basic human values.
Justice/Distribution: no one future group to be sacrificed for the presumed good of other future groups.

Environment
Examples: farm animals; wild animals; pets; other biota; ecosystems.
Well-being: dose to biota; other toxic/health effects; compensation.
Dignity: endangered species; loss of habitat; right to life consistent with their nature.
Justice/Distribution: potential conflict between farm and wild animals, and ecosystems.

Waste-location stakeholders (if different from accident location)
Examples: all of the above stakeholder groups connected to the waste site.
Well-being: uncertainty/risk estimates; possibilities for monitoring, retrieval and treatment; compensation.
Dignity: consent; self-help; information; etc.
Justice/Distribution: potential conflict between stakeholders close to the disposal site.
5. Advantages and Disadvantages of the Ethical Matrix: Experience
An advantage of the matrix is that it covers most of the relevant values that should
be considered in a given decision area. Although the matrix represents a relatively
new methodology, it has been tested within evaluation processes both by Mepham
and the Norwegian Research Ethics Committees, and it has been demonstrated to be
an appropriate tool for use in participatory and deliberative decision processes
[12, 16, 22]. The conclusion of both Mepham’s and NENT’s studies was that the
ethical matrix works relatively well for the purpose of structuring a discussion. There
were, however, certain practical problems connected to specifying the principles
and weighting the specifications, agreeing on the facts, and undertaking the final
evaluation. Also, the ethical matrix is a rather cumbersome tool that takes time to
explain to participants. These practical problems are taken up both by Mepham
and the NENT project [21]. The general conclusion suggests, however, that the
advantages in some cases of complex decision-making may outweigh the practical
problems related to the process.
6. Conclusions
Rather than seeing the inclusion of ethical and social values as an additional burden
on top of what is already a complicated process, our view is that ethics can be a
practical tool to aid decision-makers. We suggest that a systematic consideration of
ethical and social issues, including an ethical justification of the decision-making
procedure itself will eventually make countermeasure selection more transparent and
less controversial for society. An ethical matrix is a tool to ensure that all relevant
concerns are being taken into consideration and to clarify the ethical basis upon which
eventual decisions are made. The matrix does not deduce the correct ethical answer;
it is not a substitute for ethical judgement but a structured way of exercising it. As
an isolated tool it is of little help, since a tool needs someone to handle it. But when
an ethical matrix is combined with a stakeholder process, the real value of both the
matrix and the participatory approach emerges.
Acknowledgements
The STRATEGY project is being carried out under contract FIKR-CT-2000-00018
within the research and training programme (Euratom) in the field of nuclear energy,
key action nuclear fission, whose support is gratefully acknowledged. The paper
is the sole responsibility of the authors and does not reflect Community opinion, and
the Community is not responsible for any use that might be made of data appearing in
this publication. We gratefully acknowledge the contributions of other STRATEGY
participants.
References
1. Oughton DH. Ethical values in radiological protection. Radiation Protection Dosimetry, 68, 203-208, 1996.
2. Morrey M, Allen P. The role of social and psychological factors in radiation protection after accidents. Radiation Protection Dosimetry, 68, 267-271, 1996.
3. Hériard Dubreuil GF, Lochard J, et al. Chernobyl post-accident management: The ETHOS project. Health Physics, 77, 361-372, 1999.
4. Kelly N. Post accident management. In: Radioactive Pollutants: Impact on the Environment (eds. BJ Howard, F Bréchignac). EDP Sciences: France, pp. 75-87, 2001.
5. Nisbet AF, Mondon KJ. Development of strategies for responding to environmental contamination incidents involving radioactivity. The UK Agriculture and Food Countermeasures Working Group 1997-2000. NRPB-R33, 2001.
6. Howard BJ, et al. Sustainable restoration and long-term management of contaminated rural, urban and industrial ecosystems. Radioprotection - Colloques, 37 (C1), 1067-1072, 2002.
7. ICRP. Principles for intervention for protection of the public in a radiological emergency. Annals of the ICRP, Publication 63. Oxford: Pergamon Press, 1991.
8. ICRP. Protection of the public in situations of prolonged radiation exposure. ICRP, Sutton (GB), ISSN 0146-6453, 2000.
9. Nisbet AF. Management options for food production systems contaminated as a result of a nuclear accident. Radioprotection - Colloques, 37 (C1), 115-120, 2002.
10. Hunt J, Wynne B. Social Assumptions in Remediation Strategies. STRATEGY Deliverable 5, University of Lancaster: Lancaster, 2002.
11. Kaiser M, Forsberg E-M. Assessing Fisheries - Using an Ethical Matrix in a Participatory Process. Journal of Agricultural and Environmental Ethics, 14, 191-200, 2001.
12. Beauchamp TL, Childress JF. Principles of Biomedical Ethics, 4th ed. (1st ed. 1979). Oxford: Oxford University Press, 1994.
13. Mepham B (ed.). Food Ethics. London: Routledge, 1996.
14. Mepham B. Ethics and novel foods - an analytical framework. In: Preprints for the 1st European Congress on Agricultural and Food Ethics, University of Wageningen, 1999.
15. Oughton DH, Bay I, Forsberg E-M, Kaiser M, Howard BJ. Social and ethical aspects of countermeasure evaluation and selection - using an ethical matrix in participatory decision making. J. Environ. Rad., in press.
16. Bay I, Oughton DH. Ethical evaluation of communication strategies. EU STRATEGY Deliverable 3, Ås: Agricultural University of Norway, 2001.