Project evaluation - Waikato Regional Council

Transcription

EVALUATING THE EFFECTIVENESS OF DISTRICT AND REGIONAL PLANS PREPARED UNDER THE RESOURCE MANAGEMENT ACT

PLANNING PRACTICE GUIDE 4

By Maxine Day, Greg Mason, Jan Crawford and Peter Kouwenhoven

Planning Under Co-operative Mandates
OTHER PUCM PLANNING PRACTICE EVALUATION GUIDES
A Guide to Plan-Making in New Zealand – the Next Generation
Monitoring Plan Implementation in New Zealand
A Guide to Improving Plan Implementation in New Zealand
A Guide to Community Outcome Evaluation
First Published in 2009 by
the International Global Change Institute (IGCI)
University of Waikato
Private Bag 3105
Hamilton, New Zealand
www.waikato.ac.nz/igci/pucm
ISBN 978-0-9864558-9-6
Use of information in this guideline should be acknowledged
© Maxine Day, Greg Mason, Jan Crawford & Peter Kouwenhoven
OVERVIEW
This is the fourth guide in the Planning Under Co-operative Mandates (PUCM) Planning Practice series. It is based on work undertaken by the PUCM research team from 2002 to 2007 under contract to the Foundation for Research, Science and Technology 1.
The evaluation methodology described here offers a framework for planners and policy
practitioners to evaluate the effectiveness of plans prepared under New Zealand’s Resource
Management Act (RMA). It focuses on attributing environmental outcomes to the interventions in
plans (regulatory and non-regulatory methods), and explaining why outcomes have – or have not –
been achieved.
Monitoring the effectiveness of policies and rules is both a mandatory function for local authorities under section 35 of the RMA, and an inherent component of the policy-making cycle. Importantly, monitoring results can demonstrate that planning provisions are justified where they achieve the goals of the plan and community of interest.
This guide provides a context for plan effectiveness evaluation, the theory underpinning it, and
practical guidance on how to carry out an evaluation. It covers research and evaluation methods,
and suggests tools that can be applied to assess complex environmental issues.
1 The PUCM Team members were: Jan Crawford (Planning Consultants Ltd), Maxine Day (Environmental Planning and
Research), Greg Mason (Inform Planning Ltd), Peter Kouwenhoven (IGCI), Lucie Laurian (University of Iowa) and Neil
Ericksen (IGCI).
CONTENTS

OVERVIEW
CONTENTS
GLOSSARY OF TERMS AND ACRONYMS
1. A GUIDE TO EVALUATING PLAN EFFECTIVENESS
2. AN INTRODUCTION TO EFFECTIVENESS EVALUATION
   2.1 THE NEED FOR AN EVALUATION METHODOLOGY
   2.2 THEORY-BASED EVALUATION: A WAY FORWARD
3. EXPLAINING THE POE METHODOLOGY
   3.1 CONCEPTUAL FRAMEWORK
4. APPLYING THE POE METHODOLOGY
   STEP 1 - ESTABLISHING A PURPOSE FOR EVALUATION
   STEP 2 - ISSUE SELECTION
   STEP 3 (A) - PLAN LOGIC MAPPING
   STEP 3 (B) - UNDERSTANDING CAUSE AND EFFECT
   STEP 4 - EVALUATING PLAN IMPLEMENTATION
   STEP 5 - IDENTIFYING ENVIRONMENTAL OUTCOMES & DATA GATHERING
   STEP 5 (A) - INDICATOR SELECTION
   STEP 5 (B) - OUTCOME DATA GATHERING METHODS
   STEP 6 - DETERMINING THE EFFECTIVENESS OF THE PLAN
   USEFUL REFERENCES
5. BIBLIOGRAPHY
APPENDIX 1: Example of an Intervention Scorecard for Stakeholder Workshops, Used to Assist Modelling a District Plan's Intervention Theory for Stormwater Management
APPENDIX 2: Example of an Observation Schedule for Monitoring Building Heritage Outcomes
GLOSSARY OF TERMS AND ACRONYMS
EREs – Environmental Results Expected; the stated intended outcomes of policies and methods (including rules) within district and regional plans. In some plans, also referred to as Environmental Outcomes, or Anticipated Environmental Results or Outcomes.

FRST – The Foundation for Research, Science and Technology, which invests in research, science and technology on behalf of the New Zealand Government.

Interventions – methods used in a district or regional plan for achieving the environmental results expected, including regulatory (rules) and non-regulatory (education, incentives) methods.

LGA – Local Government Act (amended 2002).

LTCCP – Long Term Council Community Plans, which are required to be produced by regional and district councils under the amended LGA.

POE – Plan Outcome Evaluation; the methodology developed and used by the PUCM team to evaluate the contribution of plans to environmental outcomes.

PUCM – Planning Under Co-operative Mandates; an ongoing FRST-funded research programme investigating the quality of environmental planning and governance in New Zealand.

RMA – The Resource Management Act (1991); New Zealand's primary planning and environmental legislation.
1. A GUIDE TO EVALUATING PLAN EFFECTIVENESS
This fourth PUCM Planning Practice Guide sets out a methodology – Plan Outcome Evaluation –
for evaluating the effectiveness of regional and district plans. It is primarily aimed at practitioners
seeking to determine how effective plans have been in delivering the environmental outcomes
expected following plan implementation. Plan Outcome Evaluation, or POE, focuses on attributing
outcomes ‘on the ground’ to interventions adopted in the plan. 2
The guide draws upon the work from the PUCM Phase 3 research programme, which developed
and applied the POE method in three case studies. The results of these trials are used to illustrate
the steps in the evaluation process.
While this guide provides a conceptual framework for evaluation, considerable thought and input
from the evaluator is required throughout the process to build understanding of the system and
thereby help with attribution. The involvement of key stakeholders and environmental specialists is
needed at certain stages of the evaluation in order to assess whether environmental outcomes
correspond to the environmental results expected (EREs) in the plan and contribute to attribution.
The content of this guide covers:
• an introduction to, and rationale for, outcome evaluation
• the process of evaluation including:
  o establishing a purpose for the evaluation
  o selecting issues or geographic areas
  o 'plan-logic' mapping
  o the selection of indicators
• a process for attributing outcomes to plan interventions (and non-plan factors), and
• references for other helpful resources to assist the evaluation process.
2 This guide complements the topics dealt with by the previous three guides, namely: Plan Quality Evaluation (PQE),
Plan Implementation Evaluation (PIE) and Improving Plan Implementation. This fourth guide extends the series by
providing a methodology for making the evaluative link between plan quality, plan implementation and the environmental
outcomes that ensue.
Diagram 1 - The Plan Outcome Evaluation Process

1. Scope and Define Methodology and Research Methods
• define issues/areas of plan for review
• review of outcome evaluation theories
• limitations of research methodologies
• methodological approach chosen and rationale confirmed

2. Scope Plan, EREs and Data for EREs
• limitations and barriers of plans, EREs and data
• test internal consistency of plan
• develop 'shadow' EREs if necessary

2a. Modelling the System
• environmental and plan systems modelling (modelling the intervention theory of the Plan)
• indicator selection
• preliminary test of robustness of data for selected topics

3. Data Analysis and Evaluation of ERE Achievement
• data collection and analysis

3a. System Evaluation of ERE Achievement
• use of local and expert opinion to calibrate model

Attribution and Explanatory Factors
• Plan factors (activity status, non-regulatory methods, other)
• Non-Plan and External factors (asset plans, community action, legislation, LTCCPs, population, economy, climate change etc.)
2. AN INTRODUCTION TO EFFECTIVENESS EVALUATION
2.1 THE NEED FOR AN EVALUATION METHODOLOGY
Research into the effectiveness of district and regional plans in achieving the expected
environmental outcomes is required to validate the policies and rules imposed through plans. It is
important that decisions and interventions to manage effects on the environment are monitored
and evaluated in order to demonstrate that they are working, and that plan interventions are
justified.
From a regulatory perspective, section 35 of the RMA requires that councils monitor the efficiency and effectiveness of policies, rules and other methods in policy statements or plans. Effective policies, rules, or methods are those that achieve their goals, i.e., that produce the anticipated outcomes.3 Efficient policies, rules, or methods are those best suited to the job on some criterion (unspecified by the RMA, but it could be the most cost-effective, or those that yield results the fastest), compared to other equally effective methods 4.
From a policy perspective, monitoring and evaluation are the last step of the planning cycle and the first step toward the review and improvement of plans and practices 5.
A critical barrier has been the difficulty of measuring the link between changes in environmental quality and plan implementation, otherwise known as the attribution challenge 6. Other key barriers include:
• The issue of 'maturation' – some environmental impacts may take years to become apparent, or monitoring data may show the results of impacts created long before the plan provisions were in effect.
• Cumulative effects/environmental tipping points – how to establish the 'bit-part' effects of activities that cumulatively degrade the environment?
• Multi-causality – how do we establish that an outcome was influenced by interventions (or restrictions) of the plan as opposed to any number of human or natural processes? 7
• Lack of relevant and discriminating indicators and data – much of the information gathered on the state of the environment does not accurately relate to plan interventions.
• Establishing what the plan has 'prevented' (known as the 'birth control effect' 8) – how do we know what has not taken place because the rules have discouraged development? This is known as the 'counter-factual' scenario.
3 RMA. section 35 - Duty to gather information, monitor, and keep records
(2) Every local authority shall monitor—
(b) the efficiency and effectiveness of policies, rules, or other methods in its policy statement or its plan; and
4 Laurian et al., 2008.
5 Kaiser et al., 1995; Seasons, 2003; Snyder and Coglianese, 2005, cited in Laurian et al., 2009.
6 Kouwenhoven et al., 2005; Leggett, 2002; Talen, 1996a.
7 Healey, 1986.
8 Gilg, 2005.
Any effectiveness evaluation needs to be able to overcome these barriers to reliably establish
whether the administration of the plan has influenced environmental quality in the ways anticipated
by the plan. Some methods for overcoming these barriers are:
• Understanding how the plan intended to achieve anticipated or expected outcomes. That is,
knowing the capabilities of the plan to influence outcomes.
• Understanding relevant environmental systems and the factors that influence them (natural and
human factors).
• Using multiple data collection methods and analyses.
• Implementing IT systems designed to track consent information and/or manage data from reports and monitoring.
• Recognising the value of qualitative information in circumstances where quantitative data is not
available.
• Engaging experts and/or communities to analyse or estimate environmental change.
2.2 THEORY-BASED EVALUATION: A WAY FORWARD
RMA plans have a rational cascade linking issues to outcomes (as shown in Diagram 2 below) 9 .
The cascade encapsulates: 1) issues (or problems) that require some form of action; 2) objectives
that describe the intentions of the plan with respect to addressing the issues; 3) policies that
express the general course of action to be taken to achieve the objectives; 4) specific methods to
implement the policies, such as rules, education, incentives etc (which we refer to in this Guide as
‘interventions’); and 5) environmental results expected or outcomes sought, which in turn provide
the benchmark for evaluating the success of the plan in countering the issues (Ericksen et al.,
2003; Willis, 2003).
Issue → Objectives → Policies → Rules, Methods → Anticipated Results → Monitoring, Evaluation

Diagram 2: Rational Cascade of Plan Provisions Required in Plan-Making under the RMA 10
Because of this cascade, RMA plans have been characterised as ‘conformance-based’, which in
theory means that council decisions on development applications should be in conformity with plan
provisions (Laurian, Day and Berke et al., 2004a). The conformance-based view of plan
implementation assumes that the plan represents a clear understanding about the issue or
problem in question and its causes, and that the plan’s regulatory and non-regulatory interventions
are necessary and sufficient for countering it. In short, plans embody cause-effect theories.
The goal of the POE methodology is to determine the validity of the plan's 'theory' in practice – thus the term 'theory-based' evaluation. The methodology entails untangling all relevant processes that affect the outcome of interest, i.e., understanding the plan's logic, its implementation, intended and unintended effects, and the plan and non-plan factors that can shape the outcome.
Importantly, theory-based evaluation does not simply aim to make associations between a plan’s
EREs and actual outcomes (i.e. were the plans’ goals achieved or not?); it also establishes a
framework for answering the critical question why were the plans’ goals achieved or not? 11 In this
way, the results of the evaluation can reveal the extent to which outcomes have been achieved
and explain the reasons for success and failure. While this approach is conceptually demanding, it
will inform practitioners about which plan provisions actually help to achieve the EREs 12 and to
what extent these provisions influence outcomes compared to other factors.
9 An amendment to the RMA in 2005 changed section 75 so that only objectives, policies and rules have to be specified
in plans. Issues, methods other than rules, and anticipated results may be included in plans at the discretion of each
council. This change seems to be a response to criticisms that plans are large, unwieldy, and difficult to interpret.
Nevertheless, for the most part RMA plans still include the full cascade of plan provisions.
10 Adapted from Ericksen et al., 2003, p.35
11 Mason, G., 2008
12 Laurian, et al., 2008.
Due to the complexity of attributing changes in the state of the environment to plan implementation
experimentally, the POE method applies a pragmatic approach to draw causal links between plan
goals and outcomes. This pragmatic approach relies on deliberative expert-driven assessments of
local system dynamics. That is, using expert knowledge and multiple data sources to assess the
strength of associations between planning interventions (policies, rules, standards etc.) and environmental change 13.
The PUCM team applied the POE methodology to three substantive issues commonly dealt with in
RMA plans: stormwater management 14; built heritage protection 15; and landscape and ecological protection 16. Results from our research can be found on the PUCM website
www.waikato.ac.nz/igci/pucm.
13 See Laurian et al., 2009 for a full analysis of this theory-based model and consideration of alternative evaluation models.
14 Day & Crawford et al., 2008.
15 Mason, G., 2008.
16 Beattie, L., in progress.
3. EXPLAINING THE POE METHODOLOGY
3.1 CONCEPTUAL FRAMEWORK
In order to evaluate plan effectiveness using the POE methodology, information is needed to
demonstrate:
1) that the plan’s ‘intervention theory’ has worked in practice, that is, the cause-effect assumptions
underpinning the rational cascade of plan provisions have played out;
2) that the plan was fully implemented; and
3) that the outcomes achieved ‘on the ground’ correspond to the outcomes sought by the plan.
The main components of this approach to plan effectiveness evaluation are shown in Diagram 3
below.
Collectively this information enables observed outcomes to be attributed to plan interventions or
other factors. In other words, determining attribution using the POE methodology entails
developing a model of the plan’s intervention theory, and then investigating whether it worked in
practice by i) looking for evidence that actual outcomes are similar to the plan’s EREs, and ii)
examining the implementation process to explain whether or not (and why) the plan influenced the
outcomes in the ways expected. This knowledge makes it possible to distinguish between 'plan failure', where plan EREs are not met due to poor implementation, and 'theory failure', where the plan's cause-effect logic is flawed and therefore ineffectual 17.
In some cases, the initial investigation will establish that a plan effectiveness evaluation cannot be undertaken because of a fatal barrier (e.g. the plan rules were not implemented, non-regulatory methods were not funded, or plan provisions had poor internal consistency, i.e. the policies and rules are not capable of achieving the expected outcome). More likely, though, the evaluation will show achievement of environmental results expected (EREs) in part or in full, or provide an understanding of where and why expected outcomes are not being achieved.
The following pages outline the process for establishing the plan’s theory, determining the extent of
implementation, and establishing whether or not environmental outcomes have been achieved.
[Diagram components: within an Implementation Context – Intervention Theory, Plan Implementation and Environmental Outcomes, together determining Plan Effectiveness]

Diagram 3: Conceptual Framework for Plan Effectiveness Evaluation (from Mason, G., 2008, p.90)
17 Mason, G., 2008
4. APPLYING THE POE METHODOLOGY
STEP 1 - ESTABLISHING A PURPOSE FOR EVALUATION
Establishing a purpose for an evaluation is an important step in the evaluation process. It helps shape the scale and extent of plan provisions to be evaluated, guides the selection of methods, and provides some parameters for reporting.
Two primary examples of the different purposes for which an evaluation may be undertaken are:
1) Contribution to plan review processes where information about the effectiveness of existing
plan provisions in achieving the expected outcomes is required through s.35 of the RMA;
and/or
2) Where unexpected environmental changes precipitate review of plan provisions.
STEP 2 - ISSUE SELECTION
Each plan contains a range of issues with associated objectives, policies, rules, other methods and, in some cases, environmental results expected. Issues may be broad, such as urban amenity protection across the district, or more specific, such as heritage protection within a defined geographical location.
The selection of issue/s to be evaluated needs to be determined through a clear process that
involves stakeholders to establish the priority issues and gather public and/or organisational
support for evaluation.
In order to undertake an evaluation of the selected issue(s), the following questions should be
answered satisfactorily:
• Can the issue be assessed within a defined geographic location?
That is, can the ‘issue’ be isolated to a defined area where only one set of district and regional
planning provisions apply?
• Is there sufficient information or data relevant to the issue?
Are there studies, reports, data or other sources of information on the issue that will allow you to
see changes in environmental quality over time?
• If there is insufficient data, can it be obtained?
Some plan issues will require regular and consistent monitoring to determine environmental
outcomes over time, such as water quality. For other issues, including protection of heritage and
amenity values, monitoring can be done as part of an effectiveness evaluation where the effects of
activities remain visible over time.
• Have sufficient resources been allocated to undertake the evaluation?
The evaluation is time-intensive and can require contributions of time from other personnel within the council and experts from other organisations (e.g. expert analysis of data).
• Do the plan provisions create a logical path from Objectives to Rules to Outcomes for the
selected issue (see Step 3(A) Plan Logic Mapping below)?
• Have the relevant parts of the plan been implemented (see Step 4)?
Diagram 4: STEPS OF A PLAN OUTCOME EVALUATION
Step 1. Defining a Purpose for the Evaluation
Step 2. Issue Selection
Step 3. Establishing the Intervention Theory of Plans
  Step 3a. Plan Logic Mapping
  Step 3b. Understanding Cause and Effect
Step 4. Evaluating Plan Implementation
Step 5. Identifying Environmental Outcomes and Data Gathering
  Step 5a. Indicator Selection
  Step 5b. Outcome Data Gathering Methods
Step 6. Determining Plan Effectiveness
STEP 3 - ESTABLISHING THE ‘INTERVENTION THEORY’ OF THE PLAN
In order to develop a plan’s intervention theory, it is necessary to make explicit the ways in which
plan provisions are intended to achieve the desired environmental outcomes. This involves two
steps: 1) mapping the logical links in the cascade of plan provisions relevant to the issue under
evaluation; and 2) modelling the cause-effect assumptions underlying the provisions.
STEP 3 (A) - PLAN LOGIC MAPPING
The first step tracks the logical sequence between plan goals or issues, objectives, policies,
implementation and outcomes, which we call ‘plan logic mapping’. Its aim is to determine whether
the plan is logically capable of achieving the anticipated outcomes through its provisions. This
assessment may require analysis of other documents if implementation is intended to occur by way
of activities outside the plan (e.g. LTCCPs, engineering codes of practice etc).
It is important to ensure the EREs in the plan have some logical means of being achieved through the interventions described in the preceding sections of the plan. In some cases, where the EREs are too vague or are not consistent with the objectives and policies, it may be necessary to re-interpret them to better reflect the intention of the objectives and policies 18. In other cases, the EREs may need to be re-worked to provide measurable boundaries for achievement, i.e. setting timeframes or geographic constraints. Re-interpreting plan provisions is best done in conjunction with the councillors and planners of the region or district to ensure that the plan's integrity is retained.
18 Phase 1 of the PUCM research found that EREs tended to be poorly developed in the first round of plan making
under the RMA (Ericksen, 2003)
Diagram 5: Example of Plan Logic Mapping

Objective/Policy (number and description)
Objective 1. To assist in the preservation of water quality in the Harbour and underground and surface waterways of the District.

Policies
1.1 To ensure, by the enforcement of environmental standards, that potentially polluting materials do not contaminate the soil, enter the drainage system, or pollute underground water supplies.
1.2 To introduce into the Plan controls on activities in order to reduce the potential for pollution of the underground water supplies of the District.
1.3 Silt pollution as a consequence of subdivisional or land development is to be subject to bylaw and Regional Council control to reduce the water polluting effects upon natural water courses and the Harbour.

Methods – Standards & Rules
Permitted Activities: 1. Passive recreation in esplanade reserves.
Controlled Activity – Non-complying: 1. Arrangements for disposal of SW (stormwater) from activities to be carried out on the site in such a way as will not adversely affect the environment.
1. (d) cleanfill silt pollution standards (cleanfill consents)
2. management plan for cleanfill (cleanfill consents)
3. stormwater disposal to show 75% removal of sediments from land use activity (all consents)
4. taking of esplanade reserves for subdivision adjoining CMA/waterways wider than 3m

Plan ERE
The expected outcome of the strategy is the retention and enhancement of the present levels of environmental quality and amenity. Through the careful management of the natural and physical resources of the District, it is expected that there will be a measurable improvement in the quantity and quality of water resources, vegetation and general amenity, as well as increased understanding and awareness of the environmental effects of various activities. Further, it is expected that there will be increased awareness of potential results of natural hazards and the range of appropriate responses.

Re-Interpreted ERE (if necessary)
1. Maintenance of in-stream water quality in the short term (1997–2007).
2. Increased riparian plantings and total esplanade reserve area from 1997.
3. An improvement in stream water quality in the long term (2017).
The plan logic mapping exercise identifies all non-regulatory and non-plan methods that may affect
an environmental outcome, but this process does not specify the extent to which the non-plan
factors affect outcomes, nor does it resolve the attribution question.
External factors may include the influence of other plans (regional/district), or national standards,
local engineering standards, economic, climatic or population changes.
If there is weak internal consistency within the plan provisions (i.e. gaps between policies,
rules/other methods and EREs) then the evaluation process cannot continue using this
methodology. Attention should be turned towards improving the internal consistency of the
planning provisions.
◙ Plan Logic Example: Landscape and Ecological Protection – Despite
the District Plan seeking to “protect and enhance landscapes and
ecological protection,” the plan logic mapping showed that rules and
methods are biased toward protecting the remaining landscapes and
retaining existing vegetation rather than promoting enhancements. In nine
out of 10 consents studied, vegetation was removed with little provision for
replacement, and in many other cases landscape values were degraded by
authorising buildings close to a ridge line, thus indicating that the plan
interventions provide insufficient protection and/or the plan has not been
appropriately implemented. Enhancement was not required by conditions in
any of the cases. The plan is clearly not achieving its EREs and further
cumulative loss is expected over time.
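The plan logic mapping described above – tracing whether each objective and policy has some method capable of delivering it – can be sketched as a simple consistency check. The following is an illustrative sketch only: the provision names, the link structure and the gap test are hypothetical inventions, not part of the POE method or of any actual plan.

```python
# Illustrative sketch of plan logic mapping as a graph check.
# Each provision maps to the downstream provisions or methods through
# which it is meant to be achieved. All IDs here are hypothetical.
cascade = {
    "Objective 1 (preserve water quality)": ["Policy 1.1", "Policy 1.2", "Policy 1.3"],
    "Policy 1.1": ["Rule: silt pollution standards"],
    "Policy 1.2": ["Rule: stormwater disposal (75% sediment removal)"],
    "Policy 1.3": [],  # a gap: no rule or method implements this policy
}

def find_gaps(cascade):
    """Return provisions with no downstream method – candidates for the
    'weak internal consistency' that would halt a POE evaluation."""
    return [p for p, methods in cascade.items() if not methods]

print(find_gaps(cascade))  # ['Policy 1.3']
```

A check like this only flags missing links; judging whether linked provisions are actually *capable* of achieving the ERE still requires the evaluator's judgement, as the guide emphasises.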
STEP 3 (B) - UNDERSTANDING CAUSE AND EFFECT
When the relevant plan provisions have been mapped, the second step is to develop a model of
how they are intended to influence the wider environmental system. Essentially, this process
makes explicit the assumptions about cause and effect within the plan. This is an important step
as it helps us understand whether or not and how outcomes ‘on the ground’ can be attributed to
planning interventions.
Different approaches can be employed to model intervention theories 19, with perhaps the most common being logic model diagrams. These show single or multiple 'causal pathways', represented by text boxes and connecting arrows, which visually link plan interventions with intended and unintended outcomes following implementation 20. A limitation of logic model
diagrams, however, is that they cannot always depict the cause-effect relationships embodied in
plans because environmental issues that plans address tend to be complex. In other words, the
number of variables and interrelationships that need to be considered can be too complex to
capture on paper.
19 Rogers, 2000 gives an overview of several approaches
20 See Pathfinder Project (2003) and Christchurch City Council (2004) for further discussion and examples of logic
model diagrams.
To overcome this weakness, a systems approach to modelling intervention theories can be applied. The use of modelling tools – either computer-based 21 or hard copy (e.g. spreadsheets, matrices, drawings) – enables complex environmental systems to be simulated and the changes introduced through planning interventions tested. This step helps later in the evaluation process when trying to attribute environmental outcomes to planning interventions, as it requires the evaluator to specifically consider how an intervention may affect an environmental outcome. The value of this approach is that assumed relationships between plans, outcomes and other non-plan factors become explicit rather than implicit, thus making the attribution exercise more reliable.
The information needed to build and apply a systems model includes details on:
1) the problem to be addressed and its causes;
2) the plan goals sought in addressing the problem;
3) the relevant interventions adopted in a plan; and
4) the means by which the interventions are expected to counter the problem 22.
This information will come primarily from two sources: first, the plan itself (including the plan logic
map completed in the previous Step (3A)) and other related documents, such as section 32
reports; second, from workshops with key council personnel involved in writing and implementing
the plan, as well as outside specialists with knowledge about the issue under evaluation in this
particular environment. This helps build a reliable model within a local context.
An important part of defining intervention theories is to identify the ‘causal mechanisms’ by which
the plan is intended to influence positive outcomes. In other words, how are plan provisions
expected to influence the development process so that activities achieve the plan’s EREs? This is
particularly relevant for plan issues that require changes in human behaviour in order to produce
good results. For instance, plan methods such as free professional advice, design guidance or
education initiatives often aim to increase the awareness of resource-users about particular issues,
as well as their capacity to meet plan goals. Similarly, the rationale for financial incentives, such as
grants, rates relief, and waiver of resource consent fees is typically to increase the commitment of
resource-users to comply with plan requirements. Once identified, an evaluator can determine
whether or not the causal mechanisms were ‘active’ during plan implementation, i.e. did resource
consent applicants respond to plan methods as anticipated, thus leading to proposals that
complied with plan provisions and ultimately met the plan’s EREs?
The process of establishing causality between plans and environmental change can be difficult, hindered by the multiple interventions within – and outside – plans to manage environmental change. For example, stormwater quality may be influenced by internal factors, such as rules in the plan for earthworks management or riparian planting, or by factors external to the plan, such as engineering standards, building standards, climatic change and population growth. Expert knowledge about the issue helps establish the extent to which each factor is likely to affect the outcome in a particular environment. To improve the reliability of the model, the 'predicted' results can be strengthened with state of the environment data where available (see Step 5).
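The deliberative, expert-driven weighting of plan and non-plan factors can be approximated numerically. The sketch below is a toy weighted-influence model: the factor names, weights and scores are invented for illustration, and in practice such inputs would come from stakeholder workshops and expert calibration rather than being asserted as here (the PUCM team used the RAP model for this purpose, not this code).

```python
# Toy weighted-influence model: combining hypothetical expert scores for
# plan and non-plan factors into one predicted outcome score. Weights give
# each factor's relative influence; scores run from -1 (worsens the
# outcome) to +1 (improves it). All values are invented examples.
factors = {
    # factor: (weight, expert score)
    "earthworks rules":        (0.30, +0.5),
    "riparian planting rules": (0.20, +0.4),
    "engineering standards":   (0.25, +0.2),
    "population growth":       (0.25, -0.6),
}

def predicted_outcome(factors):
    """Weighted average of expert scores; a value above 0 suggests the
    combined factors push the outcome (e.g. stormwater quality) toward
    the plan's ERE, below 0 away from it."""
    total_weight = sum(w for w, _ in factors.values())
    return sum(w * s for w, s in factors.values()) / total_weight

print(round(predicted_outcome(factors), 3))  # 0.13
```

Separating the plan-rule factors from the external ones in such a model is what lets an evaluator ask how much of a predicted (or observed) change is attributable to the plan rather than to, say, population growth.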
21 The PUCM team used the RAP model – Rapid Assessment Programme – which was developed as a means for
policy-makers to test the likely effectiveness of alternative proposals in alleviating environmental issues. The
methodology behind RAP and examples of its application have been set out in two published papers: 1) Kouwenhoven
and van der Werff ten Bosch (2004); and 2) van der Werff ten Bosch and Kouwenhoven (2004). The potential for RAP to
be used to evaluate the effectiveness of district plans was first explored in a paper presented at a New Zealand Planning
Institute conference (Kouwenhoven et al., 2005) (Cited Mason, G., 2008).
22 Mason, 2008
A process for validating results is also important. Depending on the process undertaken for
building the model, expert peer review, community review or internal agency review becomes
critical for testing the model and later analysis of the outcomes from the model.
◙Modelling the Intervention Theory: Conducting Stakeholder
Workshops – The PUCM team used stakeholder workshops to gain expert
input into the modelling process for the issue of stormwater management.
This was accomplished in two ways.
First, workshop participants were asked to assess the likely impact of
development on a range of outcome indicators based on the plan’s EREs
(e.g. stream quality, odour, Mauri and visual quality – as shown in Diagram
6 below) as if there were no plan provisions in place; that is, if landowners’
ability to develop their property was unfettered. This established a baseline
for considering the extent to which the plan’s stormwater provisions
countered the effects of unconstrained development.
Second, the workshop participants were given ‘scorecards’ for assessing
the effectiveness of the plan’s stormwater rules and methods (see
Appendix 1). Participants were asked to indicate on the scorecards the
effect a particular plan rule or method was expected to have in theory (as
opposed to their view of what happens in practice) on the outcome
indicators, shown along the top of the scorecards. After giving a ‘score’
participants were asked to explain their assessment or, in other words,
outline their thinking about the link between the specific plan provision and
the expected outcome. It was this causal explanation that was most
important, as it provided an insight into the intervention theory and the
assumptions implicit in the plan regarding how it would influence outcomes.
Carrying out the assessments in a group allowed for debate and discussion
as to the validity of particular viewpoints. Finally, participants were asked
their view of the effectiveness of the combined stormwater provisions (i.e.,
the ‘plan as a whole’), as compared to the unconstrained development
scenario.
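The scorecard entries described above record expected effects as ranges of symbol scores (e.g. ‘--- .. 0’). As a minimal sketch, assuming a simple numeric coding (the mapping below is illustrative and not part of the PUCM method), such entries can be parsed so participants’ assessments can be compared with the model’s:

```python
# Illustrative numeric coding for scorecard symbols; the scale is an assumption.
SCALE = {"---": -3, "--": -2, "-": -1, "0": 0, "+": 1, "++": 2, "+++": 3}

def score_range(entry):
    """Parse a scorecard entry such as '--- .. 0' into a (low, high) pair."""
    low, high = (SCALE[s.strip()] for s in entry.split(".."))
    return low, high

print(score_range("--- .. 0"))  # (-3, 0)
print(score_range("0 .. +"))    # (0, 1)
```

Once coded, the midpoint or width of each range can be used to flag provisions where participants and the model disagree most strongly.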
Diagram 6. Factors of change in stream water quality in the rural zone (from Laurian et al., forthcoming)
[Diagram omitted. It maps the factors influencing stream quantity, quality and fluctuations and the amenity outcomes (Mauri, odour, safety, use value, visual quality): climate (rainfall); forest (native and production area); agriculture (area, intensity); population growth and greenfield development (subdivisions, % impervious area, earthworks, infiltration, detention); riparian margins (grass or vegetated); wetlands (area, natural form, erosion); the stormwater network (pipes, % treated, pond capacity); and groundwater (level, fluctuation), which together determine runoff quantity and quality. Shaded symbols mark factors controlled by the District (plan or non-plan).]
Straight lines indicate positive relationships (an increase in X is expected to yield an increase in Y).
Dotted lines indicate negative relationships (an increase in X is expected to yield a decrease in Y).
STEP 4- EVALUATING PLAN IMPLEMENTATION
Once the relevant plan interventions have been mapped and their cause-effect assumptions
modelled, the next step is to assess the implementation of the plan. This is necessary to
determine if (and to what extent) the provisions of the plan have been applied in practice, and thus
allows a relationship to be established between observed environmental outcomes and planning
provisions.
An assessment of plan implementation may involve determining whether:
• plan policies are reflected in resource consents granted by councils (e.g. via conditions of consent) 23;
• third parties consistently have input where this is required by the RMA, plan provisions or good practice, e.g. tangata whenua, the NZHPT, Department of Conservation;
• compliance monitoring is undertaken to ensure conditions are met;
• Environment Court appeals have upheld or overturned councils’ decisions;
• non-regulatory methods proposed in plans, e.g. grants, rates relief, education programmes, have been funded through the LTCCP/annual plan process, developed and implemented.
(adapted from Inform Planning Ltd, 2008)
As with the Plan Logic Mapping exercise above, if there is a significant gap between the plan provisions and their implementation then the evaluation process cannot continue using this methodology. Attention should instead be turned towards improving the implementation of the planning provisions.
STEP 5 - IDENTIFYING ENVIRONMENTAL OUTCOMES & DATA GATHERING
STEP 5 (A) – INDICATOR SELECTION
The selection of indicators should be guided by the goals and logic of the plan and its EREs, and
use existing monitoring frameworks where available. In particular, indicators should be able to
show changes in the selected outcome areas. For example, if the outcome is to improve
stormwater quality, the indicator selected needs to be able to show changes arising from activities
creating stormwater (such as zinc from roof run-off or selected heavy metals from road run-off),
rather than just activities affecting water quality in general.
In selecting indicators, a ‘SMART’ process can be applied:
1) Specific: The indicator clearly and directly relates to the relevant subject and is not an indicator for other factors.
2) Measurable: The indicator and results can be unambiguously measured.
3) Achievable: It is realistic and achievable to collect data through the indicator.
4) Relevant: The indicator is relevant to the evaluation.
5) Time-bound: The indicator provides data that can be tracked in a cost-effective manner at a desired frequency for a set period.
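As a rough illustration, the SMART screen above can be applied mechanically once each criterion has been judged for every candidate indicator. The indicator names and judgments below are hypothetical:

```python
# Sketch of a SMART screen for candidate indicators. The candidate data and
# judgments are invented for illustration; the guide prescribes no software.
CRITERIA = ["specific", "measurable", "achievable", "relevant", "time_bound"]

def smart_screen(candidates):
    """Keep only indicators judged to meet all five SMART criteria."""
    return [c["name"] for c in candidates
            if all(c.get(crit, False) for crit in CRITERIA)]

candidates = [
    {"name": "zinc concentration in roof run-off", "specific": True,
     "measurable": True, "achievable": True, "relevant": True, "time_bound": True},
    {"name": "general perception of water quality", "specific": False,
     "measurable": False, "achievable": True, "relevant": True, "time_bound": False},
]

print(smart_screen(candidates))  # ['zinc concentration in roof run-off']
```

Only the discriminating indicator survives the screen; the vaguer candidate fails the Specific and Measurable tests.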
23 ‘Monitoring Plan Implementation in New Zealand – Planning Practice Guide 2’ by M. Day et al (2005) outlines a
method for evaluating implementation through resource consents.
Most importantly, the indicators need to be discriminating. Discriminating indicators reveal
“unambiguous, predictable, verifiable” relationships between plans and outcomes; are sensitive to
planning processes and changes; and reflect the long term outcomes of plans 24 . They need to be
able to show changes to the environment that are directly associated with the plan provisions being
implemented.
Depending on the scale of the evaluation or the nature of the outcomes being evaluated, an initial ‘trial process’ of data collection using selected indicators may be required to establish the validity of the data gathered for the evaluation. Environmental data at the local scale tends to be scarce, sporadic or unreliable for many issues. In some cases, outcomes can take years to emerge, making environmental evaluation particularly difficult 25.
Data should ideally be able to show changes from a time period directly before the plan provisions became operative, and then from a reasonable time thereafter (the relevance of the time gap will vary from issue to issue). The purpose is to establish a ‘before plan’ and ‘after plan’ state.
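A minimal sketch of the ‘before plan’/‘after plan’ comparison for a single indicator (the taxa counts echo the example in Diagram 7; the helper name is ours):

```python
# Compare pre-plan (t1) and post-plan (t2) values for one indicator.
def plan_period_change(t1_value, t2_value):
    """Return absolute and percentage change between pre- and post-plan states."""
    change = t2_value - t1_value
    pct = 100.0 * change / t1_value if t1_value else float("nan")
    return change, pct

# e.g. macroinvertebrate taxa counted before and after the plan became operative
change, pct = plan_period_change(8, 16)
print(change, pct)  # 8 100.0
```

The direction and size of the change is only the starting point; attributing it to plan provisions still requires the implementation evidence gathered at Step 4.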
Diagram 7: Example of Table Linking Data to Relevant ERE

Objective: 1. To enable the use and development of the natural and physical resources of the District in a manner which recognises the need for environmental protection.
ERE (interpreted) & District Plan’s ERE Reference #: 1. Maintenance of in-stream water quality in the short term (1997-2007); 2. …; 3. …

Row 1:
• Indicator: number of species
• Data provided by council or region?: DC
• Source of data: Freshwater report ABC 1999
• ERE monitored by council? (Y/N): N
• Time period data collected: 1998
• Value t1 (pre-plan, 1999): low diversity, pos. decreasing
• Value t2 (post-plan): 8 to 16 taxa found

Row 2:
• Indicator: changes in variety of species (taxa abundance)
• Data provided by council or region?: RC
• Source of data: Effects of Proposed Urban Development on the Coastal Marine Environment, 2004 (N.Body)
• Time period data collected: 2003-04
• Outcome & conclusion/comment: number of common freshwater fauna: 5

24 Hockings, et al., 2000
25 Laurian, et al., 2008
STEP 5 (B) – OUTCOME DATA GATHERING METHODS 26
A range of data gathering methods will be required to identify and explain relevant environmental
outcomes. In some cases desk-top analysis of existing reports may be appropriate. In other
instances, field research may be required. The examples set out below include some guidance for
desk-top and field-based research, and include guidance on sampling, standardised observation,
document analysis, interviews/questionnaires and a newer approach called Photovoice surveying.
(i) Sampling
It may not be necessary to monitor every aspect of the issue being evaluated. The aim is to obtain
enough information to understand the strengths and weaknesses of a plan, its implementation, the
extent to which key outcomes are achieved and the contextual factors that assist or impede its
performance. Consequently, sampling procedures are useful for gaining a representative sample
of the issue under evaluation. Sampling is a means of selecting a manageable number of ‘units’
(consents, properties, activities etc) which, when analysed, provide reliable data about plan
effectiveness. The goal is to ensure that the sample is sufficient to allow the evaluation findings to
be generalised to the whole plan, but not so large as to demand unrealistic time and resources to
complete. There are a number of suitable sampling techniques, such as stratified random
sampling, which is useful for selecting individual items from a list (e.g., consents granted, items on
a plan schedule), and cluster sampling which is useful for choosing properties from a large area,
e.g. in a particular zone or policy area. The extent to which relevant information can be accessed
from a council’s information management system may also influence the sampling procedure used.
Qualitative sampling methods are also useful, e.g. for selecting cases from the main sample for
further examination 27 . Such methods are valuable when faced with time and/or resource
pressures because they allow only a small number of cases to be chosen for detailed investigation.
For instance, ‘intensity sampling’ enables resource consents that led to very good and very poor
outcomes to be selected and studied in-depth, thereby exposing, intensely, the strengths and
weaknesses of the plans and of the implementation process 28 .
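As an illustrative sketch, stratified random sampling from a consent register might look like the following (the field names, strata and per-stratum sample size are hypothetical, not drawn from any council system):

```python
# Stratified random sampling of resource consents grouped by activity type.
import random

def stratified_sample(units, key, per_stratum, seed=42):
    """Draw up to per_stratum units at random from each stratum."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    strata = {}
    for u in units:
        strata.setdefault(u[key], []).append(u)
    sample = []
    for group in strata.values():
        sample.extend(rng.sample(group, min(per_stratum, len(group))))
    return sample

# a hypothetical register: 40 earthworks and 60 subdivision consents
consents = [{"id": i, "activity": a}
            for i, a in enumerate(["earthworks"] * 40 + ["subdivision"] * 60)]
picked = stratified_sample(consents, "activity", per_stratum=5)
print(len(picked))  # 10
```

Stratifying first guarantees that every activity type is represented even when one dominates the register, which a simple random draw would not.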
(ii) Standardised Observation
Once a sample is selected, outcomes arising from plan implementation (for both permitted and
consented activities) can be assessed, which inevitably requires making judgments based on
observation. The standardised observation approach involves using an observation schedule to
enable the outcomes to be judged against the relevant assessment criteria from a district or
regional plan. In other words, the scope of the exercise is confined to evaluating the degree of fit
between the observed outcomes and a plan’s decision-making criteria. The use of a standardised
evaluation form allows a consistent measure of outcomes. It can also be used to record non-compliant or unconsented activities. When aggregated, the data obtained from the observation
schedules provides a useful picture of the extent to which, overall, the plan’s outcomes are being
achieved or not achieved. An example of an observation schedule used to assess resource
consent outcomes for properties in a residential heritage zone is included as Appendix 2.
To further ensure a consistent and high quality assessment, the observation schedules should be
completed by appropriately qualified specialists who understand the values associated with the
resource under assessment, and who can meaningfully apply the plan’s assessment criteria in a
post hoc appraisal of outcomes. A considerable amount of quantitative information is gathered
through this process and suitable software, such as SPSS, may be needed to assist with collation
and analysis.
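For small samples, collating observation-schedule ratings need not involve specialist software; a short sketch (the three category labels and the counts are illustrative):

```python
# Aggregate categorical ratings from completed observation schedules into an
# overall outcome profile (percentages by category).
from collections import Counter

def outcome_profile(ratings):
    """Percentage of observed outcomes falling in each category."""
    counts = Counter(ratings)
    total = len(ratings)
    return {cat: round(100 * n / total) for cat, n in counts.items()}

# hypothetical ratings for a sample of 70 observed properties
ratings = ["enhanced"] * 7 + ["maintained"] * 25 + ["lost"] * 38
print(outcome_profile(ratings))
```

The resulting profile is the kind of aggregate picture described above: the share of outcomes enhanced, maintained or lost across the whole sample.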
26 This section on sampling has been adapted from Inform Planning, 2008.
27 For a wide range of qualitative sampling techniques see Patton, M., 2002.
28 The sampling methods mentioned are explained and applied in Mason, G., 2008 (in particular Chapter 4).
(iii) Document Analysis
Analysing a wide range of documents is necessary to identify the activities that have been
undertaken for the sample properties and to investigate the plan implementation process.
Such documents include:
• the district or regional plan under evaluation, including relevant documentation such as section
32 reports;
• resource consent applications;
• assessments of applications by council staff and any commissioned reports;
• section 94 reports;
• section 104 reports;
• details of any complaints, enforcement or Environment Court proceedings;
• correspondence relating to any of these matters.
Additionally, photographs (GIS maps, historical aerial photographs) provide another form of
documentary evidence that can be used in the analysis process.
(iv) Interviews/Questionnaires
Interviews and/or questionnaires enable rich information to be gained to help understand the ways
in which plan implementation influenced observed outcomes. A key focus is to know ‘what worked
in terms of plan implementation and in what circumstances?’ (and vice versa), in order to reveal
the factors that supported and restrained successful plan implementation. Relevant key informants
will consist of those who operate on the ‘development side’ of plan implementation, that is
landowners and their professional advisors, as well as those from the ‘control side’, including
council decision-makers, planners and specialist staff. Others involved in the management of the
issue or as observers, such as government agencies, community groups, NGOs and iwi, may also
be involved.
Information about the state of and trends in the environmental issue being scrutinised can also be
sought from key informants. This information can then be used to help attribute the evaluation
results to the plan interventions or other factors.
◙The Value of Key Informant Experiences – With respect to the council
personnel, two categories of questions proved useful for the PUCM case
studies. The first sought the perceptions and experiences of those involved
in specific development proposals that required resource consent. The
second covered staff views concerning the factors that supported or
undermined achievement of the plan’s EREs. Thus, the focus of the
interviews was to gain insights into the local planning and decision-making
contexts and to isolate the institutional factors that influenced outcomes.
With regard to the informants initiating development proposals, we were
interested in their prior experience with the planning process, including their
pre-development awareness of the plan and the extent to which this
shaped the design of their proposals. We also queried whether and how
the professional advisors engaged by the applicants (e.g. architects,
engineers, planners) influenced the proposals. Finally, we sought to gauge
the willingness and capacity of the applicants to comply with plan
provisions and the extent to which this was influenced by the consent
process and negotiations with council staff.
(v) Photovoice: a community-based research method
Photovoice is a participatory action research strategy developed to empower people (see
http://en.wikipedia.org/wiki/Photovoice). Cameras are provided to participants who then record
what is important to them as part of a shared learning experience. A key principle of the method is
that the community is involved in the research process and ideally controls both the process and
the products. With modification, this method can be used in planning to identify community
aspirations and for evaluation purposes. It is particularly useful for articulating community values
and perceptions regarding amenity, landscape and urban design.
Key steps in the Photovoice methodology:
1) A planning stage (scoping the ‘problem’)
2) A recruitment stage (find and select guidance group, an audience and the participants)
3) A training stage (facilitators, photographers and researchers learn about the method)
4) A discussion and brainstorming stage (participants develop a list of themes for their photos)
5) Photo shoot assignment (may be done several times over several weeks). Facilitators debrief
the photos and a free writing session is done on selected photos. The goal is to find out the
‘root cause’ of the condition.
6) A public sharing and education phase.
Note: when carrying out research using cameras, there are ethical concerns to manage e.g., a
photo may be incriminating, capable of manipulating public opinion or the subject may not have
given consent. To minimise the potential for ethical breaches, care must be taken when publishing
photos and to ensure that the accompanying process is sound.
[Photographs omitted.] Examples from the Waiheke Island consultation 29: dislike for boats and vehicles occupying reserves and road verges; liking for natural landscape, especially coastal margins.
29 Waiheke Island Community Planning Group Inc. and Auckland City Council, Hauraki Gulf Islands District Plan Review, notes reporting the results of consultation with the Waiheke Island community using photography to identify what people liked and didn’t like about the place, April-June 2005.
(vi) A combination of methods gives better results
In all likelihood, a combination of quantitative and qualitative methods will need to be employed, as there tend to be a number of barriers to single-method approaches to outcome evaluation. These
may include:
• insufficient data for quantitative assessment, including poor quality data
• data that does not ‘discriminate’ between plan or non-plan influences
• complex environmental changes with multiple natural influences
• multiple policy influences or contextual factors that swamp plan interventions e.g., high rates of
urban expansion
The extent to which these barriers affect the evaluation will depend, inter alia, on the issue being
evaluated, the range of alternative data, or the budget for undertaking new monitoring.
STEP 6 - DETERMINING THE EFFECTIVENESS OF THE PLAN
In coming to a conclusion on the effectiveness of a plan’s provisions, evaluators must draw on the
results from all evaluation steps undertaken through the previous pages. To summarise, a plan
can be considered to have been effective when:
• the observed environmental outcomes align with the plan’s stated EREs; and
• the plan’s intervention theory was shown to have worked in practice; and
• the plan has been fully implemented.
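The three-part test above reduces to a simple conjunction; a minimal sketch (the function name and boolean inputs are ours, standing in for the evaluator’s judgments):

```python
# Step 6 decision rule: a plan is judged effective only when all three
# conditions hold. Inputs are evaluator judgments, not computed values.
def plan_effective(outcomes_align_with_eres, intervention_theory_held,
                   fully_implemented):
    return all([outcomes_align_with_eres,
                intervention_theory_held,
                fully_implemented])

print(plan_effective(True, True, False))  # False
```

A failure on any one limb, for example partial implementation, blocks a finding of effectiveness and points the evaluator towards the reasons why.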
Diagram 8 illustrates how the data gathering and analysis methods discussed in this Guide assist
in gathering the information needed for each component of the methodology.
The approach we use to gauge plan effectiveness can be described as ‘analytic generalisation’,
which is where “a previously developed theory is used as a template with which to compare the
empirical results” of the evaluation 30 . In practice this involves contrasting the intervention theory
developed at Step 3 of the methodology to actual plan implementation, based on the identification
of environmental outcomes and examination of the implementation process to determine when the
intervention theory proved accurate, when it fell short, and the reasons why.
Importantly, the results inform the evaluator – and wider community – about the drivers of
environmental change, as well as how the plan failed or succeeded and why. That is, did the
plan's theory play out as expected? Or, have other factors such as poor implementation or weak
internal consistency restricted the ability of the plan to achieve those outcomes?
The results can be fed into plan review processes to validate existing policy or form part of new policy development. In some cases, the results may indicate attention needs to be directed to non-plan mechanisms, where these prove more effective in achieving the environmental results desired by the community. In either case, by determining the effectiveness of the provisions, any subsequent reviews of the plan or amendments to planning provisions will have a sound and defensible rationale for change.
Diagram 8: Methods of Data Gathering and Analysis Used to Implement the Conceptual Framework for Plan Effectiveness Evaluation (adapted from Mason, G., 2008, p.115)
[Diagram omitted. Within the implementation context, it links the four components of the framework to the methods used for each:
• Intervention Theory: plan-logic mapping; stakeholder workshops; document analysis; model construction
• Plan Implementation: document analysis; key informant interviews; questionnaires
• Environmental Outcomes: indicator selection; sampling; standardised observation; quantitative and qualitative analysis
• Plan Effectiveness: analytic generalisation]
30 Yin, 2003, pp.32-33; Patton, 2002 (pp.453-454) refers to ‘deductive analysis’ to describe the same process of
contrasting evaluation findings to a theorised model.
◙ Attributing Built Heritage Outcomes to Plan Implementation – A
PUCM case study evaluated the effectiveness of two district plans in
protecting built heritage. The first involved individual buildings listed on a
heritage schedule while the second concerned properties in a large
residential heritage zone. A sample of 70 scheduled buildings was
selected using the stratified random sampling technique, whereas 250
properties were selected from the heritage zone using cluster sampling.
Stakeholder workshops showed that plan rules aimed to increase owners’
awareness of their site’s heritage values, as well as enabling the councils to
intervene in the development process and assess the likely effects of
development proposals that may have detrimental impacts. A range of plan
rules were used to protect aspects of a site’s heritage values. For instance,
rules controlling additions and alterations aimed to maintain and enhance a
building’s architectural and historic values, whereas rules for new buildings
sought to protect a property’s setting and streetscape values. Non-regulatory methods were used by both councils, including heritage grants,
which aimed to increase the commitment of applicants to comply with the
plan, and specialist advice from council heritage advisors, which was
intended to improve the capacity of applicants to comply.
An architectural historian assessed the outcomes arising from resource
consents granted for buildings and properties in each of the samples. In
both cases this revealed that only around 10% of consents led to an
enhancement of heritage values, 35% of consents maintained heritage
values, and 55% led to a loss of values. Many consent outcomes failed to
satisfy key assessment criteria, particularly those relating to the
maintenance of architectural and historic values.
Using the intensity sampling technique, four consents that led to very
positive outcomes were then chosen from the main samples, as well as
four consents that led to very poor outcomes. The implementation process
for each was examined in detail, including analysis of consent
documentation and interviews with key informants involved in the consent
process.
Key factors that influenced the outcomes included weak and permissive
plan provisions for heritage protection, inadequate identification in the plans
of the heritage values to be protected, planners’ focusing on meeting
processing times rather than outcomes, inexperienced planners who
authorised detrimental changes, and applicants who wanted to maximise
the development potential of a property rather than protect heritage values.
Thus, the key factors that shaped successful heritage protection outcomes were: the quality and thoroughness of plan provisions, planning staff knowledge of and commitment to heritage protection, developers’ commitment to preservation, and pre-application interactions between developers and staff.
5. USEFUL REFERENCES
• Evaluation Guide. Christchurch City Council (2004). Available at:
www.ccc.govt.nz/publications/evaluationguide/EvaluationGuide.pdf
• Evaluating Policy and Practice: A New Zealand Reader. Lunt, N., C. Davidson &
K. McKegg, eds. (2003). Pearson Prentice Hall, Auckland, NZ.
• District Plan Monitoring - A Guide to Getting Started. Berghan, T., & Shaw, A.
(2000). Wellington: Opus International Consultants Limited.
• Evaluating Regional Policy Statements and Plans - a guide for Regional and Unitary Authorities. Enfocus (2008).
• Monitoring and evaluation of the Operative Bay of Plenty Regional Policy
Statement. (2008). EBOP Strategic Policy Report 2008/05. Available at
www.envbop.govt.nz/Policies/Policy-281108RegionalPolicyStatementMonitoringandEvaluation2008.pdf
BIBLIOGRAPHY
Berghan, T., & Shaw, A. (2000). District Plan Monitoring - A Guide to Getting Started. Wellington: Opus
International Consultants Limited.
Chen, H.-T., & Rossi, P. H. (1980). The multi-goal, theory driven approach to evaluation: a model linking
basic and applied social science. Social Forces, 59(1), 106-122.
Christchurch City Council (2004). Evaluation Guide. Christchurch: CCC. Available at:
www.ccc.govt.nz/publications/evaluationguide/EvaluationGuide.pdf
Crawford, J., Day, M., et al. (2007). Achieving anticipated environmental outcomes: how effective is the district plan? PUCM Summary Report to Papakura District Council. Hamilton: The International Global
Change Institute, The University of Waikato.
Ericksen, N. J., Berke, P. R., Crawford, J. L., & Dixon, J. E. (2003). Planning for Sustainability: New Zealand
under the RMA. Hamilton: The International Global Change Institute, The University of Waikato.
Enfocus, (2008) Evaluating Regional Policy Statements and Plans – a guide for Regional and
Unitary Authorities.
Gilg, A. W. (2005). Planning in Britain: Understanding and Evaluating the Post-War System. London: Sage.
Kouwenhoven, P., Mason, G., Ericksen, N., & Crawford, J. (2005). From plans to outcomes: attributing the
state of the environment to plan implementation. Paper Presented at the New Zealand Planning
Institute Conference - "Pushing the Boundaries", 5-8 May, Auckland. Available at:
www.nzplanning.co.nz/Files/C11.pdf
Laurian, L., Crawford, J., Day, M., Kouwenhoven, P., Mason, G., Ericksen, N., and Beattie, L: “Can
Effectiveness of Plans be Monitored? Answers from POE – a New Plan Outcome Evaluation
Method,” Planning Quarterly No. 179 pp.26-30, September 2008.
Laurian, L., Day, M., Backhurst, M., Berke, P., Ericksen, N., Crawford, J., Dixon, J., & Chapman, S. (2004).
What drives plan implementation? Plans, planning agencies and developers. Journal of
Environmental Planning and Management, 47(4), 555-577.
Laurian, L., Day, M., Berke, P., Ericksen, N., Backhurst, M., Crawford, J., & Dixon, J. (2004). Evaluating plan
implementation: a conformance-based methodology. Journal of the American Planning Association,
70(4), 471-480.
Laurian, L, Crawford, J, and Day, M, with Peter Kouwenhoven, Greg Mason, Neil Ericksen and Lee Beattie
(Forthcoming) Evaluating the Outcomes of Plans: Theory, Practice, and Methodology. Accepted for
publication by Journal of Environmental Planning and Management B, April 2008.
Leggett, M. (2002). Assessing the Impact of the RMA on Environmental Outcomes: Final Report. Auckland:
URS New Zealand Limited.
Lunt, N., C. Davidson & K. McKegg, eds. (2003). Evaluating Policy and Practice: A New Zealand Reader.
Pearson Prentice Hall, Auckland, NZ.
Mason, G. (2008). Evaluating the Effectiveness of Conformance-Based Plans: Attributing Built Heritage
Outcomes to Plan Implementation under New Zealand’s Resource Management Act. PhD Thesis,
University of Waikato
Inform Planning Ltd (2008). Review of Auckland Regional Council’s Regional Policy Statement: Scoping
Study on Historic Heritage Monitoring by Auckland Local Authorities. Unpublished report
commissioned by Auckland Regional Council
Morrison, N., & Pearce, B. (2000). Developing indicators for evaluating the effectiveness of the UK land use
planning system. Town Planning Review, 71(2), 191-211.
Pathfinder Project (2003). Guidance on Outcome Focused Management: Building Block 3: Intervention Logic. Wellington: State Services Commission. Available at:
http://io.ssc.govt.nz/pathfinder/documents/pathfinder-BB3-intervention_logic.pdf
Patton, M. Q. (2002). Qualitative Research and Evaluation Methods (3rd ed.). Thousand Oaks: Sage.
Pawson, R., & Tilley, N. (1997). Realistic Evaluation. London: Sage.
Rogers, P. J. (2000). Causal Models in Program Theory Evaluation. In P. J. Rogers & T. A. Hacsi & A.
Petrosino & T. A. Huebner (Eds.), Program Theory in Evaluation: Challenges and Opportunities (pp.
47-55). San Francisco: Jossey-Bass.
Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (1999). Evaluation: A Systematic Approach (6th ed.).
Thousand Oaks: Sage.
Talen, E. (1996a). After the Plans: Methods to Evaluate the Implementation Success of Plans. Journal of
Planning Education and Research, 16(2), 79-91.
Talen, E. (1996b). Do plans get implemented? A review of evaluation in planning. Journal of Planning
Literature, 10(3), 248-259.
Weiss, C. H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 21(4),
501-524.
Weiss, C. H. (1998). Evaluation: Methods for Studying Programs and Policies (2nd ed.). New Jersey:
Prentice Hall.
Willis, G. (2003). Drafting Issues, Objectives, Policies and Methods in Regional Policy Statements and District Plans. Ministry for the Environment. Available at: www.mfe.govt.nz/publications/rma/drafting-issues-jul03/drafting-issues-jul03.pdf
Yin, R. K. (2003). Case Study Research: Design and Methods (3rd ed.). Thousand Oaks: Sage.
Waiheke Island Community Planning Group Inc. and Auckland City Council, Hauraki Gulf Islands District
Plan Review, Notes reporting the results of consultation with the Waiheke Island community using
photography to identify what people liked and didn’t like about the place, April-June 2005.
Christopher Wragge, Waiheke Island Community Planning Group Inc., pers.comm. 18 May 2009.
Vaughan, Lisa, Janet Forbes & Britteny Howell. Using Photovoice Methodology to Give “Voice” to those
Typically Unheard. Workshop, Third International Conference on Interdisciplinary Social Sciences,
Monash University Centre, Prato, Tuscany, Italy, 22-25 July 2008.
http://i08.cgpublisher.com/proposals/368/index_html
Downloaded 28 May 2009
http://www.photovoice.ca/ downloaded 28 May 2009
http://en.wikipedia.org/wiki/Photovoice downloaded 28 May 2009
APPENDIX 1
Example of an Intervention Scorecard for Stakeholder Workshops:
Used To Assist Modelling a District Plan’s Intervention Theory for Stormwater
Management
UNCONSTRAINED DEVELOPMENT:
What effect would a strong (+++) increase in median house value have on the following environmental qualities, if there was no district plan?
+++ denotes a strong increase
++
denotes a moderate increase
+
denotes a weak increase
+…++ denotes a weak to moderate increase (etc.)
-----…---
denotes a strong decrease
denotes a moderate decrease
denotes a weak decrease
denotes a moderate to strong decrease (etc.)
The blank rows are used for entering personal comments or notes from group discussion.
case
stream quality
amenity: Mauri
amenity: odour
assessment
model’s
assessment
PLANNING PRACTICE GUIDE 4
Page 32
amenity: use value
amenity visual
SEDIMENT CONTROL:
What effect does sediment control have on the following environmental qualities?
In the model, this plan intervention changes the effect of earthworks on the natural form of both wetlands and streams from
strong (---) to moderate (--) (countering effects of unconstrained development).
case                     | stream quality | amenity: Mauri | amenity: odour | amenity: use value | amenity: visual
model's assessment       | --- .. 0       | -- .. 0        | 0 .. +         | - .. 0             | - .. 0
unconstrained assessment | --- .. 0       | --- .. 0       | 0 .. +         | - .. 0             | -- .. 0
Individual comments      |                |                |                |                    |
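Entries such as "--- .. 0" express a range of possible effects on the symbolic scale. Where workshop responses need to be compared systematically with the model's assessment, the symbols can be mapped onto numbers. The sketch below is illustrative only: the numeric values (-3 to +3), the parsing of the "a .. b" range notation, and the function names are assumptions for this example, not part of the published PUCM methodology.

```python
# Illustrative sketch only: one possible numeric mapping for the scorecard
# symbols. The values -3..+3 and the overlap test are assumptions, not part
# of the PUCM method as published.

SCALE = {
    "---": -3,  # strong decrease
    "--": -2,   # moderate decrease
    "-": -1,    # weak decrease
    "0": 0,     # no effect
    "+": 1,     # weak increase
    "++": 2,    # moderate increase
    "+++": 3,   # strong increase
}

def parse_entry(entry: str) -> tuple[int, int]:
    """Parse a scorecard entry such as '--- .. 0' into a numeric (low, high) range."""
    parts = [p.strip() for p in entry.split("..")]
    values = sorted(SCALE[p] for p in parts)
    # A single symbol (e.g. '++') is treated as a point range.
    return values[0], values[-1]

def ranges_overlap(a: tuple[int, int], b: tuple[int, int]) -> bool:
    """True if a participant's range is consistent with (overlaps) the model's range."""
    return a[0] <= b[1] and b[0] <= a[1]

# Example: compare the model's stream-quality range with a participant's response.
model = parse_entry("--- .. 0")        # (-3, 0)
participant = parse_entry("-- .. 0")   # (-2, 0)
print(ranges_overlap(model, participant))  # True: the assessments are consistent
```

A mapping like this makes it straightforward to flag scorecard cells where a participant's assessment falls entirely outside the model's range, which are the cells worth discussing at the workshop.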
CONTAMINANT CONTROL:
What effect does contaminant control have on the following environmental qualities?
In the model, this plan intervention moderately (++) increases the % treated stormwater (countering effects of unconstrained development).
case                     | stream quality | amenity: Mauri | amenity: odour | amenity: use value | amenity: visual
model's assessment       | --- .. ++      | --- .. 0       | 0 .. +         | - .. 0             | -- .. 0
unconstrained assessment | --- .. 0       | --- .. 0       | 0 .. +         | - .. 0             | -- .. 0
Individual comments      |                |                |                |                    |
INCREASE STORMWATER POND CAPACITY:
What effect does an increase in stormwater pond capacity have on the following environmental qualities?
In the model, this plan intervention weakly (+) increases the capacity of ponds (countering effects of unconstrained development).
case                     | stream quality | amenity: Mauri | amenity: odour | amenity: use value | amenity: visual
model's assessment       | --- .. +       | --- .. 0       | 0 .. +         | - .. 0             | -- .. 0
unconstrained assessment | --- .. 0       | --- .. 0       | 0 .. +         | - .. 0             | -- .. 0
Individual comments      |                |                |                |                    |
LOW IMPACT DETENTION:
What effect does low impact detention have on the following environmental qualities?
In the model, this plan intervention weakly (+) increases the stormwater detention in Takanini, the area of wetlands and the capacity of the ponds (countering effects
of unconstrained development).
case                     | stream quality | amenity: Mauri | amenity: odour | amenity: use value | amenity: visual
model's assessment       | --- .. +       | --- .. 0       | 0 .. +         | - .. 0             | -- .. 0
unconstrained assessment | --- .. 0       | --- .. 0       | 0 .. +         | - .. 0             | -- .. 0
Individual comments      |                |                |                |                    |
APPENDIX 2
Example of an Observation Schedule for Monitoring
Building Heritage Outcomes
Property number:
Resource consent number:
Name of heritage assessor:
Date of assessment:
House constructed (i) pre-1930 or (ii) post-1930? (circle)
Additions or Alterations to Any Existing Building
District Plan Assessment Criteria

Criterion 1
a) Were changes to the street-front façade avoided?
b) Do additions & alterations preserve the essential character of the:
   (i) street-front façade?
   (ii) side elevations (not rear)?
   (iii) roof planes of houses built before 1930?
(NB: any additions & alterations should preserve the essential character, with street-front façade changes generally avoided except for original detail uncovered & sympathetic alterations)

Criterion 2
c) Do alterations &/or additions to houses built before 1930 retain & reflect design characteristics of the original house:
   (i) Detailing?
   (ii) Materials?
   (iii) Finishes?
   (iv) Proportions?
   (v) Fenestration?

Criteria 3, 4, 5, 6 etc…

Outcomes (tick one per item)
Item    | Yes | No | In Part | Can't Tell | N/A
a)      |     |    |         |            |
b)(i)   |     |    |         |            |
b)(ii)  |     |    |         |            |
b)(iii) |     |    |         |            |
c)(i)   |     |    |         |            |
c)(ii)  |     |    |         |            |
c)(iii) |     |    |         |            |
c)(iv)  |     |    |         |            |
c)(v)   |     |    |         |            |
Has there been a loss of heritage values as a result of the
consented activity?
Is the loss permanent or unlikely to be reversed in the future?
Overall score (circle one, from -5 to 5):
-5 (Heritage Values Significantly Diminished)    0 (No Effect)    5 (Heritage Values Significantly Enhanced)

Comments: