Preliminary Work Towards the Development of a Self-Assessed Measure of Consumer Outcome
Sarah Gordon
Professor Peter Ellis
Carmel Haggerty
Lynne Pere
Gary Platz
Kaye McLaren
Cover painting by Serena Young, Te Ata, Auckland
The Mental Health Research and Development Strategy is a partnership between the
Ministry of Health, Health Research Council of New Zealand,
and Mental Health Commission.
Published in April 2004 by the Health Research Council of New Zealand
PO Box 5541, Wellesley Street, Auckland, New Zealand
Telephone 09 379 8227, Fax 09 377 9988, Email [email protected]
This document is available on the Mental Health Research and Development Strategy
Website: http://www.mhrds.govt.nz
ISBN 0-908700-17-2
Acknowledgements
The research team, Sarah Gordon, Professor Peter Ellis, Carmel Haggerty, Lynne Pere (Ngai
Tahu, Ngati Kahungunu, Rangitane), Gary Platz, and Kaye McLaren, thank all the individuals
and organisations that provided input and support to this project. In particular, we acknowledge
the significant contributions of:
• the reference group to the project: George Anderson, Jim Burdett, Denise Caltaux, Elva Edwards, Sonja Goldsack, Chris Hansen, Graham Kirkham, Vito Malo, Jan Murphy, Colin Slade, Suzy Stevens, and Te Wera Te Kotua, who kept the project to task and ensured the consumer focus of the research was never compromised;
• Whitireia Community Polytechnic and the Wellington School of Medicine and Health Sciences, University of Otago, who supported their respective employees to be part of the research team;
• Bishop Muru Walters as Kaumātua to the project;
• Dr Te Kani Kingi who provided advice and support around the project’s responsiveness to Māori;
• all the people who helped organise and support the consultation process, most especially: Colin Slade, Jan and Neil Murphy, Elva Edwards, the Light House, Lina Samu, Whariki Consumer and Family Service (Challenge Trust), Linda Hall, Deaf Mental Health Service – Synergy Healthcare, Te Wera Te Kotua, the White House, and Vito Malo;
• all the people who attended and participated in the consultation forums;
• all those people who provided knowledge and information through personal interviews;
• all the organisations that responded to the survey;
• Cathy Holden who organised us all and dedicated hours to the transcription of the consultation recordings, and
• all those who provided us with information in response to our requests.
Abstract
Objective
To conduct preliminary work towards the development of a self-assessed measure of consumer
outcome for New Zealand.
Method and Findings
This project was carried out by a consumer-led organisation. An extensive literature search was
conducted to identify consumer views on appropriate domains of, and methods for, outcome
assessment. The results of the literature search were used to formulate desirable characteristics
of self-assessed measures of consumer outcome. A large number of self-assessed measures
were identified from the literature. Those satisfying at least some of the formulated desirable
characteristics were then examined in more detail in terms of their content, process, sensitivity
to change, psychometric properties, and prior consumer involvement and consumer opinion.
Six of these measures were selected by the researchers for review by a consumer reference
group, who found all the measures seriously wanting, but identified three they considered
acceptable for wider consultation with a number of consumer forums involving metropolitan,
urban, rural, Māori, Pacific and deaf consumers. This wider consultation supported the
reference group’s reservations about using any of the existing measures, sharing their view that
the primary aim of a self-assessed measure of consumer outcome is as a tool for individuals to
use to assess their own mental well-being. Most of the existing instruments were prepared with
a view to service evaluation. Consumers were clear it was inappropriate to attribute changes
occurring over time exclusively to service interventions, stressing the importance of personal,
relational and environmental factors in such changes. There was considerable consensus among
the various forums on the twelve broad domains they wished to see included in a consumer self-assessed measure. While there were some culture-specific domains, the majority were
universal. Deaf consumers identified particular needs. Forum participants gave detailed
feedback on aspects of measure design that were helpful and unhelpful. Consultation with
mental health providers indicated there was significant support for the introduction of a self-assessed measure of consumer outcome, particularly from those already using some measure of
this type.
Recommendations
• That a project be established for a self-assessed measure to be developed and tested by consumers in New Zealand;
• that the primary aim of the measure will be to provide a tool for individuals to assess their own mental well-being. Secondary aims of the measure should be to facilitate service monitoring and development, communication, process and systemic evaluation, and lobbying for improvement;
• that a flexible approach to the completion of a self-assessed measure of consumer outcome is needed, with some form of face-to-face communication being the preferred method of Māori and Pacific populations;
• that a procedure be established to maximise consumer safety in relation to the provision of outcomes information, particularly in relation to who can access this information and to what purpose;
• that the specific issues raised by consumers through the consultation involved with the present research be considered in the development of the measure, and
• that the measure be developed and evaluated with a view to it becoming part of the suite of outcome measures supported by the Ministry of Health.
Table of Contents
Acknowledgements .......................................................................................................................i
Abstract ........................................................................................................................................ii
Objective ...................................................................................................................................ii
Method and Findings.................................................................................................................ii
Recommendations .....................................................................................................................ii
Table of Contents........................................................................................................................iii
Introduction .................................................................................................................................1
What is mental health outcome measurement? .........................................................................1
What is the purpose of mental health outcome measurement? .................................................3
Why involve consumers in evaluating outcomes? ....................................................................5
Why involve consumers in designing and evaluating outcome measures?...............................6
Literature Review ........................................................................................................................7
Defining recovery......................................................................................................................7
What items need to be included in a measure of mental health outcome?................................8
General .................................................................................................................................8
Māori ..................................................................................................................................12
Pacific .................................................................................................................................13
Other groups within society ................................................................................................14
Domains for inclusion in a self-assessed measure of consumer outcome...............................15
The language and emphasis of measures – strengths focused or problem focused?..........15
Length of outcome measures...............................................................................................16
Explaining the purpose of outcome measurement ..............................................................16
The right not to answer .......................................................................................................16
Assistance with completing an outcome measure ...............................................................16
Other issues .............................................................................................................................16
Triangulation ......................................................................................................................17
Satisfaction with services....................................................................................................17
Psychometric testing ...........................................................................................................17
Summary .................................................................................................................................18
Method........................................................................................................................................19
Establishment of consumer reference group ...........................................................................19
Information gathering process.................................................................................................19
Literature review.................................................................................................................19
Analysis of measures...........................................................................................................20
Interviews............................................................................................................................21
Survey..................................................................................................................................22
Consultation process ...............................................................................................................22
Consumer consultation .......................................................................................................22
General consumers’ consultation meetings ................................................................................. 22
Māori consumers’ consultation hui ............................................................................................. 23
Pacific consumers’ consultation fono .......................................................................................... 24
Deaf consumers’ consultation meeting........................................................................................ 24
Mental health service provider consultation ......................................................................25
Results.........................................................................................................................................26
Comprehensive Outcome Measures........................................................................................27
Assessment of Wellness Outcome Tool ...............................................................................27
Behaviour and Symptom Identification Scale (BASIS-32) ..................................................28
Client’s Assessment of Strengths, Interests and Goals (CASIG) ........................................30
Crisis Hostel Healing Scale ................................................................................................31
Hua Oranga ........................................................................................................................32
Lotofale Evaluation Measure..............................................................................................33
Medical Outcomes Study 36 item Short-Form Scale (SF-36).............................................33
Mental Health Inventory (MHI)..........................................................................................35
Mental Health Recovery Measure ......................................................................................36
Multnomah Community Ability Scale .................................................................................37
Ohio Mental Health Consumer Outcomes System..............................................................38
Outcome Questionnaire (OQ).............................................................................................39
Personal Vision of Recovery Questionnaire (PVRQ) .........................................................40
Recovery Assessment Scale (RAS) ......................................................................................41
Single Factor Outcome Measures ...........................................................................................42
Lehman Quality of Life Interview .......................................................................................42
Quality of Life Index ...........................................................................................................43
Satisfaction Index – Mental Health.....................................................................................43
Verona Service Satisfaction Scale (VSSS) ..........................................................................44
Consideration of existing tools................................................................................................48
Consultation ............................................................................................................................57
Consumer consultation .......................................................................................................57
Value and purpose of self-assessed measures of consumer outcome .......................................... 62
Completion of measures .............................................................................................................. 63
Room to write .............................................................................................................................. 63
Assessment of Wellness Outcome Tool – consumer completed section (Appendix 5)............... 64
Crisis Hostel Healing Scale (Appendix 6)................................................................................... 68
Hua Oranga – Tāngata Whai Ora completed schedule (Appendix 7).......................................... 72
Hui ............................................................................................................................................... 75
Hua Oranga – Tāngata Whai Ora Completed Schedule .......................................................... 76
Crisis Hostel Healing Scale ..................................................................................................... 80
Assessment of Wellness Outcome Tool..................................................................................... 82
Fono............................................................................................................................................. 84
Preferences .................................................................................................................................. 84
Deaf consumers’ consultation meeting........................................................................................ 84
Mental health service provider consultation ......................................................................85
Interviews................................................................................................................................85
Survey .....................................................................................................................................87
Discussion ...................................................................................................................................88
Recommendations......................................................................................................................94
References ..................................................................................................................................95
Appendix One – Interview Questions ......................................................................................99
Appendix Two – Survey Form ...............................................................................................100
Appendix Three – Deaf Mental Health Service – Support Needs Assessment...................102
Appendix Four – Definitions of Psychometric Terms ..........................................................109
Validity..............................................................................................................................109
Reliability..........................................................................................................................109
Feasibility .........................................................................................................................110
Appendix Five – Assessment of Wellness Outcome Tool ......................................111
Appendix Six – Crisis Hostel Healing Scale ..........................................................................113
Appendix Seven – Tāngata Whai Ora Schedule of Hua Oranga ........................................118
Introduction
What is mental health outcome measurement?
The purpose of this project was to conduct preliminary work toward the development of a self-assessed measure of consumer∗ outcome for the New Zealand context. As the work progressed
it became necessary for the research team to reflect on some of the definitions and principles
that have evolved, through other work in this area, into commonly accepted precepts within the
field of mental health outcome measurement. Such reflection became necessary because many
of those precepts are simply not compatible with the literature and research in regard to self-assessed measures of consumer outcome. Hence, the introductory part of this document
re-considers some current definitions and principles of mental health outcome measurement.
Much of the work on outcome measurement has been carried out by those intimately involved
with service delivery, so it is not surprising they have focused on the impact of mental health
service interventions. Indeed, this perspective has become a defining concept in the field:
[o]utcome is defined as the visible or practical result of something, the effect or
consequence. Hence consumer outcome in the present document is narrowly defined as
what happens after someone has been a patient or consumer of the services of a health
professional or a health system, technically, the effect on a patient’s health status that is
attributable to an intervention (Andrews, Peters & Teeson, 1994, p. 8).
The Mental Health Advocacy Coalition in New Zealand (2001) similarly defined mental health
outcome as:
a change in the health of an individual or group of individuals which is attributable to an
intervention or series of interventions (p. 3).
These definitions presume to attribute all change to specific mental health interventions, when a
much wider range of influences can lead to changes in mental well-being. For example,
narratives of New Zealand consumers highlight a variety of factors that both helped and
hindered mental well-being. While these include the positive and/or negative aspects of the
mental health system, many other influences, such as support from other consumers and
extended family/whānau, personal strength, re-discovering and re-claiming cultural identity,
spiritual beliefs and faith, were reported as being as, or more, important to recovery (Fenton &
Te Kotua, 2000; Lapsley, Nikora & Black, 2002; Malo, 2000).
This perspective is similar to that of service providers and consumers at an American
community mental health centre who were asked to identify factors likely to lead to positive
changes in mental well-being. Service providers saw the mental health centre as the main
positive influence, while consumers included friends and families as well (Walsh, 1999).
Another survey of consumers provides an even broader list of positive factors in recovery from
mental illness, with the most important seven identified as:
∗ As directed by the consumer reference group, the term ‘consumer’ has been used throughout this document (apart from where a different term is contained within a reference directly quoted from another publication). In relation to the present project, consumer should be interpreted broadly as people with experience of mental illness. It is inclusive of Tāngata Whai Ora. The term ‘Tāngata Whai Ora’, when used by itself, refers specifically to Māori who experience mental illness. Advice from Te Taura Whiri i te Reo Māori is that it translates as ‘people in search of well-being’.
• the process of coming to terms with the disorder;
• activities that were helpful;
• environmental factors;
• medication;
• aspects of themselves that were helpful;
• their network, and
• hospitalisation (Tooth, Kalyanansundaram & Glover, 1997).
Clearly, some of these factors were due to formal intervention, such as medication and
hospitalisation. Others were not, particularly aspects of the self and people’s networks, such as
having friends who affirmed them and their experiences regardless of the illness. The most
frequently reported factor was the person’s own determination to get better and manage their
illness.
The Well-Being Project, concerned with defining and exploring factors promoting or deterring
the well-being of people with experience of mental illness, found consumers favoured the
following coping and help-seeking strategies:
1. call or go to see a mental health professional (62%);
2. relax, meditate, take walks or a hot bath (54%);
3. eat (52%);
4. call or see friends (52%), and
5. write down thoughts or talk the problem out (50%) (Campbell & Schraiber, 1989).
These findings from the consumer perspective are congruent with clinically based research
seeking to identify the contributing factors to clinical outcomes, in which the particular
intervention (be it medication or psychotherapy) accounted, at best, for no more than 15% of
successful outcome (Asay & Lambert, 1999). In fact, the most potent factors in successful
outcomes are:
• the resources of the client and outside events (40% of positive change);
• the client’s perception of the therapeutic relationship or alliance (30% of positive change), and
• the client’s hopes and expectancy of change (15% of positive change) (Asay & Lambert, 1999).
The current definitions of mental health outcome measurement make the invalid inference that
any change is due to an intervention if someone has been a consumer of mental health services.
It is clear the factors that aid mental well-being are wider than formal services. In terms of
mental health outcome measurement, change is the sum effect of many complex factors, of
which mental health service intervention will be only one. Scores from outcome measures can
at best tell us that there has been a change, but not what has caused this change. Hence, any
definition of mental health outcome measurement must stand alone from considerations of
causal attribution.
There are a number of other key points that need to be highlighted in relation to the definition of
mental health outcome measurement:
• The term ‘outcome’ suggests an end point. However, one definitive end point is not necessarily an appropriate concept within the field of mental health, particularly given that many consumers describe recovery as a ‘journey’; a different term, one inferring periodic reflection upon progress, would thus be more appropriate.
• The measurement of mental health outcome involves a process. That process is facilitated by the use of outcome measurement tools that can provide a picture of the situation at any one time or period of time. It is through the comparison of results from outcome measurement tools (administered at different times through an outcome measurement process) that change can be identified.
• Often the word ‘change’ is used within literature about mental health outcome measurement. Obviously the concepts of improvement and deterioration are encapsulated within this term. However, it is important to highlight that ‘no change’ is also a valid result when measuring mental health outcome and could be more appropriately expressed as ‘previous state maintained’. As results from outcome measures may become a basis for resource allocation, it is essential that a distinction be drawn between no change due to inadequate service provision or inadequate quality of service provision, and credit for supporting a consumer to maintain their quality of life despite the effects of illness.
• It is generally understood within the field of outcome measurement that a major potential source of error is to assume that what appears to be a significant change in individual well-being is real, when it is actually a normally occurring fluctuation that is not likely to last (regression to the mean). Any person is likely to have ups and downs within their ‘usual’ state of being. In addition, it is possible for people to experience spontaneous remissions where they become much better. Both these types of unpredictable but normal changes need to be ruled out before it is possible to assume a real change has taken place. This can be addressed through more frequent measurement, to reduce the risk of such fluctuations going undetected (Sonnanberg, 1996), or by seeking to assess current status over a period rather than just the day of the evaluation (see the illustrative sketch following this list). Opinion varies about how often to measure outcome. Five measurement points have been suggested in New Zealand research on Māori mental health outcomes: assessment, inpatient treatment, outpatient treatment, community care, and community support (or discharge) (Kingi & Durie, 2000). Another suggestion is that long-term clients should be assessed more regularly and short-term clients at just two points – intake and close to termination of treatment (Ohio Department of Mental Health, 2000). Measuring outcome at just two points could lead to higher degrees of error for short-term clients. In the Ohio Consumer Outcome System, longer term clients (those with severe mental illness) complete the outcome measure at intake, after 6 months, at 12 months, and then annually until treatment ends. The points at which outcome is measured are not as important as the fact that they are measured more than twice. Some consumers have suggested that outcome should be measured through self-assessment on a weekly basis to give feedback to consumers and service providers about what is working and where they need to go next with the treatment plan (Graham, Coombs, Buckingham, Eagar, Trauer & Callaly, 2001).
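The following minimal Python sketch is purely illustrative and uses hypothetical numbers rather than data from any of the studies cited. It shows why comparing only two measurement points can mistake normal fluctuation for real change, whereas averaging more frequent measurements over a period gives a steadier estimate.

```python
# Illustrative only: a minimal simulation (not from the report) of why two
# measurement points can mistake normal fluctuation for real change.
import random
import statistics

random.seed(1)

TRUE_LEVEL = 50          # hypothetical stable well-being score (no real change)
FLUCTUATION_SD = 8       # assumed day-to-day variation around that level

def observed_score():
    """One self-assessed score: the stable level plus random fluctuation."""
    return random.gauss(TRUE_LEVEL, FLUCTUATION_SD)

# Two-point comparison (e.g. intake and discharge only).
two_point_change = observed_score() - observed_score()

# More frequent measurement: average several scores in each period.
baseline = statistics.mean(observed_score() for _ in range(6))
follow_up = statistics.mean(observed_score() for _ in range(6))
averaged_change = follow_up - baseline

print(f"Apparent change from two single scores: {two_point_change:+.1f}")
print(f"Apparent change from averaged scores:   {averaged_change:+.1f}")
# With no real change present, the two-point figure is often sizeable purely
# by chance, while the averaged figure tends to stay closer to zero.
```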
What is the purpose of mental health outcome measurement?
It is not surprising, given the previous discussion, that the most often cited purpose of mental
health outcome measurement is focused on service improvements, in particular, increasing
service effectiveness and efficiency (Kingi & Durie, 2000; McCarthy, 1995; Stedman,
Yellowlees, Mellsop, Clarke & Drake, 1997; Walsh, 1999). We have argued above that
changes in outcomes reflect far more than the impact of service interventions alone. This wider
view includes an explicitly consumer-focused agenda.
We contend that the primary purpose of mental health outcome measurement, in line with
general principles of best practice, should be in its direct potential benefits to the consumer,
particularly in providing them with an additional tool to support participation in their mental
health care. Participating in making decisions regarding one’s mental health is the common
thread facilitating recovery amongst service users who live well in the presence or absence of
symptoms associated with mental illness (Read, 2003). In addition, relationship factors are said
to account for 30% of positive mental health outcomes (Asay & Lambert, 1999), and service
users suggest mutual respect and sharing of knowledge, opinions and abilities are the key
contributing aspects to effective mental health partnerships (Read, 2003).
Other research has identified that consumers do not want outcome measurement to become
simply part of a data-collecting exercise by agencies. Australian consumers were concerned
about whether outcome measurement data would be used to improve the dialogue between
consumers and service providers – an outcome they saw as positive, or to ration mental health
services – an outcome they saw as undesirable (Graham et al., 2001). Consumers wanted
dialogue with a ‘maximum’ exchange of information. Consumers also wanted this dialogue to
lead to follow-through, in which service providers offered help in areas in which consumers had
indicated they were having difficulty (Graham et al., 2001). In Ohio, consumers’ attitudes to
outcome measurement were more negative if their outcomes information was not used
practically within the therapeutic alliance (Ohio Department of Mental Health, 2000).
Despite individual reflection and communication being embraced as the primary aims of mental
health outcome measurement, it is acknowledged that outcome measurement data can also serve
a valuable purpose at the organisational, regional and national levels in terms of decision-making and continuous development in relation to mental health service provision. However,
care is required to ensure due recognition is given to the myriad of potential factors that could
have contributed to any observed change.
In terms of purist outcome evaluation methodology, the ideal approach would be to randomly assign matched clients to different types of services, controlling for other aspects of their lives that might make a difference, and then follow them to find out which treatment was associated with the best outcomes. But this approach is neither practically nor ethically feasible.
Sonnanberg (1996) suggests the following strategies to minimise errors in interpreting
information about change in a person or group:
• use multiple assessment methods (i.e. interview, as well as pen and paper measures);
• use multidimensional methods of assessment;
• examine the perspective of all who are affected by the treatment (i.e. consumer, service providers, family/whānau/significant others);
• gather opinions concerning change and the meaning of change scores. Asking the key people what they think changes are due to may provide clues as to what factors are making a difference. The difficulty here is that it is possible for all parties to be unaware of what has made the difference, and to attribute changes to factors that are not really having an impact, and
• investigate other factors in the consumer’s life, other than treatment, that may be contributing to change. Outcome measures could specifically ask consumers what other factors they thought made a difference to their well-being.
In addition to these strategies, Sansoni (1995) suggests:
• collecting data from many clients over a significant period of time and aggregating it to be able to detect meaningful trends and what they might be due to (see the sketch following this list), and
• carrying out other studies to relate outcomes to various factors that might be at play.
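The minimal Python sketch below is not drawn from Sansoni’s work; it simply illustrates the aggregation point with made-up numbers. Each individual score is dominated by random fluctuation, yet the mean score across many clients at each time point reveals a small underlying trend.

```python
# Illustrative only: aggregating scores from many clients over time makes a
# small trend visible that is hidden by individual fluctuation.
import random
import statistics

random.seed(2)

N_CLIENTS = 200
TIME_POINTS = 4
TREND_PER_POINT = 1.0    # hypothetical small average improvement per period
NOISE_SD = 8             # individual fluctuation swamps the trend per person

# scores[t] holds every client's score at time point t
scores = [
    [random.gauss(50 + TREND_PER_POINT * t, NOISE_SD) for _ in range(N_CLIENTS)]
    for t in range(TIME_POINTS)
]

for t, cohort in enumerate(scores):
    print(f"time point {t}: mean score {statistics.mean(cohort):.1f}")
# Any single client's four scores look like noise, but the aggregated means
# trend upward, revealing the underlying pattern.
```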
Why involve consumers in evaluating outcomes?
Historically, consumers’ views about their illnesses have been seen as likely to be distorted and
unreliable. Consumers have been, to a large extent, defined by their experience of
psychological symptoms, and some mental health professionals have assumed that:
testing patients on their conceptualizations of illness would therefore be a futile exercise,
since their responses are assumed to be unreliable or irrelevant (Weinstein, 1972, p. 38).
This has certainly had an impact on outcome measurement. Kent and Read (1998) noted that:
Therapists often regard consumers as unable to adequately judge the treatments they are
given (Lebow, 1982). The evaluation of treatment is … often … couched in terms defined
by the therapist, and whether the goals of treatment are achieved is determined by the
therapist (p. 297).
Evidence that shows consumers are capable of reporting reliably on their symptoms,
functioning, and the treatment they are receiving (Atkinson, Zibin & Chuang, 1997; Horenstein
et al., 1973) has led to a change in attitudes and increased involvement of consumers in
treatment and evaluation. Stedman et al., (1997) noted that:
it is now widely recognised that the assessment of treatment outcomes should involve all
relevant stakeholders. It is an accepted philosophy that consumers, and where appropriate
their carers, should be involved in all stages of treatment decision-making including
assessment and review (p. 16).
In addition, there is mounting evidence that clinicians are not necessarily as objective as they
may assume. In research on the CASIG (Client’s Assessment of Strengths, Interests and Goals)
measure, relatively low levels of agreement were found between staff and consumer
assessments, particularly with regard to symptoms and side effects of medication. Consumers
assessed both as being more of a problem than staff did (Wallace, Lecomte, Wilde & Liberman,
2001). They noted that this might be because these scales assess:
interviewee’s subjective reactions and feelings, and staff might not be privy to these private,
personal and symptomatic responses (p. 11).
Their analysis did not show a bias by consumers to portray themselves as healthier than they
appeared to others. Rather, Wallace et al., (2001) noted:
there are no guarantees that staff ratings are particularly accurate (p. 12).
Another study found that clinicians who worked directly with clients had quite low levels of
agreement with clients about their symptoms and functioning. In contrast, clinicians who made
independent judgements were much more likely to agree with clients about their perceived level
of functioning and symptoms (Horenstein, Houston & Holmes, 1973). Clinicians who worked
with clients tended to be less accurate in their ‘before and after’ judgements of symptoms and
functioning than independent clinicians. They also saw clients as having improved much more
than clients and independent clinicians did.
One issue of concern is that where there is a low level of agreement between service provider
and consumer assessments of outcome, this is often assumed to be due to lack of insight or to
some other aspect of consumer functioning. In a study of HoNOSCA, an outcome measurement
tool for adolescents, there was some disagreement between scores for clinicians and in-patient
consumers, with those with psychoses seeing themselves as having fewer problems and those with
emotional disorders seeing themselves as having more than clinicians judged (Gowers, Levine,
Bailey-Rodgers, Shore & Burhouse, 2002). In discussing the weak correlation between
consumer and clinician scores, the authors concluded it was likely due to more objective ratings
by clinicians. This automatic assumption of greater clinician objectivity is particularly worrying
given the mounting evidence to the contrary.
This discussion highlights the limitations of ‘objective’ evaluation of both subjective
experience, and the interpretation of so-called objective representations of subjective
experiences, such as behaviour. A person who is isolating themselves from others, for example,
may be struggling with personal troubles or may merely dislike the range of company available.
Why involve consumers in designing and evaluating outcome measures?
Having established that consumers can reliably be involved in individual outcome measurement,
it is timely to look at the value of consumer involvement in designing and evaluating outcome
measures.
Kent and Read (1998) found that in the New Zealand context:
those agencies that do perform evaluation surveys often do not involve the consumers
themselves in the design or administration of the survey (Windle & Paschall, 1981), and
consequently, issues that may be of importance to consumers are omitted (p. 297).
Stedman et al., (1997) suggest that because information from assessments of well-being will be
used to provide feedback to policy makers and planners, who will in turn determine the shape of
future services, measures that are most relevant to the service recipients must be used.
The easiest and most obvious way to make measures relevant to consumers is to involve
consumers in designing them. Internationally, there are increasing moves to involve consumers
in mental health outcome work (Allott & Loganathan, 2002; Ohio Department of Mental
Health, 2000; Onken, Dumont, Dornan, Ralph & Ridgway, 2002).
In addition to relevancy, Prager (1980) found that a client-developed measure can also affect validity. Prager convened a group of 12 consumers and together they developed the client-developed measure (cdm) of well-being. This was valid and internally consistent, had greater stability over a 3-month period than a therapist-report evaluation system (Problem Statement and Treatment Goals – PSTG), and was found to have greater predictive validity with regard to client tenure and service utilisation than the PSTG (Prager, 1980).
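The psychometric properties referred to here can be made concrete with a small illustrative sketch. The Python below computes Cronbach’s alpha (internal consistency) and a test-retest correlation (stability) for entirely made-up ratings; neither the items nor the numbers come from Prager’s measure or the PSTG.

```python
# Illustrative only: hedged sketch of internal consistency and stability over
# time, using hypothetical ratings rather than any published data.
import statistics

def cronbach_alpha(item_scores):
    """item_scores: list of items, each a list of one score per respondent."""
    k = len(item_scores)
    totals = [sum(person) for person in zip(*item_scores)]
    item_var_sum = sum(statistics.variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / statistics.variance(totals))

def pearson_r(x, y):
    """Test-retest stability: correlation between two administrations."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical ratings: 3 items answered by 6 respondents (rows = items).
items = [
    [4, 3, 5, 2, 4, 3],
    [4, 2, 5, 2, 5, 3],
    [3, 3, 4, 2, 4, 2],
]
totals_time1 = [sum(person) for person in zip(*items)]
totals_time2 = [11, 7, 14, 6, 12, 9]   # made-up re-test scores three months later

print(f"Internal consistency (Cronbach's alpha): {cronbach_alpha(items):.2f}")
print(f"Three-month stability (Pearson r):       {pearson_r(totals_time1, totals_time2):.2f}")
```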
It would seem that the best source of information on what actually constitutes a good outcome,
and how one knows it has occurred, is likely to come from the people who have experienced it.
For this reason the present project focused on consumer conceptualisations of mental illness and
mental well-being. The research team determined that the best, and most accessible, source of
such information would be through the literature on recovery.
Literature Review
Defining recovery
Recovery usually refers to good mental health outcomes as defined by consumers themselves.
Recovery goes beyond the concepts of ‘cure’ or ‘remission’ in two major ways. First, it
considers changes in many more areas of life than just psychological symptoms. Second, it
does not necessarily refer only to the absence of such symptoms. Instead, it also encapsulates
the concept of living well in the presence of symptoms of mental illness.
For example, the Ohio Mental Health Consumer Outcomes Initiative (Ohio Department of
Mental Health, 2000) defines recovery as:
a personal process of overcoming the negative impact of a psychiatric disability despite its
continued presence (p. 5).
Another definition that focuses on a rewarding life, despite symptoms, describes recovery as:
persons with severe mental illness living a satisfying life within the constraints of one’s
mental illness (Corrigan, Giffort, Rashid, Leary & Okeke, 1999, p. 231).
William Anthony, one of the pioneers in developing and defining the concept of recovery, wrote
that:
a person with mental illness can recover even though the illness is not ‘cured’…[Recovery]
is a way of living a satisfying, hopeful, and contributing life even with the limitations
caused by illness. Recovery involves the development of new meaning and purpose in one’s
life as one grows beyond the catastrophic effects of mental illness (Anthony, cited in Allott
& Loganathan, 2002, p. 4).
These definitions are profoundly hopeful in that they give a sense that all people can live a rich
and rewarding life despite ongoing symptoms. However, they have one weakness in that they
give the impression that recovery does not include the possibility of reduction or absence of
symptoms. The reality is that many people with mental illness experience periods with few or
no symptoms.
For this reason, the definition of recovery promoted by the New Zealand Mental Health
Commission seems very apt, as it applies whether or not a person is experiencing symptoms.
According to this definition:
recovery is happening when people can live well in the presence or absence of their mental
illness, and the many losses that may come in its wake, such as isolation, poverty,
unemployment and discrimination. Recovery does not always mean that people will return
to full health or retrieve all their losses, but it does mean that people can live well in spite
of them (Mental Health Commission, 1998a, p. 1).
This definition allows for both the possibility of a rewarding lifestyle even when symptoms are
present, and for the possibility that symptoms can be reduced or absent, without making either
option better than the other.
Recovery, as defined above, is the outcome many consumers aim for, whether they achieve that
goal through the support of formal services, family and friends, other consumers or through self-help. This concept of wellness has implications for what should be measured by mental health
outcome measures.
What items need to be included in a measure of mental health outcome?
General
Recent years have seen increasing efforts to elicit the views of mental health consumers on
indicators of recovery that they think should be covered in outcome measures. Australian
consumers identified seven main areas they thought should be included in a mental health
outcome measurement tool:
• quality of life (satisfaction about life circumstances in the physical, cognitive, emotional, social and financial domains and feelings about one’s life; includes housing, finance, meaningful work, recreation, spirituality, empowerment, happiness, control over life);
• day-to-day functioning to maintain independence in the community (focuses on actual abilities and difficulties rather than satisfaction in areas such as keeping house, maintaining appearance, managing money, social activities, work, managing medication);
• physical health and health risks (includes level of medication and side effects, sleeping and eating habits, energy levels but not suicide risk);
• relationships (perception of number and quality of relationships rather than relationship skills, includes family and friends, social networks and activities, levels of social support);
• illness symptoms (includes anxiety levels, memory, concentration, ability to break habits and cycles, depression, nightmares and thought disturbances but not suicidal tendencies);
• coping with and recovering from illness (includes dealing with the impact of the illness on one’s life and with stigma, coping with return to work, recognising early warning signs and taking action, strategies to break habits and cycles, minimising detrimental effects of treatment and care, adjusting to medication, coping with stress and change, developing a recovery plan, initiating action), and
• satisfaction with service quality (perceptions of the quality of care received, including the help one has received, information about mental illness, medication and consumer rights, staff attitudes, being treated with dignity, feeling listened to by staff and able to express one’s views, acceptance of self-advocacy) (Graham et al., 2001).
Consumers in Ohio and Maine rated indicators of recovery in the following order from most
important (1) to least important (10):
1. ability to have hope;
2. trusting my own thoughts;
3. enjoying the environment;
4. feeling alert and alive;
5. increased self-esteem;
6. knowing I have a tomorrow;
7. working with and relating to others;
8. increased spirituality;
9. having a job, and
10. having the ability to work (Ralph, Lambric & Steele; Ralph & Lambert, cited in Allott & Loganathan, 2002).
Common themes identified in an analysis of four consumer recovery narratives suggest a similar
set of indicators:
1. reawakening of hope after despair;
2. breaking through denial and achieving understanding and acceptance;
3. moving from withdrawal to engagement and active participation in life;
4. active coping rather than passive adjustment;
5. no longer viewing oneself primarily as a mental patient and reclaiming a positive sense of self;
6. journey from alienation to purpose, and
7. support and partnership (Ridgway, 2001).
A review of recovery literature identified four dimensions in personal accounts:
• internal factors – factors within the consumer, such as awakening, insight and determination to recover;
• self-managed care – an extension of the internal factors where consumers describe how they manage their own mental health and how they cope with difficulties and barriers;
• external factors – interconnectedness with others, the support provided by family, friends and professionals, the presence of people who believe the consumers can cope with and recover from their mental illness, and
• empowerment – a combination of internal and external factors, where internal strength is combined with interconnectedness to provide the self-help, advocacy and caring about what happens to self and others (Ralph, 2000).
Another study focused on five domains identified as playing a critical role in recovery:
1. resources/basic needs;
2. choices/self-determination;
3. independence/sovereignty;
4. interdependence/connectedness, and
5. hope (Onken et al., 2002).
Across cultures, concepts of wellness differ. Different cultures have different perceptions of
what mental health and illness are, and what brings both about. When a measure is based on the concept of health and recovery prevalent in one culture, it may not measure aspects of health
and recovery that are important to people from a different culture. A number of perceived
issues or difficulties with measuring mental health outcomes across cultures have been
identified, particularly through the work undertaken in Ohio (Ohio Department of Mental
Health, n.d.; Bridgman, Dyall, Bidois, Gurney, Hawira, Tangitu, Huata, Webster & Heron,
2000; Malo, 2000; Kingi & Durie, 2000). One of the most frequently cited issues is that of
cultural differences in relation to ‘empowerment’. Empowerment in the recovery literature
usually refers to concepts such as a sense of choice, personal power over one’s life,
assertiveness, and confidence in dealing with authority figures such as mental health
professionals (Rogers, Chamberlin, Ellison & Crean, 1997). But in some cultures these
concepts are not considered as positive as in others, particularly the aspect relating to
questioning authority figures (Ohio Department of Mental Health, n.d.). Research in Ohio, on a
measure that included a subscale on empowerment, found that:
the Empowerment Scale’s emphasis on individual liberty and self-expression is directly
contradicted by cultural values that emphasize obligation to and reliance on family and
community [for] Amish, Hispanic, [and] African-American [cultures] (Ohio Department of
Mental Health, n.d., p. 129).
This has also been raised as an issue with Pacific peoples, particularly those born in the Pacific
Islands (Malo, 2000) who have a firmer grasp on their respective languages and cultural
traditions, and grow up with a strong respect for those who hold positions of power. In contrast,
New Zealand born Pacific people (including most young Pacific people) are more likely to
question authority (Malo, 2000).
New Zealand research has established that control and power to make decisions are seen as
positive aims and part of mental wellness by Māori consumers (Tāngata Whai Ora), even within
the framework of whānau and respect for authority (Bridgman et al., 2000). This study
identified being in control, having the power, and exercising choice as important aspects of
wellness to both Māori and mainstream consumers. These are all integral to the Making
Decisions Empowerment Scale, which forms the Empowerment sub-scale of the Ohio
Consumer Outcomes Measure Adult Form A (Ohio Department of Mental Health, n.d.). The
Empowerment Scale also includes a significant number of items about standing up to authority
that were not specifically mentioned in the New Zealand research.
A related issue is the nature of questions or statements included in outcome measures. In the
Ohio research, some consumers and families did not want personal and sensitive questions at
the beginning of the survey (Ohio Department of Mental Health, n.d.). In addition, Amish,
Hispanic and African-American respondents saw questions about the number of friends a person
had as under-emphasising the value and importance of the relationships with family and
friends; they saw the emphasis on quantity rather than quality. The same groups also thought
that questions about ‘free time’ did not recognise the importance of meeting obligations to
family and community. In their view, having free time implied failing to meet these obligations.
This is similar to a Tongan conceptualisation of wellness, which ties it into a framework of
spirituality, land, extended family, and mutual obligations (Bridgman et al., 2000). In a culture
with this concept of well-being, having too much free time may well be considered a sign of
unwellness rather than wellness.
It is also important to consider the accessibility and appropriateness of language in outcome
measures. Respondents in the Ohio study suggested that the measures may need to use ‘street
language’ for young people or some cultures (Ohio Department of Mental Health, n.d.), and this
may also be true in New Zealand. They also indicated that some colloquial terms used in the
measure were out of date. In the New Zealand context, Kingi and Durie (2000) have
commented that questions in outcome measures:
may refer to issues or use language that the respondents are unfamiliar with. This is
particularly so for schedules or tools which have been developed within other cultural
paradigms (p. 27).
A further issue that came out of the research was the omission from measures of certain domains that were considered integral to specific cultures (Ohio Department of Mental
Health, n.d.). For example, issues of spirituality, religion and the role of fate were seen as major
omissions by African-American and Hispanic cultures in the Ohio study (Ohio Department of
Mental Health, n.d.).
There are important differences between cultures, and cultural sub-groups, in the acceptability
of different ways of collecting data. While Australians of European origin and Amish appeared
to have no reservations about either paper and pencil or computerised forms, Hispanics in the
US preferred face-to-face interviews to paper and pencil, and disliked computer forms, and
rural/Appalachian consumers saw interactive voice response interview as the most appropriate
for their isolated situations (Stedman et al., 1997; Ohio Department of Mental Health, n.d.).
Both African-American and rural respondents saw the ideal as having a choice of options for
completion of outcome measures made available (Ohio Department of Mental Health, n.d.).
New Zealand research with both mainstream and Māori consumers identified that transpersonal
processes (discussion or focus groups):
[a]llow[ed] people to connect to a shared experience, increased the report of negative
experiences and evaluations, and better demonstrated the difficulties that people faced
(Bridgman et al., 2000, p. 11).
Paper and pen tests can be problematic for those who are not literate, particularly in cultures
where this is considered shameful. It is also important that the measure is not too long,
particularly for people with significant disabilities (Ohio Department of Mental Health, n.d.).
Consumer and family relationships with staff are other important factors in completing outcome
measures. Cultural beliefs about respecting authority mean some people may find it difficult to
contradict the views of provider staff (Ohio Department of Mental Health, n.d.). In the
American research, people from three cultures (rural/Appalachian, African-American and
Hispanic) said they preferred surveys of satisfaction with services to be done separately from
outcome surveys and to be anonymous (Ohio Department of Mental Health, n.d.). The
researchers implied this was because satisfaction surveys by their nature tend to reveal any
problems consumers have with agencies.
Concerns have been expressed by people from minority cultures, both in New Zealand and
overseas, that data will be interpreted with cultural biases in the importance ascribed to different
needs or treatment priorities (Ohio Department of Mental Health, n.d.; Mental Health
Commission, 1998b).
It is important to acknowledge cultural-specific norms in relation to outcome measures. For
example, Hispanic, African-American and rural men will tend to under-report symptoms
because of their need to appear strong and reliable, whereas Hispanic and Amish women will
tend to score low on empowerment as a reflection of the value they put on deference to male
authority in their culture (Ohio Department of Mental Health, n.d.).
The important question for the purposes of this literature review is whether one measure of
mental health outcome can be used validly and reliably across all cultures in New Zealand.
From the preceding section it is clear that outcome measures should ideally be based on the
values, attitudes, concepts and language of the people for whom such measures are intended.
This is obviously an easier task when the target population is mono-cultural. In New Zealand it
is imperative that any outcome measurement tool is valid and reliable across a number of
cultures, or is specifically targeted at a particular cultural population. Given the political stand
on Māori mental health as a health priority, and recognition of Treaty obligations and rights, it is
particularly pertinent that any outcome measurement tool is valid, reliable and appropriate for
Māori. In respect of the present project, it is intended that the work relates to consumers within
the overall New Zealand (multi-cultural) context.
Māori
Research specific to New Zealand consumers, both Māori and mainstream, identified a complex
list of indicators of recovery. Bridgman et al., (2000) came up with a list of 22 themes of
wellness when they asked:
What does being “mentally well” mean to you? and
What does “mentally unwell” mean for you?
The first theme, headed ‘holistic’, was identified and presented as the over-arching theme,
referring to:
balance, harmony, and the integration of the physical, mental and spiritual (Bridgman et
al., 2000, p. 7).
The other 21 themes were then related to a Māori model of well-being, entitled Te Whare Tapa
Whā. This model, also known as the four cornerstones of Māori health, was developed by
Durie two decades ago as one of the first Māori models of well-being. It reflects Māori world
views and their historical context and has been widely acknowledged and generally accepted by
Māori (Durie, 1998):
Taha Hinengaro (mental dimension, how people feel or want to feel)
Quality of life (improving aspects of your life), being purposeful (having goals,
motivation), being in control (exercising choice), being normal (feeling human, like I once
was), feeling content (calm, free of depression), not feeling anxious or stressed.
Taha Tinana (physical dimension, behavioural, externally observed factors of
un/wellness)
Being consistent in thought, managing (coping, living independently), basic coping (e.g.
getting out of bed, eating), not using/abusing alcohol or drugs, getting off medication or
getting effective medication, not having major or psychotic symptoms.
Taha Wairua (spiritual, cultural dimension)
Maintaining spiritual strength, trust (being able to depend on someone, intimate
relationships), Māoriness (drawing strength from being Māori, using Māori conceptions of
illness and treatment), being socially acceptable (well-behaved, contributing to society); not
being stigmatised or labelled.
Taha Whānau (family, social-economic dimension)
Work (being able to work, having a supportive work environment), relationships (being in
contact with others, having whānau/family supports, friends), safety (feeling safe, not being
affected by others), mental health (staying out of mental health system, hospital/unit)
(Bridgman et al., 2000).
Bridgman et al., (2000) then analysed the consultative material to identify the differences
between Māori and mainstream views of mental well-being. They found that:
the only area where there were significant and consistent differences between Māori and
Mainstream related to access to culturally appropriate services, with Māori service users
and caregivers wanting staff to be knowledgeable in the area of the Treaty of Waitangi and
Māori language and customs, and linking these (in the case of service users) to quality of
life or services judgements (p. 6).
Pacific
There are many cultures included under the broad umbrella of ‘Pacific’ – including Niuean,
Cook Island, Samoan, Tongan, Tokelauan and Fijian – each with their own specific views of
mental health and recovery. While Pacific cultures differ in their views of mental health, some
common threads run through the beliefs of each culture (Mental Health Commission, 2001).
These have been gathered together by Fuimaono Karl Pulotu-Endemann as a generic Pacific
model called the ‘Fonofale model of health’ (Ministry of Health, 1997). This model is based on
the metaphor of a house, with the following components:
• the roof, composed of cultural values and beliefs that provide a shelter for life;
• the foundation, composed of the family, and
• the four pou, or posts that connect culture and the family, and interact with each other. These are: spiritual, physical, mental and miscellaneous things such as gender, age, etc.
The Fale, or house, exists in a surrounding composed of:
• physical environment;
• time in history, and
• context – where/what/how and what it means to the people experiencing it (Ministry of Health, 1997).
Within this model, recovery involves all aspects of a person’s life being in harmony – spiritual,
physical, emotional and family (Mental Health Commission, 2001). Most critical to recovery is
the support of family and community.
The traditional approach to treatment of mental or spiritual disturbance in Pacific cultures is to
focus on the whole family rather than solely on the afflicted individual (Bathgate & Pulotu-Endemann, 1997). It focuses on the head of the family, or traditional healer, and involves:
rites to placate the spirit and restore the individual and family group as a whole to a
neutral state with their spiritual environment (p. 106).
The one example of a consumer-outcome measure designed specifically for Pacific people
found in this research highlights the type of issues considered important by Pacific people in
mental health treatment (Nonu-Reid, Lui, Erik, Pulotu-Endemann & Bridgman, 2000).
Developed through Lotofale, a Pacific Peoples Mental Health Service based in Auckland (New
Zealand), it was designed to measure how satisfied consumers were with the service, and what
impact it had on their lives. The measure is based on the Fonofale model of health, and covers
the following areas:
• family issues, including the experiences of belonging, love, honesty, respect, trust, safety and forgiveness that clients and families have experienced at the service. This includes how honest they feel staff have been, how informative and respectful, including respecting the status of elders, and how much staff have kept their word;
• the respect shown for the family by Lotofale and other mental health services, the degree to which the service was able to work within the family’s culture, and the degree to which staff saw things from the family’s perspective;
• the degree to which the service gave information and support that helped clients and families adapt to New Zealand culture and traditions, including concepts of mental illness, dealing with racism and isolation, pressure from church and family/church donation obligations;
• the extent to which the resources of the family and extended family have been drawn on;
• cultural outcomes, including developing a better sense of identity as a Pacific Island person, being better able to meet cultural obligations and responsibilities, having increased access to a range of Pacific Island cultural activities and processes (e.g. traditional healing) and giving information and access to culturally appropriate mental health and drug and alcohol services;
• spiritual outcomes, such as better understanding of traditional and Christian beliefs and practices (Christianity is a very prominent part of life in Pacific cultures), and increasing access to spiritual practices and processes, whether traditional or Christian;
• physical outcomes, including fewer physical symptoms and signs of mental illness and of physical illness (e.g. pain, sleep, medication), fewer dysfunctional and aggressive behaviours (e.g. drug and alcohol abuse, anger management), better coping with stress, improved skills for daily living and employment (e.g. budgeting, transport, paid and unpaid work, training), and meeting basic survival needs such as money, housing, transport and child support;
• mental outcomes, both increasing positive experiences (self-control, feeling loved by self and others, independence, motivation) and reducing negative experiences (e.g. depression, suicidal feelings, anxiety), and
• other outcomes, including issues related to being born in New Zealand rather than the Islands, feeling more comfortable with sexual identity, gender and age role, and receiving appropriate support and education regarding sexual issues (e.g. safe sex, contraception, sexual abuse) (Nonu-Reid et al., 2000).
Other groups within society
There are many other groups within society, some based on minority ethnic affiliation, some on
gender, sexual preference, religion, and others again on disability. It is not possible to consider
all these in this report, but a decision was made to consider a population defined by one aspect
of physical disability, deafness. This group is often overlooked in consideration of mental
health needs and outcome measurement. Yet members of the deaf community in New Zealand
are particularly exposed to factors considered to predispose to mental illness (Bridgman, 2003).
Accessing appropriate services is made more difficult by the lack of general understanding
about the unique culture of the deaf and how this affects deaf people with experience of mental
illness. This had been little studied in New Zealand until the work of Geoff Bridgman (2003)
whose findings emphasised the important difference for the deaf community between the
description and recognition of mental illness:
[d]escription is the harder task and, while sign language can be richly expressive of the
behaviours and emotions of mental illness, the actual vocabulary around mental illness is
small. For example, words like paranoid, manic, phobia and demented are not in the New
Zealand sign language dictionary (Bridgman, 2003, p. 10).
In terms of how deaf people with experience of mental illness can best be supported, Bridgman
(2003) concluded that the most pertinent consideration is that any support is delivered in a manner respectful of
deaf language and culture. When asked about the support and professional help deaf people
with experience of mental illness require, the overwhelming theme was the need for support
related to the everyday challenge of living in a hearing world. Specifically, deaf people
identified that support in the following areas is of most value to them:
• education/information (learning, training, workshops, advice);
• friends/family (socialisation, support group/network);
• diagnosis focused (drugs, alcohol, drink, depression, sexual abuse, anger);
• job/work;
• funding, financial, money;
• leisure (trip, outing, hobby, activity, holiday);
• transport;
• accommodation/residential, and
• technology (hearing aid/fax, relay, teletext, alarm, TTY (teletypewriter), e-mail/internet) (Bridgman, 2003).
Domains for inclusion in a self-assessed measure of consumer outcome
Based on the comprehensive literature review, the following list has been compiled as the sum of domains that consumers (across cultures) have identified as important to their mental well-being:
• relationships, trust, connectedness, taha wairua/whānau, whānau/family support, social support, interdependence;
• day-to-day functioning, coping and managing, including work (having the ability to work), taha tinana;
• connection to one’s culture, cultural identity, drawing strength from one’s culture, taha wairua;
• physical health and health risks, taha tinana, includes alcohol and drug use, side-effects of medications, sleeping and eating;
• quality of life, life satisfaction, enjoying the environment, feeling alert and alive, able to enjoy pastimes/hobbies;
• illness symptoms, taha hinengaro;
• coping with and recovering from illness, self-managed care, staying out of the mental health system, understanding of illness;
• hope, journey from alienation to purpose, reawakening of hope after despair;
• empowerment, being in control, exercising choice, positive sense of self, self-determination;
• spiritual strength, increased spirituality, taha wairua;
• resources, basic needs (e.g. food, money, accommodation, transport), and
• satisfaction with services (including cultural relevance of services).
There are some additional and specific issues on which consumers have reflected in relation to
the development and use of mental health outcome measures.
The language and emphasis of measures – strengths focused or problem
focused?
Consumers in the Australian study made it clear they preferred measures that focused on
strengths as well as difficulties (Graham et al., 2001). The Behaviour and Symptom
Identification Scale (BASIS-32) was found to be disheartening and demotivating for the person
filling it in because of its focus on difficulties. One consumer commented:
I would like to see a questionnaire that realistically emphasizes achievement while still
identifying areas to be worked on (Graham et al., 2001, p. 26).
In terms of wording, people liked options such as:
What are you satisfied with? Or how have you improved? Or how do you feel about …?
(Graham et al., 2000, p. 26).
In addition, they preferred questions that were written in plain English (Graham et al., 2000).
These findings have implications for the rates of response to measures. If consumers find
measures disheartening and demotivating to answer, they are less likely to begin, or continue
with, an outcome measure.
Length of outcome measures
Australian consumers wanted outcome measures to be relatively brief, no more than two pages
but in a font big enough for them to see (Graham et al., 2001). This poses some difficulties,
given the number of domains and specific items the same groups of consumers indicated they
wanted to see covered in the measure. In addition, there was a preference for some issues to be
covered specifically rather than through an overall content domain (Graham et al., 2000). For
example, ideally there should be a specific item on sleep disturbances rather than this issue
being included in a general item on physical health and health risks.
Explaining the purpose of outcome measurement
Australian consumers who were consulted said they wanted the written version of the measure
to contain an explanation of the purpose of the instrument (Graham et al., 2000). In addition,
they wanted the purpose of the measure to be explained to them in person, preferably by their
case manager, rather than simply having it handed to them to fill in. Consumers also wanted
written information included about who would have access to the data and how it would be
used.
The right not to answer
Australian consumers requested an added statement to the effect that completion of all items
was voluntary, and should an item be inapplicable or unknown, it could be left blank (Graham et
al., 2000).
Assistance with completing an outcome measure
Australian consumers thought they should be offered assistance in filling out the measure
(Graham et al., 2000). The fact that some assistance was required by 50% of Ohio consumers
who filled in measures used in the Ohio Consumer Outcome System indicates this is a real need
(Ohio Department of Mental Health, 2000).
Other issues
Considerable research currently exists around mental health outcome measurement generally.
This information provides insight into some of the universal issues and considerations that need
to be reflected upon in relation to any work in the field of outcome measurement.
Triangulation
Triangulation is an approach to outcome measurement that involves the use of three
assessments, preferably using different versions of the same measure so that they are directly
comparable. This three-way scoring system usually takes into account the views of the
consumer, the service provider most directly involved with the consumer, and a person
significant to the consumer, such as a member of their family or whānau, partner, close friend or
caregiver (Kingi & Durie, 2001; Ohio Department of Mental Health, 2000). The system has been developed in response to a potential source of error that comes from assessing outcomes from only one point of view, and is based on the philosophy that the more points of view are taken into account, the more likely it is that an accurate picture of change will emerge.
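To illustrate the arithmetic involved, the following Python sketch shows one way triangulated ratings might be combined. It is a minimal sketch only: the item names and ratings are hypothetical, and the equal weighting and simple averaging of the three perspectives are assumptions made for the example rather than a scoring rule drawn from the literature cited above.

    from statistics import mean

    def triangulate(consumer, provider, significant_other):
        # Average the three ratings for every item that all three raters scored.
        shared_items = consumer.keys() & provider.keys() & significant_other.keys()
        return {item: mean([consumer[item], provider[item], significant_other[item]])
                for item in shared_items}

    # Hypothetical ratings on a 1 (poor) to 5 (good) scale.
    consumer_view = {"relationships": 4, "day_to_day_functioning": 3}
    provider_view = {"relationships": 3, "day_to_day_functioning": 4}
    whanau_view = {"relationships": 5, "day_to_day_functioning": 3}

    print(triangulate(consumer_view, provider_view, whanau_view))
    # e.g. {'relationships': 4, 'day_to_day_functioning': 3.33...}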
Satisfaction with services
Researchers disagree about whether satisfaction with services falls within the domain of mental
health outcome measurement. But irrespective of whether it does or not, there is an argument
that satisfaction with services should be measured because consumers have a right to services
with which they are happy (Bridgman et al., 2000; McCarthy, 1995). Australian consumers
argued that:
including questions about the consumer’s satisfaction with services within an outcome
instrument is a pre-condition to achieving full involvement of consumers (Graham et al.,
2001, p. 25).
Some Ohio consumers were of the view that satisfaction with services should be measured
separately from wellness outcomes and anonymously (Ohio Department of Mental Health, n.d.).
This is because levels of consumer satisfaction can indicate problems with a particular provider
or system, and some consumers do not want their level of satisfaction with services known to
agencies because they fear they will be denied services if they complain about problems.
Evaluation of service satisfaction requires an awareness of adequate service delivery, and in
New Zealand there is some evidence that expectations in this regard are sometimes so low that
at times dissatisfaction with services may paradoxically be more desirable than satisfaction
(Bridgman et al., 2000).
Psychometric testing
A number of criteria for judging the usefulness and reliability of measures of mental health outcome have been suggested in the mental health outcome literature. The following list has been drawn from a number of studies (Andrews et al., 1994; Burlingame & Lambert, 1995; Ciario, Edwards, Kiresuk, Newman & Brown, 1986; Green & Gracely, 1987; Stedman et al., 1997):
• The measure must be valid (i.e. it should have sound psychometric properties and should measure what it is supposed to measure).
• The measure must be reliable (i.e. it should, within acceptable limits, provide the same results when given to the same person on two occasions or by two different people).
• The measure must be sensitive to change in consumer well-being (i.e. it should be able to indicate whether a significant change has taken place for a consumer over consecutive administrations of the measure). This is the definitive property of a measure of consumer outcome. A measure may provide information about mental health status, but it is the extent to which it assesses meaningful change in a person’s condition that determines whether it can be called an outcome measure.
• The measure must be applicable (i.e. it should address dimensions that are important to consumers as well as having the ability, once outcome scores are aggregated, to provide useful feedback to relevant stakeholders).
• The measure should be relevant and appropriate.
• The measure must be acceptable (i.e. brief, and the purpose, wording and interpretation should be clear – it should be user friendly).
• The measure must be practical (i.e. it should not impose too much of a burden on consumers. It should involve simple methodology and procedures that can be implemented uniformly by a majority using well-defined instructions. The measurement materials and implementation procedures should be relatively inexpensive. The scores from a measure should have clear and objective referents (‘meanings’) that are consistent across consumers).
• A measure’s results should provide easy feedback to consumers and be readily interpretable without extensive statistical skill.
• Measure(s) that provide information regarding the means or processes that may produce positive effects are preferred to those that do not.
Burlingame and Lambert (1995) note that many ‘home made’ measures are in use for which the kind of information listed above does not exist. They argue that to judge the usefulness and reliability of a measure in terms of generating accurate information, one needs data on validity and reliability, as well as normative data (i.e. how one detects differences between clinically disordered and non-disordered scores). In their view these data are so important that:
if a measure lacks sufficient technical and normative data then any decision using the
measure may be highly suspect …[In fact] employing data from the measure to make
decisions may be worse than relying solely on professional judgement (Burlingame &
Lambert, 1995, p. 3).
In terms of technical data, Burlingame and Lambert (1995) suggest the following parameters.
For internal consistency, the coefficient alpha should be .80 or above. Test-retest reliability
should come in at .70 or above. Validity coefficients should be no lower than .50, and
coefficients of over .75 would suggest excellent concurrent validity. The measure should show
sensitivity to meaningful changes in well-being over time.
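As a minimal illustration of how these suggested parameters might be applied when screening a candidate measure, the Python sketch below compares reported coefficients with the Burlingame and Lambert (1995) cut-offs. The function and variable names are invented for the example, and the coefficients shown are hypothetical rather than results for any actual measure.

    # Burlingame and Lambert (1995) suggested cut-offs, as summarised above.
    THRESHOLDS = {
        "internal_consistency_alpha": 0.80,  # coefficient alpha of .80 or above
        "test_retest_reliability": 0.70,     # .70 or above
        "validity_coefficient": 0.50,        # no lower than .50 (> .75 is excellent)
    }

    def meets_thresholds(reported):
        # Return True/False for each criterion the reported coefficients satisfy.
        return {name: reported.get(name, 0.0) >= cutoff
                for name, cutoff in THRESHOLDS.items()}

    # Hypothetical coefficients for an imaginary measure.
    example = {
        "internal_consistency_alpha": 0.86,
        "test_retest_reliability": 0.72,
        "validity_coefficient": 0.48,
    }
    print(meets_thresholds(example))
    # {'internal_consistency_alpha': True, 'test_retest_reliability': True,
    #  'validity_coefficient': False}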
The measure should be easy to use, score and interpret and not costly on a case, clinic or
hospital basis. They note that while the use of multiple measures of outcome has been
recommended, the cost and complexity of interpreting multiple instruments administered at
different times are substantial. Not only this, but any contradictory interpretations of change
from so many assessments can create an administrative dilemma in deciding which to take
notice of, and what the overall outcome actually is (Burlingame & Lambert, 1995). This
suggests a further criterion of covering as many of the relevant domains as possible within the
single measure.
In terms of feasibility, or ease of use, Burlingame and Lambert (1995) suggest a good measure
should eliminate paper rather than adding to it. It should also be simple enough to be completed
in 10 minutes.
Summary
The literature review was a major part of the present research. It informed the continued research in two main ways. First, the sum of domains identified as important to consumers (across cultures) for their mental well-being was seen as central to the consideration of a self-assessed measure of consumer outcome for New Zealand. Second, the substantial body of information on both technical and pragmatic aspects of outcome measure design, from both a consumer and general perspective, was valuable in reflecting on self-assessed measures already in existence.
Method
Establishment of consumer reference group
A consumer reference group was established at the outset of the project. The key tasks of the
group were:
• to consider, monitor and advise on the process for undertaking the work;
• to consider, monitor and advise on the material generated through the work;
• to ensure the research team maintain a consumer focus in all aspects of the work undertaken;
• to communicate and consult with other consumers about the project work, and
• to disseminate information to other consumers, re the progress of the project, to ensure consumers are kept informed of the work being undertaken.
The group consisted of 11 persons from throughout New Zealand who reflected a wide range of consumer experience, background, and networks (including gender and rural/urban issues, among others). This included three persons of Māori ethnicity and one person of Pacific Island ethnicity. During the course of the project, the group came together for three full-day meetings. Communication between the research team and the reference group principally occurred via e-mail.
Information gathering process
Literature review
A formal literature review was conducted using a range of electronic data bases (PsychINFO,
Mental Measurements Yearbook, Index New Zealand, Digital Dissertations, Medline, Expanded
Academic, and Proquest) to identify self-assessed measures of consumer outcome in existence
in the mental health or general health areas and any other material of relevance.
The following search terms were used to undertake the formal literature review:
• mental health outcomes;
• mental health recovery;
• mental health research plus evaluation, measurement;
• assessment plus consumer, client, patient, self-report, self-administer;
• mental health consumer outcomes plus measurement or psychiatric evaluation or psychological assessment;
• mental health and treatment effectiveness or client satisfaction;
• mental health treatment effectiveness;
• consumer self report;
• client self report;
• end treatment evaluation;
• patient treatment evaluation;
• mental health consumer self report;
• mental illness consumer self report, and
• mental health outcomes self report.
Additional searches were done using the Google search engine. Several websites were searched
to find fugitive research (i.e. research not published in journals, books, etc.) and consumer-developed outcome measures that might not have been formally reviewed. These included: the
Human Services Research Institute (USA), the North Eastern Alliance for the Mentally Ill
(Australia), the National Alliance for the Mentally Ill (USA), MIND (UK), SANE Australia,
Mindful-things (UK), Mental Help Net (USA) and the Mental Health Advocacy Coalition (New
Zealand) websites. Where websites had journals or publications listed, recent editions were
scrolled through for references to self-assessed outcome measures.
A hand search was carried out at the New Zealand Ministry of Health library for periodicals that
might carry references to self-assessed measures that had not been located by computer
searches. The following periodicals were checked by hand: Evidence Based Mental Health;
Health Outcomes Bulletin; Mindful and Mental Health Quarterly.
Lastly, several organisations and individual researchers were contacted by phone and e-mail to
seek supplementary information about existing measures and research on them.
The majority of the measures reviewed were located through computer searches and published
papers.
Analysis of measures
The measures initially identified were then assessed against the following criteria (formulated
from the information collected through the literature review) to determine those suitable for
more detailed analysis:
• were developed by consumers;
• were developed with significant consumer involvement;
• were developed in New Zealand;
• included domains relevant to cultural identity/connection and/or were based on cultural research/consultation;
• received positive evaluations by consumers (e.g. easy to use and understand, covered relevant domains of well-being, culturally relevant);
• covered all or many of the domains that have been identified as important to consumers;
• were designed for an adult, rather than adolescent or child, population, and
• were well researched for validity, reliability, sensitivity to change, cultural appropriateness, and feasibility.
Those measures chosen to be examined in more detail were comprehensively analysed in
relation to an extensive set of properties:
• rater or format;
• key outcomes assessed;
• missing content areas;
• number of items and rating methods;
• intervals of measurement;
• time to administer;
• sample(s) tested on;
• consumer views on measure;
• face validity;
• construct validity;
• content validity;
• criterion validity;
• convergent and divergent validity;
• inter-rater reliability;
• test-retest reliability;
• internal consistency;
• sensitivity to change;
• feasibility – acceptability;
• feasibility – applicability;
• feasibility – practicality;
• current usage in New Zealand and overseas;
• development;
• cultural needs considered;
• consumer input to development;
• family/whānau input to development;
• clinical input to development;
• limitations of measure;
• effect of setting (e.g. hospital, community, etc.), and
• effects for type of mental illness.
The research team considered the comprehensive analysis of each tool and identified six
measures that best satisfied the criteria that had been formulated from the information collected
through the literature review. These six measures were presented to the consumer reference
group to discuss their respective strengths and relevance to consumer issues as perceived by that
group. They agreed on the three most appropriate measures to take to wider consumer
consultation.
Interviews
People working in areas of relevance to the present project were identified by the research team
and the reference group, and interviewed by the lead researcher. They were asked to respond to
a total of 16 questions (Appendix 1) covering issues relevant to mental health outcome
measurement generally and self-assessed outcome measurement in particular. Interviewees
included people with experience of mental illness, researchers, service providers and
consultants. These interviews served to identify the major issues the interviewees thought had a
bearing on the present project.
Survey
No data were available on the current use of self-assessed measures of consumer outcome in
New Zealand, so a brief survey (see Appendix 2) was conducted to determine this. In the
absence of a complete database of all New Zealand mental health service providers, such a
database was compiled with information taken from the Mental Health Information National
Collection (collected through the New Zealand Health Information Service) and the 'Hospitals
and other health service providers' sections from the white pages of all New Zealand phone
books. The services were then contacted by mail, addressed to the Manager, to complete a five-item questionnaire to determine whether they currently used a measure that consumers completed to assess their current well-being and recovery to date; what this instrument was; what percentage of consumers received and completed this instrument; how long it took to
complete; and how useful self-assessed measures of outcome were perceived to be. A stamped,
return-addressed envelope was provided for return of the survey. All return envelopes were
number coded so that survey completion could be tracked while maintaining the anonymity of
individual survey results. One month after mailing out the survey forms, telephone calls were
made to follow up those services that had not already responded.
Consultation process
Consumer consultation
Five general consumer consultation meetings, two hui with Māori consumers, one fono with
Pacific consumers, and one consultation meeting with deaf consumers were organised to consult
on the subject of self-assessed measures of consumer outcome. A snowball strategy was used to
select participants for these. Consumers with extensive networks within their own areas were
asked to recruit participants to attend the consultation meetings. A maximum of twelve people
were invited to each consultation meeting.
The general consultation meetings took place in South Auckland, Wellington, Christchurch,
Greymouth, and Napier to cover major metropolitan, urban and rural settings within New
Zealand. The two hui were held in Napier and Nelson, where two of the Māori members of the
consumer reference group had established networks. Both the fono and the consultation
meeting with deaf consumers took place in South Auckland.
General consumers’ consultation meetings
Each of the consultation meetings ran for half a day. The two consumer members of the
research team facilitated each meeting. The consultation meetings started with the facilitators
providing a brief introduction about general mental health outcome measurement, the purpose
of the current project, and the aims, objectives and process involved with the present
consultation forum.
Participants were then asked: ‘If you were wanting to measure your mental well-being, what
would it be important for you to consider’? Depending on the number of people in attendance
at each meeting the participants either remained as a large group or were asked to split into two
sub-groups to undertake this exercise. The maximum number of participants in any one group
was seven.
The second part of the meeting involved participants considering the three tools the reference
group had agreed be taken to wider consumer consultation. Presentation order of the tools (at each meeting) was rotated to control for the effects of presentation order. The tools
were presented one at a time, and with each, participants were given time to look over the tool
and/or complete it if they wished. Once participants signalled they had had sufficient time to
familiarise themselves with the tool the group was asked the following questions for feedback:
What do you think of this tool? What is good about the tool? What is bad about the tool? Is
there anything missing from the tool? What do you think of what you have to do to complete
the tool? Once the group had completed discussing one tool the next tool was introduced and
the above process repeated until all three tools had been considered.
Finally, the group was asked to rate the three tools that had been presented in order of
preference.
All participants received a $40.00 reimbursement for attending the meeting.
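The rotation of presentation order described above can be illustrated with a short Python sketch. The meeting and tool labels are placeholders, and the simple cyclic rotation shown is an assumed way of operationalising the rotation, not a record of the actual schedule used at the meetings.

    def rotated_orders(tools, n_meetings):
        # Shift the starting tool by one position for each successive meeting,
        # so that over several meetings each tool appears in each position.
        orders = []
        for i in range(n_meetings):
            shift = i % len(tools)
            orders.append(tools[shift:] + tools[:shift])
        return orders

    tools = ["Tool A", "Tool B", "Tool C"]
    for meeting, order in enumerate(rotated_orders(tools, 5), start=1):
        print(f"Meeting {meeting}: {order}")
    # Meeting 1: ['Tool A', 'Tool B', 'Tool C']
    # Meeting 2: ['Tool B', 'Tool C', 'Tool A']
    # Meeting 3: ['Tool C', 'Tool A', 'Tool B'] ... and so on.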
Māori consumers’ consultation hui
Each of the hui ran for half a day and was tikanga-based. They were facilitated by two of the
Māori members of the consumer reference group and the Māori member of the research team.
Tāngata Motuhake∗ from the Light House in Napier (a consumer/Tāngata Motuhake mental
health service), and Tāngata Whai Ora from the White House in Nelson (a consumer/Tāngata
Whai Ora mental health service) were personally invited by two of the Māori members of the
consumer reference group to attend consultation hui to discuss self-assessed consumer outcome
measures.
A pānui was distributed to potential participants that outlined who the hui had been organised
by, who the facilitators were, and who was conducting the research. In the pānui these potential
participants were asked for permission to tape-record the hui to help the researchers write up the
report. They were also assured that all kōrero/comments in the report would be kept
anonymous, and that the tapes would be destroyed once the researchers had finished analysing
the information. A reimbursement of $40.00 for attendance and input into the hui was offered
alongside the provision of afternoon tea. As places were limited, potential participants were
asked to contact the Māori member of the consumer reference group who distributed the pānui
if they wanted any further information or if they planned to attend.
This initial contact and information sharing is an important phase in gaining the trust and buy-in
of Māori. A ‘kanohi ki te kanohi’ (face-to-face) approach by someone known to the potential
participants ensured greater participation, as did the tikanga Māori processes followed
throughout the hui (such as beginning and ending with karakia, allowing for mihimihi, and
encouraging kaumātua attendance).
The facilitators began by explaining the background to the work, what it was about, and why it
was important. They then discussed what a ‘measurement tool’ was, who ordinarily completes
or fills in these ‘tools’ and reiterated who was undertaking the research and why the hui had
been called. ‘Self-assessed Tāngata Motuhake or Tāngata Whai Ora outcome measures’ were
interpreted as tools used by Tāngata Motuhake or Tāngata Whai Ora to assess their mental well-being or ‘waiora’. Ensuing discussion focused on what well-being or waiora was, and how it
could be measured.
∗ ‘Tāngata Motuhake’ is a term chosen by those in the Heretaunga/Hawke’s Bay region to best describe
people who have experience of mental illness. It is used interchangeably throughout the report with the
term ‘Tāngata Whai Ora’ because it is a local term and not specific to all participants in the research. The
term ‘Māori consumers’ has been used when referring to both Tāngata Motuhake and Tāngata Whai Ora.
Participants were asked what they considered important for them for their well-being. They
were split into two sub-groups to undertake this exercise and on completion, each group was
then asked to feed back their responses to the group as a whole.
Participants were asked to consider the three ‘measurement tools’ which the consumer reference
group had agreed be taken to wider consumer consultation. The tools were presented one at a
time and with each, participants were given time to look over the tool and/or complete it if they
wished. Once participants signalled they had had sufficient time to familiarise themselves with
the tool the group was asked the following questions for feedback: What do you think of this
tool? What’s good about it? What’s bad about it? What’s missing? What do you think
about having to fill it in? Do you think if you filled it in, it would tell you about your
wellness? Would it measure your level of wellness (i.e. is it doing what it’s meant to do)?
Once the group had completed discussing one tool the next tool was introduced and the above
process replicated until all three tools had been considered.
Finally, the group was asked to rate the three tools that had been presented in order of
preference.
Pacific consumers’ consultation fono
Potential participants for the fono were approached through a support service managed by
Pacific People to see if they were interested in being involved in the fono. This service was also
used as the venue for the fono, which provided a familiar and comfortable environment for
attendees. The consultation fono ran for half a day. The Pacific member of the reference group
and the two consumer members of the research team facilitated the fono. The meeting was
opened with a prayer and welcome in each Pacific language. Participants were then given the
opportunity to introduce themselves and where they were from. Facilitators provided a brief
general introduction about mental health outcome measurement, the purpose of the current
project, and the aims, objectives and process involved with the present consultation forum.
Participants were then asked: ‘If you were wanting to measure your mental well-being, what
would it be important for you to consider’? The participants remained as one whole group to
undertake this exercise.
The second part of the fono involved participants considering the three tools the reference group
had agreed be taken to wider consumer consultation, following the process as described for the
general consumers’ consultation meetings.
The fono finished with a prayer.
All participants received a $40.00 reimbursement for attending.
Deaf consumers’ consultation meeting
The consultation meeting with deaf consumers was run quite differently from those undertaken
with the other groups. This was primarily due to the limitations imposed by time and the
availability of material suitable for consideration by deaf consumers.
Interpreters were commissioned to attend the full meeting to translate communication between
the facilitators (the two consumer members of the research team) and the participants of the
meeting. In addition, support workers of the deaf mental health service attended to support their
clients.
The meeting started with the facilitators providing a brief general introduction about mental
health outcome measurement, the purpose of the current project, and the aims, objectives and
process involved with the present consultation forum. Participants were then asked: ‘If you
were wanting to measure your mental well-being, what would it be important for you to
consider’? The participants remained as one whole group to undertake this exercise.
On advice from the deaf mental health service, it was agreed that a needs assessment tool,
developed specifically for the deaf mental health service, be presented for participants to reflect
on (see Appendix 3). Participants were given time to look over the tool and/or complete it if
they wished. Once participants signalled they had had sufficient time to familiarise themselves
with the tool the group was asked the following questions for feedback: What do you think of
this tool? What is good about the tool? What is bad about the tool? Is there anything missing
from the tool? What do you think of what you have to do to complete the tool?
All participants received a $40.00 reimbursement for attending the meeting.
All consultation meetings were taped with the prior knowledge of participants.
Mental health service provider consultation
One general consultation meeting was organised to consult with mental health service providers
on the subject of self-assessed measures of consumer outcome. Invitations to this half-day
meeting were sent to all New Zealand mental health service providers. The key aims of this
meeting were:
• to discuss the progress of the preliminary work towards the development of a self-assessed measure of consumer outcome, and
• to discuss any issues health providers might have about the use of a self-assessed measure of consumer outcome.
Two members of the research team facilitated the meeting, and opened it by talking about
outcome measurement generally, the purpose and process involved with the current project, and
the potential benefits and issues associated with the implementation of a self-assessed mental health outcome measure from a clinician perspective.
For the second aim, participants were asked to consider: How could a self-assessed consumer
outcome measure best be integrated into your service? What are the challenges? How could
they be overcome?
Results
Eighteen self-assessed measures of consumer outcome were chosen for in-depth analysis. They
were:
Comprehensive self-assessed outcome measures
• Assessment of Wellness Outcome Tool;
• Behaviour and Symptom Identification Scale (BASIS);
• Client’s Assessment of Strengths, Interests & Goals (CASIG);
• Crisis Hostel Healing Scale;
• Hua Oranga;
• Lotofale Evaluation Measure;
• Medical Outcomes Study 36 item Short Form Scale (SF-36);
• Mental Health Inventory (MHI);
• Mental Health Recovery Measure;
• Multnomah Community Ability Scale;
• Ohio Mental Health Consumer Outcomes System;
• Outcome Questionnaire (OQ);
• Personal Vision of Recovery Questionnaire (PVRQ), and
• Recovery Assessment Scale (RAS).
Single factor self-assessed outcome measures
• Lehman Quality of Life Interview;
• Quality of Life Index;
• Satisfaction Index – Mental Health, and
• Verona Service Satisfaction Scale (VSSS).
The following properties of the tools were identified (where data were available):
• rater or format;
• key outcomes assessed;
• missing content areas;
• number of items and rating methods;
• intervals of measurement;
• time to administer;
• sample(s) tested on;
• consumer views on measure;
• face validity;
• construct validity;
• content validity;
• criterion validity;
• convergent and divergent validity;
• inter-rater reliability;
• test-retest reliability;
• internal consistency;
• sensitivity to change;
• feasibility – acceptability;
• feasibility – applicability;
• feasibility – practicality;
• current usage in New Zealand and overseas;
• development;
• cultural needs considered;
• consumer input to development;
• family/whānau input to development;
• clinical input to development;
• limitations of measure;
• effect of setting (e.g. hospital, community, etc.), and
• effects for type of mental illness.
Table 1 provides a summary of the properties of each of the tools examined. Definitions of the
relevant psychometric terms are provided in Appendix 4. The significant aspects of each of the
tools are discussed in turn.
Comprehensive Outcome Measures
Assessment of Wellness Outcome Tool
(Information collected from Bridgman et al., 2000.)
This is a New Zealand measure that assesses a comprehensive range of outcomes including
quality of life, spiritual strength, cultural connection, relationships, illness symptoms, day-to-day functioning, and satisfaction with services. It also has one item on physical health and
health risks, one item on empowerment, and one item on hope. Very few areas identified as
important to consumers are missing from the measure. These are: cultural relevance of services,
suicidal thoughts and actions, and basic needs/resources. The tool is filled in by consumers
using a pen and paper format. Extra information is added by providers on the services the
consumer has received, and on demographic data.
The measure is relatively brief, with 27 items for consumers, 15 items on services, and 10
demographic items. The consumer part of the measure uses a variety of scales for answers,
from five point ‘better’ to ‘much worse’, to two point better/worse, plus four questions with
space for a written answer. Being brief, it probably takes around 15 to 20 minutes for
consumers to complete, although no data on time for administration were actually found in the
research. In addition, no information on what time period the measure covered or how often it
could be administered was found, although it appeared to measure consumer experiences both at
the time the measure was completed and at any time the consumer considered relevant.
The tool was based on Tāngata Whai Ora/consumer and family/whānau perceptions of wellness
but no information was found on the views of consumers in relation to the finished measure.
Little information was found on validity, reliability and consistency. Face validity appeared
good, with items reflecting the concept of recovery as defined by consumers. Content validity
also seemed satisfactory, with items adequately covering the different domains included. No
information was found on sensitivity to change over time. Feasibility was also difficult to judge
due to lack of data, but applicability appeared good as the tool covers most of the domains
consumers have identified as important to recovery. The brevity and time required to complete
make it practical to use, although no data were found on training requirements or costs.
The measure is used only in New Zealand, in the Waitemata area, and not overseas. It was
developed by researchers and clinicians, based on input from Tāngata Whai Ora/consumers and
their families/whānau on wellness, although consumers do not appear to have been directly
involved in developing the measure. This is one of the limitations of the measure, along with a
lack of data on validity, reliability, sensitivity to change, feasibility and effects of setting. It is
also missing some key domains, as mentioned above, including items on the cultural relevance
of services. The tool does not appear to have been tested across cultures.
Behaviour and Symptom Identification Scale (BASIS-32)
(Information collected from Andrews et al., 1994; Bridgman et al., 2000; Graham et al.,
2001; Stedman et al., 1997.)
BASIS-32 is the acronym for the 32 item Behaviour and Symptom Identification Scale, a
measure of mental health outcome that is filled in by consumers. It can be used as a pen and
paper measure or converted to computer with no loss in effectiveness. It can also be used as the
template for a face-to-face or phone interview. BASIS assesses outcomes in the domains of
coping with day-to-day life (work, leisure, household chores, etc.), relationships, stress,
intimacy, mood, thinking, unusual behaviour, sex, drugs, violence and suicidal thoughts and
behaviour. There are two versions – a 16-item and a 32-item questionnaire. Answers are on a
five point scale of the amount of difficulty consumers have in each area – from ‘no difficulty’ to
‘great difficulty’.
Some of the domains identified by consumers as important in an outcome measure are missing
from BASIS or are covered by very few items. These include: consumer satisfaction with
services (including cultural relevance of services), quality of life, physical health, coping with
and recovering from illness, cultural identity/connection, hope, spiritual strength, and basic
needs/resources. The 32-item version of BASIS is relatively quick to administer, taking only 15
to 20 minutes to complete.
BASIS has been tested with 40 to 50 consumers attending agencies in the community, 40
consumer representatives and consultants, and consumers with a variety of mental illness
experience, including diagnoses of anxiety, schizophrenia and affective disorder. It has also
been used in a variety of settings. Some differences in results have been observed depending on
whether the measure was used in a community or in-patient setting. Consumers in private
psychiatry settings showed scores that suggested higher levels of mental health problems than
those in public psychiatry settings (Stedman et al., 1997). In terms of type of illness, there were
significant differences in scores found between those people experiencing affective (mood)
disorders and the group experiencing schizophrenia. To a lesser extent, scores also differed for
those with experience of anxiety disorders. These differences were seen in relation to scores on
self/others, depression/anxiety and daily living/role functioning scores (Stedman et al., 1997).
Consumers have been asked extensively for their views on BASIS and have mixed opinions.
Overall, consumers see a need for BASIS to be modified or replaced with a new measure with a
more positive focus on strengths and improvements rather than on difficulties (Graham et al.,
2001). The Australian consumers who were interviewed saw the domains covered as valid, but
thought there were domains missing that should have been included.
Face validity appears adequate, although a significant number of key concepts of recovery as
defined by consumers are missing. Andrews et al., (1994) reported that construct, content and
criterion validity were all adequate, although Stedman et al., (1997) did not investigate this
further in their re-analysis. In terms of content validity, the items appear to cover the domains
in the measure adequately, although they are somewhat sparse. Convergent validity appears
reasonable, with significant correlations with the relevant scales of provider measures (Health of
the Nation Outcomes Survey, Life Skills Profile, and the Role Functioning Scale). No data
were found on inter-rater reliability. Test-retest reliability was excellent, with coefficients
ranging from .84 to .90. Internal consistency was good to excellent, and the measure showed an
ability to indicate significant differences over a 6-month period, in relation to the domains
covered (Stedman et al., 1997).
In terms of feasibility, acceptability of the measure to consumers was compromised by the
language of the measure, which was seen as negative because of the focus on difficulties
(Stedman et al., 1997). But consumers considered the length of the measure reasonable
(Graham et al., 2001). Consumers also liked the clear response format and thought that the 1-week period covered by questions was easier to recall than a 1-month period (Stedman et al.,
1997). While BASIS was seen by consumers as less useful than the Mental Health Inventory
but better than SF-36, many relevant domains were seen as missing, thus reducing its
acceptability. However, the items covered by BASIS were all seen as important by consumers,
so overall applicability was adequate. In terms of the practicality of the measure, no
information was available on costs, but no training is required, and the measure is brief and easy
to use.
BASIS is currently used in New Zealand, as well as in Australia and the US, and may well be
used in other countries. The measure was developed by two researchers based on the
perspective of consumers experiencing acute mental illness in an in-patient facility. It is not
clear from the research exactly how consumers were involved in developing the BASIS. There
appears to have been no involvement of families in the development of BASIS and clinician
input into development is also not clear. BASIS was used at 6-month intervals in the research
that was reviewed, but can be used at intake and specified intervals during or after treatment.
Consumers were asked to consider their experiences over the past week when answering
questions.
There are no items on cultural matters and no information was found on testing of the tool across different cultures. Other limitations of this measure include that the sub-scales may not represent actual constructs, and that it has mainly been tested on an in-patient population that excluded people with experience of psychosis or who were too ill to be interviewed (Stedman et al., 1997). There has also been difficulty validating the psychosis sub-scale.
Client’s Assessment of Strengths, Interests and Goals (CASIG)
(Information collected from Wallace et al., 2001.)
CASIG is administered via an interview by peers to other consumers, and also has a version to
be completed by service providers. It is a comprehensive outcome measure that looks at an
individual’s goals for improvement in five major areas: living arrangements, financial
community functioning (e.g. work skills and employment), social/family relationships, religious
activities, and physical and mental health. It does not cover the domains of cultural relevance of
services, cultural identity/connection, coping with and recovering from illness, or hope and
empowerment. The measure was not sighted, but from documentation it appears to include
about 80 items, and uses a variety of methods of scoring. It includes some open-ended
questions for goal setting, many yes/no answers, and some Likert scale answers from ‘poor’ to
‘excellent’. No information was found on how long it takes to complete the CASIG but based
on measures of similar length and design it seems likely that it would take 30 to 45 minutes.
CASIG has been tested with 244 consumers in Project Return, a consumer-operated, self-help
organisation running clubs at community sites; the clubs all had members who had experience
of major mental illness. It has also been tested with 93 in-patient consumers with experience of
severe and persistent mental illness (all of whom had either been hospitalised for 5 years or had
had multiple admissions to hospital) in unlocked wards, attending day treatment activities.
Despite this extensive testing no data were found on consumer views on the measure.
While CASIG appears to show reasonable face validity, reflecting some key concepts of
recovery as defined by consumers, with relatively few missing domains, no data were found on
construct validity and content validity. Comparison with the Independent Living Skills Survey
found marginal to moderate levels of correlation, suggesting some degree of criterion validity.
Inter-rater reliability was calculated by comparing staff and consumer scores on the respective
versions of the scale. As there is not usually high agreement between staff and consumer views
this is not an ideal way to test inter-rater reliability. Correlations ranged from extremely poor to
good, with consumers rating side-effects and symptoms higher than staff did. Retesting
occurred over 4 to 6 weeks but test-retest figures were not found in the documentation. Internal
consistency was moderate to excellent, and authors explained any low scores as due to grouping
items by content rather than by inter-item correlations (Wallace et al., 2001). Acceptability is limited by the length of the scale, but the language was checked by consumers so it should be easy to understand. In terms of applicability, most areas considered important by consumers were
covered by the scale, so this is reasonably high. Practicality is also compromised by the length
of the measure, and training of peer interviewers appeared quite time intensive, as were
audiotaping and feedback of actual interviews. No data were found on costs of training or
availability of the measure.
According to the survey undertaken as part of the present research, CASIG is not currently used
in New Zealand, and overseas use is not known. It was developed by academics, with major
input from 100 clinical staff and managers, including case managers, social workers,
rehabilitation therapists, nurses, and psychologists. Consumers did not appear to be involved in
developing CASIG but were part of the Client Assessment Work Group that reviewed and
approved the measure. In addition, some feedback on the language used was also sought from
peer interviewers. Families of consumers did not appear to be involved in any aspect of
development. No mention of cultural items or testing with consumers across cultures was found
in the research.
It was not clear how often CASIG was intended to be used, but the fact that peer interviewers
were reluctant to administer it 4 to 6 weeks later suggests it would not be used often because of
its length. No information was found on what time period consumers were asked to consider in
answering questions. Some effects of setting (i.e. residential, hospital or community) were
found, with those in community settings describing themselves as functioning better than those
in in-patient settings, which is understandable. Those in community settings also tended to be
less engaged in unacceptable behaviours, had more favourable attitudes towards medications,
experienced fewer side effects of medication, and were more satisfied with their quality of
treatment but not with their quality of life. In-patient interviewees reported fewer symptoms, but engaged less in leisure activities, work and food preparation than those in the community.
Crisis Hostel Healing Scale
(Information collected from Dumont, 1998.)
This scale involves a face-to-face interview with consumers, although it is not clear whether the
interviewer is normally a staff member or consumer. It is a comprehensive outcome measure
that investigates the domains of: self-esteem, confidence, internal self-control, feelings,
hopefulness, mental illness symptoms, empowerment, suicidal thoughts and actions, harm to
self and others, spiritual awareness, physical well-being, medications, relationships, perceptions
of self-acceptance, comfort and pleasure, and some quality of life items. The few missing
domains (of all those consumers would like to see in an outcome measure) are: cultural
identity/connection, satisfaction with services (including cultural relevance of services), day-to-day functioning, basic needs/resources, and some quality of life items.
Crisis Hostel∗ Healing Scale has a total of 40 items, and uses a four point Likert scale from
‘strongly agree’ to ‘strongly disagree’. It takes 15 to 20 minutes to complete. The scale has
been used as an outcome measure with 110 people attending five psychosocial clubs, as well as
in Crisis Hostel with experimental and control groups. No data were found on consumer views
on the measure.
In terms of face validity, the scale generally reflects the concept of recovery as defined by
consumers in the domains it includes. Construct validity is questionable, as initial factor
analysis indicated a split by positive and negative items rather than by the hypothesized factors.
A revised version of the measure was analysed using pattern matching, which found only a
weak relationship between the factors. Content validity is difficult to judge because subscales
are not marked in the measure. No data were found on criterion validity, inter-rater reliability,
or convergent and divergent validity. Test–retest reliability was good, and internal consistency
was excellent. The scale showed significant changes over time at 6 and 12 months for the
treatment group for the healing scale but not the empowerment scale. The scale appeared to
have relatively high levels of acceptability because of its brevity, and the ease of understanding.
Applicability seemed good as it included many concepts identified by consumers as relevant to
recovery, and consumers were involved in its development. Little information on practicality
was found, although the measure appears to require a statistical package such as SPSS for
analysis of results, which makes it reasonably difficult to use compared to many other measures.
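To illustrate, totals and the reported internal consistency analysis for a scale of this kind could in principle be reproduced with freely available tools rather than a package such as SPSS. The sketch below is purely illustrative: the 1 to 4 item coding, the absence of reverse-scored items, and the synthetic data are assumptions rather than documented features of the Crisis Hostel Healing Scale.

    import numpy as np

    def total_score(responses: np.ndarray) -> np.ndarray:
        """responses: (n_people, 40) array of item codes 1-4 (assumed coding)."""
        return responses.sum(axis=1)

    def cronbach_alpha(responses: np.ndarray) -> float:
        """Internal consistency: alpha = k/(k-1) * (1 - sum of item variances / total variance)."""
        k = responses.shape[1]
        item_vars = responses.var(axis=0, ddof=1)
        total_var = responses.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(0)
    data = rng.integers(1, 5, size=(110, 40))   # hypothetical respondents x items
    print(total_score(data)[:5], round(cronbach_alpha(data), 2))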
The measure does not appear to be currently used in New Zealand, and it is not clear from
documentation how widely it is used elsewhere. It was developed by consumers and service
providers using concept mapping, but no mention was found of the input of families or
clinicians in the development of the tool. The scale does not include items on cultural matters
and no mention was found of testing of the measure across cultures. The scale appears to be
intended for use at intake, 6 months and 12 months, and asks people who fill it out to consider
the present and the past 6 months. The 6-month period consumers are asked to consider when
responding is a very lengthy period for accurately remembering experiences and feelings.
In terms of effects of setting, higher average scores were found for people who had been
hospitalised than for those who had not. Effects of type of illness did not appear to have been
tested.
∗ Crisis Hostel is an alternative to psychiatric hospitalisation that is voluntary, non-medical and based on
a peer support model.
Hua Oranga
(Information collected from Kingi & Durie, 2000; Kingi & Durie, 1997.)
Hua Oranga is a comprehensive outcome measure designed to measure mental health outcome
from a Māori perspective. It is a self-report tool using pen and paper that gathers information
from three sources – Tāngata Whai Ora, whānau and clinician/service provider. Hua Oranga
focuses on four domains drawn from Māori tradition and culture: taha wairua (spiritual well-being), taha hinengaro (mental well-being), taha tinana (physical well-being), and taha whānau
(family well-being and relationships). Domains identified by consumers as important to
recovery missing from this measure include: satisfaction with services, some items on
satisfaction with cultural aspects and relevance of services, suicidal thoughts and actions, hope,
some quality of life items, some mental health symptoms, some physical health symptoms,
some relationship items, and some items on coping with and recovering from illness.
The measure has 16 items, four on each taha, and three versions, one for Tāngata Whai Ora, one
for whānau, and one for service providers. The items use a four-point scale for measuring
change, from ‘much more’ to ‘much less’. Scores for each version are added together and
averaged to get a total score, from ‘poor’ (-32) to ‘excellent’ (32). Each version takes 10 to 15
minutes to complete. Hua Oranga has been tested with Tāngata Whai Ora in assessment, in-patient, outpatient, community care and community support settings, as well as with whānau
and clinicians. Tāngata Whai Ora, who were consulted by Kingi and Durie (2000), saw the
measure as containing largely relevant items in all domains and felt a high degree of satisfaction
with the measure. They felt the measure gave them a chance to voice their views of treatment
and outcomes.
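To make the scoring arithmetic concrete, the sketch below shows one way the 16-item totals and the averaging across the three versions might be computed. The item coding used ({-2, -1, +1, +2}, with the intermediate labels invented) is only an inference from the reported -32 to 32 range; the actual scoring key was not sighted.

    from statistics import mean

    # Assumed coding: the report gives only the endpoints ('much more' to 'much less')
    # and the -32 to 32 range; the labels and values below are inferred, not documented.
    CODE = {"much less": -2, "less": -1, "more": 1, "much more": 2}

    def version_total(answers):
        """Total for one 16-item version (Tangata Whai Ora, whanau or clinician)."""
        return sum(CODE[a] for a in answers)

    def combined_score(tangata_whai_ora, whanau, clinician):
        """The three version totals are added together and averaged, as the report describes."""
        return mean([version_total(tangata_whai_ora),
                     version_total(whanau),
                     version_total(clinician)])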
Face validity for Hua Oranga appears good, and it seems to measure the construct it sets out to
measure – that of mental health outcomes from a Māori perspective. Content validity is
reasonable, although because of the brevity of the measure, items are necessarily sparse.
Psychometric testing on the measure has not yet been completed so no data are currently
available on construct validity, criterion validity, convergent/divergent validity, inter-rater
reliability, test-retest reliability, internal consistency, and sensitivity to change over time.
Relatively good information was available on feasibility. Hua Oranga has good acceptability
with Tāngata Whai Ora, their whānau and clinicians who were consulted by Kingi and Durie
(2000), being brief, reasonably clear, and easy to understand. Some concerns were expressed
about difficulties in interpreting items in the taha wairua (spiritual well-being) section,
especially for more disabled Tāngata Whai Ora, and the measure was not considered suitable for
use with severely impaired Tāngata Whai Ora and those under 15 years. Perceived relevance of
the tool was high across all groups consulted by Kingi and Durie (2000), and there was a
particularly positive response to the inclusion of the whānau version, and the chance for whānau
to express their views. In terms of practicality, the measure is brief, and relatively easy to use
and score. Some difficulty was experienced in identifying suitable whānau members to
complete the measure, but it was otherwise seen as quite practical, although, as mentioned
above, less so with severely impaired Tāngata Whai Ora or those under 15 years of age.
Information on cost, availability of training and documentation was not found.
Hua Oranga is not yet widely used. It is currently being piloted by some services in New
Zealand but is not used overseas. It has been developed by researchers, with input and feedback
from Tāngata Whai Ora, whānau and clinicians. Lastly, the lack of psychometric data means
this measure will require extensive testing. Hua Oranga is designed to be completed at five
points: assessment, in-patient treatment, outpatient treatment, community care and community
support (or discharge). It does not specify a time period in which participants can consider
answering questions, but simply asks about the result of the intervention. Presumably the time
period to be considered is that since the person last completed the form. The effects of setting
and type of mental illness on scoring have not yet been investigated.
Lotofale Evaluation Measure
(Information collected from Nonu-Reid et al., 2000.)
The Lotofale Evaluation Measure is a comprehensive outcome measure aimed at evaluating the
impact of a specific, Pacific mental health service on Pacific clients. It is designed to be
completed by consumers and their families, although it is not clear whether this is by self-report
or interview. It looks at the domains of: satisfaction with services (including satisfaction with
cultural aspects and relevance of services), cultural practices, identity and connection with the
Pacific culture, spiritual practices and well-being, physical health, interpersonal skills, day-to-day functioning, mental illness symptoms, positive mental aspects, culture and gender issues,
and sexual issues. Domains not covered are: coping with and recovering from mental illness,
and empowerment.
This measure is quite long, comprising 60 items, all of which can be answered either yes/no or in more detail. It is not clear how the measure is scored, whether it is scored at all, or whether it is
simply used as a source of feedback. Information on the exact time to administer was not
found, but it would probably take between 30 and 60 minutes based on similar measures, and
depending on whether respondents used yes/no answers or went into more detail. No
information was found on whom the tool had been tested with.
With regard to face validity, the items in the scale appear to measure what they set out to
measure, that is, satisfaction with the service and change in mental health and functioning as a
result of it. Content validity is high, with items providing a very thorough coverage of each
domain. No information was found on construct validity, criterion validity, inter-rater
reliability, test-retest reliability, or sensitivity to change over time. Feasibility appears high,
although no research was found on the views of consumers, their families and service providers
on how acceptable or practical the measure is. In terms of general acceptability the measure is
relatively long but the wording appears reasonably clear and easy to understand. With regard to
applicability, the Lotofale measure covers many domains of relevance to consumers, and misses
out very few. It appears relevant and appropriate to Pacific consumers, with its focus on family,
spirituality, and culture. No data were found on cost, availability of the tool, ease of
implementation, scoring, interpretation, or availability and length of training.
This measure is currently in use with the Lotofale service and does not appear to be in use
elsewhere in New Zealand or overseas. It is not clear what input consumers, their families and
clinicians had into the development of the tool. The research does not stipulate how often the
measure is meant to be used, nor what time period it covers, other than since the last time it was
administered. Given that most questions ask how the respondent found the intervention, it may
be intended for once-only administration at the end of treatment. No information was found on
the effects of either setting or type of mental illness on scores.
Medical Outcomes Study 36 item Short-Form Scale (SF-36)
(Information collected from Andrews et al., 1994; Sansoni, 1995; Stedman et al., 1997.)
The Medical Outcomes Study Short-Form Scale (usually known as SF-36) is a comprehensive
outcome measure covering a range of aspects of recovery from illness. It is a flexible tool that
can be used by consumers, family members or caregivers, and service providers, in a pen and
paper form using self-report, in an interview, by phone, or in person. The SF-36 is
predominantly focused on physical health, and includes the domains of: physical health, role
limitations due to physical and emotional problems, bodily pain, social functioning,
psychological well-being and distress, vitality, and fatigue. Important domains that are not
included in this measure are: satisfaction with services (including cultural relevance of
services), satisfaction with areas of one’s life, relationships, coping with and recovering from
illness, suicidal thoughts and actions, spiritual strength, cultural identity/connection, hope, basic
needs/resources, empowerment, and some mental illness symptoms.
The scale has 36 items across 11 sections, using eight different types of scale for answering.
These are largely Likert scales, with five-point scales ranging from ‘excellent’ to ‘poor’, ‘much
better’ to ‘much worse’ and ‘definitely true’ to ‘definitely false’, a six-point scale from ‘all of
the time’ to ‘none of the time’, a three-point scale from ‘a lot’ to ‘not at all’, and some yes/no
answers. Data on the time it normally takes to complete the tool were not found, but based on
similar scales it is likely to take 15 to 25 minutes. The SF-36 has been tested with consumers
who have experience of affective disorders, mainly in community settings. It has also been
tested with people with experience of anxiety, depression and substance abuse, and a small trial
has been carried out with people with experience of psychosis.
Consumers view this measure as largely irrelevant to measuring mental health outcomes due to
its emphasis on ‘general health’, which seems to be interpreted as applying to physical health
only. Concerns have also been expressed about the language, the range of responses that are
available to choose from, and the rating period of one month, which is seen as possibly too long
to remember clearly.
Because of the focus on ‘general health’ issues, face validity seems low with regard to mental
health outcomes. Factor analysis indicates the SF-36 is composed largely of two factors –
physical and mental health, with physical health accounting for 55% of the variance and mental
health only 15% of the variance. This suggests the scale is not high in construct validity with
regard to mental health and is perhaps more suited to evaluating physical health outcomes.
Content validity appears good for the construct of physical health, but poor for that of mental
health. Criterion validity appears excellent, with correlations between related dimensions of the
SF-36 and the Medical Functioning and Well-Being Profile above .90. Convergent validity
appears acceptable, with correlations reported between the SF-36 and other aspects of health
and with clinical information, to the expected degree. There are also significant correlations
with relevant scales of provider measures (Health of the Nation Outcome Scales, Life Skills
Profile and the Role Functioning Scale).
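For readers unfamiliar with how such percentage-of-variance-explained figures arise, the sketch below shows a generic calculation on synthetic two-factor item data, using principal components analysis as a stand-in; it is not the factor analysis actually reported for the SF-36.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    physical = rng.normal(size=(200, 1))   # latent "physical health" factor
    mental = rng.normal(size=(200, 1))     # latent "mental health" factor
    # 18 noisy items loading on each latent factor (36 items in total)
    items = np.hstack([physical + 0.3 * rng.normal(size=(200, 18)),
                       mental + 0.3 * rng.normal(size=(200, 18))])
    pca = PCA(n_components=2).fit(items)
    print(pca.explained_variance_ratio_)   # proportion of variance per component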
No information was found on inter-rater reliability but test-retest reliability was good to
excellent, with coefficients ranging from .59 to .92. Internal consistency was also good to
excellent, with coefficients from .77 to .91. This measure showed sensitivity to change over
time, with significant differences in scores over a 6-month period. In terms of feasibility,
consumers had concerns with remembering back over a 1-month period, and thought some of
the language was old-fashioned and inappropriate. Consumers also found the changes
throughout the measure from one response format to another confusing. As with BASIS-32 and
the Mental Health Inventory, consumers had concerns that familiarity with the measure would
reduce accuracy of evaluations. The practicality of the measure is uncertain, as no information
was found on cost, although minimal training is required, which makes it easier to use. The tool
is also brief.
The SF-36 is used in some parts of New Zealand, as well as in the US and Australia. It was
developed by researchers based on items in other consumer scales, and no mention was found of
consumers, family members or clinicians being involved in its development. The measure does
not include items on cultural matters, and no testing across different cultures appears to have
been carried out.
The SF-36 was administered at 6-monthly intervals in the research reviewed, and asks the
respondent to consider their experiences over the past month when answering questions. There
appear to be effects of the setting in which the measure is completed, with scores from private
psychiatry practices suggesting higher levels of mental health problems than for those from
public psychiatry settings. Differences in scores are also apparent for those with experience of
affective disorders as opposed to those with experience of schizophrenia, in both the mental
health component and some physical health components.
Mental Health Inventory (MHI)
(Information collected from Andrews et al., 1994; Stedman et al., 1997.)
The Mental Health Inventory is a pen and paper measure that can be filled in by consumers or
used in an interview format. It includes the domains of: mental illness symptoms, loss of
behavioural/emotional control, general positive feelings, emotional ties, life satisfaction,
hopelessness and suicidal thoughts. Quite a number of the domains consumers have indicated
as important to the concept of recovery are missing from the measure, including: quality of life
across multiple life areas, day-to-day functioning, physical health/risks, relationships (although
this is covered to a certain extent by the domain of emotional ties), coping with and recovering
from illness, satisfaction with services (including cultural relevance of services), spiritual
strength, suicidal actions, cultural identity/connection, empowerment, and basic
needs/resources.
The MHI has 38 items, most scored on a six-point Likert scale, ranging from ‘not at all’ to ‘yes,
definitely’, ‘always’ to ‘never’, ‘extremely’ to ‘not at all’ or ‘all the time’ to ‘none of the time’.
Although the exact time it takes to complete the tool was not found in documentation, it would
probably take 15 to 25 minutes based on measures of similar length and design. The MHI has
been tested with people with HIV, cancer, some elderly people, women on low fat diets and
primary mental health care consumers. When asked, consumers have indicated they see the
measure as useful, with 81% seeing most or all of the questions as relevant to them, more so
than the questions in BASIS-32 or the Medical Outcomes Study SF-36.
Reasonably full information was available on validity, reliability, sensitivity to change and
feasibility. Face validity appears good, with items appearing to reflect the domains under consideration, and Andrews et al. (1994) reported that construct validity is adequate. Criterion validity seems sound, with relatively strong performance
in predicting global change ratings compared to BASIS-32 and SF-36. Convergent validity
appears reasonable, with significant correlations with relevant scales of provider measures
(Health of the Nation Outcome Scales, Life Skills Profile and the Role Functioning Scale).
No data were found on inter-rater reliability, but test-retest reliability was excellent, with
coefficients of above .90. Internal consistency was also very good, ranging from .8 to .97.
Scores on the MHI showed significant differences over a 6-month period, indicating good
sensitivity to consumer change over time.
In terms of feasibility, the 1-month time period consumers needed to consider in order to answer
the questions presented some difficulties in accurate remembering. Some of the language in the
MHI was considered old-fashioned or inappropriate. These concerns compromise acceptability
of the tool, although the clear response format was appreciated. In terms of applicability,
consumers thought the MHI had more relevant items than BASIS-32 and SF-36 and was more useful than BASIS-32, although it was still reported to be missing many relevant domains. The
items included in MHI were all considered important by consumers. In terms of practicality, no
data on costs were found, no training is required, and the tool is brief.
The Mental Health Inventory is used in some places in New Zealand, and is also used in
Australia and the US. It was developed by researchers for use with a general population, and
there seems to have been no involvement of consumers, their families, or clinicians in its
development. The MHI has no items on cultural matters and no mention of testing across
cultures was found in the research.
The MHI was administered at 6-month intervals in the research reviewed, and investigates the
most recent month in the consumer’s life. Differences in scores have been observed for
different settings for all scores other than emotional ties. Scores from private psychiatry
practices suggested higher levels of mental health problems for those people than for those in
public psychiatry settings. In terms of effects on scores of types of mental illness, significant
differences were found between those with experience of affective disorders and schizophrenia;
for those with experience of anxiety disorders, differences were smaller.
Mental Health Recovery Measure
(Information collected from Ralph, Kidder & Phillips, 2000; Young, Ensing & Bullock,
1999.)
A pen and paper tool designed to be filled in by consumers, the Mental Health Recovery
Measure is a comprehensive outcome measure based on the concept of recovery. It includes the
domains of overcoming “stuckness”, self-empowerment, learning and self-redefinition, basic
functioning, overall well-being, reaching new potentials, hope, religious faith, self-care, and
coping with and recovering from illness. Of those domains identified by consumers as
important in measuring recovery quite a few are missing. These include: satisfaction with
services (including cultural relevance of services), mental illness symptoms, relationships, day-to-day functioning, cultural identity/connection, quality of life, physical health, and basic
needs/resources.
The measure has 41 statements that consumers respond to on a five-point Likert scale, from
‘strongly disagree’ to ‘strongly agree’. While no information was found on how long the tool
takes to complete, based on measures of similar length it is likely to take around 15 to 25
minutes. It has been tested with a group of 24 people in a recovery-based treatment programme.
The measure was developed based on semi-structured interviews with mental health consumers
but no information was found on consumers’ views on the measure.
Only limited information was found on validity, reliability, consistency, sensitivity to change
over time, and feasibility, suggesting that not much testing has been carried out with this
measure. Face validity is reasonably low, as the measure does not fully reflect the concept of
recovery as defined by consumers because of the number of domains that are missing. Content
validity is better, in that the items for the domains that are in the measure appear to cover each
domain adequately. Convergent validity was good to very good with the Community Living
Skills Scale and good with the Empowerment Scale, suggesting the tool was measuring the
same underlying concepts as these other measures. Internal consistency between the scales is
good to very good, and overall internal consistency is excellent. No information was found on
sensitivity to change. With regard to feasibility, the measure addresses some issues of
importance to consumers. However, applicability is limited by the missing domains. No
information was found on the practicality of the measure, but the brevity, format and short
administration time mean that this measure would be reasonably practical to use. No
information was found on the effect of setting (e.g. institution or community) on scores, or the
effect of type of mental illness on scores.
The Mental Health Recovery Measure is not used in New Zealand, and no information was
found on its use overseas. It was developed from interviews with consumers about their
recovery experiences, using grounded theory analysis. However, it was not clear whether
consumers had any other involvement in developing the measure, or if their families were
involved at any point. It was also not clear whether clinicians were involved in developing the
tool. The measure does not include items on cultural matters and no information could be found
on testing of the tool across cultures. The measure appears more suited for use in predicting
future recovery rather than as a measure of outcome.
Multnomah Community Ability Scale
(Information collected from O'Malia, McFarland, Barker & Barron, 2002.)
Developed in the US, this is a comprehensive outcome measure, apparently using pen and
paper. It is filled in by the consumer, although there is also a parent form with equivalent items
for the service provider to fill in. This scale assesses four areas: interference in functioning,
adjustment to living in the community, social competence, and behavioural problems. Missing
from the measure, in terms of consumer perceptions on domains relevant to recovery, are items
on: satisfaction with services (including cultural relevance of services), mental illness
symptoms, cultural identity/connection, physical health, quality of life, coping with and
recovering from illness, hope, empowerment, and spiritual strength.
The Multnomah Community Ability Scale has 17 items, and is scored using a Likert scale from
‘almost never’ to ‘almost always’ for all answers. While information on the exact time it takes
to complete the measure was not found, it probably takes around 10 to 15 minutes based on
measures of a similar length and design. The scale has been tested with a number of different
groups of people, including consumers from community mental health drop-in centres, four case
management programmes, and a peer counselling programme in urban areas. In addition, the
measure has been tested with a random sample of Medicaid clients with experience of persistent
and severe mental illness. Consumer views on the measure have been sought. In general,
consumers saw it as acceptable, reasonably easy to complete, covering relevant areas, using
easy to understand language, and potentially useful for identifying client needs and tracking
consumer progress in recovery. However, consumers also wished items on housing and
employment were included in the scale.
A limited amount of information was found on validity, reliability, consistency and feasibility.
Face validity could not be judged as the actual scale was not sighted, and no data were found on construct or content validity. Convergent validity appeared to be
reasonable. Reliability was low to moderate for consistency in scores across different raters,
and excellent for consistency in scores from one time to another. No data were found on
internal consistency or sensitivity to change over time. In terms of feasibility, the measure was
found acceptable for ease of use and understanding by three groups of consumers. Consumers
thought the measure was brief, relevant, acceptable, easy to understand and easy to use without
assistance. In terms of applicability, some dimensions of importance to consumers are missing.
The Multnomah Community Ability Scale is not currently used in New Zealand, but is used in
the US and Australia, and may well be used in other countries. It was initially developed by
researchers and staff from a community mental health programme. Further revision of the
measure, into a self-report scale, took place after feedback from consumers living in a group
home. On the basis of their feedback any items that were unclear, inappropriate or difficult to
understand were rewritten. Four peer counsellors (who presumably had experience of mental
illness themselves) reviewed the draft self-report tool. Further feedback on difficulties in
completing the measure was elicited from attendees at a community mental health drop-in
centre. Thus, while consumers were not involved in conceptualising or originally developing
the tool, they were heavily involved in making it suitable for use as a self-report tool. It did not
appear that families were involved in developing the measure at any stage.
The measure does not include any items on cultural matters and does not appear to have been
tested across cultures. It is designed to be used with consumers living in the community. No
information was found on what time period consumers were asked to consider when completing
the tool and how often it could or should be administered. However, there had been some
investigation of the effect of settings on scores, and higher levels of agreement between staff on
scores were found in urban areas than rural areas. The effects of type of mental illness on
scores did not appear to have been tested.
Ohio Mental Health Consumer Outcomes System
(Information collected from Beale, n.d.; Ohio Department of Mental Health, 2000; Ohio
Department of Mental Health, n.d.)
Developed in the US, this comprehensive outcome system has forms for adults,
children/adolescents, families/caregivers and service providers.
It was developed by
amalgamating a variety of other scales that were judged as covering the concept of recovery
from mental illness when taken together. It has two main consumer forms, Form A and Form B
(which is a shorter version of Form A). This is a pen and paper measure although it could also
be used as a structured interview. Domains covered by the forms include: clinical status/symptom distress that interferes with daily living; quality of life, including empowerment and finances; functional status, or how the person is doing in the community, including school, work and social relationships; safety and health, including physical wellness and freedom from psychological and physical harm to oneself and others; and demographic data (age, gender, ethnicity, etc.). Form A is missing items on: satisfaction with services (including cultural
relevance of services), spiritual strength, cultural identity/connection, day-to-day functioning,
coping with and recovering from illness, an item on partner/spouse relationships, items on some
mental illness symptoms, suicidal thoughts and actions, and items on sleep and diet in the
physical health/risks scale. Form B is missing the same domains as Form A, as well as the 27-item section on self-esteem and empowerment that is included in Form A.
Form A has 67 items altogether in five parts; Form B has 39 items in four parts. The parts use
four- and five-point Likert scales, with a range of wording, including from ‘terrible’ to ‘very
pleased’ or ‘does not apply’, ‘never’ to ‘always’, ‘not at all’ to ‘extremely’, and ‘strongly agree’
to ‘strongly disagree’. Form A takes an average of 32 minutes to complete; Form B presumably
takes less time. The measure has been tested on people hospitalised with mental illness, people
in self-help groups and tertiary students. Consumers and family members generally found the
system useful, easy to understand and felt comfortable answering questions. The items were
seen as having a low level of offensiveness. Of comments received from consumers and
families, 35% were negative about the system and 21% were positive.
In terms of face validity, the system appears to reflect and measure some key aspects of
recovery as defined by consumers, although some of these are missing. Data were not found on
construct validity, criterion validity, inter-rater reliability, test-retest reliability, and sensitivity to
change over time. Convergent validity overall appeared reasonably low, but internal
consistency was excellent. In terms of feasibility, acceptability was high with consumers and
families, although data were not given for service providers. Applicability is threatened by some
items being seen by consumers as not culturally appropriate, and the number of missing
domains. Practicality is reasonably high in terms of training resources and manuals being
available, and relatively short forms, although some groups found Form A too long.
Information on costs of training and use of the system was not found.
According to the survey carried out as part of this paper, the Ohio forms are not currently used
in New Zealand. The system is widely used in the state of Ohio, and versions exist in Spanish,
Chinese, Russian, Japanese, and Korean although it is not clear whether the system is actually
used in these countries. The Ohio Outcomes System was developed by a group of consumers,
family members, service providers, health board members, researchers, evaluators and staff
from the Ohio Departments of Mental Health and Alcohol and Drug Services. Neither Form A
nor Form B includes items on cultural matters. However, testing of the system has been carried out
with four cultures other than mainstream – Amish, rural/Appalachian, African-American and
Hispanic. While these groups accepted the usefulness of the system, some items were found to
be culturally inappropriate, particularly those on empowerment.
For adults, the forms are used at intake, 6 months, 12 months and then annually for severely
disabled consumers. For those not as severely disabled, the forms are used at intake and close
to or at termination of services. For children and adolescents, the forms are used at intake, 6
months, 12 months, then annually until termination of services. Consumers filling in the forms
are asked to consider the past 6 months, 7 days or the present, depending on which part of the
form they are completing. In terms of effect of setting, the empowerment section differentiated
between people hospitalised for mental illness, people in self-help programmes and tertiary
students. No information was found on differences in scores by type of mental illness.
Outcome Questionnaire (OQ)
(Information collected from Ensfield, Steffen, Borkin & Schafer in Ralph et al., 2000.)
The Outcome Questionnaire is a 45-item self-report measure filled in by consumers. Domains
covered by the OQ include: psychological well-being and distress (mainly relating to the experience of distress and anxiety); drinking and drug use; relationships with family, friends, spouse and others; social and work performance; and suicidal thoughts. However, the OQ omits
quite a number of areas identified by consumers as important to recovery, including: quality of
life, physical health and risks, some mental illness symptoms (particularly relating to more
severe illnesses), coping with and recovering from mental illness, satisfaction with services
(including cultural relevance of services), spiritual strength, cultural identity/connection,
empowerment, basic needs, and suicidal actions.
This measure directs individuals to consider their experiences during the previous week and
answer using a five-point Likert scale from ‘never’ to ‘almost always’, and takes about 10
minutes to complete. It has been well-tested with people in community mental health and
private outpatient clinics, those on Employee Assistance schemes, and university students
without any symptoms of mental illness, across both genders and a wide age group. Despite
this wide testing, no information was found on what consumers thought of the measure.
In terms of face validity, the OQ appears to reflect some key aspects of recovery as defined by
consumers, but many are missing. Construct validity is poor, with some items not measuring
what they are intended to measure. The content of the questionnaire is adequate, although there
is more focus on symptoms of depression than psychosis. Criterion validity is good to
excellent, as is the reliability of the measure over two separate administrations. Convergent
validity is moderate to good, and was established by comparing the relevant subscales to a
number of other measures. No information was found on the level of agreement in scores when
the measure was scored by different raters. The OQ quite accurately reflects changes in
outcomes over time, in the predicted direction.
For feasibility, no data were found on how acceptable consumers found the OQ. In terms of
practicality, the OQ is brief, easy to interpret, information on it is widely available, and training
packages in it have been developed. It also uses simple language and straightforward
instructions and answer sheets, as well as being available for a nominal fee, all of which make it
highly practical.
This measure does not appear to be used in New Zealand, but is widely used in the US and may
well be used in other countries as well. It was originally developed by researchers, clinicians
and administrators from two large managed health care systems, but no consumer or
family/whānau involvement in its development was noted in any of the documentation. It has
no items on cultural matters, but has been tested with two different cultures – white Americans
and African Americans. Differences were found in scoring trends on some items between the
two cultures.
The OQ is designed to be used as a screening instrument when consumers are first seen by a
service, and then to be used on multiple occasions – up to weekly – including at the termination
of treatment. Consumers are asked to look back over the past week when answering questions.
No information on the effect of setting on scores was found. Significant differences have been
found in scores for people assumed to have differing levels of psychological disturbance,
suggesting the OQ is adequately able to discriminate between these groups.
Personal Vision of Recovery Questionnaire (PVRQ)
(Information collected from Ensfield, Steffen, Borkin & Schafer in Ralph et al., 2000.)
This is a comprehensive outcome measure for completion by consumers, and appears to be a
pen and paper tool. It includes the domains of: support, personal challenges, professional
assistance, action and help seeking, affirmation, hope, spirituality, medication, relationships,
and empowerment. It does not cover the domains of: satisfaction with services (including
cultural relevance of services), mental illness symptoms, suicidal thoughts and actions, day-to-day functioning, cultural identity/connection, and basic needs/resources; it also has only limited quality
of life items. It has 24 statements and uses a five-point Likert scale to answer, from ‘strongly
agree’ to ‘strongly disagree’. To score, items are summed across each sub-scale and weighted
equally, and no items are reverse scored. The measure probably takes around 15 to 20 minutes
to complete, although no mention of this was found in the literature. It has been tested with 251
mental health consumers in the US. Consumer views on the measure were not found.
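The scoring rule described above (items summed within each sub-scale, sub-scales weighted equally, no reverse-scored items) is straightforward to express. The sketch below is illustrative only; the sub-scale names and item groupings shown are placeholders rather than the published key.

    # Hypothetical item-to-sub-scale key (the published key was not sighted).
    SUBSCALES = {
        "support": [1, 2, 3, 4, 5],
        "hope": [6, 7, 8, 9],
        # ...remaining sub-scales omitted for brevity
    }

    def subscale_scores(responses):
        """responses: dict of item number -> code 1 ('strongly disagree') to 5 ('strongly agree');
        items are summed within each sub-scale, with no reverse scoring (coding assumed)."""
        return {name: sum(responses[i] for i in items)
                for name, items in SUBSCALES.items()}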
In terms of face validity, the domains included in the measure appear to reflect some key
aspects of recovery as defined by consumers although many are missing. Factor analysis
identified five factors, with alphas from a moderate .57 to a very good .70, so construct validity
is good overall. Content validity appears adequate for the construct of support, but low or
ambiguous for other domains. Convergent validity is low to moderate and no data were found
on criterion validity, test-retest reliability or inter-rater reliability. Overall internal consistency
is good, and no information was found on sensitivity to change over time, acceptability or
practicality.
The PVRQ is used in some degree in New Zealand, but no information was found on its use
elsewhere. It was developed by a team of professional and consumer researchers, with no
apparent family involvement or clinician involvement. No items on cultural matters were found
and the measure does not appear to have been tested across cultures. The questionnaire
measures current attitudes and experiences, which is a strength as it means less error from
remembering back over longer periods. No information was found on effects of settings or type
of mental illness.
Recovery Assessment Scale (RAS)
(Information collected from Corrigan et al., 1999; Corrigan, Salzer, Ralph, Sangster &
Keck, n.d.; Ralph et al., 2000.)
The Recovery Assessment Scale, developed in the US, is based on the concept of recovery from
mental illness. It is completed by consumers, usually by way of interview format, although it is
not clear who carries out the interview. It covers the four areas of self-esteem, empowerment,
social support, and quality of life. It was developed by asking four consumers to write about
their experiences of recovery from mental illness, then analyzing their accounts for common
themes (Corrigan et al., 1999). These themes were formed into a 39-item scale, which was
reviewed by 12 mental health consumers. A revised 41-item scale was developed, using a five-point Likert scale from ‘strongly agree’ to ‘strongly disagree’, with statements about recovery.
It takes about 20 minutes to complete in an interview format. No information was found on the
intervals at which the measure is used to assess recovery. It does not appear family or service
providers were involved in the development or testing of the measure. No data were found on
consumers’ perceptions of the RAS.
Factor analysis of the RAS reveals five significant factors that explained most of the differences
in scores (variance): personal confidence and hope, willingness to ask for help, goal and success
orientation, reliance on others, and symptom coping.
When compared with a composite list of the domains consumers want to see in an outcome
measure, the RAS is seen to be missing items on: satisfaction with services (including cultural
relevance of services), some symptoms (particularly suicidal tendencies and psychotic
symptoms), day-to-day functioning, cultural identity/connection, spiritual strength, and basic
needs/resources.
The measure has been tested in a setting of partial hospitalisation (which may mean day
hospital) with 35 consumers (Corrigan et al., 1999). In terms of face validity, it appears to
reflect some key concepts associated with recovery, such as hope, empowerment, social support
and coping with and recovering from illness. Concurrent validity tests looked at correlations
between the RAS and 4 other scales: the Empowerment Scale, Quality of Life Interview, Social
Support Questionnaire, and the Rosenberg Self-Esteem Scale. The largest of the significant
correlations was with the self-orientation sub-scale of the Empowerment Scale, which looks at
how much a person believes that they are competent and have worth despite social stigma (-.71).
Therefore the more empowered people felt, the more likely they were to score highly on
recovery. This was followed by the subjective section of the Quality of Life Interview, which
looks at independent living (.62). Self-esteem was correlated with recovery at .55, having a
larger social network at .48 (but not satisfaction with social network, which was only .14) and
psychiatric symptoms at -.44. That is, the fewer symptoms a person had, the more likely they were
to score highly on recovery, although this is only a modest relationship. These tests established
modest to good convergent validity with measures of similar constructs, suggesting the RAS
was measuring what it sets out to measure.
No data were found on the RAS in relation to content validity, criterion validity or inter-rater reliability. Tests administered 14 days apart showed an excellent level of agreement in
scores, with .88 test-retest reliability. The internal consistency of items in the measure was very
high, with Cronbach’s alpha standing at .93. The ability of the RAS to reflect changes in
recovery over time does not appear to have been tested.
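As a generic illustration of how a test-retest coefficient of this kind is obtained, the sketch below correlates total scores from two administrations of the same measure; the data are synthetic and the calculation is not specific to the RAS.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(2)
    time1 = rng.normal(100, 15, size=35)        # hypothetical totals at first administration
    time2 = time1 + rng.normal(0, 6, size=35)   # second administration, 14 days later
    r, p = pearsonr(time1, time2)
    print(f"test-retest r = {r:.2f}")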
According to the survey of mental health service providers undertaken through the present
project, the RAS is not currently used as a measure in New Zealand. The RAS does not include
any items on cultural matters and does not appear to have been tested across cultures. It appears
the effect of setting on scores has not been tested, nor the impact of different types of illness.
Single Factor Outcome Measures
Lehman Quality of Life Interview
(Information collected from Lehman, 1999; Lehman, Postrado & Rachuba, 1993.)
The Lehman Quality of Life Interview measures only how satisfied consumers are with their
lives, rather than focusing on a comprehensive range of outcomes. It is completed by trained
lay interviewers who ask consumers a series of questions about their objective life situation –
what they are doing and experiencing – as well as on their subjective feelings about their life.
The interview covers a wide range of aspects of quality of life, including living situation, social
relations, daily activities, finances, safety and legal problems, work, school, and health, with
optional items on satisfaction with religion and neighbourhood. Missing areas from those
identified by consumers as important to recovery include: satisfaction with services (including
cultural relevance of services), mental illness symptoms, cultural identity/connection, coping
with and recovering from illness, hope, empowerment, spiritual strength, and basic
needs/resources.
The measure has 143 items in total, and uses a fixed-interval scale to mark how satisfied people
are with each area. The Lehman Quality of Life Interview has been tested with patients
undergoing hemodialysis and also with people with experience of severe mental illness, but no
information could be found on the views of these people on the Interview. In terms of validity,
the measure has not been sighted, so face and content validity cannot be commented on, but
construct and predictive validity have been found to be good. There were no data on inter-rater
reliability, but test-retest reliability was reported to be reasonable. Internal consistency levels
were good, but no data were found on the sensitivity of the scale to change over time.
In terms of feasibility, there were no data on how acceptable the scale was to individuals, but it
is clear it covers at least some of the dimensions consumers consider important. However, it also misses many areas, so its applicability is good within the area of quality of life but limited as a measure of general outcomes. No data were found on how practical the Interview is to use.
The Lehman Quality of Life scale is not currently used in New Zealand, according to the survey
undertaken for the purposes of the present research. It is presumably used overseas, but no
information was found on its current usage in other countries. The Interview appears to have
been developed by clinicians and academics with no involvement from consumers or their
families. It has no items on cultural matters, and no data were found that indicated testing across
cultures.
It was not stipulated in the research how often the measure should be used. It can differentiate
between consumers in hospitals and those in supervised residential programmes in the
community. It also discriminates between people with severe mental illness and the general
population, although no information was found on outcomes for different types of mental
illness. The measure shows low to moderate agreement with the relevant scales of the
Heinrichs-Carpenter Quality of Life Scale, suggesting it is moderately successful at measuring
the same underlying concepts other quality of life scales measure.
Quality of Life Index
(Information collected from Atkinson et al., 1997.)
The Quality of Life Index is a single-factor measure that focuses on how satisfied consumers are
with various aspects of their day-to-day life. It is not clear from the information available
whether it is a self-report tool or administered by interview. The Index looks at four main areas
of life quality: health and functioning, socioeconomic (finances, etc.), psychological and
spiritual wellness, and quality of family life. Missing areas from those identified by consumers
as important to recovery include: satisfaction with services (including cultural relevance of
services), quality of relationships other than with family, and cultural identity/connections.
Hope and empowerment, and coping with and recovering from illness, may be covered under
the health and functioning sections, or the psychological wellness items.
The measure was not sighted, and the documentation found did not mention the number of items it contains.
Part 1 of the measure uses a six-point Likert scale from ‘very satisfied’ to ‘very dissatisfied’,
and scores on Part 1 are weighted by importance. It is not clear how long the Index takes to
complete, although it appears relatively brief. This measure has been tested on samples of
patients receiving hemodialysis, who were chronically ill and largely on lower incomes, and
also on people with experience of chronic mental illness. No information was found on
consumer views of the measure.
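The documentation reviewed says only that Part 1 satisfaction ratings are weighted by importance. One common convention, assumed here purely for illustration because the Index itself was not sighted, is to multiply each satisfaction rating by a corresponding importance rating before averaging, as sketched below.

    def weighted_quality_of_life(satisfaction, importance):
        """satisfaction and importance: parallel lists of 1-6 ratings per item (assumed coding);
        each satisfaction rating is multiplied by its importance rating, then averaged."""
        weighted = [s * w for s, w in zip(satisfaction, importance)]
        return sum(weighted) / len(weighted)

    print(weighted_quality_of_life([5, 2, 6], [6, 3, 4]))   # three hypothetical items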
The measure appears to reflect the concept of quality of life fairly well, although some
important areas are omitted, such as relationships outside the family. Factor analysis shows
support for construct validity – a four-factor solution accounted for 91% of the differences in
scores (variance). This suggests the overall construct of a four-faceted model of quality of life
is valid. Given the measure was not sighted, no comment can be made on how well individual
items covered the overall concept of quality of life (content validity). Atkinson et al. (1997)
stated correlations between the Index and other related instruments provided good evidence of
convergent validity, although no figures were given. No data were found on criterion validity,
inter-rater reliability, test-retest reliability, sensitivity to change over time, or the acceptability
and practicality aspects of feasibility. Internal consistency is good to excellent across the four
subscales, and excellent overall (.93). In terms of applicability, the measure appears to address
most issues noted as important to consumers for quality of life.
The Index does not appear to be used in New Zealand, but is used in the US. No information
was found on who was involved in developing the Quality of Life Index. The measure does not
appear to contain any items on cultural matters, and no mention was found of testing of the
Index across cultures. Research on the Index suggests differences in scores may be due in part
to the nature of diagnosis rather than actual quality of life. The effect of settings on scores was
not covered in the research available.
Satisfaction Index – Mental Health
(Information collected from Nabati, Shea, McBride, Gavin & Bauer, 1998.)
This is a single-factor measure looking at consumer satisfaction with services. The Satisfaction
Index is completed by the consumer, but it was not clear whether it was through self-report
using a pen and paper measure, or through a guided interview. Domains included in the
measure are: satisfaction with services, areas of dissatisfaction, satisfaction with time with providers, information on services, responsiveness of providers, confidence in providers, and the effect of services on symptoms. All domains, other than satisfaction with services and
changes in symptoms, are missing from those identified by consumers as important to recovery.
The Index is brief, containing only 12 items with a six-point Likert scale running from ‘agree
strongly’ to ‘disagree strongly’. Items alternate in polarity with high scores on odd numbered
items denoting high satisfaction, and high scores on even numbered items denoting low
satisfaction with services. Judging from measures of similar length the Index would probably
take between 10 and 15 minutes to complete, although no written information on this was
found. The measure has been tested with war veterans with varying psychiatric diagnoses who
were chosen at random from the waiting room of a mental health clinic. No data were found on
consumer views on the measure.
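Because the polarity alternates, even-numbered items need to be reverse-scored before a total satisfaction score can be calculated. The sketch below illustrates this in general terms; the 1 to 6 coding and the direction conventions are assumptions rather than the published scoring instructions.

    def satisfaction_total(responses):
        """responses: dict of item number (1-12) -> code 1 ('disagree strongly') to 6 ('agree strongly').
        Odd items keep their code; even items are reversed (7 - code) so that a higher
        total always indicates greater satisfaction. The coding direction is assumed."""
        return sum(code if item % 2 == 1 else 7 - code
                   for item, code in responses.items())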
Face validity appeared good, as did construct validity, with two main components to the Index
detected by analysis – satisfaction with mental health services, and direction of scoring. In
terms of content validity, items on some aspects of satisfaction with services were missing,
specifically the cultural relevance of services, satisfaction with the specific type of treatment,
and satisfaction with the therapeutic alliance between consumer and provider. Other than this,
the Index content appeared appropriate to the construct of service satisfaction. No data were
found on criterion validity, convergent or divergent validity. No information was found on
inter-rater reliability, but there was good evidence of consistency across occasions of
completing the measure. Internal consistency between different parts of the measure was high,
and the measure showed sensitivity to changes in scores over time. No data were found on
acceptability. Applicability appeared to be good with most of the issues previously identified
by consumers as relevant to service satisfaction included, apart from those already mentioned.
In terms of practicality, the measure is short, and easy to score and interpret. No data on costs
or the availability of training materials were found. In addition, consumer views on ease of use,
and clarity and appropriateness of language and questions were not found.
The Satisfaction Index is used to some extent in the US, but current usage in New Zealand is not
known. The Index was adapted from a general tool, by academics and clinicians, for use in a
mental health setting. No mention of involvement of consumers or their families in the
development of the Index was found. The Index does not contain items on the cultural
relevance of services, and no information was found on testing of the Index across cultures.
Intervals of measurement were not stated, although it appears from the test-retest reliability data
that it can be used throughout treatment for gauging service satisfaction. Information on the
effects of settings on scores was not found. Scores did not appear to be affected by numbers of
psychiatric hospitalisations or suicidality levels, but were higher for people with a diagnosis of
bipolar disorder I or II, or a history of psychosis, than for those without.
Verona Service Satisfaction Scale (VSSS)
(Information collected from Ruggeri & Dall'Agnola, 1993; Ruggeri, Dall'Agnola, Agostini
& Bisoffi, 1994).
The Verona Service Satisfaction Scale (VSSS) measures client satisfaction with the services
they receive. It is a pen and paper measure to be completed by consumers and their families,
but can be used as a structured interview where literacy is low. A shorter version of the
measure is available to be completed by service providers. The VSSS covers general
satisfaction with services, skills and behaviour of mental health professionals, information
provided about services, access to services, effectiveness of services, types of intervention, and
the involvement of relatives in treatment. Given the measure only covers the domain of service
satisfaction, it is missing all other domains identified by consumers as important to recovery. In
terms of the domain it does cover, it is missing items on satisfaction with cultural relevance of
services.
The measure has a total of 79 items across seven dimensions. Sections 1 and 3 use a five-point
Likert scale from ‘terrible’ to ‘excellent’, as well as some yes/no answers. Section 2 requires
spontaneous answers to open-ended questions, such as those starting “the thing I liked the
most/least was” or “the thing I would change is”. No data were found on exactly how long it
takes to complete the measure, but based on scales of similar length and design it would
probably take 30 to 45 minutes, depending on the literacy of the person, whether it was being
administered as a structured interview, and how many comments the respondent wanted to note.
The VSSS has been tested with 75 people from South Verona in Italy who had had more than
18 contacts with mental health services in the past 3 years, and 76 of their family members. Of
the consumers, 53% had experience of psychosis, and the remainder had experienced neuroses,
personality disorders or other types of mental illness. Consumers found the measure acceptable.
In terms of face validity, the VSSS appeared to reflect the concept of service satisfaction as
defined by consumers, with the exception of satisfaction with cultural relevance of services. No
data were found on construct or criterion validity. Content validity appeared good. Two sets of
raters compared consumer and family answers to the open-ended, spontaneous answer questions
in the scale to the content of other items in the VSSS. They found the concerns raised by
consumers and family members in the open-ended answers were largely reflected in the other
items in the measure.
No information was found on inter-rater reliability, internal consistency or sensitivity to change
over time. Test-retest reliability levels ranged from poor to good for consumers and moderate to
very good for family members. With regard to feasibility, acceptability of the measure to
consumers and family members was found to be good, with only small numbers interrupting the
interview process, and for reasons of tiredness or illness rather than unhappiness with the
measure. Applicability seemed good, with the content of the measure matching consumer and
family concerns. Practicality was reasonable in terms of flexibility of the measure for use as a
self-report tool or structured interview, and consumers and family members appeared to have
little difficulty in understanding the language used. Training takes 2 days, which may put some
people off using the VSSS, and no data were found on costs or availability of the tool.
The Verona Service Satisfaction Scale appears to be used in some places in New Zealand, and is
widely used in the United Kingdom and the US. It was developed by clinicians and academics,
with no mention of consumers or their families being involved in its development. The VSSS
looks at satisfaction with services over the past year, which is rather a long period to remember,
and no information was found on effects of the setting in which it was administered, effects of
different types of mental illness on scores, or convergent and divergent validity.
Table 1 – Summary of Properties of Self-Assessed Outcome Measures Analysed [table not reproduced in this transcription]
Consideration of existing tools
The research team reviewed the comprehensive analysis of each of the tools and selected a short
list of the preferred self-assessed measures of consumer outcome based on the criteria
formulated from the literature review.
Six measures were short-listed by the research team: the consumer-completed part of the
Assessment of Wellness Outcome Tool; the Behaviour and Symptom Identification Scale
(BASIS); the Crisis Hostel Healing Scale; the consumer schedule of Hua Oranga; the Lotofale
Evaluation Measure; and the Mental Health Inventory (MHI).
The six preferred measures were presented to the consumer reference group at a 1-day meeting, at which the group discussed the respective strengths of the instruments and other elements of relevance.
The reference group felt it was important to confirm the primary aim of a self-assessed measure
of consumer outcome before considering these instruments, because their analysis would be
dependent on this issue. Their consensus was that the primary aim of a self-assessed measure of
consumer outcome is as a tool for individuals to use to assess their own mental well-being. The
secondary gains of self-assessed measures of consumer outcome were identified as service
monitoring and development, communication, process and systemic evaluation, and lobbying
for improvement. Based on this construct, which includes a primary aim and multiple
secondary gains, the reference group then considered the selected instruments. They concluded
none of the tools were appropriate for direct application as a self-assessed measure of consumer
outcome for the New Zealand context. The key issues raised by the reference group in their
consideration of the tools and relevant material were:
•
The importance of measuring the concepts of outcome (has there been a change?) and attribution (what was the change due to?) separately. Both are important issues; however, they require separate processes of measurement to ensure valid and reliable results. Designing a tool on the assumption that the cause of any observed change is known (e.g. formal service provision) automatically renders the tool invalid.
•
Triangulation: The group raised some concerns about triangulation (the approach whereby
three assessments take place [normally from the perspective of the consumer, the service
provider, and a family/whānau member], preferably using different versions of the same
measure so that they are directly comparable). The group argued that the subjective biases
of all parties needed to be acknowledged. Lumping these three ‘subjective’ perspectives
together cannot provide an overall ‘objective’ perspective, only a pool of three subjective
views. In addition, they expressed concern that the clinician perspective is currently often assumed to be the more ‘objective’ view and therefore the most pertinent part of a
triangulated process. Rather, the group felt the consumer perspective should always be
given a greater weighting when a triangulated method was used in relation to outcome
measurement. For the present project, the group felt the research should focus on the
development of a self-assessed measure that, in terms of process, could involve consumers
having an option to request that others also complete the measure.
•
Cultural issues: The Māori members of the consumer reference group expressed concern
about the extent of consumer involvement in the development of the Hua Oranga outcome
measure. They emphasised that the issues raised by the group, in relation to triangulation
(see bullet point above), were equally applicable from a Māori perspective.
•
Content of self-assessed measure: The group concluded there is a generic set of base-line
indicators of well-being (particular to mental health consumers) that should be used to form
the substance of any self-assessed measure of consumer outcome. It was felt that the list of
domains, collated through the literature review, adequately reflected what needed to be
included in such a measure for it to be suitable for the New Zealand context.
The reference group agreed it would still be helpful to take some tools out for wider consumer
consultation so participants would have examples of existing tools on which to comment and
reflect. The group considered the three most appropriate measures to take to wider consumer
consultation were: the consumer-completed part of the Assessment of Wellness Outcome Tool
(Appendix 5), Crisis Hostel Healing Scale (Appendix 6), and the Tāngata Whai Ora schedule of
Hua Oranga (Appendix 7).
Given that Hua Oranga is still in the developmental and testing phase, permission was sought
from the researchers of that tool for the Tāngata Whai Ora schedule of the tool to be taken out
for Tāngata Whai Ora consultation as part of the present project. Permission was granted on
condition it was made clear at each meeting that the consumer schedule was only one part of the
overall measure, which also involved whānau and service provider assessments.
Consultation
Consumer consultation
A total of 77 people attended the consumer consultation meetings. Of these:
•
41 people attended the general consumers’ consultation meetings;
•
24 people attended the Māori consumers’ consultation hui;
•
7 people attended the Pacific consumers’ consultation fono, and
•
5 people attended the deaf consumers’ consultation meeting.
At all consultation forums participants were asked to report what they considered was important
to mental well-being. The responses of each group are reported in Table 2.
Table 2. Factors participants identified as important to mental well-being, by consultation group
Equity
Happy
Chores
Work
Learning
Good taking medication
Not suicidal
No fear
No psychotic episodes
Self-acceptance
Forgiveness
Respected
Over-coming stigma
Do not isolate (friends)
Affirmations
Busy
Consideration for others
Understanding
Group 1
General daily living
Coping
Relationships
Helping others
Managing finances
Brain working
In touch with reality
No paranoia
No worry
Do not beat yourself up
Guilt and hurt
Love
Positively
Appearance
Encouragement
Wellness (energy levels, lethargic)
Tolerance
Stability
Family support
Coping ability
General support – work mates
Employment/Vocational
Understanding of condition
Understanding of rights
Acceptance
Goal setting/achievements
Group 2
Generic life skills
Relapse prevention strategies
Sleeping patterns
General health: – exercise
– diet
– eating
– hygiene
Group 3
Friends
Family – being connected positively
Social support
Decent place to live
Work for some voluntary or paid
Having hope – something to look forward to
Interest in external world – people, issues
Spiritual involvement/awareness
Confidence
Relationships – satisfying
Supportive and understanding people
Adequate income $$$
Meaningful way to spend your time
Having fun
Physical well-being
Activities
Peace of mind
Appropriate medication
Group 4
Physical well-being
Whānau
Financial
Medication
Can I be bothered answering the door?
Support services
Patience
Trigger signs
Motivation
Weather
Menstrual
A&D
Oral health
Employment
Are you happy?
Nutrition
A sleep
Tolerance
Emotions
Safety
Consideration for others
Moon changes
Children
Behaviour
Group 5
Are we happy with our quality of life?
Happy
Still pursuing hobbies/interests etc.
Energy
Sleeping habits
Motivation
Thought patterns
Spaced out, i.e. sense of reality
Mood swings
Diet
Enjoying work
Coping with a benefit
Seeking help
Crisis plan
Hygiene
Spiritual life
Relating to others
Personal safety and safety of others
Finances – spending habits
Self-awareness
Stress: – Financial
– Relationship
– Work
Leisure time
No forced treatment
Sex
Having access to the right services
Going for walks
Good living situation –
how is living situation?
Water
Job
Right career
Playing sport (e.g. skateboarding, soccer)
Relationships
Friends
Understanding
Entertainment
Support from support workers
Participation in the community
Group 6
Smoking
Bank account – how much money in account
How available is a psychiatrist?
Good sleep
Food
Employment security
Being respected
Bike riding
Family doctor
Time-out
No crabby nurses
Future prospects
Housing
Family/Whānau
Listened to
Group 7
If I feel like getting out of bed in the morning.
Do you have negative thoughts going around in your head which prevent you from solving your
problem?
Am I living in a suitable/friendly/light accommodation?
How am I eating?
What is your attitude to yourself in looking after yourself in respect to alcohol & drugs?
Do you have a tendency to see the negative side of everything?
Am I comfortable with the view others close to me have of my mental health?
Am I comfortable with my mood changes or do I get too excited or too sad?
Do I have hopes and dreams?
Am I living comfortably with your financial situation?
What impact is limited finances having on my mental well-being?
How does my fear of financial insecurity attack me?
Am I living within my budget?
Am I happy that my relationship with mental health services/my mental health worker in relation
to respect/equal partnership?
Am I willing and able to seek out and receive help when I need?
Do I know that an appointment with my psychiatrist will change anything for me?
Self-worth, self-esteem.
Am I getting enough sleep, how many times am I waking in the night and do I get back to sleep?
Am I doing things that are meaningful to me with my time?
Am I relating OK with people or are my moods interfering?
Ability to assertively negotiate medication changes.
Honesty with myself and with others.
Level of self acceptance.
Transport
Eating the right food
Love making
Movies/videos
Poetry
Healthy friend
Communication with service
Rest
Budgeting
Respected as a person not a label
Sleeping pills
Cigarettes
Whānau
Recreation
Diving
Dancing
Rapping
Reading
Drawing/arts and crafts
Waiata
Hui – Group 1
Exercise
Dieting
Socialising
Music
Stable relationship with partner
Loving friends
Holidays
Trips away
Education about illness
Money
Accommodation
Paid employment
Sports
Fishing
Camping
Singing
Prayer
Writing
Medication
Hui – Group 2
Sleep
Good food
Comfortable environment
Housing
Financial sustainability
Good hygiene
Good relationships
Knowledge of one’s own experience
Exercise
To be able to communicate
Education
Access to services when needed
Access to people when needed
Comfortable clothing
Hobbies (e.g. music, art, etc.)
Happy whānau environment
Acceptance of oneself and others
To be able to talk without prejudice
To be able to express one’s own identity
To be able to make life decisions
To be encouraged in own endeavours
To be forgiven
To be respected
To be loved
To be wanted
Ability to have social contact on a beneficial basis
Ability to see a positive outcome as opposed to thinking the worst
To be happy with what I have and not want for things I can not afford or get, etc.
Good relationships with doctors and psychiatrists, etc.
To be offered support by friends and whānau when it goes wrong
Hui – Group 3
Family and friends
Social services
Food/shelter/work
Medication
Exercise
Employment associates
Going to the movies
Home help
Public acceptance
Outings (trips, sports, clubs)
Psychiatrist/counsellors
Financial (money)
Relaxation, take a break
Hui – Group 4
Moving on and up
Children
Te Whare Tapa Whā
Accommodation
The White House (consumer support centre)
God
Karakia
Māoritanga
Peer support
Being proud of who I am
Being listened to
Friends
Te Reo
Money
Whānau
Being appreciated and valued
Work/play/fun
Looking forward to the future
Time out
Sleep
Exercise
Helping others
Freedom
Planning your own life
Good relationship with WINZ
A garden
Nature
Pets
Creating/making things
Belonging
Lovely food
Making my own choices concerning treatment and support/clinical people involved with me
Fono
To be able to start and finish tasks
Not experiencing hyperactivity
Scale
Knowing stress levels
Good relationships
Sleep
Safety
Appetite
Being active in the community
Concentration
Knowing early warning signs
Competency test
Therapy
Routine
Cognitive process
Knowledge
Well-being of children
General moods
Positive feelings
Spiritual well-being
Deaf Consumer Meeting
Ease of communication
Feeling good
Going out/walking/shopping/activities
Enjoying being with people
Not feeling frightened
Feeling comfortable with people
Other people understanding me
Support from people
Less voices in my head
Sleeping well
Work
The right medication
Being happy
Participants of the general consumer consultation meetings, the hui, and the fono were then
asked to reflect on three existing self-assessed outcome measures that were presented to them.
Individual participants in each of the consultation meetings expressed a wide range of views. In
accordance with qualitative research practice, this section provides a description of the key
themes expressed by consumers across all the consultation forums. The results of this exercise
are presented in two parts. Part one sets out the general findings that were relevant to all of the
tools presented. Part two provides specific feedback in relation to each of the tools presented.
In addition, separate sections have been written to highlight particular issues and themes that
were identified through the hui with Māori consumers and the fono with Pacific consumers.
Value and purpose of self-assessed measures of consumer outcome
Many participants expressed how valuable they thought a self-assessed measure of outcome was
in terms of having a tool that supported them to reflect on and monitor their mental well-being.
Often people’s comments implied that, before their involvement with the present research, they
had not previously undertaken such an exercise of outcome self-assessment. One of the first
questions asked by Tāngata Motuhake was why it had taken so long for the Government to
come and ask them about mental health outcomes. They indicated that people were, in general,
afraid of talking to them directly, although they felt this was an unfounded fear. As a result they
were very pleased to be asked their opinions on mental health outcome measures for this
research, because ultimately it affected their lives.
People also reported that feelings of hope, awareness of progress, and involvement in treatment
were significantly associated with the process of self-assessment:
Well, I’ve actually been surprised at what I have written down today. Like how far
forward, how much further forward I’ve shot than what I thought. So, that’s got to be
good.
It’s positive and it gives you hope.
At least you get to have a say in your treatment, eh. That’s one good thing about it.
I patted myself on the back the whole way through that because I know how well I have
done.
… but the things where I have fallen down, like I can go back over those and use those as a
monitoring mechanism.
I like the lower statement, some of the questions I haven’t asked myself, but ‘more able to
set goals for yourself’ and I’m thinking “gosh, I do, I have” and ‘I am more able to manage
unwelcome thoughts and feelings’; “Oh, my gosh, I have”. Some of these questions I
would never have thought to ask myself or forgotten to ask myself.
I know what is good about it. You can have a little bit of say ...
It’s not the ideal thing, ’cause you never get the last word! But it’s good enough. It’s the
idea ... it’s the fact that it’s there ...
I just think it is a positive thing that people are interested enough to ask the questions and it
could be the sort of thing that you do with your key worker that you might show things that
they are surprised about and it might be a good vehicle to bring out discussion.
I found writing it down ... I began to learn about the way I’ve been and about myself. And
I’ve found ... learning about my illness and about myself. I found it really good.
Well, it’s something you’re looking at now. And later on ... in another 6 months time ... you
can look back at it and see what you were like before.
Completion of measures
Many participants expressed considerable trepidation about how information generated through
self-assessed measures of consumer outcome would be used:
You would like to know who you are filling it out for and know where it is going to.
It became apparent that many consumers are distrustful when it comes to supplying personal
information. Specifically, they fear the repercussions of providing honest information,
particularly where mental health services are involved:
… even though it doesn’t have a name on it there are certain staff I wouldn’t like to know
because depending on how it is collated, who does the collating? Like, is it the lead
clinician? The key worker? Who ever it is … I just don’t trust them with that kind of
information, you know.
… if I had to write this out and I had to give it to somebody I’m not going to put that I am
not doing that because that means that they are going to reassess me and I am going to
have to stay in a bit longer … so I am not going to answer truthfully.
Participants were clear that a flexible approach needed to be adopted in relation to the
completion of self-assessed outcome measures:
That is why I prefer to read it, because it is too hard for me to tell someone about self-harming and stuff like that. There is no way I will sit there and tell someone, but I would
write it on a bit of paper. So it is just different for everybody.
While acknowledging that it would be useful on a personal level, a large number of non-Māori
consumers said they would not feel comfortable completing such a tool in front of others and/or
sharing the information with others.
In contrast, many Māori participants reported they would like the option of having somebody
else write their responses for them, allowing them to be able to talk rather than write:
Yeah. I’d find it helpful if I could explain it.
But personally I don’t like paperwork. I’d much rather somebody just talk to me... Yeah.
Or talk about it ... talk about it.
Māori participants identified that a preferable method of communication for Māori was through
verbal kanohi-ki-te-kanohi, face-to-face discussion, as opposed to communication through
paperwork. They indicated that written responses did not serve to give accurate assessments of
their well-being.
Non-Māori participants were unanimous in the view that if someone was to support them with
the completion of a tool of this nature then it would need to be a peer support person or
advocate. Māori participants believed it imperative that the person interviewing was known to
them through having continued involvement in their care.
Room to write
Widely identified by participants was the wish to have the opportunity and space to write in
more detail about the issues personal to them and/or the thoughts and feelings that had been
raised during the completion of the tool:
… you only need about that much space for any other comments and I think people should
be allowed to have their own personal input rather than just filling in a form someone has
given them.
And for me it should, it could have spaces so that I could make a comment ... to make it
personal about me. Like about... “being able to communicate with your whānau”- well, I
might write “Much Less” ... oh, “they don’t want to know me”. Or “Much More - Mum
loves me and is worried that I’m ill” you know, “and has come to visit me every day”. So
there’s no room for personal comments.
I reckon the lines for explanations are missing. ’Cause you’ve got to be able to explain
each one, you know. And not just go on ... so when somebody else reads it and they just ...
say “oh, what for” ... they don’t understand what’s going on with him. You need an
explanation... You know, just a couple of lines per question … and it gives the person a
better idea of what this other person is thinking and feels.
And I like to write why I answered that question.
It should ask why.
Well, I think it doesn’t give the individual the chance to express themselves other than
“Much more”, “More”, “No change” ... Because there are certain aspects of this I
disagree with this totally. There’s a need to express. That’s what I feel...
Assessment of Wellness Outcome Tool – consumer completed section (Appendix 5)
The heading on this tool currently reads ‘Early Psychosis Intervention’ (EPI), which by its very
title influenced many participants’ view of the tool from the beginning; they perceived it as a
clinical tool:
Oh, I can tell straight away that it’s ...
Another immediate issue raised was the difficulty in understanding how to complete the
assessment. There are four parts to the tool. The first part, which involves 23 statements, asks
the consumer to choose a response from five options about the degree of change associated with
EPI’s involvement in their life. If the consumer indicates a ‘no change’ response, the second
part of the tool asks the consumer to ‘check whether there has been change for other reasons’,
and if so, to indicate whether this has made things ‘better’ or ‘worse’ for them. The third part of
the tool asks consumers to indicate the amount of support they need in that area. The fourth part
of the tool provides four service satisfaction questions for the consumer to comment on.
The complexity of these multiple instructions was difficult for many participants to follow:
I think that this is just too complicated.
It would be a good idea to have someone who had an intelligent mind to come and help you
fill this out, you know like a lawyer or something. (Laughter)
Need to simplify it, eh.
Well, I just can’t really understand the whole form itself. It’s like with the other forms...
there’s a lot of questions, you know, I just can’t understand most of it. The questions, and
just the layout of it.
Well, I didn’t even see that bit …I missed the whole bit, I just answered those and I didn’t
realise that was even there.
Tāngata Motuhake suggested this could be particularly problematic for people who were unwell
or had limited concentration spans:
I feel to answer one question you have to give three separate answers and so that might be
hard for a person to keep track of, you know. Really … to have some thought about it … to
try and give three separate answers. It would be difficult for a person who only can do one
thing at a time...
’Cause like when I first started off reading it ... I thought it was quite neat. But when I
started getting a little bit further down onto these boxes, I became confused because I had
to look up and down and up and down. And it was like three times I had to look up and
down and then I got interrupted and then my whole thought of process started going
haywire…
The concentration required to complete the tool could result in consumers not even attempting
to use it, as was demonstrated in the response of some participants:
’Cause I just filled in the first question, and then I felt that I couldn’t be bothered doing the
rest of it.
In an attempt to make the tool compact, a great deal of text, lines and boxes has been squeezed onto the page, making it difficult for consumers to read. Some of the text runs vertically, requiring the paper to be turned in order to be read. Some participants compared it to a multiple-choice exam and joked that you could get to the end of it, realise you have one tick left over, and find you have filled in the wrong responses against the questions:
I agree with [another participant]. I don’t ... I can’t follow the last two boxes at all and the
first two ... it’s a bit confusing. The first one’s a bit confusing. I can see ‘better’ words and
stuff like that, but the pictures don’t make a lot of sense to me. And I think it’s ... I don’t
like it at all. It looks like an exam ... I just think the set-out’s horrible.
Please ... don’t give me any more time ... because I promise ... those boxes are getting more
and more shimmied ... Someone has tried really, really hard to put a whole lot of... to make
a whole lot of things look very, very small. I think for me, it’s a disaster. I get hopelessly
lost trying to follow lines across. It just makes my eyes go all shimmy. Something to do
with my eyes probably, but still ... There’s too many lines and boxes and things and
arrows.
No. It’s too crammed. With the boxes, you might tick the wrong one...
Well, then you have to practically turn the page like this [turns the page sideways] to read
... to read the fine print ...
The questions are complicated. Not complicated ... [It’s] the way they are spaced out.
... It hasn’t got spaces or lines and you have to look through the lines. I think it’s difficult
[to follow] ...
Some Tāngata Whai Ora felt that allowing space for comments would improve the tool’s appeal:
And maybe you need, like, some comments or something.
… And there’s no ‘comments’ here at all possible. At least the other one [Crisis Hostel
Healing Scale], you could sort of comment and it had a person there perhaps.
There’s no space for comments.
Others suggested that if consumers were able to take the tool away to consider over a period of
time, it may be easier to complete:
Maybe if we had all the time in the world ... it’d be a good project, eh? We’d accomplish
something ... (Laughter)
The tool also used faces to illustrate choices, such as a smiley face for ‘better’. In the main, people felt the visual images aligned with the rating scale were a valuable adjunct to the tool:
Cool pictures!
I do like the pictures. (Laughter) Especially the “Much worse” one! (Laughter)
I like it. It sort of looks like fun to fill in, eh. With the little faces there ... And it’s not so
long either.
However, some did express the reservation that the facial images could be interpreted as patronising. In terms of the actual expressions on the images, which were supposed to reflect each of the scale dimensions, participants generally felt the graphics could be improved so that the images better distinguished the response options.
Participants felt the questions were phrased in a presumptive manner. For example, question 9:
‘You feel more aroha or love from others’ presumes that, at some time, you did not feel aroha or
love from others:
I’ve got this awful feeling that there is an influence there on the other side of all these
questions that I am the opposite to all of these. That I am not confident, that I don’t have
fun, that I don’t have control, that I am not normal, that I am not confident, that I don’t
have spiritual strength or wairua, that I don’t trust other people.
If you were to put ‘no change’ it could mean, like to me ... like ‘You are having more fun’
and you say ‘no change’ because I’ve already had fun and have always had it ... and then
they’ll say “why haven’t you changed it?” ... “Do you need support?”! No! I’m having a
nice time by myself …
The formatting of the questions in this manner creates a presumed generalisation about all
people with experience of mental illness and the issues they are likely to face. This issue was
exacerbated by the fact there was no option of ‘not applicable’ within the rating scale.
Overwhelmingly, participants argued that there was a need for a ‘not applicable’ option within
the scales of outcome measures.
In addition, people thought some of the questions (framed in a presumptive manner) involved
judgments about the effects of mental illness. These judgements did not sit comfortably with
participants. Question 4 (‘You are more normal or like you used to be before you became
unwell’) generated strong but mixed responses. This statement was seen as a judgment that
people with experience of mental illness are not and/or cannot be ‘normal’. Some found the
question offensive, and frequently responded with “What is normal?”.
Others appeared to interpret it as asking whether their level of wellness had improved, and were
not affronted by it. Some suggested they could not answer this question because they could not
recall what they were like before becoming unwell. A number of Māori participants just found
the question odd and were unsure how they should respond to it:
I wouldn’t want to be more normal. It wouldn’t happen. If I said ‘much better’ what does
that mean then? That means I become madder? Or less normal?
Question 11: ‘You are able to behave in a more socially acceptable and responsible way’ was
also seen as containing a judgment that when people experience mental illness they often
‘behave’ badly, according to a set of parameters dictated by those who consider themselves
‘normal’. Many consumers did not feel such parameters were necessarily acceptable and/or
responsible from their perspective, and felt it was offensive to be measured against them.
People in the non-Māori consultation also felt that framing the questions in a comparative style (where you were reflecting on some earlier time) was not the best approach. They commented that it was difficult to provide one overall response about a period of time that might have included the experience of a multitude of different states of being. Rather, people thought it would be better to have the questions framed to assess current mental health well-being at any point in time.
In contrast, some Māori participants suggested a comparison with another time period was
required, in order to avoid misinterpretation of responses:
And it has no comparison ... “Than what?”, “Than when?”, “Why?” ... or “In what way?”
You have no control over choice ...
Participants felt a couple of the questions had been framed from a clinical perspective. For
example, question 18: ‘You have less psychotic symptoms’ and question 13: ‘You are more
clear and consistent in your thinking’. Non-Māori participants argued strongly that material
contained in a self-assessed measure of consumer outcome must reflect ‘the way a consumer
might consider things’.
Many participants commented on the way statements had been phrased in this tool. Unlike the Crisis Hostel Healing Scale, which uses ‘I’ statements, the Assessment of Wellness Outcome Tool uses statements that begin with ‘you’, e.g. ‘You are having more fun’. Consumers suggested that opening each question with ‘I am’ (rather than ‘you are’) serves to personalise a tool and make it more consistent with the whole concept of a self-assessed measure of consumer outcome. In addition, some people suggested the use of ‘you are’ implied some form of external assessor being involved:
And I get a bit of a feeling like, you know, that someone is sitting there like this “YOU ARE
MORE CONFIDENT!” “YOU ARE MORE CONTENT!” “YOU ARE MORE NORMAL!”
(Laughter)
I wonder that, if all of the questions start with ‘you’, like ‘you are having more fun’.
Perhaps personalising it and having it more, you know, ‘I’m having more fun’. With
having ‘you’ it is like having this external third person, the clinician. I’m sitting here
marking and ticking the box, whereas if it is a self-assessment then it should be ‘I am
having more fun’, you know, ‘I am more positive about the future’.
It tells them “YOU are this”...
… And it’s ‘you’ and not ‘I’.
Almost every group commented negatively about question 12 (‘You are better treated and more
accepted by the community in which you live’). Overwhelmingly, non-Māori participants felt
this question raised an issue that was more about the community than the individual, and that if
people’s mental well-being was dependent on being ‘better treated and accepted by the
community in which you live’ then most people, unfortunately, had very little hope. People
argued strongly that the problems (and prejudices) of society are a collective issue, and until
they are dealt with at that level, individual people will work towards and achieve recovery
regardless of the treatment and acceptance of the community. Including this type of issue in an
outcome measure suggests that people’s well-being is somewhat dependent on a community
issue over which individuals have no control; that is disheartening and difficult for them to deal
with.
Tāngata Whai Ora suggested it would be impossible to measure one’s acceptability in the
community, as the following dialogue implies:
How would you measure that? How would you know?
You’d have to go and ask them! (Laughter)
They’d be too scared to say anyway.
Yeah. Psychosis doesn’t really endear you to the community normally! (Laughter)
Some people questioned whether ‘independence’ (as referred to in question 14: ‘You are more
able to live independently’) was necessarily a state of being that most consumers were pursuing
through recovery. They highlighted the tendency of many people with experience of mental
illness to isolate themselves from support people, family and relationships during periods of
unwellness. They also considered re-connection was often reported as being important to
people during recovery times. In this sense, they suggested interdependence might be a more
appropriate concept to be considered in a self-assessed measure of outcome.
Most people felt 23 questions was a good number of items to be included in a tool and allowed
for sufficient coverage of relevant issues without too much detail.
Most people appreciated that the tool provided the opportunity to consider change from a wider
perspective (not only in relation to the service):
I think that it is great they have been brave enough to kind of think that some stuff might be
independent of their service and that that needs to be factored in, so that is good. But it
would need to have, like, you would need to do it with someone or have someone with you
to help explain it because when you look at it you think what the, well yeah, I do, it’s a bit
complicated just on the immediate.
Some people also thought that asking about the extent of support necessary was a positive feature of this tool:
And it is excellent that they ask you to indicate the amount of support that you would like.
That is an excellent thing to be asking, again a great indicator for yourself and for whoever
you might want to share it with.
Non-Māori participants in general felt the rating-scale categories were satisfactory (‘much better’, ‘better’, ‘no change’, ‘worse’, and ‘much worse’), although a ‘not applicable’ category was also considered necessary:
Gave you a lot of choices. I liked the choices. You know, sometimes they don’t give you a
big enough variety of choice. Where it is normally yes or no, black or white; and it gives
you a good number of choices and it is well illustrated as well.
The majority of people in the non-Māori consultation identified that combining two (or more)
concepts within a question that was formatted to receive a single response was not appropriate.
For example, question 13: ‘You are more clear and consistent in your thinking’; question 5:
‘You are more content, calm or happy’; question 6: ‘You are less anxious or stressed’; and
question 19: ‘You are better prepared or trained for work, or better able to manage work’.
People noted that they might need to answer each of the concepts quite differently (e.g. someone may be much more content but their state of happiness may not have changed), yet had the option of only a single response.
Crisis Hostel Healing Scale (Appendix 6)
Participants felt the process involved in completing this tool was initially unclear. It required
them to consider statements in relation to their current well-being, as well as ‘within the past 6
months’. While this was not necessarily considered difficult, the format and instructions were
not clear about how to indicate responses to the retrospective aspect of the tool:
What’s this phrase mean? ... It’s got ‘Within the past 6 months’ ... Do I put a number in
there?
Participants felt there was far too much contained within this tool in terms of both the extent of
the questions and the process involved in completing the tool (having to consider both the
present and previous 6 months in relation to each question). In terms of considering ‘within the past 6 months’, non-Māori participants emphasised that it is difficult to reflect upon a time frame
of that nature:
… I think 6 months back I will have no idea whether I am actually thinking 6 months or 2
months or even 9 months. What I think about is what was the last crisis that happened to
me and about the people who recognise me and that guides what number I put in the box.
Many Māori participants felt that being able to compare where they saw themselves now with 6 months ago was beneficial:
You’re able to look back on it because it says what you did now and in 6 months, so I like
that.
In terms of the actual readability of the tool, many considered the format “quite well laid out”,
simple to follow and “down to earth”:
Oh, I liked that some of the questions were simple and they asked about, you know, real
things like my sleeping and ... what I think about the medication ... and my eating ... And I
liked that there’s a person sitting there with me that I can say “well, this doesn’t apply to
me” and such and such ...
Non-Māori participants felt this tool had been developed more around the concept of assessing
personal mental well-being rather than assessing service effectiveness:
… this one looks at life and the other tool looked more at intervention and what someone
else had done.
It went wider than what the service’s done and sort of gave credit for what we had done for
ourselves.
This was seen as a positive attribute of the tool. However, people did not like the fact there was
nothing in the tool that enabled an individual to assess the quality of service they were
receiving. Tāngata Whai Ora suggested that service satisfaction “could have a big effect” on a
consumer’s well-being, and should therefore be considered:
I reckon that it doesn’t cater for people in the service, ‘cause you might have something
that you don’t like about the service, like someone in the service ... Like, for the service
you’re using…
In addition, both non-Māori and Māori participants were critical of the lack of content about key relationships, family and community, given the focus of the tool was on the individual and their state of mind:
It doesn’t ask about how’s ... at home. Or your friends. It could be your friends that are
causing the trouble.
It doesn’t ask you about your relationship with your partner. That’s, like, often a biggie for
a lot of us…
Not family either, I don’t think…
There was a mixed response to some of the questions that people identified as being more
‘personally sensitive’. For example, question 5: ‘I remember abuse but am not overwhelmed by
it’, question 13: ‘I have deliberately hurt myself’, question 15: ‘My self-inflicted violence has
decreased’, and question 31: ‘I can cry’:
Yeah, yeah. And the one about crying- ‘I can cry’. You know, it takes more than a man to
admit that you can cry.
Oh, they shouldn’t have that in there, eh.
Well I reckon it should still be there, bro. Sorry, brother.
Like, some of these questions I wouldn’t answer, you know. That’s my ... I wouldn’t
answer them … Some of them are personal. I’d just leave them blank.
Some felt that because this was a self-assessment, such questions needed to be asked for the
sake of self-reflection:
I thought it was a question that needed to be asked. Because it tells you ... ’cause you’re
doing it for yourself … It’s a self-assessment … so if you say that you haven’t hurt yourself
and you have, then you’re just lying to yourself.
Some people in the non-Māori consultation felt these questions could trigger distressing
reactions if such issues were addressed so directly in a tool of this type:
There is one question that really stands out for me which is number 13. I see that as a
question that needs to be asked but not so bluntly.
People in the non-Māori consultation frequently commented that they felt formal supports
would need to be put in place for people completing the tool if questions like this were going to
be asked so directly and bluntly. Overall, non-Māori participants felt this level of detail was not
necessary in a self-assessed measure of outcome.
Another question that received considerable attention during the consultation process was
question 30: ‘I have a healthy interest in sex’. It was often discussed with much humour:
It just asks about healthy interest in sex, not whether you are getting it or not! (Laughter) I
think it’s far more important for people who need it, that they’re getting it! (Laughter) ...
It doesn’t state any achievements about sex! That I’m enjoying it and I am able to get as
much as I need! (Laughter)
I would like to say, I consider myself a sort of ... a bit of a frigid sort of person. But I didn’t
find these sort of, you know ... Like these questions, the one about sex - do I like sex? Well,
who doesn’t like sex, eh! (Laughter)
Yeah, I agree with what you’re saying. I look at this form and see no questions in here that
are offensive. All I see is that they might be a bit forward but to the point, you know what I
mean? It’s not blinkin’ saying that “do you have sex in this position?” or “do you have it
this way?” (Laughter) It’s the year 2000 now...
Some non-Māori participants felt this question was not suitable. However, that view reflected disagreement with what the question was actually asking and how it was framed, rather than with the concept of including a question about sex in a self-assessed measure of consumer outcome. Most people maintained that the issue of sex and sexuality should be included in a tool of this nature; however, they did not think the issue was about whether individuals felt they had a healthy interest or not.
Non-Māori participants thought the tool had an underlying clinical basis to it. Although most of
the language was not clinical per se, participants felt this tool had been developed based on
clinical concepts and this was an attempt at translating a clinically focused tool into a ‘touchy,
feely’ tool relevant to consumers. More specifically, terms such as ‘insight’, ‘crises’, and ‘self-care’ were seen as clinical jargon rather than terms commonly used by consumers.
People in the non-Māori consultation felt that question 4: ‘I do not see myself as sick nor allow other people to see me as sick’, was framed based on a judgment that issues with mental well-being should be viewed as sicknesses. This is simply one view (judgment) on mental illness and is not necessarily compatible with other perspectives on the experience of mental illness. In addition, non-Māori participants were concerned about how some items would be scored based on another’s judgment of whether something was positive or not (e.g. it is very judgment dependent whether you regard ‘allowing others to see you as sick’ or ‘ability to listen when people talk about you’ as positive or negative).
With question 35: ‘I am able to listen when people talk to me and about me’, people in the non-Māori consultation questioned whether it was important (and constructive) to any person’s well-being that they were able to listen when people talked about them.
A couple of the questions were identified as being ambiguous. For example, with question 11:
‘My inside voices are less bothersome’, people were divided as to whether that referred to an
experience of hearing voices or the effect of people’s inner voice on their mental health.
Similarly people expressed confusion as to what was meant by question 19: ‘I am feeling less
alive and in my body’.
Many people in the non-Māori consultation felt there was much repetition throughout this tool.
In the main, participants felt the lack of any specific questions about drug and alcohol issues
was a shortcoming of this tool:
And the other thing I thought. It doesn’t have anything about my drug and alcohol intake
... should that be applicable.
There was widespread concern among participants that this tool has no cultural component
contained within the questions covered. In particular, Tāngata Whai Ora noted the lack of
assessment with regard to cultural identity, connectedness and spirituality. They felt that if
anyone went through their self-assessment using this tool, ‘they would know nothing’ about the
consumer’s Māoritanga, yet this was considered important for their waiora:
It touches on whether I’m working, my eating, my sleeping, my emotional. It doesn’t
mention a lot about my spirituality or my Māoritanga.
Most people reported they preferred the way the statements/questions contained in this tool
started with the words ‘I am’ rather than ‘you are’:
I like the way these things are written, I find they make yourself judge, these other ones, I
felt like everyone else was doing the judging. In this one I think, because of the ‘I am’
listings.
Yeah, I think it’s a good tool. It talks about yourself ... about what you are...
I do like the sort of form of the questions. They seem to be far more relevant, and … these
things, I can identify with [more] than these ones [Hua Oranga] ... it’s someone else
talking [with Hua Oranga] rather than ... here [Crisis Hostel] it starts with ‘I’, ‘I’, ‘I’, ‘I’
...
I like the questions are a lot more personal than the other one. They’re more ... the ‘I’
statements.
Non-Māori participants argued strongly that jumping between framing questions in the negative
and then in the positive was not a good idea. They felt this created much confusion and made
the tool less user-friendly:
… but the problem with it is that you have got to change the scale here because it sort of
creeps up and goes, hold on a minute, am I supposed to be agreeing or disagreeing with
that. It is a double negative, kind of.
Tāngata Whai Ora also agreed questions needed to be rephrased to become positive statements,
rather than negative ones:
And I would like all the questions to be positively framed instead of “I cannot trust my
decisions” I would rather it was … “I can trust my decisions”- do you “Agree” or
“Disagree”...
People in the non-Māori consultation identified the inclusion of double concepts within one
question (for example, question 2: ‘I have a sense of being in control of myself and my life’;
question 9: ‘I am aware of and respect the feelings of others’; question 16: ‘I am more
knowledgeable and informed about medication’; question 24: ‘I don’t care about my body and
don’t take care of it’) as inappropriate; people might well wish to answer each of the concepts quite differently, yet had only the option of a single response.
People in the non-Māori consultation generally felt the categories in which they could rate
responses were satisfactory (‘strongly agree’, ‘agree’, ‘disagree’, and ‘strongly disagree’). In
addition, people felt the inclusion of the option ‘not applicable’, at least in relation to some of
the questions, was a good practice.
Hua Oranga – Tāngata Whai Ora completed schedule (Appendix 7)
Once again, consumers considered the questions were framed in a presumptive manner – for
example, ‘More able to move about without pain and distress’ presumes that you were, at some time, not able to move about without pain and distress. It was sometimes suggested that a ‘not applicable’ option would resolve this problem. However, most participants thought it would be better
to actually re-frame the questions in a manner that did not involve such presumptions being
communicated.
People in the non-Māori consultation also felt framing the questions in a comparative style
(where you were reflecting on some earlier time) was not the best approach. They found it was
difficult to provide one overall response about a period of time that might have included the
experience of a multitude of different states of being. Rather, it would be better to have the
questions framed to assess current mental health well-being at any point in time.
In contrast, Māori participants commented on the value of this tool in assisting them to reflect
on both their current well-being and the positive changes that had occurred over time:
OK. What I thought was good about this was if you did want to assess yourself and you
never knew how to, this could be a way of using this tool. It gives you a structure. It gives
you something to go, “Oh yeah. I could say that. Maybe I feel valued now.” That could be
your assessment ... [And] if you have someone, say, using this at the very beginning. Then
after they have had their treatment, again. Then you sit down and ask “How valued do you
feel as a person?”, “How strong?”, etc., and put it away and then after, you know ...
whenever that person is ready to move on, to get this, have them fill it out again and then
give the other one back and say “Here. Look where you’ve moved and where did you need
to move.”
Overall non-Māori participants reported the tool was satisfactory as a measure of the ‘essence’
of mental health outcome. However, most felt it was too broad. They felt there was not
sufficient detail in the tool to capture their experience adequately:
I felt it just wasn’t comprehensive enough at all … With this I felt you could find the
answers in all this and they still wouldn’t have any idea of really how you were.
It’s very incomplete as to the finer detail, because if I’m going to rate myself I need a wee
bit more. To me it is incomplete and is very vague.
This one for me is a question about karma.
On the other hand, several people felt the tool actually contained more than is apparent at first glance:
It’s actually quite cunningly set down, there is quite a lot more in there than what you first
realise.
Overwhelmingly, Tāngata Whai Ora did not feel the tool was able to provide them with an
assessment of their well-being, as defined by them. When they reflected on all the factors they
considered fundamental to their well-being, there were a number of aspects missing from the
tool:
All these things. It doesn’t ask you if you’ve got comfortable housing. It doesn’t ask a lot
of those things. Like ... I mean, “Are you more content with yourself?” Well, I suppose you
just have to be, eh! (Laughter) I mean, there are times when you don’t feel like you like
yourself. I don’t know...
What is missing is there should be “Do you like taking your medication?” “Do you have
side-effects from your medication?” You know, and when you’re a Māori and you get
locked up, or ... locked up inside and that … you can lose your mana … making you wild ...
They suggested it would be much easier for them to assess themselves based on their own
responses to being asked what was important to them for their well-being or waiora:
I mean, if they were to ask us questions like this [indicating to list developed by
participants], and we were to answer them like in a written answer ... well, then they’d be
better to ... Probably much easier to assess against something like that. We could easily,
you know ... could say, “Oh, this is working for you, and this, and that ... and I’m still
working on that ... that’s not so good ...” I mean, everyone here in the room could put
themselves somewhere on that picture. And also make a little list against of what we need
to do a little work on ...
Many people in the non-Māori consultation commented on the value of including some
questions about physical well-being in a self-assessed measure of consumer outcome.
There was a mixed reaction to the questions contained within the whānau section. Most people
emphasised the importance of questions relating to significant relationships. However, in the
non-Māori consultation there was a mixed reaction to question 4 (d): ‘More able to participate
in your community’, with several people arguing that they didn’t necessarily want to participate
in their community (well or unwell). This seemed to be primarily an issue of interpretation,
with people defining community quite differently, from going to a movie through to joining the
local parent-teacher association. In addition, the issue of ‘healthy communities’ was raised; people stressed that it was not always healthy to be involved in your community (e.g. if the community held negative attitudes towards people with experience of mental illness), and that this was a community issue rather than an individual issue.
Most participants were critical of the opening line of the consumer schedule of the Hua Oranga
tool, which reads ‘as a result of the INTERVENTION do you feel: …’. Two distinct points were made in relation to this issue. First, most participants had no comprehension of what was being
referred to by the term ‘intervention’:
Does ‘intervention’ ... does that mean being introduced to the mental health system ... or
being put under the mental health system?
It says here “As a result of intervention” ... What do they mean exactly?
In the process of consultation, when people had no concept of what ‘intervention’ implied, the
facilitators of the non-Māori consultation often suggested the word ‘service’ be substituted in
place of ‘intervention’. This seemed to be more relevant to most people. Participants in the hui found it helpful to substitute a particular ‘clinical intervention’ for the word ‘intervention’, such as ‘medication’, ‘receiving treatment by a mental health service’, ‘specific mental health services’, ‘going to see a doctor’ and ‘anything that required a key worker’:
As a result of getting this mental health help ... or help from this particular service .. Or ...
getting the help from your doctor ...
As a result of going to the White House do you feel more valued as a person and do you feel
stronger in yourself as a Māori?
Participants indicated that substituting an option they understood for the word ‘intervention’ was preferred:
Yeah, I would fill it in if it was something instead of ‘intervention’ because I could
understand it.
I just couldn’t understand the word ‘intervention’ either... because I haven’t got much ...
but if it was a different word instead of ‘intervention’, I would probably understand it.
I don’t like the word ‘intervention’ unless it was ... they had something else like someone
asked you specifically ...
Second, people felt the tool was unduly restrictive and presumptuous, in terms of measuring outcomes, because it confined reflection on change to change that had resulted from the ‘intervention’.
It also assumes that whatever is happening to you is a result of the intervention. There is
no room on the form for your improvement or decline being a result of other things.
… but it might be in spite of the mental health service, it might be because my best friend is
moving back into town and I’ve got back with my family and I have finally got a decent flat
and, you know, what is happening in the mental health service might be irrelevant …
Because otherwise, like, “Do you feel more valued as a person?” Actually, I may feel
unvalued by the service and the environment but because my family and all my friends have
been visiting me every couple of hours and brought smokes and lots of love and stuff like
that I actually feel valued.
It’s just the words. “As a result of intervention are you more able to set goals for
yourself?” Well, as a result of the intervention you might be so pissed off with other people
trying to do that, that you actually take things into your own hands...
Under question three number (b) it says “As a result of the intervention are you more
committed to having good physical health?” You know I’ve got that thing ... you like to
look after yourself but then because you’re on special medications, they make you balloon
with weight, or lethargic, or ... so it’s quite hard to exercise or take better care of yourself.
I mean, to specifically question yourself “as a result ... are you more committed to having
good physical health?” I have certain views about whether I’m able to maintain physical
health with medication.
I would like to have it the other way around. Actually find out the result of what made
people feel that way, not say “as the result of intervention”, but “What during the time has
made you more valued as a person, if anything?” And then you’ll find out. You know, you
could say, “Well, I got to know a few really good, you know, patients in the ward that I got
on really well and, you know, they helped me out here”. OK. That’s, you know, not an
intervention but it’s ... Then you get an idea of actually what works.
There was concern this could lead to a misinterpretation of outcome results, e.g. this person has
had no change in these areas, whereas the change has occurred but not as a result of anything to
do with the intervention:
... if I am saying “more valued as a person, much less” because [of some other reason] ...
and they’ll think “Oh, he’s still depressed or whatever”.
Overwhelmingly, participants in the non-Māori consultation reported that the use of thematic
sub-sections, within a tool of this nature, was a good idea.
People in the non-Māori consultation generally felt the rating scale was satisfactory using the
categories of ‘much more’, ‘more’, ‘no change’, ‘less’, and ‘much less’, although they did feel a
‘not applicable’ category was necessary.
Hui
A ‘Self-Assessed Tāngata Motuhake or Tāngata Whai Ora Outcome Measure’ is a difficult concept to understand. By breaking it down, participants were better able to provide
feedback on the measurement tools presented.
When asked to define well-being or waiora, Tāngata Motuhake described having a healthy life,
both spiritually and otherwise. When asked to consider specifically what they as individuals
saw as important for their well-being, participants were very clear: having a healthy diet,
exercising, sleeping or relaxing, and having adequate accommodation and a comfortable
environment were important for well-being:
Having a rest ’cause sometimes you might be stressed out.
Besides these necessities, they signalled very definite other needs, such as money, with Tāngata
Motuhake identifying the associated need for financial sustainability and budgeting. This
related to paid employment being seen as an important factor in well-being. On a practical
front, transport, good hygiene, cigarettes, education, home-help, pets and comfortable clothing
were reported as important for some.
Having fun and being able to play were seen as particularly important for participants’ well-being. A wide range of recreational activities, sports and hobbies were identified, such as going
to the movies or watching videos, fishing, and diving. Many participants suggested creativity
was important for their well-being, and reported involvement in various art forms including
crafts, music, dancing, and literature. Some found having a garden and being close to nature
contributed to waiora.
Trips away, holidays (such as camping), and simple outings were also identified as important.
This was related to the need for positive social interaction, which was highlighted as particularly
important:
... ’cause you’ve been made part of a team ...
Ability to have social contact on a beneficial basis.
All participants reported the importance of healthy and supportive relationships for well-being.
Friends and whānau (including partners and children) were identified as being particularly
important, but relationships with others in the community, such as doctors and WINZ, were also
reported as being important for participants’ well-being:
To be offered support by friends and whānau when it goes wrong.
Having communication with and access to services when needed was specifically mentioned as
important to Tāngata Motuhake, as was education about mental illness and medication. Tāngata
Whai Ora also suggested that being able to make their own choices concerning treatment and
support, and being able to have a choice about which clinicians they wanted to have involved in
their care, were important:
So, if you don’t think that the mental service is the right one, you should be able to say “No.
This is not ... I don’t want this. I want something else”.
Public acceptance and feeling comfortable with who they were, were identified by participants
as particularly important factors to their well-being:
Acceptance of oneself and others.
Being proud of who I am.
Respected as a person not a label.
To be able to express one’s own identity.
To be able to talk without prejudice.
The manner in which they were treated by others and the need to belong were also important to
some participants:
Being appreciated and valued.
Being listened to.
To be encouraged in own endeavours.
To be forgiven.
To be loved.
To be respected.
To be wanted.
Having a positive outlook was also reported as important. In particular, a positive outlook
influenced a person’s ability to have control over their future and to plan their own lives,
described generally as being “able to make living decisions”:
Ability to see a positive outcome as opposed to thinking the worst.
Looking forward to the future.
Moving on and up.
A wide range of other aspects of well-being showed the depth of insight participants had about
what was important to them in terms of their wellness:
Knowledge of one’s own experience.
To be able to communicate.
To be happy with what I have and not want for things I can’t afford to get.
Some participants suggested spirituality was important for their well-being, identifying both
God and prayer as important factors.
In terms of Māori-specific factors that contribute to waiora, Tāngata Whai Ora identified
Māoritanga, karakia, te reo and Te Whare Tapa Whā as important for them.
Love making, helping others, and safety were other factors identified as important for some
participants.
A number of participants voiced scepticism about whether current practice took into
consideration any of the factors they had highlighted as important to their well-being:
I doubt they would be interested on how well you were taking care of and enjoying the
company of your pets ... which is a really good indication of how well someone is.
Hua Oranga – Tāngata Whai Ora Completed Schedule
Participants found some of the words and questions contained in this tool difficult to understand
and were concerned about potential misinterpretation:
Some of these things I can’t understand. So I circle “No change” and “Less” ... So it’s my
understanding of these questions...
I don’t understand the meanings of questions … I can’t even understand the whole thing ...
I’d rather they just asked me straight out. Like “How are you feeling?”
The bad thing about it is that some ... you might not be able to understand it or comprehend
what it’s saying.
I think there are few more questions that could be asked more simpler and straight to the
point … I found that the questions are vague … And in every question there’s something
that I don’t think makes sense like “more content within yourself”... “Because of being in
the unit am I more content with myself?” “Because of being in the unit am I able to
understand how physical health improves mental health well-being?” It’s a bit too much
really. It could be simpler. And “because of being in the unit am I clearer about the
relationship with your whānau ... with my whānau?” What’s to be clear? I don’t know ...
“And more able to participate in your community” … What does able mean? If it said
something like “Have you joined any clubs or groups since leaving?” and you could say
“Yes, I have.” Oh ... well then that person is obviously able to participate. Whereas
“being able to” ... oh yeah, I could. Not that I ever was, but I could.
In relation to this point, many participants argued that the tool should have allowed a space for
them to write down or explain why they chose a specific response, rather than just be expected
to circle one response. As the non-Māori participants also identified, it was important for
participants to have the opportunity to write in more detail about the issues personal to them:
I reckon the lines for explanations are missing. ’Cause you’ve got to be able to explain
each one, you know. And not just go on ... so when somebody else reads it and they just ...
say “Oh, what for” ... they don’t understand what’s going on with him. You need an
explanation... You know, just a couple of lines per question … and it gives the person a
better idea of what this other person is thinking and feels.
And I like to write why I answered that question.
It should ask why.
Well, I think it doesn’t give the individual the chance to express themselves other than
“Much more”, “More”, “No change” ... Because there are certain aspects of this I
disagree with this totally. There’s a need to express. That’s what I feel ...
If extra lines were added for consumers’ comments, many participants reported they would like
the option of having somebody else write their responses for them, allowing them to be able to
talk rather than write:
Well if I had lines down for an explanation, I’d be able to do it or somebody else could do
it.
Yeah. Because I think you get far more if you talk about it. Or give people the choice. Sit
down and write it for them. Talk about it. Because I’d be really curious, you know, that
during a time that a person was in the mental health services, be that a year or 10 years,
I’d be really interested to find out if there was anyone in there or anything that happened in
there that made them more valued as a person…
Like, I wouldn’t mind if they actually asked the questions and, you know, you’d be able to
verbally answer them instead of writing them down … Like [another participant] said,
You’d get upset writing all this down, wouldn’t you? If it’s asked of you, you can express
yourself...
This was felt to be particularly important for consumers with literacy problems:
It is not that uncommon, eh. You’re limited already. You’d never get this one.
Tāngata Whai Ora thought it imperative that the person who sat down with the consumer and
wrote the responses to their questions was someone who had had continued involvement with
that consumer’s care, and not someone who had just “come along at the end”:
Yeah. If it was somebody who had talked to me everyday while I was in there too. So not
just a new person ... to do this.
Others felt confident and comfortable in being able to write their own explanations:
Not for myself, because hopefully, like, I can explain myself ...
Participants agreed overall that a preferable method of communication for Māori was through
verbal face-to-face contact, rather than communication through paperwork:
And paperwork to me is not Māoritanga. It’s kanohi-ki-te-kanohi. Come talk to me. Don’t
talk to your bit of paper and write about me, you know? There’s been enough of that.
What I found missing in my experience was that nobody very much actually came and found
out what was happening in my life, that I didn’t feel safe and I had just been beaten up, that
I needed to go to the Police ... and all sorts of other things, you know, that were really
happening. Those things were never addressed. So you, know, this doesn’t do it for me …
And personally, I just don’t like paperwork about me, you know.
A number of comments were made about the need for the assessment to focus on how to maintain wellness rather than reflect on it:
Because it says here “healthier from a spiritual point of view after intervention” ... Well, I
mean, I feel it should have said something like “how do you hold on to this healthy spiritual
...” you know… to stay spiritually attached to your ... I mean while it varies … after coming
out of the hospital, well, then you do have an experience. You know, you’ll say you’re close
to God or whoever your higher power is ’cause you’ll feel quite naked and vulnerable ... I
just find that some of the questions are vague ...
Like “more committed to having good physical health”. Well, after being sick, well, then
you know that you’re better and you want to stay like that but how do you sustain, or stay
like that, you know.
Despite these concerns with the measure, a number of positive comments were made by participants, ranging from feeling assured about the tool’s capabilities because “professionals” were involved in its development, to finding it “quite easy to fill out”:
It just talks about self-esteem ... to help your relationship with your whānau. [I find that
good.] Very good.
Well, I like it. It’s short. It’s easy to fill out ... and that’s quite positive ...
A number of participants identified that the completion of any self-assessment measure would
be more difficult when the consumer was unwell:
’Cause I felt really shitty when I first got introduced to the system.
Yeah ... if you’re on medication, you probably couldn’t read it.
As some participants pointed out, this could lead to incorrect assessments:
... you could circle all “Much more”, “Much more”, “Much more” and you might be really
sick. And they will come along and say “But look, you’re doing really well.”
... someone might assess themselves and say “Yeah I am ... [well]” but ... they could be
lying! They could be high on medication ... at the time they asked for it to be done ... They
just mightn’t have been in the right frame of mind ...
Some participants’ positive feelings toward the tool were focused on its capacity to assist
clinicians in what participants perceived to be more accurate assessments of them, although they
felt this was dependent on clinicians asking them the reasons behind their responses:
Like, it really tells the professional about the sort of state of mind that they [consumers] are
[have].
Well, people would get a better understanding of where you’re at when you sign these
things, you know what I mean? You know, they’ll understand where your head’s at a little
bit. ’Cause they’ll have to ask you the questions afterwards. You know, why did you write
that? And you can explain yourself, you know?
Oh … “what’s good about it? ”... ’Cause we get to measure ourselves and how we feel.
’Cause the psychiatrists ... they’d get a… a bit of an education of how we are feeling inside.
A number of participants intimated that the tool could be used to their advantage. For example,
if a positive mental health well-being self-assessment influenced a discharge from hospital,
some participants felt a tool such as Hua Oranga could be manipulated towards this outcome:
Like, if they get a person that writes “much more” ... “Oh, he’s well” … You know. You
could write a bad buzz if you were in there, eh ...
You’re trying to give the answers they want to hear ... because you want to make a positive
impression so, you know, they leave you alone.
Conversely, participants were concerned that non-compliance with such an assessment might
result in what was perceived as a less positive outcome:
’Cause if you’re, like, anti the service ... and you’re, like, “much less”, “much less” ...
(Laughter) “Oh, this fulla’s gotta stay here for longer.”
Participants commented on the value of this tool in helping them reflect on both their current
well-being and the positive changes that had occurred over time:
I’ve found a lot of strength from reading it ...
I found writing it down ... I began to learn about the way I’ve been and about myself. And
I’ve found ... learning about my illness and about myself. I found it really good.
Well, it’s something you’re looking at now. And later on ... in another 6 months time ...
you can look back at it and see what you were like before.
A number of participants expressed concern at the prospect of a clinician completing a similar
assessment of their mental health well-being. They did not have confidence in clinicians to be
able to do so satisfactorily:
Like, it’s got here “Do you feel healthier from a spiritual point of view”. I don’t know how
a clinician can figure how I’m healthier from a spiritual point of view.
It’s got no room for explanations [so] ... why would they understand your health problems?
I don’t see how they could figure that out either. They’d just take a stab in the dark ...
I doubt the psychiatrists would actually be able to fill this out, eh. ’Cause they don’t even
get to know you, eh. You know, they don’t get to know you. They just meet you for 5, 10
minutes, and they think that they know you ...
Or they read your files and they know you as if they were your mates at school ... and
they’re not...
The triangulated approach to assessment was challenged by participants who queried what the
outcome would be if two of the three assessments were different from the third – if there were
“two against one”. Despite being informed there was supposed to be no prioritisation of
assessments, participants were suspicious of the weight given to clinicians’ assessments in
comparison with theirs. They also queried whether they would get to see the whānau and
clinician assessments, given current practice did not actively promote this:
’Cause you don’t really see what the doctor’s put down there …
Yeah. That’s what I wanted to ask you, ’cause you might fill this out. And then they take
all the assessments and the other two are more agreeable than your one and they’re saying
... and they turn around and go “No! You’re not well!”
Yeah, everyone else around you might think “No ... she’s not well”... then what would they
do with this survey?
Yeah, where’s it going to?
Crisis Hostel Healing Scale
As was suggested with Hua Oranga, any self-assessment would be made more difficult when the consumer was either unwell or had a limited concentration span. Some participants indicated
this was the case for them:
When I got to the first three I thought ... I looked at the rest of the page and I thought ... I
am not actually sure how far ... can’t cope with it at the moment. It’s too long for me ...
and to understand ... and I struggle. I’d give up after the first ... It’s too long.
I have got past page one. I can’t actually be bothered to go on through the whole lot. It
seems to be overwhelmingly much.
A number of participants spoke about the need to have time to consider their responses before
giving them. They felt this was particularly important for consumers in hospital who might not
have the energy to undertake a self-assessment all in one go:
I like it better than this (Hua Oranga), but the same thing, like, I know that if I was in
hospital back then ... and if they sort of, like, gave me time to do it, you know. Gave you at
least a couple of hours to do it … [rather than] if someone was still, like, standing over you
and was kind of like “Do it!” ... bloody, in 5 minutes or something. And you were really
exhausted ... and stuff like that.
Yeah, I found that I “Strongly Agreed” with most of them. But if I had, like … [if] I could
maybe come back to work on them ...
However, the preamble at the beginning of the tool indicates that someone is asking the
questions of the consumer, which people considered a positive approach:
But, yeah, forty questions is too long. And like they said before, if you’re in hospital and
you’re heavily medicated, I mean you wouldn’t know what you were doing really when
you’re filling this out! But if you had someone with you who … talked you through it and
could explain things to you and maybe that would make it a better way to do this.
Participants identified that an interview format provides the consumer with the opportunity to
tell the person asking the questions when a question was not applicable and possibly why:
Yeah, well I personally ... I like it … It covers everything and because of the person going
through it with me, I could expand on…
However, Tāngata Whai Ora warned they would not feel comfortable completing this tool with
somebody with whom they did not have an established rapport.
While acknowledging it was a particularly long tool, a number of participants considered this
was necessary for a more accurate assessment of their well-being:
I believe the amount of questions are necessary so that we can be traced whenever we shift,
which area we should be in, because it asks you how do you feel now, it gives you a time to,
in retrospect ... to track the period between the 6 months. It doesn’t look clear back to the
beginning of your life, it just looks at certain periods of time ...
I thought that it was really good actually. I thought that the questions were straight to the
point, you know, they weren’t broad, like the first one (Hua Oranga). The first one I found
that the questions were broad and so you had to give a written or oral explanation as to
what you really meant. Whereas this tool, you know, some of the questions were much
easier to answer ... much clearer to answer ... well, I was able to answer them in a way that
I felt comfortable with.
Outlining what the questions were going to be about in advance was regarded positively:
To start … it’s too long. But it states what the questions are going to be about. It says
here, you know “your well-being, emotional state and aspects of your life”. It also says in
there if it does not apply to you please tell them. So it gives you the chance to write, and to
make comments as to why it doesn’t, it’s not applicable ... or you don’t like the question.
And it says about answering the questions carefully so you have to give some thought to it.
It’s not just “yeah, no, yeah”.
One Tāngata Whai Ora raised the issue of having whānau support when being assessed, whether
via self-assessment or otherwise. This concept is not uncommon for Māori and for many is a
given. The omission of any statement at the beginning of the tool that endorsed this practice
was viewed as a shortcoming:
It doesn’t ask if you want whānau or significant others along with you during the process.
’Cause sometimes when you’re going through those care-plans or whatever, or a survey,
they might spring a question on you and you’re not prepared for it. And you might feel like
you’re intimidated or kept in the dark because they have just sprung it on you. And so
you’ve had to go out and reach out for the support that you need.
The tool appealed to some participants because it acknowledged things that they found
particularly relevant to their circumstances. For example:
I like that 29, though – “My awareness of different ways of healing is increasing” … The
fact that it’s got healing in it. When I said I was a healer I got locked up … Yeah, I said I
was a healer, bro ... and I got locked up.
And I like that it mentions abuse. Like, often those are the sorts of things that get ignored
that impact on our mental health quite a lot ... whether we’ve suffered abuse in the past,
you know, before we came into hospital.
Participants considered the question on abuse (number 6) was more of a talking point because it:
doesn’t actually go into whether the situation is better now or whether there’s been an
improvement or whether you are actually moving back into the same situation.
if you’ve got some help or some support or some counselling.
All these were seen as factors that required further information sharing.
A number of Tāngata Motuhake felt able to relate to question 34 (“I become hostile when I
express my feelings”), which stimulated discussion on whether they felt it was “OK to feel
angry”:
Yeah, I can relate to that. Especially when I talk about my sexual issues, eh. I get all
hostile with it, eh. [Then I]… rant, rage, yell and scream ... fuckin’ break ups … [Then] I
get all ... I get locked up.
I get hostile where I can’t express my feelings across to someone. When my feelings are …
I get really hostile.
However, they pointed out that the question was not clear about the kind of feelings expressed that might cause them to feel hostile. Feelings of anger were acknowledged as causing hostility, but anger expressed in a healthy way was felt to be “part of nature”:
It’s alright to be angry as long as you don’t become violent ...
Question 3 (“I have regained my sense of humour”) was felt by some to be presumptive; others
felt it was important to ask:
“I have regained my sense of humour” ... I never lost it! (Laughter)
Yeah, and things like sense of humour, that is important to me because those are some of
the things I lost when I’ve been feeling depressed.
Participants identified some obvious omissions in the tool. It was perceived as assessing “a
lot about emotional stuff and feelings” without necessarily considering many of the practical
things participants identified as important for their well-being, such as contact with or access to
children, or care of pets. As was the case with the non-Māori consultation, participants were
particularly concerned with the lack of assessment around relationships with others, particularly
whānau and friends.
A number of participants identified the lack of questions about money. While the tool does ask
about having ‘enough resources to live well’ (question 38), this was considered inadequate:
It hasn’t got anything about money in there. And that’s quite an issue for everyday life, you
know.
And they haven’t asked any of the basic questions like “Do I have enough money to live
on?”, “Do I get on with my family?” ... questions like that ... questions like we’ve got on up
here [participants’ list of factors important to them for their well-being] … they haven’t
asked any of those questions - “Do I eat regularly?”, “Do I exercise?”, “What are my
interests?”, “What are my goals?” ... It hasn’t got anything like that on here …
It asks if I’ve got enough money to live well. It doesn’t bring up whether I need help with
this or not.
As with Hua Oranga, participants intimated the tool could be manipulated to falsely ‘improve’
their well-being assessment:
And also, I suppose ... because I would again use it to get out, if I wanted to get out. This
one seems easier to use to get out than the other one [Hua Oranga]. If you give the
answers that they’re expecting, exactly what they’re expecting, you know. If you are there
for, say, an eating disorder or something, of course you say “Oh, I feel in control of my
eating habits now” ... let me out of here so I can get real control, the sort of control I want!
It sort of gives you the idea of what the psychologists want to hear from you, eh.
Assessment of Wellness Outcome Tool
A number of participants considered the phrasing of the statements in the questions peculiar,
and implied they would be difficult to measure:
… I would have a real problem with this one because the questions are really, really
strange … And you know, “You have more spiritual strength” ... how would you even know
that for yourself? I mean, that’s such a high thing. And then, to connect it up with a smiley
face, I don’t know. It’s demeaning ... “You are better treated and more accepted by the
community in which you live” ... I found that... how would you measure that?
A different aspect of this tool compared with the other two (Hua Oranga and Crisis Hostel
Healing Scale) is the section that asks consumers about their satisfaction with services. Tāngata
Motuhake liked this option:
That’s what the other two need, eh?
There was discussion about whether service satisfaction fell within the realms of well-being
assessment. Participants argued that “to some degree your wellness or unwellness will be
determined by the service you receive” and therefore it was appropriate that such questions were
included:
But if the service isn’t working for you, then you’re gonna get more sicker. If you’re
staying in the unit ... if you’re locked up ... you’ve gotta have more say in it...
Others felt the benefits of being able to comment on service satisfaction were dependent on who
was doing the assessing, implying consumers may not always feel able to provide honest
feedback for fear of repercussions.
Having it clearly stated that more paper could be used if needed provided participants with a
feeling of freedom to explain responses:
Well, I believe it’s allowed us to express ourselves. And that is the bottom, it says “Use
more paper if you need to”. If you want to write a volume, go ahead. (Laughter)
However, many participants said they would much prefer a tool that allowed them to give only verbal responses, rather than written ones. As reported in the Hua Oranga consultation, the majority of participants preferred the Māori approach of kanohi-ki-te-kanohi, face-to-face discussion, for any assessment. Written responses were not felt to give accurate assessments of their well-being:
No, not really. ’Cause you can fill in paper but it’s not the same as talking to someone.
… it doesn’t say anywhere where you can talk to someone.
The tool appealed to some participants because it acknowledged what they found particularly
relevant to their circumstances. For example:
And I like question number 13 (“You are more clear and consistent in your thinking”)
because I think that’s important ...
Some found questions 8 (“You have more awhi or trust in people”) and 10 (“You have a
stronger connection with your culture, whakapapa or ancestral background (e.g. Māori, New
Zealand, Samoan, Hindu, Jewish)”) pertinent:
Yeah, I think it’s good because it acknowledges your culture.
Others felt completely differently:
I don’t like the tokenism of those Māori words put in there.
As with the other tools (Hua Oranga and Crisis Hostel Healing Scale), it was identified here that
responses could be manipulated to present a positive self-assessment:
… It’s one of those ones where you could probably put an answer just so it was right for
them.
Tāngata Whai Ora noted that while the tool did acknowledge relationships with friends and
whānau in the assessment, a more useful perspective might have been to focus on the
understanding the friends and whānau had of the consumer’s mental illness.
Overall, when participants reflected on all the factors they considered fundamental to their well-being, few were missing from this tool. One aspect that was missing was money:
They do ask a lot of questions that do relate to this [indicating list developed by
participants] but not in detail ...
Fono
While participants generally supported the ‘I am’ framing of questions, this group was divided over whether they preferred the ‘I am’ or the ‘you are’ framing. Some felt the use of ‘I am’ served to individualise the tool too much, which they felt was not culturally appropriate. Others, however, preferred the ‘I am’ wording, feeling that it provided the individual with a sense of ownership of the process.
In the Crisis Hostel Healing Scale, participants were particularly concerned about the lack of
items dealing with relationships with others, and in particular with family. They also
specifically indicated they found the lack of items relating to cultural identity, connectedness
and spirituality unacceptable. Participants acknowledged the inclusion of those dimensions in
the Assessment of Wellness Outcome and Hua Oranga tools.
Participants of this group expressed more concern about items that had direct and specific
reference to subjects such as self-harm, abuse, sex and violence. They argued that the approach
to such issues was not sensitive enough and would be considered highly offensive by some
Pacific people. Overall, they felt it would not fit comfortably or safely with Pacific people. They
emphasised the importance of one-to-one verbal communication in dealing with such issues.
Preferences
The final task of each general consumer consultation meeting, the Māori consumer hui, and the
Pacific consumer fono, involved asking participants to rate the three tools in order of preference.
All groups were very reluctant to express a preference for any of the tools presented.
They argued strongly that none of them was satisfactory for adoption as a self-assessed measure
of consumer outcome for the New Zealand context.
In terms of the preferences expressed by the groups, there was no consistency or common trend.
With the three measures considered, almost every possible permutation of order was reflected
through the preference results.
Deaf consumers’ consultation meeting
Participants of this forum were asked to reflect on the needs assessment tool that is currently
used by the deaf mental health service. Overwhelmingly, people were concerned about making
mistakes in their completion of the tool. This seemed to be exacerbated by the fact that people
had difficulty with understanding the questions in the tool, which were constructed based on
English language and grammar:
Yeah, but what if I make mistakes and we don’t even know if I have made mistakes?
Well, it is hard for me because I can’t even read the language there.
In response to this issue, participants argued strongly that such a tool is of no use to deaf mental
health consumers if it is not constructed and presented in a manner they can understand:
There is not enough, they approach it in the wrong way, so they need another system that
would be more helpful, that would be, that would work better. Understanding
communication is a very important thing for us, and the fact that we can’t even understand
the form is not a very good start.
In the main, participants believed the preferred method for deaf consumers would be a tool translated into basic sign language and completed, by way of a face-to-face interview, through an interpreter or, failing this, a basic sign language interpretation of the tool presented on video.
With considerable support from the deaf mental health workers present at the meeting,
participants were able to reflect on some of the questions contained in the tool. In general,
participants thought that many of the questions were ‘cheeky’, condescending and not relevant
in terms of what they asked about. For example, participants were particularly critical of the
questions around a person’s ability to undertake domestic chores. While they acknowledged the
need for a general question around the upkeep of the home they did not feel this issue needed to
be covered in as much specific detail as with the needs assessment tool.
Participants had a far more favourable response to the questions about communication.
Participants generally agreed they would prefer a tool that focused more on how they were
feeling in themselves and in their relationships with others.
Mental health service provider consultation
Approximately 30 people attended the mental health service provider forum. The participants
came mainly from non-government mental health organisations and were in a range of roles
including management, service provision and consumer advisory positions.
Overall, participants who attended this forum expressed support and enthusiasm for the development of a self-assessed measure of consumer outcome.
From their perspective, participants identified that some form of independent administration would be the best way to integrate a self-assessed measure of consumer outcome
into their respective services. They felt an independent process would facilitate results that
were more reliable and, consequently, of more use to them in service development.
Participants were unanimous that there needed to be clear information about the purpose and use
of self-assessed measures of consumer outcome, for consumers and service providers alike. In
addition, participants stressed the importance of timely and constructive feedback on the results
of outcome measures.
Interviews
The interviewees were unanimous in their view that mental health outcome measures need to
cover more domains than those focused on assessing symptomatology. In fact, people argued
strongly that outcome measures should ideally cover a range of domains described generally as
those aspects of life and functioning associated with recovery, including specific domains such
as relationships, physical well-being, participation in community as desired, ability to undertake
day-to-day tasks, spirituality (including the concepts of meaning, purpose and hope), cultural
identity and connectedness, basic resource issues (e.g. employment, housing) and mental well-being (which includes symptomatology). Some also felt satisfaction with services should be
included in an outcome measure. A few questioned whether an outcome measure could be
developed based on the concept of recovery, given that a key element of the concept is
subjective and personal perspectives of well-being.
The majority of participants supported the definition of recovery as publicised by the Mental
Health Commission: ‘living well in the presence or absence of mental illness’. In terms of
outcome measurement, a couple of people were concerned about the end-point connotations that
exist with the word ‘outcome’ when the concept of recovery is very much perceived as an
ongoing process. One person was particularly vehement that outcome measures should not be described as tools that can measure change; rather, it is the process of administering an outcome measure on more than one occasion that facilitates the identification of any change that has occurred between measurement points.
Participants stressed that a flexible approach needs to be adopted for the completion of outcome
measures. Numerous methods were suggested including pen and paper, face-to-face/phone
interview, discussion group, and computer-based systems. All those interviewed from a Māori
or Pacific peoples’ perspective reported a preference for a method that involved person to
person communication. Several people highlighted the need for consumers to feel safe enough
to complete a measure, both with the material and the process. Some interviewees felt
consumers would be less open to a self-assessed measure of consumer outcome administered by
service providers.
Interviewees concerned with the perspective of deaf consumers highlighted the issues involved in developing an outcome measure that could also be suitable for deaf consumers. They
stressed that it is not simply a matter of translating a measure into sign language. Sign language
has a completely different context and construct base. As a result, deaf consumers can often
have difficulty and feel alienated and concerned about misinterpretation when communicating
through a mainstream-constructed method. For this reason, interviewees suggested the
involvement of the deaf mental health sector in the present project was necessary so the distinct
views and needs of that population could be considered. Accordingly, an additional consultation meeting with deaf consumers was organised as part of the overall consumer consultation.
Participants argued strongly that there was no need for separate measures based on illness type,
particularly if the measure was not (as they all believed it should not be) focused on
symptomatology. They also argued that different measures were not required for different types
of intervention setting (i.e. inpatient service versus community-based service). However,
several participants pointed out that depending on the focus of a particular service, different
domains within an outcome tool would be more (or less) pertinent.
The majority of participants thought the domains included in a self-assessed measure of
consumer outcome should be based on a combination of what consumers consider to be the
indicators of mental well-being and the indicators identified through research on causes and
cures of mental illness, albeit with a bias towards the consumer perspective.
Most people felt different versions of one single measure (rather than different measures) could
be used to assess outcome from the perspective of consumers, family/whānau, and service
providers. However, consumer interviewees argued that if a triangulated approach was used,
there should be some method for greater weighting of consumers’ views than of the
family/whānau and service provider perspectives.
People were unanimous that outcome measures should facilitate the assessment of change
however caused, whether by the interventions of mental health services or by factors such as
what the person had done to help him/herself and/or support from family and friends.
All interviewees felt it was very difficult to determine definitively the cause or causes of any
change in mental health well-being. In the main, people reported that the best way to determine
this was by asking consumers what they thought. Some interviewees felt that ‘simply asking
people’ was a worrying practice because people are too unreliable and subjective in reporting
the cause of change. Rather, they recommended that rigorous empirical research methods were
needed to determine this.
Interviewees agreed that once collected, data from outcome measures should be used at a
number of levels (individual, organisational, national) and for a variety of purposes including
reflection, communication, monitoring and decision-making. People argued strongly that outcome measurement should not be undertaken simply as a data collection exercise; it was imperative that the information resulting from the process be used purposefully.
Everyone reported that consumers should be strongly involved in the development of a self-assessed measure of consumer outcome. This included roles ranging from directing and undertaking the entire development process through to being significantly involved in it. It
should be noted, in relation to this last point, however, that the interviewer was a consumer
herself, so interviewees might have felt restricted in responding to the contrary.
Interviewees in the main reported they were not aware of any existing outcome measures they perceived to be really good. Rather, their main concerns were that the measures were not relevant to the New Zealand context and were generally symptom focused.
Survey
Questionnaires were nationally distributed to 360 mental health services, and replies were
received from 158, a response rate of 44%. Only 63 (40%) respondents indicated they currently
used a self-assessed measure of consumer outcome within their service, and 10 (6%) provided
no reply. Some 43 different measures were in use, ranging from those developed locally to
those used internationally. Although some 28 services indicated they used the Mental Health
Consumer Survey, it appears likely this was interpreted as a generic term rather than a specific
measure. The next most commonly used measure was the Standards Satisfaction Survey, used
by 10 services; then Hua Oranga, used by 5 services. All other measures were used by 3 or fewer
services.
Of the 46 services that indicated the proportion of consumers offered the opportunity to
complete an outcome measure, 26 (56%) involved over 90% of consumers, while 12 (26%)
involved 50% or less.
Of the 43 services that estimated return rates, 16 (37%) reported high rates of return of 70% or
more, while 10 (23%) reported return rates of over 90%.
Service size, indicated by the number of consumers attending the service during the year,
included the full range, from under 100 to over 1000. Ten of 34 (29%) responding services
were attended by fewer than 100 consumers per year, and the same percentage were attended by
over 900 per year.
The time taken to complete the measures generally varied between 5 and 30 minutes, although
some took longer. Nearly half (47%, 20 of 43) took 10 minutes or less.
Of those respondents already using a self-assessed measure of consumer outcome, 84% (51 of
61) considered such a measure either very useful or reasonably useful for an organisation to use.
Of those respondents who reported they were not currently using such a measure, 75% (62 of
83) thought a self-assessed measure of consumer outcome would either be very useful or
reasonably useful, and 22% (18 of 83) were undecided.
Discussion
From a comprehensive examination of the recovery literature, the following set of domains that consumers (across cultures) considered important in terms of their mental well-being was identified:
• relationships, trust, connectedness, taha wairua/whānau, whānau/family support, social support, interdependence;
• day-to-day functioning, coping and managing, including work (having the ability to work), taha tinana;
• connection to one’s culture, cultural identity, drawing strength from one’s culture, taha wairua;
• physical health and health risks, taha tinana, includes alcohol and drug use, side-effects of medications, sleeping and eating;
• quality of life, life satisfaction, enjoying the environment, feeling alert and alive, able to enjoy pastimes/hobbies;
• illness symptoms, taha hinengaro;
• coping with and recovering from illness, self-managed care, staying out of the mental health system, understanding of illness;
• hope, journey from alienation to purpose, reawakening of hope after despair;
• empowerment, being in control, exercising choice, positive sense of self, self-determination;
• spiritual strength, increased spirituality, taha wairua;
• resources, basic needs (e.g. food, money, accommodation, transport), and
• satisfaction with services (including cultural relevance of services).
The project consumer reference group supported this set of domains of mental well-being and
felt they should be used to form the substance of any self-assessed measure of consumer
outcome for the New Zealand context.
In addition, the group responses (see Table 2) from the consumer consultation forums
corroborated the literature-based findings that the identified set of domains are relevant and
appropriate to New Zealand consumers. The majority of the responses recorded by those
groups fit within one of the indicator domains as reflected in the above list. For example, the
domain described as ‘relationships, trust, connectedness, taha wairua/whānau, whānau/family
support, social support, interdependence’ was reported, in some way, by all groups using the
following terms: relationships, friends, family support, social support, do not isolate, supportive
and understanding people, children, whānau, relating to others, socializing, stable relationship
with partner, loving friends, employment associates, peer support, enjoying being with people,
participation in the community.
An equally large list of responses relates to the ‘resources/basic needs’ domain: “Managing
finances, decent place to live, adequate income $$$, coping with a benefit, bank account – how
much money in account?, food, am I living in suitable/friendly/light accommodation?, am I
living within my budget?, housing, financial sustainability, food/shelter/work, accommodation,
money”.
The importance of ‘quality of life, life satisfaction, enjoying the environment, feeling alert and alive, and (being) able to enjoy pastimes/hobbies’ was emphasised by the large number of reported factors that were relevant to this domain: “interest in external world – people/issues, meaningful way to spend your time, having fun, activities, are you happy?, are we happy with our quality of life?, still pursuing hobbies/interests, etc., energy, leisure time, going for walks, playing sport, bike riding, am I doing things that are meaningful to me with my time?, music, holidays, trips away, poetry, movies/videos, recreation, diving, dancing, rapping, reading, drawing/arts and crafts, sports, fishing, camping, hobbies, outings (trips, sports, clubs), going to the movies, work/play/fun, creating/making things, going out/walking/shopping/activities/feeling good”.
It is clear from these results that the measurement of mental health outcomes, from the
consumer perspective, is much wider than a consideration of symptomatology. This result is
supported by the people interviewed who argued strongly that outcome measures should not
focus on assessing symptomatology but should ideally cover a range of domains described
generally as those aspects of life and functioning that are associated with recovery. Given this,
people emphasised that different measures were not required for different illness types. In
addition, interviewees were generally of the view that different measures were not required for
different types of intervention setting, although this may not be the same for services that are
delivering from a distinct cultural perspective such as Kaupapa Māori services.
The use of a single tool certainly has implications for the implementation of an outcome
measurement process. Using one measure across illness and service types reduces the amount
of administration involved and alleviates any difficulties associated with possible conflict of
results between different measures. In proposing such an approach, it is important to
acknowledge that the impact of any given service intervention may be predominantly on a
single domain or a few domains, although it will not be the sole factor influencing such
domains.
For example, the domains most pertinent to a supported employment service would most
probably be “resources, basic needs (e.g. food, money, accommodation, transport)”;
“Empowerment, being in control, exercising choice, positive sense of self, self-determination”;
“Hope, journey from alienation to purpose, reawakening of hope after despair”; “Quality of
life, life satisfaction, enjoying the environment, feeling alert and alive, ability to enjoy
pastimes/hobbies”, and “Day-to-day functioning, coping and managing, including work”.
The other benefit of using one tool across different types of services is the consistency for
consumers. Having a single measure allows consumers to become and stay familiar with the
content and processes involved with one tool. A frequently expressed consumer concern is the
lack of consistency both within and across services. This can create confusion and impact on
how comfortable consumers feel undertaking assessments within services.
The responses specific to the hui and fono consultation forums highlight the importance of
cultural matters in relation to mental well-being for Māori and Pacific people. This is shown by
the reporting of things such as Māoritanga, Te Reo, Te Whare Tapa Whā, karakia, singing
waiata, whānau, and spiritual well-being in the consultation forums. Given that the literature
review covered material particular to Māori and Pacific people’s concepts of mental well-being,
the sum of indicators specifically includes items of cultural import, such as connection to one’s
culture, cultural identity, drawing strength from one’s culture, and spiritual strength, increased
spirituality, taha wairua. It should be highlighted, in relation to spiritual well-being, that there
was also considerable reference to spirituality in the general consultation meetings.
One concern with the overall set of indicators is the appropriateness of ‘empowerment’ to all
groups, as explored earlier in the literature review. ‘Empowerment’, in the recovery literature,
usually refers to concepts such as a sense of choice, personal power over one’s life,
assertiveness, and confidence in dealing with authority figures such as mental health
professionals (Rogers et al., 1997). It was highlighted that some cultures do not view all these
concepts of empowerment positively, particularly the aspect relating to questioning authority
figures (Beale, undated). For this reason, within the list of domains here, ‘empowerment’ has
been specifically limited to ‘being in control, exercising choice, positive sense of self, and self-determination’. These concepts were all identified as important to Tāngata Whai Ora through
the consultation hui.
The inclusion of service satisfaction constructs within outcome measurement is contentious. It
is clear from consumer literature and consultation that people believe satisfaction with services
(which includes cultural relevance of services) affects their mental well-being and should be
included. For example, the responses to the first question put to participants at the consultation
forums included: “Support services, having access to the right services, support from support
workers, how available is a psychiatrist?, no crabby nurses, am I happy with my relationship
with mental health services/my mental health worker in relation to respect/equal partnership, do
I know that an appointment with my psychiatrist will change anything for me?, communication
with service, education about service, good relationships with doctor and psychiatrist etc.,
access to services when needed, psychiatrist/counsellor, making my own choices concerning
treatment and support/clinical people involved with me”.
In addition, consumers identified that a negative aspect of the Crisis Hostel Healing Scale tool
was that it contained no items relevant to assessing satisfaction with services.
However, if satisfaction with services is to be included in an outcome measure, it is important to
recognise that many consumers fear the repercussions of reporting dissatisfaction with a service
and are concerned how their responses will be used and who will have access to them. People
argued they would be cautious about providing honest information when responding to outcome
measurement tools generally, for fear of the possible effects (such as extension of assessment
and treatment procedures). This point was also made by some interviewees and participants at
the service provider meeting, who felt the openness of consumer responses to a self-assessed
measure of consumer outcome would not be facilitated by a method administered by service
providers. This highlights the extreme importance to consumers of an open and transparent process for collecting outcome measurement information. This seems to be one of the main factors that will impact on the reliability and validity of outcome measurement results. Given the issues discussed, it is recommended that a process be developed whereby complete anonymity of outcome results could be an option for those who desire it.
This issue ties in with the consideration of how outcome measures should be completed. Participants were unanimous that a flexible approach was needed to accommodate different levels of literacy, comfort, safety, and simple preference. They suggested the following options should be considered: pen and paper, face-to-face/phone interview, discussion group, and computer-based systems. Interviewees from a Māori or Pacific peoples’ perspective reported a preference for person-to-person communication. This point was reinforced by the results of the
consultation hui, where participants reported they would feel more comfortable with an
interview-type process that allowed them to give verbal responses rather than written ones, as
these were not considered able to give accurate assessments of their well-being. Deaf
consumers wanted a tool translated into basic sign language and then completed, in a face-to-face
interview, through an interpreter. These findings support the Ohio research where quite
different preferences for methods of completion were identified across different cultural groups.
If interviews were used, participants argued strongly that they would prefer the interviewer to
be a peer support person or advocate, rather than a service provider. Māori consumers argued
they would prefer the interviewer to be someone who had continued involvement in their care.
This flexibility of approach obviously needs to be a prime consideration when the type and
content of the actual measures are considered. In addition, the effect of process will require
specific attention through the testing and evaluation of any tool.
A more difficult issue around service satisfaction is the level of consumers’ expectations in relation to service delivery. In their research, Bridgman et al. (2000) identified some evidence
that suggests consumers’ expectations in this regard are low. This is really an advocacy issue.
For consumers to expect a satisfactory level of service delivery, they first must be informed and
educated about their rights. This needs to be addressed through a joint and concerted effort by
services and advocacy and rights organisations.
From the data on each of the 18 self-assessed measures of consumer outcome chosen for in-depth analysis, it was apparent that none of the tools covered all the domains that had been
identified through the literature review as important to consumers. Of particular note is that
none of the overseas-developed measures included any items particular to cultural
identity/connection, and very few covered the domain of spiritual well-being. In addition, only
a couple had been tested across different cultural populations. This severely limited the number
of measures that could be considered unconditionally appropriate for the New Zealand context.
As a result, the research team considered that if the rest of the content of a tool looked good,
then such a tool could be investigated further for possible development and revision to cover
culturally specific indicators. Despite this, in choosing the short list of measures to be taken to
the reference group, the research team identified that the three New Zealand developed
measures (Assessment of Wellness Outcome Tool, Hua Oranga, and Lotofale Evaluation
Measure) had better overall content coverage of the sum of domains consumers (across cultures)
identified as being important to their mental well-being. The other three measures chosen for
the short-list were the Behavior and Symptom Identification Scale (BASIS-32), the Crisis Hostel
Healing Scale, and the Mental Health Inventory (MHI).
After considering these six measures, the reference group was unanimous that none were
appropriate for direct application in New Zealand. However, they did choose three they
believed were most appropriate for taking out for wider consumer consultation (Assessment of
Wellness Outcome Tool, Hua Oranga, and Crisis Hostel Healing Scale). The results from that
wider consultation largely echoed the views of the reference group and raised a number of
interesting issues about self-assessed outcome measurement in general and the specific content,
format and lay-out of the individual tools considered.
Consultation participants supported the views of the research team and the reference group that
the primary purpose of outcome measurement should be focused on direct potential benefits to
consumers. In particular, people emphasised the value of using self-assessed measures of
consumer outcome as tools that supported them to reflect on and monitor their mental well-being. In addition, participants were emphatic that outcome measurement tools should not be
restricted to and/or focused on outcomes that are only a result of formal intervention. This was
identified as a particular problem with the Hua Oranga measure, which restricts the assessment
of outcomes to the perceived results of intervention. Consumers were concerned that this
perspective was likely to lead to misinterpretation of results. For example, a ‘no change’ result
could be interpreted as ‘no change’ in the mental well-being of an individual, when in fact there
had been considerable change, but not as a result of the intervention. In contrast, participants
evaluated positively the fact that the Crisis Hostel Healing Scale had been developed more
around the concept of assessing personal well-being rather than service effectiveness.
Interviewees also argued strongly that outcome measures should facilitate the assessment of
change, however caused, whether as a result of mental health service interventions or other
factors, such as what the person had done to help themselves and/or support from family and
friends. Furthermore, interviewees emphasised that it is very difficult to define an exclusive
reason for change. While the majority of interviewees felt the best method of determining
causal factors was to ask consumers what they believed change was due to, some people,
concerned about issues of subjectivity and unreliability, advocated rigorous empirical research
methods to determine causation. These findings support the introductory section of this report
where a reconsideration of the current definitions and precepts of outcome measurement was
recommended as necessary to reflect the quite distinct and separate concepts of outcome
measurement and attribution measurement.
In relation to the actual content of the tools that were taken out for consultation, it was
consistently identified across groups that particular questions were based on a clinical
perspective and/or judgments that did not sit comfortably with the majority of consumers. None
of the measures analysed as part of this work were fully developed by consumers themselves.
Both the research team and the reference group felt strongly that consumer participation in the
development of a self-assessed measure of consumer outcome was vital to generate a tool not
only relevant to the needs of consumers but also sensitive to their perspective and position.
Given the lack of substantial consumer participation in the development of most tools, it was not
surprising these issues were consistently raised throughout the consultation process. Despite the
relevant literature highlighting the value and importance of consumer participation, the results
of the present research raise questions about the effectiveness of the way consumers are
currently involved in the design and evaluation of outcome measures. We contend that
consultation and participation are not effective strategies to ensure a consumer perspective and
that consumers actually need to take a leading role in terms of the entire process of the
development and testing of self-assessed measures of consumer outcome.
In relation to the framing of the questions contained in the measures taken out for consultation,
participants were highly critical of the style where ‘more than’ was used. For example, ‘you
feel more aroha or love from others’. While acknowledging the actual purpose of this type of
question framing (when you are being asked to compare yourself with how you were at some
earlier time), participants argued that they felt it signalled a presumption that all consumers
experience the same issues in relation to their experience of mental illness. This perception was
exacerbated by the lack of a ‘not applicable’ option. Non-Māori expressed difficulty with
providing a single response applicable to a time period during which they may have experienced
a multitude of different states of being, some of which, such as their last crisis, may have had a
more profound effect on them than the rest of the period in question. In contrast, Māori
participants preferred reflection on some former period. This issue has relevance to assessing
the feasibility of measures.
People generally preferred questions to be framed in the first person (I am) rather than the third
person (you are), although there was a mixed response to this issue from the Pacific consumers
who attended the fono, suggesting the need for more extensive consultation on this matter.
Attention was also drawn to the inappropriateness and difficulty of assessing matters over which
individuals have no control at all, such as whether people are better treated and accepted
by the community in which they live.
Consumers stressed the importance of ensuring that every outcome measure was well presented,
easy to follow, understand and complete; in short, consumer friendly. Several specific
considerations were identified. First, there needs to be sufficient coverage of major indicators in
enough detail to facilitate observation of significant changes in mental well-being so that it is
sensitive to change. However, there is a fine line between sufficient and too much detail, which
can lead to an over-long measure and decreased acceptance. Increasing the specificity of a
measure will lessen the overall relevance of the tool (and items contained within it) to the wider
population of consumers. For example, with the Crisis Hostel Healing Scale (the lengthiest tool
taken out for consultation), many people identified items they did not feel were in any way
relevant to their individual mental well-being. Participants of the consultation fono also
identified particular concerns with the comfort and safety of Pacific consumers responding to
sensitive items such as self-harm, abuse, sex, and violence. In seeking an appropriate balance it
should be recognised that the details and specificity of personal consumer issues are dealt with
on an individual basis rather than through a standardised outcome measure. Outcome measures
should be used as tools on which to reflect and also as tools to communicate a broad set of
domains generally relevant to the consumer population. At the same time, if individuals wish to
consider more specific and sensitive issues as part of outcome assessment, this should be
facilitated as an adjunct to the measure, provided it was clear such information would remain
personal and aggregation of data across individuals would only be in relation to the generic
domains.
Consultation participants urged that all questions be phrased consistently and that no single
question should contain more than one concept.
People generally felt that the use of rating scales, with four or five options, was valuable within
an outcome measure. However, they argued strongly that ‘not applicable’ was a necessary
option in relation to every item included in a tool.
Participants also thought it was helpful for the material of outcome measures to be separated
into thematic sub-sections.
Much of the feedback and comment from the present consumer consultation is similar to that of
other outcomes research with consumers in Australia (Graham et al., 2001) and Ohio (Ohio
Department of Mental Health, 2000). This lends even greater weight to the views on the
content, format and processes involved with self-assessed outcome measures. However, it is of
concern that currently no self-assessed measures exist that meet the parameters consumers have
widely expressed as necessary.
It is heartening to see the amount of enthusiasm and support for a self-assessed measure of
consumer outcome from mental health service providers, both individuals and organisations.
This was evident in the survey results (where 40% of respondents indicated they currently used
a self-assessed measure of consumer outcome within their service) and in the attendance and
feedback at the mental health service provider forum. However, while there appears to be
general enthusiasm for the concept of a self-assessed measure of consumer outcome, the wide
range of measures in use, many without formal validation or evaluation of reliability, indicates
either an unstructured approach to this area or a keen appreciation of the lack of an outstanding
measure. It is of concern to note the amount of confidence placed in data resulting from those
home-made measures that have undergone no psychometric testing. The literature review
highlighted the risk that relying on poor measures can lead to false confidence or even to
dangerous decisions, based on data that are neither valid nor reliable.
Perhaps unsurprisingly, organisations expressing indecision about the value of consumer-completed outcome measures were less likely to be using such a measure. In contrast, most
organisations that had taken the plunge, despite all the methodological pitfalls, reported they
found such outcome measures to be either very useful or reasonably useful.
The overall purpose of this project was to undertake the preliminary work towards the
development of a self-assessed measure of consumer outcome. While it would have been
convenient to have found that an existing self-assessed measure was suitable for unconditional
and immediate application in the New Zealand context, this has proved not to be the case.
Neither is it possible to recommend some limited revision or modification of such a measure,
due to the fundamental philosophical differences between current New Zealand consumer
perspectives and existing measures. However, New Zealand consumers have clearly indicated a
framework on which such a measure could be developed and validated, including detailed
advice on some technical aspects of such a process. The research team firmly believes this
development is required for self-assessed outcome measurement to be an effective and efficient
process for both consumers and other stakeholders in New Zealand. The following
recommendations are offered towards such a goal.
Recommendations
•	That a project be established for a self-assessed measure to be developed and tested by consumers in New Zealand. This should be based on the sum of domains identified through the literature review, and validated in the process of consumer and other consultation.
•	That the development and testing of the tool be undertaken based on some of the key findings of the present research, including:
1.	That the primary aim of the measure will be to provide a tool for individuals to assess their own mental well-being. The secondary aims of the measure should be to facilitate service monitoring and development, communication, process and systemic evaluation, and lobbying for improvement.
2.	That a flexible approach is needed in relation to the completion of a self-assessed measure of consumer outcome, with some form of face-to-face communication being the preferred method of Māori and Pacific populations.
3.	That a procedure is established, in association with the development of the tool, that will maximise consumer safety in relation to the provision of outcomes information, particularly relating to those who will have access to the information and the purposes to which it will be applied.
4.	That a process be developed for the provision of clear information to consumers in relation to how outcomes information will be used and where it will go.
5.	That a self-assessed measure of consumer outcome would need to be translated into basic sign language to be suitable for the deaf consumer population.
6.	That the measure be developed with due regard to the general and specific feedback on the three measures taken out for consultation as part of the present research.
•	That the self-assessed measure of consumer outcome be developed and evaluated with a view to it becoming part of the suite of outcome measures supported by the Ministry of Health.
References
Allott, P. & Loganathan L. (2002). Discovering hope for recovery from a British perspective – a
review of a sample of recovery literature, implications for practice and systems change.
Birmingham, UK: West Midlands Partnerships for Mental Health, www.wmpmh.org.uk.
Andrews, G., Peters, L., & Teeson, M. (1994). Measurement of consumer outcome in mental
health: A report to the National Health Information Strategy Committee. Sydney, Australia:
Clinical Research for Anxiety Disorders.
Asay, T. & Lambert, M. (1999). The empirical case for the common factors in therapy. In
Hubble, M., Duncan, B., & Miller, S. (Eds.) The heart and soul of change. Washington DC,
US: APA Books.
Atkinson, M., Zibin, S., & Chuang, H. (1997). Characterizing quality of life among patients
with chronic mental illness: A critical examination of the self-report methodology. The
American Journal of Psychiatry, 154, 99-105.
Bathgate, M. & Pulotu-Endemann, F. K. (1997). Pacific People in New Zealand. In Ellis, P. M.
& Collings, S.C.D. (Eds.) Mental health in New Zealand from a public health perspective:
Public Health Report Number 3. Wellington, NZ: Ministry of Health, www.moh.govt.nz.
Beale, V. (n.d.). The Ohio Mental Health Consumer Outcomes System family member training.
Ohio, US: Ohio Department of Mental Health.
Bridgman, G., Dyall, L., Bidois A., Gurney, H., Hawira, J., Tangitu, P., Huata, W., Webster, S.,
& Heron, M. (2000). Mental Health Outcomes Project: The assessment of wellness – an
outcomes tool drawn from the participant perspectives in Māori and mainstream mental
health. Presentation to the Mental Health Outcomes Research Conference, Wellington,
2000.
Bridgman, G. (2003). Introduction to issues in deaf mental health. Unpublished.
Burlingame, G. M. & Lambert, M. J. (1995). Pragmatics of tracking mental health outcomes in
a managed care setting. Journal of Mental Health Administration, 22, 226-237.
Campbell, J. & Schraiber, R. (1989). The Well-Being Project: Mental health clients speak for
themselves. Sacramento, CA: California Department of Mental Health.
Ciarlo, J. A., Edwards, D. W., Kiresuk, T. J., Newman, F. L., & Brown, T. R. (1986). The
assessment of client/patient outcome techniques for use in mental health programs.
Washington D.C., USA: National Institute of Mental Health.
Corrigan, P. W., Giffort, D., Rashid F., Leary, M., & Okeke, I. (1999). Recovery as a
psychological construct. Community Mental Health Journal, 35, 231-239.
Corrigan, P. W., Salzer M., Ralph, R. O., Sangster, Y., & Keck, L. (n.d.). Examining the Factor
Structure of the Recovery Assessment Scale. Unpublished Manuscript.
Durie, M. (1998). Whaiora: Māori health development. Auckland, NZ: Oxford University
Press.
Durie, M. H. & Kingi, T. K. R. (1997). A framework for measuring Māori mental health
outcomes. Palmerston North, NZ: School of Māori Studies, Massey University, Te
Pumanawa Hauora.
Eisen, S. V. & Dickey, B. (1996). Mental health outcome assessment: The new agenda.
Psychotherapy, 33, 181-189.
Eisen, S. V., Grob, M. C., & Klein, A. A. (1986). BASIS: The development of a self-report
measure for psychiatric inpatient evaluation. The Psychiatric Hospital, 17, 165-171.
Fenton, L. & Te Kotua, T.W. (2000). Four Māori Korero about their experience of mental
illness: Mental Health Commission recovery series: One. Wellington, NZ: Mental Health
Commission.
Goldberg, R. W., Seybolt, D.C., & Lehman, A. (2002). Reliable self-report of health service use
by individuals with serious mental illness. Psychiatric Services, 53, 879-881.
Gowers, S., Levine, W., Bailey-Rodgers, S., Shore, A., & Burhouse, E. (2002). Use of a routine,
self-report outcome measure (HoNOSCA) in two adolescent mental health services. The
British Journal of Psychiatry, 180, 266-269.
Graham, C., Coombs, T., Buckingham, B., Eagar, K., Trauer, T., & Callaly, T. (2001). The
Victorian Mental Health Outcomes Measurement Strategy: Consumer perspectives on
future directions for outcome self-assessment. Report of the Consumer Consultation
Project. Victoria, Australia: Department of Human Services.
Green, R. S. & Gracely, E. J. (1987). Selecting a rating scale for evaluating services to the
chronically mentally ill. Community Mental Health Journal, 23, 91-102.
Horenstein, D. B., Houston, K., & Holmes, D. S. (1973). Clients’, therapists’, and judges’
evaluations of psychotherapy. Journal of Counseling Psychology, 20, 149-153.
Kent, H. & Read, J. (1998). Measuring consumer participation in mental health services: Are
attitudes related to professional orientation? International Journal of Social Psychiatry, 44,
295-310.
Kingi, T. K. R. & Durie, M. H. (2000). Hua Oranga – A Māori measure of mental health
outcome. Palmerston North, NZ: School of Māori Studies, Massey University, Te
Pumanawa Hauora.
Lapsley, H. L., Nikora, W., & Black, R. (2002). “Kia Mauri Tau!” Narratives of recovery from
disabling mental health problems. Wellington, New Zealand: Mental Health Commission.
Lehman, A. F. (1999). A review of instruments for measuring quality-of-life outcomes in
mental health. In Miller, N. E. & Magruder, K. M. (Eds.) Cost-effectiveness of
psychotherapy: A guide for practitioners, researchers and policymakers. New York, NY:
Oxford University Press.
Lehman, A. F., Postrado, L. T., & Rachuba, L. T. (1993). Convergent validation of quality of
life assessments for persons with severe mental illnesses. Quality of Life Research, 2, 327-333.
Malo, V. (2000). Pacific people in New Zealand talk about their experiences with mental
illness: Mental Health Commission recovery series: Three. Wellington, NZ: Mental Health
Commission.
McCarthy, R. V. (1995). An index of consumer satisfaction with mental health services:
Instrument development and testing. Arizona: Unpublished dissertation.
Mellsop, G. & O’Brien, G. (2000). Outcomes summary report, Project 5.1 mental health
research and development strategy. New Zealand: Health Research Council.
Mental Health Advocacy Coalition. (2001). Policy advice to the Ministry of Health on outcomes
in mental health. Unpublished.
Mental Health Commission. (2001). Blueprint checklist for mental health services in New
Zealand: Mental health services for Pacific people, Blueprint Information Series: 3.
Wellington, NZ: Mental Health Commission.
Mental Health Commission. (1998a). Blueprint for mental health services in New Zealand.
Wellington, NZ: Mental Health Commission.
Mental Health Commission. (1998b). Report of key messages to the Mental Health Commission
from hui held February–April 1998. Wellington, New Zealand: Mental Health
Commission.
Ministry of Health. (1997). Making a Pacific difference: Strategic initiatives for the health of
Pacific people in New Zealand. Wellington, New Zealand: Public Health Group, Ministry
of Health.
Nabati, L., Shea, N., McBride, L., Gavin, C., & Bauer, M. S. (1998). Adaptation of a simple
patient satisfaction instrument to mental health: Psychometric properties. Psychiatry
Research, 77, 51-56.
Nonu-Reid, E., Lui, D., Erik, M., Pulotu-Endemann, K., & Bridgman, G. (2000). The Lotofale
development of the Fonofale model of health. Unpublished.
Ohio Department of Mental Health. (n.d.). Using an outcomes-based model to re-engineer your
organisation: Administrators and managers’ manual. Ohio, US: Ohio Department of
Mental Health, www.mh.state.oh.us/initiatives/outcomes/outcomes.
Ohio Department of Mental Health. (2000). The Ohio Mental Health Consumer Outcomes
System: Consumer Outcomes System overview. Ohio, US: Ohio Department of Mental
Health, www.mh.state.oh.us/initiatives/outcomes/outcomes.
O’Malia, L., McFarland, B. H., Barker, S., & Barron, N. M. (2002). A level-of-functioning self-report measure for consumers with severe mental illness. Psychiatric Services, 53, 326-331.
Onken, S. J., Dumont, J., Dornan, D. H., Ralph, R., & Ridgway, P. (2002). Mental health
recovery: What helps and what hinders? A national research project for the development of
recovery facilitating system performance indicators. US: National Technical Assistance
Center for State Mental Health Planning.
Prager, E. H. (1980). A client-developed measure of self-assessment in mental health. Case
Western Reserve University: Unpublished Dissertation.
Ralph, R. O. (2000). Review of recovery literature: A synthesis of a sample of recovery
literature 2000. Alexandria, VA: National Technical Assistance Center for State Mental
Health Planning (NTAC).
Ralph, R. O., Kidder, K., & Phillips, D. (2000). Can we measure recovery? A compendium of
recovery and recovery-related instruments.
Read, J. (2003). Emancipation songs: Individual participation by service users in mental health
care. Occasional paper No. 2. Wellington, NZ: Mental Health Commission.
Ridgway, P. (2001). Restorying psychiatric disability: Learning from first person accounts of
recovery. Psychiatric Rehabilitation Journal, 24, 335-343.
Rogers, E. S., Chamberlin, J., Ellison, M. L., & Crean, T. (1997). A consumer-constructed scale
to measure empowerment among users of mental health services. Psychiatric Services, 48,
1042-1047.
Ruggeri, M. & Dall’Agnola, R. (1993). The development and use of the Verona expectations for
care scale (VECS) and the Verona service satisfaction scale (VSSS) for measuring
expectations and satisfaction with community-based psychiatric services in patients,
relatives and professionals. Psychological Medicine, 23, 511-523.
Ruggeri, M., Dall’Agnola, R., Agostini, C., & Bisoffi, G. (1994). Acceptability, sensitivity and
content validity of the VECS and VSSS in measuring expectations and satisfaction in
psychiatric patients and their relatives. Social Psychiatry and Psychiatric Epidemiology, 29,
265-276.
Sansoni, J. (1995). Measurement issues and the SF-36 instrument. Health Outcomes, 9-14.
Sonnaburg, K. (1996). Meaningful measurement in psychotherapy. Psychotherapy, 33, 160-170.
Stedman, T., Yellowlees, P., Mellsop, G., Clarke, R., & Drake, S. (1997). Measuring consumer
outcomes in mental health: Field testing of selected measures of consumer outcome in
mental health, a report to the Australian Health Ministers’ Advisory Council National
Mental Health Working Group. St Lucia, Australia: University of Queensland.
Strupp, H. H. (1996). The tripartite model and the consumer reports study. American
Psychologist, 51, 1017-1024.
Tooth, B. A., Kalyanansundaram, V., & Glover, H. (1997). Recovery from schizophrenia: A
consumer perspective. Queensland, Australia: Queensland University, Centre for Mental
Health Nursing Research.
Wallace, C. J., Lecomte, T., Wilde, J., & Liberman, R. P. (2001). CASIG: a consumer-centred
assessment for planning individualized treatment and evaluating program outcomes.
Schizophrenia Research, 50, 105-119.
Walsh, D. M. A. (1999). Perspectives on needs and satisfaction with mental health services:
Views of providers and consumers. Boston University: Unpublished Dissertation.
Weinstein, R. M. (1972). Patients’ perceptions of Mental Illness: Paradigms for analysis.
Journal of Health and Social Behavior, 13, 38-47.
Young, S. L., Ensing, D. S., & Bullock, W. A. (1999). The Mental Health Recovery Measure.
Toledo, OH: University of Toledo, Department of Psychology.
Appendix One – Interview Questions
•	Do you think it is enough for a measure to ask how satisfied consumers are with the services they receive or does it need to go beyond this?
•	How would you define recovery?
•	Do you think a measure of mental health needs simply to look at changes in symptoms or would you prefer it went wider?
•	What aspects or domains of well-being do you think need to be included in a measure of mental health outcome?
•	Do you think outcome measures are best to be pen and paper, or to use some other approach, or be flexible as to the way they can be answered?
•	Do you think a separate measure is needed for each type of illness (e.g. depression as opposed to schizophrenia)?
•	Do you think different measures are needed for different types of setting (e.g. one that focuses on daily routines for people in hospital as opposed to one that focuses on wider domains of functioning for people in the community)?
•	Do you think a separate measure is needed to measure outcome from the point of view of consumers, their family or caregivers, and service providers? Or would separate versions of the same measure be OK?
•	Would you want to see an outcome measure look at changes which are due solely to professional/formal mental health interventions, or to look at changes that are due to other factors, such as what a person has done to help themselves, or support from friends and family?
•	How can we know what changes in well-being are due to (e.g. to formal services or other aspects of a person’s life)?
•	How do you think a self-assessed measure of mental health would best be administered?
•	What aims do you think such a measure should have aside from measuring mental health outcomes? For instance, providing feedback to services to enable them to improve, indicating where standards are not being met nationally, aiding the dialogue between client and provider, highlighting areas where individual consumers need more or better services.
•	Do you think indicators of well-being in a self-assessed measure should be based only on what consumers think makes a difference to mental health, or also include indicators identified by research on causes and cures of mental illness?
•	How important do you think the concept of empowerment is to recovery? How would you personally define it?
•	In what ways would you like to see consumers involved in developing a self-assessed measure of mental health?
•	What examples of good measures have you come across to date? What do you think is particularly good about them?
Appendix Two – Survey Form
Survey of use of self-assessed measures of consumer outcome in New
Zealand mental health organisations
The mental health research and strategy group is investigating the development of a self-assessed measure of consumer outcome in New Zealand. As part of this, they have contracted
Case Consulting to review such measures currently used in New Zealand.
We would be grateful if you could provide the following information on an anonymous basis. If
there is information you do not have, simply put down what you know rather than skipping the
question.
1) Does your organisation currently use a measure (or measures) which consumers complete in
order to assess their current well-being and recovery to date?
Yes
No (If No, please go straight to question 5)
2) Tick any of the following self-assessed measures of consumer outcome that your organisation
uses:
Te Hua Oranga (a measure of Maori mental health outcome developed by Te Kani Kingi
and Mason Durie)
BASIS-16 (Behaviour and Symptom Identification Scale)
BASIS-32 (Behaviour and Symptom Identification Scale)
Ohio Mental Health Consumer Outcomes System
Mental Health Inventory (MHI)
Standards Satisfaction Survey
Mental Health Consumer Survey
Medical Outcomes Study 36 item Short-Form Scales (SF-36)
Unitec EPI Assessment
Consumer version of HoNOS (NOT CLINICIAN VERSION)
Personal Vision of Recovery Questionnaire
Making Decisions Empowerment Scale
Recovery Attitudes Questionnaire
Recovery Assessment Scale
If the tool you are using is not listed above, please write the name of it here and a note of
how/where we could find more information about it, if possible:
………………………………………………………………………………………………
………………………………………………………………………………………………
3) What percentage (approximately) of consumers that attend your service are asked to
complete a self-assessed measure of consumer outcome? …………
3a) What percentage (approximately) of these are returned completed? …………
3b) Approximately how many consumers attend your service per year? ………….
4) How long does it take (on average) for the consumer to complete the self-assessed measure
your organisation uses? …………………………………………………………..
5) How useful do you think it is for organisations to use self-assessed measures of consumer
outcome?
Very useful
Reasonably useful
Undecided
Not particularly useful
Not useful at all
What are the reasons for your answer:
……………………………………………………………………………………………………
…………………………………………………………………………………………
……………………………………………………………………………………………………
…………………………………………………………………………………………
Could you please return the completed form to Case Consulting, PO Box 51273, Wellington,
[email protected].
Thank you for taking the time to complete this survey form. We appreciate it.
Appendix Three – Deaf Mental Health Service – Support
Needs Assessment
Name _____________________

Each item is rated on a five-point scale: no problem / a little bit of a problem / a moderate problem / quite a big problem / a big problem.

Looking after your home
1
I can choose the right cleaning materials (e.g. glass cleaner for windows, laundry detergent for clothes, dishwashing liquid, cleaners for bench tops, cleaners for toilets and floors).
2
I know when the rubbish is collected, can put it out on time and I know how to recycle my rubbish.
3
I can clean the house well – use the vacuum cleaner,
dust, mop the floors or sweep, clean the bathroom and
toilet, clean the kitchen benches, oven, microwave. I
can clean up after cooking and do the dishes well.
4
I know when my clothes need washing and I can use
the washing machine. I can hang out the clothes or put
them in the dryer. I am able to iron my clothes.
Cooking and preparing food & shopping
5
When shopping I can find the right shop, find the items
on the shelves or ask for help. I can choose the best
priced items and check the quality of the fruit and
vegetables.
6
I can use a checkout at the supermarket and I can pay
for the shopping and I know I get the right change.
7
At home I can put the frozen items in the freezer
and store the other foods well. I throw out old food.
8
I can follow a recipe well and can plan a healthy meal
well. I know how much to cook and when it is ready.
9
I know what to do if I spill something, or burn myself
or something else. I can cook without regular
supervision.
10
I can bake in the oven, fry or boil safely, I can use the
microwave and toaster. I can use knives and cooking
instruments safely.
11
I know about dangers in the kitchen and know what to
do if there is an emergency – e.g. a fire, or injury to
myself or someone else.
12
I can use and have a fire extinguisher or smoke
detector. I know about fire safety and how to leave the
house quickly in an emergency.
13
I turn things off when I have finished and I know about
being careful with electric appliances. I make sure my
hands are dry when I touch electrical plugs or outlets.
Budgeting / Money
14
I know how much money I can spend each week and
don’t spend more than I can afford.
15
I know the difference between coins e.g. $1 and $2
coins, 50c, 20c, 10c and 5c and notes $5, $10, $20, $50
16
I am able to give the right money and know when I get
the right change and can keep receipts if I need
to.
17
I can save money for entertainment (eg ice cream,
movies, outings). I know how to pay back money if I
owe some money.
18
I am aware of benefits (WINZ) and know if I am on the
right benefit or know if I am able to get one. I
know the rules about being on a benefit.
19
I am able to fill in a deposit/withdrawal form at the
bank and can use an eftpos machine. I can use a
credit card or eftpos card if I have one.
Personal Care
20
I am able to brush/comb my hair daily and look tidy. I
shave/wash/clean my teeth daily.
21
I change my clothes regularly and dress in clean
clothes. I shower regularly and wash my hair regularly.
Do you need a reminder to do these things? YES/NO
Medication
22
I am able to be responsible for my own medication and
know the correct amount that I should be taking. I
agree to and take my medication.
23
I know about the medication side effects and I also
know the medication helps me.
24
I know about allergies to medication and symptoms of
an allergic reaction and can tell someone I am having a
reaction. If it is serious I know how to contact
emergency services.
25
I know when my prescription is running out and know
how to get a new prescription. I am able to go to the
chemist to get my new prescription and know how I
need to pay for the medication.
26
I need help with taking my medication and need a daily
pill planner to help me take the right amount at the
right time.
Communication Skills
27
I am able to start communication with a new person
and can join in a conversation. I don’t interrupt other
people.
28
I use sign language or am oral and make good eye
contact.
29
I understand other people’s feelings and am happy to
share my feelings with others.
30
I can give good honest replies and can accept good
honest replies from others.
31
I can control my frustration/anger.
32
I can use the fax/TTY/email/ or SMS text
messaging. (Please state which, if you have a problem
with using one or more of these.)
33
Other people understand me more clearly and I understand
other people.
34
I know how to access interpreters and feel comfortable
when using interpreters.
35
I need help with learning sign language, or reading or
writing.
Smoking
36
I know about non smoking areas and follow the rules.
37
I can afford to buy cigarettes each week and don’t buy
more than I can afford. I can make my cigarettes last
until my next pay day.
38
I understand that people who don’t smoke often don’t
like me smoking too near them and I try not to do that.
Religious / Spiritual needs
39
I go to church or another group for my religious /
spiritual needs and don’t have a problem with this.
Cultural needs
40
I identify or feel I belong to a culture or two cultures
(eg Deaf and culture of origin, or just culture of origin).
41
I can get the right support for my culture if I need to
and mix with other people of the same culture if I want.
Transport / Travel
42
I can tell people where I live and the name and number
of the street.
43
I am able to read bus / train timetables and know where
the nearest stop is. I can get more information if I need
to and can read a map.
Road and car safety
44
I have a current driver’s licence or can get one.
45
I have enough money for petrol and know when to get
a warrant of fitness for my car.
46
I can pay for my car registration and insurance and
they are up to date.
47
I make sure my car is safe (e.g. oil, water, tyres, regular service) and I understand safety on the road and road rules.
Community Services
48
I can contact or use community services and know
about community services (e.g. library, church, NZ
Post).
Health
49
I have a GP and know the name of my GP and how to
contact them.
50
I have a dentist and know the name of my dentist and
how to contact them.
51
I am aware of other supportive agencies (places) that
can help me.
52
I know when I am starting to become unwell and know
who to contact if I become unwell.
53
I know the name of my key worker, Deaf Mental
Health Worker, or Deaf Association Service
Coordinator.
Free time
54
I can use my free time well and have hobbies or
interests.
55
I know how to get more information about other
hobbies or interests and know about other sporting
groups or clubs in the community.
56
I belong to clubs or sporting groups or other groups
such as church or cultural groups.
Appendix Four – Definitions of Psychometric Terms
In Table 1, some of the columns refer to various types of validity, reliability and feasibility.
These terms refer to qualities of the measures that tell us how useful and accurate both they, and
the information they produce, are to us.
Validity
Validity is the extent to which a test being used actually measures the characteristic(s) that it is
intended to measure. There are a number of different aspects of validity that are commonly
measured:
Construct validity: the degree to which the test measures the theoretical construct that it
intends to measure.
Content validity: whether or not the items of the measure adequately cover the domains that
are supposed to be measured.
Face validity: whether the measure, on inspection, appears to cover the important issues related
to the domain under study. It is based on an informed impression rather than detailed analysis.
Criterion validity: is a measure of agreement between an instrument and an external criterion.
Since there are few recognised ‘objective’ criteria against which to validate outcome measures,
new measures are usually compared to existing measures of the same area or construct. If there
is evidence of agreement between scores on both measures, the new instrument is said to have
criterion (or concurrent) validity for that area or construct. It differs from convergent validity
(below) because the measure under consideration is linked only to a single, ‘gold standard’
measure, not a number of different measures.
Convergent/divergent: these two terms refer to the expectation that scales, which theoretically
measure similar constructs, should provide similar results when their results are compared
(convergent validity); whereas scales that theoretically measure dissimilar constructs should
provide dissimilar results when the results are compared (divergent validity) (Stedman et al.,
1997).
Sensitivity to change over time: a measure that is sensitive to change should be able to indicate
whether a significant change has taken place for a consumer over consecutive administrations of
the measure. This is the definitive property of a measure of outcome. A measure may provide
information with respect to mental health status, but it is the extent to which it assesses
meaningful change that determines whether it can be called an outcome measure (Andrews et
al., 1994).
Reliability
Reliability refers to whether a measure is being used in a systematic and therefore repeatable
way. Test reliability is reduced by errors of measurement.
Inter-rater reliability: involves two or more independent raters assessing the same
phenomenon or event using the same measure. The degree to which the raters agree in their
assessment can be used to evaluate the reliability of inferences being made from the data
obtained from the measure (Mellsop & O’Brien, 2000). Differences between raters may reflect
poor levels of definition of the items, different perspectives by raters, changes in the subject
between the times of rating, or other factors.
Test-retest reliability: involves administering a measure on two separate occasions to the same
people. If there is no reason to expect a change in the manner that people respond to the items it
would be expected that the scores from the two occasions would be similar or show a high level
of correlation (Mellsop and O’Brien, 2000).
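As an illustration (a standard formulation, not drawn from the measures reviewed here), test-retest reliability is commonly summarised by the correlation between the scores from the two occasions. Writing $x_i$ and $y_i$ for the first and second scores of respondent $i$, across $n$ respondents, the Pearson correlation is:

$$ r = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}} $$

Values of $r$ close to 1 indicate that respondents retain much the same relative standing across the two administrations.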
Internal consistency: refers to the degree to which the items or questions of a measure ‘hold
together’ or correlate with each other. While an individual may not score the same on each of
five items in a sub-scale, overall there will be a tendency for those scoring positively or highly
on one item in the sub-scale to score high on other items in the sub-scale. One method of
measuring internal consistency is to calculate a statistic known as an alpha coefficient (Mellsop
and O’Brien, 2000).
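For reference, the alpha coefficient mentioned above (Cronbach’s alpha) can be written, for a sub-scale of $k$ items with item variances $\sigma_j^2$ and total-score variance $\sigma_{\text{total}}^2$, as:

$$ \alpha = \frac{k}{k-1}\left(1-\frac{\sum_{j=1}^{k}\sigma_j^2}{\sigma_{\text{total}}^2}\right) $$

Alpha approaches 1 when the items vary together (high internal consistency) and falls towards 0 when they do not.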
Feasibility
Acceptability: describes the ease with which a consumer or clinician can use a particular
measure (i.e. user friendliness). To be acceptable the measure needs to be brief, and the
purpose, wording and interpretation should be clear (Andrews et al., 1994). The content and the
presentation of results should be understandable to a wide audience.
Applicability: the degree to which the proposed measure is applicable to the dimensions of
importance to the situation in question. For example, in relation to mental health outcome
measures in the present context, they should address dimensions of importance to the consumer
and allow for the aggregation of data in a meaningful way.
Practicality: relates to the cost of implementation, training requirements, and the complexity of
scoring, reporting and interpreting the data (Andrews et al., 1994, in Stedman et al., 1997). It
should involve simple methodology and procedures that can be implemented uniformly, using
accessible and well-defined training materials and instructions. The measurement materials and
implementation procedures should be relatively inexpensive. The scores from a measure should
have clear and objective referents (‘meanings’) that are consistent across consumers, to ensure
interpretability of scores as well as changes in scores.
Appendix Five – Consumer Completed Part of the Assessment of Wellness Outcome Tool
To what extent has the involvement of EPI in your life meant:
1. You are having more fun
2. You are more confident about future plans
3. You have more control over the choices in your life
4. You are more normal or like you used to be before you became unwell
5. You are more content, calm or happy
6. You are less anxious or stressed
7. You have more spiritual strength or wairua
8. You have more awhi or trust in people
9. You feel more aroha or love from others
10. You have a stronger connection with your culture, whakapapa or ancestral background (e.g. Maori, New Zealand, Samoan, Hindu, Jewish)
11. You are able to behave in a more socially acceptable and responsible way.
12. You are better treated & more accepted by the community in which you live.
13. You are more clear and consistent in your thinking
14. You are more able to live independently
15. You are more able to do the basic tasks of day-to-day living (e.g. getting up, feeding yourself, keeping tidy)
16. You are more able to live without abusing drugs or alcohol
17. You have less unpleasant side-effects from the use of medication
(For each question tick one box in the first group. If you tick “no change”, check whether there has been change for other reasons (second group). In the third group tick one box to show the amount of support you need in this area.)

First group – In this area EPI has made things: much better / better / no change / worse / much worse.
Second group – In this area things have changed, but not because of EPI. Things are: better / worse.
Third group – To what extent do you need to have support in this area? No support needed / A small amount of support / Short-term support / Regular consistent support / Long-term support.
To what extent has the involvement of EPI in your life meant:
18. You have less psychotic symptoms (hearing voices, seeing visions, bizarre or grandiose thoughts)
19. You are better prepared or trained for work, or better able to manage work
20. Your personal safety in the community or at home has improved
21. Your relationships with friends, whānau and family have improved
22. You are able to live with less involvement from mental health services
23. All aspects of your life are more in balance or harmony
24. What have been the least helpful aspects of your contact with the Early Intervention Service? (Use more paper if you need to)
25. What have been the most helpful aspects of your contact with the Early Intervention Service? (Use more paper if you need to)
26. Where things have changed without EPI’s involvement can you tell us why? (Use more paper if you need to)
27. How do you think the service offered by Early Intervention could improve? (Use more paper if you need to)
Development from Bridgman GD, Dyall L What’s a good outcome? – Frameworks of wellness, Paper presented to the THEMHS 8th Annual Mental Health Services
Conference, Melbourne Convention Centre, 22-24 September 1999.
Appendix Six – Crisis Hostel Healing Scale
Now I would like to ask you some questions having to do with your well-being, emotional state
and other aspects of your life. For each statement I read, please give me 1 of the 4 answers; that
is, tell me if you strongly agree, agree, disagree, or strongly disagree. If the statement does not
in any way apply to you or your situation, tell me. Please try to answer as carefully as you can.
The first part of the question has to do with your thinking at this point in time. The second part
asks you how you felt within the past six months.
Response options for each statement: Strongly Agree = 4, Agree = 3, Disagree = 2, Strongly Disagree = 1; items marked (NA) also offer a ‘not applicable’ response. Each statement is rated twice: for the present, and again for ‘Within the past 6 months… |___|’.

1. I recognize that some people care about me.
2. I have a sense of being in control of myself and my life.
3. I have regained my sense of humour. (NA)
4. I do not see myself as sick nor allow other people to see me as sick.
5. I have hope about my present situation.
6. I remember abuse but am not overwhelmed by it. (NA)
7. I am not making satisfactory connections with others.
8. I feel spiritually in touch.
9. I am aware of and respect the feelings of others.
10. I cannot trust my decisions.
11. My inside voices are less bothersome. (NA)
12. I am able to focus on tasks at hand whatever they may be.
13. I have deliberately hurt myself. (NA)
14. I have decreased self-confidence.
15. My self-inflicted violence has decreased. (NA)
16. I am knowledgeable and informed about medication.
17. I feel I have a lot of energy.
18. I feel in control of my eating habits. (NA)
19. I am feeling less alive and in my body.
20. My fearful ideas have increased. (NA)
21. I have insight into what leads to my crises and so I can think of ways to change.
22. I am sleeping well.
23. I feel like I have a valuable contribution to make.
24. I don’t care about my body and don’t take care of it.
25. I can say no.
26. I feel like working.
27. I feel like I have access to adequate support in my community.
28. I can tell what is real and what is not.
29. My awareness of different ways of healing is increasing.
30. I have a healthy interest in sex.
31. I can cry.
32. I am taking an active role in decisions about medication.
33. I care about myself. (NA)
34. I become hostile when I express my feelings.
35. I am able to listen when people talk to me and about me.
36. I am able to express feelings of anger.
37. I am not able to give and receive love.
38. I have enough resources to live well. (NA)
39. I have increased self care.
40. I feel safe.
Appendix Seven – Tāngata Whai Ora Schedule of
Hua Oranga