http://mature-ip.eu
D6.4
Summative Evaluation Report
Date: 04 June 2012
Dissemination Level: Restricted
Responsible Partner: LTRI
Editors: John Cook, Claire Bradley, Colin Rainey, Andreas Schmidt, Graham Attwell, Dirk Stieglitz, Andreas Kaschig, Alexander Sandow, Ronald Maier
Authors: John Cook, Andreas Schmidt, Claire Bradley, Sally-Anne Barnes, Jenny Bimrose, Simon Brander, Simone Braun, Alan Brown, Barbara Kump, Christine Kunzmann, Athanasios Mazarakis, Tobias Nelkner, Caron Pearson, and Isabel Taylor
Work Package: WP6 (Evaluation)
MATURE
Continuous Social Learning in Knowledge Networks
http://mature-ip.eu
Grant No. 216356
MATURE is supported by the European Commission within the 7th Framework
Programme, Unit for Technology-Enhanced Learning
Project Officer: Mikolt Csap
DOCUMENT HISTORY

Version  Date        Contributor  Comments
0.1      01.03.2012               First draft of collated evaluation reports
0.2      10.05.2012               Handed over for internal review
0.3      20.05.2012               Internal review completed
0.4      04.06.2012               Final due/released
1.0      04.06.2012               Final editorial work and submission
Contents

1 EXECUTIVE SUMMARY .................................................................. 9
2 INTRODUCTION ...................................................................... 12
  2.1 Background information ......................................................... 12
    2.1.1 Evaluation as requirements in action and part of the overall design process ... 12
    2.1.2 Formative evaluation overview ............................................. 14
  2.2 Overview of the Summative Evaluation approach .................................. 15
    2.2.1 What is the scope of MATURE IP? ........................................... 15
    2.2.2 Re-prioritization and re-planning of the Summative Evaluation ............. 16
    2.2.3 Typology for understanding 'Continuous Social Learning in Knowledge Networks' ... 18
  2.3 The MATURE Evaluation Studies .................................................. 21
    2.3.1 Process leading to the development of the Template ........................ 21
    2.3.2 Template for re-plan of Summative Evaluation .............................. 21
    2.3.3 The six Summative Evaluation studies, associated goals and number of participants ... 22
3 CONSOLIDATION OF INDICATOR PERSPECTIVES ............................................ 23
  3.1 Introduction ................................................................... 23
  3.2 The MATURE indicator landscape ................................................. 24
  3.3 The Indicator Alignment Process ................................................ 25
  3.4 Summary of GMIs & study goals covered in Summative Evaluation .................. 26
4 SUMMATIVE EVALUATION AT FHNW (STUDY 1) ............................................. 28
  4.1 Background ..................................................................... 28
  4.2 Evaluation design .............................................................. 29
    4.2.1 Overall concept ........................................................... 29
    4.2.2 Research questions and hypotheses ......................................... 30
    4.2.3 Methods and instruments ................................................... 30
  4.3 Results ........................................................................ 35
    4.3.1 Statistics of the knowledge base .......................................... 35
    4.3.2 Observations during the walkthrough ....................................... 38
    4.3.3 Results of the post-walkthrough interview ................................. 39
  4.4 Discussion ..................................................................... 39
  4.5 Conclusions .................................................................... 40
5 SUMMATIVE EVALUATION AT CONNEXIONS NORTHUMBERLAND/IGEN (STUDY 2) ................... 41
  5.1 Introduction ................................................................... 41
    5.1.1 Central question 1 ........................................................ 41
    5.1.2 Central question 2 ........................................................ 42
  5.2 Evaluation description ......................................................... 43
  5.3 Results ........................................................................ 44
    5.3.1 SMI 1: A person is annotated with additional tags at a later stage by the same user ... 47
    5.3.2 SMI 2: A topic tag is reused for annotation by the "inventor" of the topic tag ... 48
    5.3.3 SMI 3: Topic tags are reused in the community ............................. 49
    5.3.4 SMI 6: Topic tags are further developed towards concepts, e.g. adding synonyms or descriptions ... 50
    5.3.5 SMI 7: A topic tag moved from the "prototypical concept" category to a specific place in the ontology ... 51
    5.3.6 SMI 8: The whole ontology is edited intensively in a short period of time, i.e. gardening activity takes place ... 52
    5.3.7 SMI 9: An ontology element has not been changed for a long time after a period of intensive editing ... 54
    5.3.8 SMI 10: A person profile is often modified and then stable ................ 54
    5.3.9 SMI 11: An individual changed its degree of networkedness ................. 56
    5.3.10 SMI 4: A person is (several times) tagged with a certain concept ......... 56
    5.3.11 SMI 5: A person is tagged by many different users ........................ 60
    5.3.12 Discussion and Implications .............................................. 61
  5.4 Conclusions .................................................................... 64
6 SUMMATIVE EVALUATION AT CONNEXIONS KENT (STUDY 3) .................................. 66
  6.1 Introduction ................................................................... 66
    6.1.1 Summary of formative evaluation ........................................... 67
    6.1.2 Summative evaluation: Research questions and hypotheses ................... 68
  6.2 Evaluation Description ......................................................... 69
    6.2.1 Evaluation Workshops ...................................................... 69
  6.3 The Questionnaire Study: Evaluation of Knowledge Maturing with SMIs ............ 71
    6.3.1 Questionnaire (Pre/Post) .................................................. 71
    6.3.2 Findings from the Questionnaire Study related to Specific Knowledge Maturing Indicators ... 73
    6.3.3 Findings from the Questionnaire Study related to experiences with the Instantiation, usefulness and usability (not related to Specific Knowledge Maturing Indicators) ... 77
  6.4 The Focus Group: Evaluation of Knowledge Maturing based Hypotheses ............. 82
    6.4.1 Procedure ................................................................. 82
    6.4.2 Findings .................................................................. 83
  6.5 Discussion and Implications .................................................... 86
    6.5.1 Implications for Knowledge Maturing ....................................... 86
    6.5.2 Usability considerations and implications for future development .......... 87
    6.5.3 Reflections and Limitations ............................................... 89
  6.6 Conclusions .................................................................... 89
7 SUMMATIVE EVALUATION AT STRUCTURALIA (STUDY 4) ..................................... 91
  7.1 Introduction ................................................................... 91
  7.2 Evaluation Description ......................................................... 93
    7.2.1 Evaluation Timeline ....................................................... 93
    7.2.2 Sample .................................................................... 93
    7.2.3 Face-to-face training ..................................................... 94
  7.3 Evaluation methods ............................................................. 95
    7.3.1 Questionnaires ............................................................ 95
    7.3.2 Log data .................................................................. 96
    7.3.3 Teacher's views ........................................................... 97
    7.3.4 Issues which arose during the evaluation .................................. 97
  7.4 Findings ....................................................................... 98
    7.4.1 How did the participants use the Instantiation (from the log data)? ....... 98
    7.4.2 Questionnaire data ........................................................ 100
    7.4.3 Teacher's point of view ................................................... 106
  7.5 Discussion of findings ......................................................... 106
    7.5.1 How did people use the Instantiation? ..................................... 106
    7.5.2 How well did the Instantiation support Knowledge Maturing activities related to Knowledge Maturing Indicators (GMIs)? ... 107
    7.5.3 How easy was the system to use? (SUS) ..................................... 107
  7.6 Conclusion ..................................................................... 108
8 PARTNERSHIPS FOR IMPACT STUDY (STUDY 5) ............................................ 110
  8.1 Introduction ................................................................... 110
  8.2 Engagement with top-down perspectives: partnerships with UK-wide initiatives ... 110
  8.3 Engagement and partnerships in the four home countries ......................... 111
    8.3.1 Top-down perspectives: England ............................................ 111
    8.3.2 Bottom-up perspectives: England ........................................... 113
    8.3.3 Top-down perspectives: Northern Ireland ................................... 117
    8.3.4 Bottom-up perspectives: Northern Ireland .................................. 118
    8.3.5 Top-down perspectives: Scotland ........................................... 119
    8.3.6 Bottom-up perspectives: Scotland .......................................... 119
    8.3.7 Top-down perspectives: Wales .............................................. 120
    8.3.8 Bottom-up perspectives: Wales ............................................. 120
  8.4 Engagement and partnerships in European and International Networks for Knowledge Maturation ... 120
  8.5 Conclusions .................................................................... 121
9 LONGITUDINAL STUDY OF KNOWLEDGE MATURING IN CONNEXIONS KENT (STUDY 6) .............. 124
  9.1 Introduction ................................................................... 124
    9.1.1 Background ................................................................ 124
    9.1.2 Workplace learning ........................................................ 125
    9.1.3 How practice aligns with Knowledge Maturing ............................... 125
  9.2 Researching and supporting Knowledge Maturing for professional development (CareersNet and Career Constructor) ... 126
    9.2.1 Scoping the role of ICT in Kent ........................................... 127
    9.2.2 Developing the e-portfolio system – Career Constructor .................... 127
    9.2.3 Developing the INSET website (CareersNet Kent) to improve Knowledge Maturing collaboratively ... 131
  9.3 Researching and supporting LMI Knowledge Maturing for careers guidance practice ... 135
    9.3.1 The design process ........................................................ 135
    9.3.2 Conclusions and reflections on Knowledge Maturing ........................ 140
  9.4 Conclusions .................................................................... 140
10 COLLABORATIVE CONCLUSIONS AND FUTURE WORK ......................................... 142
  10.1 Collaborative Conclusions (project-wide perspective) .......................... 142
    10.1.1 Overview of process ...................................................... 142
    10.1.2 (Q1) How successfully did your Instantiation make use of General knowledge Maturing Indicators (GMIs) to support Knowledge Maturing activities (e.g. as a service) or as an instrument for evaluation? ... 142
    10.1.3 (Q2) How successfully did your Instantiation/study support Knowledge Maturing generally (e.g. Phases)? ... 145
    10.1.4 (Q3) How do the results compare across the studies in terms of key similarities and differences with respect to Knowledge Maturing (and the model)? Specifically, what was confirmed across all studies, what was not confirmed, what needs further investigation? ... 149
    10.1.5 Summative overview of Collaborative Conclusions on Indicators and the Knowledge Maturing model ... 150
    10.1.6 Future work on Indicators ................................................ 151
  10.2 Future Work (LTRI perspective) ................................................ 151
    10.2.1 2c. cognitive load ....................................................... 152
    10.2.2 2d. personal learning networks (group or distributed self-regulation) ... 153
    10.2.3 Next Steps (for LTRI) .................................................... 155
11 REFERENCES ........................................................................ 156
12 APPENDICES ........................................................................ 158
  12.1 General knowledge Maturing Indicators (GMIs) .................................. 158
  12.2 Cook and Pachler (2012) (BJET paper) .......................................... 164
  12.3 Summary of coverage of Indicators by study .................................... 179
  12.4 Data from FHNW ................................................................ 186
    12.4.1 Indicator alignment results .............................................. 186
    12.4.2 Evaluation data .......................................................... 191
  12.5 Data from Connexions Northumberland ........................................... 206
    12.5.1 Indicator alignment results .............................................. 206
    12.5.2 Evaluation data .......................................................... 212
    12.5.3 Person profile example A ................................................. 216
    12.5.4 Person profile example B ................................................. 218
    12.5.5 Person profile example C ................................................. 219
    12.5.6 Person profile example D ................................................. 222
  Data from Connexions Kent .......................................................... 345
    12.5.7 Indicator alignment results .............................................. 345
    12.5.8 Evaluation data .......................................................... 353
  12.6 Data from Structuralia ........................................................ 358
    12.6.1 Indicator alignment results .............................................. 358
    12.6.2 Evaluation data .......................................................... 363
  12.7 Using the typology to provide examples of Learning Factors involved in "Continuous Social Learning in Knowledge Networks" ... 369
1 Executive Summary
This deliverable contains the Summative Evaluation results for the MATURE project. Following collaborative reflection on Recommendation 6 from the third Annual Review (see section 2.2.2), the main task for this deliverable became one of gaining evidence about (i) whether specific tool approaches support Knowledge Maturing, and (ii) whether the key assumptions underlying the conceptual foundations hold; these two questions became the key priority for the evaluation. For both activities, a strongly Indicator-focused Summative Evaluation approach was chosen, based on a clearly integrated perspective of Indicators (Recommendation 4). The Summative Evaluation also included additional studies from both partnership-for-impact and longitudinal perspectives in order to provide a macro view of Knowledge Maturing.
Six evaluation studies were conducted, viewed as having different levels of granularity (see section 2.2.2 for details). At the micro level the focus is on the technical development and the use of Knowledge Maturing tools in context with users (these are called Instantiations). At the meso level we also have some links to conceptual development. Finally, at the macro level we examine how notions of Knowledge Maturing have played out at the organisational, regional and even UK Government Department/ministerial level.
Column 2 in the table below shows that a total of 279 users participated in the evaluation of the four Instantiations. Study 5 and Study 6 were narrative-reflective in nature and consequently do not report participant numbers.
Study 1: Maturing process knowledge at FHNW (Micro/meso level)
  Number of participants: 3
  Time period: M34-48
  Evaluation methods applied: Systematic software walkthrough with artificial case; workshop: observations, discussions & interviews.
  Context & tool/concept evaluated: Adaptive process management for university matriculation (KISSmir)

Study 2: People Tagging at Connexions Northumberland (Micro/meso level)
  Number of participants: 212
  Time period: M34-51
  Evaluation methods applied: Log file analysis and standardized questionnaires. A 'training phase' (face-to-face and online) was used for scaling up.
  Context & tool/concept evaluated: Organisational development for Careers Guidance (SOBOLEO people tagging)

Study 3: Community-driven Quality Assurance at Connexions Kent (Micro level)
  Number of participants: 9
  Time period: M34-48
  Evaluation methods applied: Interviewing managers and software users; questionnaire; focus group.
  Context & tool/concept evaluated: Community-driven quality assurance for Careers Guidance (SIMPLE)

Study 4: Online Course Support at Structuralia (Micro level)
  Number of participants: 55
  Time period: M34-48
  Evaluation methods applied: Application log data, questionnaire.
  Context & tool/concept evaluated: Online course support for the building sector (SIMPLE)

Study 5: Partnerships for Impact Study of Careers Guidance (UK) (Macro level)
  Number of participants: Not applicable
  Time period: M34-48
  Evaluation methods applied: Drawing together events and evidence that have already occurred, and linking them to events in this period and prospectively to other planned events.
  Context & tool/concept evaluated: Careers Guidance, Knowledge Maturing generally

Study 6: Longitudinal Study, Connexions Kent (Meso level)
  Number of participants: Not applicable
  Time period: M34-48
  Evaluation methods applied: Story elicitation.
  Context & tool/concept evaluated: Careers Guidance, Knowledge Maturing generally
In column 3 of the table, we see that the period of study for most Summative Evaluations was M34 to M48 (15 months). Study 2, however, had a two-month extension to M50 to deal with the request from the 3rd Annual Review to scale up (Recommendation 5). Studies 1-4 tested specific Indicator-related hypotheses/questions, plus other questions related to Knowledge Maturing, using the methods shown in column 4 of the table, as applied to the Knowledge Maturing tool in the context indicated in column 5. Studies 5 and 6 used more diverse methods in keeping with a focus on the meso and/or macro level.
Major findings arising from the studies
Four out of the six studies involved careers guidance. The work with careers organisations in the UK was successful from the developers' micro-perspective; mixed from a users' micro-perspective (successful in Northumberland; generally successful in Kent, with the one exception that one set of developments did not lead through to implementation, as outlined in section 9); highly successful at a meso level; and even more successful at a macro level in generating widespread discussion about knowledge development, sharing and Knowledge Maturing in a careers guidance context, all the way up to UK ministerial level. Furthermore, key assumptions of Study 2's ontology Knowledge Maturing model (which is based on the Knowledge Maturing Phase Model) have been confirmed.
Comparison of results across studies in terms of key similarities and differences with respect to the Knowledge Maturing Model & tools
"Yes", we can find instances of Knowledge Maturing in organisations. For example, as part of the Summative Evaluation in Northumberland (Study 2), we have been able to observe the use of a Knowledge Maturing tool as part of everyday practice. Within the Study 2 evaluation we have also been able to observe Knowledge Maturing over the period of use. Furthermore, as we point out above, the Longitudinal Study at Kent (Study 6) facilitated dialogue about Knowledge Maturing processes; in many cases this included partners developing their 'readiness to mature knowledge'. "Yes", we can support Knowledge Maturing with tools and services. For example, as part of the Summative Evaluation in Study 2, we successfully introduced the tool to a significantly larger user base (n = 212) than originally planned. This has yielded evidence about user acceptance and usefulness. However, it is not always as simple as we originally thought, both in terms of the linear hierarchy of the Knowledge Maturing Model and in terms of the different paths that can be taken to achieve Knowledge Maturing.
The Knowledge Maturing Model and Phases hold true, but not in a hierarchical sense. The FHNW Study 1 illustrated, in a suggestive way, that for some knowledge you may not want to move towards a higher level, or you may need to re-negotiate some knowledge. Indeed, 91% of the General knowledge Maturing Indicators (GMIs¹) that were investigated in the Summative Evaluation belonged to Phase I of the Knowledge Maturing Model, i.e. 'Expressing ideas' and 'Appropriating ideas'. Is it the case that users need some additional guidance and scaffolding to make the transition into later Knowledge Maturing Phases, such as Phase II 'Distributing in communities'?
Summative view on the usefulness of Indicators in the Summative Evaluation studies
In terms of levels of empirical validation/justification for the GMIs/SMIs used in Studies 1-4, in summary we found the following. Study 1 used 8 SMIs that had a mixture of strong and weak justifications (see section 3.2). Study 2 made use of 11 SMIs that had a mixture of strong and weak justifications. Study 3 made use of 10 GMIs with strong justification. Study 4 made use of 12 GMIs with strong justification. In Study 1 the sample size was too small to draw conclusions. Study 2 found that GMIs/SMIs provided a useful structured approach to validating key assumptions that the Study 2 team had for the overall concept of the Instantiation. However, Study 2 also found GMIs/SMIs too complex and labour-intensive: many of the Indicators, while useful, are not easy to use, and they become quite complex when they need to be adapted to a real-world scenario or used in combination. Studies 3 & 4 did not find them to be as useful as hoped for, although they did find the Indicator approach useful for devising questionnaires. In addition, we consider GMIs that have been validated in as many contexts as possible to be part of the MATURE heritage. We had 24 GMIs (out of a total of 75 GMIs) that fell into an empirically validated/justified category at the start of the Summative Evaluation. Of the 24 GMIs that were included as a study goal, 22 belonged to Knowledge Maturing Phase I (i.e. GMI ID-types I & II combined). This number could grow: Study 2's design activities suggest that in future we could also collect sufficient evidence to justify four more GMIs which have not previously been validated as part of the likes of the Representative Study or Application Partners Study (WP1).

¹ See section 3.2. GMI stands for General knowledge Maturing Indicator; SMI stands for Specific knowledge Maturing Indicator.
What was confirmed across all studies, and what was not confirmed?
The common theme of the Summative Evaluation was a focus on Knowledge Maturing Phase I (as stated above, 91% of the GMIs studied belonged to Knowledge Maturing Phase I), essentially an artefact-centric approach. This in itself confirms what we found at the Demonstrator formative evaluation stage (D6.2). Knowledge Maturing Phases II (Distributing in communities), III (Formalising) and V (Formal training, Institutionalizing & Standardizing) were not examined (as a goal of study) at all in the Summative Evaluation and could clearly be the focus of future work. What was not confirmed, because it was found problematic, was a systematic view of the relationship between Indicators and Phases (GMIs are not Phase-specific).
What needs further investigation?
Our conclusions on GMIs/SMIs suggest that the following lines of future work would be productive:
1. Investigate further Indicators (GMIs/SMIs) in terms of varying levels of validity. As stated above,
we had 24 GMIs in an empirically validated/justified category at the start of the Summative
Evaluation; could this approach continue to evolve and deliver validated Indicators for other,
higher Knowledge Maturing Phases? And what are the implications for using GMIs/SMIs in
tools, or as flags that Knowledge Maturing has taken place?
2. Investigate the problems encountered with the usefulness of Indicators in the Summative
Evaluation studies (e.g. their complexity and labour-intensive use) and propose solutions that would
enable bundles of Indicators to be used effectively.
3. Develop a procedure (based on a review of existing Indicators) to increase levels of validity, to
collect further candidate Indicators, and to consider factors surrounding benchmarking
organisations and the wider aspects of “Continuous Social Learning in Knowledge Networks” in
the age of Social Network Sites like Facebook and LinkedIn.
Could we see GMIs at a higher level of abstraction, denoting that something
has changed in terms of Knowledge Maturing? Indicators that are empirically justified could be
systematically built into project tools and services; they could be used, where possible, to automate
exception reporting (e.g. showing where Knowledge Maturing has or has not taken place); or they could
be used as performance indicators for evaluation if they are attached ‘systematically’ to a larger-scale
framework like the Knowledge Maturing Model/Landscape. Other wide-ranging conclusions are
discussed in section 10.
2 Introduction
2.1 Background information
This deliverable contains the Summative Evaluation results for the MATURE project. Following
collaborative reflection on Recommendation 6 (see section 2.2.2) from the third Annual Review, the main
task for this deliverable became one of gaining evidence about (i) whether specific tool approaches
support Knowledge Maturing, and (ii) whether the key assumptions underlying the conceptual foundations
hold. For both activities, a strongly indicator-focused Summative
Evaluation approach was chosen, which was to be based on a clearly integrated perspective of Indicators
(Recommendation 4). The Summative Evaluation also included additional studies from both partnership
for impact and longitudinal perspectives in order to provide a macro view of Knowledge Maturing.
Figure 2.1 shows the overall Evaluation timeline for WP6 Evaluation. As we will see below, this includes
an extension in Phase 5 for one aspect of the Summative Evaluation.
[Figure: timeline with milestones M1, M12, M18, M33 and M48–M50 spanning:
Phase 1: Pre-Formative evaluation of key concepts / techniques
Phase 2: Draft Requirements Method and Evaluation Plan
Phase 3: Final Requirements Method and Evaluation Plan
Phase 4: Formative evaluation of Demonstrators (Phase 1 & 2)
Phase 5: Summative Evaluation of Instantiations & other studies]
Figure 2.1 Timeline for the evaluation phases throughout the project
2.1.1 Evaluation as requirements in action and part of the overall design process
Early in the project (see D6.1), we adopted the approach of “requirements in action”. This has
evolved into the design process shown in Figure 2.2 (Ravenscroft, Schmidt, Cook, & Bradley,
2012) 2.
2
This was published in JCAL (Impact Factor: 1.25; ISI Journal Citation Reports Ranking 2010: 41/177 in Education
& Educational Research). “The impact factor, often abbreviated IF, is a measure reflecting the average number of
citations to recent articles published in science and social science journals. It is frequently used as a proxy for the
relative importance of a journal within its field, with journals with higher impact factors deemed to be more
important than those with lower ones.” http://en.wikipedia.org/wiki/Impact_factor, accessed 2 May, 2012.
We followed agile project management methodologies such as Scrum. For the purpose of the Summative
Evaluation, we take as our starting point the first phase of the design cycle introduced in Figure 2.2, prioritize. In
this important phase, the objectives and focus areas for the current cycle are defined (or
refined from the previous cycle). This prioritization at the start of the Summative Evaluation was the
result of an initial interpretation of the problem space (in response to the 3rd Annual Review comments
mentioned above). The outcome of this reflective process is transformed into a vision for the next cycle
(and usually also reaching beyond), which also contains concrete objectives representing what the design
team attempts to achieve, along with open questions that need to be answered. These objectives are not
concrete development objectives (e.g., feature X for component Y), but rather more general objectives
about which aspects to focus on.
For example, in the People Tagging Demonstrator 3 (Formative Evaluation, D6.2, see below) the initial
priorities related to exploring the influence of people tagging activities for competencies within
organisational cultures. This was a top-level priority issue that raised important questions such as: Do
employees understand the people tagging approach? What if people don’t like being tagged? Does
tagging conflict with, rather than complement, traditional competency records? For D6.4 the ‘Prioritize’
objectives revolved around ‘scaling up’ and an alignment of Indicators (as set by 3rd Annual Review
comments).
Figure 2.2. Overview of the MATURE Design Process (Ravenscroft, et al., 2012)
The prioritization is followed by a phase of learn and problematize about experiences and constraints
in the context. This is an exploratory phase in which design teams investigate the key features in the
target context and includes the involvement and participation of target users as much as possible, e.g.,
observations and interviews prior to implementations and user-tests. This is an important, although
frequently neglected phase in design processes that is often seen as requirements engineering. But unlike
traditional software engineering, a main goal of this activity is the deepening of the understanding of the
context and design space amongst design team members, and not necessarily the explication of formal
requirements. Therefore, it is important that any usual division of labour between those doing
requirements engineering and software development does not de-contextualize experiences and lose
important insights and depth. Figure 2.2 has scare quotes around “Learn” because in complex research
projects like MATURE there is a need for the design team and maybe the users to “learn” about the
Shared Conceptual Model (e.g. the Knowledge Maturing Phase Model), which acts as a guide.
In the case of the People Tagging tool, in the formative evaluation (D6.2) this involved comparative field
studies in different organizations, a large-scale interview study, and focus group interviews. These
facilitated the collaborative conceptualisation of the way in which social technologies could address
individual and organisational problems and opportunities.
The next design phase involves creative technical processes to devise and provide potential solutions or
original social media processes linked to technical design and implementation activities. This also
requires significant communication and dialogue with users. The notion of ‘requirements’ is also
important in this design phase, but in our embedded social media case these become ‘requirements-in-action’ instead of a ‘contract’ between those understanding the context or problem and the developers
aiming to address it. The goal is also to obtain early and on-going feedback on important assumptions in
the technical design process, e.g., that certain functions will support certain aspects of informal learning.
Evaluation is an on-going activity throughout the design process, which becomes less exploratory and
more confirmatory as development proceeds, where particular methods map to these changing roles for
evaluation. In practice, this means that early evaluations are similar to participatory design approaches,
using open-ended and exploratory methods, such as workshops around conceptualisations and mock-ups.
Later evaluations will investigate the performance of tools against criteria such as usability, suitability
and performance according to informal learning objectives (e.g. groups are working together and sharing
knowledge more effectively).
In the case of the People Tagging tool, in the formative evaluation phase this involved close collaboration
with representatives of Connexions Northumberland, a Career Guidance organization. Early mock-ups
and prototypes were developed and demonstrated in workshop settings, where both open and structured
feedback was received, and these findings were incorporated into subsequent developments in an ongoing fashion. These technical design studies were also coordinated and influenced by the shared and
instantiated conceptual model for informal learning and Knowledge Maturing. This informed the
technical design activities, and was evolved through these design activities.
2.1.2 Formative evaluation overview
The outcome of the formative evaluation of the Demonstrators is described in D6.2 (see also Ravenscroft,
et al., 2012). Specifically, the Formative Evaluation Phase of MATURE was conducted from M19 to
M29 (10 months), with the Report writing taking an additional four months (completed at M33). This
involved two phases of formative evaluation for each of the four Demonstrators. The four Demonstrators
were:
• Demonstrator 1: Assuring Quality for Social Learning in Content Networks
• Demonstrator 2: The collaborative development of understanding
• Demonstrator 3: People Tagging for Organisational Development
• Demonstrator 4: Adaptive process management (KISSmir)
Phase 1, which focused on formative evaluation and participatory design studies, was conducted between
M19 and M23, and is reported in detail in D2.2/3.2. The Phase 2 formative evaluations were performed
from M24-M29 (note that most of these involved extensions to the original timelines to accommodate
unanticipated practical problems and constraints that emerged at the application sites that were outside of
the project’s control). The results are reported for each Demonstrator as appropriate in the relevant
sections of the D6.2 report. The project felt that it was more important to address these issues and
constraints to complete the studies and learn as much as possible from them instead of stopping them
prematurely. Similarly, because the formative evaluation was much larger in scale than anticipated,
applying to four distinct Demonstrators instead of one ‘system prototype’, the data
analysis, interpretation, reporting, and coordination of reporting were more time-consuming and complex
than originally specified, and so overran by six months in total. This is explained in more detail in Section
3.1 of D6.2.
The Formative Evaluation drew the following conclusions. Firstly, all of the Demonstrators required
considerable, and often unanticipated, thought, attention and adaptation to make them usable and suitable
within the user contexts. Secondly, related to this, the results showed the complex and multi-faceted
nature of introducing and using these sorts of socio-technical innovations, where there is often a paradox
in providing something that is suitably understandable and useful whilst also providing something that is
innovative and cutting-edge. This is especially the case with those types of socio-technical tools that are
typically embedded in non-deterministic and often unpredictable organisational situations. Finally, all the
Demonstrators, with varying degrees of adaptation, were suitably deployed within often challenging user
contexts, with some (Demonstrator 1 and Demonstrator 4) showing initial signs of Knowledge Maturing
over their relatively brief deployment periods; for the others, participants confirmed the potential in the
longer run. This formed a solid base for further developments.
While the demonstrators focused on a single strand of Knowledge Maturing (content, semantics, people,
and processes), within the third year the developments shifted towards exploring the added value of
combining these strands. The demonstrators were decomposed into building blocks (as explained in
D2.3/3.3), and these building blocks were combined to address the Knowledge Maturing problems
observed at the target contexts. This led to target context specific ’Instantiations’, which are now the
object of Summative Evaluation.
2.2 Overview of the Summative Evaluation approach
In order to set the scene for the Summative Evaluation, section 2.2.1 provides a brief recap around the
question: What is the scope of MATURE IP? This is followed in section 2.2.2 by an overview of how the
3rd Review comments caused a re-prioritization and re-planning of the Summative Evaluation. Section 2.2.2
also provides a diagrammatical representation of the Summative Evaluation. In addition to the innovative
focus on General knowledge Maturing Indicators (GMIs) and Knowledge Maturing Phases, LTRI wanted
to offer an approach to taking a view of the overarching goal of the project, namely facilitating
"Continuous Social Learning in Knowledge Networks". The reason for this was that LTRI deemed it
necessary to have a Summative Evaluation checklist (called a typology); this is described in section 2.2.3.
2.2.1 What is the scope of MATURE IP?
A discussion around the question “What is the scope of MATURE IP?” was led by LTRI at the
consortium meeting in Karlsruhe (January, 2012). The summary below was agreed by participants. As
such it sets the context for the re-planning and re-prioritization of the Summative Evaluation.
It was confirmed that the “Knowledge Maturing Phase Model [i.e. 6 phase model] provides a first insight
in the nature of Knowledge Maturing. To refine this view we have to consider the different levels of
interaction that accompany this process. Here we find a progression from the level of individuals to the
level of communities, and, finally, to the level of organization. During the maturing process from
expressing ideas to formalization we find patterns in the flow of knowledge from the individual to the
organisational level.” (http://mature-ip.eu/maturing-scopes, accessed 23 January, 2012). The starting
point is the knowledge worker as an individual. Coming up with new ideas and experiences, they often
freely share these with others. If these experiences are to spread, a joint understanding is necessary, and is
accomplished by communication within groups sharing the same interest and vision. Such communities
are compelled to find a common footing for their joint action and the achievement of common goals.
However, communities are characterised by common interests and aim at the exchange of experience and
not at the realisation of common goals. The latter is the focus of organisations, the third level of interaction.
Furthermore, there are three Strands of MATURE: Content, Processes, and Semantic Structures. Also,
motivational aspects are deemed as important. It was also noted that the Knowledge Maturing Model
takes a particular theoretical orientation (Information Systems & Management / Computer Science); there
are 27 MATURE services (backend), building blocks are the frontend, Instantiations are in context;
MATURE has other solutions (e.g. Scorecard); and there is an Integration and knowledge bus dimension.
Finally, the meeting in Karlsruhe confirmed that the original project tagline of ‘Continuous Social
Learning in Knowledge Networks’ is still a relevant way to summarise MATURE.
2.2.2 Re-prioritization and re-planning of the Summative Evaluation
The Summative Evaluation is Phase 5 of WP6 (originally described in D6.3) and took place between
months 33-48 (up to month 50 for the Northumberland Instantiation, which involved ‘scaling-up’, i.e.
using an Instantiation with a large number of users). The 3rd Review comments caused a re-planning or new
prioritization (Figure 2.2) of the Summative Evaluation (originally in D6.3).
Specifically, the re-planning took into account the following recommendations:
• Recommendation 4. The transition Indicators used in WPs 2 & 3 and the Indicators used in
MATURE Scorecards should be consistently aligned with the Indicators derived from WP1.
• Recommendation 5. Scalability issues need to be better addressed. The limited set of data
available for some demonstrators is a problem, and the consortium should strive to use more
realistic data.
• Recommendation 6. The hypotheses supporting the Summative Evaluation need to be better
defined so as to highlight the innovative part of the project.
These recommendations were taken into account although some evaluation activities had already started
at the time of the recommendations.
As a consequence, the Summative Evaluation perspective was shifted in the following ways:
• The people tagging Instantiation evaluation at Connexions Northumberland was scaled up to include
the parent company (igen), which required negotiations with application partners and the commitment of
additional resources to support the evaluation (response to Recommendation 5).
• In a collaborative reflection on Recommendation 6, the project came to the conclusion that gaining
evidence about (i) whether specific tool approaches support Knowledge Maturing and (ii) whether
the key assumptions underlying the conceptual foundations hold should become a key priority for the evaluation.
For both activities, a strongly indicator-focused Summative Evaluation approach was chosen, which
was to be based on a clearly integrated perspective of Indicators (Recommendation 4).
As part of a thorough revision of the original evaluation hypotheses (see D6.3), the indicator perspectives
needed to be more closely aligned. The process and result of this part of the evaluation re-planning is
presented in section 3.
The proposed problem solution to the above recommendations was arrived at collectively in the
Barcelona Consortium Meeting (October 2011); it is called the ‘Indicator Alignment Process’ and is
detailed in section 3 (Figure 3.4 provides a visualization of this process). In summary, all studies (*) were
required to:
• Go through the “Indicator Alignment Process” (or get confirmation that they were exempt).
• Get moderated approval for their approach from LTRI (except for the 2 higher-level studies
conducted by UWAR).
The current version of the document describing the General knowledge Maturing Indicators (GMIs) is
reproduced in Appendix 12.1 (column 5 is of particular interest because it identifies for each GMI the
level of empirical justification) 3.
It was agreed that the reasons for seeking closer alignment were:
• Because you can “inherit justifications”. In some cases GMIs from WP1 had 4 levels of
justification (e.g. 126 people said they were useful).
• Because we needed to refine our Summative Evaluation in terms of common semantics and level of
explanation.
3
Note there is a slight mismatch (early on in the numbering) between GMI IDs & KM Phases. "Table 10.1:
Summary of study coverage of GMI Indicators by phase" in section 10.1.3 provides a clarification of this.
Thus the premise was that, depending on the “level of justification” for an alignment being claimed
(and confirmed by the ‘moderators’ 4) for each evaluation study, a study could have as a top-level Summative
Evaluation goal one (or more) of the following claims regarding GMIs and Specific knowledge
Maturing Indicators (SMIs, see section 3.2):
1. GMIs / SMIs serve as a basis for Knowledge Maturing services.
2. GMIs / SMIs are used to evaluate the Instantiation’s effect on Knowledge Maturing.
3. GMIs / SMIs are evaluated themselves.
Figure 2.3 provides a diagrammatical representation of Summative Evaluation activities. The black boxes
on the left indicate levels of granularity from micro at the bottom to macro at the top. The six Summative
Evaluation studies are placed inside the circle in Figure 2.3 at points that indicate the level of granularity
at which they operate.
[Figure: circular diagram spanning CONCEPTUAL DEVELOPMENT, TECHNICAL DEVELOPMENT, EVALUATION and EXPLOITATION, with the six studies placed by level of granularity:
Macro level – Partnerships for Impact Study (CAREERS GUIDANCE UK); Longitudinal Study (CONNEXIONS KENT)
Meso level – Maturing process knowledge (FHNW); People Tagging (CONNEXIONS NORTHUMBERLAND)
Micro level – Community-driven Quality Assurance (CONNEXIONS KENT); Online Course Support (STRUCTURALIA)]
Figure 2.3: Overview of the Summative Evaluation
The yellow boxes on the right show relevant MATURE activities that relate to that level of the diagram.
So at the micro-level, for example, Study 4 looks at the use of MATURE tools for online course support;
there is a pre-occupation at the micro level with technical development and users. At the meso level we
also have some links to conceptual development. A key question from the meso level perspective is ‘what
did the activities before, during and alongside the MATURE development project contribute to user and/or
our understanding of Knowledge Maturing processes?’ Study 2 is partially at this meso level because, as
well as having a technical/user focus, it also investigated assumptions surrounding a Knowledge Maturing
ontology. In contrast, at the macro level we get Study 5, which looks at how notions of Knowledge
Maturing have played out, for example, at the UK Government Department/ministerial level. There is a
strong link at the macro level to exploitation.
4
‘Moderators’ in a similar sense to an online moderator: setting the rules/framework of engagement, acting as
referee in disagreements, guiding the direction of travel and summarising the results.
2.2.3 Typology for understanding ‘Continuous Social Learning in Knowledge Networks’
In addition to the innovative focus on GMIs/SMIs prioritized in the Summative Evaluation, LTRI also
wanted to offer an approach to taking a view of the overarching goal of the project, namely facilitating
"Continuous Social Learning in Knowledge Networks". The reason for this was that LTRI deemed it
necessary to have a Summative Evaluation checklist (called a typology) for various reasons:
• It was needed to act as a lens for the interpretation of results from the Summative Evaluations of
Instantiations and other studies.
• A typology was needed that enabled some informal-learning-oriented conceptualisation, and that
could thus aid interpretation of results and feed into future work. Specifically, following our
design process (Figure 2.2), something was needed that allowed LTRI to “Learn and problematize
about experiences and constraints in context”. The typology was needed to provide an additional
approach to the analysis of (mainly) the conclusion sections of each evaluation report (it is therefore
used qualitatively to provide a meta-analysis).
• LTRI also took the view that a checklist was needed that drew on broader theoretical
perspectives so as to help disseminate the outcomes of MATURE to the wider community, i.e. to
broaden the concepts used beyond those from Information Systems and Management / Computer
Science. Providing an interdisciplinary perspective for the TEL community is needed if we are to
investigate scaling and sustaining TEL so as to narrow the “implementation gap” (Javier
Hernandez-Ros at the EC-TEL 2011 conference; see also Cook, Pachler, Schmidt, Attwell, and
Ley, accepted).
Consequently, the typology was introduced as an additional interpretative approach; it was presented at
the Karlsruhe meeting in January 2012 where it was confirmed as a valid additional way forward for
D6.4. In the rest of this section we summarise the typology (full details can be found in Appendix 12.2).
As we mentioned above, it is then used to analyse (mainly) the conclusion sections of each evaluation
report; each Study was asked to provide conclusions of main points; a ‘conclusions only’ approach was
chosen due to person-time limitations (performing content analysis using the typology is time
consuming). However, other paragraphs from the body of reports were included in the analysis where
they seemed relevant; i.e. where aspects of the typology were apparently being illustrated. First, below we
outline the impact that the typology is already having.
LTRI’s typology is of informal workplace learning, which pervades the early to middle Phases of
Knowledge Maturing; specifically:
Ia. Expressing ideas (investigation)
Ib. Appropriating ideas (individuation)
II. Distributing in communities (community interaction)
III. Formalizing (in-formation)
IV. Ad-hoc training (instruction)
Thus the typology provides a framework for understanding social network(ing) services in work-based
learning from a Knowledge Maturing (and beyond) perspective. Potentially, this approach could give us a
critical high level overview of (some) MATURE results and thus could potentially help our conceptual
understanding of the area under investigation (particularly informal workplace practice and learning).
This typology (Cook & Pachler, 2012a) has been published in the British Journal of Educational Technology
(BJET), where it was used to analyse the People Tagging Demonstrator 3 (Braun, Kunzmann, &
Schmidt, 2012). The full BJET paper is reproduced in Appendix 12.2 5.
Briefly, the main nodes shown in Table 2.1 were derived after examining the literature and
returning to the simple focus presented by Eraut (2004, p. 269), who talks about ‘Factors affecting
learning in the workplace’, calling them Context Factors and Learning Factors (Figure 2.4).
Figure 2.4: Factors affecting learning in the workplace (Eraut, 2004, p. 269)
Learning in the workplace is viewed as a response to a complex problem or task. Learning needs to be
embedded in meaningful and authentic cultural contexts.
Table 2.1: Factors in work-based Social Network(ing) Services
1. Context Factors
   a. Work process with learning as a by-product
   b. Learning activities located within work or learning processes
   c. Learning processes at or near the workplace
2. Learning Factors
   a. individual self-efficacy (confidence and commitment)
   b. acts of self-regulation
   c. cognitive load
   d. personal learning networks (group or distributed self-regulation)
3. People Tagging Factors
   a. efficiency gains
   b. cost reduction
   c. expert finding
   d. people tagging tactics
5
Note that BJET is a high-quality international journal: Impact Factor: 2.139; ISI Journal Citation Reports Ranking
2010: 11/177 (in Education & Educational Research). This critical review and typology-based approach has already
drawn community interest. Slides from a talk in London, November 2011 (Cook & Pachler, 2011), about the first
version of the typology have had 5583 views (as of 31/05/2012, see http://tinyurl.com/czbauqf). Slides from an
invited talk in Canada, April 2012 (Cook & Pachler, 2012b), about the refined version of the typology presented
in the BJET paper have had 1178 views (as of 31/05/2012, see http://tinyurl.com/6lhlrwu). Papers on the
typology have been accepted for various conferences, e.g. ECER (Cook & Pachler, 2012c) and the JTEL Summer
School (Cook, 2012; 1033 views, see http://tinyurl.com/c89qk56, as of 31/05/12). A related Workshop and paper
have been submitted to the Alpine Rendez‐Vous 2013 (Cook, Pachler, Schmidt, Attwell and Ley, accepted) and the 46th
Hawaii International Conference on System Sciences (Cook, submitted). Other journal articles are planned.
The key elements of the critical literature review were added to the Learning Factors node (expanded in
Table 2.2). This was required because Eraut’s body of work deals with face-to-face learning. In this sense
we have extended Eraut’s work. Finally, it became clear that a specialized node for people tagging factors
was needed (given we wanted to apply the typology to a case study to test it). Thus the Learning Factors
node is generic (some of these factors overlap), and hence the typology includes branches
surrounding personal learning networks, whereas the People Tagging Factors node is very specific.
Table 2.2: Learning Factors expanded and related literature
2a. individual self-efficacy (confidence and commitment) (Eraut, 2004, p. 269)
    i. feedback
    ii. support
    iii. challenge
    iv. value of the work
2b. acts of self-regulation (Dabbagh and Kitsantas, 2011)
    i. competence (perceived self-efficacy, overlap with 2a)
    ii. relatedness (sense of being a part of the activity)
    iii. acceptance (social approval)
2c. cognitive load (Huang et al., 2011)
    i. intrinsic (inherent nature of the materials and learners’ prior knowledge)
    ii. extraneous (improper instructional design)
    iii. germane (appropriate instructional design motivates)
2d. personal learning networks (group or distributed self-regulation) (Rajagopal, et al., 2012)
    i. building connections (adding new people to the network so that there are resources available when a learning need arises)
    ii. maintaining connections (keeping in touch with relevant persons)
    iii. activating connections (with selected persons for the purpose of learning)
    iv. aggregated trustworthiness (perceived credibility) = social validation + authority and trustee + profiles (Jessen and Jørgensen, 2012)
The typology was tested as a checklist for analysis when applied to a case study. Briefly, from a
qualitative analysis we claim that the typology can easily be applied to a MATURE case study (People
Tagging Demonstrator 3; Braun, Kunzmann, & Schmidt, 2012). The mapping of the nodes and branches
in our typology, as mentioned in the text in the case study, is thus summarised by a list of node-branches.
These refer to the node-branch names of our typology and can be seen as one way of assessing the current
status of a project or initiative in terms of the factors from our typology that are found present or missing
in a specific case. The analysis of the MATURE Demonstrator example has, we claim, proved productive
and we suggest that the typology we developed has the potential to provide a fruitful tool for further
exploration of the field (hence we draw on it to describe LTRI’s plans for future work in section 10.2).
For example, on the basis of our analysis, we were able to see certain gaps, in the sense that some
node-branches were absent from the MATURE People Tagging Demonstrator 3 case analysis; on this basis
we claim that the learning factor Indicators that would seem to be areas where future work on computer-based scaffolding could be needed are:
• individual self-efficacy (2a)
• self-regulation (2b)
• personal learning networks (2d)
Thus the purpose of our critical review, typology and qualitative analysis using a case from the literature
was to provide a framework or checklist to assist our understanding of social mobile network(ing)
services in work-based learning. Rather than provide a definitive map of the field, our typology provides
an explanatory, analytical frame and as such a starting point for the discussion of attendant issues.
(See Appendix 12.2 for the full BJET paper from which this section is abstracted.)
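The checklist use of the typology described above can be expressed as a minimal sketch (a hypothetical illustration, not part of the MATURE toolset): the node-branches of Tables 2.1 and 2.2 are held as a nested structure, branch labels are abbreviated, and the branches not evidenced in a case analysis are reported as gaps.

```python
# Hypothetical sketch of the typology-as-checklist analysis. Node and branch
# names are abbreviated from Tables 2.1 and 2.2; the coding of which branches
# are present in a case report is assumed input from a qualitative analysis.
TYPOLOGY = {
    "Context Factors": ["1a work process", "1b learning activities",
                        "1c learning processes"],
    "Learning Factors": ["2a individual self-efficacy", "2b acts of self-regulation",
                         "2c cognitive load", "2d personal learning networks"],
    "People Tagging Factors": ["3a efficiency gains", "3b cost reduction",
                               "3c expert finding", "3d people tagging tactics"],
}

def missing_branches(found):
    """Return the typology branches not evidenced in a case analysis."""
    return [b for branches in TYPOLOGY.values() for b in branches
            if b not in found]

# Example coding of a (hypothetical) case report:
present = {"1a work process", "2c cognitive load", "3c expert finding"}
gaps = missing_branches(present)  # includes 2a, 2b and 2d, among others
```

Applied to a case analysis such as the People Tagging Demonstrator 3 example above, a listing of this kind would surface the absent node-branches (e.g. 2a, 2b, 2d) as candidates for future scaffolding work.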
2.3 The MATURE Evaluation Studies
This section provides a textual overview of the process of Summative Evaluation. In terms of process,
each team made their evolving plans visible through the MATURE wiki.
2.3.1 Process leading to the development of the Template
At the Summative Evaluation session at the consortium meeting in Barcelona, Oct 2011, the template
described below was developed collaboratively, under LTRI’s lead. Each Summative Evaluation activity
(Instantiation Studies 1-4 and UWAR Studies 5 & 6) was allocated a main contact pair (one person from the
Study team and a ‘buddy’ from LTRI to assist in the process). At this meeting a set of general issues and
questions were discussed and resolved, resulting in the template and process described in the next section.
For example, Question 1 was “Why have GMIs? Is it to make Knowledge Maturing traceable? Or is it to
improve KM or GMIs? Is it to help build GMIs into tools?” Question 7 was “What happens if Knowledge
Maturing is scaled up? Is organisational knowledge involved different (a hypothesis?). Do the present
Indicators give us the above?” It was agreed at the meeting that all studies needed to pass an “Indicator
Alignment Process” (or get confirmation that they are exempt). The outcomes of the meeting and a summary of the process to be followed were posted on the MATURE wiki, and each contact pair was asked to use the wiki to evolve their plans and to show and facilitate progress.
2.3.2 Template for the re-plan of the Summative Evaluation
Each of the Summative Evaluations shown in Figure 2.3 was provided with, and was asked to follow,
these steps (and maintain their progress in the evaluation section of the MATURE Wiki):
• Go through the Indicator Alignment Process (see section 3).
• Present any other research questions addressed (for the evaluation studies, in high-level Knowledge Maturing terms).
• Provide a timeline to observe phenomena (from one day to several months).
• Present methods and measurements (e.g. data analysis, individual questionnaires, individual/group story elicitation, user-system replays/logs, etc.).
As mentioned above, each evaluation study was appointed a coordinator from within the team and an LTRI ‘buddy’ who helped to develop the Summative Evaluation re-plans and to comment on issues surrounding deployment and the writing up of results.
2.3.3 The six Summative Evaluation studies, associated goals and number of participants
Table 2.4 provides a top-level summary of the high-level evaluation goals of the six Summative Evaluation studies. These goals were developed by each of the individual Study teams using the Indicator Alignment Process summarised above and detailed in section 3. The table relates to the six Studies listed hierarchically in Figure 2.3 in that it expands on the goals for each study. Note that because they are descriptive/reflective in nature, Study 5 and Study 6 did not present a number of participants.
Table 2.4: Summary of the evaluation studies’ high-level evaluation goals

Study 1: Maturing process knowledge, FHNW (n = 3)
1. Impact of KISSmir: Is the prototype and the knowledge collected with it useful, i.e. does it adequately support retaining and transmitting knowledge among secretaries?
2. Assessment of Knowledge Maturing Indicators: Can Knowledge Maturing Indicators support the selection of the right resources in a given situation?
3. Assessment of Knowledge Maturing: Do we find (the right) traces of Knowledge Maturing in the knowledge base?

Study 2: People Tagging, CONNEXIONS NORTHUMBERLAND (n = 212)
1. Does the use of people tagging improve the collective knowledge of how to describe capabilities (vocabulary)?
2. Does the use of people tagging improve the collective knowledge about others? (social effects/aspects)

Study 3: Community-driven Quality Assurance, CONNEXIONS KENT (n = 9 (8+1))
1. Does the Connexions Kent Instantiation support Knowledge Maturing activities related to Knowledge Maturing Indicators (GMIs)?
2. How do the users use the Connexions Kent Instantiation? What do they appreciate, and what needs to be improved?
3. How usable is the Connexions Kent Instantiation?

Study 4: Online Course Support, STRUCTURALIA (n = 55)
1. How do people use the Instantiation?
2. Does the Instantiation support Knowledge Maturing activities related to Knowledge Maturing Indicators (GMIs)?
3. How easy is the Instantiation to use?

Study 5: Partnerships for Impact Study, CAREERS GUIDANCE UK (number of participants not applicable)
Engage a broad range of UK partners involved in developing different forms of Knowledge Maturing linked specifically to both career guidance and workforce development policies and practices.

Study 6: Longitudinal Study, CONNEXIONS KENT (number of participants not applicable)
Is it possible to build a longitudinal narrative of Knowledge Maturing (KM) processes in a community of practice?

Total n = 279
3 Consolidation of Indicator Perspectives
Knowledge Maturing Indicators, introduced in D1.1, have evolved into the key instrument for making Knowledge Maturing traceable. They have served various purposes: as input and output of maturing services (i.e., automated assessment, see D4.3 and D4.4), for using and displaying Indicators at the user interface level (see D2.3/3.3), for measuring and reflecting upon the effects of Knowledge Maturing support activities as part of the Maturing Scorecard (see D2.4/3.4), and for evaluating tool support for Knowledge Maturing (as originally described in D6.3 and in this deliverable).
Over the course of the project, we have collected considerable evidence for the validity of these Indicators (such as the interview study in D1.2, the connection via “Knowledge Maturing criteria” to evidence from other fields, and the formative evaluation in D6.2). But the Indicators have also been the subject of active further development in the various strands of the project. This has led to what can be described as a ‘not fully integrated’ perspective on Indicators at the end of year 3, which was also noted at the 3rd Annual Review. Consequently, in order to clarify the situation, Figure 3.1 provides an overview of Indicator relationships, and the following sections expand on this.
Figure 3.1: Overview of Indicator relationships
3.1 Introduction
A closer investigation of the problem at the beginning of year 4 has revealed the following issues:
• Indicators were used in different places in the project, i.e., as part of the tool development, as part of the Maturing Scorecard, in the Maturing Services, in the empirical studies, and in the evaluation.
• These indicator sets are not integrated and are not on the same level of abstraction; links between them are often not explicit. Furthermore, the naming in the various places has not been consistent, sometimes making it difficult to decide which set of Indicators is being referred to.
• Due to the dynamic development across various versions, inconsistencies arose, e.g. from references to an older version.
To address these issues, the following steps have been taken collaboratively:
• Clarification of the notion of Knowledge Maturing Indicator and its forms.
• Provision of easily accessible reference documents and a well-defined owner for each set of Indicators.
• Making the links between the different levels of abstraction explicit.
• Note, however, that there is a slight mismatch (early on in the numbering) between GMI IDs (in Appendix 12.1) and Knowledge Maturing Phases; “Table 10.1: Summary of study coverage of GMI Indicators by phase” in section 10.1.3 provides a clarification of this.
3.2 The MATURE indicator landscape
As a result of a discussion process, the following distinct types of Knowledge Maturing Indicators have
been identified and agreed upon:
• General Knowledge Maturing Indicators (GMI). These Indicators for Knowledge Maturing
are independent of tool and organizational context. For GMIs, evidence from various sources has
been accumulated. In D1.2 Knowledge Maturing criteria have been used as abstractions to
connect the GMIs to external theories and evidence. These Indicators are owned by WP1 as the
coordinator for empirical studies and concept development. A systematic hierarchical numbering scheme was introduced in year 3 that is stable under refinement (for an example see Appendix 12.1).
• Specific Knowledge Maturing Indicators (SMI). These Indicators are specific to tools (e.g.,
they capture user interactions with the system), how these tools are used and in which problem
context they are used (e.g., specific tagging or sharing behaviour might have different meaning
with respect to Knowledge Maturing in different contexts). As opposed to GMIs, there are
different sets of SMIs for which the respective Instantiation teams are the owners (see D2.3/3.3).
In some contexts, a subset of SMIs capable of indicating phase transitions was formerly called “transition Indicators”. SMIs are often based on system log data. Ideally,
SMIs are specializations of GMIs with respect to (a) the concepts and activities used, and (b)
narrowing constraints, i.e., an instance of an SMI should always be an instance of one or more
corresponding GMIs. Usually, for SMIs evidence has not been collected on a larger scale. Rather,
through the specialization relationship, justification can be inherited from GMIs; i.e., if for a GMI
X evidence has been gathered that it is an indicator for Knowledge Maturing, and SMI Y is a
specialization of GMI X, then Y also has evidence that justifies its use.
• Maturing Scorecard Indicators. These Indicators serve as a reflection instrument for a team that intends to improve Knowledge Maturing. They relate Knowledge Maturing to organizational goals and consequently need to be on a higher level of abstraction than GMIs. Maturing Scorecard Indicators are thus specific to an organizational context and will evolve within such a context over time. Maturing Scorecard Indicators can be conceptually based on GMIs (to link them to the body of evidence), but use SMIs as operationalizations. In addition to SMIs, Maturing Scorecard Indicators also introduce additional Indicators that are specific to the organizational context and are neither linked to nor grounded in other Indicators. These Indicators are owned by the respective organizations. (Note that this type of Indicator is not addressed in the Summative Evaluation.)
For the evaluation of Instantiations, SMIs are the primary instrument. They can be used in two ways in
the evaluation design:
1. SMIs serve as a basis for Knowledge Maturing Services, i.e., they are part of tool functionality (Evaluation Type 1, also referred to as ‘top level goal 1’).
2. SMIs are used to evaluate the Instantiation’s effect on Knowledge Maturing, i.e., the Indicators are used to make Knowledge Maturing, as influenced by the Instantiation, traceable (Evaluation Type 2, also referred to as ‘top level goal 2’).
Both of these depend on sufficient evidence for the validity of the SMI, which can be achieved through inheritance when the SMI is a proper specialization of one or more GMIs. However, in some cases the development of SMIs surfaced additional Indicators: some were fed into the GMI list, while others have no GMI counterpart and therefore lack any evidence of their validity. As it was found that some of these SMIs correspond to key concepts of the respective Instantiations, there was a need to evaluate those SMIs or the corresponding GMIs. Therefore a third category of Indicator usage was needed:
3. GMIs are evaluated themselves (Evaluation Type 3, also referred to as ‘top level goal 3’).
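The inheritance of justification described above, where an SMI that specializes a justified GMI inherits its evidence and an SMI without such a parent requires an Evaluation Type 3, can be sketched as a small data model. This is an illustrative sketch only, not project code: the class and function names are ours, and the validation status used in the example is assumed.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GMI:
    """General Knowledge Maturing Indicator: tool-independent."""
    gmi_id: str
    name: str
    validated: bool  # empirical justification (e.g. from the D1.2 studies)

@dataclass
class SMI:
    """Specific KM Indicator: a specialization of zero or more GMIs."""
    name: str
    parents: List[GMI] = field(default_factory=list)

    def inherits_justification(self) -> bool:
        # Justification is inherited if any parent GMI carries evidence.
        return any(g.validated for g in self.parents)

def usable_evaluation_types(smi: SMI) -> List[int]:
    """Top level goals an SMI may be used for in the evaluation design."""
    if smi.inherits_justification():
        return [1, 2]   # service basis, or tracing the Instantiation's effect
    return [3]          # the indicator itself must first be evaluated

# Example: GMI name appears later in this deliverable; validated=True is assumed.
used_widely = GMI("I.3.2", "An artefact is used widely", validated=True)
doc_usage = SMI("In how many tasks has the document been used?", [used_widely])
print(usable_evaluation_types(doc_usage))  # → [1, 2]
```

An SMI with no justified GMI parent would instead return [3], mirroring the third category introduced above.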
Figure 3.2 summarises this extended view of Indicators and clarifies the relationship between GMIs,
SMIs and Scorecard Indicators. Our clarification also reveals that a sound and precise analysis of the links
between SMIs and GMIs is required that refines the mapping from D1.3. This was achieved through an
Indicator Alignment Process, as described in the following section, which was conducted as part of the
Summative Evaluation re-planning process.
Figure 3.2: Extended view of Indicators
3.3 The Indicator Alignment Process
The Indicator Alignment Process had the goal of defining precise links between SMIs and GMIs to enable
the inheritance of the justification from the GMI level. The starting point was the list of GMIs annotated
with the level of justification of GMIs in D1.3 (see excerpt in Figure 3.3; see Appendix 12.1 for detail).
Here we distinguished only between empirical justification (from the representative study or an associate partner study (APS) in D1.2) and no justification.
Each study (except for the two higher level Studies 5 and 6) was required to follow the process
determined by LTRI as evaluation lead (but negotiated and agreed at the Barcelona Consortium Meeting (October 2011); see section 2.2.2). Following the Barcelona Consortium Meeting the approach was
distributed to partners for comment and clarification before use. In this process, the individual
Instantiation teams were requested to make explicit specialization relationships between their SMIs and
GMIs. If there was a specialization relationship between a GMI with empirical justification and the
respective SMI, this SMI was available for usage in the Summative Evaluation either as Evaluation Type
1 or 2. Otherwise, it was only to be used if an Evaluation Type 3 was conducted for this specific indicator.
ID: I.2.3.4
Level 1: Artefacts
Level 2: Creation context and editing
Level 3: creation process
KM Indicator: An artefact has not been changed for a long period after intensive editing (topic-dependent)
Level of Justification: validated by RepStudy
Figure 3.3: Extract from the General Knowledge Maturing Indicators (GMIs)
This whole process of making indicator relationships or mappings explicit was called the Indicator
Alignment Process.
The results for the Indicator Alignment Process were to be presented using a template, which is
reproduced below as Figure 3.5.
Study Goal: Top level Summative Evaluation study goal(s), XXX
Template
• GMI-ID, e.g. I.2.3.4
• GMI-Level 1, e.g. Artefacts
• GMI-Level 2, e.g. Creation context and editing
• GMI-Level 3, e.g. creation process
• GMI name, e.g. An artefact has not been changed for a long period after intensive editing
• GMI Level of Justification, e.g. validated by Representative Study
• Specific Maturing Indicator (SMI) – name of the SMI and a 4-5 word summary
• Description of mapping to Specific Maturing Indicator – a description of why the mapping was done in that specific way and how the GMI was instantiated. This should be seen as a reflection about the use of the Indicators.
Figure 3.5: Template for completing the Indicator Alignment Process
Note that the outcomes of the Indicator Alignment Process for Study 1-4 are contained in Appendices 12
(see Appendices 12.4 to 12.7).
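A completed template entry can be thought of as a simple record with one field per bullet of Figure 3.5. The sketch below is illustrative only; it reuses the worked GMI example from Figure 3.3, while the SMI name and mapping rationale are invented placeholders, marked as such.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class AlignmentEntry:
    """One completed instance of the Figure 3.5 template (illustrative)."""
    gmi_id: str
    gmi_levels: Tuple[str, str, str]   # Levels 1-3 of the GMI hierarchy
    gmi_name: str
    justification: str                 # level of justification of the GMI
    smi: str                           # SMI name plus a short summary
    mapping_rationale: str             # why the GMI was instantiated this way

entry = AlignmentEntry(
    gmi_id="I.2.3.4",
    gmi_levels=("Artefacts", "Creation context and editing", "creation process"),
    gmi_name="An artefact has not been changed for a long period "
             "after intensive editing",
    justification="validated by Representative Study",
    smi="(hypothetical) Task pattern stable after editing burst",
    mapping_rationale="(hypothetical) Tool edit timestamps operationalize "
                      "'intensive editing' followed by a 'long period'.",
)
# An entry whose GMI lacks justification signals that the SMI may only be
# used if an Evaluation Type 3 is conducted for that specific indicator.
print(entry.gmi_id)  # → I.2.3.4
```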
3.4 Summary of GMIs & study goals covered in the Summative Evaluation
Appendix 12.3 gives a summary of the coverage of GMIs by the Summative Evaluation. The total number of GMIs is 75; of these, 42 are validated/justified and 33 are not validated. 24 GMIs were studied in the Summative Evaluation by at least one study (indicated by green shading), while 51 GMIs were not studied directly (white background). An obvious question is ‘What were the reasons for excluding such a large part of the Indicators?’ Essentially, each of Studies 1-4 took decisions on which GMIs/SMIs were relevant to their study/tool (as we have seen above, the SMIs are usually already built into the tool, and the teams worked ‘upwards’ to the GMIs they came from, in the hope of inheriting justification). We return to this issue in the Collaborative Conclusions.
Note that:
• For historical reasons indicator IV.1.1 does not exist.
• Studies 5 and 6 were exempt from this Indicator Alignment Process as they operated at levels above GMIs.
• More than one hundred SMIs were identified in year 3 (see D2.3/3.3 in Appendix 7.2).
In summary, we can say that Studies 1 and 2 used a mixture of top level study goals (1 & 3), whereas Studies 3 and 4 examined top level study goal 2 only. There was no focus on a particular top level study goal.
The following six sections (i.e. sections 4 to 9) present the reports from Studies 1-6.
4 Summative Evaluation at FHNW (Study 1)
4.1 Background
This section reports the results of the Summative Evaluation of the FHNW Instantiation. Within the
School of Business at FHNW, there are two Masters programmes – “Business Information Systems”
(BIS) and “International Management” (IM) – that have a similar student selection process. The so-called
“matriculation process” for the two Masters programmes, i.e. the process of checking and deciding on
student applications and of communicating these decisions to applicants, forms the context of this
evaluation.
For each of the two Masters programmes, there is one secretary in the administration office (see the middle layer of the process model below) who performs the majority of activities within the student selection process. For more details of the matriculation process, please refer to section 4.6 of deliverable D2.3/3.3.
Figure 4.1: The matriculation process at FHNW
The matriculation process is supported by a special configuration of the KISSmir prototype, which is used
to support the work of the two secretaries and – in exceptional cases – of the Deans of Study. In this
Summative Evaluation, we have concentrated on the sub-process “Check application” that consists of
four sub-tasks as shown in Figure 4.1 and that is performed by the secretaries alone.
The KISSmir configuration of the FHNW Instantiation allows the registration of incoming student
applications in a web-based form. That registration then triggers the KISS workflow engine that creates
task description objects (TDOs) for the activities that have to be performed in the “Check application”
sub-process (depending on the student’s data, this may not comprise all activities shown in Figure 4.1)
and sends out emails to the secretaries which contain a link to these TDOs. The secretaries can open the
tasks in their KISSmir front-end by clicking on the link in the email and can then work on the task,
making reference also to task patterns that are provided along with each task. For details of the KISSmir functionality and its implementation, please refer to section 4.6 of deliverable D2.2/3.2.
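The triggering chain described above (web-form registration, workflow engine, TDO creation, email notification) can be sketched roughly as follows. KISSmir's real implementation is described in D2.2/3.2; everything here, including the function names, the generic sub-task labels and the link format, is an invented illustration of the described flow, not the prototype's API.

```python
import uuid

# Placeholder labels: the four sub-tasks of "Check application" are shown
# in Figure 4.1 and are not named here.
SUBTASKS = ["sub-task 1", "sub-task 2", "sub-task 3", "sub-task 4"]

def applies(subtask: str, application: dict) -> bool:
    # Placeholder rule: depending on the student's data, some sub-tasks
    # may not be needed; here we model that with an explicit skip list.
    return subtask not in application.get("skip", [])

def notify_secretary(programme: str, link: str) -> None:
    # Stub for the email the workflow engine sends to the secretary.
    print(f"To the {programme} secretary: new task at {link}")

def register_application(application: dict) -> list:
    """Registration triggers the workflow engine: one task description
    object (TDO) per applicable sub-task, plus an email with its link."""
    tdos = []
    for subtask in SUBTASKS:
        if applies(subtask, application):
            tdo = {"id": str(uuid.uuid4()),
                   "subtask": subtask,
                   "applicant": application["name"]}
            tdos.append(tdo)
            notify_secretary(application["programme"],
                             link=f"https://example.invalid/tasks/{tdo['id']}")
    return tdos

tdos = register_application({"name": "A. Applicant", "programme": "IM"})
print(len(tdos))  # → 4
```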
While working on their tasks in the KISSmir front-end, the secretaries can, and are encouraged to, both:
• access information in task patterns and
• contribute their knowledge to enhance task patterns.
In addition to the knowledge that is made explicit in the task patterns, certain interactions of the
secretaries with the prototype (e.g. changes they make to their tasks) are recorded, feeding into a pool of
more implicit procedural knowledge. To summarise, we can say that the knowledge about the
matriculation process is captured in three kinds of artefacts:
• The process model (or “process skeleton”), which has been modeled in advance and is only adapted
at rare intervals. The model includes the activities, but also certain “hard-wired” attachments of e.g.
resources to tasks
• The task patterns that contain categories of resources and potential task collaborators as well as
collections of problem statements with corresponding solutions
• The collection of historical tasks and cases, including the recorded actions of secretaries who
worked on them
We will refer to the entirety of these artefacts as the “knowledge base”. We make the assumption that the
development of artefacts in the knowledge base reflects the process of Knowledge Maturing around the
matriculation process. Hence, it is the aim of this evaluation to analyse the quality and usefulness of the
items in the knowledge base (and the way they are exploited to support the secretaries’ work) in order to
draw conclusions about the quality of Knowledge Maturing related to the matriculation process.
4.2 Evaluation design
4.2.1 Overall concept
As described above, there are mainly two persons (the secretaries) involved in the execution of the
matriculation process, plus the two deans of study who get involved only in rare cases. Thus, we
obviously have to deal with a small-scale context. Such a context poses challenges for evaluation – since
quantitative methods cannot be applied – as well as for the construction of a knowledge base because
there are few people to contribute their knowledge.
Despite these problems, we consider it a typical situation for many business processes (e.g. HR processes)
that only a comparatively small number of persons are involved in them, even in relatively large
organizations. One could say that this scenario is a sample drawn from the “long tail” of the set of all
business processes. Hence, we wish to show with our evaluation that even with few people feeding into a
knowledge base, such a knowledge base can be useful for the participants of business processes – and we
need to do so with purely qualitative methods.
In autumn 2011, the secretary responsible for the “International Management” Masters programme left
FHNW and was replaced by a new colleague who took over the handling of matriculations as well as all
the other tasks of the former secretary. There was a relatively short time of overlap where the secretary
who left had the chance to “transmit” her knowledge to the new colleague. Thus, we have here a typical
example of the knowledge retention challenge in the face of employee turn-over. And it is obvious that
the cost of training a new colleague is relatively larger in small-scale scenarios (e.g. since there are fewer
people who can stand in for the new one during the time of training) – hence, this situation is a highly
relevant one for many small organizations.
We exploited this situation for our evaluation in the following way:
• Knowledge base construction: we made sure that in a preparatory phase, the two former secretaries
had the possibility to make use of the KISSmir prototype when dealing with their matriculation
cases. We encouraged them to consult and contribute to task patterns, especially to record problems
and their solutions to them. We also conducted a dedicated workshop with them to capture some
selected (exceptionally interesting) historical cases of matriculation that had occurred before the
introduction of KISSmir.
• Knowledge base evaluation: we then constructed some artificial cases of matriculation that had
problems similar to the ones that had occurred in historical cases. In a one-day workshop with the
new secretary, we went through these artificial cases in detail using KISSmir and observed how she
approached the problems. We encouraged her to consult the knowledge base and made her aware
of the recommendations of resources and solutions offered by the tool.
• Besides observing her approach, we asked questions during the walkthrough; the workshop was
concluded by a combined interview and questionnaire that we conducted with the new secretary as
well as with the secretary of the BIS Masters programme (who had been working with the KISSmir
prototype since the beginning).
Thus, we can say that the evaluation procedure – as a side effect – introduced the new secretary to her
tasks and transmitted the necessary knowledge via the KISSmir prototype. Since some opportunity had
been given for “traditional” knowledge transmission from the old to the new colleague, we had the chance
to test with our evaluation if the KISSmir knowledge base was richer than what had been transmitted
orally and whether and how it could contribute to fill any “knowledge gaps” left open by that earlier
knowledge transmission process. The small-scale setting of the evaluation allowed us to make in-depth
observations and ask detailed questions to deeply understand how each single step in the matriculation
was solved and how the usefulness of the prototype and the knowledge base were perceived in the
process.
4.2.2 Research questions and hypotheses
Our evaluation addressed three main research questions:
1. Impact of KISSmir: Is the prototype and the knowledge collected with it useful, i.e. does it
adequately support retaining and transmitting knowledge among secretaries? Does it help to deal
with new matriculation cases more efficiently and accurately? This includes aspects such as
efficiency of work, usability of the prototype and the quality of the process model and task
patterns that were available at the time of evaluation.
2. Assessment of Knowledge Maturing Indicators: Can Knowledge Maturing Indicators support
the selection of the right resources in a given situation? That is, can people judge the maturity of
knowledge that is accessed through certain resources/artefacts by knowing the values of certain
Indicators derived for these resources and does it help them to select resources that are adequate
for the task at hand?
3. Assessment of Knowledge Maturing: do we find (the right) traces of Knowledge Maturing in
the knowledge base? More specifically, are the artefacts that have been developed through usage
of the prototype at “the right level of maturity”? That is, has knowledge matured and if so, has it
reached a level of maturity that is appropriate for the task/situation at hand? This question is
based on an earlier insight within the project: it is not always the case that the highest level of
maturity is the most appropriate.
4.2.3 Methods and instruments
In the following section, we will describe the methods employed in our evaluation. This includes the
procedures (observations, interviews etc.), but also – and more importantly – the theoretical concepts that
helped us shape the questions we asked, in particular the Knowledge Maturing Indicators.
As outlined above, the first step in the evaluation was the gradual construction and evolution of the
knowledge base that was facilitated by the secretaries’ use of the KISSmir prototype and intensified by a
one-day workshop where interesting historical cases were captured and added to the knowledge base.
Then, in order to answer the three research questions outlined in the previous section, we first held a one-day workshop with the new colleague where she undertook the matriculation process for 6 applications by
fictional new students. In a second step, we interviewed her and the secretary of the BIS Masters
programme about the perceived quality and maturity of the information and knowledge contained in the
knowledge base.
4.2.3.1 Workshop
The special cases workshop with the new secretary had a combined format, including observations,
discussions and interview questions. The procedure was as follows:
After a short presentation of the MATURE project and its goals to the new colleague, we briefly
acquainted her with the KISSmir prototype. Then, she was given a sheet with descriptions of 6 fictional
cases (see Appendix 12.4.2.1). Each description contained the fictional data of a student application (as
would be available when receiving a real application) and sometimes additional information (that could
normally be obtained from the application letter or other sources).
Then, we asked the new colleague to enter the data from the fictional applications in the KISSmir system
and work through the cases by executing the tasks that were assigned to her by KISSmir. She was
encouraged to make use of the recommendations and information provided in the task patterns.
In order to capture observations and answers to prepared questions during the workshop, we had prepared
a walkthrough document. For each case, that document contained a table that had a line for each activity
of the matriculation process and various columns to capture associated information (see Figure 4.2 and
Appendix 12.4.2.2 for examples).
Figure 4.2: Example table for capturing observations during the workshop
The first three columns of the table were pre-filled with the problems that were contained (and expected
to be spotted) in the current task, the expected solution and the resources we deemed useful for that task.
The fourth column provided the possibility to select Knowledge Maturing Indicators that the secretary
deemed useful for making a more informed choice among resources or problem statements recommended
to her (below, we will refer to the contents of this column as indicator selection fields, see Figure 4.3 for
an example). The last column was left empty for remarks and observations.
Figure 4.3: An indicator selection field: a form for selecting translated specific Knowledge Maturing Indicators that could facilitate the choice among recommended resources
In addition to discussing problems and resources as proposed by the first three columns of the table, we
asked questions regarding the usefulness of information and usability and captured the answers beneath
the table belonging to each case (the question guidelines we prepared are also included in the example in
Appendix 12.4.2.2).
Both for the selection of helpful Knowledge Maturing Indicators and the other questions we asked, we
had prepared guidelines, but allowed ourselves deviations and asked additional questions as required by
the context. This included, above all, questions to clarify and explain certain decisions by the secretary.
We also encouraged discussions and the results/answers were recorded on the sheet belonging to the
current case.
Table 4.1: Elements of the workshop documentation

Element: Indicator selection fields
Research question addressed: 2. Assessment of Knowledge Maturing Indicators

Element: Questions (beneath table)
Research question addressed: 1. Impact of KISSmir
Aspects of the question: Usefulness of information in the knowledge base, usability

Element: Observations
Research question addressed: 1. Impact of KISSmir
Aspects of the question: Efficiency of work, usefulness of information in the knowledge base

Table 4.1 summarizes the elements that were used to document the results of the workshop and points out which element helped us to address which (aspects of which) research question.
In order to assess how far information about the values of certain Knowledge Maturing Indicators could support a more informed choice among recommended resources (research question 2), we had to translate the General Knowledge Maturing Indicators (GMIs) into Indicators that were both understandable to the secretaries and directly measurable in the given situations.
We proceeded in the following way:
- We first thought of criteria that could be used to rank recommended resources (first column of Table 4.2).
- For each ranking criterion, we then checked whether any Specific or General Knowledge Maturing Indicators (SMIs/GMIs) existed that expressed the same idea as the criterion. The resulting mapping (indicator alignment) between ranking criteria and GMIs was recorded; see the last two columns of Table 4.2.
- Finally, we determined the value of each criterion for each recommended resource in the given situation and presented these values to the new colleague to see whether that information would help her with the selection of resources.
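The final step, determining each criterion's value per recommended resource and presenting the values, can be sketched as a simple ranking. The resource names and usage counts below are invented for illustration; only the criterion itself ("In how many tasks has the document been used?", mapped to GMI I.3.2) comes from the study.

```python
# Invented usage counts for three hypothetical recommended resources.
usage_counts = {
    "Checklist of admission criteria": 14,
    "Template rejection letter": 9,
    "Notes on foreign diplomas": 3,
}

def rank_by_usage(counts: dict) -> list:
    """Order recommended resources by the number of tasks in which each
    was used (the translated SMI), highest first."""
    return sorted(counts, key=counts.get, reverse=True)

for rank, resource in enumerate(rank_by_usage(usage_counts), start=1):
    print(rank, resource, usage_counts[resource])
```

Presenting such criterion values alongside each recommendation is what the indicator selection fields in the walkthrough document probed for usefulness.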
Table 4.2: Knowledge Maturing Indicators and mappings as used in indicator selection fields

Ranking criterion (translated SMI): In how many tasks has the document been used?
SMI: D4.II.5 Process-related knowledge increases its maturity when a task pattern and its abstractors and/or problem/solution statements are more widely used by everyone
GMI: I.3.2 An artefact is used widely

Ranking criterion (translated SMI): How often has a problem been used?
SMI: D4.II.5 (as above)
GMI: I.3.2 An artefact is used widely

Ranking criterion (translated SMI): How many times has a solution been picked out of all available ones for a given problem?
SMI: D4.II.5 (as above)
GMI: I.3.2 An artefact is used widely

Ranking criterion (translated SMI): Was the document added to the task pattern by a reputable/trusted person?
GMI: I.2.1.2 An artefact has been edited by a highly reputable individual

Ranking criterion (translated SMI): How many times has a solution been changed?
GMI: I.3.10 An artefact was changed

Ranking criterion (translated SMI): How well do the document contents match the current task? (no SMI/GMI mapping)

Ranking criterion (translated SMI): How well does the problem description match the current case? (no SMI/GMI mapping)

Ranking criterion (translated SMI): How well does the solution match the current case context? (no SMI/GMI mapping)
As can be seen in Table 4.2, the ranking criteria based on the degree of match between a resource and the task context (last three rows of the table) cannot be mapped to GMIs, since that degree is not a general property of a resource but depends on the task context. However, asking about the usefulness of these ranking criteria allowed comparisons between such context-specific criteria and the more abstract GMIs.
4.2.3.2 Post-workshop interview
In order to answer the third of our research questions, namely whether knowledge had matured through
using the KISSmir prototype to a level of maturity that is appropriate for the task/situation at hand, we
developed a questionnaire that we discussed with both secretaries (the new
secretary for the IM programme and the secretary of the BIS programme). We recorded answers and
additional comments and explanations.
Again, we used Knowledge Maturing Indicators to detect whether Knowledge Maturing had occurred.
We found that, again, General Knowledge Maturing Indicators (GMI) were not suitable for doing so
directly in most cases, mainly for two reasons: firstly, we had to make sure that the Indicators were
comprehensible for end users, i.e. contextualized within their work situation. Secondly, GMI are not
phase-specific; thus we resorted to employing Specific Knowledge Maturing Indicators (SMI) that were
designed to detect transitions between phases of the Knowledge Maturing model and hence enabled us to
differentiate between various levels of maturity. However, the SMI that we used are special cases of
existing GMI; the corresponding mapping (indicator alignment) is laid out in Appendix 12.4.1.
Table 4.3: Example interview questions derived from GMI that measure transitions between phases of the Knowledge Maturing Model

- Example question: "It would be sufficient to ask a colleague when a problem occurs."
  SMI: —
  GMI: I.2.1.1 An artefact has been changed after an individual had learned something
  Phase transition: 0 → I [6] (knowledge is recorded)

- Example question: "It would be sufficient to keep my own list of problems and/or special situations."
  SMI: D4.II.2 Process-related knowledge has reached phase II when a personal task attachment or subtask is being added to an abstractor in a public task pattern
  GMI: I.3.4 An artefact became part of a collection of similar artefacts; I.3.5 An artefact was made accessible to a different group of individuals
  Phase transition: I → II (knowledge is shared)

- Example question: "There should be more formal guidelines (more formal than the problem/solution items) to ensure consistent handling of special cases."
  SMI: D4.III.1 Process-related knowledge has reached this phase when task patterns / process models have been approved internally after consolidation (insufficiently used resources have been removed from a task pattern, abstractors of a task pattern have been renamed and polished or removed, similar subtask abstractors have been merged, problem or solution statements have been cleaned up / merged, and quality has been checked)
  GMI: IV.1.4 A process was internally agreed or standardized
  Phase transition: II → III (knowledge is shared, categorized, and agreed)

- Example question: "There should be an official compendium of resources to use with detailed instructions when to use which."
  SMI: D4.IV.1 Process-related knowledge has reached this phase when - after an analysis of usage of a task pattern - the underlying process model has been adapted, e.g. a frequently used subtask abstractor was added as a new activity to the model
  GMI: I.4.3 An artefact has become part of a guideline or has become standard
  Phase transition: III → IV (knowledge is standardized)
As can be seen in Table 4.3, many questions have been formulated negatively with respect to the phase
transition. For example, to determine whether the secretaries found it appropriate to share their
knowledge, i.e. to transition into phase II, we asked them whether they found it sufficient to keep their
own record of problems and to stay in phase I.
Figure 4.4 shows the complete set of questions we asked about functional knowledge about resources
applied within a process. The full questionnaire can be found in Appendix 12.4.2.3.
[6] This is not necessarily Phase I.
Figure 4.4: Interview questions for functional process-related knowledge
4.3 Results
4.3.1 Statistics of the knowledge base
In the following section, we will briefly present some statistics from the knowledge base, in particular the
collection of task patterns and the repository of historical tasks at the time of the evaluation.
4.3.1.1 Task patterns
Figure 4.5 shows the number of the various kinds of abstractors and problems per activity of the
matriculation process – the activities 2.1 to 2.4 are the ones that belong to the sub-process “check
application” (see box in lower left corner of Figure 4.1) whereas numbers 3 to 5 correspond to the
activities on the right-hand side of Figure 4.1. Activity number 1 is not listed here – it consists of entering
the application data, which does not happen within the KISSmir front-end and is not knowledge-intensive.
[Bar chart omitted; series: document abstractors, person abstractors, subtask abstractors, and problems, for activities 2.1–2.4 and 3–5.]
Figure 4.5: Number of abstractors/problems per activity of the matriculation process
We can see from the figure that each activity has document abstractors. However, some of these overlap;
e.g. there is one that contains a link to the application data, which is present in each activity. Person
abstractors are not present all the time, and subtask abstractors occur only in activity 4 ("send acceptance
letter"), which may indicate that this activity in fact has a sub-structure that may need to be included in
the process model.
Activities 2.1, 2.2 and 2.3 are the most knowledge-intensive and are therefore the only ones that have
problems defined. Overall, 13 unique problems have been captured and 12 unique document abstractors
exist. The former were all contributed by the secretaries; the latter were predefined at modeling time.
[Bar charts omitted.]
Figure 4.6: Number of problems with a given number of solutions (a) and number of document abstractors with a given number of resources assigned to them (b)
Figure 4.6 shows the distribution of solutions (a) and resources (b) over problems and document
abstractors respectively. We can see that most problems have a single solution, one has none and three
problems have two solutions to choose from. Regarding the number of resources assigned to document
abstractors, the picture is more complicated: here, a third of all abstractors are empty. This is because the
KISS workflow engine assigns resources dynamically at run-time (e.g. the abstractor “Student application
data” gets filled with a link to the current application data at run-time). There is no abstractor that has
more than six resources assigned to it.
All in all, it can already be seen from these statistics that the task pattern knowledge base is – as can be
expected from the small-scale setting – rather small. The quality of, for example, the problems and
solutions contributed is investigated below.
4.3.1.2 Collection of historical tasks
Table 4.4 shows the overall statistics of the knowledge base. The secretaries used KISSmir to handle 61
cases of student applications.
From the number of subtasks that they added to their private tasks (34), we can see that the subtask
feature was used in a significant number of cases. There were only 16 distinct subtask titles, which
indicates that some subtasks occurred repeatedly. Figure 4.7 (a) shows the distribution of title
frequencies: only four titles occurred more than once; one of them occurred 12 times, the others five,
three and two times, respectively. This shows that a few subtasks are used frequently – it may be a good
idea to create subtask abstractors for them (if not already done) or even to include them in the process
model. More details of such "process mining" issues are reported in D4.4.
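The kind of frequency analysis described above can be sketched as follows – a minimal illustration only, assuming subtask titles are available as plain strings; the titles and the threshold are hypothetical, not taken from the actual KISSmir data:

```python
from collections import Counter

def abstractor_candidates(subtask_titles, min_frequency=2):
    """Count how often each subtask title occurs in historical tasks and
    return the titles frequent enough to be candidates for subtask
    abstractors (or even for new activities in the process model)."""
    counts = Counter(subtask_titles)
    return {title: n for title, n in counts.items() if n >= min_frequency}

# Toy data mirroring the reported figures (34 subtasks, 16 distinct titles):
# four titles occur more than once (12, 5, 3 and 2 times), the remaining
# twelve titles occur once each. The title strings are invented.
titles = (["check certificates"] * 12 + ["clarify grades"] * 5 +
          ["request documents"] * 3 + ["ask dean"] * 2 +
          [f"one-off task {i}" for i in range(12)])

print(abstractor_candidates(titles))
```

With these toy inputs, only the four recurring titles would be flagged as candidates, matching the intuition in the text that repeated subtasks are worth promoting into the task patterns.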
Table 4.4: Statistics of the knowledge base

- Number of cases (process instances): 61
- Number of subtasks added by secretaries: 34
- Number of distinct subtask titles: 16
- Number of resource attachments: 314
- Number of distinct resource attachments: 64
- Number of problems used: 19
- Number of distinct problems used: 7
- Number of solutions used: 17
- Number of distinct solutions used: 9
Regarding the resource attachments to tasks, we can see that there is on average more than one
attachment per task. Figure 4.7 (b) shows the frequency distribution for resources: the majority of
resources occur four times. This can be understood as follows: a link to the application data of the current
student is always added to the three tasks of the “check application” sub-process and the subsequent
“Accept/Reject application” task (the sub-process contains four activities, but the fourth “check
matriculation number” is only executed for Swiss students).
[Bar charts omitted.]
Figure 4.7: Distribution of frequencies for subtask titles and resources used in tasks
There are only five other resources that have been used 37, 13, 6, 5 and 2 times, respectively. All of them
are static recommendations from task patterns. The frequencies show which are the standard cases: the
resource www.anabin.de occurs 37 times (it is needed in almost every case to check the approval of the
university). The other four resources are acceptance letter templates, the standard one occurring 13 times,
the others (as expected) fewer times.
Finally, we turn to the analysis of problems and solutions. We can see that 19 times a problem statement
was used in a task – which is remarkably high. And in nearly all these cases (namely 17), the secretary
also added a solution. Figure 4.8 shows the distribution of frequencies for both problems and solutions.
Obviously, there are two rather “popular” problems which occurred 5 and 6 times. The fact that 5 of the 7
problems occurred at least twice indicates that the problems that were captured are not “complete
exceptions” which will never re-occur, but although they constitute exceptional situations, it is
worthwhile recording them since they will re-appear. Regarding the solutions, 5 out of the 9 were picked
only once, two of them twice and two of them four times. It is hard to deduce anything from that since
most problems have only one solution.
[Bar charts omitted.]
Figure 4.8: Distribution of frequencies for problems and solutions
Overall, the analysis of the historical tasks shows that indeed the proposed resources, problems and
solutions from the task patterns have been used by the secretaries and that subtasks have been created.
4.3.2 Observations during the walkthrough
As far as the first of our research questions is concerned (the impact of KISSmir on the secretaries’ work),
we found the following (the complete results of the walkthrough are contained in Appendix 12.4.2.4):
• Efficiency of work, accessing required information: After the first case, the participant (the new
secretary) stated that using the prototype consumed more time than it saved, as compared to the
paper-based case handling she was used to. Later, after the fifth case, she changed her mind and
stated that the prototype could save her time if it was integrated with the tool she needs to use for
capturing student base data.
• Efficiency of work, speed of finding the right problems and/or solutions: Regarding the time
required to find the right problem statements and solutions, we made a number of different
observations: some problems could be found easily, others were difficult to spot, mostly because of
“strange” labels. We noticed that – because of some poor labels – a problem that was relevant in a
given situation would not always be recognized as such.
• Usefulness of prototype with regard to improving the quality of work: Regarding the relevance of
problem statements, we also made diverse observations: some problems (and their solution) were
already known to the participant, others were not and it was obvious that making her aware of these
had considerable value because it improved both the quality and the speed of handling of special
cases dramatically. In two cases, the participant stated she had learned something genuinely new, and in
two other cases she proposed extensions to the solution on her own initiative, which we included in the
knowledge base.
• Quality of the process model and task patterns: the participant stated that she had a preference
regarding the execution order of the four activities of which the "Check application" sub-process is
composed (see Figure 4.1). She said that, by default, the "Check approval of the university" task
should be executed first, since it typically entails the most problems and can lead directly to the
rejection of an applicant, in which case the other tasks do not need to be executed. This rather
obvious preference had not been detected so far, since rejections had been very rare up to now.
However, the new secretary stated that she expected to reject far more applications than before,
mostly because of the increasing number of applicants. Thus, it can be expected that her preferred
order of tasks can be detected from her behavior via process mining. In addition, the participant
stated that some activities seemed to be missing from the process model: she understood that she
should check additional acceptance criteria, such as grades, level of English and work experience.
According to the dean of study of the BIS programme (who was present in the workshop), these are
necessary steps and should thus be included in the process model, especially in the new situation
with the radically increased number of applicants. Finally, the participant repeatedly remarked on
her plans to do some things differently from what had been established as "good practice" by the
two original secretaries. One example is an individual checklist for checking the completeness of
certificates that she has started to develop (see the additional tasks proposed for the process model
above).
• Usability: From what we could observe, the participant quickly got used to the user interface and
had no difficulty (considerably less than participants in the formative evaluation) remembering
where to find information/functionality. She confirmed this herself and said the tool was designed
clearly and was more usable than the other system she usually works with.
• Motivation: The participant said she was motivated to invest time in documenting problems
because she thought that collecting problems/experience in this way was a good idea; but she also
stated that keeping the effort minimal would be crucial for this to happen.
To summarise, we can say that the secretaries saw few barriers or problems in adopting the KISSmir
prototype for their work; they recognized the benefits, especially the chance of retaining and sharing
knowledge about special cases and problems. As far as the "training" aspect was concerned, the tool laid
open a few, but interesting, "knowledge gaps", and the new colleague was able to learn about new
problems through using the tool – although the oral explanations of her predecessor and the experience
she had already gathered with her first matriculation cases covered about 70% of the problems that we
had hidden in our simulated cases.
As far as the usefulness of Knowledge Maturing Indicators for supporting the selection of relevant
resources and/or problems is concerned, we found the following:
• In many cases, none of the Indicators were needed, according to the participant. Sometimes the
resource to pick was obvious; in other cases a better label for the resource would have helped more
than the values of the Indicators.
• From all the criteria presented, there were two that the participant deemed interesting in a number
of cases: firstly the degree to which a problem or resource matched the task context and secondly
the frequency of usage of the resource/problem. She stated that even in cases where that frequency
was not strictly necessary for selecting among choices, the frequency would be interesting to know.
4.3.3 Results of the post-walkthrough interview
An analysis of the secretaries' answers to the post-walkthrough interview questions provided the
following insights (although the comments with respect to Phases I and II are not surprising):
• Phase I: Both participants found it necessary to record knowledge about the matriculation process
in some form.
• Phase II: The sharing of recorded knowledge was found useful and necessary by both secretaries.
• Phase III: The way of offering resources and problems/solutions per activity, i.e. providing the
right information in the right context was accepted by both participants. The questions that
proposed putting all resources and problem statements in one common folder or list, respectively,
were unanimously rejected by the participants.
• Phase IV: Both participants agreed that a more formal compendium of resources to be used in
the process was completely unnecessary and that more formal guidelines regarding the handling
of special cases would be desirable. On the other hand, the participants disagreed on the
usefulness of workflow support: whereas one participant liked that support (as tasks remain
useful reminders when working on multiple cases in parallel), the other participant thought it
was not necessary (and that a set of task patterns would be sufficient).
4.4 Discussion
Based on the results we have gained the following insights:
• Impact of KISSmir: through our observations and questions in the walkthrough workshop, we
could establish that KISSmir can be successfully used for sharing and retaining relevant process-related
knowledge. The knowledge base that had been built in advance by the two former
secretaries (i.e. by a very small team) proved useful for training the new colleague and helped her
to gain insights beyond the knowledge she had acquired from her predecessor. On the other hand,
we learned about potential improvements that could be applied to the process model (and
particularly, task patterns), including activities to be added and an order of activities to be
introduced. This shows that, on the one hand, there is still room for improvement; on the other
hand, our approach of learning from user behavior and adapting the model accordingly stands a
good chance of detecting these improvements based on the expected behavior of the new
colleague (this remains unproven, since the results of the new secretary working with the
prototype could not be included in this evaluation). Finally, we found evidence supporting the
claim that KISSmir makes work on matriculation cases more efficient and that the KISSmir UI is
usable.
• Assessment of Knowledge Maturing Indicators: Not surprisingly, the relevance of the
Knowledge Maturing Indicators for ranking resources is rather limited in a setting where only a
small number of resources exist in task patterns. The values of Indicators are merely interesting as
additional information about resources. Thus, this evaluation did not allow us to fully judge their
potential, especially in larger-scale contexts. For the small-scale scenarios, however, we learned
that quality control regarding the labels of resources is more important to end users than
knowledge of the values of GMI.
• Assessment of Knowledge Maturing: The results of the post-workshop interviews show very
clearly that the approach chosen by KISSmir and the resulting degree of maturity of knowledge and
related artefacts is perceived as appropriate by the evaluation participants. With KISSmir, the
knowledge and the artefacts that encode it have reached a level of maturity that is somewhere
between phases II and IV of the Knowledge Maturing model: some of the knowledge is
encoded in the process model and has hence reached a high degree of standardisation (phase IV),
other knowledge has been published in task patterns and consolidated through various cycles of
editing (phase III), whereas yet other knowledge has been shared in task patterns, but not been
consolidated (phase II). The answers of the participants suggest that – overall – each piece of
knowledge has been developed to the right degree of maturity. The only notable exception to this is
their disagreement over the necessity to provide workflow support for the activities in the “check
application” process, which reveals that standardization of process knowledge (in the form of a
deployed workflow, phase IV) may not be seen as required by all individuals.
Apart from the answers to our initial research questions, we also learned from the workshop with the new
secretary that she did not perceive all of the work practices introduced by the two former secretaries –
which are reflected by the current state of the knowledge base – as mandatory or useful (the new
colleague stated at various times that she would do a number of things differently from the established
routine).
From this observation, we can draw the conclusion that some forms of “maturity” may be perceived as (or
may in fact be) “bad old habits”, i.e. things that are established, not for very good reasons, but because of
lack of motivation to explore an alternative. Sometimes, they are also no longer appropriate because of
changed circumstances, but are not adapted because this is not detected. Thus, with new team or
organizational members, some of the commonly accepted routines (matured knowledge) may have to be
re-negotiated and re-examined.
4.5 Conclusions
At FHNW we have conducted a small-scale Summative Evaluation that investigated how far a small team
of people working on a common business process could successfully build up a knowledge base around
that process that would enhance the speed and quality of process activities. In addition, we investigated
the usefulness of Knowledge Maturing Indicators (i.e. GMIs/SMIs) for selecting relevant resources within
the process and the appropriateness of the maturity level of the process-related knowledge. The context
for this evaluation was a situation where a new colleague joined the administration office to replace
another who left FHNW. We used the knowledge base built on historical cases to “train” the new
colleague and learn about different aspects regarding the usefulness of our prototype for this task.
With the results of the Summative Evaluation, we were able to show that:
• even with few people feeding into a knowledge base, that knowledge base can be useful for all
participants. This is so because many problems occur relatively infrequently (such that it is
worthwhile for an individual to record the solutions);
• the level of maturity of the knowledge and of the artefacts in which the knowledge is encoded is
appropriate for the situation, with some minor exceptions;
• the KISSmir prototype is useful for increasing the efficiency and quality of process executions;
• the new colleague had "knowledge gaps" that could be filled with the help of the tool; a
knowledge retention problem exists in the situation and can be (partly) resolved by using KISSmir;
• the usefulness of Knowledge Maturing Indicators for supporting the selection of relevant resources
is limited in the kind of small-scale scenarios that we targeted with our evaluation.
5 Summative Evaluation at Connexions Northumberland/igen (Study 2)
5.1 Introduction
The goal of this Summative Evaluation was to learn whether we could support Knowledge Maturing
processes through the use of the SOBOLEO people tagging tool across a range of organisations in the
igen group, which includes Connexions Northumberland. We decided to extend our evaluation approach
to igen, as we had only evaluated Connexions Northumberland in the formative evaluation (see also
D6.3); igen was needed to conduct the scaling up necessary for the Summative Evaluation. igen
delivers public and private contracts to provide careers and personal development advice, information and
guidance. Their services include Connexions and Foundation Learning for young people and Next Step
for adults. It aims 'to inspire, guide and enable individuals to achieve their potential through the provision
of impartial, high quality career and personal development services'
(http://www.igengroup.co.uk/about_igen).
The company was keen to collaborate in the MATURE project and to introduce the SOBOLEO people
tagging tool, because it could appreciate the potential value of introducing the tool across a
geographically dispersed service in which face-to-face meetings and communication were becoming
increasingly expensive and staff were therefore seldom able to meet in person. The company also
appreciated the relevance of the tool's functionalities and found the system visually appealing and
easy to use.
The Summative Evaluation was guided by two central questions that were aligned with the two types of
knowledge involved in people tagging (see D6.3):
1. Does the use of people tagging improve the collective knowledge of how to describe capabilities?
2. Does the use of people tagging improve the collective knowledge about others?
Additionally, we were interested in observing the agreement of tags, the regular re-use of tags and the
stability of tags over the complete Summative Evaluation period. In the following two sections we
present the central questions in detail.
5.1.1 Central question 1
For the first central question, "Does the use of people tagging improve the collective knowledge of how
to describe capabilities?", we carefully selected eight SMIs from the more than one hundred SMIs that
were identified in year 3 (see D2.3/3.3 in Appendix 7.2). This selection was necessary in order to keep
the evaluation manageable in terms of complexity and effort with respect to collecting and analysing
data. The eight SMIs were selected according to the following criteria.
• SMIs should cover both aspects of describing capabilities, i.e., the development of the vocabulary
used and the development of individual person profiles.
• SMIs should cover both Knowledge Maturing activities that are key to this central question:
"embedding information at an individual and organizational level" (sharing knowledge about
capabilities with others in the organization) and "reorganize" (gardening the vocabulary).
• SMIs should cover all maturing phases that are relevant for the people tagging tool, particularly
phases 1-3.
This resulted in the SMIs in Table 5.1. As we intended to use the SMIs to evaluate whether Knowledge
Maturing takes place through the usage of the people tagging tool, we linked these SMIs to corresponding
GMIs that are strict generalizations, so that the SMIs inherit the justification from the empirical validation
of the GMIs (see also section three: Indicator Alignment Process). Not all of the relevant SMIs, however,
had a high level of justification from the GMI level (including I.1.1.3, I.2.3.6 and II.4.1), as they emerged
from the Demonstrator development in parallel to the Representative Study in year 2. Therefore we also
needed to evaluate them through questionnaires, which we designed in a way methodologically comparable
to D1.2, adopting the same Likert scale. (Note: for a more detailed explanation of the relation
between the Indicators in Table 5.1 see D2.3/D3.3.) KMA in Table 5.1 (column 3) stands for Knowledge
Maturing Activity. In MATURE, Knowledge Maturing Activities (KMA) are defined as individual or
group activities that contribute to the development of knowledge within the organisation.
Table 5.1 SMIs for central question 1

- SMI 1: A person is annotated with additional tags at a later stage by the same user
  KMA: reorganise
  Unit of analysis: log file data and questionnaire
  Remark: requires GMI evaluation (II.4.1)
  GMI: I.3.6, II.4.1
  Level of justification from GMI: 3 (weak)

- SMI 2: A topic tag is reused for annotation by the "inventor" of the topic tag
  KMA: embed information and reorganise
  Unit of analysis: log file data
  GMI: I.3.9
  Level of justification from GMI: 1 (strong)

- SMI 3: Topic tags are reused in the community
  KMA: embed information and reorganise
  Unit of analysis: log file data
  GMI: I.3.5, I.3.9
  Level of justification from GMI: 1 (strong)

- SMI 6: Topic tags are further developed towards concepts, e.g. adding synonyms or descriptions
  KMA: reorganise
  Unit of analysis: log file data
  GMI: I.3.3, I.4.6, I.3.10
  Level of justification from GMI: 1 (strong)

- SMI 7: A topic tag moved from the "prototypical concept" category to a specific place in the ontology
  KMA: reorganise
  Unit of analysis: log file data and questionnaire
  Remark: requires GMI evaluation (I.1.1.3)
  GMI: I.1.1.2, I.1.1.3, I.3.9
  Level of justification from GMI: 3 (weak)

- SMI 8: The whole ontology is edited intensively in a short period of time, i.e. gardening activity takes place
  KMA: reorganise
  Unit of analysis: log file data and questionnaire
  Remark: requires GMI evaluation (I.2.3.6)
  GMI: I.2.3.6, I.3.10
  Level of justification from GMI: 3 (weak)

- SMI 9: An ontology element has not been changed for a long time after a period of intensive editing
  KMA: reorganise
  Unit of analysis: log file data
  GMI: I.2.3.4
  Level of justification from GMI: 1 (strong)

- SMI 10: A person profile is often modified and then stable
  KMA: reorganise
  Unit of analysis: log file data
  GMI: I.2.3.4
  Level of justification from GMI: 1 (strong)
More information about the mapping between the GMI and SMI can be found in section three and in
Appendix 12.5.1.
5.1.2 Central question 2
For the second central question, "Does the use of people tagging improve the collective knowledge about
others?", we applied a similar approach as for question 1: we selected the most relevant SMIs based on
the following criteria:
• SMIs should cover all relevant Knowledge Maturing activities (finding people, embedding
information, and reorganizing).
• SMIs should cover both artefact-bound aspects (based on tagging behaviour) and non-artefact-bound
aspects (effects on social structures).
Similarly to question 1, we linked the SMIs to GMIs via generalization relationships. This led to three
GMIs (I.1.1.3, I.2.1.3 and IV.2.1) having (according to the Indicator Alignment Process) only a weak
justification (level three); they were therefore deemed to be in need of an evaluation themselves. As
before, we evaluated them via questionnaires that are methodologically comparable to the Representative
Study (D1.2), with the same Likert scale. Note that GMI I.1.1.3 shows up in two different SMIs in central
questions one and two.
Table 5.2 SMIs for central question 2

- SMI 11: An individual changed its degree of networkedness
  KMA: find people
  Unit of analysis: questionnaire
  Remark: requires GMI evaluation (IV.2.1)
  GMI: IV.2.1
  Level of justification from GMI: 3 (weak)

- SMI 4: A person is (several times) tagged with a certain concept
  KMA: embed information and reorganise
  Unit of analysis: log file data and questionnaire
  Remark: requires GMI evaluation (I.1.1.3)
  GMI: I.1.1.3
  Level of justification from GMI: 3 (weak)

- SMI 5: A person is tagged by many different users
  KMA: embed information and reorganise
  Unit of analysis: log file data and questionnaires
  Remark: requires GMI evaluation (I.2.1.3)
  GMI: I.2.1.3
  Level of justification from GMI: 3 (weak)
The Indicators were mainly based on activity-related Indicators that can be logged by the people tagging
system. We iteratively aligned the Specific Knowledge Maturing Indicators to General Knowledge
Maturing Indicators and to phases. In a second step, the General Knowledge Maturing Indicators were
taken as a starting point and the list of Specific Knowledge Maturing Indicators was completed by
including additional contextualisations of the General Knowledge Maturing Indicators. We also checked
the mapping between Knowledge Maturing Indicators and Transition Indicators (see also Appendix 7.2 in
D2.3/3.3). The selection of these Indicators serves two purposes: (1) they form the basis for Maturing
Services and (2) they are used to evaluate the maturing effect of the Instantiation. Our selected Indicators
map almost completely to the early phases of the KMM (see D2.3/D3.3). For a complete mapping of
SMIs and GMIs see Appendix 12.5.1.
5.2 Evaluation description
The evaluation was conducted with a multi-method approach using log file data and questionnaires.
The motivation for up-scaling was to get more users to use the people tagging tool and thus to gather
more data.
The up-scaling for Connexions Northumberland started with a training phase. In total, 11 half-day
training sessions were delivered between July 2011 and January 2012. The sessions were very practical,
with time-bound tasks to be undertaken and an optional set of tasks to be completed in the two to three
weeks between sessions (see Appendix 12.5.2.1 and Appendix 12.5.2.2 for more details). Although staff
were strongly encouraged to do this, there was no compulsion.
Initially it was planned to finish the training phase in September, but low participation and an unclear budget led to a four-month delay at Connexions Northumberland (this issue is covered in the Project Management Deliverable). In total, 109 users were trained face-to-face. In addition to those trained in the face-to-face sessions, a step-by-step PowerPoint presentation together with a number of tasks to be undertaken was sent to a further 103 people in the company. So, apart from staff on maternity leave or long-term sick leave, all staff received training material and support via e-mail to use the system efficiently. This raises the total number of trained staff to 212. (See Appendix 12.5.6.2 for additional reports from Connexions Northumberland and igen.)
The second up-scaling phase could not start until the end of January 2012. We aimed for a two-month up-scaling phase, from late January until late March 2012. We also analysed log file data from July 2011 until March 2012.
A questionnaire for the GMI evaluation was sent via e-mail in March 2012 (see Appendix 12.5.2.3). In total, 27 staff members completed the questionnaire. In total, 298 users had access to SOBOLEO for the formative and Summative Evaluation.
5.3 Results
Communication within the SOBOLEO people tagging tool is organized around the concept of events. We collected a total of 12,620 “events” of three different types:
• A Command Event represents any form of change; for instance, the request to create a concept sends the command event CreateConceptCmd containing an initial name for the new concept.
• A Query Event represents queries to the system; for instance, a query to search for persons sends the query event SearchPersons containing the query string.
• A Notification Event represents any form of notification by the system; for instance, a user opening a tagged person’s profile sends the notification event BrowseProfile containing the URI of the tagged person whose profile is opened.
Each event contains additional standard information: the creation time, an id to establish an order among the events, and sender information. All events exist as Java objects whose XML serializations are stored as individual log files, on which we performed our analysis. Because our analysis mainly depends on such events, a short example of what such an event looks like is appropriate at this point: the CreateConceptCmd for the topic tag “Labour Market Information”:
<de.fzi.ipe.soboleo.event.ontology.CreateConceptCmd>
  <initialName>
    <string>Labour Market Information</string>
    <lan>en</lan>
  </initialName>
  <impliedChanges class="linked-list">
    <de.fzi.ipe.soboleo.event.ontology.primitive.PrimitiveCreateConcept>
      <newURI>http://soboleo.com/ns/1.0#space-default-gen454</newURI>
      <initialName reference="../../../initialName"/>
      <parent class="de.fzi.ipe.soboleo.event.ontology.CreateConceptCmd" reference="../../.."/>
      <id>-1</id>
      <creationTime>07/07/2011 17:27:34</creationTime>
    </de.fzi.ipe.soboleo.event.ontology.primitive.PrimitiveCreateConcept>
  </impliedChanges>
  <id>850</id>
  <senderURI>http://soboleo.com/ns/1.0#users-db-gen4</senderURI>
  <senderName>Caron Pearson</senderName>
  <creationTime>07/07/2011 17:29:59</creationTime>
</de.fzi.ipe.soboleo.event.ontology.CreateConceptCmd>
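Such serialized events can be read back for analysis with standard XML tooling. The following Python sketch is an illustration, not the project's actual analysis code; it extracts the standard fields shared by all events (type, id, sender, creation time) from a serialization like the one above.

```python
# Minimal sketch: pull the standard fields out of one serialized event.
# Tag and field names follow the CreateConceptCmd example above.
import xml.etree.ElementTree as ET

def parse_event(xml_text):
    """Extract the standard event fields: type, ordering id,
    sender name and creation time."""
    root = ET.fromstring(xml_text)
    return {
        "type": root.tag.rsplit(".", 1)[-1],   # e.g. "CreateConceptCmd"
        "id": root.findtext("id"),             # direct child only, so the
        "sender": root.findtext("senderName"), # nested impliedChanges id
        "creationTime": root.findtext("creationTime"),  # is not matched
    }
```

Note that `findtext` matches direct children only, so the inner `<id>` inside `impliedChanges` does not shadow the event's own id.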
We excluded the 8,432 events (67% of the total) generated by the 11 training sessions, as we cannot distinguish between real usage and training usage of the system, and their inclusion would have led to unnaturally high findings. Additionally, we excluded the events generated on the day before each training session, as on those days the tagging system was adjusted to meet the tasks of the training; this excluded another 877 events (7% of the total). We used the remaining 3,311 events (26% of the total) for the Summative Evaluation. Table 5.3 provides an overview of all events during the evaluation period.
Table 5.3 All events during the evaluation period

Event | Events including training phase | Events without training phase and day before
AddConnectionCmd | 1487 | 408
AddOfficeDocumentTag | 71 | 46
AddPersonTag | 1243 | 401
AddTextCmd | 405 | 55
AddWebDocumentTag | 217 | 51
BrowseConcept | 2013 | 489
ChangeTextCmd | 31 | 17
CreateConceptCmd | 930 | 214
GetDocumentsSearchResult | 3951 | 979
RemoveConceptCmd | 228 | 32
RemoveConnectionCmd | 537 | 186
RemoveOfficeDocumentTag | 1 | 0
RemovePersonTag | 190 | 96
RemovePersonTagsByUser | 15 | 3
RemoveTextCmd | 30 | 1
RemoveWebDocumentTag | 37 | 20
SearchPersonsByNameAndTopic | 1234 | 313
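The exclusion of training-related events described above can be sketched as a simple date filter. The training dates below are hypothetical placeholders; only the rule itself (drop events on a training day or on the day before it) is taken from the text.

```python
from datetime import date, datetime, timedelta

# Hypothetical training dates for illustration; the real sessions ran
# from July 2011 to January 2012.
TRAINING_DAYS = {date(2011, 7, 8), date(2011, 7, 22)}
# A training day and the day before it are both excluded.
EXCLUDED_DAYS = TRAINING_DAYS | {d - timedelta(days=1) for d in TRAINING_DAYS}

def keep_for_evaluation(timestamps):
    """Keep only event timestamps that fall outside the excluded days."""
    return [ts for ts in timestamps if ts.date() not in EXCLUDED_DAYS]
```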
For the Summative Evaluation the users also compared different person profiles in terms of differences in knowledge maturity by answering a questionnaire (see also Appendix 12.5.2.3). The different person profiles represent different levels of maturity and formality. We have based our notion of the maturity of a person profile on two SMIs (see D2.3/3.3), which were slightly modified: the number of different topics assigned to a person (corresponding to GMIs I.3.6 and II.4.1) and how often a certain topic has been affirmed (SMI 4). We explain this below and illustrate it with Figure 5.1, Figure 5.2, Figure 5.3 and Figure 5.4. We selected four different profiles with the values ‘few’ and ‘many’ for each indicator. The assumption is that the more topics have been assigned to a person, and the more those topics have been affirmed by other users, the more mature the profile and the collective knowledge.
Profile B shows a very immature profile (see Figure 5.1): a single topic, assigned only once. Profile A might be seen as more mature, as many different users affirmed the topic (see Figure 5.2). However, only one topic is used to describe the person, so the profile might be incomplete.
Figure 5.1 Sample picture of person profile B
Figure 5.2 Sample picture of person profile A
In comparison, profile D shows a more diverse profile regarding the assigned topics (see Figure 5.3). However, each topic is assigned to the person only once, which means there is no widespread and consolidated view of the person. Profile C is assumed to represent a mature profile (see Figure 5.4): many different topics are assigned to the person, and several different users affirmed the topics, e.g. “3x First Aider” or “2x Newcastle College website” in Figure 5.4.
Figure 5.3 Sample picture of person profile D
Figure 5.4 Sample picture of person profile C
Table 5.4 Summary of different person profiles

 | Few topics | Many topics
Few affirmations | Profile B | Profile D
Many affirmations | Profile A | Profile C
Table 5.4 gives an overview of the different person profiles, which represent different levels of maturing and formality. The complete person profiles from the questionnaire, including name, tags assigned to the person, related documents and related people, can be found in Appendix 12.5.2.3. The selected person profiles are actual users who participated in this Summative Evaluation.
We also conducted a pre-analysis of the data to find out after what time period a person profile can be considered stable. This pre-analysis was undertaken in December 2011 by analysing the data and identifying stable periods for person profiles. It indicates that a two-week period of non-editing can be considered stable. We checked this assumption at the end of the evaluation period and observed the same result. Therefore, we define a stable period in our Summative Evaluation as a time period of two weeks or longer without editing.
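Under this definition, identifying the stable periods of a profile amounts to scanning for two-week gaps between consecutive edits. A minimal sketch, assuming sorted edit timestamps per profile; for simplicity it ignores any trailing period after the last edit.

```python
from datetime import datetime, timedelta

STABLE_GAP = timedelta(weeks=2)

def stable_periods(edit_times):
    """Return (start, end) pairs of gaps of at least two weeks between
    consecutive edits of one profile; edit_times must be sorted."""
    return [(a, b) for a, b in zip(edit_times, edit_times[1:])
            if b - a >= STABLE_GAP]
```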
Finally, we administered a questionnaire with 22 core questions and additional sub-questions, depending on the answers. We did not ask users for personal data, because we preferred anonymous answers to maximise the number of replies. In total, 27 staff completed the questionnaire. Due to this anonymity, their answers cannot be linked to their actual usage of the SOBOLEO people tagging system.
For the GMI evaluation, we needed to identify criteria that allowed respondents to assess the maturity of
the person profile. For this purpose, we have selected three criteria from the canonical set of data quality
criteria (cp. Wand & Wang 1996): accuracy, completeness, and usefulness (or “relevance”), which
correspond to the quality criteria for Knowledge Maturing (Braun & Schmidt, 2007).
The questionnaire consists of two parts. The first part shows four person profiles (see also Figure 5.1, Figure 5.2, Figure 5.3 and Figure 5.4), about which we asked several questions concerning the accuracy, completeness and usefulness of the profiles. The remaining questions cover the other aspects of the GMI evaluation. In the following sections we present first the results of the relevant SMIs for the first central question and then for the second central question. We discuss each central question and provide conclusions at the end of this section.
5.3.1 SMI 1: A person is annotated with additional tags at a later stage by the same user
This SMI refers to the situation whereby a person is annotated with additional tags at a later stage by the same user. We defined three different time periods as later stages: 5 minutes, 30 minutes and 24 hours. We collected data for 87 person profiles that met our criteria, including 26 cases with retagging. Retagging happened seven times between five minutes and 30 minutes after the tag was created. In three cases it occurred after 30 minutes but not later than 24 hours. Finally, in 16 cases the retagging took place after 24 hours. In total, approximately nine per cent of users retagged (26 out of 298 total users), whereas 29% of the users tagged others (87 out of 298 users).
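The assignment of retags to the three “later stage” windows can be sketched as follows; the function and its bucket labels are illustrative, not taken from the project code.

```python
from datetime import datetime, timedelta

def retag_bucket(first_tag_time, retag_time):
    """Classify a retag by the same user into the three 'later stage'
    windows used above; returns None for retags within five minutes."""
    delta = retag_time - first_tag_time
    if delta < timedelta(minutes=5):
        return None               # too soon to count as a later stage
    if delta <= timedelta(minutes=30):
        return "5min-30min"
    if delta <= timedelta(hours=24):
        return "30min-24h"
    return ">24h"
```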
We also analysed this SMI using a questionnaire, as we had a weak justification for the GMI in this instance and therefore needed to evaluate the GMI itself. We used the following questions from the questionnaire:
• Questions 7, 8, 10 and 11: To what extent do you agree with the statement “profile C/D represents this person accurately”?
• Questions 7, 8, 10 and 11: If you rated profile C/D as “slightly agree”, “agree” or “fully agree” for accuracy, please explain why in the box below.
Table 5.5 Summary of the answers for questions 7, 8, 10 and 11 (“To what extent do you agree with the statement …?”)

Question | Fully disagree | Disagree | Slightly disagree | No preference | Slightly agree | Agree | Fully agree | No answer
Q7: “profile C represents this person accurately” | 0 | 0 | 0 | 5 | 4 | 12 | 4 | 2
Q8: “profile C is complete” | 0 | 1 | 0 | 4 | 3 | 15 | 3 | 1
Q10: “profile D represents this person accurately” | 0 | 0 | 0 | 5 | 1 | 11 | 5 | 2
Q11: “profile D is complete” | 0 | 1 | 1 | 4 | 4 | 12 | 4 | 1
Table 5.5 shows the summary for questions 7, 8, 10 and 11. As we are only interested in the more mature person profiles, and usefulness is not in the scope of this SMI, only the answers on the accuracy and completeness of person profiles C and D are analysed. The results indicate that the users perceive both person profiles as accurate and complete. We also collected more detailed answers from the users who rated each person profile at least “slightly agree” for accuracy, completeness or usefulness (see Appendix 12.5.6.3).
In conclusion, we find strong evidence to support SMI 1. The log file data and the answers from the questionnaire (see Appendix 12.5.6.3) are also in favour of this assumption. One issue remains: one would still need to argue why (or who decides that) the profiles used for this evaluation are “mature” and hence can be used as a reference. We argue as follows. First, we took four different user profiles which we rated as being ‘differently mature’. We then asked the users whether they consider the profiles useful, complete, and so on. Only if the users agree do we claim that they support our assumption (that we have indeed supported Knowledge Maturing). Of course, someone could ask why we did not choose other profiles or other maturity criteria, but this is something we always have to deal with in research. In this case our assumption is as stated, and it is quite well supported by this section.
5.3.2 SMI 2: A topic tag is reused for annotation by the "inventor" of the topic tag
This SMI indicates whether a topic tag is reused for annotation by the creator of the topic tag, which shows that the tag moves from expressing ideas to an appropriation. Out of 243 topic tags, 153 have been reused exactly once by the inventor of the topic tag, with 25 topic tags reused more than once. The following list shows the different reuses:
• 65 topic tags have no reuse by the inventor
• 153 topic tags have one reuse by the inventor
• 12 topic tags have two reuses by the inventor
• Two topic tags have three reuses by the inventor
• Six topic tags have four reuses by the inventor
• One topic tag (“igen Assessment Centre”) has five reuses by the inventor
• One topic tag (“Benefits for Teenage Parents”) has six reuses by the inventor
• One topic tag (“Digital technology in guidance seminar”) has eight reuses by the inventor
• One topic tag (“Wakefield jobs”) has nine reuses by the inventor
The mean and median number of reuses per topic tag by its creator is one, with a standard deviation of 1.46. We can therefore claim support for SMI 2.
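Counting reuses by the inventor can be sketched from the chronological stream of (tag, user) annotation events. This is an illustrative reimplementation, assuming the first user to apply a tag counts as its inventor.

```python
from collections import Counter

def reuses_by_inventor(tag_events):
    """tag_events: chronological (tag, user) annotation events. The first
    user of a tag is taken as its inventor; each later use of that tag by
    the same user counts as one reuse."""
    inventor = {}
    reuses = Counter()
    for tag, user in tag_events:
        if tag not in inventor:
            inventor[tag] = user          # first use defines the inventor
        elif inventor[tag] == user:
            reuses[tag] += 1              # later use by the inventor
    return reuses
```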
5.3.3 SMI 3: Topic tags are reused in the community
This SMI demonstrates whether topic tags are reused in the community. We therefore counted every add, remove, browse, edit and search event. In total we have 528 topic tags with a mean of five events per tag, a standard deviation of 17.58 events and a median of 3. The following list shows the topic tags with between 34 and 10 events:
• GRT Issues
• Leeds Connexions Targeted Support Staff list
• High Schools
• South Leeds
• Teenage Parents and Teenage Pregnancy
• Vacancies
• Labour Market Information
• Information Technology
• Higher Education
• Connexions
• Young People's Issues
• LDD
• Job Search
• Adult Careers Services
• Information For Parents
• Leeds
• CV information
• Career Theories
• CEIAG
• Wakefield jobs
• LLDD Placement Panel
• Low self-esteem
• Apprenticeships in Northumberland
• Mental health
• Online & Development Team
• Adult Careers Advice
• Administration
• Langley Furniture Works
• Barndale School
• Digital technology in guidance seminar
• Benefits for Teenage Parents
• Northumbria University Law School July 2011
• Education
• Foundation Learning Leeds Learning Links
• Leeds Drop-ins
• Northumberland College
• Teenage Pregnancy and Young mums
• IT Support
• Hexham
• Lmi
• Do It Volunteering web site
• Benefit
• Student chat re HE and academic issues
• Tynemet
• The Grove Special School
We refine these results by examining only events made by users other than the creator of the topic tag. This leaves 209 topic tags with a mean of two events per tag, a standard deviation of 3.72 events and a median of 1. The following list shows the topic tags with between 31 and 9 such events:
• High Schools
• Vacancies
• Career Theories
• CEIAG
• Labour Market Information
• Leeds Connexions Targeted Support Staff list
• Information Technology
• Lmi
For easier comparison, we have marked the topic tags from the second list in bold, italic font in the first list. This includes the tag “Lmi” as well as “Labour Market Information”, which represent the same concept; indeed, both were merged into one concept later in the evaluation. However, here we consider any topic tag that was reused within the evaluation period as independent, even if it was deleted or merged later on. Furthermore, it is notable that every topic tag in this list is on the highest, i.e. most abstract, level of the vocabulary. This confirms the assumption that more general topics are reused more often than more specialised ones. We can conclude that we have found support for SMI 3.
5.3.4 SMI 6: Topic tags are further developed towards concepts, e.g. adding synonyms or descriptions
SMI 6 concerns the development of topic tags, e.g. their enrichment with added synonyms or descriptions. We counted the total number of the following events: add relation, remove relation, add label, remove label, change label, add description, remove description and change description. In total there were 372 topic tags for analysis. The mean number of these events per topic tag is 3.33, with a standard deviation of 17.20 and a median of 2 events. The distribution is this extreme because 332 of the 1,240 events for this SMI have to be attributed to the “latest topics” section, which is not relevant for this SMI. The “latest topics” section is the place where new tags are added. If a tag was edited, the most frequent event was, almost always, the adding of a description, and this happened at a very early stage of the topic tag's maturing. Six topic tags generated ten or more events:
• “Education” (15 events)
• “Leeds Connexions Targeted Support Staff list” (14 events)
• “Information Technology” (12 events)
• “Young Peoples Issues” (11 events)
• “Online & Development Team” (10 events)
• “Connexions” (10 events)
We can therefore conclude that there is empirical support for SMI 6.
5.3.5 SMI 7: A topic tag moved from the "prototypical concept" category to a specific place in the ontology
This SMI examines whether a topic tag was moved from the “prototypical concept” category, which is the same as the “latest topics” concept, to a specific place in the ontology. That is, we want to observe whether tags changed their position within the structure of the ontology, e.g. whether a tag was moved from “latest topics” to another section of the Topic List. We also need to validate a GMI because of its low justification (see also Appendix 12.5.1), with the following questions in the questionnaire:
• Question 16: If a tag is moved from “latest topics” to another section of the Topic List, then there is a better understanding of the tag.
• Question 17: If a tag is moved from “latest topic” to another section of the Topic List, then I can retrieve the tag more easily.
• Question 18: When a tag is moved from “latest topic” to another section of the Topic List, then the tag is less ambiguous.
Out of 684 topic tags, 680 were added once to the “latest topics” concept, three were added twice and one was added four times. More interestingly, 292 topic tags (43%) were not moved to another location, whilst 329 (48%) were moved once, 32 (5%) were moved twice and 30 (4%) were moved three times. Finally, one topic tag was moved four times to a specific place in the ontology. This means that for approximately 10% of the ontology elements we achieved multiple relationships. Two topic tags were not explicit enough and were returned to the “latest topics” section; another two were deleted without any movement in the structure. Again, it is necessary to highlight that we did not include the activities during the training phases. Looking at the final state of the ontology, it comprises 693 topic tags in total, of which 163 (24%) did not move from the “latest topics” category. This means that a lot of moving to specific places in the ontology took place during the training.
Table 5.6 Summary of the answers for questions 16, 17 and 18

Question | Fully disagree | Disagree | Slightly disagree | No preference | Slightly agree | Agree | Fully agree | No answer
Q16: If a tag is moved from “latest topics” to another section of the Topic List, then there is a better understanding of the tag. | 0 | 2 | 1 | 11 | 5 | 4 | 2 | 2
Q17: If a tag is moved from “latest topic” to another section of the Topic List, then I can retrieve the tag more easily. | 0 | 1 | 5 | 9 | 4 | 4 | 1 | 3
Q18: When a tag is moved from “latest topic” to another section of the Topic List, then the tag is less ambiguous. | 0 | 1 | 3 | 12 | 1 | 6 | 2 | 2
Table 5.6 shows the results from the questionnaire. The modal answer for all three questions is “no preference”, but overall the answers indicate that if a tag is moved away from the “latest topics” section, there is a slightly better understanding, easier retrieval and less ambiguity for the tag. This gives some support to the evaluation of SMI 7.
5.3.6 SMI 8: The whole ontology is edited intensively in a short period of time, i.e. gardening activity takes place
This SMI looks for ontology editing events; in particular, we examine each ontology element to determine whether there have been at least five events within a period of less than two weeks. We also validate the GMI I.2.3.6, “An artefact is edited intensively within a short period of time”, with questionnaire questions 21 and 22. The questions are the following:
• Question 21: Have you done any gardening/editing activities?
  o If yes, please give some examples.
  o If yes, please state in which situation you did gardening/editing:
    - planned session in a group
    - unplanned session in a group
    - planned session on your own
    - unplanned session on your own
    - when you saw obvious errors or disorder
    - when you followed recommendations
    - other (please write down)
  o If no, please give a short explanation as to why you have not done any gardening/editing.
• Question 22: What triggers you most to use SOBOLEO for tagging? Please give an example and a reason.
We observed 11 time periods, summarised in Table 5.7.
Table 5.7 Periods with at least five operations within two weeks

Begin of period | End of period | Number of operations (more than five in less than two weeks)
Thu Jul 07 15:58:44 CEST 2011 | Fri Jul 15 11:24:28 CEST 2011 | 51
Fri Jul 22 11:33:32 CEST 2011 | Thu Aug 04 17:11:55 CEST 2011 | 360
Fri Aug 05 12:42:01 CEST 2011 | Tue Aug 16 17:21:31 CEST 2011 | 86
Tue Aug 23 12:56:32 CEST 2011 | Tue Aug 30 16:59:15 CEST 2011 | 98
Mon Oct 31 14:58:26 CET 2011 | Mon Oct 31 15:27:53 CET 2011 | 5
Mon Nov 14 17:00:20 CET 2011 | Wed Nov 16 13:11:01 CET 2011 | 5
Thu Dec 01 11:46:12 CET 2011 | Wed Dec 14 17:44:07 CET 2011 | 32
Thu Dec 15 17:20:55 CET 2011 | Thu Dec 15 17:24:14 CET 2011 | 6
Mon Jan 16 10:58:49 CET 2012 | Wed Jan 25 16:34:46 CET 2012 | 83
Tue Jan 31 13:39:01 CET 2012 | Mon Feb 13 15:12:30 CET 2012 | 75
Tue Feb 14 14:29:34 CET 2012 | Tue Feb 28 13:52:11 CET 2012 | 75
Total number: | | 876
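The criterion behind Table 5.7 — at least five operations within less than two weeks — can be approximated with a simple scan over sorted timestamps. This is a simplified sketch (a greedy run-based grouping), not necessarily the exact period-construction rule used for the table.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(weeks=2)
MIN_OPS = 5

def gardening_periods(times):
    """times: sorted editing timestamps. Collect maximal runs whose events
    all lie within two weeks of the run's first event, and report runs
    with at least five operations as (start, end, count)."""
    periods = []
    i = 0
    while i < len(times):
        j = i
        # extend the run while the next event stays within the window
        while j + 1 < len(times) and times[j + 1] - times[i] <= WINDOW:
            j += 1
        if j - i + 1 >= MIN_OPS:
            periods.append((times[i], times[j], j - i + 1))
        i = j + 1
    return periods
```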
In Table 5.8 we show the summary for question 21. Most gardening activities took place in a planned
group session or when users saw an obvious error.
Table 5.8 Summary of the answers for question 21

Q21: Please state in which situation you did gardening/editing.
planned session in a group | unplanned session in a group | planned session on your own | unplanned session on your own | when you saw obvious errors or disorder | when you followed recommendations | other
7 | 1 | 4 | 6 | 6 | 4 | 1
Table 5.9 provides a detailed summary for question 21, focusing on three different clusters of answers to this question.
Table 5.9 Detailed summary of the answers for question 21

[Respondent-by-situation matrix: rows for the 17 respondents, columns for the situation categories of Table 5.8, plus a cluster assignment per respondent. The cell-level layout of this table did not survive transcription.]
The three clusters can be defined in the following manner:
• Cluster 1, marked with a bold “X”, includes numbers 1, 5, 8, 16 and 17. It comprises planned group sessions and unplanned individual ontology gardening sessions.
• Cluster 2, marked with a bold, italic “X”, comprises cases 2, 3 and 4. This cluster is made up of individuals who corrected the ontology when they saw an obvious error.
• Cluster 3, marked with an italic “X”, comprises cases 9, 10, 14 and 15. This group followed the recommendations of the system.
We also collected 13 examples from the 17 users who answered question 21 positively; three users did not answer at all and seven answered negatively. We also collected seven explanations for the negative answers to question 21. The answers can be viewed in Appendix 12.5.6.5.
For question 22 we collected 21 answers; six individuals did not answer the question. Most respondents named sharing information with others, and making others aware of it, as their trigger for tagging. One respondent also voiced concerns about tagging others, which is in line with the formative evaluation results in D6.2. The full list of answers can be found in Appendix 12.5.6.4.
We conclude that there was more ontology editing activity especially at the beginning of the evaluation. Later, the users found a more structured environment and the need for gardening activities was less pressing. The training phase had a considerable impact on the evaluation of this SMI, due to the intense activity in the training sessions. Additionally, the recommendations managed to motivate a group of persons who were not motivated by any other measures we had taken. The GMI evaluation is supported by the answers to the questionnaire.
5.3.7 SMI 9: An ontology element has not been changed for a long time after a period of intensive editing
Here we want to know whether an ontology element remained unchanged for a long time after a period of intensive editing. We did not observe many stable periods. Of the 146 ontology elements with stable periods, 16 have two stable periods and three have three stable periods. The total number of events before the first stable period is 78, which gives a mean of less than one event across the 146 ontology elements. We observed little activity for most tags, and therefore we do not have support for SMI 9.
5.3.8 SMI 10: A person profile is often modified and then stable
With this SMI we are interested in whether a person profile is first modified often and then remains stable for at least two weeks. To qualify, a person profile needs at least two add or remove tag assignment events. In total we gathered 55 person profiles which meet our criteria, of which at least 44 have one stable period, five have two stable periods and two have three stable periods. The following list shows the person profiles with two or more stable periods:
• Noel Keigly – three stable periods with nine events in total
• Brenny Matterson – three stable periods with 15 events in total (see also Figure 5.5 and Figure 5.6)
• Zoe Olden – two stable periods with four events in total
• Stephen Edminson – two stable periods with four events in total
• Isabel Taylor – two stable periods with 16 events in total, 11 before the first stable period
• Andrew Oliver – two stable periods with five events in total
• Emma Carlin-Marshal – two stable periods with two events in total
Figure 5.5 Short person profile of Brenny Matterson
Figure 5.6 Extended version of the person profile of Brenny Matterson
We also examined the number of events before a first stable period. The mean is 2.80 events, with a standard deviation of 10.87. The standard deviation is high because 30 person profiles have zero events and one person profile has 77 events. This extreme person profile belongs to Charles Birch, who interestingly was only stable once according to our criteria. We will return to this person profile for SMI 4 (see also Figure 5.9 and Figure 5.10). Because we found person profiles that were often modified, we have support for SMI 10.
5.3.9 SMI 11: An individual changed his or her degree of networkedness
SMI 11 also has a weak justification, and therefore we again need to evaluate the GMI itself, in this case GMI IV.2.1. With this SMI we want to find out whether an individual changed his or her degree of ‘networkedness’.
The following questions were asked in the questionnaire to validate the GMI:
• Question 19: Do you think using SOBOLEO has helped you to increase the number of colleagues in your professional network?
• Question 20: Have you built up more relevant contacts for your work practice by using SOBOLEO, the people tagging tool?
Table 5.10 Summary of the answers for questions 19 and 20

Question | Fully disagree | Disagree | Slightly disagree | No preference | Slightly agree | Agree | Fully agree | No answer
Q19: Do you think using SOBOLEO has helped you to increase the number of colleagues in your professional network? | 3 | 6 | 4 | 6 | 3 | 3 | 1 | 1
Q20: Have you built up more relevant contacts for your work practice by using SOBOLEO, the people tagging tool? | 3 | 8 | 4 | 5 | 4 | 3 | 0 | 0
Both questions lead to the conclusion that SOBOLEO did not help to increase the number of colleagues in the professional network. The participants also state that they did not acquire more relevant contacts for their work practice with the help of SOBOLEO. Summing up, we do not find support for SMI 11 from the questionnaire.
5.3.10 SMI 4: A person is (several times) tagged with a certain concept
SMI 4 sheds light on whether a person is tagged several times with a certain concept. We therefore analyse the number of assigned tags where tags were assigned more than once, as well as the total number of assigned tags. Additionally, we needed to evaluate the GMI itself because of its weak justification. The questions asked were the following:
• Questions 1 – 12: To what extent do you agree with the statement “profile A/B/C/D represents this person accurately”?
• Questions 1 – 12: If you rated profile A/B/C/D as “slightly agree”, “agree” or “fully agree” for accuracy, please explain why in the box below.
We collected data for 87 users, with a mean of 3.23 tags per user, a standard deviation of 2.44 and a median of 2 tags. The following list summarises the results:
• 22 users have one tag
• 23 users have two tags
• 14 users have three tags
• Nine users have four tags
• Five users have five tags
• Six users have six tags
• One user has seven tags
• One user has eight tags
• Two users have nine tags
• Three users have 10 tags
• One user has 11 tags
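The summary statistics quoted above can be reproduced from this distribution (using the sample standard deviation):

```python
from statistics import mean, median, stdev

# Tags-per-user distribution taken from the list above.
distribution = {1: 22, 2: 23, 3: 14, 4: 9, 5: 5, 6: 6,
                7: 1, 8: 1, 9: 2, 10: 3, 11: 1}
# Expand into one value per user.
counts = [tags for tags, users in distribution.items() for _ in range(users)]

print(len(counts))               # 87 users
print(round(mean(counts), 2))    # 3.23
print(median(counts))            # 2
print(round(stdev(counts), 2))   # 2.44
```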
Figure 5.7 Extended version of the person profile of Amy Clelland
Figure 5.8 Short version of the person profile of Amy Clelland
We analysed the top four users and the user with the most tag confirmations in detail. The following list shows their tag assignment numbers (note that the real numbers may be higher because we excluded the training phases from our analysis):
• Isabel Taylor has 10 tags in total and confirmations for two of her tags
• Ian McIntosh has 11 tags
• Charles Birch has 10 tags (see also Figure 5.9 and Figure 5.10)
• Michelle Bronson has 10 tags and a confirmation for one of her tags
• Amy Clelland has three tag confirmations with four tags in total (see also Figure 5.7 and Figure 5.8)
Figure 5.9 Extended version of the person profile of Charles Birch
Figure 5.10 Short version of the person profile of Charles Birch
For the evaluation of the GMI, we used a questionnaire with 12 questions. We have made the assumption that person profile A shows the lowest maturing and person profile C the highest. Table 5.11 shows the results for these 12 questions.
Table 5.11 Summary of the answers for questions 1 – 12 (“To what extent do you agree with the statement …?”)

Question | Fully disagree | Disagree | Slightly disagree | No preference | Slightly agree | Agree | Fully agree | No answer
Q1: “profile A represents this person accurately” | 3 | 1 | 5 | 8 | 2 | 6 | 0 | 2
Q2: “profile A is complete” | 3 | 10 | 5 | 4 | 2 | 2 | 0 | 1
Q3: “profile A is useful” | 1 | 8 | 2 | 5 | 5 | 5 | 0 | 1
Q4: “profile B represents this person accurately” | 1 | 3 | 3 | 13 | 2 | 3 | 0 | 2
Q5: “profile B is complete” | 2 | 10 | 5 | 5 | 1 | 3 | 0 | 1
Q6: “profile B is useful” | 1 | 3 | 9 | 5 | 4 | 4 | 0 | 1
Q7: “profile C represents this person accurately” | 0 | 0 | 0 | 5 | 4 | 12 | 4 | 2
Q8: “profile C is complete” | 0 | 1 | 0 | 4 | 3 | 15 | 3 | 1
Q9: “profile C is useful” | 0 | 1 | 0 | 4 | 1 | 14 | 6 | 1
Q10: “profile D represents this person accurately” | 0 | 0 | 0 | 5 | 1 | 11 | 5 | 2
Q11: “profile D is complete” | 0 | 1 | 1 | 4 | 4 | 12 | 4 | 1
Q12: “profile D is useful” | 0 | 1 | 0 | 4 | 3 | 14 | 3 | 2
It is obvious that person profile A is the least mature. Person profile B is perceived as slightly more mature. Person profiles C and D are experienced as more mature, with a small advantage for person profile C. As before for the SMIs in section 5.1.1, our assumption is confirmed and the SMI is supported. We collected more detailed answers from the users who rated each person profile at least as "slightly agree" for accuracy, completeness or usefulness; these answers support our findings for SMI 4 (see Appendix 12.5.6.6).
We also observed that confirmations for tags were almost never used. The mean number of tags per user is three; the tagged users represent 30% of all users in the system (298 users) and more than 40% of the users who participated in the training phase (212 users). We managed to show person profile maturing for four different person profiles and can therefore support this SMI. Additionally, the results of the questionnaire support the GMI evaluation.
5.3.11 SMI 5: A person is tagged by many different users
SMI 5 covers the question of if a person is tagged by many different users. We analysed the data with
respect to the occurrence of at least two different taggers for one person. In total we found that 50 users
met our criteria and in total 141 different taggers have been observed. One tag was deleted again and 34
tags have been given by the users to their own person profile. The mean number of different taggers per
person with at least two different taggers is 2.82; the median is at 2.50 and the standard deviation exactly
1. The following list shows a summary of the distribution of the different taggers per user:
• 25 users have two different taggers
• 13 users have three different taggers
• Nine users have four different taggers
• Two users have five different taggers
• One user has six different taggers
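These figures can be reproduced from the distribution above; a short sketch (ours, not part of the evaluation) shows that the reported standard deviation of 1 corresponds to the sample standard deviation, rounded:

```python
from statistics import mean, median, stdev

# (different taggers per person, number of persons), from the list above
distribution = [(2, 25), (3, 13), (4, 9), (5, 2), (6, 1)]
taggers = [t for t, n in distribution for _ in range(n)]

print(len(taggers))              # 50 persons with at least two taggers
print(sum(taggers))              # 141 different taggers in total
print(mean(taggers))             # 2.82
print(median(taggers))           # 2.5
print(round(stdev(taggers), 2))  # 1.0 (sample standard deviation)
```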
The analysis of the three users with five and six different taggers did not lead to further insight. Self-tagging does not play a big role for SMI 5.
Additionally, we need to evaluate one GMI again, because we have a weak justification. The questions taken from the questionnaire are the following:
• Question 13: I consider a person profile more accurate if many different people have tagged it.
• Question 14: I consider a person profile more complete if many different people have tagged it.
• Question 15: I consider a person profile more useful if many different people have tagged it.
Table 5.12 shows a quick summary of the questions.
Table 5.12 Summary of the answers for questions 13, 14 and 15

Question | Fully disagree | Disagree | Slightly disagree | No preference | Slightly agree | Agree | Fully agree | No answer
Q13: I consider a person profile more accurate if many different people have tagged it. | 1 | 1 | 1 | 7 | 4 | 11 | 1 | 1
Q14: I consider a person profile more complete if many different people have tagged it. | 1 | 1 | 2 | 7 | 6 | 8 | 1 | 1
Q15: I consider a person profile more useful if many different people have tagged it. | 1 | 0 | 1 | 6 | 8 | 10 | 1 | 0
In terms of Knowledge Maturing, we find strong support for the hypothesis that users consider a person profile more accurate, complete and useful if many different people have tagged it. We can therefore claim support for this SMI.
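The "strong support" reading can be made concrete by computing the share of respondents on the agree side (slightly agree to fully agree) for each question. The counts are copied from Table 5.12, with "no answer" excluded; the aggregation itself is our sketch.

```python
# Rating counts from "fully disagree" to "fully agree" ("no answer" omitted)
rows = {
    "Q13 (accurate)": [1, 1, 1, 7, 4, 11, 1],
    "Q14 (complete)": [1, 1, 2, 7, 6, 8, 1],
    "Q15 (useful)":   [1, 0, 1, 6, 8, 10, 1],
}

# (agreeing respondents, total respondents) per question
shares = {q: (sum(c[4:]), sum(c)) for q, c in rows.items()}
for q, (agree, total) in shares.items():
    print(f"{q}: {agree}/{total} agree")
```

On this reading, a clear majority agrees with each statement, while only a handful of respondents disagree.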
5.3.12 Discussion and Implications
Table 5.13 shows the summary of the support for the different SMIs, and Table 5.14 depicts the results of the GMI evaluation. In general terms, we managed to find support for the first central question with seven SMIs, while one SMI (SMI 9) was unsupported. Additionally, the second central question gets support from two SMIs and no support from one SMI (SMI 11).
Table 5.13 Support for the different SMIs

SMI | Description | Support
SMI 1 | A person is annotated with additional tags at a later stage by the same user | Supported
SMI 2 | A topic tag is reused for annotation by the "inventor" of the topic tag | Supported
SMI 3 | Topic tags are reused in the community | Supported
SMI 4 | A person is (several times) tagged with a certain concept | Supported
SMI 5 | A person is tagged by many different users | Supported
SMI 6 | Topic tags are further developed towards concepts, e.g. adding synonyms or description | Supported
SMI 7 | A topic tag moved from the "prototypical concept" category to a specific place in the ontology | Supported
SMI 8 | The whole ontology is edited intensively in a short period of time, i.e. gardening activity takes place | Supported
SMI 9 | An ontology element has not been changed for a long time after a period of intensive editing | Not supported
SMI 10 | A person profile is often modified and then stable | Supported
SMI 11 | An individual changed its degree of networkedness | No support from two questions of the questionnaire
Table 5.14 Summary of the GMI evaluation

GMI | Description | Evaluation | Related SMI
II.4.1 | An individual has been rated with respect to expertise | Successful | SMI 1
I.1.1.3 | An artefact has changed its degree (score) of formality | Successful | SMI 4 and SMI 7
I.2.3.6 | An artefact is edited intensively within a short period of time | Successful | SMI 8
I.2.1.3 | An artefact has been created/edited/co-developed by a diverse group | Successful | SMI 5
IV.2.1 | An individual changed its degree of networkedness | Not successful | SMI 11
Additionally, we observed for SMI 6 that adding a description is something that happens in an earlier phase of the KMM. We need to bear in mind that this could also be attributed to changes in the structure during the training phase, although we do not have any evidence for this limitation. Moreover, for SMI 4 the user interface seems to be very important, and a different UI might help to increase the number of tag confirmations. Finally, we found support for the different types of person profile maturity, which can also be seen in SMI 4.
In general, all Indicators were assessed at one point in time. The concept of Maturing could probably have been evaluated better if these assessments had taken place at several distinct points in time or over timespans (as in section 5.3.6). Hence, it is hard to argue for (support of) maturing in general. Furthermore, the level of detail in presenting the results of section 5.3, and how the researchers arrived at them, is quite heterogeneous.
5.3.12.1 Implications for Future Developments
Further developments and investigation will be needed to better understand the formalization process and what support could additionally be provided, for instance with tools similar to gardening recommendations. We also found support for the potential of gardening recommendations: e.g. if a tag is moved away from the latest topics section, then there is a better understanding of the tag. Further development and research is needed on different types of recommendations and their impact, and on which recommendations work well in which situations. We may support reseeding and gardening by identifying inconsistencies, redundancies or gaps and providing suggestions, better recommendations and support for merging concepts. Methods to extract semantics from folksonomies may be used to provide recommendations for the enrichment of the ontology. An overview of such methods is given in Braun (2011). In order to move from lightweight to heavyweight ontologies, we may use algorithms as presented by Lacasta et al. (2010, pp. 99) that help, for instance, to identify a broader/narrower relationship between two concepts as an is-a relationship.
The aspects of "redundancy" and obsolete tags, how to support their cleanup, and thus when Knowledge Maturing decreases again, have not yet been investigated further (see also the study of tag redundancy in D4.4).
Research about the degree of networkedness was not successful. We might need a longer period of investigation and additional support, e.g. visualizations that show people-topic connections, to draw more conclusions. Motivational aspects such as feedback mechanisms to support participation could also help boost usage of the system and thereby give a clearer picture of the networkedness of the users (see D2.2/D3.2).
5.3.12.2 Implications for Knowledge Maturing
The main driver for this evaluation has been the validation of the key assumptions underlying our ontology maturing model (as described in D1.1), which is a specialization of the Knowledge Maturing phase model to knowledge about how to describe expertise and know-who, and of the SOBOLEO tool, which implements the support suggested by the ontology maturing model. The key idea is that instead of expert groups specifying the vocabulary and updating it periodically over longer timeframes, every user of the system continuously contributes to the evolution of the vocabulary by adding new tags, reusing them, and consolidating them towards a shared vocabulary. The same applies not only to the vocabulary, but also to person profiles, which are incrementally built by the users of the system. The success of such an evolutionary development within everyday work processes depends on key assumptions that have been evaluated as part of the Summative Evaluation (see also Figure 5.11):
• Phase Ia to Ib. For the transition from phase Ia to Ib, individuals need to reuse their own tags so that they are not one-time keywords, but rather expressions of the individual's vocabulary. SMI2 data has shown that individuals use their own tags at a later stage. Likewise, person profiles are also not constructed in a single step, but refined by the users of the system (SMI1).
• Phase Ib-II. Crucial for entering the community consolidation phase is the take-up by the community, which manifests in the reuse of tags by others. This has been observed with SMI3.
• Phase II and II-III (vocabulary). The collaborative consolidation and formalization depends on sufficient user activities in terms of adding additional details to tags such as descriptions or synonyms (SMI6), moving tags from the unsorted "latest topics" section to the hierarchy (SMI7), and gardening activities (SMI8), all of which have been observed in the evaluation. Convergence could also be observed because of the stability periods in the analysis of SMI8. We have further gained additional insights, such as:
o Adding a description could be observed much more frequently than adding synonyms or mistyped spellings. Placement in the hierarchy, on the other hand, has often been performed early on, so that we can conclude that the transition between phases II and III is much more fluid than originally expected. All of the ontology editing operations could be observed without a particular order, e.g. first adding a synonym, then adding to the hierarchy and adding a description. This means that there seems to be no perceived difference in formality level between the lexical relations of the SKOS format (Simple Knowledge Organization System; Miles & Bechhofer, 2009), comprising a description, alternative labels, and hidden labels, and the concept relations (broader and narrower).
o As part of the cluster analysis of SMI8, recommendations have been found to be used by users who are not involved in gardening on other occasions, so that we can conclude that recommendations for gardening reach beyond the typical "gardeners".
o An increase in maturity should correlate with higher levels of stability. Such stable periods could not be sufficiently observed in the evaluation as part of SMI10. This might be related to the exclusion of training-related time periods, but in any case needs further investigation.
[Footnote 7: As explained in D6.3, the people tagging tool focuses on the early phases of the Knowledge Maturing and ontology maturing process, as there are many ontology modeling tools available for the later phases (for dedicated experts), but broad participation in the early phases. Therefore our evaluation has also focused on the early phases, and our insights are restricted to those phases.]
• Phase II and II-III (person profiles). As with the vocabulary, the evolution of person profiles requires a sufficient level of activity with respect to the affirmation of existing tags (SMI4) by a diverse group of individuals (SMI5).
o While there has been sufficient support for those SMIs, the affirmation of tags has remained below expectation. This can be traced back to UI decisions made as part of the design framework introduced in D2.2/3.2. Tags by others are shown to the users, so that they probably felt no need to add or affirm the tag anymore. For explicit affirmation, it would have been beneficial to hide the judgments of others, which in turn would have had other unintended effects. This confirms the design trade-offs within the design framework: each context-specific configuration has to make its own decisions as to which combination of design aspects makes most sense for a particular organizational context.
o In contrast to the ontology, stability of person profiles could be observed, so there seems to be sufficient evidence for the assumption that there actually is consolidation which leads to agreement.
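The SKOS distinction discussed above, between lexical relations and concept relations, can be sketched as a minimal concept record. The field names follow SKOS (Miles & Bechhofer, 2009); the example labels and the dictionary layout are purely illustrative and are not SOBOLEO's actual data model.

```python
# A minimal SKOS-style concept. Lexical relations describe the concept's
# labels and description; the concept relation ("broader") places it in
# the hierarchy. The example values are hypothetical.
concept = {
    "prefLabel": "careers guidance",                        # preferred label
    "altLabel": ["career advice"],                          # synonym
    "hiddenLabel": ["carers guidance"],                     # mistyped spelling
    "definition": "Support for career-related decisions.",  # description
    "broader": ["guidance services"],                       # hierarchy placement
}

# Per the observation above, these fields were filled in no particular
# order in practice, e.g. "broader" sometimes before any synonym.
lexical = {"prefLabel", "altLabel", "hiddenLabel", "definition"}
hierarchical = {"broader"}
print(lexical | hierarchical == set(concept))  # True
```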
Overall, the key assumptions of the ontology maturing model (which is based on the Knowledge
Maturing Phase Model) have been confirmed. Further investigation will be needed to better understand
the formalization process and what support could be additionally provided.
Figure 5.11 Ontology maturing model annotated with evidence from the Summative Evaluation

5.4 Conclusions
As part of the Summative Evaluation, we have been able to observe the usage of the system as part of everyday practice. We have also successfully been able to introduce the tool to a significantly larger user base than originally planned. This has yielded evidence about user acceptance and usefulness. Within the evaluation we have been able to observe Knowledge Maturing over the period of use. Furthermore, by evaluating additional specific Knowledge Maturing Indicators, we have also been able to collect evidence that the key assumptions of the ontology maturing model, as the specialization of the Knowledge Maturing phase model to the maturing of vocabulary and know-who, hold. On the other hand, the evaluation had some constraints and external factors that might have affected some of the results:
• The revised planning of the evaluation towards upscaling has led to a phased introduction of the tool, so that users entered the system at different stages and thus also had different periods of usage, which is often the case in real world scenarios.
• The methodological decision to exclude training events and their preparation has excluded a large quantity of events which were particularly related to vocabulary development.
• The organization experienced severe economic pressures that arose during the evaluation period, which might have affected attitudes towards the system – although we have found no direct evidence of such an influence in the responses of the users.
Still, as discussed in the previous section, overall the evaluation at igen has yielded evidence that the key
assumptions behind the concept of ontology maturing and people tagging hold. Furthermore, we could
also collect evidence for justifying four GMIs which arose from the design activities and have not been
validated as part of the representative study. These were:
• An individual has been rated with respect to expertise
• An artefact has changed its degree (score) of formality
• An artefact is edited intensively within a short period of time
• An artefact has been created/edited/co-developed by a diverse group
Further, we can say that retagging by the same user takes place either directly (within 5 minutes) or on another day (more than 24 hours later). The latter is important for keeping the data up-to-date, and this confirmed our initial assumption that up-to-dateness is important. Unfortunately, we could not observe sufficient support for "an individual has changed its degree of networkedness" as a wider impact of people tagging on the social network of individuals. The situation at Connexions Northumberland did not facilitate making new contacts, because of severe economic pressure, and parts of the company closed operations. This might have led to a decreased motivation to get in contact with new colleagues. For the same reason, we could not conduct further interviews with the employees to find possible causal explanations for the non-support of the networkedness SMI.
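The two observed retagging patterns can be expressed as a simple classifier over the time gap between two tag events by the same user. The thresholds mirror the finding above; the function and its field layout are our illustration, not the actual MATURE log analysis.

```python
from datetime import datetime, timedelta

def retag_pattern(first: datetime, second: datetime) -> str:
    """Classify the gap between two tag events by the same user."""
    gap = second - first
    if gap <= timedelta(minutes=5):
        return "direct"        # retagged within 5 minutes
    if gap > timedelta(hours=24):
        return "another day"   # more than 24 hours later
    return "in between"        # rarely observed per the finding above

print(retag_pattern(datetime(2011, 5, 2, 9, 0), datetime(2011, 5, 2, 9, 3)))
print(retag_pattern(datetime(2011, 5, 2, 9, 0), datetime(2011, 5, 4, 9, 0)))
```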
Besides individual aspects like feedback, support and value for the work, we need to improve the feeling of networkedness by trying to offer space for reflection. Although this seems to have worked for most of our SMIs, the severe economic pressure unfortunately did not allow the employees at Connexions Northumberland to relate sufficiently to the company.
6 Summative Evaluation at Connexions Kent (Study 3)
6.1 Introduction
The Summative Evaluation of the Connexions Kent Instantiation took place with careers advisers from
Connexions Kent. The objective of the software developed for the Labour Market Information (LMI) case
(Connexions Kent Instantiation) is to actively support social learning in a distributed setting with a focus
on maturing knowledge that is represented in documents (content). It was designed for, and tested in, the
application context of career services. The aim of the software is to support knowledge workers in sharing
their knowledge and experience and to foster informal, work-integrated learning when dealing with
rapidly changing information, as for example LMI in the context of career services. Therefore, the
Connexions Kent Instantiation provides a system supporting a community-driven quality assurance
process in combination with social learning. This quality assurance process relates to both the personal
need for an adviser to find appropriate and up-to-date information as fast as possible for the current work
context and to the organisational need of achieving a coherent and high quality organisational identity in
the development of knowledge artefacts.
The Connexions Kent Instantiation consists of a desktop client that integrates five different so-called 'widgets' and a Firefox browser plugin called MatureFox. The desktop widgets and MatureFox use the same data, stored in a Social Semantic Server; that is, different users using both the desktop widgets and MatureFox can share the information which they have on their desktops and which they find when browsing the web. The following widgets are part of the Connexions Kent Instantiation that was tested in the formative evaluation:
• A Collection Widget allows users to 'collect' information both from their desktop and from the web in so-called collections that can be shared with other users, and to subscribe to the collections of other users.
• In the Tagging Widget, users can add private and public keywords (tags) to resources in order to find these resources more easily.
• A Tag Cloud Widget provides an overview of all tags that have been used in the system by all users.
• In the Search Widget, the user can insert a keyword (search term) either manually or select one from the list of tags that are displayed in a sidebar of the Search Widget.
• In the Gardening Widget (Taxonomy Editor), existing tags can be organised into hierarchies, to 'tidy up' the user-generated tags.
• The widgets are interlinked: from the Collection Widget, the users can start the Tagging Widget, to annotate elements in the collection with freely chosen keywords (tags). From the Tagging Widget, a search can be triggered that returns all resources that have been tagged with the selected tag.
• The most interesting widget in terms of Knowledge Maturing is the Search Widget. Search keywords in the sidebar can be organised into hierarchies using the Gardening Widget (Taxonomy Editor). In addition to user-generated tags, the Search Widget also displays automatically generated tags, i.e. tags that refer to the 'maturity' of resources and that are automatically computed based on the list of Knowledge Maturing Indicators/Events (e.g., 'has not been added to a collection', 'has been tagged by many users', etc.). These automatically generated tags can be used to further refine the search. The search returns internal resources that have been tagged with the selected keyword. From the Search Widget, these resources can be added to a collection in the Collection Widget, or they can be annotated with new tags in the Tagging Widget. Clicking on a search result opens it in the web browser. Besides searching for internal resources, the Search Widget can be used to search for external (publicly available) resources on the web through Yahoo! Answers.
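The derivation of automatic 'maturity' tags from Knowledge Maturing Indicators/Events, as described for the Search Widget, can be sketched as follows. The event fields and the threshold for "many users" are our assumptions for illustration, not the Instantiation's actual rules.

```python
def maturity_tags(events: dict) -> list:
    """Derive maturity tags for a resource from logged events (illustrative)."""
    tags = []
    # Example indicator: the resource was never added to any collection
    if events.get("times_added_to_collection", 0) == 0:
        tags.append("has not been added to a collection")
    # Example indicator: many distinct users tagged the resource
    if len(set(events.get("taggers", []))) >= 5:  # assumed cut-off for "many"
        tags.append("has been tagged by many users")
    return tags

resource_events = {"times_added_to_collection": 0,
                   "taggers": ["ann", "ben", "cat", "dee", "eli"]}
print(maturity_tags(resource_events))
```

Tags derived this way would then appear alongside user-generated tags in the Search Widget's sidebar as additional search filters.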
In addition to the widgets, the Connexions Kent Instantiation includes MatureFox, a Firefox plugin that
allows users to tag and rate websites as they browse the internet, thereby adding these resources to the
collective knowledge base (Social Semantic Server). Resources tagged with MatureFox can be accessed
from the other widgets by either searching for tags in the Search Widget, or by clicking on tags that the
resource has been tagged with in the Tag Cloud Widget. An overview of the widgets in the Instantiation is
given in Figure 6.1.
Figure 6.1: Connexions Kent Instantiation: Widgets and MatureFox
6.1.1 Summary of formative evaluation
[LTRI note: this section has been included at request of Instantiation authors because difficulties were
encountered and these have a bearing on the Summative Evaluation. LTRI agrees that we need to tell the
success stories as well as the ‘hard news’ stories.]
The formative evaluation was undertaken between April and June 2010 with a pilot group of seven
employees including two senior managers from Connexions Kent. This group represented the full
spectrum of confidence and competence in the use of ICT, with two describing themselves as ‘phobic’ to
its use. Two hands-on workshops were given, but some users still found the system difficult to operate.
Collections were set up and users uploaded resources and tagged items. Users found the following to be
the most useful elements of the software:
• Tagging in Firefox, dragging URL into Firefox
• Tagging window
• Search functions
• Tag cloud
• Creating/editing in the wiki
• Collections.
The formative evaluation raised issues around:
• Tag cloud display and ordering
• Displaying and selection of personal and organisational/shared tags
• Difficulty in using the Gardening Widget
• Inability to share collections.
Users were more positive about the software and more confident in its use than in previous sessions. However, it was felt that the system needed to flow better, and technical faults needed to be fixed, before users linked up with another person to expand the pilot group. It was felt that, currently, the software could be "off-putting" to new users. The development team stated that improvements could not be made until June. The pilot users felt that more time could be given to the software in July/August.
The formative evaluation showed that – albeit not being familiar with the Web 2.0 community-driven bottom-up approach to a large extent – users became increasingly familiar with the central ideas and features of the software. They continued to see clear potential for such a system in their context, where several functionalities of the software (such as the discussion forum, Collection Widget or rating functionality) received very positive ratings. For others (e.g. the wiki and Gardening Widget) a clearer design rationale needed to be established.
6.1.2 Summative evaluation: Research questions and hypotheses
Before designing the Summative Evaluation for the Connexions Kent Instantiation, a mapping was carried out for the G/S Knowledge Maturing Indicators on the basis of which the Instantiation had been designed.
Starting from the GMI–SMI mapping (Indicator Alignment) provided in D2.3/D3.3 for the Instantiation for Structuralia/Connexions Kent, the first step was to reduce the set of SMIs drastically by checking for redundancy, missing functionality in the Instantiation, and missing plausibility. Afterwards, we clustered the SMIs according to GMIs and gave the reasoning for this mapping, although in many cases it is quite self-explanatory. Just for clarity: there exist very specific SMIs like "A new resource has been added to a shared collection."; the counterparts "… existing resource…" or "… private collection." simply did not exist in the first list of transition Indicators provided in D2.3/D3.3 and were (for the sake of correctness) not added here.
The mapping of GMIs/SMIs to the Connexions Kent Instantiation showed that for all G/S knowledge
Maturing Indicators, the level of justification was strong (e.g. based on GMI developed via ethnographic
study, and re-assessed in the representative study). Therefore, we decided to use GMIs/SMIs to evaluate
the Instantiation’s effect on Knowledge Maturing (Top Level Evaluation Goal 2).
The Summative Evaluation at Connexions Kent focused on three different research questions to be
answered:
1. Does the Connexions Kent Instantiation support Knowledge Maturing activities related to
Knowledge Maturing Indicators (GMIs)?
2. How do the users use the Connexions Kent Instantiation? What do they appreciate, what needs to
be improved?
3. How usable is the Connexions Kent Instantiation?
In order to answer these research questions, a multi-method design was chosen. First, in a questionnaire
study, quantitative ratings should provide insight into the usefulness of the most relevant Knowledge
Maturing functionality to be supported with the Connexions Kent Instantiation. The Questionnaire Study
addressed questions 1 and 3 mentioned above.
Items in the questionnaire related to Knowledge Maturing were shaped by the assignment of Knowledge
Maturing Indicators to the Connexions Kent Instantiation (Deliverable D2.3/3.3).
General Knowledge Maturing Indicators used for this study are listed below:
• I.2.3.3: An artefact has been the subject of many discussions
• I.3.10: An artefact was changed
• I.3.3: An artefact was selected from a range of artefacts
• I.3.4: An artefact became part of a collection of similar artefacts
• I.3.6: An artefact is referred to by another artefact
• I.3.9: An artefact has been used by an individual
68
•
•
•
•
I.4.6: An artefact has been assessed by an individual
II.1.3: An individual has contributed to a discussion
II.1.5: An individual has significant professional experience
II.3.1: An individual has a central role within a social network
(For a complete mapping of SMIs for the Connexions Kent Instantiation to GMIs, and to questions in the
questionnaire see Appendix 12.6.1.1.)
Second, a Focus Group was carried out to gain further information on the usefulness of the Connexions Kent Instantiation as experienced by the careers advisers with regard to Knowledge Maturing support (research question 2).
6.2 Evaluation Description
6.2.1 Evaluation Workshops
In November 2010, the system had initially been introduced to the Connexions Kent careers advisers.
Some minor fixes and changes were identified and agreed by the group. The Summative Evaluation took
place between May and July 2011 in three workshops. In the rest of this section we will describe both the
procedure and the issues that occurred during the workshops.
6.2.1.1 First Evaluation Workshop in May 2011
Figure 6.2: User Workshop (Tagging Exercise)
The first workshop took place early in May 2011. Eight Connexions Kent careers advisers and six representatives of the project joined the workshop. The revised system was loaded onto the careers advisers'
laptops. Dongles (USB sticks for accessing mobile internet) were also given to the advisers during this
first workshop to enable them to access the Connexions Kent Instantiation irrespective of location, as the
formative evaluation had highlighted that some users had experienced difficulties accessing the internet in
some schools where they worked. Users were guided through key elements of the system and topics
were agreed for individuals to work on in the course of a Tagging Exercise with post-its (Figure 6.2).
Careers advisers began to use the system, but found that they could not view all shared collections,
collections could not always be subscribed to and that when they logged into the system, shared
collections had disappeared. It was also noted that collections which had previously been put into the
system were no longer available; careers advisers felt that their work had been lost. Also, the web search
facility started at google.com, so results were not relevant to the UK and careers advisers. In addition,
some users had concerns about having to log-in to each widget. Comments were also received on the
vocabulary used in the system, as it was considered too technical (i.e. tag gardening). Careers advisers
agreed that, rather than trying to use the system in isolation from colleagues, the established super-user
group would meet and work on LMI research. To facilitate this process, topics of joint interest were
identified and prioritised.
In order to collect more data, two further hands-on workshops were organised for June and July.
6.2.1.2 Second Evaluation Workshop in June 2011
The system had been updated as (technically) agreed at the first workshop. Four careers advisers (who had been involved in the first workshop and the formative evaluation) participated in the June workshop, together with three representatives of the project. The reduced number of participants was the result of structural changes within the organisation and a conflict with a management meeting.
At the second workshop, the participating users were successfully re-introduced to the system based on the topics ('tags') identified in May during the tagging exercise. Also at the second workshop, three
careers advisers continued to have problems with understanding and using the system. Users commented
that the system was complicated to set up (i.e. loading widget bars, opening and manipulating different
windows), difficult to navigate and disjointed (i.e. it was not easy to move from one step in the process to
the next). It was agreed that a user guide with step-by-step instructions needed to be developed, which
was prepared for the third workshop. Users also had difficulty in managing and manipulating the various
windows comprising the software, particularly on their laptops.
At the second workshop various minor technical issues were observed:
• Logging in difficulties for one user.
• MatureFox was not always displaying ratings and tags which had been saved by other users.
• Tag display did not always function (i.e. not all tags were listed, shared and individual tags did
not display).
• For some users, collections could not be seen or subscribed to, or shared collections disappeared after users
logged in again.
• Website search required a minor fix and so could not be used until the planned July workshop
(participants used Firefox for web search instead).
The Taxonomy Editor (Gardening Widget) was not presented in this part of the evaluation, as the technical team considered it to be of higher complexity in usage. Careers advisers reflected on how
this would be used, discussing roles and responsibilities of users in updating and ensuring tags were
correct and appropriate.
Careers advisers worked with the software, collected data and increased their knowledge of the software.
Finally, each careers adviser chose a topic relevant to their professional interests and knowledge
development, on which they agreed to collect further information within the system in order to gain log
data for the evaluation.
6.2.1.3 Third Evaluation Workshop in July 2011
The third and concluding workshop was held at the end of July. Five careers advisers (four of whom had
been involved in the Summative Evaluation since the start and one new user) participated, plus three
representatives of the project. Careers advisers had the choice either to work on their own or to jointly use
the system guided by particular tasks. One participant preferred the option to work alone, whilst the
others followed the guide with a representative of the project.
At this final workshop, although the careers advisers had identified materials to load on to the system, they
did not feel confident in uploading them. Only one user had added materials since May, which
others took advantage of by downloading them. Confidence in using the Mature Firefox widget and dragging
URLs into collections was low. Users reported that they were unsure whether the information would be stored,
would be retrievable, or could be shared with colleagues.
The usefulness of the dongles in enabling access to the Connexions Kent Instantiation was also reflected
upon during this workshop. Users considered them very useful as they had enabled them to access the
Instantiation regardless of their location. However, users did not fully exploit the internet access provided
by the dongles as the problems they experienced with the Instantiation had created some uncertainty. The
session focused on reviewing individual progress with using the system, looking at the new user guide
and working on collecting resources in the system. Usage data was collected. At the end of the workshop,
the participants took part in a Focus Group, which was led by two of the project representatives. The
findings are described in detail in Section 6.4.
6.3 The Questionnaire Study: Evaluation of Knowledge Maturing with SMIs
The Questionnaire Study was split into two phases – a pre-evaluation phase and a post-evaluation phase
– in order to compare participants’ situation before they had used the software with their situation afterwards.
6.3.1 Questionnaire (Pre/Post)
In order to obtain ratings of the perceived need for Knowledge Maturing functionality and the perceived
usefulness of the Instantiation to support Knowledge Maturing activities, a pre-usage questionnaire and a
post-usage questionnaire were designed, which are described in the following. Note that the term
“demonstrator” was used in all questionnaires and sessions with the users to refer to the Connexions Kent
Instantiation, as they had already become familiar with this term. The questionnaires and the mapping of
questions to SMIs can be found in Appendices 12.6.2.1 and 12.6.2.2. The following abbreviations will be
used for reporting the findings: mean (M), standard deviation (s).
Seven participants filled in the pre-usage questionnaire after they had signed a consent form during the first
participant workshop in May 2011. Completing the pre-usage questionnaire took them approximately
15 minutes.
Six of them were female and one participant was male. When asked how long they had worked as careers
advisers, an average of nearly 12 years was stated (M = 11,86; s = 7,559; range: 6-27). On average,
participants stated that they usually spend 48% (M = 47,86; s = 16,79; range: 20-65) of their time at work
using a computer.
Pre-usage Questionnaire
The pre-usage questionnaire consisted of two parts.
The first part was based on typical activities that relate to the Instantiation (as assigned to the Instantiation
in D2.3/3.3, p.67). Out of the list of activities in D2.3/3.3 (p.67), those activities were selected that were
planned to be deployed at Connexions Kent. This resulted in a questionnaire (see Appendix 12.6.2.1)
that consists of 17 questions which evaluate how typical various activities are for a person.
Question: “Please indicate for each of the following activities to which extent these are typical for your
own work”
Examples for items:
“I search for colleagues to ask for help”
“I store relevant results on collections on my desktop or laptop”
The items were rated on a four-point rating scale with the following options: Untypical - Rather
Untypical – Typical - Very Typical. It was designed to help us to identify typical Knowledge Maturing
activities in the participant group.
The second part of the questionnaire consisted of 13 items that rephrased the statements from the first
part of the questionnaire as questions. Four of the 17 statements in the first part of the
questionnaire were not asked in the second part because the Connexions Kent
Instantiation could not support these activities. For the remaining 13 activities we asked whether the
activities and/or the resources supporting them work well or whether they would need improvement.
Question: “Please indicate for each of these activities whether they work well at the moment or whether
the facilities (such as IT, paper-based materials etc.) that support these activities need changing or
improving”
Examples for Items:
“Searching for colleagues to ask for help”
“Storing relevant results in collections on my own desktop or laptop”
The questions were answered on a four-point rating scale with the following options: Not crucial for my
work – Works well – Needs some improvement – Needs a lot of improvement.
Additionally, two open questions focused on socio-demographic data, plus a general comments field:
• How long have you been working as a careers adviser?
• How much of your work time do you spend on your computer (in %)?
• Comments.
Post-usage Questionnaire
Four of the participants who completed the pre-usage Questionnaire also filled in the post-usage
questionnaire in the final workshop at the end of July. All of them were female.
The post-usage questionnaire (Appendix 12.6.2.2) consisted of three parts.
The first part of the post-usage questionnaire addresses the perceived need for improvement of the
Instantiation (i.e. to find out whether the Instantiation supports the activities well or improvements are
needed). The questionnaire consists of 12 items that start from Knowledge Maturing activities (the same
as in the pre-usage questionnaire) but that are rephrased in terms of the functionality that is provided by
the Instantiation. Further, this part of the questionnaire asks one open question (“Comments”).
Question: “In the following, a couple of activities are described that are intended to be supported with the
MATURE demonstrator tool. Please indicate for these activities whether you think the demonstrator
supports them well or whether improvements are needed”.
Examples of questionnaire items:
“Searching for colleagues to ask for help”
“Searching on the internet for relevant information”
The items were rated on a four-point rating scale with the following options: Supports the activity well –
Needs some improvement – Needs a lot of improvement – I don’t know.
The second part of the post-usage questionnaire addresses experiences with the Instantiation. The
activities supported within the Instantiation were assessed with the same 12 items as in the first part of the
questionnaire.
Question: “Please indicate for each of these activities, which of the suggested answers corresponds to
your experiences with the demonstrator. (Multiple answers are possible).”
Examples of questionnaire items:
“Searching for colleagues to ask for help”
“Searching on the internet for relevant information”
A multiple choice list with 7 options was provided to the users: I found the functionality useful – I didn’t
have time to use it – I wasn’t interested in the activity – I found the functionality confusing – I couldn’t
find the functionality – I didn’t know about the functionality – I didn’t need the functionality.
Moreover, as we were aware at the time the post-usage questionnaire was distributed that there had
not been much usage (according to the log data), we posed one open question: “What stopped you from using the
demonstrator?”.
We also wanted to use the post-usage questionnaire to collect some general information on the usability of
the demonstrator (research question 3). Therefore, in the third part of the post-usage questionnaire, the
participants were asked to assess the functionality and usability of the five widgets of the Instantiation
(Collections, Tagging, Tag Cloud, Search, Discussion), the MatureFox, and the software (“MATURE
Demonstrator”) as a whole, and to suggest modifications to make the software more useful (open
question).
For each of the widgets, the participants were asked to rate functionality on a three-point rating scale
(Very useful – Somewhat useful – Not useful at all) and usability on a three-point rating scale (Easy to use
– Rather easy to use – Difficult to use).
6.3.2 Findings from the Questionnaire Study related to Specific Knowledge Maturing Indicators
Pre-usage
Table 6.1 gives an overview of the answers in the first part of the pre-usage questionnaire.
Table 6.1: Answers to questions concerning current practices of knowledge
creation and knowledge sharing

Please indicate for each of the following activities to which extent these are typical for your own work
(Rating scale: 1 = Untypical, 2 = Rather untypical, 3 = Typical, 4 = Very typical)

Item | N | 1 | 2 | 3 | 4 | M | Median | s
I search for colleagues to ask for help | 7 | 0 | 2 | 3 | 2 | 3,00 | 3,00 | ,816
I search on the internet for relevant information | 7 | 0 | 0 | 0 | 7 | 4,00 | 4,00 | ,000
I search on my own desktop for relevant information | 6 | 0 | 1 | 2 | 3 | 3,33 | 3,50 | ,816
I search in other resources for relevant information (paper based copies…) | 7 | 0 | 0 | 4 | 3 | 3,43 | 3,00 | ,535
I take individual notes that I revisit at later points in time | 7 | 0 | 1 | 2 | 4 | 3,43 | 4,00 | ,787
I store relevant results in collections on my desktop or laptop | 7 | 1 | 0 | 3 | 3 | 3,14 | 3,00 | 1,069
I add keywords or tags to my digital resources in order to find them at a later date | 7 | 0 | 4 | 3 | 0 | 2,43 | 2,00 | ,535
I add keywords or tags to my paper-based resources in order to find them again at a later date | 7 | 2 | 2 | 3 | 0 | 2,14 | 2,00 | ,900
I make relevance judgements for digital documents in order to highlight the most interesting resources and find them at a later date | 7 | 1 | 2 | 3 | 1 | 2,57 | 3,00 | ,976
I make relevance judgements for paper-based resources in order to highlight the most interesting resources and find them at a later date | 7 | 0 | 3 | 3 | 1 | 2,71 | 3,00 | ,756
I maintain my private collections and continuously add materials | 7 | 1 | 1 | 4 | 1 | 2,71 | 3,00 | ,951
I discuss relevant resources with my colleagues | 7 | 0 | 1 | 3 | 3 | 3,29 | 3,00 | ,756
I share my private digital collections with colleagues | 7 | 2 | 0 | 5 | 0 | 2,43 | 3,00 | ,976
I share my private paper-based collections with colleagues | 7 | 1 | 2 | 3 | 1 | 2,57 | 3,00 | ,976
I share my private notes with colleagues | 7 | 2 | 3 | 2 | 0 | 2,00 | 2,00 | ,816
My colleagues and I have a common taxonomy/classification for tagging (or labelling) resources | 7 | 2 | 4 | 1 | 0 | 1,86 | 2,00 | ,690
My colleagues and I maintain common digital collections of information materials | 7 | 1 | 3 | 2 | 1 | 2,43 | 2,00 | ,976
In the following, the most typical Knowledge Maturing activities are summarised.
Seven out of seven participants stated they typically or very typically “search on the internet for
relevant information” and they “search in other resources for relevant information (paper based
copies...)”.
Six persons stated that they typically or very typically “take individual notes that [they] revisit at later
points in time”, that they “store relevant results on collections on [their] desktop or laptop” and they
“discuss relevant resources with [their] colleagues”.
Five respondents said that they typically or very typically “search for colleagues to ask for help”, they
“search on [their] own desktop for relevant information”, they “maintain [their] private collections and
continuously add materials” and “they share [their] private digital collections with [their] colleagues”.
Four out of seven also stated they typically or very typically “make relevance judgments for digital
documents in order to highlight the most interesting resources and find them at a later date”, they “share
[their] private paper-based collections with [their] colleagues” and that “they and [their] colleagues have a
common taxonomy/classification for tagging (or labelling) resources”.
The answers to the second part of the questionnaire are summarised in Table 6.2. Statistical measures (M,
Median, s) are calculated without taking into account the answer “Not crucial for my work”.
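This convention can be sketched in a few lines of Python (an illustration added here for clarity, not part of the MATURE software): the answer frequencies reported in the tables are expanded back into individual ratings, the sample mean, median and standard deviation are computed, and — for Table 6.2 — “Not crucial for my work” answers are left out of the data first.

```python
# Illustrative sketch only: reproducing the statistical measures M (mean),
# Median and s reported in Tables 6.1 and 6.2 from the rating frequencies.
from statistics import mean, median, stdev

def expand(freqs):
    """Turn {rating: frequency} counts into the flat list of ratings."""
    return [rating for rating, n in freqs.items() for _ in range(n)]

# "I search for colleagues to ask for help" (Table 6.1): 0/2/3/2 on scale 1-4
r = expand({1: 0, 2: 2, 3: 3, 4: 2})
assert round(mean(r), 2) == 3.00 and median(r) == 3.00
assert round(stdev(r), 3) == 0.816      # table reports M = 3,00; s = ,816

# "Searching for colleagues to ask for help" (Table 6.2): the one "Not
# crucial" answer is excluded, leaving frequencies 2/3/1 on scale 1-3
r = expand({1: 2, 2: 3, 3: 1})
assert round(mean(r), 2) == 1.83 and median(r) == 2
assert round(stdev(r), 3) == 0.753      # table reports M = 1,83; s = 0,753
```

The reported values of s match only when the sample standard deviation (divisor N − 1) is used, which is what `statistics.stdev` computes.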
Table 6.2: Answers to questions concerning perceived need for improvement

Please indicate for each of these activities whether they work well at the moment or whether the facilities
(such as IT, paper-based materials etc.) that support these activities need changing or improving.
(Rating scale: – = Not crucial for my work, 1 = Works well, 2 = Needs some improvement, 3 = Needs a lot of improvement)

Item | N | – | 1 | 2 | 3 | M | Median | s
Searching for colleagues to ask for help | 7 | 1 | 2 | 3 | 1 | 1,83 | 2 | 0,753
Searching on the internet for relevant information | 7 | 0 | 4 | 3 | 0 | 1,43 | 1 | 0,535
Searching on my own desktop for relevant information | 7 | 0 | 3 | 4 | 0 | 1,57 | 2 | 0,535
Taking individual notes that I revisit at later points in time | 7 | 0 | 2 | 4 | 1 | 1,86 | 2 | 0,69
Storing relevant results in collections on my desktop or laptop | 7 | 0 | 1 | 5 | 1 | 2 | 2 | 0,577
Adding keywords or tags to my digital resources in order to find them at a later date | 7 | 0 | 1 | 3 | 3 | 2,29 | 2 | 0,756
Making relevance judgements for digital documents in order to highlight the most interesting resources and find them at a later date | 7 | 1 | 1 | 1 | 4 | 2,5 | 3 | 0,837
Maintaining private collections and continuously adding materials | 7 | 0 | 1 | 4 | 2 | 2,14 | 2 | 0,69
Discussing with my colleagues about relevant resources | 7 | 0 | 1 | 4 | 2 | 2,14 | 2 | 0,69
Sharing private digital collections with colleagues | 7 | 0 | 0 | 4 | 3 | 2,43 | 2 | 0,535
Sharing my private notes with colleagues | 7 | 0 | 1 | 3 | 3 | 2,29 | 2 | 0,756
Creating a common taxonomy/classification for tagging (or labelling) resources | 7 | 0 | 1 | 1 | 5 | 2,57 | 3 | 0,787
Maintaining common digital collections of information and materials with colleagues | 7 | 0 | 0 | 2 | 5 | 2,71 | 3 | 0,488
In the following, activities are listed that were stated to need some or a lot of improvement by more
than three people:
Seven participants stated that “Sharing private digital collections with colleagues” and “Maintaining
common digital collections of information and materials with colleagues” need some or a lot of
improvement.
Six respondents found that the following activities needed improvement: “Storing relevant results in
collections on my own desktop or laptop”, “Adding keywords or tags to my digital resources in order to
find them at a later date”, “Maintaining private collections and continuously adding materials”,
“Discussing with my colleagues about relevant resources”, “Sharing my private notes with colleagues”,
and “Creating a common taxonomy/classification for tagging (or labelling) resources”.
Five persons indicated that “Taking individual notes that I revisit at later points in time” and “Making
relevance judgments for digital documents in order to highlight the most interesting resources and find
them at a later date” needed to be improved.
Four participants thought that “Searching for colleagues to ask for help” and “Searching on my own
desktop for relevant information” need to be improved.
In contrast, four participants stated that “Searching on the internet for relevant information” works well,
and three of the participants assessed “Searching on my own desktop for relevant information” as
working well.
In the open questions, one participant commented that “Keeping up to date info and constant change can
cause confusion among colleagues and [him/herself]”.
Post-usage
In the first part of the post-usage questionnaire, the four participants who completed it were asked to
indicate how well Knowledge Maturing activities were supported by the Instantiation, and if
improvement would be needed. The ratings are summarised in Table 6.3. Statistical measures (M,
Median, s) were calculated without taking into account the answer “I don’t know”.
“Storing relevant results in the ‘collections’” received the most positive ratings: three participants stated
that the Instantiation supports this activity well.
Two out of four participants stated that the following activities need improvement: “Searching on the
internet for relevant information”, “Taking individual notes that I revisit later”, “Adding keywords or tags
to my resources in order to find later” and “Discussing relevant resources with my colleagues”.
Three out of four participants thought that the Instantiation needs some or a lot of improvement in
supporting the activity of “Searching for colleagues to ask for help” and “Creating a common
taxonomy/classification for tagging (or labelling) resources”.
Three participants commented on the perceived need for improvement also with open answers:
“Unfortunately I find the programme has too many sequences which are not my strength at all! I
also feel that you have to be a good user of computer to get the most from the product. I am often
left feeling very confused as I can’t see connectors very easily from one section to another.”
“I still need to familiarize myself with some of the widgets, but overall it is working a lot better
than before. It is still logging me off when there are a few people on it.”
“I don’t understand how tags will be organised and tidied-up if there are too many headings/tags,
people will not want to use the software. We haven’t really covered this in our sessions.”
A comparison of findings from the pre-usage questionnaire and post usage questionnaire will be
presented in the discussion section.
Table 6.3: Answers to questions concerning perceived need for improvement

In the following, a couple of activities are described that are intended to be supported with the MATURE
demonstrator tool. Please indicate for each of these activities whether you think the demonstrator supports
them well or whether improvements are needed.
(Rating scale: – = I don’t know, 1 = Supports the activity well, 2 = Needs some improvement, 3 = Needs a lot of improvement)

Item | N | – | 1 | 2 | 3 | M | Median | s
Searching for colleagues to ask for help | 4 | 0 | 1 | 1 | 2 | 2,25 | 2,5 | 0,957
Searching on the internet for relevant information | 4 | 0 | 2 | 2 | 0 | 1,5 | 1,5 | 0,577
Searching on my own desktop for relevant information | 4 | 1 | 2 | 0 | 1 | 1,67 | 1 | 1,155
Taking individual notes that I revisit later | 4 | 1 | 1 | 1 | 1 | 2 | 2 | 1
Storing relevant results in the ‘collections’ | 4 | 0 | 3 | 0 | 1 | 1,5 | 1 | 1
Adding keywords or tags to my resources in order to find later | 4 | 0 | 2 | 1 | 1 | 1,75 | 1,5 | 0,957
Maintaining private collections and continuously adding materials/resources | 4 | 1 | 2 | 1 | 0 | 1,33 | 1 | 0,577
Discussing relevant resources with my colleagues | 4 | 2 | 0 | 1 | 1 | 2,5 | 2,5 | 0,707
Sharing private digital collections with colleagues | 4 | 2 | 1 | 1 | 0 | 1,5 | 1,5 | 0,707
Sharing my private notes with colleagues | 4 | 2 | 2 | 0 | 0 | 1 | 1 | 0
Creating a common taxonomy/classification for tagging (or labelling) resources | 4 | 1 | 0 | 2 | 1 | 2,33 | 2 | 0,577
Maintaining common digital collections of information and materials with colleagues | 4 | 1 | 2 | 1 | 0 | 1,33 | 1 | 0,577
6.3.3 Findings from the Questionnaire Study related to experiences with the Instantiation, usefulness and usability (not related to Specific Knowledge Maturing Indicators)
In Table 6.4, the participants’ answers to questions regarding their experiences with the Instantiation
are listed.
The most positive experience with the Instantiation was stated for the activity “Maintaining common
digital collections of information and materials with colleagues” – four participants found this
functionality useful. Three out of four persons thought that “Searching on the internet for relevant
information”, “Storing relevant results in the ‘collections’” and “Adding keywords or tags to my
resources in order to find later” were useful functionalities.
Negative experiences (I didn’t have time to use it, I wasn’t interested, I found the functionality confusing
etc.) were stated by four respondents for the following activities:
“Searching for colleagues to ask for help”
“Taking individual notes that I revisit later”
“Creating a common taxonomy/classification for tagging (or labelling) resources”.
We also asked the participants in an open question what stopped them from using the demonstrator.
The open answers are listed here:
“I see the world back to front, so as I am not clear in my mind of the overall ability and capacity
of the programme. I struggle to use it systematically as the system seems to be not the system I
would follow.”
“Slow internet access at times.”
“Time to still use the demonstrator.”
Table 6.4: Answers to questions concerning experiences with the demonstrator (frequencies)

Please indicate for each of these activities, which of the suggested answers correspond to your experiences with the demonstrator.
(Answer options: A = I found the functionality useful, B = I didn’t have time to use it, C = I wasn’t interested in the activity, D = I found the functionality confusing, E = I couldn’t find the functionality, F = I didn’t know about the functionality, G = I didn’t need the functionality)

Item | A | B | C | D | E | F | G
Searching for colleagues to ask for help | 1 | 1 | 0 | 0 | 2 | 1 | 0
Searching on the internet for relevant information | 3 | 0 | 1 | 0 | 0 | 0 | 0
Searching on my own desktop for relevant information | 2 | 0 | 0 | 1 | 0 | 0 | 1
Taking individual notes that I revisit later | 0 | 1 | 1 | 0 | 0 | 1 | 1
Storing relevant results in the ‘collections’ | 3 | 0 | 1 | 0 | 0 | 0 | 0
Adding keywords or tags to my resources in order to find later | 3 | 0 | 0 | 1 | 0 | 0 | 0
Maintaining private collections and continuously adding materials/resources | 1 | 1 | 0 | 2 | 0 | 0 | 0
Discussing relevant resources with my colleagues | 1 | 0 | 1 | 2 | 0 | 0 | 0
Sharing private digital collections with colleagues | 2 | 1 | 0 | 1 | 0 | 0 | 0
Sharing my private notes with colleagues | 1 | 1 | 0 | 1 | 0 | 0 | 1
Creating a common taxonomy/classification for tagging (or labelling) resources | 0 | 1 | 1 | 1 | 1 | 0 | 0
Maintaining common digital collections of information and materials with colleagues | 4 | 0 | 0 | 0 | 0 | 0 | 0
The participants evaluated the widgets and the Instantiation as a whole regarding functionality (Table
6.5) and usability (Table 6.6).
The following can be summarised with regard to functionality and usability:
• The “Search Widget” and “Collection Widget” were perceived as very useful and in general
easy to use.
• The “Tagging Widget” was considered generally useful but difficult to use.
• The “Tag Cloud Widget” was seen as rather not useful but easy to use.
• The “Tag Editor Widget” was generally regarded as somewhat useful. All participants stated
that the Widget is difficult to use.
• The “MatureFox” was in general seen as very useful and easy to use.
Considering the software as a whole, the “MATURE Demonstrator” was evaluated as very useful
and rather easy to use.
Table 6.5: Answers to questions concerning functionality

Please indicate for each of the widgets whether you find them very useful, somewhat useful or not useful at all
(Rating scale: 0 = Not useful at all, 1 = Somewhat useful, 2 = Very useful)

Item | N | 0 | 1 | 2 | M | Median | s
Search Widget (allows to search for documents and colleagues based on tags) | 4 | 0 | 1 | 3 | 1,75 | 2,00 | 0,500
Collection Widget (allows to collect documents from the web and the desktop) | 4 | 0 | 1 | 3 | 1,75 | 2,00 | 0,500
Tagging Widget (allows to tag resources) | 4 | 0 | 2 | 2 | 1,50 | 1,50 | 0,577
Tag Cloud Widget (gives an overview of all tags in the system) | 4 | 2 | 1 | 1 | 0,75 | 0,50 | 0,957
Tag Editor Widget (allows you to edit, re-arrange tags and to put them in a hierarchy) | 2 | 0 | 2 | 0 | 1,00 | 1,00 | ,000
MatureFox (allows you to tag web pages while surfing) | 4 | 0 | 1 | 3 | 1,75 | 2,00 | 0,500
MATURE Demonstrator as a whole | 4 | 0 | 1 | 3 | 1,75 | 2,00 | 0,500
Table 6.6: Answers to questions concerning usability

Please indicate for each of the widgets whether you find them easy to use, rather easy to use or difficult to use
(Rating scale: 0 = Difficult to use, 1 = Rather easy to use, 2 = Easy to use)

Item | N | 0 | 1 | 2 | M | Median | s
Search Widget (allows to search for documents and colleagues based on tags) | 4 | 1 | 1 | 2 | 1,25 | 1,50 | 0,957
Collection Widget (allows to collect documents from the web and the desktop) | 4 | 1 | 1 | 2 | 1,25 | 1,50 | 0,957
Tagging Widget (allows to tag resources) | 4 | 2 | 1 | 1 | 0,75 | 0,50 | 0,957
Tag Cloud Widget (gives an overview of all tags in the system) | 4 | 1 | 0 | 3 | 1,50 | 2,00 | 1,00
Tag Editor Widget (allows you to re-arrange tags and to put them in a hierarchy) | 4 | 4 | 0 | 0 | 0,00 | 0,00 | 0,00
MatureFox (allows you to tag web pages while surfing) | 4 | 1 | 0 | 3 | 1,50 | 2,00 | 1,00
MATURE Demonstrator as a whole | 4 | 1 | 2 | 1 | 1,00 | 1,00 | 0,816
In the open answer section, the participants gave their opinions on what they would change
in the Instantiation to make it more useful.
This is what they suggested in the open answers of the questionnaire:
“I really like the idea behind the process but continue to struggle with the complexity. Thinking about
encouraging other colleagues, I don’t think I will ever be in the position to explain how to use it to
someone else. New IT for the existing group (over 30) should take accent of prior knowledge and
therefore less functionality that slowly develops would be the way forward. The docs on the system
also are confusing as you can’t tell if they are a .doc .ppt article so symbols would be very useful.”
“Clearer use of language for some of the tools used in the Demonstrator, i.e. Tag editor.”
“1. Copy and paste facility within collections, so that you can copy items from one collection to
another. 2. Facility to hover over an item in a collection and see its full name or URL. 3. Currently I
can’t seem to delete a collection that I have created (only items from within a collection).”
The findings will be discussed in the discussion section.
6.4 The Focus Group: Evaluation of Knowledge Maturing based Hypotheses
6.4.1 Procedure
For the Focus Group, a set of questions was developed using hypotheses. For a detailed mapping of
questions in the Focus Group to hypotheses, see Appendix 12.6.1.2.
It was primarily designed to explore how the software was used in practice and how this work would
have been undertaken before implementation of the software. It focused on increasing our
understanding of how work processes and the flow of information had been altered or not.
Held at the end of the final evaluation workshop in July, the focus group comprised five users,
including one user who was new to the software. The discussion on the software was digitally
recorded and lasted approximately one hour. It was explained to users that the purpose of the
evaluation was to try and understand if, and in what ways, the software had been useful in their labour
market information (LMI) research, collection, analysis and use. The information they provided was to
help the technical and evaluation team to understand better how technology can be used in
information, advice and guidance.
The following questions were asked to lead the discussion in the Focus Group:
1. Give one example of how you have used the demonstrator.
2. How was this different from the way you would have completed this task without the support of the demonstrator?
3. Do you think that the demonstrator has helped you to think more creatively about:
   a. How LMI could be used?
   b. How LMI could be integrated more into IAG sessions?
4. Thinking about the ‘search’ tool in the demonstrator, has this tool been useful or not?
5. Has using the ‘search’ tool in the demonstrator made it easier or harder to:
   a. Locate LMI?
   b. Identify new sources?
6. Thinking about creating and using ‘collections’ in the demonstrator, has this tool been useful or not? (Users were also asked to say more about their answer, for example whether this related to tagging/labelling of sources, organisation of sources, accessibility of sources, or commitment to creating a collection.)
7. Has the ‘collections’ tool made it harder or easier to:
   a. Collect LMI?
   b. Collate LMI?
   c. Identify new LMI?
8. Have you created a collection with a colleague/s? (Users were asked to explain, for example, how this was discussed and agreed, and whether it was created for a joint project, interest etc.)
9. Have you shared your collections and/or subscribed to collections created by other colleagues? (Users were asked to explain whether this was with colleagues in the same office or different offices within the organisation.)
10. Have you used or shared the collections with, for example, other colleagues, careers co-ordinators in schools or pupils/students? (Users were asked to give an example.)
11. As a result of using the demonstrator, do you think that you have more awareness of what LMI your colleagues are interested in and/or researching?
12. Do you feel more confident in your ability to:
    a. Identify new knowledge on the labour market?
    b. Assess the quality or reliability of labour market information and sources?
13. Do you think that by using the demonstrator you have increased your knowledge of, for example, a particular topic, local labour market, educational courses and qualifications?
14. Do you feel more motivated to develop your understanding of LMI for IAG by engaging in information searching, collecting, collating and tagging?
15. Overall, do you think that the demonstrator has been successful in:
    a. Supporting the collection and development of LMI for practice?
    b. Increasing efficiency of researching the labour market?
    c. Reducing individual effort in researching the labour market?
    d. Retaining and developing organisational knowledge?
16. Are there any further comments or remarks you would like to make about the MATURE demonstrator?
In answering the questions, users were asked to think how they would normally research and use LMI,
compared with how they had used the software for these tasks. Where appropriate, users were asked to
explain their answers or provide examples. All users were given the opportunity to speak. The focus
group enabled users to discuss with each other and reflect upon the software. This highlighted areas of
the software that had worked well for some and less so for others. Users also shared how they used the
software and how it could be potentially used in the future. Through this discussion, the users’
knowledge of the potential of the software grew as more possibilities and ideas were formed.
Findings from the focus group discussion follow.
6.4.2
Findings
The Focus Group interview was recorded and a detailed account of the interview was produced,
including interesting quotes which were transcribed from the recording. Two representatives from the
project independently conducted the analyses, drawing out key themes. The findings revealed the
multiple ways in which the software was being used and how this differed from how an activity
would have been completed prior to the implementation of the software. During the focus group
interview, participants were questioned about specific elements of the demonstrator, how it had
worked for them and how this work would have been undertaken without the demonstrator (see the
interview questions in Section 6.4.1). The aim of the focus group was to draw out in what ways
working practices had changed because of the demonstrator. Some examples include:
• Organising resources: Being able to organise own and shared resources (such as policy
documents, research, LMI, PowerPoint presentations, information sheets, URLs etc.) was seen as
the most useful facility in the software. This included tagging, rating, ‘filing’ in collections and
sharing. One user said that “organising own resources better and making them available to others
if you want to” was how she had used the software, viewing it as the most useful aspect of the
software. Before the implementation of the software, hard copy resources were kept at local
offices. Difficulties in organising, sharing and finding resources meant that many users had their
own set of files and resources, resulting in individually held knowledge.
• Sharing resources: The software enabled users to share resources and collaboratively collect
resources, which was viewed as the major benefit of the software. Prior to the implementation of
the software, information was shared with local colleagues by email, except where files were too
large for the inbox. For example, PowerPoint presentations would not normally have been shared
as they would have been too large to email. By using the software, users were not only able to
share these PowerPoint presentations, but were also able to collaboratively add to and amend these
resources with their own knowledge. This was considered to be a good way of working efficiently.
Local intranets included resources, but users stated that they were unsure what was new or dated,
what had been revised, where to find particular items and how resources had been named.
• Tagging resources: This was agreed to be a useful way of categorising and labelling information
that would not normally have been filed in this way. The concept of tagging was new to many of
the group. New language (i.e. tagging, gardening and tagging taxonomy) had been learnt in order
to communicate and share ideas with colleagues. This facility in the software was considered to be
very useful and a practice that would continue. On the intranet resources were named, but not
tagged. Tagging for the users meant that searching for and finding resources had been made easier
with the software.
• Rating websites: Before the implementation of the software, users reported that they kept their
own set of website bookmarks and favourites using their browser. By using the software, URLs
could be rated, tagged, presented in collections and shared.
By reflecting upon the way they had used the software and how they would have performed tasks
previously, users believed that the software had helped them think more creatively about how LMI
could be used and integrated into information, advice and guidance sessions. They commented that:
“having information in one place is particularly helpful as I can build up resources and
compile into one document”
“Working with students, I can get them to compare and contrast resources that have been
collated in the collections.”
“I can build up my own tags for particular schools or a particular activity…”
Search tool
Users were also positive regarding the ‘search’ tool in the software for undertaking their own research
and knowledge development. However, they felt that it could not be used with a student during a
careers session, as it was not quick enough (sessions are only 35 minutes). One user reported that they
used the search tool in their own time: “this informs what I am doing […] builds up my knowledge”.
Others defined why they liked the search tool:
“It is similar to using favourites, but with some additional functionality”
“I like sharing with colleagues […] Sharing saves time and as we are now operating in a
different climate it is important that we change working practices to be more efficient […] this
means working collectively.”
In locating LMI and identifying new sources, all users were unsure whether the software had made
this activity easier or harder. It was recognised that with more information on the software, the search
feature would become more powerful and locating new sources would be much easier. Also, there
was recognition that tagging was important and that they “needed to get smart about the process”. The
possibility of some kind of controlled, collective system of tagging, or guidance on tagging, was discussed.
Private and shared tagging and the need for sub categories were also debated. It was agreed that tag
gardening had to be undertaken regularly to keep up-to-date, but this raised questions about who does
the gardening, how often and who assures quality. Process issues were also raised about searching,
with one user believing that there needed to be an element of quality control when searching. She
suggested that there needed to be perhaps a list of ‘tried and tested’ resources. There was some
concern that people may get overwhelmed with the results.
In terms of using the software, one user said that they had not found new sources, but had been
able to find PowerPoint presentations created by a colleague that had been very useful. Another user
said that the software was just a different way of searching. Less positive comments on the search
facility were the result of technical errors in the software. However, users were unanimous in liking
the search tool and being able to search others’ collections. It was recognised that there needed to be a
“cultural shift” in the way resources are located, identified and shared. It was noted by one that she
was “still getting my head round the sharing approach. I need to think about having folders with topic
areas rather than names on.”
Collections
Creating and using ‘collections’ in the software was also agreed to be useful in identifying and sharing
LMI. It was noted that some people in the organisation had never used the staff intranet; the reasons
for which were debated. The software was agreed to be “much less taxing” than trying to find
resources on the intranet.
The collections tool was found by all users to be advantageous to their work, particularly in collecting,
collating and identifying new LMI. Echoing others’ comments, one user said that: “I really like the idea
of sharing […] avoids duplication of work, but in reality sharing may be a challenge, as I have to
attribute author, who updates information and who takes credit?”. This raised issues around working
in a culture with an organisational policy of accreditation. Issues around ownership and intellectual
property were debated. Some technical problems were raised including: the ability to comment on
resources; right click feature on collections not working; PowerPoint presentations not opening; and
not being able to export collections. Although all had shared and looked at others’ collections, none of
the users had created a collection with a colleague, as they had been focusing on their own collections.
Sharing collections was considered particularly useful, as in normal working practice they would not have done so.
Identifying new knowledge – new sources and expertise
All agreed that the software had not helped their awareness or understanding of others’ LMI interests
or work. Understanding colleagues’ expertise was felt to be more tacit knowledge. It was noted that
across the organisation there was limited knowledge of individuals’ expertise; people tend to know the
expertise of those in their area and those who work in a similar position. In contrast, the majority of
users believed that their knowledge of a particular topic had increased by using the software. One user
gave the example of collecting resources on internships, which had been a new area. By using the
software, a collection of resources in this topic had been created and shared with colleagues.
Although all felt more confident in their ability to identify new knowledge on the labour market, they
were less confident in their ability to assess the quality or reliability of labour market information and
sources found. Users were cautious, as they did not fully understand what criteria they were looking
for when rating/assessing sources. They debated the issue of being more explicit about the criteria and
that there was a need for protocols for rating. Positively one user noted that by using the software she
had begun to think about the sources she found: “It has got me into the habit of looking critically at
resources before I rate and topic resources or webpages.”
Users believed that by using the software they were more motivated to develop their understanding of
LMI for careers by engaging in information searching, collecting, collating and tagging. This was
mainly because they had not previously had access to any system to support LMI searches and collection. The software
was thought to support the collection and development of LMI for practice and particularly for
retaining and developing organisational knowledge. It was noted that as people leave they take their
expertise and knowledge with them, but if their collections and searches are stored in the system then
this knowledge is retained. The software was believed to have the potential to increase efficiency in
and reduce individual effort in researching the labour market. It was recognised that there would need
to be substantial initial investment in the software for it to increase research efficiency.
Overall, users were positive about the software and its functionality, particularly enabling localities to
talk to each other and share resources. Issues were raised about bugs in the software and getting to
grips with how the software worked. Many considered the software overly complicated, and the
sequence of what to do next unclear. Others struggled with fitting the different tools together and
found the multiple ways of performing the same task confusing.
Feedback on the system
Mixed responses were received from careers advisers and managers on using the system:
“Unfortunately I find the programme has too many sequences, which are not my strength at
all! I also feel that you have to be a good computer user to get the most from the product. I am
often left feeling very confused, as I can’t see connections very easily from one section to the
next.”
“I still need to familiarise myself with some of the widgets, but overall it is working a lot
better than before.”
“I don’t understand how tags will be organised and ‘tidied up’ – if there are too many
headings/tags people will not want to use the software. We haven’t really covered this in our
sessions.”
“I really like the idea behind the process, but continue to struggle with the complexity.
Thinking about encouraging other colleagues, I don’t think I will ever be in a position to be
able to explain how to use it to someone else. New IT for an existing group (over 30) should
take account of prior knowledge and therefore less functionality that slowly develops would
be the way forward.”
Careers advisers and managers commented on what had stopped them using the system and putting it
into practice:
“I see the world back to front, so as I am not clear in my mind of the overall ability and
capacity of the programme I struggle to use it systematically as the system seems to be not a
system I would follow.”
“Slow internet access at times.”
“It is still logging me off when there are a few people on it.”
“Time to use the demonstrator.”
Although the system had not been attractive to some users, it had started them thinking about how to
change and/or improve their working practices:
“The idea is excellent, but you have to learn so much about how to use it that ‘cut and paste’
and shared drives would be so much easier.”
Functions that still need changing to make the software useful were identified:
“The resources on the system are confusing, as you can’t tell if they [are] a document or
PowerPoint, so symbols would be useful.”
“Clearer language from some of the tools used in the demonstrator is needed, i.e. tag editor is
unclear.”
“Copy and paste facility within collections, so that you can copy items from one collection to
another.”
“Facility to hover over an [item] in a collection and see its full name or URL.”
“Currently I can’t seem to delete a collection that I have created (only items from within a
collection).”
6.5
Discussion and Implications
As stated in the introduction, with this Summative Evaluation at Connexions Kent, we aimed to
answer three different research questions:
1. Does the Connexions Kent Instantiation support Knowledge Maturing activities related to
Knowledge Maturing Indicators (GMIs)?
2. How do the users use the Connexions Kent Instantiation? What do they appreciate, what
needs to be improved?
3. How usable is the software?
In the following, the outcomes will be discussed with regard to these three research questions.
Moreover, we will discuss the overall development process against the background of the issues that
arose (see evaluation description, Section 0) and describe findings and lessons learned.
6.5.1
Implications for Knowledge Maturing
Does the Connexions Kent Instantiation support Knowledge Maturing related to Knowledge
Maturing Indicators (GMIs)?
In order to find answers to the question of whether the Connexions Kent Instantiation supports
Knowledge Maturing (research question 1), we designed and conducted the Questionnaire Study, using
a questionnaire that relates both to typical activities and to Knowledge Maturing Indicators (see
Appendix 12.6.1.1). With the pre-usage questionnaire we wanted to gain insight into the perceived
need for Knowledge Maturing functionality, while the post-usage questionnaire addressed the issue of
perceived usefulness of the Instantiation to support Knowledge Maturing activities.
In the pre-usage questionnaire, the following Knowledge Maturing activities were assessed as typical
or rather typical for participants’ own work: “I search on the internet for relevant information”, “I
search in other resources for relevant information (paper based copies...)”, “I take individual notes that
I revisit at later points in time”, “I store relevant results in collections on my desktop or laptop”, “I
discuss relevant resources with my colleagues”, “I search for colleagues to ask for help”, “I search on
my own desktop for relevant information”, “I maintain my private collections and continuously add
materials” and “I share my private digital collections with colleagues”.
Furthermore, when asked whether the available facilities before the introduction of the software
support the Knowledge Maturing activities well, the participants stated that the following Knowledge
Maturing activities were well supported by the existing facilities: “Searching on the internet for
relevant information” and “Searching on my own desktop for relevant information”. The following
facilities needed to be improved in order to better support the following Knowledge Maturing
activities: “Sharing private digital collections with colleagues”, “Maintaining common digital
collections of information and materials with colleagues”, “Storing relevant results in collections on
my own desktop or laptop”, “Adding keywords or tags to my digital resources in order to find them at
a later date”, “Maintaining private collections and continuously adding materials”, “Discussing with
my colleagues about relevant resources”, “Sharing my private notes with colleagues“, “Creating a
common taxonomy/classification for tagging (or labelling) resources”, “Taking individual notes that I
revisit at later points in time” and “Making relevance judgments for digital documents in order to
highlight the most interesting resources and find them at a later date”.
The results from the post-usage questionnaire showed which Knowledge Maturing activities were
well supported by the Instantiation and where improvement would be needed. The Knowledge
Maturing activity that was the best supported by the Instantiation is: “Storing relevant results in
collections”. This activity is related to the following Knowledge Maturing Indicators (GMIs): ID I.3.4:
An artefact became part of a collection of similar artefacts, ID I.3.6: An artefact is referred to by
another artefact and I.3.10: An artefact was changed (see Appendix 12.6.1.1).
The Instantiation needs to be improved in order to provide better support to the following Knowledge
Maturing activities: “Searching for colleagues to ask for help”, “Creating a common
taxonomy/classification for tagging (or labelling) resources”. These activities are related to the
following Knowledge Maturing Indicators (GMIs): ID I.3.3: An artefact was selected from a range of
artefacts, ID I.3.4: An artefact became part of a collection of similar artefacts, ID I.3.6: An artefact is
referred to by another artefact, I.3.9 An artefact has been used by an individual and I.D. II.1.5 An
individual has significant professional experience.
How do the users use the Connexions Kent Instantiation? What do they appreciate, what needs to
be improved?
In summary, users reported that they liked the concept of the system, but believed it to be complicated
compared to their current methods of collecting LMI. Users supported the concept of a system that
enabled colleagues to collect and share LMI. The detailed examination of usability issues presented in
Section 6.3.2.2 provides more insights to relations, reasons and effects of the experiences described
above. For a detailed discussion of usability issues see Section 6.5.2.
6.5.2
Usability considerations and implications for future development
How usable is the software?
Given the evaluation situation at Connexions Kent (described above), a qualitative approach was
chosen for the workshop, using a Focus Group interview method. This enabled detailed insights into
the drivers and blockers users recognised during system usage. These comprise issues regarding
external Indicators influencing the usage of the system, functionality, and perceived ease of use. The
overall perceived usability was rather positive, as only one person stated that the overall software was
not easy to use (see Table 6.6) and all found the Instantiation very useful (see Table 6.5). The results
might not be generalizable due to the small sample size. However, different aspects are discussed
below, along with individual statements from users.
The suitability for learning (ISO norm), or the subjectively expected effort of learning to use the
system, reflects that software should support the user in learning to use the system as quickly and
easily as possible. Two user comments highlight typical conflicts in HCI research.
“Unfortunately I find the programme has too many sequences, which are not my strength at
all! I also feel that you have to be a good computer user to get the most from the product. I am
often left feeling very confused, as I can’t see connections very easily from one section to the
next.”
“I really like the idea behind the process, but continue to struggle with the complexity.
Thinking about encouraging other colleagues, I don’t think I will ever be in a position to be
able to explain how to use it to someone else. New IT for an existing group (over 30) should
take account of prior knowledge and therefore less functionality that slowly develops would
be the way forward.”
With a focus on complexity, both of these users seemed to struggle with the freedom provided by the
widget-based approach to develop their own usage strategies. On the one hand, the literature states
that forced sequences of usage should be reduced in order to leave the process of fulfilling a task to
the users; the modular widget-based approach is a good means of achieving this. On the other hand,
this reduction increases complexity, and that is what the users quoted above were struggling with.
Based on this evaluation group, more guidance by the software and during the system’s introduction
might have helped here. This is strongly related to another issue, the “conformity with user
expectations” (ISO norm), or the subjectively expected complexity of using the system. The latter
comment above suggests that the software should be introduced taking into account users’ specific
computer literacy and thus their expectations of using the system. The software’s interface approach
does not seem to take into account the different mental user models (as the ISO norm suggests).
However, complexity, and thus user expectations, are influenced in different ways here. The second
statement above refers to the amount of functionality that seems new to the user. The following one
refers more to the widgets themselves, as the way of using them is (obviously) new:
“I still need to familiarise myself with some of the widgets, but overall it is working a lot
better than before.”
Here, two or perhaps three things collide. First, the system does not look and behave as users would
expect, which is understandable as the overall interface concept is new. Second, however, the
software seems to have some weaknesses regarding the ease and suitability of learning the system.
Third, some users seemed to struggle with the purpose and concepts behind the given functionality,
e.g. the tagging idea:
“I don’t understand how tags will be organised and ‘tidied up’ – if there are too many
headings/tags people will not want to use the software. We haven’t really covered this in our
sessions.”
Consequently, such issues need to be addressed in future development. The interface should be
improved, taking into account what users might already know. For example, one could give it a
‘Windows look & feel’, with typical menus and buttons, labelled and located where users expect
them. Individual statements mention improvements that would be really helpful, e.g.
“Clearer language from some of the tools used in the demonstrator is needed, i.e. tag editor is
unclear.”
“Copy and paste facility within collections, so that you can copy items from one collection to
another.”
“Facility to hover over an [item] in a collection and see its full name or URL.”
“Clickable tags would be nice.”
All in all, different aspects of the software are not perfectly adapted to the end-user group. However,
Table 6.5 and Table 6.6 clearly show that the idea, and to a high degree its implementation, were
perceived very positively, which allows us to deduce that the overall perceived usability was also very
positive. This
includes the main software (the Sidebar), but also most of the widgets themselves. The messaging
server is important for improving the overall flow of usage, as it allows a kind of interaction between
the individual widgets. The embedded Firefox plugin lowers the entry barrier to using the system, as
it builds on software – a browser – that people already know well.
6.5.3
Reflections and Limitations
There were a number of technical issues in all workshops where the software was presented to the
user group (e.g. logging on issues, MatureFox not working for several users, missing/disappearing
collections). To some extent, these were also due to unexpected situational constraints such as
unexpected software configurations on different laptops of the users (e.g. firewall settings), and the
poor internet access at Connexions Kent (which was overcome in the end using dongles for mobile
internet connections). A web-based version of the tool would have avoided some of these problems.
While the software was working quite well on the users’ computers by the time the evaluation
started, it had not yet reached the status of a software “product”. This led to some frustration among
the users, as they had had high expectations of the prototype. Such expectations need to be managed
better from the start of software development in the course of a research-oriented EU project.
In the course of the agile, user-centred approach of software design and development, requirements
of the users naturally changed, e.g. additional features were requested, and re-design of some other
features was desired. This natural (and foreseen) change in user requirements led to a prioritisation of
features by the development team in the process of the necessary re-factoring of components in Year
3/4 of the project. Due to time constraints, the final version did not contain all the functionality that
was available in some earlier versions (e.g. the discussion widget was not enabled for the evaluation).
Some of the requirements were also not fulfilled. Some users felt that they had to relearn the system
every time they returned to it. It was also suggested that the end goal was never clear, so there was
little understanding of why something was being done. Such decisions about missing, changed or
unfulfilled functionality needed to be made transparent to the users and explained to them.
For some of the requirements that turned out to be essential for users, there were misunderstandings
between the users and the development team. For example, the users expected PowerPoint previews
of *.ppt files in the widgets, or wanted to export the whole content of a collection as a *.pdf (as
opposed to a list of links), the latter of which is technically not reliably possible. Requirements must
be defined in even more detail and in concrete terms.
Finally, the software was radically different to most of the users’ previous experiences with using
software (mainly MS Office, web search). For example, detailed explanation was needed for them to
understand the concept of “tagging” and its use. However, by the end of the project “tagging” became
part of their normal practice in other elements of their work. For instance, resources loaded onto the
organisational website were given multiple tags to enable searches. Interacting with different
“widgets” at the same time was also a new paradigm. These barriers were underestimated by the
development team. In addition, some parts of the MATURE software (e.g. the Search Widget) came
with usability issues that made the tool even more difficult to use. More training during the design
process could have helped to avoid overloading users.
6.6
Conclusions
A number of conclusions can be drawn on the overall design process, the question on whether the
software was found useful for carrying out Knowledge Maturing activities (question 1), how the users
were using the software (question 2) and the usability of the software (question 3).
Overall, the participatory design approach has been well accepted by the end-users and has
brought about several positive outcomes. First, through the involvement in the design of the
Instantiation the end-users have become more and more familiar with concepts underlying the
software and have started to integrate initially unaccustomed functionalities in their work-context.
Second, certain disadvantageous aspects of the software could be identified and improved
during the evaluation process. Third, continuously observed issues concerning functional
requirements and usability problems have converged into principles guiding future work.
It is recognised that, as a developmental process, this was successful in creating usable software and a
space in which users could contribute to and guide the design. However, the software was not robust
enough to be used in practice and be embedded into work processes where widespread testing could
take place. This was the result of a combination of software and hardware problems, which resulted in
some of the users from the user group being unable (and in some cases, unwilling) to continually use
the system.
Despite some technical barriers, the concept of the system and several widgets were
enthusiastically embraced. Above all, the users were excited about the possibility to share their
resources and collections with an online tool, and to create and subscribe to collections of their
colleagues. They also came to value the tagging functionality as an easy way to bookmark their
resources. From the findings of the questionnaire study, completed by four users, it can be seen that
the Connexions Kent Instantiation already supports some of the most typical Knowledge Maturing
activities.
Also, findings from the Focus Group interview evidence how Knowledge Maturing, more generally,
has taken place in the organisation; for instance, users’ knowledge and technical skills have improved.
The evaluation has shown that the Instantiation has been successful in supporting Knowledge
Maturing activities in the different Knowledge Maturing phases. The software has enabled users to
learn more about LMI both nationally and locally. Users were able to collectively collate resources
and documents, and share with geographically dispersed colleagues. Knowledge flow between those
using the software has increased. Following the Knowledge Maturing process, the software initiated
the expressing ideas phase, as users began to talk about their interests and gaps in their knowledge. By
using the software, information was sought, identified, collated and shared with their colleagues. At
the end of the evaluation period, users reported that they had started to create PowerPoint
presentations together, which were becoming part of practice (i.e. formalised). That is, users were
using the Connexions Kent Instantiation through the first three phases of Knowledge Maturing. Given
more time, these resources may have become well established, updated through formalised
interactions and part of training (these correspond to later Knowledge Maturing phases).
With regard to the usability of the software as perceived by the users at Connexions Kent, it can be
said that the evaluated software has some weaknesses regarding usability in practice, concerning
principles of locality, language, and possibilities for individual adaptation (e.g. colour). The brief
analysis above, with respect to standardized methods, also revealed strengths in using the
system (e.g. flexibility, adaptation to work processes, fewer forced sequences).
As a general conclusion, it can be said that with regard to research question 1 (support of Knowledge
Maturing activities), the Connexions Kent Instantiation as used for the evaluation at Kent supports
some of the Knowledge Maturing activities well. This evaluation report therefore provides some
details on how to develop such software in the future. With regard to research question 2 (How do
users use the software?) we found that they use the software for the first three Knowledge Maturing
phases – generating ideas, sharing ideas, and formalising ideas. Regarding usability (research question
3) we have also achieved a satisfying result, keeping in mind that research interests are confronted
with specific domain and context-based interests, needs and abilities, and that the paradigm of the
software (tagging, widgets…) was totally different to other tools the users were typically working
with.
7
Summative Evaluation at Structuralia (Study 4)
7.1
Introduction
Structuralia is a leading e-learning Postgraduate and Executive Engineering School that invests in
developing specialized online training programs for the Construction, Infrastructure, Energy and
Engineering sectors. Structuralia offers 16 Master’s degree programs and 11 Executive Programs
including an MBA for engineering professionals.
In the Structuralia Evaluation, a similar tool was used as in the Connexions Kent evaluation. The
following widgets are part of the Structuralia Instantiation that was tested in the formative evaluation:
• A Collection Widget allows users to ‘collect’ information both from their desktop and
from the web in so-called collections that can be shared with other users, and to subscribe
to collections of other users. From the Collection Widget, the users can start the Tagging
Widget to annotate elements in the collection with freely chosen keywords (tags).
• In the Tagging Widget, users can add private and public keywords (tags) to resources in
order to find these resources more easily.
• A Tag Cloud Widget provides an overview of all tags that have been used in the system
by all users. From there, a search can be triggered that returns all resources that have been
tagged with the selected tag.
• In the Search Widget, the user can insert a keyword (search term) either manually or by
selecting from the list of tags displayed in a sidebar of the Search Widget. These
keywords in the sidebar can be organised into hierarchies using the Gardening Widget
(Taxonomy Editor). In addition to user-generated tags, the Search Widget also displays
automatically generated tags, i.e. tags that refer to the ‘maturity’ of resources and that are
automatically computed based on the list of Knowledge Maturing Indicators/Events (e.g.,
‘has not been added to a collection’, ‘has been tagged by many users’ etc.). These
automatically generated tags can be used to further refine the search. The search returns
internal resources that have been tagged with the selected keyword. From the Search
Widget, these resources can be added to a collection in the Collection Widget, or they can
be annotated with new tags in the Tagging Widget. Clicking on a search result opens it in
the web browser. Besides searching for internal resources, the Search Widget can be used
to search for external (publicly available) resources on the web through Yahoo! Answers.
• In the Gardening Widget (Taxonomy Editor), existing tags can be organised into
hierarchies, to ‘tidy up’ the user-generated tags.
• In the Discussion Widget, users can start discussions about diverse elements which they
have collected, such as documents, weblinks, etc. Discussions can also be started for
Collections.
In addition to the widgets, the Structuralia Instantiation includes the MatureFox, a Firefox plugin that
allows users to tag and rate websites as they browse the internet, thereby adding these resources to the
collective knowledge base (Social Semantic Server). Resources tagged with the MatureFox can be
accessed from the other widgets by either searching for tags in the Search Widget, or by clicking on
tags that the resource has been tagged with in the Tag Cloud Widget.
The Structuralia Instantiation differs from the Connexions Kent Instantiation in that there was no
Discussion Widget at Kent and no Gardening Widget (Taxonomy Editor) at Structuralia. Apart from
that, the widgets were the same in both Instantiations (see also Figure 6.1 in Section 6 for a screenshot of
the Connexions Kent Instantiation).
The situation at Structuralia was different from Connexions Kent with regard to several aspects:
• Users from Connexions Kent had actively and intensively participated in the process of designing the tool; therefore, the tool was tailored to their needs. At Structuralia, the tool was applied for a purpose (an online course) that was slightly different from the situation the tool was designed for (collaborative knowledge creation at work).
• At Connexions Kent, for several reasons, we had a low number of users who could use the tool productively, who participated in in-depth qualitative interviews and provided a lot of insights. At Structuralia, we managed to attract a rather large sample that filled in quantitative questionnaires based on Knowledge Maturing Indicators. In addition, log data was collected to analyse the actual usage of the system.
Therefore, the evaluation studies at Structuralia and Kent can be seen as complementary.
The purpose of the evaluation of the Structuralia Instantiation was to validate the MATURE software
Instantiation during the process of learning, by including it as an additional resource in an online
course. We wanted to study the effect of the Instantiation on the learning experience of students in a
blended learning section of a Structuralia course.
Students at Structuralia have access to multimedia material, documents, mail options, exams and
discussions through the Structuralia learning platform. A special Instantiation of the learning platform
was set up to avoid the duplication of functionalities and to encourage the use of all the functionalities
provided by the Instantiation.
In a traditional learning setting, knowledge flows only from the teacher to the students, with the
teacher providing the study resources and solving arising issues for the students. When using the
Instantiation, students can share resources among themselves using the Collection Widget and
discuss issues using the Discussion Widget, so we wanted to find out if knowledge can “mature”
as a result of this interaction.
Before designing the Summative Evaluation for the Connexions Kent and Structuralia Instantiation, a
mapping was carried out for Maturing Indicators on the basis of which the Instantiation had been
designed.
Starting from the GMI–SMI mapping provided in D2.3/D3.3 for the Structuralia/Connexions Kent
Instantiation, the first step was to reduce the set of SMIs drastically by checking for redundancy,
missing functionality in the Instantiation, and missing plausibility. Afterwards, we clustered the SMIs
according to GMIs and gave the reasoning for this mapping, though in many cases it is quite
self-explanatory. For clarity: there are very specific SMIs such as “A new resource has been added
to a shared collection.”; the counterparts “… existing resource …” or “… private collection.” simply
did not exist in the first list of transition Indicators provided in D2.3/D3.3 and, for the sake of
correctness, the list was not extended here.
The mapping of GMIs/SMIs to the Structuralia/Connexions Kent Instantiation showed that for all
Knowledge Maturing Indicators, the level of justification was strong (e.g. based on GMI developed
via ethnographic study). Therefore, we decided to use GMIs / SMIs to evaluate the Instantiation’s
effect on Knowledge Maturing (Top Level Evaluation Goal 2).
The Summative Evaluation at Structuralia focused on three different research questions to be
answered:
1. How do people use the Instantiation?
2. Does the Instantiation support Knowledge Maturing activities related to Knowledge
Maturing Indicators (GMIs)?
3. How easy is the Instantiation to use?
In order to answer these research questions, a multi-method design was chosen. First, the usage of the
Instantiation was analysed through a log-data analysis. Moreover, quantitative ratings from a
Knowledge Maturing questionnaire were intended to provide insight into the usefulness of the most
relevant Knowledge Maturing functionalities to be supported by the Instantiation. Items in the
questionnaire related to Knowledge Maturing were shaped by the assignment of Knowledge Maturing
Indicators to the Instantiations (Deliverable D2.3/3.3).
General Knowledge Maturing Indicators used for this study are listed below:
• I.2.3.3: An artefact has been the subject of many discussions
• I.2.3.8: An artefact was changed in type
• I.3.10: An artefact was changed
• I.3.3: An artefact was selected from a range of artefacts
• I.3.4: An artefact became part of a collection of similar artefacts
• I.3.6: An artefact is referred to by another artefact
• I.3.9: An artefact has been used by an individual
• I.4.3: An artefact has become part of a guideline or has become standard
• I.4.6: An artefact has been assessed by an individual
• II.1.3: An individual has contributed to a discussion
• II.1.5: An individual has significant professional experience
• II.3.1: An individual has a central role within a social network.
(For a complete mapping of SMIs for this Instantiation to general GMIs, and to questions in the
questionnaire see Appendix 12.7).
In order to investigate the usability of the software (Research Question 3), a standardised usability
questionnaire (System Usability Scale, SUS) was used.
7.2 Evaluation Description
It was decided to run the evaluation of the Instantiation through its use in a CYPECAD course at
Structuralia. The course is an online training course of 40 hours’ duration, with just one onsite
session.
CYPECAD is a software tool used to design structures more efficiently and is widespread on the
Spanish market. This popular software allows the estimation and dimensioning of reinforced concrete
under horizontal and vertical pressure. The objective of the CYPECAD course was to allow the
student to look at all aspects of the use of reinforced concrete, starting with the calculation and
introduction of data, followed by a revision of the results and the data listing.
7.2.1 Evaluation Timeline
The course that was selected for the evaluation started on the 16th of September with the onsite
training. Before the session, students were provided with an installation manual to guide them through
the installation of the software onto their computers.
The course was planned to last 6 weeks, ending on the 31st of October; however, taking into account
the demands placed on the students (they needed to study eleven chapters with an exam at the end of
each chapter), Structuralia provided them with ten extra days to allow all of them to complete the
course. So, finally, the evaluation lasted from the 16th of September to the 10th of November.
7.2.2 Sample
The CYPECAD course was offered free of charge on the company webpage, requesting interested
students to provide their contact information to receive further information. Students were also
recruited through an email sent to a set of students who had completed previous courses at Structuralia,
courses that were targeted at people who were unemployed.
Students were requested to provide their curriculum vitae where they had to demonstrate either formal
training on structure calculation or relevant work experience.
The selection criterion was based on having previous knowledge of structure estimation and being able
to attend the onsite training session. This knowledge implies that they were either architects or
engineers.
We allowed for the enrollment of 75 students, as a larger number would have been too hard to handle
for the teachers.
The onsite training was ultimately attended by 66 students, divided into two groups, 32 in the first
group and 34 in the second group.
Another four students, who were ultimately not able to attend the onsite training, were enrolled and
received an email with the main instructions for following the course.
Once we received the personal details and curricula (fulfilling the requirements), the students were
accepted on the course in chronological order of arrival. We enrolled 75 students, expecting that not
all of them would complete the course. In the end, 55 students finished the course.
7.2.3 Face-to-face training
The course is normally provided 100% online, but we had planned for the possibility of addressing
issues that could arise due to the variety of software that students would need to use to complete the
course. They had to access the Structuralia learning platform (to access the multimedia material, to
use the internal course mail and to sit the exams) using Internet Explorer as the chosen browser. They
also needed Firefox, to use the MatureFox functionalities, as well as the standalone application
CYPECAD and the Instantiation.
Another reason for the onsite session was to form a more collaborative group, as even if most of them
had studied on previous courses, not all came from the same one, so they did not know each other. The
onsite session also allowed us to present the MATURE project to the students and to explain the
participation we were expecting from them on the Instantiation evaluation.
Before the onsite session, students were provided with an Instantiation Installation Manual to help
them set it up on their own computers. This manual covers the three most popular operating systems:
Windows XP, Windows Vista and Windows 7.
During the onsite session, students were provided with a printed Instantiation User Manual, which
was also made available in electronic form within the Instantiation. The user manual contains a chapter
on how we expected them to use the Instantiation: by adding new documents, links and collections,
by evaluating and tagging the available resources, etc.
We had two groups for the onsite training:
• Group 1: Sep. 16, 2011, from 16:00 to 17:30 h
• Group 2: Sep. 16, 2011, from 18:00 to 19:30 h
Figure 7.1: Presentation of the MATURE Software to the users

Agenda for the on-site training:
• Welcome: Structuralia presentation, MATURE presentation and team presentation
• Short introduction to CYPECAD (CYPECAD teacher)
• Presentation of the Structuralia virtual learning platform
• Presentation of the complementary MATURE software (Instantiation)

7.3 Evaluation methods
7.3.1 Questionnaires
In order to investigate the perceived support of Knowledge Maturing activities by the Instantiation
(Research Question 2), and the perceived usability (Research Question 3), three post-usage
questionnaires were designed. The questionnaires were provided to the students online, through a link
from the Structuralia learning platform (Figure 7.2), once they had finished all the exams of the
course.
Figure 7.2: Questionnaire in Structuralia Learning Platform
The total number of questions was 102, presented to the students in three separate questionnaires. The
first one covered the Knowledge Maturing Indicators with the other two covering the usability
questions.
The Structuralia Knowledge Maturing Questionnaire covers the two following issues:
• Part I: Current practices of knowledge creation and sharing (Appendix: Transition Indicators
Maturing Questionnaire 12.7.2.1)
• Part II: Perceived need for improvement (Appendix: Transition Indicators Maturing
Questionnaire 12.7.2.1)
The usability questionnaires cover different widgets:
• Usability Questionnaire I contains questions referring to the Overall System, the Collection
Widget, the Search Widget and the Tag Editor Widget. (Appendix: Usability Questionnaire
I, 12.7.2.2)
• Usability Questionnaire II contains questions referring to the Tag Cloud Widget, the
Tagging Widget, the Discussions Widget and the MatureFox Firefox Plugin. (Appendix:
Usability Questionnaire II, 12.7.2.3)
All questions were translated and posed to the participants in the Spanish language. Most questions
were multiple-choice where the respondent was presented with 5 possible answers in the case of the
Knowledge Maturing questionnaire, or with 6 possible answers in the usability questionnaires.
The questionnaires were given to the respondents at the end of the evaluation. As the Instantiation was
also used at Kent, we adapted the questions to the way the Instantiation was used in the Structuralia
case.
7.3.1.1 Structuralia Knowledge Maturing Questionnaire
The Structuralia Knowledge Maturing Questionnaire consisted of two parts. Part I gave us an
overview of current practices of knowledge creation and knowledge sharing in the respondent group.
The respondents evaluated how typical each of ten Knowledge Maturing activities was for their own
work.
Examples of questionnaire items (translated from Spanish):
“Searching on the internet for relevant information”
“Storing relevant results in collections on my desktop or laptop”
The ratings were given on a five-point scale with the following options: very typical – typical – rather
untypical – untypical – do not reply. This part of the questionnaire was intended to give us a better
understanding of typical Knowledge Maturing activities in our sample. For a mapping of the questions
in the questionnaire to Knowledge Maturing Indicators, see Appendix 12.7.1.
Part II of the Structuralia Knowledge Maturing Questionnaire consisted of ten items, covering
ten Knowledge Maturing activities (the same as in the first part of the questionnaire). This
part addressed the perceived need for improvement of the Instantiation in supporting the Knowledge
Maturing activities (i.e. to find out whether the Instantiation supports the activities well or whether
improvements are needed). For a mapping of the questions in the questionnaire to Knowledge
Maturing Indicators, see Appendix 12.7.1.
Examples of questionnaire items:
“Searching on the internet for relevant information”
“Storing relevant results in collections on my desktop or laptop”
The items were rated on a five-point scale with the following options: Do not reply – Works well –
Needs some improvement – Needs a lot of improvement – Not crucial for my work.
7.3.1.2 Usability Questionnaires
Usability questions were asked in 2 separate questionnaires, each covering different widgets.
The System Usability Scale (SUS) was used, a simple ten-item scale which gives a global view of
subjective assessments of usability. The statements in the scale cover a variety of aspects of system
usability, such as the need for support, training, and complexity, and thus have a high level of face
validity for measuring the usability of a system.
SUS is a Likert scale: a statement is made and the respondent indicates the degree of agreement or
disagreement with the statement on a 5-point scale, ranging from “strongly agree” to “strongly
disagree”.
For the questions asked see Appendix 12.7.2.2 and 12.7.2.3.
7.3.2 Log data
Log data was collected in order to analyse the users’ interaction with the Instantiation. The analysis of
log data allows us to track what activities were performed, how often, by whom etc.
In total, 28 different event types were registered. For the list of all events, event types and a brief
explanation see Table 7.1.
7.3.3 Teacher’s views
At the end of the evaluation period, the teacher of the course also provided feedback on the usefulness
of the Instantiation. The teacher at Structuralia observed the development of the course from the start.
The teacher’s view should provide some further insight into how satisfied the students were, what
problems occurred during the course, and how the students handled them. The teacher’s view is
valuable input because, in addition to the questionnaire, it provides another perspective on the
Knowledge Maturing processes that took place at Structuralia.
7.3.4 Issues which arose during the evaluation
We encountered some technical issues in the course of the evaluation; some could be overcome, but
others persisted throughout the course.
We encountered technical problems while presenting the functionality to the second group of students
in the onsite session; some functionality was unavailable at the time, disrupting a presentation that was
intended to facilitate the use of the tool.
After experiencing disruptions to the availability of the Instantiation over various mornings, it was
discovered that the server on which some widgets were hosted was performing a daily backup that
interfered with the availability of the tool. Once the issue was detected, it was resolved.
Some students found it hard to install the software despite the availability of the user manual.
The different issues that arose led to the students generating 254 questions through our learning
platform, with an overall number of 664 mails generated. We have to consider that these numbers
also include questions related to the content of the course, the administrative procedures and the
Structuralia learning platform; in other words, the questions were not related only to the software.
Most issues reported by the students can be classified as:
• Problems installing the Instantiation
  o All students were either engineers or architects and we provided them with an installation manual, but some still had trouble installing the Instantiation.
    ▪ In some cases this was because they did not follow the manual or had a very poor internet connection; in a few cases, they were not used to administering their own PC.
    ▪ We provided help by mail and phone, and in some cases installed the Instantiation remotely using TeamViewer.
• Connection errors with MatureFox
  o Some of the reported issues were due to users not entering their correct username/password or having a poor internet connection, and were therefore not really related to the Instantiation itself.
• Connection errors with the MATURE Widgets
  o The MATURE Widgets, especially at the start of the evaluation, were unavailable for some periods of time, or so slow when switching from one option to another that they were hard to use.
We also received comments referring to functionality, caused by a lack of knowledge of the widgets, as well as suggestions for improvements.
• Discussion Widget
  o During the evaluation, apart from the initial technical problems, comments were not always displayed in chronological order. This was found confusing and was resolved by the technical team.
• Compound tags
  o Tags consisting of more than one word were not working; instead, they were treated as two separate tags. As this restriction was neither stated before the start of the evaluation nor solved during the evaluation, we do not consider it a limitation but an important issue. It was reported recurrently by students, causing them to question their own understanding of the way to use the tool and reducing their trust in the tool. This confusion could have been avoided by stating up front the tool’s restriction to one-word tags.
• Tag Editor Widget
  o Not straightforward to use, as it requires logging in again.
  o Tags are displayed in no clear order.
  o Impossible to delete a tag.
  o Lack of common agreement about the use of capital letters, plurals and acronyms; difficulty with some functions such as the search, tag editor and discussions.

7.4 Findings
7.4.1 How did the participants use the Instantiation (from the log data)?
The analysis of log data allows us to draw conclusions on the users’ behaviour within the Instantiation.
With the log data we can track what activities were performed, how often, by whom, etc.
A total of 80 users used the Instantiation from the 16th of September to the 10th of November. They
produced in total 14,265 events of 28 various event types, an average of 178 events per user.
As can be seen from Table 7.1, the most frequent event types were: “view entity”, “appears in search
result” and “user login”. They covered 80% of the users’ activity within the Instantiation. Other event
types, their explanation and frequency can be found in Table 7.1.
Remark: the number of “User Login” events was reduced under the assumption that a user logs in at
most once within 3 minutes; where we found more than one “user login” event for a user within 3
minutes, the redundant events were removed.
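The deduplication rule described above can be sketched as follows. This is a minimal illustration only; the event-record format (timestamp, user, event type) and the function name are our own assumptions, not the actual MATURE log schema:

```python
from datetime import timedelta

def dedupe_logins(events, window_minutes=3):
    """Keep only the first "user login" event per user within any
    `window_minutes` interval; all other event types pass through.
    `events` is an iterable of (timestamp, user, event_type) tuples,
    assumed to be sorted by timestamp."""
    window = timedelta(minutes=window_minutes)
    last_login = {}  # user -> timestamp of the last login we kept
    kept = []
    for ts, user, etype in events:
        if etype == "user login":
            prev = last_login.get(user)
            if prev is not None and ts - prev < window:
                continue  # redundant login within the window: drop it
            last_login[user] = ts
        kept.append((ts, user, etype))
    return kept
```

The choice to compare against the last *kept* login (rather than the last seen one) means that a burst of logins collapses to one event per 3-minute window, which matches the stated assumption.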
Table 7.1: Frequency of Log Data Events

| Event Type | Explanation | Frequency | Percentage |
| --- | --- | --- | --- |
| View Entity | A resource got viewed / opened | 4709 | 33,0 |
| Appears In Search Result | A resource appeared in a search result | 4533 | 31,8 |
| User Login | A user logged in | 2214 | 15,5 |
| Search From Tag Cloud | A search with a selected tag got started from the tag cloud | 727 | 5,1 |
| Subscribe Collection | A collection got subscribed | 493 | 3,5 |
| Search With Keyword In Collections | A search got performed in collections’ names | 425 | 3,0 |
| View Entity Out Of Search Result | A resource got viewed / opened from a search result | 277 | 1,9 |
| Export Collection Item | The URL of a collection item got exported to a PDF document | 234 | 1,6 |
| Rate Entity | A resource got rated | 144 | 1,0 |
| Search With Keyword In Collection Items | A search got performed in collection items’ names | 125 | 0,9 |
| Add Shared Tag | A shared tag got added to a resource | 90 | 0,6 |
| Unsubscribe Collection | A collection got unsubscribed | 59 | 0,4 |
| Structure Shared Collection Content | A shared collection got structured | 50 | 0,4 |
| Start Discussion | A discussion on a resource got started | 45 | 0,3 |
| Add Private Collection Item | An item got added to a private collection | 19 | 0,1 |
| Remove Private Collection Item | An item got removed from a private collection | 19 | 0,1 |
| Add Resource To Collection From Search Results | A resource got added to a collection from a search result | 18 | 0,1 |
| Create Private Collection | A private collection got created | 18 | 0,1 |
| Create Shared Collection | A shared collection got created | 12 | 0,1 |
| Remove Shared Tag | A shared tag got removed from a resource | 10 | 0,1 |
| Rename Shared Collection Item | An item of a shared collection got renamed | 10 | 0,1 |
| Add Shared Collection Item | An item got added to a shared collection | 8 | 0,1 |
| Remove Private Collection | A private collection got removed | 8 | 0,1 |
| Tag Resource From Search Results | A resource got tagged from a search result | 7 | 0,0 |
| Remove Shared Collection Item | An item got removed from a shared collection | 6 | 0,0 |
| Add Private Tag | A private tag got added to a resource | 3 | 0,0 |
| Rename Private Collection | A private collection got renamed | 1 | 0,0 |
| Rename Shared Collection | A shared collection got renamed | 1 | 0,0 |
| Total | | 14265 | 100 |
Figure 7.3 shows how the users’ activity within the Instantiation developed over time.
The first two weeks (Sept 15–30, 2011) yielded the highest activity, with a total of 9,658 events, which
constitutes 68% of the total number of events. During that period, users were most occupied with three
event types: “Appears in Search Result” (orange-coloured), “View Entity” (apricot-coloured)
and “User Login” (violet-coloured).
In the other three time periods, the event type with the highest frequency was “View Entity”.
As regards the users’ activity, 35 of the 80 users had used the Instantiation above average (more than
178 events), while the others were below average. The 15 most active users accounted for almost 50%
of the total activity.
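The per-user figures reported here (average events per user, number of above-average users, share of the most active users) can be derived from the raw log with a simple aggregation. The sketch below is a minimal illustration; the flat (user, event_type) record format and the function name are assumptions rather than the actual MATURE log format:

```python
from collections import Counter

def activity_summary(events, top_n=15):
    """Summarise per-user activity from (user, event_type) pairs:
    returns the average number of events per user, the number of
    users with above-average activity, and the fraction of total
    activity contributed by the `top_n` most active users."""
    per_user = Counter(user for user, _ in events)
    total = sum(per_user.values())
    avg = total / len(per_user)
    above_avg = sum(1 for n in per_user.values() if n > avg)
    top_share = sum(n for _, n in per_user.most_common(top_n)) / total
    return avg, above_avg, top_share
```

Run over the full event log with `top_n=15`, this kind of aggregation yields exactly the three statistics quoted in the paragraph above.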
Figure 7.3: Activity Overview over time (Sept 15 – Nov 14, 2011)
7.4.2 Questionnaire data
75 students were enrolled on the course while only 66 of them attended the onsite training and 55
finished the course. The questionnaires and various questions were not always answered by the same
number of respondents.
The demographic data questions represented in Table 7.2 were included in Usability Questionnaire II
(Appendix 12.7.2.3) and answered by 40 respondents. 75% of them were male and 25% female. Most
were between 30 and 39 years old.
Table 7.2: Demographic Data

|       |        | Frequency | Percentage |
| ---   | ---    | ---       | ---        |
| Sex   | Male   | 30        | 75         |
|       | Female | 10        | 25         |
| Age   | <30    | 13        | 32,5       |
|       | 30–39  | 23        | 57,5       |
|       | 40–49  | 3         | 7,5        |
|       | 50–59  | 1         | 2,5        |
| Total |        | 40        | 100        |
Maturing Questionnaire
A total of 46 respondents completed all the Maturing Questionnaires. Users were asked to assess
whether given Knowledge Maturing activities were untypical, rather untypical, rather typical or typical
for their work.
The most typical Knowledge Maturing activities are summarised in Table 7.3. The statistical measures
in Table 7.3 are calculated without taking into account the answer “Do not reply”.
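The per-item mean and standard deviation can be reproduced directly from the frequency counts in the table. The following is a minimal sketch (the function name and the counts-as-a-dict representation are our own, not part of the project's analysis tooling):

```python
from math import sqrt

def item_stats(counts):
    """Mean and sample standard deviation for one questionnaire item.
    `counts` maps a rating value (1-4) to the number of respondents
    who chose it; "Do not reply" answers are simply left out."""
    n = sum(counts.values())
    mean = sum(value * k for value, k in counts.items()) / n
    var = sum(k * (value - mean) ** 2 for value, k in counts.items()) / (n - 1)
    return mean, sqrt(var)
```

Applied to the first row of Table 7.3 (“Searching on the internet for relevant information”: 2 untypical, 14 rather untypical, 19 typical, 11 very typical), this yields M ≈ 2.85 and s ≈ 0.842, matching the reported values.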
30 out of 46 respondents stated that “Searching on the internet for relevant information” is something
they do typically or rather typically.
For 15 respondents, “Discussing with my colleagues about relevant resources” is an activity which is
rather typical or typical for their own work.
14 respondents stated that “Maintaining private collections and continuously adding materials” is
typical or rather typical for their work. 12 respondents stated the same for “Sharing private digital
collections with colleagues”.
Activities which are the least typical (untypical or rather untypical) for respondents’ work are:
“Creating a common taxonomy/classification for tagging (or labelling) resources” (39 out of 46
participants), “Maintaining common digital collections of information and materials with colleagues"
(36 out of 46 respondents), “Making relevance judgements for digital documents in order to highlight
the most interesting resources and find them at a later date” (35 out of 45 participants) and “Adding
keywords or tags to my digital resources in order to find them at a later date” (35 out of 46
participants).
In the open question, 4 respondents commented that using the Instantiation was an interesting and
positive experience for them. Three respondents complained about not being accustomed to using the
system and saw it as something that caused them problems; a lack of time to get familiar with the
system was also important for these respondents. Further mentioned were problems regarding the
functionality of the tool, poor access to the tool, difficulties in keeping up with new information, the
effort needed to use the system, and dependency on others when using the system.
Table 7.3: Part I. Current practices of knowledge creation and knowledge sharing
Please indicate for each of the following activities to which extent these are typical for your own work

| Item | N | Do not reply (–) | Untypical (1) | Rather untypical (2) | Typical (3) | Very typical (4) | M | s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Searching on the internet for relevant information | 46 | 0 | 2 | 14 | 19 | 11 | 2,85 | 0,842 |
| Storing relevant results in collections on my desktop or laptop | 46 | 2 | 19 | 15 | 8 | 2 | 1,84 | 0,888 |
| Making relevance judgements for digital documents in order to highlight the most interesting resources and find them at a later date | 46 | 2 | 21 | 14 | 5 | 4 | 1,82 | 0,971 |
| Maintaining private collections and continuously adding materials | 46 | 4 | 16 | 12 | 12 | 2 | 2 | 0,937 |
| Discussing with my colleagues about relevant resources | 46 | 1 | 12 | 18 | 11 | 4 | 2,16 | 0,928 |
| Sharing private digital collections with colleagues | 46 | 4 | 14 | 16 | 11 | 1 | 1,98 | 0,841 |
| Sharing my private notes with colleagues | 46 | 4 | 15 | 18 | 7 | 2 | 1,9 | 0,85 |
| Creating a common taxonomy/classification for tagging (or labelling) resources | 46 | 4 | 27 | 12 | 2 | 1 | 1,45 | 0,705 |
| Maintaining common digital collections of information and materials with colleagues | 46 | 3 | 21 | 15 | 6 | 1 | 1,7 | 0,803 |
| Adding keywords or tags to my digital resources in order to find them at a later date | 46 | 5 | 23 | 12 | 5 | 1 | 1,61 | 0,802 |
The answers concerning perceived need for improvement are represented in the Table 7.4. The
respondents evaluated how well Knowledge Maturing activities were supported with the Instantiation
and if the Instantiation needs to be improved in order to provide better support.
Statistical measures are calculated without taking into account the answers “Do not reply” and “Not
crucial for my work”.
28 out of 45 respondents assessed that the activity “Creating a common taxonomy/classification for
tagging (or labelling) resources” would need some or a lot of improvement.
25 respondents stated that “Making relevance judgements for digital documents in order to highlight
the most interesting resources and find them at a later date” needs to be improved.
Furthermore, 24 out of 45 of them also indicated that both “Discussing with my colleagues about
relevant resources” and “Adding keywords or tags to my digital resources in order to find them at a
later date” need some or a lot of improvement.
In contrast, almost half of respondents (22 out of 45) stated that “Storing relevant results in collections
on my desktop or laptop” and “Maintaining private collections and continuously adding materials”
work well and therefore no improvements are needed. Twenty persons were content with “Searching
on the internet for relevant information”. They assessed the activity as working well.
In the open question, respondents described the problems they had when using the system and
suggested improvements. Three respondents thought it necessary to show the date and time at which
comments are created in the Discussion Widget. It would also be helpful to show which comments
have not yet been read and where they can be found. Two respondents wished for a tool that functions
without any problems (mostly problems with accessing the tool). Furthermore, the following points
were stated by at least one respondent: the Tag Editor could be more intuitive; it would be helpful to
have a moderator who deletes unneeded or repeated comments and marks the discussions with the
most activity; the excessive creation of new folders; and being able to access resources tagged with a
tag which has more than one word.
Table 7.4: Part II. Answers to questions concerning perceived need for improvement
In the following, a couple of activities are described that are intended to be supported with the MATURE
demonstrator tool. Please indicate for each of these activities whether you think the demonstrator supports them
well or whether improvements are needed.

| Item | N | Do not reply (–) | Works well (1) | Needs some improvement (2) | Needs a lot of improvement (3) | Not crucial for my work (–) | M | s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Searching on the internet for relevant information | 45 | 2 | 20 | 15 | 8 | 0 | 1,72 | 0,766 |
| Storing relevant results in collections on my desktop or laptop | 45 | 4 | 22 | 13 | 5 | 1 | 1,58 | 0,712 |
| Making relevance judgements for digital documents in order to highlight the most interesting resources and find them at a later date | 45 | 6 | 13 | 24 | 1 | 1 | 1,68 | 0,525 |
| Maintaining private collections and continuously adding materials | 45 | 7 | 22 | 12 | 3 | 1 | 1,49 | 0,651 |
| Discussing with my colleagues about relevant resources | 45 | 4 | 14 | 15 | 9 | 3 | 1,87 | 0,777 |
| Sharing private digital collections with colleagues | 45 | 7 | 16 | 16 | 3 | 3 | 1,63 | 0,646 |
| Sharing my private notes with colleagues | 45 | 8 | 15 | 15 | 3 | 4 | 1,64 | 0,653 |
| Creating a common taxonomy/classification for tagging (or labelling) resources | 45 | 9 | 7 | 15 | 13 | 1 | 2,17 | 0,747 |
| Maintaining common digital collections of information and materials with colleagues | 45 | 8 | 12 | 17 | 6 | 2 | 1,83 | 0,707 |
| Adding keywords or tags to my digital resources in order to find them at a later date | 45 | 3 | 16 | 13 | 11 | 2 | 1,88 | 0,822 |
Usability Questionnaires
Usability questions were asked in two separate questionnaires, each covering different widgets. A total
of 38 respondents completed the questionnaires, which used the System Usability Scale (SUS) to
measure usability aspects.
SUS yields a single number representing a composite measure of the overall usability of the system
being studied. To calculate the SUS score, we first summed the score contributions from each item;
only cases with no missing values on any item were used in the computations. Each item's score
contribution can range from 0 to 4. For items 1, 3, 5, 7 and 9 the contribution is the scale position
minus 1; for items 2, 4, 6, 8 and 10 it is 5 minus the scale position.
The sum of the contributions is then multiplied by 2.5 to obtain the overall SUS value.
SUS scores have a range of 0 to 100.
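This standard SUS scoring rule can be sketched in a few lines of Python (an illustrative implementation; the function name and example responses are ours, not from the study):

```python
def sus_score(responses):
    """Compute the SUS score from ten item responses, each on a 1-5 scale.

    Odd-numbered items (1, 3, 5, 7, 9) contribute (response - 1);
    even-numbered items (2, 4, 6, 8, 10) contribute (5 - response).
    The summed contributions (0-40) are multiplied by 2.5, giving 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires answers to all ten items")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# A neutral respondent (3 on every item) scores exactly 50.
print(sus_score([3] * 10))  # 50.0
```

Because the even-numbered items are negatively worded, this reversal makes all contributions point in the same direction before summing.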
As can be seen from Table 7.5, the users were rather satisfied with the usability of the system and its
widgets (mean range = 45.6 to 66.9). The best-assessed was the Collection Widget (M = 66.9; Min = 15,
Max = 100), whose usability aspects were the most satisfying for the users. The Discussion Widget also
has an average score greater than 60, while the Overall System was evaluated with a somewhat lower
result (M = 55.1; Min = 13, Max = 93).
According to the users, the Tag-Editor Widget needs to be improved the most in order to better support
the users’ needs (M = 45.6; Min = 0, Max = 83).
Table 7.5: System Usability Scale (SUS) Scores

SUS Scores (sorted in descending order of mean)

| Widget | N | Min | Max | M | s |
| Collection Widget | 36 | 15 | 100 | 66.94 | 18.899 |
| Discussion Widget | 30 | 23 | 85 | 62.58 | 16.433 |
| MatureFox Firefox plugin | 30 | 23 | 95 | 56.25 | 16.863 |
| Overall System | 38 | 13 | 93 | 55.13 | 19.425 |
| Search Widget | 36 | 20 | 83 | 54.31 | 17.905 |
| Tag-Cloud Widget | 34 | 0 | 93 | 51.84 | 24.389 |
| Tagging Widget | 29 | 0 | 95 | 47.07 | 19.663 |
| Tag-Editor Widget | 37 | 0 | 83 | 45.61 | 19.476 |
In Table 7.6 the SUS scores are represented as percentile ranks. A percentile is the value of a
variable below which a certain percentage of observations fall; for example, the 20th percentile is the
value (or score) below which 20 percent of the observations may be found. In our study, the percentile
ranks show that the Collection Widget and the Discussion Widget received a better assessment
than the other widgets: 25% of the participants gave a SUS score above 77.5 (the 75th percentile
of all scores) for the Collection Widget, and above 73.1 for the Discussion Widget. According to
Table 7.6, the Tagging Widget and the Tag-Editor Widget show the most room for improvement
(75% of their scores fall below 58.8 and 61.3 respectively).
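Percentile ranks of this kind can be reproduced with standard tooling. A minimal Python sketch, using invented scores for illustration (not the study data); the exact values depend on the interpolation method, here numpy's default linear interpolation:

```python
import numpy as np

# Hypothetical SUS scores for one widget (NOT the study data)
scores = np.array([15, 40, 55, 58, 63, 70, 75, 78, 85, 100])

# 25th/50th/75th percentiles: the values below which roughly 25%, 50%
# and 75% of the observed scores fall
q25, q50, q75 = np.percentile(scores, [25, 50, 75])
print(q25, q50, q75)  # 55.75 66.5 77.25
```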
Table 7.6: System Usability Scale (SUS) Scores – percentile ranks

SUS Scores divided into percentile ranks

| Item | N | 25th percentile | 50th percentile | 75th percentile |
| Collection Widget | 36 | 58.1 | 70 | 77.5 |
| Discussion Widget | 30 | 52.5 | 67.5 | 73.1 |
| MatureFox Firefox plugin | 30 | 49.4 | 56.3 | 67.5 |
| Overall System | 38 | 40 | 57.5 | 70 |
| Search Widget | 36 | 40.6 | 55 | 69.4 |
| Tag-Cloud Widget | 34 | 39.4 | 50 | 71.3 |
| Tagging Widget | 29 | 35 | 50 | 58.8 |
| Tag-Editor Widget | 37 | 32.5 | 45 | 61.3 |
Besides answering the SUS questions, users gave their opinions in response to open questions
regarding the usability of the system’s widgets.
These are their suggestions for the Collection Widget: One user stated that it would be helpful to have
a moderator who organises the content, and to have more intuitive usage and access to the widget.
Three users commented that the Collection Widget was useful and easy to use.
The Search Widget has been evaluated as useless by two users. One of them stated that it was
difficult to use whereas the other expressed his/her dissatisfaction with the widget. Loading the
Widget was a problem for one user.
Four users could not use the Tag-Editor because they found it too complex and too hard to
understand. Two other users didn’t regard the Tag-Editor Widget as useful. One user complained that
loading the Widget was too slow.
The Tag-Cloud Widget was the most used widget for two users, and they found it intuitive and easy
to use. However, two other users stated it was too complicated to use. For one of them the loading of
the widget was taking too much time, and for the other the tools failed too often.
There were several negative remarks concerning the Tagging Widget: four users stated that they
found the widget too difficult to use because they didn’t understand it or didn’t find it intuitive, three
users couldn’t see the usefulness of the widget. Moreover, three users could not use it because of
problems accessing it, or because the widget was ‘freezing’.
Four users commented that the comments in the Discussion Widget should be better organised. The
date and time of each comment would also have been important for four participants. Three of them
stated they would like to see at a glance if there were new comments, and the topics of the comments.
One user wanted to have the possibility of deleting comments of no interest. One user had problems
with loading the widget.
The Firefox Plugin (MatureFox) needs a lot of improvement according to two users. It was also
mentioned that it could be more intuitive and shouldn’t freeze so often. One user suggested that a
control system for outdated content would be useful.
The most important suggestions for the Overall System are the following: three users suggested that
the complexity of the system should be better adjusted to the users’ needs. Furthermore, three users
could not clearly see the usefulness of the system. Two users emphasised that not all the widgets were
equally useful. More information about the overall system and how to use it would also be very
important for further usage of the system according to one user.
7.4.3 Teacher’s point of view
After the evaluation period, the teacher of the course was asked to provide some feedback on the
usefulness of the Instantiation.
The teacher reported that the students were more involved in the course than on previous occasions
when it had been run. She suggested that they were far more inquisitive and, as a result, appeared to
have worked harder on the course. The teacher believed that being able to interact with each other on
the forum had motivated them: not only were they helping each other, but it also appears to have
eased them into a mode of work that made it easier for them to study.
On other occasions when the course was run, the students just asked the teachers how to solve the final
project, but this time the students were far more interactive. Apart from the use of the MATURE tools
we have to consider other factors that might have influenced the change in their behaviour. On this
occasion there were more students than usual, so maybe that made them less shy and encouraged them
to ask more often, once they saw that the others were already doing it.
Overall, the use of the MATURE tools was positive, keeping in mind the downside that they did not
always work properly and sometimes they caused difficulties, as described in the issues section (see
Section 7.3.4). From the point of view of learning, it was also a positive experience.
7.5 Discussion of findings
As stated in the introduction, with this Summative Evaluation at Structuralia, we aimed to answer
three different research questions:
1. How do people use the Instantiation?
2. Does the Instantiation support Knowledge Maturing activities related to Knowledge Maturing
Indicators (GMIs)?
3. How usable is the Instantiation?
In the following, the outcomes will be discussed with regard to these three research questions.
7.5.1 How did people use the Instantiation?
With the Log Data, we registered how the users used the Instantiation during the course (from 16th
September to 10th November). It was found that one-third of the users used the Instantiation actively,
while the others produced a below-average number of events. The fifteen most active users accounted
for almost half of the total activity.
In the first two weeks the users’ activity was at its peak with two-thirds of the events produced in that
period.
In total there were 14,265 events of 28 different event types. However, only a few event types occur
with high frequency, meaning that the users mostly used the system in a similar way. More than
one-third of all events represent viewing/opening a resource; this was the activity performed most
often with the Instantiation, followed by searching (a resource appearing in a search result). This
suggests that the users mostly used the Instantiation for searching and viewing various resources,
thereby extending their knowledge. Additionally, in the first two weeks the event type “A resource
appeared in a search result” (searching for results) was more prominent, while “viewing/opening a
resource” appeared more often in the remaining time. This leaves open the question of how much
data/information was entered into the system.
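The event-type frequency analysis described above amounts to counting event types over the log. A minimal sketch, with invented log entries (the actual MATURE log schema is not shown in this report):

```python
from collections import Counter

# Hypothetical log: (user_id, event_type) pairs
log = [
    (1, "viewing/opening a resource"), (2, "viewing/opening a resource"),
    (1, "a resource appeared in a search result"),
    (3, "viewing/opening a resource"),
    (2, "a resource appeared in a search result"),
    (1, "a comment was added"),
]

# Count how often each event type occurs and report its share of all events
event_counts = Counter(event for _, event in log)
for event, count in event_counts.most_common():
    print(f"{event}: {count} ({count / len(log):.0%} of all events)")
```

Splitting the log into time windows (e.g. the first two weeks versus the rest) before counting yields the per-period comparison reported above.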
In summary, the users used the Instantiation most actively in the first two weeks, mostly for
viewing/opening resources and searching for results. Activity was lower thereafter, when even the
most active users (one-third of them) did not produce many new events. Moreover, the Instantiation
was used more for knowledge consumption than for active sharing of knowledge by the users.
7.5.2 How well did the Instantiation support Knowledge Maturing activities related to Knowledge Maturing Indicators (GMIs)?
In order to answer this second research question, we designed and conducted the Questionnaire
Study. The first part of the Knowledge Maturing questionnaire was intended to provide an overview
of current practices of knowledge creation and knowledge sharing in the participant group. The
second part gave us insight into the perceived need for improvement of the Instantiation in
supporting Knowledge Maturing activities.
The following Knowledge Maturing activities were mentioned as typical or rather typical for
respondents’ own work: “Searching on the internet for relevant information”, “Discussing with my
colleagues about relevant resources”, “Maintaining private collections and continuously adding
materials” and “Sharing private digital collections with colleagues”.
Furthermore, when asked whether the Instantiation supported the Knowledge Maturing activities
well or if it needs to be improved in order to provide better support, the respondents stated that the
following Knowledge Maturing activities were well supported: “Storing relevant results in collections
on my desktop or laptop”, “Maintaining private collections and continuously adding materials” and
“Searching on the internet for relevant information”. These activities are related to the following
Knowledge Maturing Indicators (GMIs): ID I.3.3 An artefact was selected from a range of artefacts,
ID I.3.4 An artefact became part of a collection of similar artefacts, ID I.3.6 An artefact is referred to
by another artefact, ID I.3.9 An artefact has been used by an individual and ID I.3.10 An artefact was
changed.
The Knowledge Maturing activities that needed to be better supported are: “Creating a common
taxonomy/classification for tagging (or labelling) resources”, “Making relevance judgements for
digital documents in order to highlight the most interesting resources and find them at a later date”,
“Discussing with my colleagues about relevant resources” and “Adding keywords or tags to my digital
resources in order to find them at a later date”.
These activities are related to the following Knowledge Maturing Indicators (GMIs): ID I.2.3.3 An
artefact has been the subject of many discussions, ID I.3.4 An artefact became part of a collection of
similar artefacts, ID I.3.6 An artefact is referred to by another artefact, ID I.4.6 An artefact has been
assessed by an individual, ID II.1.3 An individual has contributed to a discussion, ID II.1.5 An
individual has significant professional experience.
The results show that the Instantiation already supports some of the users’ most typical Knowledge
Maturing activities. However, in the open questions (see Section 7.4.2.1), users gave us guidance on
what still has to be done to adjust the Instantiation to their needs and to provide better support for
Knowledge Maturing processes (e.g. the users wished for a tool that is more intuitive, easy to use and
free of access problems).
7.5.3 How easy was the system to use? (SUS)
In order to get an impression of how easy it was for users to work with the Instantiation, we asked
them to fill out the System Usability Scale (SUS) questionnaire. The SUS provides a comparable
measure of a system's ease of use. It is valid and reliable (Sauro, 2011) and needs around 12
respondents as a minimum (Lewis, 1995). It does not help to identify specific problems; however,
several questions ask about specific aspects that are also reflected, e.g., in the ISO norm 9241-110
(Dialogue Principles), particularly learnability and usability (Lewis and Sauro, 2009).
According to Table 7.5, the mean values of the Overall System, the MatureFox and four of the six
widgets are above 50 points. The Collection Widget and the Discussion Widget had high values of
over 60 points. In general, this shows that different parts of the system conception were accepted to
different degrees.
Users liked the focus on collecting, aggregating, sharing and discussing their resources but had
obvious problems with the Tag Editor (ontology creation) and the Tagging Widget (tagging
resources). Earlier discussions about both activities with application partners at both Instantiations,
Structuralia (cf. the formative evaluation report of Instantiation 2 in D6.3) and Connexions Kent (cf.
the Summative Evaluation report in D6.3), revealed that the ideas behind them are typically quite
new to people and that introductory training time is needed to convey their idea and relevance.
Without this, the result is weaker usability and learning yield and the much higher level of user
dissatisfaction observed here.
The SUS score for the overall system shows a mean value of 55. Hence, people were in general happy
with the software; but since they found, for example, the Collection Widget really easy to use, with a
remarkably higher score, the general system conception needs to be critically reflected upon (cf.
D2.4/3.4). Table 7.6 shows that the other prompted entities (Tag Cloud, MatureFox, Search Widget)
are all around or just above the median and thus show a positive but still critical ease of use, with
room for improvement.
In summary, it can be said that two of the most important widgets were perceived as easy to use,
which can clearly contribute to the Knowledge Maturing of artefacts (Collection Widget) and
sociofacts (Discussion Widget). Note that artefacts and sociofacts refer to representations of
knowledge. We had expected a more positive result for the Search and the Tag Cloud Widgets;
compared to the other widgets, these might provide either too much functionality (search) or too little
(tag cloud) and are thus not perceived as very easy to use, or even as not useful. Providing training
that shows how work processes can be approached and how the widgets can be used in them might help.
7.6 Conclusion
The purpose of the evaluation at Structuralia was to validate the Instantiation during the process of
learning, by including it as an additional resource in an online course.
The Summative Evaluation focused on three different research questions to be answered:
1. How do people use the Instantiation?
2. Does the Instantiation support Knowledge Maturing activities related to Knowledge Maturing
Indicators (GMI)?
3. How easy is the Instantiation to use?
A multi-method design was chosen to answer these research questions comprising an analysis of log
data and user questionnaires.
According to the results, the users mostly used the Instantiation for searching and viewing various
resources, thereby extending their knowledge. One-third of the users were very active, accounting for
almost half of the total activity, while the users in general were most active in the first two weeks of
the course.
When considering the Instantiation’s ability to support Knowledge Maturing activities, the results
show that the Instantiation already supports well some of the most typical users’ Knowledge Maturing
activities (e.g. “Storing relevant results in collections on my desktop or laptop”, “Maintaining private
collections and continuously adding materials” and “Searching on the internet for relevant
information”). However, users gave us suggestions about what further has to be done in order to
adjust the Instantiation to the users’ needs and provide better support to Knowledge Maturing processes.
The usability of the Instantiation was also examined. It can be said that the overall system was in
general well accepted; however, the usability of individual widgets was assessed differently. The
Collection Widget and the Discussion Widget, two of the most important widgets, were perceived as
easy to use, which can clearly contribute to the Knowledge Maturing of artefacts (Collection Widget)
and sociofacts (Discussion Widget). Note that artefacts and sociofacts refer to representations of
knowledge. Problems occurred with the Tag Editor (ontology creation) and the Tagging Widget
(tagging resources). Not being able to create a tag consisting of more than one word, or to delete a
tag, has probably negatively affected the usability scores (for more information, see Section 7.3.4).
For the future development of the Instantiation, it should be reconsidered what could be done to
better support Knowledge Maturing activities through the MATURE Instantiation. More extensive
user training, showing how work processes can be supported and how the widgets can be used, could
help achieve better acceptance of the overall system and the individual widgets in the future. The
reliability of the software also needs to be improved before it is used on a commercial course.
The use of the MATURE tools can contribute new course material, thanks to the students adding
resources to the collections. It could also improve the quality of the study material, thanks to the
rating system: once the course material is rated by the students, we could focus on improving the
lowest-rated material, making better use of the resources dedicated to the maintenance of the course
material.
The strength of the MATURE widgets is that they address new ways of student collaboration, similar
to the new means of communication that students use in their personal relationships, where they
exchange experiences at all times and report what they are doing and where, creating a feeling of
being connected to the internet round-the-clock.
The MATURE tools provide improvements in the learning experience, enabling students to actively
shape their learning setting. Their use implies a change in the relationship between students and
teachers, with students taking an active role and collaboratively contributing to course materials. The
teacher, acting more like a tutor, provides support and guidance while leaving students to choose how
something can be done. These changes and developments represent a challenge for the teachers, the
students, and the whole system, requiring change-management procedures, especially aimed at
addressing the general resistance to change.
Overall, one can say that we provided them with a promising research prototype that might support
Knowledge Maturing. It reduces barriers for users as it allows a very easy way of tagging, rating and
collecting resources. Furthermore, the Discussion Widget and the Tag Editor allowed users to create a
shared meaning and a common vocabulary, represented in a collaboratively created ontology. Thus, it
supports many important Knowledge Maturing activities, such as "Find relevant digital resources",
"Keep up-to-date with organisation-related knowledge", "Familiarise oneself with new information",
"Reorganise information at individual or organisational level", "Share and release digital resources",
"Communicate with people", and "Assess, verify and rate information" (cf. Deliverables D1.2 and
D2.3/D3.3).
However, more development effort is needed for a productive and sustainable implementation in a
real-world work environment of this kind. This concerns usability issues, a stronger connection and
interaction between the widgets, a stronger focus on the people using the system, and some final
integration work between this prototype and SOBOLEO. In this process of further development,
users of the given application context have to be involved in order to adapt the system to their needs
in terms of language, usability and workflow integration, taking account of their IT affinity and
foreknowledge.
8 Partnerships for Impact Study (Study 5)
8.1 Introduction
From 2008 onwards, members of the MATURE project team sought to engage a broad range of UK
partners involved in developing different forms of Knowledge Maturing linked specifically to both
career guidance and workforce development policies and practices. The explicit intention was to create
partnerships for impact which could be categorised broadly in terms of individuals and organisations
who have taken a keen interest in:
• learning about the development of the MATURE project (process)
• learning how to contribute ideas and services to the MATURE project (inputs)
• learning from the test-bed products emerging from the MATURE project (outputs).
Within all three separate and sometimes overlapping categories, the shared discovery of existing and
new knowledge has been facilitated in two main ways. Knowledge Maturing has been facilitated
through ‘top-down’ perspectives, such as government-funded taskforce initiatives, formal reviews of
careers services, expert-led workshops, keynote presentations as well as through ‘bottom up’
perspectives, for example from communities of interest, peer learning and workforce development
staff training programmes.
At a time of economic crisis, the need and potential demand for career development services has
increased in the UK alongside pressures on public expenditure in response to this need. A key
objective within the career guidance and workforce development policies and practices has been to
identify and present new forms of persuasion (evidence) in order to induce a conversation designed to
achieve dialogical learning (shared knowledge) set within a dynamic community of practice
(networks). Many organisations recognise that they cannot function alone and there is a clear need for
exploitation of knowledge and building sustainable networks to help reshape our economy in the UK
and elsewhere (Brinkley, 2008). Hence in the career guidance field many organisations recognised that
they needed to engage in knowledge maturation, whereby individual and organizational knowledge
development are directed within an organisational context (Schmidt, 2005), and needed external
support to do this effectively.
Across the UK, governments operate within separate and devolved administrations and therefore tend
to focus on their own distinctive needs. In each of the four home nations (England, Scotland, Wales
and Northern Ireland) politicians and policy-makers have shown significant and renewed interest in
careers service developments, including recognising the potential of technology enhanced boundary
objects (TEBOs) 8 to facilitate communication between different communities. This interest has been
fuelled by their recognition that there are ever increasing numbers of young people and adults with
varying needs seeking services and/or in need of support and there is not adequate funding to meet
increased demand. Additionally, the question of whether the knowledge, skills and expertise of
guidance practitioners were sufficient to design and deliver services to meet the needs of young people
and adults cost-effectively was being increasingly asked by politicians and policy-makers. The impact
of technology and fast changing labour market information and intelligence has also exposed gaps in
knowledge and linkages between education, employers and careers specialists’ work. However, the
political, economic and social discourse that surrounds and impacts upon careers guidance and
workforce development has created ‘spaces’ and ‘places’ for different levels and types of engagement
on particular topical issues in each of the four home nations as described below.
8.2 Engagement with top-down perspectives: partnerships with UK-wide initiatives
The UK Commission for Employment & Skills (UKCES) 9 is a non-departmental government body
which has stimulated growing interest in careers support services for young people and adults.
8 Technology enhanced boundary objects (TEBOs) are boundary-crossing tools which support situated learning of people operating at the boundary between different communities.
9 http://www.ukces.org.uk/
Commissioners, appointed as advisers to Ministers, have learned about the development of the
MATURE project through commissioned research on ICT and LMI developments and on the role of
careers adaptability and skills supply, and through presentations at a senior level. In 2011, a member of the Warwick
team was invited to attend No 10 Downing Street to contribute to high-level policy discussions,
facilitated by the Cabinet Office, on the role of ICT, LMI and more generally careers provision for
young people. Lessons learned from experience of bringing together ‘communities of interest’
involved in the MATURE project were drawn upon as part of a series of innovation exchanges.
In 2010, the UK Careers Sector Strategic Forum (UKCSSF) was formed as a result of EU Lifelong
Guidance Policy network (ELGPN) recommendations to bring together employers, trade unions,
careers profession associations and other interested stakeholders with a clear remit to seek to influence
careers policies on a UK-wide basis. Project team members have provided insights into the latest ICT
developments undertaken within the MATURE project, through formal presentations and papers
designed to act as a ‘trigger for systems change’. Interesting lessons have been learned from the
UKCSSF’s recognition that the individual and organizational knowledge development underpinning
Knowledge Maturing processes had to be directed (Schmidt, 2005), as organisations attempted to
respond through ‘collective efforts’ to emerging and often unpopular new government policies. It is
noticeable that, in driving forward proposed systems change, the character of the Forum has changed
over time, resulting in ‘mission drift’ and challenges from governments to its role and legitimacy as a
UK-wide body. New forms of dialogue are now necessary to reduce overlap in activities and to
determine its future role, which may include potential options such as a lobbying group; an employer,
trade union and other stakeholders’ interest group; or a facilitative group that creates a new structure
in which careers professional associations and employer interests co-exist.
The UK Careers Profession Alliance brings together five careers professional associations who are
in the process of creating a new single body responsible for common professional standards and a code
of ethics, including a UK-wide register of qualified and competent careers professionals.
Government funding secured from the Department of Business, Innovation and Skills (DBIS) in
England offers scope for new online continuing professional development opportunities which have
UK-wide applicability. The work of MATURE on how to support knowledge development and
Knowledge Maturing processes has featured prominently in the preliminary design work undertaken
by the CPA, in particular there has been a recognition that Knowledge Maturing and organisational
agility are key challenges linked to on-going policy demands for the re-professionalization of those
working in and across the careers sector.
At least two UK-wide professional associations have promoted the MATURE project through their
membership portals and national conferences, namely, the Association of Schools and College
Leaders 10 and the Institute of Career Guidance 11. Close working links have also been established
through informal networking with Graduate Prospects, the UK’s official graduate careers website.
Overall, the focus of the UWAR team on developing partnerships for impact has resulted in MATURE
project ideas on support for Knowledge Maturing processes being discussed in a wide range of
national policy forums and in politicians and policy-makers seeing these ideas as important in
reshaping careers guidance services in the UK.
8.3 Engagement and partnerships in the four home countries
8.3.1 Top-down perspectives: England
In addition to the UK-wide discourse mentioned above, there are new political drivers and levers for
change in the form of the Coalition Government’s emergent policies in the careers field and, following
political devolution, these ideas have most effect in England. These ideas have included the creation
10 http://www.ascl.org.uk/
11 http://www.icg-uk.org
and promulgation of an ‘open and free market’ in which experimental learning is highly prevalent at
both a strategic and operational level within education, employment and careers sectors. A central
issue is whether or not markets, left alone, will automatically bring about long run improvements to
careers service design and delivery. In this dynamic and rather fluid situation, new players, new
arrangements and new partnerships are rapidly transforming careers support services.
The work of MATURE and the Careers Innovation Group (the outcomes of which were extensively
reported in the Year 3 Project Review as an example of support for inter-organisational Knowledge
Maturing processes) is very relevant in this context, given that innovation, intelligence and
intra-relationships led by the careers profession, employers and other service providers are being
stimulated by the government to improve the ‘supply-side’ and ‘demand-side’ of careers provision in
England.
Policies have focused on ‘product markets’ for potential consumers and ‘the labour market’ where
employment and workforce development are key factors.
Major government-funded Taskforce initiatives within and across the education, employment and
careers sectors have invited MATURE project colleagues to provide inputs to on-going informal
reviews of the careers profession as well as fostering closer education and employer links. For
example, the Careers Profession Taskforce in England 12 has drawn upon interim findings from the
MATURE project to inform ten key thematic areas identified to help strengthen the careers sector and
associated profession. In addition, the Education and Employer Taskforce in England 13 brings
together leading figures from education and employment to help raise aspirations and shared learning
of the different worlds in which they operate and again links between the taskforce and MATURE
project team members have been strong. Strategic education and employers networks, chaired by
Deloitte, have identified synergies between their work in developing new online platforms such as the
‘Speakers for Schools’ and ‘Inspiring Futures’ initiatives and those which have been developed within
and alongside the MATURE project. 14 In this context, a common principle underpinning the UK MATURE team’s work, regarded as axiomatic, is that information on the evidence-base and impact of careers provision should be presented in a way that is capable of being inspected by others (transparency) and of withstanding critique from sceptics (rigour).
Active membership in and partnership with the Research Steering Committee and Partnership Board
has enabled this shared principle to be formally and informally adopted by others. Also, the concept of
dialogical learning and how this facilitates on-line and face-to-face ‘learning episodes’ has helped to
strengthen the evidence-base for careers work. Clearly, this is inextricably linked to developing a
research culture or ‘spirit of enquiry’ within and across the Taskforce membership and MATURE
Team.
Other employer-led projects such as ‘TES Growing Ambitions’ 15 have utilised lessons learned from
the MATURE process of creating technology enhanced boundary objects (TEBOs) to facilitate
learning at the boundary between different communities by forming online communities of interest to
create a new careers portal aimed primarily, though not exclusively, at teachers in schools and
colleges. The design and development of this careers portal was informed by an iterative process of
reflexivity and Knowledge Maturing principles derived from the MATURE project, as a member of the project team offered strategic support on the development of the portal. This private sector approach, designed to ‘fill an obvious gap’ that Government has left to market forces, has led to new dialogue on
the level of economic investment required for careers service websites. Here, Knowledge Maturing is
stimulated by the actual and potential numbers of teachers willing to upload and openly share their
resources with other professionals. Over 120,000 resources have currently been uploaded, with inbuilt incentives such as prizes offered in exchange for active participation. Looking ahead to future scenario
12 https://www.education.gov.uk/publications/eOrderingDownload/CPTF%20-%20External%20Report.pdf
13 http://www.educationandemployers.org/
14 The development of on-line platforms linked to the MATURE project has been extensively delineated in the complementary evaluation report on the Longitudinal Study of Knowledge Maturation Processes in Connexions Kent.
15 http://growingambitions.tes.co.uk/about
building, specifically in relation to ‘online careers communities’, there is on-going interest in harnessing ‘infographics’ to enhance the site, such as the mapping of regions linked to occupational labour market information and intelligence; this is another area where the UK MATURE project team are acknowledged to have particular expertise developed within and alongside the project.
Government departmental bodies such as the Department for Education (DfE), the Department for
Business, Innovation & Skills (DBIS), the Department for Work & Pensions (DWP), the Skills
Funding Agency (SFA), and the Cabinet Office have each participated in meetings and workshops
where MATURE project developments have been showcased as part of on-going consultations. It has
been noted that departmental strategies have become more cross-cutting though they still operate
mainly in their own ‘silos’. The MATURE project development work has created new conversations
and exchanges on priority areas for government departments and the linkage to career development
policies and provision which tend to be located within sectors (schools, vocational education and
training, higher education, adult education, and employment). But careers services have a role to
perform in ensuring pathways within and outside these sectors are viewed as part of a lifelong learning
process. Services to support them need to be as seamless as possible. It is therefore important to
develop lifelong strategies based on greater collaboration and co-ordination across sectors. Systems
design frameworks for improved ICT access and LMI resources are being actively considered by
policymakers as a pre-requisite for achieving social and economic development goals.
As well as sustaining partnerships for impact on the take-up and utilisation of MATURE project ideas
at a national level, partnerships have also been forged at regional and local levels. In addition to the active involvement of partners formally engaged in MATURE development activities, other local organisations have come forward seeking partnership to facilitate impact. The London Borough of
Tower Hamlets (2008) involved a member of the MATURE Team to provide an ‘expert-led’ input to
strategic development plans for Connexions services across the Borough. Knowledge of policy
discourse and the planned ‘direction of travel’ in relation to new proposed legislative changes required
not only ‘expert in-depth subject knowledge’ but also ‘critical friend’ support to reassure senior
managers that their proposed strategies would be ‘fit for purpose’. In contrast, the London 14-19 Partnership (2008-2010) aimed to foster collaborative regional and local development projects to help strengthen community provision for young people and families. In this context, project support for
Knowledge Maturing involved negotiation between various organisations involved in youth, careers
and community sector developments. The goal was to produce plans for investment by the London
Government Office to help improve local services for young people. The process of ensuring that both ‘top-down’ and ‘bottom-up’ perspectives could co-exist and flourish was viewed by stakeholders as a major challenge and a crucial success factor. Through a process of ‘expert-led’ facilitation and shared ownership in bid writing and the allocation of funds, the partnership was successful. The MATURE
project ‘ideas for development’ offered new insights for consideration in relation to ICT requirements,
data transfer and tracking systems.
Careers England is a trading organisation representing employers within the careers sector. The
MATURE project has featured in their online briefings to members and national conferences with
policy-makers and other interested parties.
8.3.2 Bottom-up perspectives: England
From 2008 onwards, careers services in England have undergone major change and, in some areas, experienced displacement and closures, moving first from private to public sector arrangements and then being systematically exposed by the Coalition Government to free and open market forces. This
new highly competitive environment has resulted in those remaining Connexions services investing in
workforce development and staff training as well as new product design and developments. In the
short-term, staff are being exposed to up-skilling, reskilling and retraining in new areas of work.
Connexions services that have learned about MATURE developments and participated in some way in sharing ideas and offering their services were represented in nearly all of the regional areas in England. Examples include South London Connexions, where over 120 managers and advisers contributed to, and participated in, an innovative knowledge creation and Knowledge Maturing process entitled ‘Building Dynamic Careers Services for Young People’ (2008-2011). An
in-depth audit and review of workforce development plans and quality audits undertaken by a lead
inspector highlighted opportunities for engaging and motivating staff to make greater use of
conceptual frameworks that could underpin their everyday work. Internal demands from staff wanting
to be exposed to new theories and practice-based approaches created new forms of dialogue between
managers and practitioners. External demands for greater accountability and evidence of impact of
Connexions services necessitated the creation of knowledge on guidance and related interventions.
Challenges such as ‘reflective learning’ with peers and forming communities of interest within the
organisation set alongside increased pressures and requirements to meet ‘delivery targets’ resulted in
the concept of professionalism being viewed through the differing lens of policy-maker, manager,
practitioner and client.
The MATURE team also fed into the development of an Educational Evidence Based Portal (eep) 16
on careers, work experience and employment, in which partnership links were fostered with an online community dedicated to showcasing evidence-based policies and practices (on-going). In Connexions
Leicestershire (2009) a review of existing arrangements highlighted areas of overlap between certain
local services for young people. The state of readiness and agility of organisations in transforming
services in line with new policy developments required sensitive analysis from interviews with senior
level stakeholders. The MATURE project provided ‘a hook’ for discussion on future possibilities that lifted colleagues away from the here and now to a consideration of change management processes.
Connexions services in Lincolnshire are currently integrated within Lincolnshire County Council
and they have adopted and sustained a long-standing community of practice involving teachers,
careers advisers, local authority and other interested parties. This community of practice (on-going)
has benefited from strong leadership, commitment from senior managers experienced in careers education and guidance policies and practices, and connectivity to a wider CfBT partnership outside of
the area. Within this context, displaced and redundant workers have considered forming mutuals and
sole trader organisations. Ironically, displacement has for some resulted in a determination to be
connected with their preferred occupational identity as Careers Advisers belonging to a professional
body.
Connexions Thames Valley and Connexions Merseyside (2011) have responded to anticipated policy
developments for suitably qualified staff to form part of the new National Careers Service. This has
resulted in knowledge creation activities in at least two different forms: staff training, linked in some cases (though not all) to accreditation, and new product design informed and supported by
innovation, change and transitional arrangements. Connexions Northamptonshire has adapted to
new market conditions by focusing on services to schools and colleges. The MATURE project was
welcomed by this organisation and attendance at EU and local meetings helped to raise the profile of
new ICT developments and partnership working. Changes in leadership at Board level and pressures
on service delivery plans resulted in low level involvement with MATURE from 2010 onwards.
Somerset Connexions (2009) and Connexions West of England (2010-2011) have each learned, through formal and informal inputs, about the process of development within the MATURE project. A
review of the latter organisation’s website enabled more in-depth discussions, facilitated by a member
of the project team, to take place on the potential of TEBOs to span different fields of careers practice
and to map out options for future development.
Prospects, Gloucestershire (2011) showcased MATURE and other evidence-based developments as part of a promotional campaign to increase school and college staff awareness of ‘a new era for careers work’. Connexions Derbyshire (2010-2011) participated in MATURE consultation meetings
and expressed strong interest in piloting online developments until budget cuts and major downsizing
of services took place earlier this year. Connexions Suffolk was introduced to the MATURE project
16 http://www.eep.ac.uk/dnn2/ResourceArea/Careersworkexperienceemployment/Theimpactofcareerswork/Guidanceandtoolkits/tabid/178/Default.aspx
(2010) as part of an early intervention project with primary school head teachers and senior managers.
Connexions Luton (2010) hosted a conference to raise the profile of their work and to bring together
inspirational speakers to share their career journeys and future ambitions. Connexions
Nottinghamshire and the Next Step service form part of the same company, though the latter has
regional responsibility for managing adult guidance services. These services have ‘flexed’ significantly to adjust and adapt to uncertain and fast-changing careers service markets. They are a
good example of an ‘agile organisation’ with strong and confident leadership arrangements. It is
interesting to note that when staff cuts had to be implemented the CEO and senior managers accepted
cuts in salaries and reduced working hours in line with other co-workers.
Connexions South West Ltd (Devon and Cornwall) are in a similar situation to Connexions
Nottinghamshire. A strong example of how market changes and political rhetoric have impacted on
their partnerships can be considered through a rebranding exercise designed to move away from the
tarnished image of Connexions created by the Coalition Government. Next Step in the North East
region of England has benefited from transferred learning across the organisation in a project entitled:
‘Building Dynamic Guidance Services for Adults’ (2009-2011). This is an allied CfBT project
drawing upon lessons learned from the South London Connexions initiative. The MATURE project’s ideas have been introduced to test out practitioners’ and managers’ state of readiness for harnessing ICT
and LMI as part of their everyday policies and practices. In view of the forthcoming National Careers
Service (NCS), due to be formally launched in April 2012, there is a growing trend for Connexions
and Next Step partnerships to merge within new and evolving regional networks.
Other bottom-up perspectives have been linked to at least three Westminster Briefing events (2011)
which attract an eclectic audience from the worlds of education, local and central authorities, careers
and employment sectors. MATURE project leaflets and other relevant promotional materials have been showcased. Also, FeDS Consulting Ltd 17 hosted a conference in 2011 to explore the
evidence-base for careers work and the dynamic interface between face-to-face and online career
support services. Major aspects of the MATURE project’s work were considered as part of the
knowledge exchange process. This forum provides a neutral space to think, talk and take forward
initiatives that will help to improve the provision of lifelong learning and skills in England and further
across the UK. In contrast, The Royal Pharmaceutical Society 18 and De Montfort University 19
hosted two lively workshop sessions in 2010 on career development and the intelligent application of
career exploration systems. The audience of experienced and highly trained pharmacists revealed their
career trajectories and reflected upon how their existing knowledge and occupational status resonates
with their current interests and future career plans. The MATURE project’s underpinning concepts proved helpful in highlighting the changing nature of career identity and career formations.
Also, the Campaign for Learning 20 (2011) organised a one-day workshop that included a formal
presentation and subsequent discussion on the MATURE project designed to capture the imagination
of twenty participants in creating a future vision for careers guidance and allied workforce
developments. This was followed later in the year by an invitational event led by the Pearson Group,
London 21 which was designed to set out differing perspectives on a futuristic vision of ‘Careers 2020’.
In addition, the B-Live Foundation 22 online careers portal for young people aged 11-19, delivered in partnership with schools and employers, has attracted a growing community of over 230,000 young people, helping to connect individuals to wide-ranging opportunities and to develop key employability
17 http://www.feds.co.uk/
18 http://www.rpharms.com/home/home.asp
19 http://www.dmu.ac.uk/home.aspx
20 http://www.campaign-for-learning.org.uk/cfl/index.asp
21 http://www.pearson.com/
22 http://www.b-live.com/
skills. The MATURE project has supported Knowledge Maturing as part of on-going dialogue with the
Education Steering Group Board in relation to impact assessment and social mobility research activities.
Within the context of independent schools, ISCO 23 offers careers education programmes, personalised careers guidance, courses and events; informal networking links have raised the profile of the MATURE project with staff and participating schools in England.
Trade union organisations such as UNISON 24 have performed a key role in holding governments accountable for their careers guidance and workforce development policies at a central and local level. The MATURE project has attracted interest from UNISON as part of its determination to continuously
update and improve its own intelligence-base on ICT, STEM and LMI developments. Voluntary sector
organisations such as vinspired 25, headquartered in London, have connected their own research agenda to the work of CfE (Research and Consulting Ltd), Leicester 26. Through this
partnership, insights to MATURE project developments have been introduced to help stimulate
dialogue on the differing needs and expectations of young people in relation to volunteering
opportunities set alongside Job Centre Plus and Next Step practitioner experiences.
Higher education initiatives such as the national Aim Higher Network based in London and Aim
Higher, Leeds, have performed a key role in bridging the knowledge gap for many young people, parents
and teachers seeking more experience and in-depth knowledge of course provision. The MATURE project was profiled through Aimhigher at two major conferences before public policies and cutbacks within higher education led to the demise of this initiative. The Aim Higher programme 27
was established to encourage progression to higher education. Working through 42 partnerships across
England, the programme encompassed a wide range of activities to engage and motivate school and
college learners who had the potential to enter higher education, but who were under-achieving,
undecided or lacking in confidence. The programme particularly focused on students from schools
from lower socio-economic groups and those from disadvantaged backgrounds who live in areas of
relative deprivation where participation in higher education is low. In 2009-10 the partnerships worked
with over 2,700 schools (including 188 Academies and 413 primary schools), 108 higher education
institutions, 368 FE Colleges and 114 Local Authorities. This long-standing community of HE
specialists now continues to operate through a ‘community of practitioners’ arrangement, as well as
through partnership and collaborative arrangements with allied associations. For example, the Higher
Education Liaison Officers Association 28 is the professional association of staff in higher education working in the field of education liaison; they provide guidance and information to prospective
higher education students, their families and advisers. Lessons learned from the MATURE project
have been cascaded to this group through workshops and keynote presentations.
Universities across England are facing mounting pressure to demonstrate the added-value returns on
students’ investments in learning and work, particularly in relation to destination data and employability
agendas. The MATURE project communicated, co-operated and collaborated with the Centre for Career
Management Skills, Reading University29 (2008- 2010) on shared experiences of TEBOs and technical
infrastructure challenges and opportunities, followed by insights on the creation of a sustainable virtual
centre. Three website developments were launched by the CCMS, and the external evaluation report highlighted the potential for strengthening working arrangements to utilise career stories and ‘Beyond the
PhD’ materials within CCMS and MATURE.
23 http://www.isco.org.uk/
24 http://www.unison.org.uk/
25 http://vinspired.com/
26 http://www.cfe.org.uk/index.php
27 http://www.aimhigher.ac.uk/sites/practitioner/home/
28 http://www.heloa.ac.uk/
29 http://www.reading.ac.uk/ccms/
Also, E-EVOLVE, University of Central Lancashire 30 (2009) provided opportunities for students to develop and improve self-efficacy and meta-cognitive skills by utilising enquiry-based learning (EBL) in virtual learning environments (VLEs), and to gain increased exposure to work-related learning using EBL with reference to industry practitioners and professional bodies. Students were encouraged to
develop key employability skills that meet the changing skill requirements of knowledge-based
organisations. The MATURE and E-Evolve project teams exchanged ideas and reviewed resources to
enhance each of the separate but complementary VLE platforms. Thematic interests such as strategies
for improving access and widening participation within higher education, particularly for part-time
students and those with disabilities, were considered. Through these links new film and video
producers emerged as early adopters and developers of careers promotional materials such as Careers
Box 31 and Talking Heads 32. Also, universities with responsibilities for training career practitioners, such as Canterbury Christ Church 33, the University of the West of England 34 and the University of East London 35,
have each hosted well attended and highly successful seminar events aimed at staff, students and external
stakeholders interested in career guidance policy, research and practice. The MATURE project was able
to utilise these networks to consult with colleagues and to canvass views on perceived gaps in the
knowledge, skills and mind-sets of practitioners and managers involved in careers service design and
delivery.
Major Awarding Bodies such as OCR Examination Board 36 have participated in discussions on
‘ideal practice’ and investment required by practitioners and managers to undertake accredited
continuing professional development. This has led to professional discourse on levels of competence
and capability for assessors and students. While the Adam Smith Institute 37 engages in traditional think tank activities such as conducting research, publishing reports, and holding seminars and conferences, the Institute has also, throughout its history, paid a great deal of attention to developing the next generation of policymakers and opinion formers, with its well-known and highly regarded youth programmes forming a major part of its activities. The MATURE project has been profiled at relevant
events.
Through various interactions, including ‘tightly knit’ and ‘loose’ networks, the application of
dialogical learning has led to better understanding of the critical relationship between face-to-face
meetings and on-line communities as a construct for shared knowledge. Habermas (1996), Freire (1997) and Flecha (2000) argue for new forms of dialogical learning, which are now emerging within
and across the career guidance sector in England and other home nations.
8.3.3 Top-down perspectives: Northern Ireland
The Northern Ireland Careers Service (NICS) 38, based within the Department for Employment &
Learning (DEL), is the major employer of careers advisers, who work closely with schools and
colleges linked to the Department of Education 39 and DWP Jobs and Benefits Offices 40. Since
30 http://www.uclan.ac.uk/lbs/e-evolve/index.php
31 http://www.careersbox.co.uk/
32 http://www.talkingjobs.net/index.cfm/jobs/home.team
33 http://www.canterbury.ac.uk/
34 http://www.uwe.ac.uk/
35 http://www.uel.ac.uk/
36 http://www.ocr.org.uk/
37 http://www.adamsmith.org/
38 http://www.nidirect.gov.uk/careers
39 http://www.deni.gov.uk/
2008, a series of regular visits and presentations have introduced and highlighted MATURE project
plans, including on-going consultations and presentations.
NICS has undertaken two major reviews of adult guidance provision, in 2008 and 2011, primarily, though not exclusively, focusing on the work of the Educational Guidance Service for Adults (EGSA) 41. The 2011 review, led by KPMG, Belfast 42, has focused on accountability frameworks and key performance indicators. In this on-going work, the review and analysis of data, and the subsequent sharing of knowledge, require sensitive handling in order to build trusting partnerships between DEL and government-funded bodies. In general, different types of communities of practice operate between and
across these organisations based mainly on informal ties and professional interest groups. The
MATURE project has been profiled widely in Northern Ireland to help inform and support ICT,
STEM and LMI developments. In November 2010, a national symposium and conference was hosted in Belfast by the President of the Institute of Career Guidance (ICG), at which key lessons learned from Knowledge Maturing and organisational agility were previewed and discussed. A live radio broadcast was followed later by a special Parliamentary Event, which took place in Belfast Castle with Ministers and officials in attendance. The overarching theme of how best to modernise careers
services, improve awareness of STEM opportunities and develop enhanced workforce strategies featured prominently on the agenda, with Knowledge Maturing processes a central theme. Strong working links have also been established with Invest NI 43, a business
development agency whose aim is to support existing and new business to grow and compete
internationally, and to attract new inward investment to Northern Ireland. From this contact,
MATURE case studies were developed in 2009 focusing on organisational agility and workforce
development in a range of different SME and large companies in the province. (These case studies
were reported in the Year 2 Review).
8.3.4 Bottom-up perspectives: Northern Ireland
GEMS NI 44 (formerly Belfast GEMS) was established in 2002 to address long-term unemployment and economic inactivity in East and South Belfast. It has developed to become a service that is recognised as delivering excellence in employment and employability interventions, and is frequently held up as a model of best practice in employability services with long-term
unemployed/economically inactive people and those who experience disadvantage in the labour
market. The MATURE Project Team has connected with this organisation through shared links with
the regional branch of the ICG. Also, the Northern Ireland Schools and Colleges Careers
Association 45 has volunteered to act as a sounding board for MATURE project developments
particularly in relation to online developments aimed at teachers and parents. A keynote presentation
was recently delivered by a member of the MATURE project team at their Annual Conference 2011
which explored the dual concept of Knowledge Maturing and online teaching and learning resources
for careers provision in schools and colleges.
The University of Ulster, School of Psychology 46, Postgraduate Qualification in Career Guidance and Masters programmes include on average 35 students per annum (2008-2011), who have met with a MATURE team member to learn about the latest developments in ICT and LMI. From this, linkages have been made directly to the NICS careers portal, with some reflections on future skills needs within a career guidance delivery context. Cross-border developments between the North and South of Ireland
40 http://jobcentreplusadvisor.co.uk/ireland
41 http://egsa.org.uk/
42 http://www.kpmg.com/ie/en/contact/pages/audit-belfastoffice.aspx
43 http://www.investni.com/
44 http://www.gemsni.org.uk/background-information.html
45 http://www.nisca.org.uk
46 http://www.science.ulst.ac.uk/psychology/
have extended boundaries for programme development and new professional alliances between the
university, the UK ICG 47 and the Institute of Guidance Counsellors (IGC) 48 in the Republic of
Ireland.
8.3.5 Top-down perspectives: Scotland
The Scottish Government’s 49 strategy for the future of Career Information, Advice and Guidance is built around four themes: strengthening partnership, empowering Scotland's people, supporting Scotland's employers and modernising delivery. The pressure on public finances demands that it be delivered in ways that are affordable and sustainable. All this meant that there was a strong feeling in 2008 that things would have to be done very differently in the future.
Created in 2008, Skills Development Scotland (SDS) 50 is a non-departmental public body (NDPB)
which brought together the careers, skills, training and funding services of Careers Scotland, Scottish
University for Industry (learndirect Scotland) and the Skills Intervention arms of Scottish Enterprise
and Highlands & Islands Enterprise. They employ over 1,000 staff and have a network of public
access centres and offices across Scotland. The MATURE project has been actively promoted at both
a strategic and operational level to contribute specifically to LMI and ICT policy and managerial
discourses.
In late 2010, a Scottish Parliamentary Group hosted, on behalf of the Institute of Career Guidance and in association with the MATURE project, a special parliamentary evening with QCG/D students, tutors and other interested parties to celebrate the next generation of Careers Advisers and to raise the profile
of the careers profession. From this and other events, informal networks with bottom-up perspectives
have evolved which complement and support more formalised regional branch meetings.
8.3.6 Bottom-up perspectives: Scotland
Higher education provision of careers education and guidance has become increasingly complex in the
light of rapid changes in the labour market and education and training system. Career practitioners are
expected to provide accurate and up-to-date advice on employment, education and training
opportunities to an ever-widening client group, from young people making their first career decisions
and students in tertiary education, to adults and those facing redundancy or career change. Career
practitioners are also increasingly being expected to offer extra support to people who are
experiencing additional difficulties in entering or retaining employment, education and training. The
University of West of Scotland 51 Postgraduate Diploma combines periods of university-based study
with a range of work-based learning. Napier University 52 also offers similar provision, though it does not have the same degree of reach in terms of distance-learning provision. Both universities have worked
closely with the MATURE project team to expose students to critical reflection on the extent to which
online and face-to-face careers support services can and will co-exist now and in the future. The
University of West of Scotland also hosted a research seminar in 2011 on ICT developments in careers
guidance which drew upon MATURE project experience in this area.
47 http://www.icg-uk.org/
48 http://www.igc.ie/
49 http://www.scotland.gov.uk/Publications/2011/08/15095448/8
50 http://www.skillsdevelopmentscotland.co.uk/
51 http://www.uws.ac.uk/courses/pg-courseinfo.asp?courseid=395
52 http://www.courses.napier.ac.uk/W76703.htm
The ICG Scottish Regional Branch 53 used ‘Cloudworks’ as part of the MATURE project to disseminate findings from the 2010 Parliamentary Event. Finally, the Scottish Council for Voluntary Organisations 54 and Young Enterprise Scotland (YES) 55 met with the MATURE team in 2010 and 2011 respectively to discuss and share updates on relevant research activities and opportunities for closer working links.
8.3.7 Top-down perspectives: Wales
At the centre of the careers guidance system in Wales are the six Careers Wales companies, each serving a
separate area of Wales, and their joint subsidiary, the Careers Wales Association56. They provide careers
information, advice and guidance services on an all-age all-ability basis in schools, colleges, local
communities, high street offices and in the workplace. They also facilitate the delivery of work-focused
experiences through their support of Education Business Partnerships and play a prominent role in
supporting young people with additional learning needs, those at risk of becoming disaffected and young
offenders. In November 2010, the Minister for Children, Education and Lifelong Learning (DCELL) 57
published ‘Future Ambitions: developing careers services in Wales’. This report encapsulated several
months of intense evidence-gathering from major stakeholders in the careers education and
information, advice and guidance field. It describes in some depth the wide range of careers services
providers in Higher Education, Further Education, schools, colleges, Work Based Learning
Providers, Jobcentre Plus, Union Learning Representatives as well as Careers Wales – a whole
‘family’ of careers services providers.
The report analyses their inter-relationships and attempts to scope a more co-ordinated, better-led
service that has a shared identity and a shared outcome – citizens who are able to make well-informed
learning and careers choices and are aware of the services on hand to help them towards fulfilling
choices. On-going discussions with Welsh Assembly Government officials have resulted in further work
being undertaken in 2011-2012 to review the role and remit of a new Careers Service in Wales, drawing
on good and interesting international policies and practices. The MATURE project’s work is highly
relevant in this context given the requirement to maximise ICT and LMI within a new and evolving
differentiated service delivery model across Wales.
8.3.8 Bottom-up perspectives: Wales
The MATURE project has fed into several regional events across Wales (2008-2011) designed to build
upon local and national good and interesting policies and practices. The Learning Coaches of Wales
Report 58 and the on-going influential work of Professor Danny Sanders OBE, Centre for Lifelong
Learning, University of Glamorgan 59 are good examples of the Knowledge Maturing processes bridging
the gap between lifelong learning, coaching and careers education and guidance.
8.4 Engagement and partnerships in European and International Networks for Knowledge Maturation
Europe 2020 60 outlines a strategy for smart, sustainable and inclusive growth. Education and training
are considered by policymakers as making a substantial contribution to this strategy in several flagship
initiatives. For example, reducing early school leaving is essential for social inclusion which is
53 http://cloudworks.ac.uk/cloud/view/4631
54 http://www.scvo.org.uk/
55 http://www.yes.org.uk/
56 http://www.careerswales.com/corporate/server.php?show=nav.5234
57 http://wales.gov.uk/about/cabinet/cabinetstatements/2010/101116dcs/?lang=en
58 http://cell.glam.ac.uk/media/files/documents/2010-02-08/Learning_coaches_of_wales_full_report_pdf.pdf
59 http://staff.glam.ac.uk/users/97
60 http://ec.europa.eu/europe2020/index_en.htm
addressed by the flagship initiative: A European Platform against Poverty and Social Exclusion61.
The agenda for new skills and jobs demands a strong impetus to the strategic framework for
cooperation in education and training. ET 2020 62, adopted in May 2009, constitutes the roadmap
of Europe in the field of education and training until 2020. Career guidance and workforce
development strategies are integral to the lifelong learning policy discourse. The MATURE project’s
work has permeated within and across the European Lifelong Guidance Policy Network63 through
on-going dialogue with 27 Member States on quality assurance and evidence-based policies and
practices in careers guidance.
The Knowledge Maturing process has also extended beyond this network to influence and impact
upon the work of the International Centre for Career Development and Public Policy
(ICCDPP) 64 – ‘Prove It Works!’ A further example is a recent keynote presentation delivered
to an audience of 150 managers, researchers, practitioners and policymakers on behalf of the Ministry
of Education, Denmark (October, 2011). The University of Latvia 65 and the Latvian National
Guidance Forum 66 have both invited a member of the MATURE project team to act as external
examiner for a PhD assessment and to deliver a keynote presentation. Also, the Croatian Public
Employment Service 67 has recently invited a member of the MATURE project team to review career
guidance legislative arrangements across the EU. Finally, the MATURE project’s reach outside the
career guidance sector is illustrated by the CEJI ‘Religious Diversity: E-Valorisation’ project (2008),
now called ‘Belieforama’68. This EU award-winning project brings a variety of views, perspectives
and sectors to the table and seeks to empower all (including those of no belief or practice) to engage
in this theme. Lessons learned from
MATURE in relation to the challenges of bringing together online communities as well as focus group
activities were considered as part of their Steering Group discussions.
8.5 Conclusions
The MATURE project team members were able to draw on and then enhance partners’ shared interest
in Knowledge Maturing in career guidance. This topic mattered a great deal to the participants (as
an object of pressing concern) but, in the UK, it also had deeper significance and meaning for partners
as it was bound up with the sense of identity and imagined futures for all those working in the career
guidance field.69 These partnerships shared a concern for the future of the profession, and Knowledge
Maturing processes offered the prospect both of reshaping daily work activities and of helping to
shape the future of the profession. The partnerships and the MATURE Knowledge Maturing processes
were therefore significant for partners’ professional identities, sense-making and imagined futures,
and the channels for sharing knowledge were overlapping and inter-locking personal networks,
facilitated in part by the MATURE project.
61 http://ec.europa.eu/social/main.jsp?langId=en&catId=961
62 http://ec.europa.eu/education/lifelong-learning-policy/doc28_en.htm
63 http://ktl.jyu.fi/ktl/elgpn
64 http://www.iccdpp.org/
65 http://www.lu.lv/eng/
66 http://www.euroguidance.net/?page_id=4696
67 http://www.hzz.hr/
68 http://www.belieforama.eu/
69 That project team members were able to articulate possible ways forward to address these issues of broader
concern with identity and future practice was one of the reasons why they were in such high demand to
contribute to strategic reviews and forums focused on future directions for career guidance.
The partners were aware that processes of innovation, learning and development are strongly
contextualised, and welcomed the useful perspective that the MATURE Knowledge Maturing
processes offered on the management of change, particularly when many aspects of context were
themselves changing, including how practice is delivered, the nature and reach of different guidance
organisations and the labour market itself. The partners valued the time, space and thought given to
issues associated with the approach to the management of change offered through partnership with
the MATURE project team.
The MATURE project team members and their partnerships also had strong overlapping personal and
professional networks and the partnerships acted as a form of ‘bridging social capital’ across the career
guidance field as a whole (which sometimes operates within distinct ‘silos’).70 The MATURE tools
and approaches also operated at the boundaries between different communities and were used to
extend and deepen the communication between communities, thus making possible productive
communication and ‘boundary crossing’ of knowledge.
Partners’ engagement with Knowledge Maturing processes had to encompass a dialogue about the
changing nature of careers, how the careers sector could harness knowledge of labour markets and
embed this at a grass roots level, and what changes in practitioner knowledge, skills, behaviour and
attitudes are required to support innovation in practice. Overall then, the Knowledge Maturing
processes discussed with the partners were useful in scoping the nature of the challenges the
profession faced and exploring some possible technologically-enhanced ways of tackling these issues,
and acknowledging the constraints facing practical realisation.
Engagement with partners on issues concerned with innovation and learning comprised social
processes which enhanced personal networks and inter-organisational networks. Attention was paid
in the partnerships to the importance of partners building relationships to support their own
knowledge and understanding of innovation and learning development, as well as to focusing upon
substantive issues.
Another strand of the dialogue with partners was the potential use of technologically-enhanced
boundary objects (TEBOs) to help with the ‘continuing struggle’ to effect a shift in focus from labour
market information to labour market intelligence, from raw quantitative or qualitative data to the
interpretation and further analysis of labour market information. It was also emphasised that this
strand of Knowledge Maturing would continue to be supported by efforts of the UWAR team within
MATURE and related projects. The dialogue also stressed that the dynamic integration of different
sources of LMI and the further development of TEBOs were avenues being explored.
There tended to be agreement that technology could play a role in opening and resourcing
dialogic spaces about future policy and practice in career guidance.
One strand of the partnership dialogue, expanded upon with partners with a particular interest in
TEBOs, was the argument that effective learning about key aspects of guidance practice could follow
from engagement in authentic activities that embedded models of support for practice, models made
more visible and manipulable through interactive software tools (TEBOs): software-based resources
which supported knowledge sharing across organisational boundaries. Partners were often
keen to investigate further whether TEBOs could be useful in supporting Knowledge Maturing
processes in guidance. TEBOs were conceived as boundary-crossing tools which could support
situated learning with a focus upon sharing ideas about practice in different contexts in ways that
could appeal to members of different communities and networks. One avenue explored (within and
beyond the MATURE project itself) was to engage in a dialogue with guidance practitioners about the
use of Labour Market Information (LMI) in the development of prototype TEBOs. In these cases the
70 Social capital emphasises the value of social networks, and while some ‘bonding social capital’ was evident,
reinforcing ties between those with similar interests in the group, the ‘bridging social capital’ between people
with diverse interests, creating norms of reciprocity, was striking in this case. While there have been criticisms
of how much added value Putnam’s distinctions give in more complex situations, here they appear particularly
appropriate. Putnam, R. (2000), Bowling Alone: The Collapse and Revival of American Community, New York:
Simon and Schuster.
Knowledge Maturing processes needed to be extended to build an understanding of how TEBOs may
be used in ways that are empowering for practitioners, and ultimately for clients too.
The Knowledge Maturing processes linked to the development work with TEBOs were seen as a
potential way of getting individual practitioners to interact more readily with learning resources for
understanding LMI, of addressing the conceptual challenges in interpreting the output of TEBOs
(graphs, labour market predictions, charts, employment data, financial models etc.), and of supporting
practitioners in how to visualise, analyse and utilise LMI in new ways in the guidance process they
offer to their clients. This development work was seen as illustrative of a Knowledge Maturing process
with the potential to support learning through the dynamic visualisation of data and relationships and
the consolidation, representation and transformation of knowledge.
The partnerships for impact strategy built on aspects of the MATURE model, whereby attention was
paid to:
• Expressing and appropriating ideas: developing a greater awareness of the issue of
innovation, learning, development and Knowledge Maturing in careers guidance through
dialogical exchange.
• Distributing in communities: the dialogue with partners resulted in shared understandings
whereby partners became actively aware of new possibilities and ‘imagined futures’. These
ideas were subsequently discussed with other individuals and organisations within the broader
community of interest of careers guidance.
• Formalising was embarked upon through a deepening of the collective understanding of the
possibilities of knowledge sharing and further development, which was then translated into a
range of structured documents available from the partners’ organisations.
• Ad-hoc learning was realised as some partners engaged with innovative practices, using
experimental semi-formalised structures and resources to gain experience, and collaborated
with the MATURE team to help develop potential boundary objects that could facilitate
Knowledge Maturing processes across a wider community of interest. In some cases these
boundary objects were being developed as carriers of more explicit training and development
for practitioners.
Dialogue about Knowledge Maturing processes had resulted in partner development, including in
many cases partners developing their ‘readiness to mature knowledge’ of how technology might
support innovation, learning and development in guidance practice. Many partners also appreciated
that the challenge for the future is whether social software tools can produce artefacts and scaffolding
to take participants to higher levels of understanding about improving their contextualised practice.
The partnership for impact strategy was a successful attempt at an open-ended dialogue between
researchers and partners in order to generate a richer perspective on the issue of knowledge
maturation.
9 Longitudinal Study of Knowledge Maturing in Connexions Kent (Study 6)
9.1 Introduction
This evaluation study comprises a longitudinal narrative of Knowledge Maturing (KM) processes in a
community of practice, based around Career Guidance in Connexions Kent. What makes this study
interesting from a MATURE perspective is that this company had already built up a relationship with
the University of Warwick (UWAR) team in order to develop their Knowledge Maturing processes,
especially around the issue of the use of labour market information (LMI), prior to the start of the
MATURE project. Hence their interest in Knowledge Maturing did not come from participating in the
project, but rather they participated in the project because they were interested in Knowledge Maturing
. They continued with a raft of Knowledge Maturing development activities, supported by the UWAR
and Pontydysgu (PONT) teams, alongside those developments, testing and evaluation activities carried
out under the auspices of the MATURE project. So from a broader project perspective, it is of interest
to map longitudinally the KM processes that have occurred before and during the MATURE
development activities and how, for Connexions Kent, engagement with the KM concept has
contributed to changed practice over time (from a medium/meso level view). The longitudinal story of
KM in Connexions Kent comprises two parts – one outside and alongside the MATURE project and
the other within the project frame. This longitudinal study will examine both parts of the story. The
story elicitation process was undertaken by Sally-Anne Barnes in the period August – November 2011,
making use of extensive documentation generated contemporaneously during the period 2007–2011.
9.1.1 Background
A working relationship was established with Connexions Kent in 2007, when a research study was
undertaken by UWAR into the nature of the careers education, information, advice and guidance
(CEIAG) provision that existed across the Counties of Kent and Medway in the South East of
England, together with the use of LMI by career practitioners. The purpose of the study was to identify
ways in which services could be improved in the future. One of the findings of this research revealed
how the relationship of career practitioners to ICT was under-developed.
Careers guidance practitioners work in a wide variety of settings (e.g. the employing organisation,
schools, colleges, etc.), so are mobile workers. In Connexions Kent, each practitioner has his or her
own laptop. There are different models of how much work takes place in shared spaces. Much use is
made of email systems (typically, use will be made of more than one email system – for example, one
that is specific to the careers guidance organisation, another that is specific to a particular school, etc.)
for varied purposes: administration, communication, storage, dissemination, teaching, supervision, etc.
Practitioners also have access to their own filing systems, as well as organisationally owned
systems (like company intranets and hard copy files). The organisational email system was used well,
but the company intranet was not operating efficiently, so was underused. The internet was used in
careers practice to research sources, in response to specific client queries about a particular
occupational sector, but findings from these searches were not saved, shared with colleagues, or
refined for use with different audiences. Its other major use was for management information: an MIS
was required by government to provide statistics on the delivery of services to young people. The
system was unreliable and difficult to use, and had exerted a negative effect on staff perceptions of
the utility of ICT in their professional practice.
Different professional levels of expertise exist in the workforce: Trainee Careers practitioner;
Qualified Careers practitioner; Senior Careers practitioner (qualified and experienced – often offering
a particular specialism, like labour market information). Once practitioners are qualified, they are
expected to engage in continuous professional development – with days allocated for this purpose. The
type of training will be partly determined by the individual and partly as a result of the annual
review/appraisal process. So there is more autonomy, but with that comes increased responsibility and
accountability.
9.1.2 Workplace learning
Informal learning in the workplace plays an important role, with only limited use of formal education
and training. Informal learning is primarily social, with much learning from other people. On the other
hand, it is not simply socialisation, in that there is considerable scope for individual agency – with
practitioners making choices over whom they will communicate with. There is also considerable learning
from experience – learning to cope with different types of client, interviews, requests etc. Learning
from experience is more personal, compared with the inter-personal learning with other members of
the organisation and networks.
Practitioners draw on implicit learning in the sense that there were examples of getting a feel for the
direction an interview was taking, from linking past memories with current experience. There was also
deliberative learning where there were discussions and reviews with others of past actions,
communications and experiences, both in relation to what had been successful and what had been
unsuccessful, as well as the development of a contextual understanding of employers, schools,
localities etc. Decision-making and planning of future events could also become opportunities for
learning, reflection and review.
Much learning tended to be opportunistic, in the sense that events or scenarios occurred to
practitioners because they had been memorable for some reason. Deliberative learning was more
considered and planned, with practitioners either making their own choices (e.g. choosing to check out
particular types of LMI) or it was influenced by the nature of (shared) tasks (e.g. having to put packs
of information together for a particular teaching event). Many deliberative activities, such as planning
and problem solving, were not necessarily viewed as ‘learning’ – rather they were viewed as work
activities, with learning as a by-product. Because most of these activities were seen as a normal part of
working life, they were rarely regarded as learning activities.
In-company training courses did not routinely include ICT – so a situation existed where staff
competence and confidence in their use of technologies ranged from near zero to relatively high. Age
was a factor here, with the younger members of the workforce tending towards a more positive
disposition to ICT than older workers (though not in every case). The organisation was a prime site for
Knowledge Maturing regarding the use of LMI in their core business and the Chief Executive readily
gave permission for participation.
9.1.3 How practice aligns with Knowledge Maturing
The form careers practice takes could be influenced by:
• An organisational view of an appropriate approach to follow;
• Work flow – that is, how clients arrive for careers guidance (whether they were
referred, or whether they elected to come along themselves); and
• The professional judgement of the practitioner.
The nature of the service drives recruiting, staffing, personnel development, formal and informal
knowledge sharing events, together with organisational targets – rather than 'management by
objectives'.
• Recruitment: career guidance practitioners and support staff deliver the core service, with
support staff occupying subsidiary roles – but service delivery remains the prime goal of the
organisation.
• Professional development: there are particular requirements for both general and specialist
updating of careers practitioners, who find this challenging because of the many demands on
their time (this is one of the key motivating factors for their wishing to participate in
MATURE).
• Formal knowledge sharing events: conferences and seminars, with explicit updating
functions.
• Informal knowledge sharing: commonly an updating function through meetings, email, web
communities etc.
• Organisational targets: these are determined by purchasers of services and largely drive how
these are delivered in terms of types of support offered (e.g. one-to-one interviews; group work
sessions; careers conferences, etc.).
The following report comprises three parts:
• Section 9.2 focuses on the research, development and implementation of two ICT systems in
Connexions Kent (Career Constructor and CareersNetKent), which demonstrate how the ICT
confidence of the workforce has grown over a four-year period, and how ICT is now
integrated into everyday practice and integral to professional knowledge development. While
CareersNet was originally aimed at Continuous Professional Development (CPD), it evolved –
presumably under the influence of exposure to the concepts of the MATURE project – into
supporting Knowledge Maturing.
• Section 9.3 summarizes the research, development and piloting of the online LMI capacity
building and development software within the MATURE project (assuring quality for social
learning in content networks: Demonstrator 1 and the Connexions Kent Instantiation; see also
chapter 6).
• Section 9.4 draws conclusions and comparisons between the development of the separate
systems.
All sections detail attempts by development teams to engage and facilitate Knowledge Maturing
practices in Connexions Kent. Both ICT systems were intended to represent a user-led design
approach.
9.2 Researching and supporting Knowledge Maturing for professional development (CareersNet and Career Constructor)
UWAR, in partnership with software developers from two SMEs (Pontydysgu, Wales and Raycom,
Netherlands), have been working closely with Connexions Kent since 2007 to research, develop, pilot
and implement two systems: first, an INSET website to meet the professional development needs of
practitioners; and second, an e-portfolio for young people. However, as a consequence of end-user
involvement in the research process, the initial conceptualisation of both systems was transformed.
This is the story of the research process and the transformation of the ICT systems, which resulted in:
• an integrated ICT system, Career Constructor, comprising a range of online tools to support
careers education, information, advice and guidance (CEIAG) services; and
• a closed community knowledge development website, CareersNet Kent, comprising a resources
section, community news, plus group and messaging functionality.
The research and development of these systems was part of a wider project aimed at designing and
implementing innovative approaches to CEIAG across Kent, with the purpose of increasing the quality
of services to young people. Specifically, the overall project remit was to help Connexions Kent
position itself as a world-class service in the delivery of high quality IAG services. It comprised four
discrete, but inter-related work-packages:
• thorough review of CEIAG provision in secondary schools, using a qualitative, in-depth case
study approach;
• designing, testing and recommending an approach for sustainable e-portfolio development for
CEIAG across the region;
• a feasibility study into the development of local labour market information (LMI) for the region,
available on-line; and
• developing a model of sustainable training support for the use of effective LMI in the IAG
process.
The major outcome of the second work package of the research was the development of Career
Constructor, an online integrated ICT system aimed at supporting Connexions Kent Personal Advisers
(P.A.s) in the delivery of CEIAG services. The requirements and design of the system were identified
through extensive research, consultation, piloting and evaluation. The research process involved
consultations with key stakeholders, including students, school staff and Connexions Kent
management and practitioners on an iterative basis over a period of three years. A super-user group of
Connexions Kent practitioners was convened, which played a pivotal role in shaping the development
and undertaking the pilots and evaluation. This ‘end-user’ group provided invaluable feedback on how
the system worked in the schools, how it operated and what it looked like. The pilot e-portfolio
system, named Freefolio, was designed and tested in 2007/2008, and in 2009 it was redeveloped in
response to feedback and launched in 2010 as Career Constructor.
Following on from this development, practitioners became more aware of the potential of ICT and
how its applications could support their work. Their understanding of knowledge had grown as a result
of the project. The scoping exercise undertaken for the first work package developed our
understanding of the essential role knowledge sharing plays in careers education and guidance
practice, but also highlighted that few processes were in place to support this essential sharing.
Practitioners, operating in isolation or within small local teams, researched and shared information on
the labour market, courses, careers paths, local opportunities etc. The majority of information and
resources were maintained by the individual, some was shared face-to-face or stored in hardcopy at the
local office, and on rare occasions resources were uploaded onto the local intranet. From the research,
it became apparent that knowledge was not shared across the county and work was being duplicated.
This led to Connexions Kent management approaching UWAR to develop an INSET website to record
professional development activities, to collaborate and share information and knowledge. This led to
the design and implementation of CareersNet Kent, a community knowledge development website.
The overall outcome of the work with Connexions Kent has been the successful implementation of
ICT and the increased confidence of practitioners not only in the use of ICT, but also the integration of
ICT into their practice and professional development activities. This demonstrates how knowledge of
ICT and its implications for work/practice has matured over the course of the project. Knowledge
development has been transformed from an isolated into a collaborative activity, and knowledge itself
from individually held to shared.
9.2.1 Scoping the role of ICT in Kent
Prior to the start of the project a scoping exercise was undertaken to investigate CEIAG provision in
schools across Kent using interview and document analysis methods. This scoping exercise
investigated the provision, explored the particular role of ICT in existing provision and assessed what
place, if any, an e-portfolio could have in provision.
The scoping exercise revealed a lively, varied and expansive terrain of CEIAG provision. Four distinct
models of provision emerged from the data: integrated; stand-alone; peripheral; and
transitional. From this, it was clear that an e-portfolio could be of value and a system designed for
Kent would offer a common CEIAG activity. However, it was evident that designing an ICT system to
fit the different models of provision would be complex, as there would be little consensus on the key
purposes of such a system. It was also noted that the system should not overlap with or duplicate other
systems in operation in Kent and used by schools. Importantly, the research also revealed how the
potential of ICT was not being exploited in knowledge development and sharing.
At the start of the project (2007), a series of awareness raising and consultation events with
practitioners and senior management from Connexions Kent were organised as part of the research
process. (This group formed a super-user group to guide the research and development of the e-portfolio and professional development systems.) At an initial event, both systems were discussed,
including their purpose, design and functionalities.
9.2.2 Developing the e-portfolio system – Career Constructor
9.2.2.1 Phase 1: Designing and testing the e-portfolio
Freefolio, the initial name for the e-portfolio system (a repurposed early design of which was also
explored in year 1 as Design Study 4 with respect to its potential support of Knowledge Maturing, see
D6.1 and D2.1, but at that time not considered further within MATURE in favour of a more
comprehensive Knowledge Maturing solution), was developed collaboratively in response to those
ideas and expectations from the super-user group consultations about what an e-portfolio should do.
Freefolio was online and required no software to be downloaded in order to be used, so could easily be
implemented. It was designed to be a ‘closed system’, which meant it could only be accessed by those
authorised to do so through a log-in. The pilot e-portfolio, Freefolio, had four core functionalities:
• a reflective diary or blog with structured tools designed to support CEIAG activities;
• a personal ‘dashboard’ (for organising and presenting information sources);
• discussion and comments features for user-to-user community building; and
• spaces for collecting, organising and sharing resources.
Also, in response to the consultations, Freefolio was designed to enable each user/student to have an
individual e-portfolio in which they could:
• complete a personal profile and a personal development plan;
• write and reflect on their learning (both formal and informal);
• complete structured entries to help research jobs, prepare for work experience etc.;
• assess their skills, strengths and weaknesses;
• link to careers websites and the area prospectus;
• access and upload resources; and
• discuss content with other Kent and Medway Freefolio users participating in the pilot.
Overall, Freefolio was designed to support and enable users/students to develop and demonstrate IT,
communication, research and networking skills, but also promoted reflective practice and knowledge
development.
E-portfolio pilots were undertaken in four schools across the region (between October 2007 – March
2008). Access to the schools was negotiated by members of the super-user group using a briefing note.
Activities and resources were researched and identified by the research team for use by practitioners in
planning the pilot e-portfolio sessions. In addition, the research team developed a training guide and a
short film about Freefolio. The film was designed to be an introduction to the system for the students.
The e-portfolio was evaluated by the students using the evaluation form. Teachers who attended the
sessions also completed the evaluation form. Practitioners completed the observation sheet and wrote a
short report on their experience of the pilot and evaluation session, which were emailed to the research
team. Although four pilots and evaluations were organised, technical difficulties in one school meant
students were only able to look through the training guide and discuss what they thought about e-portfolios. Across three of the pilots, students (n=28) were asked to complete the evaluation and
feedback form; 16 of the students were in Year 7, and 12 were in Year 12.
Feedback on the e-portfolio was received from all practitioners participating in the pilot and two
members of school staff using the student evaluation form, observation sheet and by email. Many
questions were raised regarding: the role of the e-portfolio and its possible integration into existing
school virtual learning platforms (VLP); and issues of confidentiality. Although there were questions
regarding who sees the information, students sharing ideas, experiences and their own careers research was considered a beneficial and useful element of the system, as it broadened students’ horizons and knowledge base.
Generally, positive feedback was received with reports that the system was easy to use after a short
period of orientation and use. The structured blog entries around careers information and research
were noted as the most useful aspects of the e-portfolio. Mixed responses were received on the
usefulness of the comment feature. It was also noted that some aspects of the navigation were
complicated. Overall, the e-portfolio was considered a good idea, but required more work particularly
on the aims and ‘look and feel’ of the system.
Students also liked the e-portfolio, although some found elements difficult to navigate or unattractive. Elements that students liked included: creating their own profile and homepage; ease of navigation and brief text/descriptions; structured entries (such as researching options and careers); CV creation; and the
amount of information that could be stored.
9.2.2.2 Phase 2: Reflecting on lessons learnt to redevelop the e-portfolio
The research team, software developers and the super-user group met in early summer 2008 to review
the pilots and consider lessons learnt. The four pilots had gone well and both technical and practical
lessons had been learnt, including: adequately briefing schools to ensure ‘buy in’; managing access and log-in issues; addressing technical issues with the schools early on in pilots; and presenting the e-portfolio better in terms of its role and application, together with issues of confidentiality. It was
suggested that in the future, the e-portfolio could be introduced as part of careers and made more of a
careers tool. The practitioners, forming the super-user group, demonstrated confidence in their
technical ideas for the redevelopment and started to challenge the technical team.
A second install of Freefolio was undertaken, addressing the comments and feedback from students and practitioners’ experiences of piloting in schools. The refined Freefolio was aimed at
helping schools deliver elements of the new Key Stage 3 curriculum, including supporting students to:
write a personal statement and make an individual learning and career plan for their transition into the
14-19 phase; and make links between economic well-being and financial capability and other subject
areas of the curriculum.
Two pilots were undertaken with 18 Year 7 and 9 students in early 2009. Students were given a
demonstration of Freefolio and were then given the opportunity to develop their own e-portfolio using
the guide. Questions were mainly around issues such as: how to save data; how to customise; how to resolve log-in issues; and how and what to write about (even when using some of the more
structured templates). Although students were reportedly confident in using the technology, navigation
of the system continued to be problematic. Students reported that they preferred using a PC to writing by hand, valued the links to wider resources, and enjoyed sending messages and posting information. Students were encouraged by the school to keep posts ‘private’ as there were concerns
about confidentiality. Overall, students believed that the e-portfolio would be beneficial as it was a
good method of storing and reflecting on information that would be useful to them in the future.
9.2.2.3 Phase 3: Developing web-based tools – Reflections on phase two informing phase three
The research team, software developers and Connexions Kent staff met to reflect on the pilot e-portfolio development and evaluate the feedback from the key stakeholders. The piloting process had
produced a clear steer for future development:
• work more closely with practitioners to support their work in individual schools;
• provide more structured support to practitioners for this process; and
• deliver improvements to the system indicated by user feedback.
It was clear that schools were no longer interested in using Freefolio in its current form as it required
time to build into the curriculum for students to use. It was also agreed that it needed a lot of further
development to improve its accessibility and use. In response, three models were proposed to progress
the e-portfolio in phase three of the research project. The preferred model was agreed to be a set of
web-based tools, as they could be completed independently by the student in school (in tutor-led time
or PSHE lessons) or at home. Students would also be able to work on the tools with the practitioners.
It was agreed that a set of web-based tools would be developed and designed to support CEIAG
delivered by practitioners. These tools would be used as part of practitioner-led career sessions and
could be accessed by students and young people independently at a later date. The framework of tools
would be integrated into the Connexions Kent website and designed with Connexions branding. Some
initial suggestions for tools by the practitioners included: CV; skills and interests – linked to videos;
achievements; goal setting and action planning; personal statement; researching careers; and careers
resources and links. Practitioners outlined the initial system requirements. Their involvement had
evolved (over the lifetime of the project) from suggesting designs to leading the technical
specification.
In summer 2009, the super-user group (comprising four practitioners, a guidance development officer
and one member of senior management) met with the research team to start to outline and specify the
possible tools to be developed for the new system. Prior to the meeting the research team and the
practitioners had gathered a range of paper-based CEIAG activities. At the development meeting the
options for tool development were discussed and each was assessed in terms of its usefulness to
students. Each practitioner took responsibility for researching and designing a specification for two
tools. In autumn 2009, the super-user group, research team and software developers met to finalise the
tools to be developed. Six tools were agreed to be developed in the new system that was to be named
Career Constructor.
9.2.2.4 Phase 4: Implementing Career Constructor
In winter 2009, Career Constructor was completed. It was developed using the Freefolio platform, but,
in response to feedback, had a new interface and included six tools on the dashboard. Much of the
development work on the web-based tools involved building a system, based on the CRCI categorisation, that could be easily extended in the future. The system includes over 800 job profiles and 400 careers videos. The six web-based tools are:
• The Career browser tool enables the student to browse a range of jobs and careers by selecting
a sector/job family where detailed information on the job family is displayed. The student can
then browse the jobs in the family and select a job for more information on: what the job is like;
hours and work environment; salary and other benefits; how to get in and on in the job; training;
and other information. The information is imported into the system from Jobs4u, so is
constantly updated. Included in this section are links to external resources, related jobs, case
studies and careers videos from iCould. There is also the option to search for a specific job.
• Linked to the Career browser is Career rating. Using the same information, career rating
enables the user to rank each job using a sliding scale of 1 (‘not interested’) to 5 (‘very interested’). Results are saved so a student can view their favourites, which are ranked in order of
preference. These jobs are linked to further information which is located in Career browser.
Students are able to amend their rating at a later date.
• The Labour market tool enables a student to view statistics on the workforce and employment
data at a national, regional and local level for several years to learn about the changing labour
market and its trends. Statistics are shown as simple and colourful bar charts.
• My career tool is where the user can store personal information about the subjects they are
studying, hobbies and activities, achievements, club and society memberships, and future
aspirations. This can start to form information for a personal statement or CV.
• The Career research tool is based on the ‘researching a career’ proforma originally designed
and developed for Freefolio. Students are given hints on where to research and find out about a
particular job. They are then able to create a job profile based on their own research. The
information is saved in Career Constructor, and can include links and uploaded documents.
Students are able to review their research and edit and extend at a later date.
• The Making decisions tool is based on the Connexions Kent programme ‘So what am I like?’,
which asks the student about who they involve in making decisions and how they make their
mind up. Results start to give students an idea about how they make decisions and point them in
the direction to find out more about themselves.
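The rating-and-ranking behaviour of the Career rating tool described above (rate each job 1–5, amend ratings later, view favourites ranked by preference) can be illustrated with a short sketch. The class and job names here are hypothetical; the actual Career Constructor implementation is not documented in this report.

```python
# Hypothetical sketch of the Career rating behaviour: students rate jobs on a
# 1-5 scale, may amend their rating later, and can view a preference-ordered
# list of favourites.

class CareerRating:
    def __init__(self):
        self._ratings = {}  # job title -> rating (1-5)

    def rate(self, job, rating):
        if not 1 <= rating <= 5:
            raise ValueError("rating must be 1 ('not interested') to 5 ('very interested')")
        self._ratings[job] = rating  # amending simply overwrites the old rating

    def favourites(self):
        # Jobs ranked in order of preference (highest rating first)
        return sorted(self._ratings, key=self._ratings.get, reverse=True)

ratings = CareerRating()
ratings.rate("Graphic designer", 5)
ratings.rate("Accountant", 2)
ratings.rate("Veterinary nurse", 4)
ratings.rate("Accountant", 3)  # the student revises an earlier rating
print(ratings.favourites())  # -> ['Graphic designer', 'Veterinary nurse', 'Accountant']
```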
In Spring 2010, Career Constructor was tested by the super-user group and the research team to ensure
its robustness. In June 2010, Career Constructor was launched and promoted amongst Connexions
Kent staff. Over the summer, Connexions Kent staff tested the system ready for use within schools in
Autumn 2010.
9.2.2.5 Conclusions and reflections on Knowledge Maturing
Over a three year period (2007-2010), there has been a development process of scoping, literature
review, awareness-raising and consultations in the design, piloting, refinement and evaluation of an e-portfolio system. This has involved a wide range of key stakeholders including Connexions Kent
management, practitioners, school representatives and 46 students from across six schools. The
research process and development has been iterative following a process of research, defining, testing,
refining, piloting and so on. Career Constructor has been the result of this iterative process. It has been
developed on an evidenced need for a common CEIAG activity across the region and a tool to support
the work of Practitioners.
The research team and Connexions Kent staff reflected upon the results of the pilot evaluation and the
challenges of implementing the system. These included the need to:
• design and provide a system with a good menu (including sections on work experience, courses
chosen and interests/hobbies);
• define a clear set of boundaries – ‘scope of use’ and ‘acceptable use policy’;
• develop a user profile recognising individual skills and feeding into the university application
statement;
• train all groups involved in the pilot;
• understand schools’ IT security (i.e. which websites are blocked);
• gain commitment and time from the school and be able to access reliable IT systems within the
school;
• brief schools about the potential benefits of the system;
• ensure allocation of time for Practitioners; and
• avoid any duplication of systems (e.g. with ILP).
Throughout the project it became clear that there are a number of organisational considerations which
needed to be addressed in the development, including:
• general issues of data ownership and maintenance;
• confidentiality;
• relationship with management structures (both within Connexions Kent and schools);
• potential risks (e.g. confidentiality and safety);
• specific issues relevant to school policy and organisation;
• regulatory and policy issues; and
• support for individuals (including users, Practitioners and, where relevant, school staff) engaged in portfolio development in terms of training, dedicated time and recognition/accreditation of informal learning.
Rich feedback has been collected from the various stakeholder groups. A recurrent theme has related
to the rich, but varied, tapestry of CEIAG provision in schools across Kent. Because there is little
consistency in the priorities and purposes assigned to CEIAG across different schools, there was,
unsurprisingly, little consensus regarding the key purpose for which an online system could be used.
To address this, Career Constructor was developed specifically to focus on supporting Practitioners to
deliver high quality services to schools.
Overall, the approach adopted has been successful, as it has taken into account both organisational and
users’ needs. Throughout the design and piloting of the e-portfolio emphasis has been placed on the
process of ensuring that the system accommodates the particular needs of its users, rather than the
needs of its users having to be fitted into an existing e-portfolio product. Lessons, however, have been
learnt during this development process on how the system could be improved, which resulted in the
development of the web-based tools, Career Constructor.
9.2.3 Developing the INSET website (CareersNet Kent) to improve Knowledge
Maturing collaboratively
At the start of the project (2007), a series of awareness raising and consultation events with
practitioners and senior management from Connexions Kent were organised as part of the research
process. This group formed a super-user group to guide the research and development of Careers
Constructor (a set of online tools to support CEIAG services). During the design and development of
Careers Constructor, practitioners became more aware of the potential of ICT and how its applications
could support their work. So, in 2008, alongside the development of Careers Constructor, the
professional development system (named CareersNet Kent) was designed, applying (again) a user-led
approach. The aim of the research was:
• to adopt a user-led approach in the design, development and implementation of an ICT system
to support and record Connexions PAs’ professional development.
At an initial event, the INSET website was discussed, including its purpose, design and functionalities.
From the outset, it was important that the website development and design was led by the users – the
Connexions PAs. The role of an INSET website was debated by the super-user group. It was agreed
that the overall aim should be a central place to record both formal and informal learning and share
experiences.
9.2.3.1 Setting the context: Scoping the role of ICT in Kent
The scoping exercise undertaken for the first work package developed our understanding of the
essential role knowledge sharing plays in careers education and guidance practice, but also highlighted
that few processes were in place to support this essential sharing. Importantly, the research revealed
how the potential of ICT was not being exploited in knowledge development and sharing.
Practitioners, operating in isolation, or within small local teams, researched and shared information on
the labour market, courses, careers paths, local opportunities, etc. The majority of information and
resources were maintained by the individual, some were shared face-to-face or stored in hardcopy at
the local office, and on rare occasions resources were loaded on the local intranet. From the research,
it became apparent that knowledge was not shared across the County and work was being duplicated.
This led to Connexions Kent management approaching the UWAR to develop an INSET website to
record professional development activities, to collaborate and share information and knowledge and to
support those new to a careers post or newly qualified.
9.2.3.2 Developing and designing the INSET website
The super-user group comprising practitioners, management and the guidance development officer
met in May and July 2009 to discuss and refine the desired functionality of the INSET website. It was
agreed that the aims of the INSET website were to:
• support the exchange of ideas, supervised discussions and the production of online articles;
• enable peer group discussions;
• help staff develop skills and expertise; and
• include helpful information (such as what do I need to know on my first day).
As practitioners had become more aware of the value of embedding technology in their work and more
confident in their skills, they were enthusiastic about the potential and keen to lead the development.
The super-user group outlined the desired functionality of the INSET website, including:
• to be accessible to those working with Connexions Kent (including those in schools and local
colleges);
• to be a closed community with restricted access;
• to have sections for targeting the newly qualified, established and those in leadership roles;
• to contain a searchable and updatable resources section;
• to contain the ‘Work for Tomorrow’ LMI;
• to include a place to post events;
• to have a space in which to record own training, learning and development activities; and
• to include messaging and group functionality.
Although the website was initially targeted at those working for Connexions Kent, the audience was revised to include trainee careers co-ordinators and those interested in CEIAG. The positive benefits of
such a system in supporting practitioners were discussed. The importance of it being led by the
practitioners was also highlighted.
Between August and October 2009, the super-user group met several times to refine the brief for the
team developing the INSET website. At these meetings a range of resources were identified for
uploading into the resources section of the website. Resources were divided into internal, local,
regional and national sources and a list of initial ‘tags’ or labels were identified by the group.
Resources were then tagged by the super-user group. Greater involvement with ICT had resulted in
practitioners understanding the concept and practice of ‘tagging’. The name for the INSET website
was also discussed. After some checking with available domain names, it was agreed that the website
would be called CareersNet Kent. The following functionality for the INSET website was also
discussed and agreed with the developers: ‘calendar’ – it was agreed that this would be useful for
recording the dates of meetings and future events; and ‘RSS feed’ display – it was agreed that this
would be useful, if appropriate feeds could be found. Positive feedback was received on the website in
September 2009. Some suggestions were received on users being able to comment on/discuss
resources as well as rate individual resources. In October, ‘tag gardening’ (this is where tags are
checked, amended and rationalised) took place as members of the super-user group recognised the
need for this activity. RSS feeds and widgets (i.e. maps, weather) were also identified for display on
the homepage.
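The ‘tag gardening’ activity described above, where tags are checked, amended and rationalised, amounts to normalising labels and merging agreed synonyms so the tag vocabulary stays consistent. A minimal illustration follows; the function name, synonym mapping and tag values are invented for the example and do not reflect the actual Freefolio implementation.

```python
# Hypothetical illustration of tag gardening: normalise case/whitespace and
# merge agreed synonyms so the resources section uses a consistent vocabulary.

# Synonym mapping agreed by the super-user group (invented examples)
SYNONYMS = {"lmi": "labour market information", "cv": "curriculum vitae"}

def garden(tags):
    cleaned = set()
    for tag in tags:
        tag = tag.strip().lower()        # normalise case and surrounding whitespace
        tag = SYNONYMS.get(tag, tag)     # merge known synonyms into one canonical tag
        if tag:                          # drop empty tags
            cleaned.add(tag)
    return sorted(cleaned)

raw = ["LMI", " labour market information", "CV", "Apprenticeships", ""]
print(garden(raw))
# -> ['apprenticeships', 'curriculum vitae', 'labour market information']
```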
The first install of the INSET website was designed to match the corporate design/style/colours of
Connexions Kent. The first install of the website was demonstrated in December 2009 to the super-user group and management. In early 2010, the idea that there were different signposts for different
users was explored. There were some concerns about the complexity of the current homepage, which
led to the redesign of the homepage with more direction for specific targeted groups. Various
improvements to the website were agreed. It was also agreed that a dedicated person in Connexions
Kent would need to be identified to maintain the resources section of the website by uploading new
resources and replacing dated resources. These changes were implemented. In March 2010, a selected
group of users were asked to pilot the website, before the official launch. It was agreed that two people
would review and identify resources on a monthly basis to maintain the resources section of the
website.
CareersNet Kent was launched in October 2010. Since the launch of the website in 2010, further
refinements have taken place:
• A ‘News’ facility has been set up for publications as an alternative to weekly email alerts.
Policy documents, events (both local and national) and general news items are posted on the
site. Members can subscribe to this news by RSS or email.
• A careers co-ordinator is writing a fortnightly bulletin about local opportunities and useful LMI,
which is now added to CareersNet Kent ensuring that all staff can access and use the
information.
• An administrator has been appointed to maintain the repository; reviewing and updating
resources on a monthly basis.
CareersNet Kent comprises:
• Dedicated sections listing resources and information guides for the ‘newly qualified’, those in
‘established’ positions and those who have been in post for some time and are in a ‘leadership’
role;
• ‘News’, which is updated regularly with information on local and national events, policy
documents, useful information and internal news. This also displays RSS feeds from particular
organisations and government departments;
• ‘Resources’, a repository of internal and external documents to download. Users can rate each
resource, add a comment or add to their private list of ‘favourite resources’;
• ‘Careers weekly bulletin’ which is produced by a careers co-ordinator containing a range of
local labour market information and news, plus training and employment opportunities;
• A ‘Members’ section where users can view and edit their own profile, record their informal and
formal professional development activities, view their activity on the website, send messages to
and connect with others;
• A ‘Groups’ section where users can subscribe to and follow the activities/news of particular
groups;
• ‘Forums’ for users to discuss relevant topics, pose questions and join group forums;
• ‘Work for Tomorrow’, which links to all labour market information (local, county, regional,
national, sectoral) resources, a guide to assessing labour market information, and an analysis of
graduates in the region;
• A calendar of ‘Events’, listing both internal and external events information; and
• Two types of ‘Search’ facility – a tag cloud and a standard search box.
Use of the website is increasing. From a potential user group of over 430 individuals working for or
with Connexions Kent, 65% are registered users and of those 21% actively log on and use the site on a
daily, weekly or fortnightly basis. Since its launch, there have been 204 posts (not including the News
items and the Careers weekly bulletin) and 18 groups formed (that are actively sharing information
and discussing ideas online). As staff are now working remotely, CareersNet Kent has become a vital
means of communicating with colleagues, sharing resources and keeping up-to-date with careers
guidance policy and practice.
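In absolute terms, the percentages reported above imply roughly the following user numbers. This is a back-of-the-envelope calculation only, since the report gives a lower bound (“over 430”) and rounded percentages; the computed figures are therefore approximate minimums.

```python
# Approximate user counts implied by the usage figures above,
# using integer arithmetic on the stated lower bound of 430.
potential_users = 430                          # "over 430 individuals"
registered = potential_users * 65 // 100       # 65% are registered users -> 279
active = registered * 21 // 100                # 21% of those log on regularly -> 58

print(registered)  # -> 279
print(active)      # -> 58
```

So roughly 280 people are registered, of whom just under 60 use the site on at least a fortnightly basis.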
9.2.3.3 Reflections on CareersNet Kent
Over a three year period (2008-2010), there has been an intensive research and development process
for CareersNet Kent that has involved a scoping exercise, awareness-raising and consultations in the
design, development and implementation of the website. The development process was based on a
user-led design to ensure that the needs of the PAs were met. This has helped ensure that the implementation has been successful and that the site and its range of functionalities are useful. The
development process has been led by the super-user group which was formed to guide the design of
the website. Evaluations of the website from the super-user group have informed not only the refinements of the website, but also its future developments. Ad hoc evaluations and feedback from individual users have also been collected, and have proved invaluable in improving the website functionality.
Connexions Kent staff have reflected upon how they use the website, what is useful and how it has
changed their practice. These reflections were focused on:
• Accessing up-to-date news and information, which is considered invaluable in the changing
context;
• Connecting with colleagues and being able to pose questions and get help;
• Enabling efficient working practices and pooling of resources; and
• Accessing resources and the ‘Work for Tomorrow’ LMI information.
Throughout the research project it has become clear that the successful implementation and use of this
website is the result of:
• the design and functionality being led by the users, ensuring that needs have been met and it is
useful and accessible;
• the website having had the full support of management who have promoted it and set
procedures in place to ensure that it is up-to-date and the main means of communication within
the organisation.
Since the launch of CareersNet Kent there have been on-going further developments and refinements
to the website to meet the changing and growing needs of the Connexions Kent staff. These
refinements have also been in response to knowledge of technical capabilities maturing. The addition
of the ‘News’ posts has changed the way information is communicated and shared in the
organisation. Discussions are underway to include a more dynamic section on labour market
information using open and linked data.
9.2.3.4 Conclusions and examples of Knowledge Maturing
The following exemplify how the website has been appropriated demonstrating how Knowledge
Maturing is being facilitated:
• One practitioner, who writes a fortnightly bulletin about local opportunities and useful LMI, started
adding this resource to the INSET website for other practitioners to use.
• Resources comprising the knowledge base are reviewed and updated on a monthly basis.
• Practitioners have instant access to internal documents and new policy documents.
• A ‘News’ page was developed as an alternative to weekly email alerts. News, minutes and
important information are posted on the site. Members can subscribe to this news by RSS or
email.
• Groups (local or those geographically dispersed with a common interest) formed and started
sharing information and news between meetings.
• Practitioners are using forums to debate current issues and share information.
• Practitioners are identifying and uploading resources for colleagues to access.
The overall outcome of the work with Connexions Kent and the development of the CareersNet Kent
has been the successful implementation of ICT and the increased confidence of practitioners not only
in the use of ICT, but also the integration of ICT into their practice and professional development
activities. This demonstrates how knowledge of ICT and its implications for work and practice has
matured over the course of the project. Knowledge has been transformed from an isolated to
collaborative activity, and from individually held to shared.
This research and development process represents a highly innovative approach as it followed an
iterative process of development. IT systems and software are often developed without the insight of users. This development, in contrast, was not only researched with users, but key elements were also designed by those users. In this instance, the involvement of users expanded the original remit of the website from an INSET website to a collaborative and organisational networking tool. This has resulted in an innovative and useful practitioner website with a range of functionalities. CareersNet Kent, the result of this process, offers a tailor-made system designed by careers guidance practitioners in Connexions Kent to support their work.
This project exemplifies how practitioners can become central to the research and development
process and, more importantly, how they can secure successful outcomes that maximise impact in changing practice and maturing knowledge. Connexions Kent’s engagement with the KM
concept has contributed to changed practice over time (from a medium/meso level view).
9.3 Researching and supporting LMI Knowledge Maturing for careers guidance practice
The objective of this MATURE project demonstrator (named ‘Quality assurance and social learning in
knowledge networks’, Demonstrator 1 in D2.3/3.2), as described in chapter 6, was to actively support
social learning in a distributed setting with a focus on content aspects. The aim of the demonstrator
was to support knowledge workers in sharing their knowledge and experience and to foster informal,
work-integrated learning when dealing with rapidly changing information, as for example with LMI in
the context of career services.
9.3.1 The design process
The participatory design process was primarily grounded on the ethnographic studies and the design
studies performed during the first year of the project (in close collaboration with the application
partners). These studies provided guidance for the design process, in terms of functional requirements,
expressed through Personas and Use Cases. Based on these, a more elaborated version of the
application scenario was developed in close interaction with application partners. An open-ended,
participatory and iterative design process was followed. Together with application partners, a set of
design artefacts was developed, starting from early stage paper-based mock-ups in year 1 to low
fidelity prototypes at later stages, continuously integrating the gathered feedback into the on-going
development.
9.3.1.1 Phase 1: Defining the requirements (January 2009)
As part of design study 1 (“A semantic media-wiki for maturing career guidance knowledge in use”)
and design study 5 (“OLMEntor – A demonstrator for the organisational learning and maturing
environment”) in year 1, a start-up workshop was held in January 2009. The main purpose of the
workshop event was to obtain feedback from key stakeholders – particularly careers practitioners and
managers – on the two new ICT LMI prototypes developed by the MATURE project team. A key
design principle for the prototype developments was to ensure that knowledge matured by guidance
practitioners (i.e. labour market information) as part of their practice was captured and made more
openly accessible through the intelligent application of ICT support systems.
Ten representatives from the application partners attended, representing guidance organisations from Scotland and England. In addition, two policy makers attended, representing the adult guidance sector in England.
A broad overview of the project aims and objectives was presented. Participants were encouraged to
focus on ‘future perspectives and new possibilities’ for constructing ICT support systems to inform
and support the use of LMI in a guidance context. In particular, judgements about the efficacy of the
prototypes and their potential application by individuals in an organisational context were required. A
brief background to the design process and how this aligned to the Knowledge Maturing model was
provided. From this workshop, information was gathered regarding: current practices in researching
LMI; and organisational and individual aspirations re: community collaboration for LMI in careers
practice.
A demonstration of the concept of an LMI wiki was welcomed. However, feedback emphasised the
importance of quality assurance being a key driver of design of the system and a user-friendly
interface was deemed essential for the full integration of the system in everyday working practice (the
‘look and feel’ of the system to users).
The first prototype demonstrated (DS5), ‘OLMentor’, focused on Knowledge Maturing based on
adaptive business processes, functions and data. Workshop participants took time to reflect on the
possible applications to practice. Many elements of the prototype were liked, but participants found it
difficult to envisage how at its current level of development it could be integrated into current systems.
The second prototype demonstrated (DS1) was ‘A semantic media-wiki for maturing career guidance
knowledge in use’. It comprised a wiki system built on “MediaWiki”, the technology that
“Wikipedia” is based on. This system would assist with the collection and
dissemination of labour market information. The system allows for easy content creation by
practitioners and the sharing of information about the local labour market with colleagues. MediaWiki
uses “web 2.0” technology to source content from other ‘places’ like YouTube, Yahoo, Flickr, etc.,
and has integrated useful collaboration tools such as Skype. This system focuses on assisting the way
individual practitioners may work with a client.
During the workshop, there was much reflection on how the system could be developed in terms of:
visual adaptation; editing; search; control; and contacting people. It was also discussed how the system
should provide an intelligent registration system; support training and staff development; allow
integration with existing systems and datasets; support a personal view on organisational
knowledge; provide a community-driven quality assurance process; and regularly inform users who
have subscribed to different areas of interest of newly added artefacts. It was agreed that the system
could support a coherent organisational identity in the development of knowledge artefacts and allow
networking.
9.3.1.2 Design and piloting the system (November 2009, April – June 2010)
In year 2 of MATURE, DS1 evolved into Demonstrator 1 (“Assuring Quality for Social Learning in
Content Networks”, see D2.2/3.2), moving from the Semantic MediaWiki-based design study to a
broader widget-based approach that can support a wider range of user tasks. This phase
was used to demonstrate different stages of development of the system to Connexions Kent. The core,
or ‘super user’, group of practitioners retained some of its original members, but because of
contextual changes, some were lost to the group. Workshops were held in 2009 – always in Kent and
always with senior managers present. The principle of end-user design structured the development of
the LMI system. Each time a workshop was held, feedback from prospective end users was carefully
documented and fed back to the technical teams.
The demonstrator was further evaluated in two sessions in April and June 2010 with selected
practitioners from Connexions Kent as part of the Formative Evaluation, see D6.2. The aim was to:
• Gather supporting evidence of our assumption that the Demonstrator would be appropriate and
useful in a work context for social learning and Knowledge Maturing purposes;
• Assess the usability of the Demonstrator in terms of system usefulness and interface quality;
• Gather and refine functional requirements, especially concerning the emergence of common
semantics, information quality and improving retrievability of resources, which had been
identified as important in use cases in phase 1 of the formative evaluation;
• Gather further non-functional requirements, especially those we hope would emerge from a
natural interaction with the demonstrator in a semi-realistic context; and
• Discover new use cases, although we expected to find less evidence of this at this stage of
development.
In April 2010, the first ‘hands-on’ workshop took place. Seven employees
of Connexions Kent participated in this first stage of the pilot, including two senior managers. They
met together in an ICT training room, in which technical staff had already loaded the demonstrator.
They had 3.5 hours to work together. For the first session, they decided that they would focus on the
topic: ‘Using labour market information in information, advice and guidance (IAG) with young
people’. So, they were all working on a common theme. The group leader made notes on flip charts,
which were developed throughout the session.
Feedback from first ‘hands-on’ session
Despite some members of the group being present at previous discussions and presentations (in
November 2009 and April 2010), it took them a while to make sense of the system. They were
surprised by how difficult they found the system to operate. For example, when they opened the main
system, there were ‘a load of buttons and it was not obvious what you had to do next’. By the end of
the session, only two felt that they fully understood the potential of the system. ‘It was difficult to get
through the beginning stages.’ The ‘User Guide’ was printed off in hard copy and each member of the
group was given one. Even using this guide, they struggled to use the system.
Users noted that the discussion forum was easy to operate – the group was able to communicate with
the ICT developers during the session and were able to resolve some problems experienced in this
way. They were able to get into the Firefox browser and operate what they needed to (e.g. plug-ins).
However, it wasn’t altogether obvious how they should register with Firefox. The only part that
asked them to register was when they wanted to create a new article – they could see that they needed
to do this in the top right-hand corner. They also felt that having to sign in and register for Firefox
separately from the wiki made use of the system more difficult.
The process of tagging they found complex in Firefox – and had particular difficulty retrieving
material. This emerged as a key frustration - not being able to retrieve material they had loaded into
the main system. They were able to find materials they wanted to save and to rate them (for
example, with a four), but when they then wanted to retrieve this material, they could not work out how to
do so. So although they were able to search, tag and rate, they then had considerable problems
retrieving this material.
They found that they needed to open lots of windows, then got confused about what they were doing
and where they were. Users had difficulties in managing and manipulating multiple windows on a
laptop screen. Four managed to complete a test article and look at it on the screen. But this took time
to work out – it was not an intuitive process.
Some members of the group created a higher education (HE) collection and moved material into this
folder. Sharing information was challenging. Dragging an article over to the file was fine, but it took a
few times to work out that you had to drop it on the icon for it to appear.
Then, the person working on the PC next to them could not access the material in the collection. When
they opened the main toolbar, it was not there as they had expected it would be. They asked for clearer
guidance on how to share collections. They understood that the folder asks whether they wanted this to
be personal or shared, and even though they clicked ‘share’, they could not get it to work. Users found this
frustrating. When one member of the group tried to add a website to a collection (or when they logged
on for future reference) there was an icon with a code that they were unable to access to rename.
Several members of the group felt that a help button was needed. They appreciated short explanations
appearing when the mouse hovered over a piece of text, but they found that this was not sufficient.
One task they set themselves for the next session was to go through these explanations to ensure
that the language used would be understandable to the average user.
Second ‘hands-on’ session
For a second session, the same select group of practitioners met at Ashford Connexions Access Point
(CAP) offices to work on the demonstrator. It was agreed that they would work on one laptop
connected to a projector and work through the user-guide together. This method was designed to
ensure that the whole group were at the same point of understanding with the demonstrator. The group
were of mixed IT ability so working through the user-guide together meant that problems, questions
and issues could be discussed together.
The group systematically worked through the user guide and tried each widget to get a better
understanding of how they worked. There seemed to be a process of testing and checking how each
widget worked. For instance, one user would create a collection and another user would try to find it
on their laptop. For the majority of the session the group worked on creating content in the collections
and wiki.
One member of the group worked on a ‘step by step’ guide for a new user. It was hoped that this could
show a new user what to do in the demonstrator first and to develop skills in using the system. This
was forwarded to the technical team when available.
Feedback from second ‘hands-on’ session
Generally, this was considered to be a more positive day in using the system than their first session.
The most useful aspect of the demonstrator was considered to be the ability to tag and rate websites
and then find them in the system. This was the first activity completed by the group. They each tried
tagging websites and then another user would log in to see if they could find the tagged website.
The group then moved on to how to use the search. However, it was found to look different for
everyone as different tags were shown for each individual.
The tag cloud was found to be quite confusing when first opened up. The default is set to ‘personal’.
It was agreed by all that they would want all tags to be viewed, therefore the default to be ‘shared’.
Questions:
• Can the default in the tag cloud be set to ‘shared/organisational’ tags?
• Can the tags be shown in some kind of order – preferably alphabetical? This was agreed by all
to be a lot easier to search and would also make gardening easier.
Many were put off by the gardening widget and thought they would not use it often. It was thought
that the gardening widget would be used later by more experienced users. This widget was found the
most difficult to use. There were concerns that this widget would confuse novice/new users.
Questions:
• Can you only edit your own tags?
• Can the ‘gardening’ widget be moved down the list on the management widget or even moved
to ‘more’?
Again, not being able to share collections was highlighted as a problem. The collections set up by the
technical team were renamed. The ‘Work for Tomorrow’ folder added by one participant was not
visible to the group. Collections could not be found, which demotivated users.
Tagging in Firefox was found to be very useful. Users opened the tag window, dragged website
URLs in and then tagged and rated – which was found to be easy. However, the recommended tags
were found to be confusing. Sometimes the tag recommendations were good, whilst at other times
they appeared to be random.
For example, everyone tagged Kent University, yet all got different tags and were unsure whether these were
personal, shared or recommended tags. There was also some confusion when toggling between
‘personal’ and ‘shared/organisational’ tags. For many, this did not make sense. At present the function
seems to work in reverse.
Questions:
• Are the tag recommendations your own or random? The tags were different for each user and
then different again when the same user returned to the page.
• Can the toggle between ‘personal’ and ‘shared/organisational’ tags be changed to tick boxes?
The user would then only have to select what they want to see – the default was agreed to be ‘shared’.
Overall, users were more positive with the demonstrator and more confident in its use. However, it
was felt that the system needed to flow better.
On reflection, the users found the following to be the most useful elements of the demonstrator:
• Tagging in Firefox, dragging URL into Firefox
• Tagging window
• Search functions
• Tag cloud
• Creating/editing in the wiki
• Collections.
The next phase of the pilot was discussed. At the start of the project, senior management at
Connexions Kent agreed to expand the group involved in the pilot once the demonstrator was
functioning. It had been expected that each of the eight members of the pilot group would ‘cascade’
training by introducing the demonstrator to two colleagues. This would have resulted in an evaluation
group of 24 practitioners. However, because of the technical difficulties experienced with the
demonstrator, senior managers decided that the company would wait for bugs to be fixed and further
improvements to the demonstrator to be successfully implemented before they expanded the pilot
group of practitioners. This was because the demonstrator would be “off-putting” to new users as it
needed to flow better. The development team indicated that the agreed improvements could not be
made until June 2011. The company felt that more time could be given to the demonstrator in
July/August, since during this period, schools were on holiday, so practitioners had more time to spend
on this initiative.
Summary of ‘hands-on’ sessions
These sessions showed that – albeit largely unfamiliar with the Web 2.0 community-driven, bottom-up
approach – users became increasingly familiar with the central ideas and features of
the demonstrator. They continued to see a clear potential for such a system in their context where
several functionalities of the demonstrator (such as the discussion forum, collection widget or rating
functionality) received very positive ratings. For other widgets (e.g. the wiki and gardening widget), a
clearer design rationale needed to be established.
With regard to non-functional requirements, a frequently addressed category was usability. Users
needed better guidance through the system, by means of a clear help system (for example, help
buttons and additional training materials), as well as quick access to information about the
relevance and quality of resources. Moreover, since many employees are not familiar with software-based
collaboration, there is a need for training sessions and documentation material that clearly
conveys the system’s purpose and addresses questions such as “What shall I do?”, “What do I want?”,
or “What is the value added by using the system?”.
A considerable challenge in the demonstrator’s interface quality is the system’s complexity and the
associated loss of overview caused by the large number of buttons in the tool bar and the widgets that
are necessary to fulfil certain tasks. Even though users appreciated the possibility of using several
widgets to focus on specific tasks, arranging and ordering widgets is challenging and clearly a
barrier to usability.
The support of the emergence of common semantics was an important functional requirement. With
the help of appropriate tag recommendations, the system might foster the creation of an organizational
vocabulary that could be beneficial for the realization of an easier retrievability of collected resources,
another functional requirement. Based on an emerging consensus about the assignment of tags and
their semantic relations, the search widget could be extended by services that broaden and refine
search results as well as provide filters and facetted search functions.
9.3.1.3 Phase 4: Implementing and evaluating a new install (summative evaluation, May – July 2011)
The level of difficulty experienced in using the system resulted in agreement that, rather than
individuals trying to use the system in isolation from colleagues, the established super user group
would meet together and work on LMI research. To facilitate this process, topics of joint interest were
identified and prioritised.
Updated systems were promised and dates for their introduction agreed, to coincide with the
evaluation schedule for the project. Unfortunately, one major update, scheduled for November 2010,
did not materialise until January 2011. By this time, there was insufficient time to get the system
loaded onto laptops for use by the super-user group. During this time a user-guide with step-by-step
instructions was developed to respond to criticisms that the system was ‘not intuitive’. Users were
struggling with why they would use the system and the overall purpose.
Another workshop took place early in May 2011. Users were guided through key elements of the
system and topics were agreed for individuals to work on. The practitioners tested the user-guide
during the workshop. The improved system brought some improvements, but one
key feature (collections) was not working. The system, consequently, was regarded as too problematic
for practitioners to use in their working practices or introduce to their colleagues. It was agreed to wait
for the next issue of the system.
The new system was ready for 27 July, when a further workshop was scheduled with practitioners.
The session focused on reviewing individual progress with using the system, a hands-on
demonstration of the system and working on collecting resources in the system. Although practitioners
had identified materials, they had not felt confident in uploading them to the system. Only a
few users had added materials – which others then downloaded – but the export function
was not working as expected. Users also found that the web search facility was not working and
confidence in using the Mature Firefox widget and dragging URLs into collections was low. Users
reported that they were not sure the information would be stored, would be retrievable and that it could
be shared with colleagues.
9.3.2 Conclusions and reflections on Knowledge Maturing
While the participants gave feedback on how they found the system in trial sessions, in group
discussions (in July 2011) they also considered the potential for Knowledge Maturing if the system
could be fully implemented in practice (and in light of how they were using some of these facilities
following the implementation of software outlined in section 9.2). For example, they highlighted how
it might have an effect on the following current activities, see section 6.4.2.
9.4 Conclusions
The preceding sections describe two strands of development at Connexions Kent that were both
influenced by the ideas of Knowledge Maturing.
Section 9.2, with the systems Career Constructor and CareersNet, started as development projects for
supporting young adults and the Continuous Professional Development (CPD) of careers
practitioners. In a very user-centred process in which users also designed key elements of the system,
this has evolved into a tailor-made tool: an innovative and useful practitioner website with a
range of functionalities that support Knowledge Maturing, such as shared resources, news, groups,
forums and events. In the process of making sense of Knowledge Maturing (as they were exposed to
the concepts through their active involvement in the MATURE project), the practitioners have co-designed
Knowledge Maturing support that particularly fits their own environment.
These ICT developments exemplify how practitioners can become central to the research and
development process and, more importantly, how they can secure successful outcomes so that the work
has maximum impact in changing practice and maturing knowledge. The individual components of the
project have won three National Career ICT Awards and are considered to represent outstanding
practice, and the ICT developments are at the heart of the reshaped service.
Section 9.3 summarised the research and development process of the Demonstrator 1/Connexions
Kent instantiation within MATURE, which operated under different constraints. As a research initiative
on Knowledge Maturing, it was more ambitious in terms of broad coverage of Knowledge Maturing
activities. In contrast to the other strand, it did not simply build on pre-existing (open source)
technologies, but experimented with a novel widget-based front-end to investigate design approaches
to Personal Learning Environments. This gave end users (although involved in the design
process) much less ownership of the design process. Additionally, users’ confidence in the system
(the importance of which was probably underestimated) was undermined, e.g., by the loss of much user-generated
material during a change to a new system, about which users were understandably disappointed, but also by
the stability, usability and maturity of the tool (see chapter 6 for problems encountered). Usability
issues were magnified because the software was very different from what users were used to before
(office documents). Furthermore, the technical difficulties of the Connexions Kent environment were
also underestimated, such as the need for technical administrators to install the system. At the end of
the process the users could see the potential value of the system, but they were not inclined to
introduce it into their daily practice. Expectation management was very difficult – the temptation is to
over-promise in order to get agreement to participate. A number of features that the users expected
turned out not to be technically possible: export of collections and PowerPoint preview. Expectations
needed to be better managed from the start, although the closeness of the collaboration and the success of
Part 1 activities had raised expectations that user requirements would be implemented in ways
that could be used in practice, not just as development activities. The software developers
acknowledged that greater discipline is needed to make sure all requirements are kept track of, in line
with best practice in agile development.
Overall, implementing a bug-free system into work practices is complex and was not possible within
the time and resource constraints of the MATURE project, with the developers working at such a
distance from the users. On the other hand, the innovative aspects associated with the concept of the
system and the use of several widgets were enthusiastically embraced.
As an overall conclusion for designing systems for Knowledge Maturing, we can summarize the
following:
• Broad Knowledge Maturing support (as aspired to by the Connexions Kent instantiation)
requires much deeper contextualization, which reaffirms the contextual nature of many
elements of the Knowledge Maturing landscape.
• Such contextualization can be better achieved if users are the real owners of the development
process, which is not easy to realize in the context of a research project that aims at
practical usefulness, sound research results and technical innovativeness at the same time.
• And finally, Knowledge Maturing support needs to be much more grounded in practice;
systems supporting practice cannot be restricted to, or even very much focussed on, Knowledge
Maturing, but first of all need to support practice. As the Connexions Kent case
shows, however, exposure to concepts and design studies for Knowledge Maturing enables users to
become creative in realizing Knowledge Maturing support as part of their own practice and
the systems that support it.
From the perspective of the MATURE project, it was indeed fortunate that it was possible to track the
success of Part 1 activities in delivering Knowledge Maturing in practice, to set alongside the more mixed
picture of the Part 2 developments. Overall, however, Connexions Kent’s engagement with the KM
concept before, during and alongside the MATURE project, has contributed to changed practice over
time (from a medium/meso level view) and this is heartening from a project perspective. This report
also acts as an acknowledgment of the dedication and commitment of Connexions Kent staff in
engaging so enthusiastically with Knowledge Maturing processes both within and outwith the
MATURE project.
10 Collaborative Conclusions and Future Work
The first part of this section contains the conclusions that were drawn up collaboratively by the
MATURE project members. This is followed by LTRI’s view of where future work stemming from
MATURE could go (thus fulfilling the requirement in the Description of Work for final task: ‘T6.4
Introduction Methodology’).
10.1 Collaborative Conclusions (project wide perspective)
10.1.1 Overview of process
Following an internal review of D6.4 in May 2012, and given the size and diverse nature of the
Summative Evaluation studies, it was decided that the MATURE project team as a whole should
arrive at the ‘Collaborative Conclusions’ contained in this section. This was seen as a positive
approach that would be beneficial in reflecting the depth of the work that has taken place as part of
the Summative Evaluation activities.
LTRI posed an initial set of questions to frame this process (Q1 & Q2 below) and two Flash Meetings
were planned (May 18th and 21st, 2012). During the Flash Meeting on May 18th Q3 (a set of related
questions) was added by UIBK. All questions were placed in a Google Doc for collaborative writing.
The main participants of the Flash Meetings and document editors were: D6.4 Internal Reviewers
(PONT, UIBK, CIMNE), representatives of the 6 Summative Evaluation Studies, the Scientific
Coordinator and LTRI. All project participants were informed of this process and were pointed to the
Google Doc so that they could contribute should they wish to. The three guiding questions were:
1. How successfully did your Instantiation make use of General Knowledge Maturing Indicators
(GMIs) to support Knowledge Maturing activities (e.g. as a service) or as an instrument for
evaluation? (For Studies 1–4.)
2. How successfully did your Instantiation/study support Knowledge Maturing generally (e.g.
phases)? (For Studies 1–6.)
3. How do the results compare across the studies in terms of key similarities and differences with
respect to Knowledge Maturing (and the model)? Specifically, what was confirmed across all
studies, what needs further investigation, and what was not confirmed?
Following the second Flash Meeting, LTRI synthesised the collaborative responses, augmenting them
by listening to the FM recordings and by providing clarifications in the form of background notes.
This document was then distributed for final comment on 23rd May. Responses were added to this final
version, which was finalised on 4th June. The goal of the Collaborative Conclusions process was,
therefore, to obtain an answer to the above three questions from the project level. However, in order to
generate a higher level view we first looked at and documented the individual Instantiations’
perspectives and from these extrapolated the project’s collective conclusions. Below, we restate the
questions and provide the associated Collaborative Conclusions.
10.1.2 (Q1) How successfully did your Instantiation make use of General Knowledge
Maturing Indicators (GMIs) to support Knowledge Maturing activities (e.g. as a
service) or as an instrument for evaluation?
Background
Following the 3rd Annual Review, Indicators became a major focus of the Summative Evaluation.
However, this proved to be more difficult than anticipated for various reasons (see section 3).
Indicators provide a useful fine grained view of Knowledge Maturing, and in some instances these are
empirically justified. However, in general it was not possible to match Indicators to phases (GMIs are
not phase specific). The following question arises: how useful was it to focus on GMIs? From LTRI’s
perspective there was some initial scepticism from the different teams about taking an indicator-centric
approach in Study 1-4. The Indicator Alignment Process was developed to facilitate a common view
on the use of Indicators in the Summative Evaluation and to overcome this initial scepticism.
However, with hindsight it appears that this approach was not fully successful, for reasons we now
elaborate on. But, there are still many positive outcomes, not least of which being the various
opportunities for this work to be taken forward.
Briefly, and as described in section 3.3, the premise in the Indicator Alignment Process was that,
depending on the “level of justification” for an alignment claimed for each evaluation study, you could
have as a ‘top level study goal’, one (or more) of the following claims:
1. GMIs/SMIs serve as a basis for Knowledge Maturing services.
2. GMIs/SMIs are used to evaluate the Instantiation’s effect on Knowledge Maturing.
3. You are evaluating GMIs themselves.
Goals 1 & 2 have a high level of justification. In brief, the approach with respect to the above goals for
Studies 1–4 (remember that Studies 5 & 6 were exempt) was as follows:
• Study 1 examined top level study goals 1 and 3, and had the following related research
questions:
o Can Knowledge Maturing Indicators support the selection of the right resources in a
given situation? That is, can people judge the maturity of knowledge that is accessed
through certain resources/artefacts⁷¹ by knowing the values of certain Indicators
derived for these resources, and does it help them to select resources that are adequate
for the task at hand?
o Do we find (the right) traces of Knowledge Maturing in the knowledge base – using
Knowledge Maturing Indicators? More specifically, are the artefacts that have been
developed through usage of the prototype at “the right level of maturity”?
o The SMIs that were used to investigate the above questions were special cases of
existing GMIs; the 8 SMIs used had a mixture of strong and weak justifications.
• Study 2 examined top level study goals 1 and 3, and had the following related research
questions (that linked the SMIs to GMIs via generalisation relationships):
o For the first central question, “Does the use of people tagging improve the collective
knowledge of how to describe capabilities?”, Study 2 made use of 8 SMIs that had a
mixture of strong and weak justifications.
o For the second central question, “Does the use of people tagging improve the
collective knowledge about others?”, Study 2 made use of 3 SMIs that had weak
justifications.
• Study 3 examined top level study goal 2, and had the following related research question:
o Does the Connexions Kent Instantiation support Knowledge Maturing Activities
related to Knowledge Maturing Indicators (GMIs)?
o “The mapping of GMIs/SMIs to the Connexions Kent Instantiation showed that for all
Knowledge Maturing Indicators, the level of justification was strong […]. Therefore,
we decided to use GMIs/SMIs to evaluate the Instantiation’s effect on Knowledge
Maturing (Top Level Evaluation Goal 2)” (cf. 6.1.2). The level of justification for the
10 GMIs used in this study was strong.
• Study 4 examined top level study goal 2, and had the following related research question:
o “Does the Instantiation support Knowledge Maturing activities related to Knowledge
Maturing Indicators (GMIs)?”
o The same indicator mapping took place as for Study 3; the level of justification
for the 12 GMIs used in this study was strong.

⁷¹ Artefacts are codified representations of knowledge (described in detail in D1.1 and D2.1).
Discussion of indicator approach from Study perspectives
Study 1 examined top level study goals 1 and 3. Study 1 used SMIs at FHNW to rank resources for
ease of selection of appropriate resources during execution of the matriculation process. However, due
to the small number of existing resources, and small sample size, the success of this exercise was
“limited”.
Study 2 also examined top level study goals 1 and 3. (Note: because it involved a 2 month extension
due to scaling up, our discussion of Study 2 is inevitably a little longer here than for the other Studies.) In
terms of the evaluation, Study 2 found that GMIs/SMIs were a useful instrument in some respects.
Specifically, GMIs/SMIs were seen as a structured approach to validate key assumptions that the
Study 2 team had for the overall concept of the Instantiation (the Knowledge Maturing ontology) and
to put this into an overall framework. However, Study 2 did encounter a high level of complexity in using GMIs, and using them as part of the evaluation was seen as a labour-intensive task. Furthermore, for Study 2 in relation to top level study goal 1, the evaluation
involved the capturing of automated log data and this proved more difficult than anticipated: there are
many cleaning steps involved and decisions had to be made about what to include and what not to
include. To obtain top level study goal 3 perspectives, questionnaires to evaluate these GMIs/SMIs
were required. As a result of these considerations, and following a number of iterations, Study 2 had to
reduce the number of Indicators being used in the evaluation in order to manage the complexity of
their use. Using GMIs/SMIs in combination could potentially have been useful, but such a process was seen as simply not manageable. Unfortunately, Study 2 could not observe sufficient support for “an individual has changed its degree of networkedness” as a wider impact of people tagging on the social network of individuals. However, the situation at Connexions Northumberland may not have facilitated the possibility of gaining new contacts because of severe economic pressure (parts of the company in fact closed operations). Furthermore, Study 2 did provide suggestive evidence that we could in the future go on to collect evidence for justifying four GMIs which arose from the design
activities and which have not been validated as part of the Representative Study. These were:
• An individual has been rated with respect to expertise
• An artefact has changed its degree (score) of formality
• An artefact is edited intensively within a short period of time
• An artefact has been created/edited/co-developed by a diverse group
Study 3 & 4 examined top level study goal 2 only, which simplified the task in comparison to Study 1
and Study 2. Study 3 & 4 found that GMIs with a high degree of justification (mainly SMIs) were very
useful for deriving questions for questionnaires to investigate whether the tools support Knowledge
Maturing. Moreover, SMIs were used to make users aware of digital resources that match particular
Indicators. Therefore, the search widget was provided with the recognised Indicators/Knowledge
Maturing Activities (e.g. the document was tagged a lot, the document is rated very good) in order to
allow people to deal explicitly with such resources. Study 3 & 4 aimed at creating awareness for users
in order to induce Knowledge Maturing Activities. However, due to the small number of participants
in a short timeframe, they could not gain evidence for this assumption. SMIs were less useful than had
been hoped for in terms of being an instrument for evaluation in Study 3 & 4.
In summary, we can say that Study 1 & 2 used a mixture of top level study goals (1 & 3), whereas
Study 3 & Study 4 examined top level study goal 2 only. Thus, across the studies, there was no focus on one particular top level study goal. The utility of the GMIs/SMIs as a tool for evaluation, in their current form, has been
questioned at a project level but ideas for future work are abundant (see below).
General discussion of indicator approach
One interesting question that arises is this: if GMIs/SMIs are used for development (i.e. built into the
tools) do they then always have to be built into the tools? We can envisage a situation where GMIs
could act as a list of a tool’s functionality. Indeed, it should be borne in mind that the project always
envisaged the use of GMIs as a means of making Knowledge Maturing observable. So, it is possible
that we could see GMIs at a higher level of abstraction and hence denoting that something has
changed in terms of Knowledge Maturing; this is not directly connected to tool functionality and
points to one big potential future use of Indicators.
On a general level we note that there is a problem of Indicator application in terms of the alignment of
tools with the overall concept of Knowledge Maturing and GMI/SMIs. First, there is a conflict
between the Knowledge Maturing concept and user expectations (i.e. research priorities vs. user
interests). This is not a new problem, but it remains difficult to balance the two. This could help
explain the difficulties we had in Summative Evaluation Study 3 and provide an aid to future projects
that attempt to further explore Knowledge Maturing. Study 2 can point the way forward with its
individualised implementation strategy that aims at reducing motivational barriers. Furthermore, Study
1 provided a view of ‘long tail’ business process support that could prove useful for niche business
areas. Another point to be made is that the tools developed in the project were designed not only for Knowledge Maturing but also to fulfil other user needs. The point here is that Knowledge Maturing support results in the need to include other activities, because it is closely linked to these activities and is in fact
inseparable from them.
Applying Indicators involves a high degree of complexity and effort. This is not to say that the
indicator-centred approach is not useful. It is simply hard to manage, particularly if this effort is to be
made on a larger scale in real world-contexts. However, future work could look at a means of coping
with this complexity. Indicators that are empirically justified could be systematically built into project tools and services; they could be used where possible to automate exception reporting (e.g. showing
where Knowledge Maturing has or has not taken place) or they could also be used as performance
Indicators for evaluation if they are attached ‘systematically’ to a larger scale framework like the
Knowledge Maturing model/Landscape.
10.1.3 (Q2) How successfully did your Instantiation/study support Knowledge
Maturing generally (e.g. Phases)?
Background
The Study goals that are relevant here are as follows:
Study 1 goal: “Assessment of Knowledge Maturing: do we find (the right) traces of Knowledge
Maturing in the knowledge base? More specifically, are the artefacts that have been developed through
usage of the prototype at “the right level of maturity”? That is, has knowledge matured and if so, has it
reached a level of maturity that is appropriate for the task/situation at hand? This question is based on
an earlier insight within the project: it is not always the case that the highest level of maturity is the
most appropriate.”
Study 2 goal: the goals indicated above for Indicators are also applicable here.
Study 3 goal: the goal indicated above for Indicators is also applicable here, plus “How do the users use the Connexions Kent Instantiation? What do they appreciate, what needs to be improved?”
Study 4 goal: the goal indicated above is also applicable here, plus “How do people use the Instantiation?”
Study 5 goal: “Engage a broad range of UK partners involved in developing different forms of
Knowledge Maturing linked specifically to both career guidance and workforce development policies
and practices.”
Study 6 goal: “Build a longitudinal narrative of Knowledge Maturing (KM) processes in a community
of practice”.
Appendix 12.3 gives a summary table of the coverage of Indicators (GMIs) by study; we will discuss the Indicators shown in this table here because the issues raised relate to Knowledge Maturing Phases
(and hence Q2). Note that Studies 5 and 6 were exempt from this process as they operated at a level
above GMIs (meso/macro). For historical reasons indicator IV.1.1 does not exist. The total number of
GMIs at the start of the study was 75 and of these 24 were studied in the Summative Evaluation by at
least one study (indicated by green shading in the table in Appendix 12.3). A total of 51 GMIs were
not studied directly in the Summative Evaluation (indicated in table by no shading, i.e. white
background). Table 10.1 below summarises the GMIs that were included as a goal of Summative
Evaluation, and also shows which of the five Knowledge Maturing Phases the different GMI ID-types
belong to. It is only when we get to GMI ID II.3 onwards that we are in the area of individual
engagement with groups and social learning in knowledge networks. Of the total number of 24 GMIs that were included as a study goal, 22 belonged to Knowledge Maturing Phase I (i.e. in Table 10.1, this is calculated by combining GMI IDs I & II). Put another way, 91% of the GMIs studied belonged to Knowledge Maturing Phase I (Knowledge Maturing Phase I is ‘Expressing Ideas’ & ‘Appropriating Ideas’). It is clear that MATURE has continued to take an early Phases/artefact centric focus (however, some GMIs in Phase I do include interactions with groups).
Table 10.1: Summary of study coverage of GMI Indicators by phase
(Note there is a slight mismatch early on in the numbering between GMI IDs & KM Phases)
GMI ID | KM Phase | Total number of GMIs as goals for study, per ID | Total number of GMIs as goals for study, per KM Phase
I. Expressing Ideas | Ia. Expressing Ideas (part of KM Phase I) | 18 out of 35 | KM Phase I = 22 out of 49 GMIs
II. Appropriating Ideas | Ib. Appropriating Ideas (part of KM Phase I) | 4 out of 14 | (included in KM Phase I total above)
III. Distributing in communities | II. Distributing in communities | 0 out of 4 | KM Phase II = 0 out of 4
None | III. Formalising | – | KM Phase III = 0 out of 0
IV. Ad-hoc training & Piloting | IV. Ad-hoc training & Piloting | 2 out of 17 | KM Phase IV = 2 out of 17
V. Formal training, Institutionalising & Standardising | V. Formal training, Institutionalising & Standardising | 0 out of 5 | KM Phase V = 0 out of 5
Total GMIs studied: 24
In some ways we have learnt nothing new about the focus of the Instantiations in terms of the Knowledge Maturing Model and associated Phases. Indeed, like the Demonstrators before them in the
Formative Evaluation, the focus in the Summative Evaluation was on the early Phases of Knowledge
Maturing. D6.2 noted the following ‘Insights into Knowledge Maturing’, which remain applicable
here (bold is added):
“Although the timescales of the formative evaluations were too limited to support significant
Knowledge Maturing, a number of useful insights and preliminary findings were noted. Most
Demonstrators clearly supported the initial phases of Knowledge Maturing - 1a, 1b and 2 (i.e.
expressing ideas, appropriating ideas and distributing in communities respectively) with
Demonstrator 1 also showing some progression to phase 3 (formalizing). This suggests that
phase 3 either requires more time, e.g. greater than a month, or is actually a more difficult
transition to make. Perhaps this progression to phase 3 is a qualitatively different and harder
type of transition that needs to be focused upon. For example, related to this, Demonstrator 1
noted the challenge of developing and agreeing ‘common semantics’, Demonstrator 2 found
that users did not edit (and therefore did not resolve conflicts in) the collaborative ontology
and Demonstrator 4 noted that users were ‘confused’ by the availability of different solutions.
Is it the case that users need some additional help and scaffolding to make the transition
to the formalization phase? … In terms of progression to phase 4 (ad-hoc training and
piloting), it is interesting to note that Demonstrator 1 are developing Technology Enhanced
Boundary Objects (TEBO’s) to specifically address this phase.”
Discussion of Knowledge Maturing from Summative Evaluation perspective
Ultimately, the question is this: did we develop tools that support Knowledge Maturing? Our answer
is “yes”, particularly in Phase I & II and we have shown in the above detailed studies that there is
evidence to support this assertion (this is also summarised very selectively and briefly below).
However, there are some cases where Knowledge Maturing was not supported (e.g. guidance for
expressing ideas, interacting in communities and formal training). There is evidence that says: “yes”,
we did develop tools that support Knowledge Maturing and below we have pulled out some key points
in relation to this evidence.
D2.4/3.4 found that sociofact 72 development is aligned to Phases, and allowed for Knowledge Maturing; an analysis conducted with respect to sociofacts has shown that tools/apps support sociofact
development and therefore Knowledge Maturing. Furthermore, a quote from the Study 1 evaluation
report provides supporting evidence related to the Knowledge Maturing goal for Study 1 stated above
(i.e. “Assessment of Knowledge Maturing …”); they found that: “The results [...] show very clearly
that the approach chosen by KISSmir and the resulting degree of maturity of knowledge and related
artefacts is perceived as appropriate by the evaluation participants. [...] the knowledge and the artefacts
that encode it have reached a level of maturity that is somewhere between the Phases II and IV of the
Knowledge Maturing model.” However, one Secretary did mention that although the patterns were
very nicely expressed, she could think of better ways of doing this, and so “no thank you, I do not
want to follow the advice in your pattern/Knowledge Maturing tool”! This suggests that there are different routes or paths to Knowledge Maturing (even if we assume that the Secretary is not just being naïve, or that, being confronted with the way others had agreed to do the job, she simply did not
want to follow it). Overall, the right degree of Knowledge Maturity is reached in Study 2; the only exception is the question of whether or not to provide workflow support for one of the activities (e.g. if someone is new we may need to re-negotiate some knowledge).
Study 2 made progress in scaling up. No other study tried to scale up in the same way as Study 2, hence
it was granted a two month extension. Study 2 had specific problems because of the need for training
phases, not just at the beginning but all the way through the study; and during these evaluation phases
the behaviour that the Study 2 team could observe was different to other times; thus some assumptions
were made (these are listed in the limitations section for the study). Overall, key assumptions of Study
2’s ontology Knowledge Maturing model (which is based on the Knowledge Maturing Phase Model)
have been confirmed. Further research will be needed to better understand the formalisation process
and consider which support could be additionally provided.
Study 3 proposed that the software could potentially be used for the first three Knowledge Maturing
Phases – Expressing Ideas, Distributing in communities, Formalisation. In the given careers context,
SIMPLE mainly shows the potential to support Knowledge Maturing at the artefact level, although
this could not be directly observed over time. However, several artefact-oriented activities that were stated to need improvement compared to current practice were reported to be supported rather well by SIMPLE. Of course, this cannot be generalised due to the
low participant number, but should not be neglected as the feedback came from experts in their fields.
However, the software needs to catch up with the support of several activities. In the discussions, a
real need for an improvement of the sociofact level was revealed. Participants stated that SIMPLE did
72 A sociofact relates to the collective knowledge phenomena, including collective rules, norms, structures of social interaction, but particularly also collective knowledge in the narrower sense.
not help to find experts for particular topics. The Study 3 team claims this could potentially be improved
by means of additional Knowledge Maturing services, which make use of a more detailed and
sophisticated user model from the usage activities. The results could be used in order to enrich certain
awareness and search functionality (cf. chapter 6, Deliverable D4.4; Nelkner, 2012). Unfortunately,
the Study 3 findings from the Kent study showed that the tool did not really help (in that it could not really be used, for various reasons). Furthermore, the findings that (i) tag confirmation was low and that (ii) networkedness did not take place extensively complement the Study 1 suggestive observation above that there may be different routes to Knowledge Maturing, in that different types of guidance may be
needed to put people on Knowledge Maturing routes or paths; and that these paths may be different
depending on the individual. Study 2 and Study 4 also found that networkedness did not take place
extensively. In general, it could be observed that guidance plays a huge role when introducing a new software system into an existing IT landscape, in parallel to existing (individual) workflows.
Overall, one can say that Study 4 provided users with a promising research prototype that might
support Knowledge Maturing. It reduced barriers for users as it allowed an easy way of tagging, rating
and assisting the collection of resources. Furthermore, the discussion widget and the tag editor allowed
users to create a shared meaning and a common vocabulary, represented in a collaboratively created
ontology. Thus, it potentially supports many important Knowledge Maturing activities. For examples,
see section 7.3, especially Table 7.4. The artefact dimension of Knowledge Maturing might be well
supported, but the sociofact Knowledge Maturing rather less so.
The partnerships for impact strategy, Study 5, facilitated dialogue about Knowledge Maturing
processes that resulted in partner development, including in many cases partners developing their
‘readiness to mature knowledge’, plus details of how technology might support innovation, learning
and development in career guidance practice. This provides a strong guide as to how an organisation
can be prepared for travelling on the Knowledge Maturing pathways through a process of knowledge
negotiation.
The Longitudinal Study 6 provided the useful perspective that Knowledge Maturing support needs to
be much more grounded in practice, and systems supporting practice cannot be restricted to or even
very much focussed on Knowledge Maturing, but they first of all need to support practice. But, as the
Connexions Kent case shows, the exposure to concepts and design studies for Knowledge Maturing
enables users to become creative in realizing Knowledge Maturing support as part of their own
practice and the systems that support it.
General discussion of Knowledge Maturing
We conclude our discussion of Q2 by making three observations:
1. “Yes”, we can find instances of Knowledge Maturing in organisations. For example, as part of
the Summative Evaluation in Study 2, we have been able to observe the usage of a MATURE
tool as part of everyday practice. Within the Study 2 evaluation we have also been able to
observe Knowledge Maturing over the period of use. Also, as we point out above, Study 5
facilitated dialogue about Knowledge Maturing processes, including in many cases partners
developing their ‘readiness to mature knowledge’.
2. “Yes”, we can support Knowledge Maturing with tools and services (but see the discussion
below). For example, as part of the Summative Evaluation in Study 2, we have successfully
been able to introduce the tool to a significantly larger user base than originally planned. This
has yielded evidence about user acceptance and usefulness.
3. However, it is not always as simple as we originally thought, both in terms of the linear
hierarchy of the Knowledge Maturing model and in terms of the different paths that can be
taken to achieve Knowledge Maturing. The Knowledge Maturing model and phases hold true
but not in a hierarchical sense; as Study 1 illustrated, for some knowledge you may not want to
move towards that higher level or we may need to re-negotiate some knowledge. Support and
guidance may be needed to enable the learner to reach later Knowledge Maturing Phases.
10.1.4 (Q3) How do the results compare across the studies in terms of key
similarities and differences with respect to Knowledge Maturing (and the
model)? Specifically, what was confirmed across all studies, what was not
confirmed, what needs further investigation?
Much of the discussion surrounding Q1 and Q2 has already addressed Q3, particularly the focus on early Knowledge Maturing Phases. To briefly confirm, the total Indicators investigated by GMI ID-type (I-V) are shown below:
• Total Indicators investigated in GMI ID-type I = 18 out of a possible 35
• Total Indicators investigated in GMI ID-type II = 4 out of a possible 14
• Total Indicators investigated in GMI ID-type III = 0 out of a possible 4
• Total Indicators investigated in GMI ID-type IV = 2 out of a possible 17
• Total Indicators investigated in GMI ID-type V = 0 out of a possible 5
As we have mentioned above, the common theme of the Summative Evaluation was Knowledge Maturing Phase I (i.e. GMI IDs I & II combined), which is predominantly an artefact-centric approach. This confirms what we found at the Demonstrator formative evaluation stage (D6.2). Knowledge Maturing Phases II (Distributing in communities), III (Formalising) and V (Formal training, Institutionalising & Standardising) were not examined (as a goal of study) at all in the Summative Evaluation and could clearly be the focus of future work. However, Study 1 makes the claim that “the knowledge and the artefacts that encode it have reached a level of maturity that is somewhere between the Phases II and IV of the Knowledge Maturing model”. The most popular GMI ID-types, i.e. those examined by 3 or more studies, were as follows:
• I.3.4 An artefact became part of a collection of similar artefacts: FHNW (RQ3), Connexions Kent, Structuralia (x3)
• I.3.6 An artefact is referred to by another artefact: Northumberland/igen, Connexions Kent, Structuralia (x3)
• I.3.9 An artefact has been used by an individual: Northumberland/igen, Connexions Kent, Structuralia (x3)
• I.3.10 An artefact was changed: FHNW (RQ2), Northumberland/igen, Connexions Kent, Structuralia (x4)
• I.4.6 An artefact has been assessed by an individual: Northumberland/igen, Connexions Kent, Structuralia (x3)
GMI ID-types II and IV were examined by fewer studies, as follows:
• II.1.3 An individual has contributed to a discussion: Connexions Kent, Structuralia (x2)
• II.1.5 An individual has significant professional experience: Connexions Kent, Structuralia (x2)
• II.3.1 An individual has a central role within a social network: Connexions Kent, Structuralia (x2)
• II.4.1 An individual has been rated with respect to expertise: Northumberland/igen (x1)
• IV.1.4 A process was internally agreed or standardised: FHNW (RQ3) (x1)
• IV.2.1 An individual changed his/her degree of networkedness: Northumberland/igen (x1)
What was not confirmed, because it was found problematic, was a systematic view of the relationship
between Indicators and phases (GMIs are not phase specific).
Rather than dwelling further on the fine grained view provided by Indicators, we conclude this section
by taking a look at the bigger Knowledge Maturing picture. If we want to use instruments from our
Knowledge Maturing Landscape to design tools, then we need an overall alignment, e.g. between
Knowledge Maturing Landscape and Indicators built into tools and/or used for evaluation. However,
this was not enacted in a systematic way in the MATURE project (although Study 1 & 2 did examine the
top level study goal 1, i.e. ‘GMIs/ SMIs serve as a basis for KM services’). In Study 2’s case, early on
in the project an ontology maturing model was derived as a blueprint for iterative refinement of a
vocabulary to enable users as they moved from informal tags to formal tags. Study 2 positioned the
tool functionality (People Tagging) with respect to that ontology maturing model. Thus, evaluation
Study 2 had to validate key assumptions that had been made in order to concretise the Knowledge
Maturing model. In this respect Study 2 observed less conceptual disruption than might have been the
case in other studies; however, they did not use a large number of Indicators (a total of 11 SMIs were used). So a key success is to align the concept to the tool. These questions now arise. Was the problem
in Study 3 (and perhaps Study 4) one where the tool was not fully aligned with the overall Knowledge
Maturing concept? Or was it that the concept did not fit what the tool was intended to do? It appears
that during the development process of Study 3 there was an attempt to fit in with what the users
wanted but to also align with the Knowledge Maturing Model. In the end the tool was not really
designed to fit into the prevailing user model, but attempts were made to design the tool to fit
perceived user interests; this is maybe where the challenge lay for Study 3. Some things that the tool
does are useful for users but not related to Knowledge Maturing (a point made above). This tension between user and research interests is, as already noted, a balancing act. Some experiences have been
invaluable in the MATURE project whereas others have not been so informative. The tools went
beyond Knowledge Maturing Indicator functionality in order to meet perceived user needs. With
resource limitation this naturally led to compromises which may help explain some of the problems
we had with Study 3. This also points to another experience perceived in the project, namely that
Knowledge Maturing is embedded in everyday working practices and is not a strict hierarchical
process. If we want to build tools that support Knowledge Maturing, then conceivably we also need to
build tools that support other things in a well-integrated way. In this sense, there is no such thing as a
Knowledge Maturing tool that we can install in addition to something else; it is always embedded into
a much broader range of activities and work practices.
10.1.5 Summative overview of Collaborative Conclusions on Indicators and
Knowledge Maturing model
Summative view on Indicators of varying levels of justification, i.e. how many Indicators do we have at what levels of validity?
We argue that, ideally, for all phases of Knowledge Maturing there should be a full range of GMIs
with empirical justification to draw on. We have 24 GMIs (out of a total of 75 GMIs) in this
‘validated/justified’ category at the start of the Summative Evaluation. However, in certain study
contexts, there were no GMIs with empirical justification that could be inherited, which is why we
needed to evaluate certain SMIs (e.g. Study 1 & 2). There were 4 occasions in Study 2 when this
happened. Once this evidence surrounding SMIs has been collected (in future work), it could
eventually be aggregated into a new GMI (or an existing GMI that currently has no justification),
together with other forms of evidence. This has not generally taken place so far in the Summative
Evaluation studies (for reasons highlighted below).
Summative view on the usefulness of Indicators in the evaluation studies
We have already said the following about Indicators in our Collaborative Conclusions, above. The Study 1 sample size was too small to draw conclusions. Study 2 found that GMIs/SMIs provide a structured
approach to validate key assumptions that the Study 2 team had for the overall concept of the
Instantiation and hence useful for putting this into an overall framework. However, Study 2 also found
GMIs/SMIs were too complex and labour intensive. We can say for Study 2 that many of the Indicators are useful but not very easy to use, and they get quite complex if we want to adapt them to a real-world scenario. Study 3 & 4 did not find them to be as useful as hoped for. However,
Study 3 & 4 did find the Indicator approach useful for devising questionnaires.
What concrete conclusions were made that Knowledge Maturing has happened/been supported by the
MATURE tools?
The work with careers organisations in the UK was successful from the developers' micro-perspective; mixed from the users’ micro-perspective (successful in Northumberland; generally successful in Kent, the one exception being that one set of developments did not lead through to implementation, as outlined in section 9); highly successful at a meso-level; and even more successful at a macro-level in
getting widespread discussion about knowledge development, sharing and Knowledge Maturing in a
careers context all the way up to ministerial level.
Procedure we envision to review existing Indicators, to increase levels of justification, to collect
further candidates for Indicators
In terms of levels of justification, we summarize the current situation as follows. Study 1 used 8 SMIs
that had a mixture of strong and weak justifications. Study 2 made use of 11 SMIs that had a mixture
of strong and weak justifications. Study 3 made use of 10 GMIs with strong justification. Study 4
made use of 12 GMIs with strong justification. However, we would say that to answer the issues
surrounding taking Indicators forward we need much more data and time to really make good progress.
Having said this, however, below we give some lines for future work based on our experience. Indeed,
this Summative Evaluation has provided a high level of clarification about how future work could
build on MATURE’s insights into Indicators (GMIs/SMIs).
10.1.6 Future work on Indicators
We consider that GMIs have been validated in as many contexts as possible, and that they are part of
the MATURE heritage (hence the word “General”), but validation has not come exclusively from the
Representative Study.
Our conclusions on GMIs/SMIs suggest that the following lines of future work would be productive:
1. Investigate further Indicators (GMIs/SMIs) in terms of varying levels of validity. As stated above, we had 24 GMIs in an empirically validated/justified category at the start of the Summative Evaluation; could this approach continue to evolve and deliver validated Indicators for other, higher Knowledge Maturing Phases? And what are the implications for using GMIs/SMIs in tools, or as flags that Knowledge Maturing has taken place?
2. Investigate the problems encountered with the usefulness of Indicators in the Summative Evaluation studies (e.g. their complexity and labour-intensive use) and propose solutions that would enable bundles of Indicators to be used effectively.
3. Develop a procedure (based on a review of existing Indicators) to increase levels of validity, to collect further candidates for Indicators, and that considers factors surrounding benchmarking organisations and the wider aspects of “Continuous Social Learning in Knowledge Networks” in the age of Social Network Sites like Facebook and LinkedIn.
10.2 Future Work (LTRI perspective)
In D6.1, LTRI made the following observation, which still seems relevant to the notions of the
alignment of theory and tools made above in the Collaborative Conclusions:
“And with MATURE an aim is to identify and model, technology mediated, social learning
and Knowledge Maturing processes and behaviours in order to design tools that support and
promote these practices. What this means in simple terms is that we need to consider repeated
cycles of: empirical work, theory/model development; and, tool development. Where these
particular aspects are typically conceived as overlapping activities and phases (rather than as
sequenced ‘steps’) … Design-based research cycle (Cook, 2002) as it is relevant to MATURE
… The above is an evolutionary approach to analysing the role of theory/models, empirical
work and technology in learning (Cook, 2002). Specifically, the purpose of this evolutionary
approach is the mapping out of not a specific theory, but a mapping out of how different
researchers are working towards the creation of theories. The point being that in the
evolutionary approach there is a requirement to be transparent about the theory and models in
use. This requirement, in itself, may not communicate well from one discipline to another, as
words have different meanings in different disciplines; indeed, words have different meanings
within a discipline. The only solution to this problem is, in our view, careful and continuing
dialogue between all stakeholders.”
Figure 2.2 in section 2 provides an overview of how the MATURE Design Process was developed as a
result of the above insight. Furthermore, as mentioned in section 2.2.3, in addition to the innovative
focus on GMIs and the Knowledge Maturing model prioritised in the Summative Evaluation and
‘Collaborative Conclusions’, LTRI also wanted to offer an approach to taking a view of the
overarching goal of the project, namely facilitating "Continuous Social Learning in Knowledge
Networks". The reason for this was that LTRI deemed it necessary to have a Summative Evaluation
checklist (called a typology) for various reasons already detailed in section 2. Particularly, a typology
was needed that:
• Operated at a higher level of conceptualisation than that provided by the GMIs.
• Following our design process (Figure 2.2), something was needed that allowed LTRI to "Learn and problematise about experiences and constraints in context". Thus the typology was needed to provide an additional approach to analyse (mainly) the conclusions sections of each evaluation report (it is therefore used qualitatively to provide a meta-analysis).
• And, in the original Description of Work for Workpackage 6 it was envisaged that LTRI would undertake a final task: 'T6.4 Introduction Methodology'. We hope the above discussion in the Collaborative Conclusions has clarified that this is not possible in a prescriptive way. However, as we have seen in the Collaborative Conclusions, there is a need for tight control in terms of alignment and introducing Knowledge Maturing tools (these need to be seen from the perspective of wider workplace activity and practice, and from a theory and empirical work perspective). As we will see below, the typology work described will feed into (i.e. provide a reference point for negotiation about) future work that takes aspects of MATURE forward. As such, what follows represents LTRI's personal view with respect to an 'Introduction Methodology'.
The case study of the MATURE people tagging demonstrator was used to elaborate on our typology in
a real work-based context (the BJET paper, see Appendix 12.2). The Learning Factor nodes presented
in section 2.2.3 were used to analyse the reported conclusion sections (and where interesting points
were raised, the other sections) of the six Summative Evaluation studies. The approach is highly
selective and not intended to be systematic; instead we hope you will agree that the analysis below
gives a rich-textual overview, but situated in a coherent conceptual typology for learning. The results
of this initial analysis, which is provided in Appendix 12.8, involved selecting text from the six study
reports that fitted a node and associating with it a brief discussion (a non-trivial task). This process
was followed within each branch of Learning Factors (node 2) thus providing at least one example of a
node taken from the six studies. Below, we pull out selected nodes and related illustrative texts to enable us briefly to discuss the future research direction that LTRI will take in Learning Layers IP (if funded), an approach that builds on MATURE but aims to fill some of the gaps (e.g. 'Phase III
Distributing in communities’ was not really investigated in MATURE). Furthermore, if the
relationship between the nodes can be elaborated, we can then hope to move from a typology and
towards a model or framework that can act as a starting point for Design-Based research cycles
discussed above, but in large-scale contexts. Below we restrict ourselves to discussing the analysis
from the perspective of two Learning Factor nodes only: cognitive load (2c) and personal learning
networks (2di & 2dii only).
10.2.1 2c. cognitive load
2ci. intrinsic (inherent nature of the materials and learners’ prior knowledge).
Appendix 12.8 provides an example from Study 2, section 5.3.12.2. The support given by the People
Tagging tool at Connexions Northumberland (UK) for intrinsic individual cognitive load, in terms of
the inherent nature of the materials and learners and fitting in with users’ prior knowledge, appears to
be good. Specifically, SMI2 (“A topic tag is reused for annotation by the "inventor" of the topic tag”)
data, which shows that individuals do use their own tags at a later stage, is a direct indicator of the tool
enabling the user to build on their own prior knowledge and thus assisting new learning that builds on
prior (often intrinsic) experience. As a contrast for this node, Appendix 12.8 also provides an example
from Study 4, section 7.6: "According to the results, the users mostly used the Instantiation for searching and viewing various resources thereby extending their knowledge." This is another useful example of a MATURE tool helping the learner at Structuralia (Spain) to 'extend their knowledge', and it also provides a slightly wider view of Knowledge Maturing from an artefact-centric, early Knowledge Maturing Phase.
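As an illustration only, SMI2-style tag reuse could be detected from a chronological tagging log along the following lines. The event format and names are assumptions for the sketch, not the actual People Tagging (SOBOLEO) data model.

```python
# Hedged sketch of detecting SMI2 ("a topic tag is reused for annotation by
# the 'inventor' of the topic tag") from a chronological tagging log.
def smi2_events(tag_log):
    """tag_log: chronological (user, tag, tagged_person) events.
    Returns the events in which the tag's inventor (the first user to
    apply it) reuses the tag on a later occasion."""
    inventor = {}
    reuses = []
    for user, tag, target in tag_log:
        if tag not in inventor:
            inventor[tag] = user                # first use defines the inventor
        elif inventor[tag] == user:
            reuses.append((user, tag, target))  # inventor reused their own tag
    return reuses

log = [
    ("anna", "benefits-advice", "ben"),
    ("carl", "benefits-advice", "dee"),   # reuse, but not by the inventor
    ("anna", "benefits-advice", "eve"),   # SMI2 event: inventor reuses own tag
]
assert smi2_events(log) == [("anna", "benefits-advice", "eve")]
```

Each returned event is direct evidence of the kind discussed above: the user building new annotation activity on their own prior vocabulary.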
2cii. extraneous (improper instructional design).
Appendix 12.8 provides an example from Study 3, section 6.2.1.2. It appears that the level of the users may have been incorrectly estimated and/or the system is improperly designed from an instructional perspective; this leads to extraneous cognitive load for users. A second example is provided
from Study 4, section 7.6: “Problems had occurred with the Tag Editor (ontology creation) and the
Tagging Widget (tagging resources). Not being able to create a tag consisting of more than one word
or to delete a tag has probably negatively affected the usability scores (for more information see
Section 7.3.2) … More extensive training for users which would show how work processes can be
supported and how the widgets can be used could be helpful in achieving better acceptance of the
overall system and individual widgets in the future. ”. As well as the need to address these usability
issues, there may be a need for guidance / scaffolding to be built into the system presented at
Structuralia (Spain).
2ciii. germane (appropriate instructional design motivates).
Studies 1, 2 and 4 provide examples of this, which have already been noted above in the Collaborative Conclusions. Appendix 12.8 provides an example from Study 4, section 7.6, of appropriate design, but more evidence is required in terms of motivating learners. This notion of
motivational factors has been picked up in Study 2; the associated team have developed a useful
Motivation Barriers approach to the bespoke design of People Tagging systems for users in a variety
of contexts.
10.2.2 2d. personal learning networks (group or distributed self-regulation)
2di. building connections (adding new people to the network so that there are resources available
when a learning need arises).
Appendix 12.8 provides an example from Study 2, section 5.3.9: “Both questions lead to the
conclusion that SOBOLEO did not help to increase the number of colleagues in the professional
network. Also the participants state that they did not build up more relevant contacts for their work
practice with the help of SOBOLEO. Summing up we do not find support for SMI 11 with the
questionnaire.” Briefly, SMI 11 is “An individual changed its degree of networkedness”. Indeed, this
may be an important finding (if negative) given that this project's focus is 'continuous social learning in knowledge networks'. However, it should be noted that, although SOBOLEO did not help users to find new contacts, social learning could still have happened, as Study 2's other SMIs indicate different results (see example below). Furthermore, Connexions Northumberland closed down at the end of March 2012, and this could have affected this result as well. One argument is that "SOBOLEO did not help to get new contacts, but this is something
completely different from social learning?” (the latter is a question, not an assertion, posed by a
member of Study 2 team during evaluation ‘buddy’ dialogues). In fact, this is not the case in our
typology: building personal learning networks involves (amongst other things like 2a to 2c) a process
of building connections by tagging new people in your network so that there are resources available
when a learning need arises. The fact that the application of the typology to the Summative Evaluation
can make visible this issue seems to LTRI a positive indicator that the typology is providing a useful
analytical tool. It could in the future, conceivably, help multi-disciplinary research teams, users and
developers “Learn” (see Figure 2.2), this being a process of learning and problematisation about
experiences and constraints in the context. This is an exploratory phase in which design teams
investigate the key features in the target context and requires the involvement and participation of target users as much as possible, e.g. observations and interviews prior to implementations and user-tests. What LTRI are saying now, in the light of the MATURE Summative Evaluation results, is that a
future design-based research process needs a mutually agreed typology (that is not overly complex); a
check-list used as the joint basis for negotiating, shared project team understanding and as a structure
for recording formally negotiated agreement. Agreement can be monitored and assessed over time for
progress, and changes to shared understanding recorded. Hence the scare quotes on “Learn” above
(and see Figure 2.2) as it is the researchers and developers as much as anyone who have to learn from
others in the project, along with users. To re-use part of the quote from D6.2 given above (which is
taken directly from Cook, 2002): “… there is a requirement to be transparent about the theory and
models in use. This requirement, in itself, may not communicate well from one discipline to another,
as words have different meanings in different disciplines; indeed, words have different meanings
within a discipline. The only solution to this problem is, in our view, careful and continuing dialogue
between all stakeholders". This is not a new idea, but it remains an old problem (particularly if we take into account the issues raised above in 2cii, extraneous (improper instructional design)).
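SMI 11 ("An individual changed its degree of networkedness") could, in principle, be checked by comparing each user's number of distinct contacts before and after a tool's introduction. The sketch below assumes a simplified edge-list representation of the professional network; it is not SOBOLEO's actual data model.

```python
# Illustrative check of SMI 11: per-user change in the number of distinct
# contacts between two snapshots of a professional network.
def degree_change(edges_before, edges_after):
    """Each edges_* argument is a set of (user, contact) pairs.
    Returns the per-user change in the number of distinct contacts."""
    def degrees(edges):
        d = {}
        for user, contact in edges:
            d.setdefault(user, set()).add(contact)
        return {u: len(cs) for u, cs in d.items()}
    before, after = degrees(edges_before), degrees(edges_after)
    users = set(before) | set(after)
    return {u: after.get(u, 0) - before.get(u, 0) for u in users}

before = {("anna", "ben"), ("anna", "carl")}
after = {("anna", "ben"), ("anna", "carl"), ("anna", "dee"), ("ben", "anna")}
assert degree_change(before, after) == {"anna": 1, "ben": 1}
```

A result of all zeros over the evaluation period would be consistent with the negative questionnaire finding for SMI 11 reported by Study 2, while still saying nothing about whether other forms of social learning occurred.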
In terms of sub-node building connections (2di), Study 4, section 7.6 provided a positive example: "The strength of the MATURE widgets is that they address new ways of student collaboration." This
appears to be a (positive) design aspiration not necessarily borne out by the evaluation data. Indeed, in
Appendix 12.8, another example is provided from Study 5, section 8.5, which points to the fact that it was the 'people' in the UWAR team, with MATURE and related tools playing a small part, who acted as mediators for 'bridging social capital' across the career guidance field as a whole. The so-called TEBOs are an interesting idea that was not ready for Summative Evaluation due to contractual delays;
it is therefore an area for future exploration. Cook, Pachler and Bachmair (2012) have explored the use
of Social Networked Sites and mobile technology for bridging social capital from a theoretical
perspective and this may be of relevance to TEBOs. Cook et al. (2012) discusses scaffolding access to
‘cultural resources’ facilitated by digital media from a wide perspective (e.g. scaffolding access to
learning resources, health information, cultural events, employment opportunities, etc.). Key concepts
are defined, particularly forms of ‘capital’, through the lens of the following question: how can we
enable learning activities in formal and informal contexts undertaken by individuals and groups to
become linked through scaffolding as a bridging activity mediated by network and mobile technology?
Tentative conclusions include drawing attention to research suggesting that, for example, in Higher Education Facebook provides affordances that can help reduce the barriers that students with lower self-esteem might experience in forming the kinds of large, heterogeneous networks that are sources of social capital. 'Trust' is a key issue in this respect. Thus there appears to be
considerable potential for network and mobile media in terms of sustainability in the integration of
informal and formal institutional dimensions of learning.
2dii. maintaining connections (keeping in touch with relevant persons).
An example from Study 2, section 5.3.9 about SMI 4 (how often a certain topic has been affirmed):
“SMI 4 was investigated because it should shed light on the interesting indicator, if a person is several
times tagged with a certain concept. Study 2 observed that confirmations for tags were almost not
used. The mean number of tags per user is three, with 30% of all users in the system (298 users),
respectively with more than 40% of users, which participated in the training phase (212 users). We
managed to show person profile Knowledge Maturing for four different person profiles and can
therefore support this SMI." How could the system provide support so that person profiles show Knowledge Maturing more often? Is this a question for future work? Indeed, as the Study 2 team point
out in section 5.3.12: “Research about the ‘degree of networkedness’ [SMI 11] was not successful. We
need, therefore, a longer period of investigation and additional support, e.g. visualisations that show
people-topic-connections. Also motivational aspects like feedback mechanisms to support
participation could be helpful (see D2.2/D3.2)". Study 4, section 7.6 (Appendix 12.7) provided an example, in agreement with this last observation from Study 2, of the need for additional support for the changing nature of learning from formal instruction towards more informal and loosely coupled networks of learning; this, however, needs more research.
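Counting how often a topic has been affirmed (SMI 4) can be sketched as counting the distinct taggers who applied a given concept to a given person, so that repeated tagging reads as affirmation of a person-profile entry. Field names here are illustrative assumptions, not the People Tagging schema.

```python
# Hedged sketch of SMI 4 ("how often a certain topic has been affirmed"):
# count distinct taggers per (person, concept) pair.
from collections import defaultdict

def affirmation_counts(tag_events):
    """tag_events: (tagger, person, concept) tuples.
    Returns {(person, concept): number of distinct taggers}."""
    taggers = defaultdict(set)
    for tagger, person, concept in tag_events:
        taggers[(person, concept)].add(tagger)
    return {k: len(v) for k, v in taggers.items()}

events = [
    ("anna", "ben", "housing-advice"),
    ("carl", "ben", "housing-advice"),   # a second tagger affirms the concept
    ("anna", "ben", "careers"),
]
counts = affirmation_counts(events)
assert counts[("ben", "housing-advice")] == 2
assert counts[("ben", "careers")] == 1
```

Visualising such counts over time is one concrete form the "people-topic-connections" support suggested by the Study 2 team could take.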
10.2.3 Next Steps (for LTRI)
As we mentioned above, in the original Description of Work for Workpackage 6 it was envisaged that
LTRI would undertake a final task: 'T6.4 Introduction Methodology'. As we have seen in the Collaborative Conclusions, there is a need for control in terms of alignment, and the introduction of Knowledge Maturing tools needs to be seen from the perspective of wider workplace activity, as well as from the perspective of theory/concepts and empirical work. The typology work described above
will feed into a Workpackage ‘Scaffolding Networking – Interacting with People’, if funded, that
LTRI will lead on in the Learning Layers IP. The nodes of the typology act as the starting point for
negotiation with all partners about what are being called ‘Learning Impact Factors’ (they include our
Learning Factors sub-nodes: self-efficacy, self-regulation, cognitive load, personal learning networks).
These Learning Impact Factors could be used as performance indicators (although nothing has been
agreed). Furthermore, we envisage that General Knowledge Maturing Indicators (or SMIs) could be associated with specific sub-nodes of our typology/'Learning Impact Factors' (we have already provided limited examples in the above discussion). If successfully funded, Learning Layers IP will hold a minimum of one internal Design Conference annually for the next 4 years; LTRI will use some of the analysis above to push for tighter 'alignment' between theory/models, the design of digital artefacts and tools, and evaluation/empirical work. The need for interdisciplinary dialogue and negotiation about meaning will also be emphasised.
11 References
Braun, S. (2011). Community-driven & Work-integrated Creation, Use and Evolution of Ontological Knowledge Structures, PhD thesis, Karlsruhe Institute of Technology.
Braun, S., Kunzmann, C., & Schmidt, A. (2012). Semantic people tagging & ontology maturing: an
enterprise social media approach to competence management. International Journal of Knowledge &
Learning.
Braun, S., Schmidt, A. (2007). Wikis as a Technology Fostering Knowledge Maturing: What we can
learn from Wikipedia. In: 7th International Conference on Knowledge Management (IKNOW '07),
Special Track on Integrating Working and Learning in Business (IWL).
Brinkley, I. (2008). The Knowledge Economy: How Knowledge is Reshaping the Economic Life of Nations. London: The Work Foundation.
Cook, J. (2002). The Role of Dialogue in Computer-Based Learning and Observing Learning: An Evolutionary Approach to Theory. Journal of Interactive Media in Education, 5. Paper online: www-jime.open.ac.uk/2002/5
Cook, J. (2012). Social Media and Mobile Technologies in Workplace Practices: Interpretations of
What Counts as Good Research. Lecture, 8th Joint European Summer School on Technology
Enhanced Learning (JTEL), 21-25th May, Estoril, Portugal.
Cook, J. (submitted). Rethinking Social Network(ed) Mobile Learning in the Workplace. Hawaii
International Conference on System Sciences (HICSS 46), Social Networking and Community
minitrack, January 7-10, 2013, Grand Wailea, Maui, Hawaii.
Cook, J., & Pachler, N. (2011). Online people tagging: Social Mobile Networking Services in Work-based Learning. Presentation at SoMobNet International Roundtable on "Social Mobile Networking
for Informal Learning.” Institute of Education, London, 21 November. Retrieved from
http://cloudworks.ac.uk/cloud/view/5968
Cook, J., & Pachler, N. (2012a). Online People Tagging: Social (Mobile) Network(ing) Services and
Work-based Learning. British Journal of Educational Technology, 43(5).
Cook, J., & Pachler, N. (2012b). What is the Potential for the Use of Social Media and Mobile Devices
in Informal, Professional, Work-Based Learning? Invited talk at Centre for Teaching, Learning and
Technology, University of British Columbia, Vancouver, Canada on 16th April. Retrieved from
http://tinyurl.com/6lhlrwu
Cook, J. and Pachler, N. (2012c). Online People Tagging: Social Mobile Network(ing) Services and
Work-based Learning. European Conference on Educational Research (ECER) 2012, Cadiz, Spain, 18
- 21 September.
Cook, J., Pachler, N. and Bachmair, B. (2012). Using Social Networked Sites and Mobile Technology
for Bridging Social Capital. In Guglielmo Trentin and Manuela Repetto (Eds.), Using Network and
Mobile Technology to Bridge Formal and Informal Learning. Chandos.
Cook, J., Pachler, N., Schmidt, A., Attwell, G. and Ley, T. (accepted). Scaling and Sustaining
Informal Work-Based Learning Using Social Media and Mobile Devices. Workshop proposal to
Alpine Rendez‐Vous 2013, January 28 - February 1, Villard‐de‐Lans, Vercors, French Alps.
Dabbagh, N., & Kitsantas, A. (2011). Personal learning environments, social media, and self-regulated
learning: a natural formula for connecting formal and informal learning. The Internet and Higher
Education.
Eraut, M. (2004). Informal learning in the workplace. Studies in Continuing Education, 26(2), 247-273.
Flecha, R. (2000). Sharing words: Theory and practice of dialogical learning. Landham, MD:
Rowman & Littlefield
Freire, P. (1997). Pedagogy of the heart. New York: Continuum.
Habermas, J. (1996). Between Facts and Norms. Cambridge & Oxford: Polity Press & Basil Blackwell (original work published in 1992).
Huang, Y.-M., Liu, C.-H., & Tsai, C.-C. (2011). Applying social tagging to manage cognitive load in a
Web 2.0 self-learning environment. Interactive Learning Environments. 6(3), 397-419.
Jessen, J., & Jørgensen, A. (2012). Aggregated trustworthiness: redefining online credibility through
social validation. First Monday, 17(1).
Kraiger, K. (2008). Third-generation instructional models: more about guiding development and
design than selecting training methods. Industrial and Organizational Psychology, 1(4), 501-507.
Lacasta, J., Nogueras-Iso, J. and Zarazaga-Soria, F. J. (2010). Terminological Ontologies: Design, Management and Practical Applications, volume 9 of Semantic Web and Beyond. Springer US.
Lewis, J. R. (1995). IBM Computer Usability Satisfaction Questionnaires: Psychometric Evaluation
and Instructions for Use. International Journal of Human-Computer Interaction. 7, 57-78.
Lewis, J.R., Sauro J. (2009). The Factor Structure of the System Usability Scale. In Human Centered
Design. Springer Berlin / Heidelberg, p. 94-103
Miles, A. and Bechhofer, S. (2009). SKOS simple knowledge organization system reference, Aug.
2009. URL http://www.w3.org/TR/2009/REC-skos-reference-20090818/.
Nelkner, T. (2012). Rationale and Design of a Knowledge Maturing Environment for Workplace
Integrated Learning, Doctoral dissertation, University of Paderborn, Paderborn.
Pachler, N., Pimmer, C., & Seipold, J. (Eds.). (2011). Work-Based Mobile Learning: Concepts and
Cases. Oxford: Peter Lang.
Putnam, R. (2000). Bowling Alone: The Collapse and Revival of American Community. New York:
Simon and Schuster.
Ravenscroft, A., Schmidt, A., Cook, J., & Bradley, C. (2012). Designing Socio-Technical Systems for
Informal Learning and Knowledge Maturing in the “Web 2.0 Workplace.” Journal of Computer
Assisted Learning, 28(3), 235-249.
Rajagopal, K., Brinke, D., van Bruggen, J., & Sloep, P. (2012). Understanding personal learning
networks: their structure, content and the networking skills needed to optimally use them. First
Monday, 17(1).
Sauro, J. (2011). Measuring Usability With The System Usability Scale (SUS), Available:
http://www.measuringusability.com/sus.php, last accessed 27-01-2012.
Schmidt, A. (2005). Knowledge Maturing and the Continuity of Context as a Unifying Concept for
Knowledge Management and E-Learning. In Proceedings of I-KNOW ’05, Special Track on
Integrating Working and Learning, Graz, 2005.
Wand, Y. and Wang, R. Y. (1996). Anchoring data quality dimensions in ontological foundations.
Communications of the ACM, 39(11), 86-95.
12 Appendices
12.1 General Knowledge Maturing Indicators (GMIs)
The Indicator Alignment Process (section 3) had the goal of defining precise links between SMIs and GMIs to enable the inheritance of justification from the GMI level. The starting point was the list of GMIs annotated with their level of justification in D1.3 (see table below). Here we distinguished only between empirical justification from the representative study (RepStudy) or an associate partner study (APStudy) in D1.2, and no justification (e.g. an individual proposal, FZI).
Note that there is a slight mismatch (early on in the numbering) between GMI IDs (below) &
Knowledge Maturing Phases. "Table 10.1: Summary of study coverage of GMI Indicators by phase"
in section 10.1.3 provides a clarification of this.
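The inheritance idea behind the alignment process can be illustrated with a minimal mapping: an SMI linked to a GMI inherits that GMI's level of justification. The specific IDs and levels below are illustrative placeholders for the sketch, not entries taken from the table.

```python
# Hedged sketch of justification inheritance in the Indicator Alignment
# Process: SMI -> GMI links let SMIs inherit the GMI's justification level.
# IDs and levels are placeholders; see the table for the actual entries.
GMI_JUSTIFICATION = {
    "I.2.1.1": "validated by RepStudy",
    "I.1.1.1": "individual proposal (FZI)",
}

SMI_TO_GMI = {"SMI2": "I.2.1.1"}  # assumed alignment, for illustration only

def inherited_justification(smi):
    """Return the justification level an SMI inherits via its GMI link."""
    gmi = SMI_TO_GMI.get(smi)
    if gmi is None:
        return "no alignment defined"
    return GMI_JUSTIFICATION.get(gmi, "no justification recorded")

assert inherited_justification("SMI2") == "validated by RepStudy"
assert inherited_justification("SMI99") == "no alignment defined"
```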
ID
Level 1
Level 2
KM Indicator
I.
Artefacts
I.1
Artefacts
Artefact as such
I.1.1
Artefacts
Artefact as such
I.1.1.1
Artefacts
Artefact as such
I.1.1.2
Artefacts
Artefact as such
I.1.1.3
Artefacts
Artefact as such
I.1.1.x
Artefacts
Artefact as such
I.1.2
Artefacts
Artefact as such
I.2
Artefacts
Creation context
and editing
I.2.1
Artefacts
Creation context
and editing
I.2.1.1
Artefacts
I.2.1.2
Artefacts
Creation context
and editing
Creation context
and editing
An artefact has changed its
degree (score) of readability
An artefact has changed its
degree (score) of
structuredness
An artefact has changed its
degree (score) of formality
An artefact has changed its
degree (score) of …
An artefact's meta-data has
changed its quality
characteristics
An artefact has been changed
after an individual had
learned something
An artefact has been edited
by a highly reputable
Level of
Justification
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
validated by
RepStudy
validated by
RepStudy
I.2.1.3
Artefacts
I.2.2
Artefacts
I.2.2.1
Artefacts
I.2.2.2
Artefacts
I.2.2.3
Artefacts
Creation context
and editing
individual
An artefact has been
created/edited/co-developed
by a diverse group
Creation context
and editing
Creation context
and editing
Creation context
and editing
Creation context
and editing
An artefact has been changed
as the result of a process
An artefact was prepared for
a meeting
An artefact describing a
process has been changed
Creation context
and editing
Creation context
and editing
I.2.3
Artefacts
I.2.3.1
Artefacts
I.2.3.2
Artefacts
I.2.3.3
Artefacts
Creation context
and editing
Creation context
and editing
I.2.3.4
Artefacts
Creation context
and editing
I.2.3.5
Artefacts
Creation context
and editing
I.2.3.6
Artefacts
Creation context
and editing
I.2.3.7
Artefacts
I.2.3.8
Artefacts
Creation context
and editing
Creation context
and editing
I.3
Artefacts
Usage
I.3.1
Artefacts
Usage
I.3.2
Artefacts
Usage
I.3.3
Artefacts
Usage
I.3.4
I.3.5
Artefacts
Artefacts
Usage
Usage
An artefact was
created/refined in a meeting
An artefact was created by
integrating parts of other
artefacts
An artefact has been the
subject of many discussions
An artefact has not been
changed for a long period
after intensive editing
An artefact is edited after a
guidance activity
An artefact is edited
intensively within a short
period of time
An artefact has been changed
to a lesser extent than
previous version(s)
An artefact was changed in
type
An artefact has achieved a
high degree of awareness
among others
An artefact is used widely
An artefact was selected from
a range of artefacts
An artefact became part of a
collection of similar artefacts
An artefact was made
individual
proposal
(FZI)
individual
proposal
(FZI)
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
individual
proposal
(FZI)
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(UIBK)
validated by
APStudy
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
validated by
RepStudy
validated by
RepStudy
validated by
I.3.6
Artefacts
Usage
I.3.7
Artefacts
Usage
I.3.8
Artefacts
Usage
I.3.9
Artefacts
Usage
accessible to a different
group of individuals
An artefact is referred to by
another artefact
An artefact was presented to
an influential group of
individuals
An artefact has been accessed
by a different group of
individuals
An artefact has been used by
an individual
I.3.10
Artefacts
Usage
An artefact was changed
Artefacts
Rating &
legitimation
Artefacts
Rating &
legitimation
Artefacts
Rating &
legitimation
Artefacts
Rating &
legitimation
Artefacts
Rating &
legitimation
I.4
I.4.1
I.4.2
I.4.3
I.4.4
I.4.5
Artefacts
I.4.6
Artefacts
II
Individual
capabilities
II.1
Individual
capabilities
II.1.1
II.1.2
II.1.3
II.1.4
II.1.5
II.1.6
Individual
capabilities
Individual
capabilities
Individual
capabilities
Individual
capabilities
Individual
capabilities
Individual
capabilities
An artefact has been
accepted into a restricted
domain
An artefact has been
recommended or approved
by management
An artefact has become part
of a guideline or has become
standard
An artefact has been rated
high
An artefact has been certified
according to an external
standard
An artefact has been assessed
by an individual
Rating &
legitimation
Rating &
legitimation
Individual activities
Individual activities
Individual activities
Individual activities
Individual activities
Individual activities
Individual activities
An individual has acquired a
qualification or attended a
training course
An individual has contributed
to a project
An individual has contributed
to a discussion
An individual is approached
by others for help and advice
An individual has significant
professional experience
An individual is an author of
many documents
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
APStudy
individual
proposal
(FZI)
validated by
RepStudy
individual
proposal
(FZI)
validated by
RepStudy
individual
proposal
(FZI)
individual
proposal
(FZI)
validated by
RepStudy
individual
proposal
(FZI)
individual
proposal
(FZI)
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
II.2.1
Individual
capabilities
Individual
capabilities
Individual organization
Individual organization
II.2.2
Individual
capabilities
Individual organization
II.2.3
Individual
capabilities
Individual organization
II.2.4
Individual
capabilities
Individual organization
II.2.5
Individual
capabilities
Individual organization
II.2
II.3.1
Individual
capabilities
Individual
capabilities
II.3
An individual changed its role
or responsibility
An individual has been a
member of the organisation
for a significant period of time
An individual has been
involved in a process a
number of times
An individual has been
involved in a process for a
significant period of time
An individual has been the
owner of a process for a
significant period of time
Individual - group
Individual - group
An individual has a central
role within a social network
II.3.2
Individual
capabilities
Individual - group
An individual changed its
degree of cross-topic
participation
II.4
Individual
capabilities
Rating, assessment
II.4.1
Individual
capabilities
III
Knowledge/topic
III.1
Knowledge/topic
Activities
III.1.1
Knowledge/topic
Activities
Knowledge has been
searched for
III.1.2
Knowledge/topic
Activities
Knowledge has been
associated with an artefact
III.1.3
Knowledge/topic
Activities
III.1.4
Knowledge/topic
Rating, assessment
An individual has been rated
with respect to expertise
Knowledge has been
associated with an individual
Knowledge has been
described/documented (or
the documentation has
improved) in an artefact
Activities
individual
proposal
(FZI)
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
individual
proposal
(FZI)
validated by
RepStudy
proposal at
CMLondon
(FZI; UBP;
UIBK)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
IV
IV.1
IV.1.2
Sociofacts
Sociofacts
Process/task
(knowledge)
Sociofacts
Process/task
(knowledge)
A process has been
successfully undertaken a
number of times
A process was certified or
standardised according to
external standards
A process was internally
agreed or standardised
A process was changed by
adding or deleting steps
IV.1.3
Sociofacts
IV.1.4
Sociofacts
IV.1.5
Sociofacts
IV.1.6
Sociofacts
Process/task
(knowledge)
Process/task
(knowledge)
Process/task
(knowledge)
Process/task
(knowledge)
IV.1.7
Sociofacts
Process/task
(knowledge)
IV.1.8
Sociofacts
Process/task
(knowledge)
IV.1.9
Sociofacts
Process/task
(knowledge)
IV.2
Sociofacts
Quality of social
network
IV.2.1
Sociofacts
Quality of social
network
An individual changed its
degree of networkedness
IV.2.2
Sociofacts
Quality of social
network
An individual changed its
degree of participation
IV.2.4
Sociofacts
Quality of social
network
An individual changed its
intensity of social action
Sociofacts
Quality of social
network
A group of individuals
changed their degree of
external involvement
A group of individuals
changed their degree of
heterogeneity
IV.2.5
A process was documented
A process was changed
according to the number of
cycles (loops)
A process was changed
according to the number of
decisions
A process was changed
according to the number of
participants
IV.2.6
Sociofacts
Quality of social
network
IV.3
Sociofacts
Agreement
162
individual
proposal
(FZI)
individual
proposal
(FZI)
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
RepStudy
validated by
APStudy
validated by
APStudy
individual
proposal
(FZI)
individual
proposal
(FZI)
proposal at
CMLondon
(FZI; UBP;
UIBK)
proposal at
CMLondon
(FZI; UBP;
UIBK)
proposal at
CMLondon
(FZI; UBP;
UIBK)
proposal at
CMLondon
(FZI; UBP;
UIBK)
proposal at
CMLondon
(FZI; UBP;
UIBK)
IV.3.1
Sociofacts
Agreement
IV.4
Sociofacts
Collective
capability
IV.4.1
Sociofacts
Collective
capability
Sociofacts
Collective
capability
IV.4.3
Sociofacts
Collective
capability
V
Impact/performanc
e
V.1
Impact/performanc
e
V.1.1
Impact/performanc
e
V.1.2
Impact/performanc
e
V.1.3
Impact/performanc
e
Performance
V.2
Impact/performanc
e
Quality
V.2.1
Impact/performanc
e
Quality
V.3
Impact/performanc
e
Impact
V.3.1
Impact/performanc
e
IV.4.2
A group of individuals created
a consensus artefact
A group of individuals has
established a reflective
practice
A group of individuals
changed their (systematic)
approach to organizational
development
A group of individuals meets
certain quality criteria for
collaboration
Performance
The performance of a process
has improved
Performance
The performance of a group
of individuals has improved
A process was improved with
respect to time, cost or
quality
Performance
The output of a process
(product/service) has
improved with respect to
quality
The customer satisfaction has
improved
Impact
163
proposal at
CMLondon
(FZI; UBP;
UIBK)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
validated by
RepStudy
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
individual
proposal
(FZI)
12.2 Cook and Pachler (2012) (BJET paper)
Final Draft: Cook, J. and Pachler, N. (2012). Online People Tagging: Social (Mobile) Network(ing)
Services and Work-based Learning. British Journal of Education Technology 43(5).
Online People Tagging: Social (Mobile) Network(ing) Services and
Work-based Learning
John Cook, London Metropolitan University and Norbert Pachler, Institute of Education,
University of London
John Cook (PhD) is Professor of Technology Enhanced Learning and Director of the Learning
Technology Research Institute (http://www.londonmet.ac.uk/ltri/), London Metropolitan University.
He is a founding member of The London Mobile Learning Group (www.londonmobilelearning.net/).
John was Chair/President of the Association for Learning Technology (2004-06), and is currently the
Chair of ALT’s Research Committee. Correspondence: [email protected]
Norbert Pachler (Dr. phil.) is Professor of Education and Director: International Teacher Education
at the Institute of Education, University of London. He is the convenor of the London Mobile Learning
Group, has published widely and researches and supervises in the fields of new technologies in
teaching and learning, teacher education and development and foreign language education.
Correspondence: [email protected]
Abstract: Social and mobile technologies offer users unprecedented opportunities for
communicating, interacting, sharing, meaning-making, content and context generation. And,
these affordances are in constant flux driven by a powerful interplay between technological
innovation and emerging cultural practices. Significantly, also, they are starting to transcend the
everyday life-worlds of users and permeate the workplace and its practices. However, given the
emergent nature of this area, the literature on the use of social and mobile technologies in
workplace practices is still small. Indeed, social media are increasingly being accessed via
mobile devices. Our main focus will, therefore, be on the question of what, if any, potential
there is for the use of social media in informal, professional, work-based learning. The paper
provides a critical overview of key issues from the literature on work-based learning, face-to-face and technology supported, as well as social (mobile) networking services, with particular
attention being paid to people tagging. It then introduces an initial typology of informal
workplace learning in order to provide a frame for understanding social (mobile) network(ing)
services in work-based learning. Finally, a case study (taken from the literature) of People
Tagging tool use in digital social networks in the European Commission funded MATURE
project is used to illustrate aspects of our typology.
Introduction
In 2011 social networks and associated technologies have continued to gain in importance in people’s
everyday life-worlds. Research by PEW Internet, a nonpartisan, non-profit ‘fact tank’, suggests that
66% of online U.S. adults use social media (see Smith, 2011). 73 With the growth of (the market
share of) smartphones – with 35% of U.S. adults owning a smartphone – it hardly comes as a surprise
that mobile social media are also enjoying a period of growth, by 37% in 2011 according to analytics
firm comScore. Social networking apps appear to be the main driver for this development. 74 With the
growth also in tablets, these developments are likely to continue. These trends are not confined to the
U.S. According to comScore the number of people accessing social networking sites in France,
Germany, Italy, Spain and the U.K. has gone up by 44% in 2011 with 55 million users in these
countries doing so on their mobile devices, an increase of 67% compared with the same month a year
earlier. 75 In a survey with 15,000 respondents across six continents by GSMArena in March 2011, 76
engagement in social networks was the eighth most favoured mobile phone feature overall.
Unsurprisingly, in the fields of education and educational research social media have also started to
attract attention, albeit not always in a positive way. Cases of teachers putting their careers at risk
either by revealing too much personal information online and thereby risking becoming too familiar
with pupils or by commenting unfavourably about their pupils or employers online are eagerly picked
up by the media. 77
This paper aims to provide a new perspective by bringing a critical review of work-based learning,
Social Network(ing) Sites (SNSs) and tagging together into a typology, which is then illustrated with a
case study of a People Tagging tool in digital social networks taken from the European Commission
funded MATURE project. We are particularly interested in contributing towards a deep understanding
of social phenomena and experiences here and offer our analysis of one case study with the intention
of providing an initial frame for gaining a better understanding of what is currently a new and under-researched area. Consequently, the focus is mainly on a conceptually coherent analytical approach and
not so much on the findings themselves, which are intended to be indicative only.
Critical overview of key issues from the literature
Given the breadth of the scope of our topic across the sub-domains of work-based, informal and
mobile learning as well as SNSs, invariably difficult choices have to be made here when presenting
relevant background literature in view of the limited space available. For this reason we will discuss
SNSs broadly in order to establish general principles which can subsequently be explored with specific
reference to mobile devices. As we have sought to show in the introduction social networking is
increasingly a phenomenon played out on mobile devices.
73 http://mashable.com/2011/11/15/social-media-use-study/
74 http://www.readwriteweb.com/archives/comscore_mobile_social_networking_app_audience_grows_126_in_past_year.php
75 http://www.pcworld.com/article/244399/social_networking_use_among_mobile_users_grows_in_europe.html
76 http://www.gsmarena.com/mobile_phone_usage_survey-review-592.php
77 http://www.bbc.co.uk/news/uk-scotland-16379494
Work-based learning, face-to-face and technology supported
A critical review of the way technologies are being used for work-based learning (Kraiger, 2008)
found that most ‘solutions’ are targeted towards a learning model based on the ideas of direct
instruction in a formal manner, e.g. transferring lectures and seminars from face-to-face interactions to
computer-mediated interactions. Recent work has started to explore approaches that seek to harness
the affordances in particular of mobile devices around learning in informal contexts (see e.g. Pachler,
Pimmer, & Seipold, 2011). The question arises: what is known from empirical work on face-to-face
work-based learning to inform our perspectives on what it may be possible to achieve with social and
mobile media mediated work-based learning?
The field of work-based learning, within which learning through and at work is discussed, is a
contested field, often driven by national and supranational policy discourses such as those around
employability or life-long learning. Often, the term ‘informal learning’ is used to capture related
processes in the workplace to set them apart from formal education or training (see e.g. Eraut, 2004, p.
247). We find the notion of ‘informal learning’ problematic at various levels, most specifically
because we question whether fundamentally different cognitive (and social) processes are at work and,
therefore, prefer to use the term ‘learning in informal contexts’. Notwithstanding these reservations,
we recognize the fact that the term ‘informal learning’ is widely used and define it as follows:
a natural activity by a self-motivated learner ‘under the radar’ of a tutor, individually or in a
group, intentionally or tacitly, in response to an immediate or recent situation or perceived need,
or serendipitously with the learner mostly being (meta-cognitively) unaware of what is being
learnt (Pachler & Cook, 2009, p. 152)
Work-based and informal learning are discussed at a range of different levels in the literature. In this
paper we focus on literature that is empirically founded. One key proponent of an empirical tradition
of work-based learning research is Michael Eraut. There are, of course, other important scholars in the
field, such as for example Sawchuck (2010), Evans et al. (2009), Illeris (2007) or Livingstone (2006),
to name but a few. Given the significance and internal coherence of Eraut’s work, as well as its
connectedness to other scholarship and research in the field, we use it as a basis for our conceptual
thinking here. Eraut’s work (2000, 2004, 2007, 2008) also has been derived mainly from the study of
professionals and graduate employees rather than workers more widely. The fact that this is the
intended target audience for our discussion of (people) tagging below makes Eraut particularly suitable
for our purposes.
Eraut (2000, p. 116) inter alia identifies the following features of informal learning, which he presents
as part of a ‘typology’: implicit linkage of memories with current experience; reflection on as well as
discussion and review of past events; observing and noting facts, ideas, opinions, impressions; asking
questions; engagement in decision-making, problem solving. By 2008 (Eraut, 2008, p. 409) the
typology had been refined into that shown in Table 1:
Table 1: A typology of Early Career Learning (Source: Eraut, 2008, p. 18)
Eraut (2007, p. 406) posits that these features by-and-large play out in the following four types of
activities:
• Assessing clients and/or situations (sometimes briefly, sometimes involving a long process of investigation) and continuing to monitor them;
• Deciding what, if any, action to take, both immediately and over a longer period (either individually or as a leader or member of a team);
• Pursuing an agreed course of action, modifying, consulting and reassessing as and when necessary;
• Metacognitive monitoring of oneself, people needing attention and the general progress of the case, problem, project or situation.
What is of particular interest for our purposes here is the fact that the majority of learning activities
through and at work seem to involve other people, e.g. through one-to-one interaction, participation in
group processes, working alongside others etc. This, for us, underlines the centrality of identifying
relevant ‘others’ from and with whom to learn – and the possible role of social media and SNSs in it –,
particularly given the documented problems in the transfer of knowledge between people in the
workplace (see Eraut, 2008, pp. 15-18): the heterogeneity of most workplaces in terms of human
resources is undoubtedly a strength as well as a challenge in terms of asset identification and
management. How best to identify and access relevant expertise? A related challenge for less, as well
as more established colleagues is that of (perceived) vulnerability associated with power relationships
and giving the appearance of lack of knowledge and expertise etc.
The art of discourse about practice then becomes one of establishing affinity with colleagues through
work-related discourse and giving the appearance of being generally cooperative, without giving
anything away that might increase one’s vulnerability (Eraut, 2008, p. 16).
In this section we have provided a critical overview of key issues from the literature on work-based
learning, face-to-face and technology supported, in the next section we examine social (mobile)
networking services.
Social Network(ing) Sites and Social Media
One of the early and often cited papers on social network(ing) sites is that by boyd and Ellison (2008).
In it the authors, in addition to charting the history of social network sites (SNSs) and setting out some
relevant research questions, offer a definition of SNSs as
web-based services that allow individuals to (1) construct a public or semi-public profile
within a bounded system, (2) articulate a list of other users with whom they share a connection,
and (3) view and traverse their list of connections and those made by others within the system.
Also, they make the distinction between social networking and social network sites preferring the
latter term as the former, according to them, emphasises relationship initiation. The term social
network, they argue, reflects the fact that users are primarily communicating with people “who are
already part of their extended social network”, i.e. they augment pre-existing social relationships and
interactions. Merchant (2011, p. 5) considers the way in which SNSs support public displays of
friendship and connections to be one of their unifying features. Compared with other writers on the
topic, Merchant considers SNSs as part of what he calls ‘the wider textual universe’ of online
communication (p. 5) and he discusses them in relation to more ‘traditional’ notions of social
networks, “the patterning and flow of communication, friendship, intra- and inter-group behaviours as
they are enacted in and across different geographical locations and over time” not mediated by social
media (p. 6). This notion that SNSs have the potential to include people beyond currently existing
social networks seems to us to be an important characteristic, as we shall see below when we discuss
the MATURE case study.
The existing literature on social media suggests that at best more work is required to maximise the
potential of their affordances for learning in formal contexts (e.g. see Crook, 2012). The literature on
technology and social media use and ‘informal learning’ (e.g. see Sefton-Green, 2004) on the
other hand tends to be characterised by a certain lack of definitional clarity about what can best be
understood by ‘informal learning’, how (social) media practices in everyday life can be harnessed in
formal learning (e.g. see Pachler, Bachmair, & Cook, 2010) and by a focus on personal learning
environments. Furthermore, Dabbagh and Kitsantas (2011), with reference to the work of others, see
agency as “a change of self-representation based on psychological needs such as competence
(perceived self-efficacy), relatedness (sense of being a part of the activity) and acceptance (social
approval)” all of which they view as acts of self-regulated learning. The concept of self-regulation is
an important one and this can be seen to be linked to the notion of agency used by Pachler, Bachmair
and Cook (2010) in their socio-cultural ecology of mobile learning. Indeed, these issues were
summarised by Eraut’s observation, based on empirical work (Eraut, 2004, p. 269), that factors
affecting face-to-face learning in the workplace, such as feedback, support, challenge and the value of
the work, can lead to individual self-efficacy in terms of confidence and commitment. One approach to
providing computer-mediated support is scaffolding for self-efficacy and self-regulation. The use of
scaffolding as a metaphor refers to the provision of temporary support for the completion of a task that
a learner might otherwise be unable to achieve. Van de Pol et al.’s (2010) review of a decade of
research on face-to-face scaffolding suggests that this work seems to “point largely in the same
direction, i.e., that scaffolding is effective” (p. 286). In terms of computer supported scaffolding,
approaches to semantic and adaptive scaffolding (Azevedo, Cromley, Winters, Moos, & Greene,
2005), and meta-cognitive scaffolding for self-regulation (van de Pol et al., 2010) have shown promise.
In the next section attention will be paid to the literature on and the notion of people tagging, as this
appears a productive line of inquiry for work-based learning.
Tagging and people tagging
It would go way beyond the scope of this paper to offer a comprehensive discussion of the extensive
literature on tagging. Therefore, we confine ourselves to a few key issues, which we consider to be
particularly pertinent in relation to the question of the potential for informal, professional, work-based
learning.
Tagging, ostensibly, enables users of social media to add labels to digital resources in order to help
them refer to them easily at a later point. Huang et al. (2011) offer a useful discussion of tagging in
relation to cognitive load theory. They argue that the learner’s working memory is essentially a
cognitive system of limited capacity and that, therefore, a learner’s performance is affected by
cognitive load. They go on to distinguish three types of cognitive load: intrinsic, extraneous and
germane. The first type, intrinsic cognitive load “is determined by the inherent nature of the materials
and learners’ prior knowledge”. Extraneous cognitive load “is caused by an improper instructional
design.” And germane cognitive load “is resulted by an appropriate instructional design” … which
“can motivate learners to indulge themselves in the processing, construction, and automation of
schemas and then store them in their long-term memory”. (p. 3)
Fundamentally, two types of tagging can be distinguished: folksonomies, i.e. a user-driven, collective
system of classification, and ontologies, classifications determined by a system and/or its providers.
Many papers discuss the relationship between folksonomies and ontologies; see e.g. van Damme,
Hepp and Siorpaes (2007). Folksonomies, such as social bookmarking, are characterised by variability
and a lack of systematicity in the use of tags and, therefore, of ontological clarity. This creates a need
for an agreement on implicit semantics (Aranda-Corral & Borrego-Díaz, 2010). We also note the
literature on self-disclosure by teachers, but highlight that this is not what we are talking about here, in
that we can tag other workers and they can tag us.
There also exists a small body of literature on people tagging, an approach to the classification of the
knowledge embodied in users as well as their social networks rather than of digital artefacts. One
immediate challenge arises around the descriptors to be used to characterise people in your
professional network through tags particularly as the knowledge embodied in people remains often
tacit, is normally multifaceted and usually is liable to frequent change. Unlike some of the literature in
the field, which is interested in the notion of people tagging from the perspective of efficiency gains
and cost reduction in relation to professional relationship formation and management (Farrell & Lau,
2006; Braun, Kunzmann, & Schmidt, 2010), we are interested in it primarily in relation to learning
(through and at work). In order for such systems to work, the research suggests (see e.g. Farrell, Lau,
Wilcox & Muller, 2007), one cannot rely on each employee to create and keep their profile up-to-date,
but needs to seek “to leverage the work of a few active taggers” (p. 2). Some particular challenges in
the field of people tagging are the use of potentially objectionable tags or the disclosure of sensitive
data.
Social networking approaches to workplace learning have tended to focus on describing and
augmenting employee profiles from the perspective of those profiles being used for expert finding and
community formation. These platforms are mainly based on the self-promotion paradigm whereby
people can represent themselves with a profile and indicate their connections to other users. Further, in
some of these approaches, the principle of social tagging and bookmarking is transferred to people; for
instance LinkedIn (http://www.linkedin.com/), Xing (http://www.xing.com/) and Collabio 78 (short for
Collaborative Biography) developed by Microsoft (the latter is no longer active). LinkedIn and Xing
have mobile phone apps available. Of relevance to this paper is Collabio’s interest in the
quality of tags and encouraging social connectedness (Figure 1).
Figure 1: Collabio (Source: Bernstein, Tan, Smith, Czerwinski, & Horvitz, 2009, p. 1)
78 http://research.microsoft.com/en-us/um/redmond/groups/cue/collabio/, accessed 19 July 2011
(The user has guessed several tags for Greg Smith, including band, ohio and vegas. Tags guessed by Greg’s
other friends are hidden by dots until the user guesses them.)
Braun et al. (2012) describe Collabio as follows: “Users can tag their friends in a game. Therefore, the
users only see the tags assigned to a friend in an obscured tag cloud. When they start to describe the
friend, guessed tags are uncovered and new tags are added to the tag cloud. For each tag, the users
accumulate points equal to the number the tag is assigned to the friend. Only the friend him-/herself
can see the whole uncovered tag cloud, who assigned which tag and delete tags if needed. However,
self-tagging is not possible. To prevent the cold-start effect of a completely empty tag cloud, seed tags
are used from a person's public profile.”
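The scoring mechanic in this description can be sketched as a toy simulation. All names below are our own illustration, not Collabio's actual implementation, and we simplify by letting all guessers share one set of uncovered tags:

```python
from collections import Counter

class ObscuredTagCloud:
    """Toy sketch of Collabio's tag-guessing game (illustrative only)."""

    def __init__(self, seed_tags=None):
        # tag -> number of times it has been assigned; seed tags
        # (e.g. from a public profile) counter the cold-start effect
        self.tags = Counter(seed_tags or [])
        self.uncovered = set()

    def guess(self, player_scores, player, tag):
        """A player guesses/assigns a tag and earns points equal to
        the tag's total assignment count."""
        tag = tag.lower()
        self.tags[tag] += 1          # a guess is also a new assignment
        self.uncovered.add(tag)      # the tag is revealed to guessers
        points = self.tags[tag]
        player_scores[player] = player_scores.get(player, 0) + points
        return points

    def visible_to_guesser(self):
        # guessers see only uncovered tags; the rest stay obscured dots
        return {t: self.tags[t] for t in self.uncovered}

cloud = ObscuredTagCloud(seed_tags=["ohio"])   # seeded, but still obscured
scores = {}
cloud.guess(scores, "alice", "band")   # first assignment: 1 point
cloud.guess(scores, "bob", "band")     # popular tags pay more: 2 points
```

The point rule rewards converging on tags that many friends agree on, which is one plausible reading of why the game encourages tag quality.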
Rajagopal, Brinke, van Bruggen and Sloep (2012) offer a conceptualisation of personal learning
networks with particular relevance for our focus on people tagging. Rather than foregrounding the
technology, they concentrate primarily on “the act of making connections with other professionals”
and the skills associated with it. Key to these skills, they argue, is “the ability to identify and
understand other people’s work in relation to one’s own, and to assess the value of the connection with
these others for potential future work”. Clearly, technology in general, and SNSs in particular, has a
valuable contribution to make in the process of creating (a) personal network(s) of people distributed
across groups and places and various degrees of connectivity and interconnectivity. Amongst the
benefits cited by the authors are the development and growth of one’s professional career, access to
support structures, professional development and knowledge creation. In order to be able to make the
best use of the learning opportunities from personal learning networks, Rajagopal et al. (2012) posit,
users need to perform the following three primary tasks: “building connections (adding new people to
the network so that there are resources available when a learning need arises); maintaining connections
(keeping in touch with relevant persons); and activating connections with selected persons for the
purpose of learning”.
Finally, Rajagopal et al. (2012) note that in the skills layer of their model, which also includes an
activity and attitude layer which we don’t discuss here, technologies can offer various functionalities
to support personal networking such as enhancing communication with people in the network,
remaining in touch, positioning an individual in the network and finding people and expertise. People
tagging can be seen to be one advanced functionality supporting the learning process in and through
personal networks. In particular, it can be seen to address one important aspect of SNSs, namely
credibility through what Jessen and Jørgensen (2012) call ‘social validation’. Building on the work of
others, they propose a model of aggregated trustworthiness where perceived credibility = social
validation + authority and trustee + profiles.
In the next section we introduce an initial typology of informal workplace learning that takes into
account our review of key issues from the literature. The purpose of this endeavor is to provide a
frame that will, hopefully, assist our understanding of social (mobile) networking services in work-based learning (analysing current examples and providing suggested lines that could be explored in
future endeavours).
An Initial typology of factors in Social (Mobile) Network(ing) Services and Work-based
Learning
Our typology of factors in Social (Mobile) Network(ing) Services and Work-based Learning is
represented textually below in Table 2. The derivation of the main nodes was made after going
through the literature variously over several months and coming back to the simple focus presented by
Eraut (2004, p. 269), who summarizes the ‘Factors affecting learning in the workplace’, calling them
Context and Learning Factors. The Context Factors node branches were derived directly from Eraut’s
(2008, p. 18) ‘A typology of Early Career Learning’; this is given in Table 1 above. The key elements
of the critical literature review were added to the Learning Factors node; this was required because
Eraut’s body of work deals with face-to-face learning. In this sense we have extended Eraut’s work.
Finally, it became clear that a specialized node for people tagging factors was needed. Thus the
Learning Factors node is generic, and hence includes branches surrounding personal learning
networks, whereas the People Tagging Factors is very specific. As noted earlier, in the main our
typology (a checklist) seeks to serve as an explanatory, analytical frame as well as a starting point for
discussion about attendant issues, rather than provide a definitive map of the field. We also want to
stress that there is insufficient space here to represent and discuss each of the sub-branches of the
typology in any detail. That said, the case study of MATURE below elaborates on our typology in a
real work-based context.
1. Context Factors
a. Work process with learning as a by-product
b. Learning activities located within work or learning processes
c. Learning processes at or near the workplace
2. Learning Factors
a. individual self-efficacy (confidence and commitment)
i. feedback
ii. support
iii. challenge
iv. value of the work
b. acts of self-regulation
i. competence (perceived self-efficacy)
ii. relatedness (sense of being a part of the activity)
iii. acceptance (social approval)
c. cognitive load:
i. intrinsic (inherent nature of the materials and learners’ prior knowledge)
ii. extraneous (improper instructional design)
iii. germane (appropriate instructional design motivates)
d. personal learning networks (group or distributed self-regulation)
i. building connections (adding new people to the network so that there are
resources available when a learning need arises);
ii. maintaining connections (keeping in touch with relevant persons); and
iii. activating connections (with selected persons for the purpose of learning)
iv. aggregated trustworthiness (perceived credibility) = social validation +
authority and trustee + profiles
3. People Tagging Factors
a. efficiency gains
b. cost reduction
c. expert finding
d. People tagging tactics
i. optimal tagging needs to leverage the work of a few active taggers
ii. gamification to encourage quality of tags and social connectedness
iii. Need for seeding
iv. People allowed to tag each other
Table 2: Factors in work-based Social (Mobile) Network(ing) Services
Case study of the MATURE project
This section provides a brief case study of a People Tagging tool, used in digital social networks,
taken from the European Commission funded MATURE project. The MATURE Project
(http://mature-ip.eu) conceives individual workplace learning processes to be interlinked (the output of
a learning process is input to others) in a knowledge-maturing process in which knowledge changes in
nature. This knowledge can take the form of classical content in varying degrees of maturity, but also
involves people, tasks and processes or semantic structures. The goal of MATURE is to understand
this maturing process better, based on empirical studies with users, to give guidelines and to build
tools and services to reduce maturing barriers. MATURE systematically makes use of a design
research approach that has included Use Cases that were linked to personas (developed from an
ethnographically informed study) and particular Knowledge Maturing Activities. One important
continuing aspect of the MATURE work is the ‘people dimension’ of the project, which aims at
improving the development of knowledge about others’ expertise and improved informal relationships
based on a People Tagging tool pilot study (Braun et al., 2012); the tool was developed by the FZI
(http://www.fzi.de/), the scientific coordinating partner of MATURE. Note that the People Tagging
tool is different to Collabio. Braun et al. (2012) describe the People Tagging tool approach as follows:
“Semantic people tagging is based on a combination of the principles of (a) collaborative
tagging of persons … and (b) social semantic bookmarking … Employees assign tags to each
other (e.g., on entries in an employee directory, from their address book, or as a bookmark to
social networking sites like LinkedIn) referring to expertise or interests. This can be [used to]
complement self-assessment and the assignment of tags by superiors. These assignments are not
restricted to predefined competence catalogs, but the employees can use (almost) any tags
which they find appropriate, although tags are recommended based on those already used by
others. Tags can be collaboratively developed towards a shared ontology, negotiated among the
users of the system. This is achieved through a gradual formalization (as part of everyday usage)
following the concept of ontology maturing … i.e., new tags are first added to a category of
‘latest topics’ from where users can merge synonyms and typos, or add translations, and put
them in a structure of broader and narrower terms. More formal definitions can be added, too, so
that the entries evolve from informal tags to more formal competency definitions usually found
in competency catalogs … This can serve several purposes and use cases. (1) Colleagues can
find each other more easily, e.g., for asking each other for help. (2) Employees become aware of
other colleagues with similar interests or experience to stimulate the formation of communities.
(3) It supports human resource development by providing information about the aggregated
needs (e.g., by analyzing searches) and current capabilities of current employees (aggregated
tagging data) to make the right decisions about training required. By extending the group of
people who can make competence and expertise assignments to encompass colleagues, semantic
people tagging promises to achieve (1) a higher up-to-dateness and completeness of the
employee profiles, (2) more realistic assessment of competencies and expertise than with self-assessment, and (3) additional awareness for the tagged person, who can see his/her colleagues'
perspective. At the same time, assignments by colleagues come with social risks, e.g. by the
assignment of inappropriate tags”.
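The gradual formalisation described in this quotation can be made concrete with a small illustrative sketch. The class below is a toy model under our own assumptions (the names PeopleTaggingSketch, merge, relate and the data structures are ours, not the MATURE implementation): free-form tags enter a ‘latest topics’ pool, synonyms and typos can be merged, tags can be placed in a broader/narrower structure, and search then follows that structure.

```python
from collections import defaultdict

class PeopleTaggingSketch:
    """Toy model of semantic people tagging with gradual tag formalisation
    ('ontology maturing'). Illustrative only, not the MATURE code base."""

    def __init__(self):
        self.tags_for = defaultdict(set)   # person -> tags assigned by colleagues
        self.latest_topics = set()         # new, still-informal tags
        self.broader_than = {}             # narrower tag -> broader tag

    def _known(self):
        return set(self.broader_than) | set(self.broader_than.values())

    def tag(self, person, tag):
        """An employee assigns a free-form tag; unseen tags join 'latest topics'."""
        if tag not in self._known():
            self.latest_topics.add(tag)
        self.tags_for[person].add(tag)

    def merge(self, variant, canonical):
        """Users consolidate synonyms and typos into one canonical tag."""
        for tags in self.tags_for.values():
            if variant in tags:
                tags.discard(variant)
                tags.add(canonical)
        self.latest_topics.discard(variant)

    def relate(self, narrower, broader):
        """Users place a tag in a broader/narrower structure, moving it out of
        'latest topics' (the gradual formalisation step)."""
        self.broader_than[narrower] = broader
        self.latest_topics.discard(narrower)

    def find_people(self, tag):
        """Search also matches people tagged with narrower terms."""
        wanted = {tag} | {n for n, b in self.broader_than.items() if b == tag}
        return sorted(p for p, tags in self.tags_for.items() if tags & wanted)
```

For example, after two colleagues are tagged with the synonyms “labour market info” and “LMI”, merging the variants and relating the canonical tag to “careers guidance” lets a search for the broader term find both people.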
Our typology-driven analysis would, we predicted, surface use and design implications for work-based social network(ing) settings, and these are expanded upon in more detail in the discussion section below. Below we use a qualitative analysis of a ‘wider description of the people tagging approach’ and then an associated ‘pilot study’ (both taken from Braun et al., 2012; we refer to these as case study parts 1 and 2) to illustrate aspects of our initial typology. Where we see a mapping between the above typology
and the text of the case study, we will note the relevant link, in the context of descriptions of the case
study below, in italics-brackets (we call this a node-branch). For example, node-branch (3a) refers to
node 3 (i.e. People Tagging Factors) and branch a (i.e. efficiency gains).
Case study part 1: wider description of the people tagging approach (Braun et al., 2012), analysed using the typology
‘Knowing-who’ (3c) is an essential element of efficient Knowledge Maturing processes, e.g. for finding the right person to talk to in order to solve a task-oriented problem (1a). Many approaches, such as self-descriptions in employee yellow pages or top-down competence management, have largely failed to live up to their promises. This failure is often because the information contained in the directories becomes outdated quickly (2d iv), or is not described in a manner relevant to potential users. MATURE uses a lightweight ontology approach based on collaborative tagging to gather information about persons inside and outside the company (if and where relevant): individuals tag each other according to the topics they associate with each person (3d iv) (Figure 2).
Figure 2: People Tagging-Annotating a person (Source: Braun et al., 2012)
FZI call this ‘people tagging’ and claim (Braun et al., 2010) that it can be used to gain a collective review of existing skills and competencies (3a). Knowledge can be shared, and awareness of ‘who knows what?’ strengthened, within the organisational context (implied 3a). This tagging information can then potentially be used to search for persons to talk to about a particular task. Moreover, it can also be used for various other purposes. For instance, FZI claim that human resource development needs sufficient information about the needs and current capabilities of employees (2b i) to make the right decisions about training requirements. In this context, the people tagging approach can provide an indication of:
• What type of expertise is needed?
• How much of the required expertise already exists within the organisation?
• What gaps in specific skills and competencies exist?
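In principle, such indications can be derived by contrasting aggregated tag-assignment data with search-log data. The following sketch is purely illustrative (the function name, record format and threshold are our assumptions; the MATURE tools do not necessarily compute this in the same way):

```python
from collections import Counter

def expertise_gaps(tag_assignments, search_log, threshold=1):
    """Contrast demanded expertise (search queries) with available expertise
    (aggregated tag assignments). Illustrative sketch only.

    tag_assignments: iterable of (person, tag) pairs
    search_log: iterable of searched-for tags
    """
    available = Counter(tag for _, tag in tag_assignments)   # existing expertise
    demanded = Counter(search_log)                           # needed expertise
    return {
        "needed": demanded,
        "existing": available,
        # tags that are searched for but held by fewer than `threshold` people
        "gaps": sorted(t for t in demanded if available[t] < threshold),
    }
```

A tag that appears in searches but is held by nobody (or by fewer people than the chosen threshold) surfaces as a skills gap and hence as a candidate training need.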
Of course, this is a very formal approach to viewing learning needs and, at first sight, it appears to miss many informal learning factors (2). However, in terms of the above questions, Braun et al. (2012) have observed that “each target context of a people tagging
system will require a different ‘configuration’, which depends on cultural aspects (2d iv) as well as the
actual goals that are associated with introducing people tagging. An analysis of the state of the art has
shown that there has been little research on identifying design options in a systematic way so that we
have developed a framework for engineering people tagging systems”. FZI go on to propose a useful conceptual design framework for semantic people tagging systems. The framework is based on results and experiences from field experiments and expert focus groups, together with an analysis in the literature of the design of folksonomy-based systems in general. The framework has five main aspects: (a) involved people, (b) control and semantics of the vocabulary, (c) control of tag assignments, (d) visibility of tag assignments, and (e) search heuristics for flexible search strategies. For example, in terms of who is allowed to tag (a), restrictions can range from anyone being allowed to tag, through a limited group of persons being allowed to define tags, limited either by organisational structures (e.g., team colleagues) or individual relationships (e.g., friends, or approved contacts in a social networking service), to allowing only self-annotation. These options may be combined with each other (3d i).
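The ‘who is allowed to tag’ options can be read as composable permission policies. A minimal sketch, assuming simple organisational and contact lookups (the policy names and function signature are ours, not the framework’s):

```python
def may_tag(tagger, taggee, policy, org, contacts):
    """Evaluate whether `tagger` may tag `taggee` under a set of design options.
    Illustrative policy names (not from the MATURE framework):
      'anyone'    - any employee may tag any other
      'team'      - limited by organisational structure (same unit)
      'contacts'  - limited by individual relationships (approved contacts)
      'self-only' - only self-annotation
    Several options may be combined; permission is the union of the policies."""
    checks = {
        "anyone": lambda: True,
        "team": lambda: org.get(tagger) == org.get(taggee),
        "contacts": lambda: taggee in contacts.get(tagger, set()),
        "self-only": lambda: tagger == taggee,
    }
    return any(checks[p]() for p in policy)
```

Combining options as a union matches the framework’s remark that the restrictions “may be combined with each other”: adding a policy can only grant, never revoke, permission.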
Case study part 2: pilot study set-up and results (Braun et al., 2012), analysed using the typology
The People Tagging tool was introduced to, and formatively evaluated in two phases with, Connexions Northumberland (a careers guidance service in the UK) between October 2009 and July 2010. This is a local organisation that provides services for young people aged 13-19 years (up to age 25 for people with special needs). It helps with decision-making about study, jobs and careers by offering impartial information, advice, guidance and personal support. Connexions Northumberland had, at the time of the study, 60 employees geographically distributed over a whole county. “Because of the geographical distribution, the people's knowledge about the specialties and expertise of their colleagues is very limited [a]cross the offices and finding the right colleague to talk to is difficult” (Braun et al., 2012). Employees in this study were allowed to tag themselves and their colleagues without any further restrictions; tagging of external contacts was not envisioned. Thus it was possible to tag any colleague without the taggee’s explicit opt-in (3d iv). The employees were expected to use their own initiative to develop and modify the vocabulary used for tagging. The system was initially evaluated in a first cycle, over one month, as a pilot at the beginning of the project:
We introduced the system in a hands-on workshop to ten employees. Additional employees have shown and explained the system to their colleagues so that they started using the system as well. In the introductory workshop we presented a short demonstration that was followed by an initial questionnaire on expectations and a user trial session with guided tasks. Then the employees used the system in an unsupervised way. After four weeks, a second workshop was held where we collected the experiences with using the system. (Braun et al., 2012).
Results specifically show that the simplicity of the system was attractive and important (it was perceived as a ‘Facebook for work’) (2c iii); although little Knowledge Maturing could be observed within the limited period of evaluation (i.e. one month), there were insights into the related notions of sharing and building expertise (2d i) and reflective practice (2b). Furthermore, Braun et al. (2012) have reported that participants stated that they “also liked the way it can give them lots more information than they currently have and the basic philosophy of democracy which empowers the individual (2d iv) and where nobody is in charge but has all possibility to contribute (currently they often feel out of control because there is no possibility to easily contribute to a shared knowledge base like e.g. the intranet).” However, various areas of concern were also identified: (i) there should be a ‘use by’ date for tags; it is important that a person tag is time-bound, so that people who have this tag do not feel they are making a completely open-ended commitment (2d iv); (ii) a ‘lazy practice’ issue, where some practitioners may abuse the system: for example, ‘lazy’ colleagues may resist entering details about themselves and may instead tag others with expertise those others may have (to deflect additional queries) (2d iv); and (iii) ‘sharing could increase workload’ (2c ii). On the last issue, Braun et al. (2012) note that “there were concerns about sharing whole people tagging information with other services in general because it could also increase the workload”. The organisation continues to use the system, and FZI are collecting usage logs to study the tagging behaviour more closely.
Typology-driven overview and discussion of the People Tagging study
Briefly, from the above qualitative analysis embedded in the text of our case study, we can see that the typology is readily applied to the MATURE case study. The mapping of the nodes and branches in our typology, as mentioned in the above case study, is summarised in the list in Figure 3. Examining the node-branches of our typology can be seen as one way of assessing the current status of a project or initiative in terms of the factors from our typology that are found to be present in a specific case. The node-branches found present in the MATURE people tagging case are shown in Figure 3:
(1a) (2) (2b) (2b i) (2c ii) (2c iii) (2d i) (2d iv) (2d iv = 5) (3a = 2) (3c) (3d i) (3d iv = 2)
Figure 3: Node-branches from the typology found present in the MATURE case study
Figure 3 shows the result of a qualitative analysis of the text of the case study. In our discussion below, where we claim a node-branch ‘cropped up’ we mean that the text in the case study corresponded to a specific node-branch in the typology. For example, in Figure 3 “(1a)” means this node-branch (i.e. Context Factors-Work process with learning as a by-product) appeared in the text on one occasion. When we say “(2d iv = 5)” we mean this node-branch-sub-branch (i.e. Learning Factors-personal learning networks-aggregated trustworthiness) was identified on 5 occasions, and so on. When we claim below that a node-branch(-sub-branch) ‘cropped up in a peripheral manner’ we mean that it was only mentioned in the case study text in passing, or was not identified at all (a subjective conclusion).
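The tallies reported in notation such as “(2d iv = 5)” amount to counting occurrences of typology codes in the coded text. A minimal sketch (the function name and normalisation are our assumptions, not part of the study’s method):

```python
import re
from collections import Counter

def tally_codes(coded_text):
    """Count typology codes such as '(1a)', '(2d iv)' or '(3 d iv)' embedded
    in an analysed case-study text, normalising internal whitespace so that
    '(3 d iv)' and '(3d iv)' count as the same node-branch."""
    # group 1: node digit, group 2: branch letter, group 3: optional sub-branch
    codes = re.findall(r"\((\d)\s*([a-z])(?:\s+([ivx]+))?\)", coded_text)
    counts = Counter("".join(filter(None, parts)) for parts in codes)
    return dict(sorted(counts.items()))
```

Run over a coded passage, this reproduces the “node-branch = n” counts of Figure 3, with singly occurring codes reported as 1 rather than omitted.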
By examining the above node-branch list (Figure 3), we can make the following observations. Learning-factor node-branches surrounding individual self-efficacy (2a), and node-branches around self-regulation (2b), only cropped up in a peripheral manner or not at all. Node-branches around cognitive load, i.e. 2c i-iii, cropped up twice. First, concern was raised that ‘sharing tagging could increase workload’ (2c ii), i.e. extraneous cognitive load (improper instructional design); indeed, the case study explored this, and the design approach for introducing People Tagging in an organisation explicitly factors in this issue. Second, the case study results appear to show that the simplicity of the system was attractive and important (being perceived as a ‘Facebook for work’), i.e. (2c iii) germane cognitive load (appropriate instructional design motivates); the People Tagging tool design was perceived by participants in the case as having the potential to assist them in their work practice. No explicit mention was made of 2c i, intrinsic cognitive load (the inherent nature of the materials and learners’ prior knowledge); this was probably due to the brief nature of the case study. Node-branches around personal learning networks (2d i-iii) were difficult to detect in the case study; these node-branches would seem to be areas where computer-based scaffolding (described above) could be needed to provide guidance. The node-branch surrounding aggregated trustworthiness (2d iv) cropped up 5 times as an issue (shown by ‘= 5’ in Figure 3), and this is an indicator that is clearly worthy of further exploration. The current state of the art in computer-mediated scaffolding revolves around the design and use of distributed e-learning repositories, content creation and customisation, or social networks. However, the social-network(ed) dimension of scaffolding across multiple contexts in workplace informal and formal learning has largely been ignored. The challenge is to make sure that individuals can make use of the increasing ability to make connections to people. In contexts such as Connexions Northumberland, they can do this in a social networked context using mobile devices where required.
Clearly there are limits to the work presented in this paper, particularly given the exploratory nature of our work in an area that is currently under-researched. A limitation of the analysis of the People Tagging tool case study using the typology is that no independent rater was used to validate the codes as applied to textual descriptions of the case. Such an approach is not, in our view, appropriate at this moment, given the exploratory nature of this research into a complex set of social, individual and technological issues. However, this may become a topic of future research. Instead, we focus on an exploration of an innovation (the People Tagging tool case study) using a typology derived from a critical review of the literature. Also, a good question that could be levelled at the work described above is: what efforts were taken to disprove the typology? Again, as our approach is exploratory and not one of hypothesis testing, this was not really a central question in our research.
We hope to have demonstrated that the typology has strong analytical power, even if we take into account the above limitations. The first author has presented the typology from this paper to the MATURE consortium meeting (January 2012), which agreed that it should be used to provide a micro- and meso-level framework for analysing the final Summative Evaluation reports of MATURE. Thus the typology will be tested over the coming months (March to May 2012) against diverse cases from MATURE, several of which do not focus on people tagging. The opportunity to use our typology in this way will allow us to further test and validate our approach.
Conclusions
The purpose of this paper was to attempt to answer the question: what, if any, potential is there for the use of social media in informal, professional, work-based learning? We conclude that the potential is considerable, although, as we have shown above, there is a need for further work. The analysis of the MATURE People Tagging tool case study has, we claim, proved productive, and we suggest that the typology we have developed has the potential to provide a fruitful tool for further exploration of the field. For example, on the basis of our analysis above, we can see certain gaps, in the sense that some node-branches were absent in the MATURE case study analysis (Figure 3); on this basis we claim that the learning-factor node-branches where future work on computer-based scaffolding could be needed are: individual self-efficacy (2a), self-regulation (2b) and personal learning networks (2d). Thus the purpose of our critical review, typology and qualitative analysis using a case study from the literature has been to provide a frame to assist our understanding of social (mobile) network(ing) services in work-based learning. Rather than provide a definitive map of the field, our typology (or checklist) provides an explanatory, analytical frame, and as such a starting point for the discussion of attendant issues.
Acknowledgement
MATURE is funded by the European Commission under FP7; Cook is a consortium member and work package leader.
References
Aranda-Corral, G., & Borrego-Díaz, J. (2010). Reconciling knowledge in social tagging web services.
Artificial Intelligence Systems 2010 Part II, LNAI 6077 (pp. 383-390).
Azevedo, R., Cromley, J. G., Winters, F. I., Moos, D. C., & Greene, J. A. (2005). Adaptive human
scaffolding facilitates adolescents’ self-regulated learning with hypermedia. Instructional
Science, 33(5-6), 381-412. doi:10.1007/s11251-005-1273-8
Bernstein, M., Tan, D., Smith, G., Czerwinski, M., & Horvitz, E. (2009). Collabio: a game for annotating people within social networks. Proceedings of ACM UIST 2009, October. Retrieved from http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/UIST2009Collabio.pdf
boyd, D. M., & Ellison, N. B. (2008). Social network sites: definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1), 210-230. doi:10.1111/j.1083-6101.2007.00393.x
Braun, S., Kunzmann, C., & Schmidt, A. (2010). People tagging & ontology maturing: towards
collaborative competence management. In D. Randall & P. Salembier (Eds.), CSCW to Web2.0:
European Developments in Collaborative Design Selected Papers from COOP08, Computer
Supported Cooperative Work (pp. 133-154). Springer.
Braun, S., Kunzmann, C., & Schmidt, A. (2012). Semantic people tagging & ontology maturing: an
enterprise social media approach to competence management. International Journal of
Knowledge & Learning.
Crook, C. (2012). The “digital native” in context: tensions associated with importing Web 2.0
practices into the school setting. Oxford Review of Education, (July), 1-18.
doi:10.1080/03054985.2011.577946
Dabbagh, N., & Kitsantas, A. (2011). Personal learning environments, social media, and self-regulated learning: a natural formula for connecting formal and informal learning. The Internet and Higher Education. doi:10.1016/j.iheduc.2011.06.002
Eraut, M. (2000). Non-formal learning and tacit knowledge in professional work. British Journal of
Educational Psychology, 70(1), 113-136.
Eraut, M. (2004). Informal learning in the workplace. Studies in Continuing Education, 26(2), 247-273. doi:10.1080/158037042000225245
Eraut, M. (2007). Learning from other people in the workplace. Oxford Review of Education, 33(4),
403-422.
Eraut, M. (2008). How professionals learn through work. Retrieved from
http://learningtobeprofessional.pbworks.com/f/CHAPTER+A2+MICHAEL+ERAUT.pdf
Evans, K., Guile, D., & Harris, J. (2009). Putting Knowledge to Work. Retrieved from http://bit.ly/wlepktw
Farrell, S., & Lau, T. (2006). Fringe contacts: people-tagging for the enterprise. Computer Science.
RJ100384 (A0606-027). IBM Research Division. Retrieved from http://bit.ly/farrellandlau
Farrell, S., Lau, T., Wilcox, E., & Muller, M. (2007). Socially augmenting employee profiles with
people-tagging. UIST ’07 Proceedings of the 20th annual ACM symposium on user interface
software and technology. Retrieved from
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.66.4959&rep=rep1&type=pdf
Huang, Y.-M., Liu, C.-H., & Tsai, C.-C. (2011). Applying social tagging to manage cognitive load in a Web 2.0 self-learning environment. Interactive Learning Environments. doi:10.1080/10494820.2011.555839
Illeris, K. (2007). How we learn: learning and non-learning in school and beyond. London:
Routledge.
Jessen, J., & Jørgensen, A. (2012). Aggregated trustworthiness: redefining online credibility through
social validation. First Monday, 17(1). Retrieved from
http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/3731/3132
Kraiger, K. (2008). Third-generation instructional models: more about guiding development and
design than selecting training methods. Industrial and Organizational Psychology, 1(4), 501-507.
doi:10.1111/j.1754-9434.2008.00096.x
Livingstone, D. (2006). Informal learning: conceptual distinctions and preliminary findings. New
York: Peter Lang.
Merchant, G. (2011). Unravelling the social network: theory and research. Learning, Media and
Technology. doi:10.1080/17439884.2011.567992
Pachler, N., Bachmair, B., & Cook, J. (2010). Mobile learning: structures, agency, practices.
Springer. Retrieved from http://www.springer.com/education+&+language/learning+&+instruction/book/978-1-4419-0584-0
Pachler, N., & Cook, J. (2009). Mobile, informal and lifelong learning: a UK policy perspective. In K.
Nyíri (Ed.), Mobile communication and the ethics of social networking (pp. 149-157). Vienna:
Passagen Verlag.
Pachler, N., Pimmer, C., & Seipold, J. (Eds.). (2011). Work-based mobile learning: concepts and
cases. Oxford: Peter Lang.
Pol, J., Volman, M., & Beishuizen, J. (2010). Scaffolding in teacher–student interaction: a decade of
research. Educational Psychology Review, 22(3), 271-296. doi:10.1007/s10648-010-9127-6
Rajagopal, K., Brinke, D., van Bruggen, J., & Sloep, P. (2012). Understanding personal learning
networks: their structure, content and the networking skills needed to optimally use them. First
Monday, 17(1). Retrieved from
http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/3559/3131
Sawchuck, P. (2010). Researching workplace learning: an overview and critique. In M. Malloch, L. Cairns, K. Evans, & B. O’Connor (Eds.), The Sage Handbook of Workplace Learning. London: Sage.
Sefton-Green, J. (2004). Literature review in informal learning with technology outside school.
Bristol: Futurelab. Retrieved from
http://archive.futurelab.org.uk/resources/documents/lit_reviews/Informal_Learning_Review.pdf
Smith, A. (2011). Why Americans use social media. Social networking sites are appealing as a way to
maintain contact with close ties and reconnect with old friends. Pew Research Center’s Internet
& American Life Project. Accessed 15 December 2011. Retrieved from
http://www.pewinternet.org/Reports/2011/Why-Americans-Use-Social-Media.aspx
van Damme, C., Hepp, M., & Siorpaes, K. (2007). FolksOntology: an integrated approach for turning
folksonomies into ontologies. Proceedings of the ESWC 2007 Workshop Bridging the Gap
between Semantic Web and Web 2.0, Innsbruck, Austria (pp. 71-84). Retrieved from
http://celinevandamme.com/vandammeheppsiorpaes-folksontology-semnet2007-crc.pdf
12.3 Summary of coverage of Indicators by study
Note that there is a slight mismatch (early in the numbering) between the GMI IDs below and the Knowledge Maturing Phases. "Table 10.1: Summary of study coverage of GMI Indicators by phase" in section 10.1.3 clarifies this.
Phase I. Expressing Ideas (Indicators investigated in this phase: 18 out of 35)
Level 1: Artefacts

  Artefact as such
    I.1.1.1  An artefact has changed its degree (score) of readability
    I.1.1.2  An artefact has changed its degree (score) of structuredness
    I.1.1.3  An artefact has changed its degree (score) of formality
    I.1.1.x  An artefact has changed its degree (score) of …
    I.1.2    An artefact's meta-data has changed its quality characteristics

  Creation context and editing
    I.2.1.1  An artefact has been changed after an individual had learned something
    I.2.1.2  An artefact has been edited by a highly reputable individual
    I.2.1.3  An artefact has been created/edited/co-developed by a diverse group
    I.2.2.1  An artefact has been changed as the result of a process
    I.2.2.2  An artefact was prepared for a meeting
    I.2.2.3  An artefact describing a process has been changed
    I.2.3.1  An artefact was created/refined in a meeting
    I.2.3.2  An artefact was created by integrating parts of other artefacts
    I.2.3.3  An artefact has been the subject of many discussions
    I.2.3.4  An artefact has not been changed for a long period after intensive editing
    I.2.3.5  An artefact is edited after a guidance activity
    I.2.3.6  An artefact is edited intensively within a short period of time
    I.2.3.7  An artefact has been changed to a lesser extent than previous version(s)
    I.2.3.8  An artefact was changed in type

  Usage
    I.3.1    An artefact has achieved a high degree of awareness among others
    I.3.2    An artefact is used widely
    I.3.3    An artefact was selected from a range of artefacts
    I.3.4    An artefact became part of a collection of similar artefacts
    I.3.5    An artefact was made accessible to a different group of individuals
    I.3.6    An artefact is referred to by another artefact
    I.3.7    An artefact was presented to an influential group of individuals
    I.3.8    An artefact has been accessed by a different group of individuals
    I.3.9    An artefact has been used by an individual
    I.3.10   An artefact was changed

  Rating & legitimation
    I.4.1    An artefact has been accepted into a restricted domain
    I.4.2    An artefact has been recommended or approved by management
    I.4.3    An artefact has become part of a guideline or has become standard
    I.4.4    An artefact has been rated high
    I.4.5    An artefact has been certified according to an external standard
    I.4.6    An artefact has been assessed by an individual

  Studies examining indicators in this phase: Northumberland/igen (Research Q1 and Q2), Connexions Kent, Structuralia, FNHW (RQ2 and RQ3)

Phase II. Appropriating Ideas (Indicators investigated in this phase: 4 out of 14)
Level 1: Individual capabilities

  Individual activities
    II.1.1   An individual has acquired a qualification or attended a training course
    II.1.2   An individual has contributed to a project
    II.1.3   An individual has contributed to a discussion
    II.1.4   An individual is approached by others for help and advice
    II.1.5   An individual has significant professional experience
    II.1.6   An individual is an author of many documents

  Individual - organization
    II.2.1   An individual changed its role or responsibility
    II.2.2   An individual has been a member of the organisation for a significant period of time
    II.2.3   An individual has been involved in a process a number of times
    II.2.4   An individual has been involved in a process for a significant period of time
    II.2.5   An individual has been the owner of a process for a significant period of time

  Individual - group
    II.3.1   An individual has a central role within a social network
    II.3.2   An individual changed its degree of cross-topic participation

  Rating, assessment
    II.4.1   An individual has been rated with respect to expertise

  Studies examining indicators in this phase: Connexions Kent, Structuralia, Northumberland/igen

Phase III. Distributing in communities (Indicators investigated in this phase: 0 out of 4)
Level 1: Knowledge/topic

  Activities
    III.1.1  Knowledge has been searched for
    III.1.2  Knowledge has been associated with an artefact
    III.1.3  Knowledge has been associated with an individual
    III.1.4  Knowledge has been described/documented (or the documentation has improved) in an artefact

Phase IV. Ad-hoc training & Piloting (Indicators investigated in this phase: 2 out of 17)
Level 1: Sociofacts

  Process/task (knowledge)
    IV.1.2   A process has been successfully undertaken a number of times
    IV.1.3   A process was certified or standardised according to external standards
    IV.1.4   A process was internally agreed or standardised
    IV.1.5   A process was changed by adding or deleting steps
    IV.1.6   A process was documented
    IV.1.7   A process was changed according to the number of cycles (loops)
    IV.1.8   A process was changed according to the number of decisions
    IV.1.9   A process was changed according to the number of participants

  Quality of social network
    IV.2.1   An individual changed its degree of networkedness
    IV.2.2   An individual changed its degree of participation
    IV.2.4   An individual changed its intensity of social action
    IV.2.5   A group of individuals changed their degree of external involvement
    IV.2.6   A group of individuals changed their degree of heterogeneity

  Agreement
    IV.3.1   A group of individuals created a consensus artefact

  Collective capability
    IV.4.1   A group of individuals has established a reflective practice
    IV.4.2   A group of individuals changed their (systematic) approach to organizational development
    IV.4.3   A group of individuals meets certain quality criteria for collaboration

  Studies examining indicators in this phase: FNHW (RQ3), Northumberland/igen

Phase V. Formal training, Institutionalizing & Standardizing (Indicators investigated in this phase: 0 out of 5)
Level 1: Impact/performance

  Performance
    V.1.1    The performance of a process has improved
    V.1.2    The performance of a group of individuals has improved
    V.1.3    A process was improved with respect to time, cost or quality

  Quality
    V.2.1    The output of a process (product/service) has improved with respect to quality

  Impact
    V.3.1    The customer satisfaction has improved

12.4 Data from FNHW
12.4.1 Indicator alignment results
Mappings between SMI and GMI
• ID I.3.4
• Level 1 Artefacts
• Level 2 Usage
• General Maturing Indicator (GMI) An artefact became part of a collection of similar artefacts
• Level of Justification validated by RepStudy
• SMI D4.II.2 Process-related knowledge has reached phase II when a personal task attachment or subtask is being added to an abstractor in a public task pattern
o This indicator is used for judging the maturity of resources that are recommended to users in the KISSmir prototype
• Description of mapping: An abstractor in a task pattern can be thought of as a service that provides access to resources of a certain kind. In most cases, abstractors are static collections of (references to) resources. This means that the specific maturing indicator D4.II.2 is a direct concretisation of the GMI I.3.4:
o A personal task attachment or subtask is an instance of an artefact
o An abstractor in a public task pattern is an instance of a collection of similar artefacts
o The fact that an artefact is being added to an abstractor implies that it becomes part of that abstractor
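The mapping entries in this section all follow the same fixed schema (ID, levels, GMI text, level of justification, aligned SMIs). Purely as an illustration of that schema and of the lookups it supports — not part of the MATURE or KISSmir tooling — the entries could be represented as records like this:

```python
from dataclasses import dataclass, field

@dataclass
class IndicatorMapping:
    """One SMI -> GMI alignment entry, mirroring the fields used in this section."""
    gmi_id: str          # e.g. "I.3.4"
    level1: str          # e.g. "Artefacts"
    level2: str          # e.g. "Usage"
    gmi: str             # general maturing indicator text
    justification: str   # e.g. "validated by RepStudy"
    smi_ids: list = field(default_factory=list)  # SMIs that concretise this GMI

# Two entries transcribed from the mappings above
mappings = [
    IndicatorMapping("I.3.4", "Artefacts", "Usage",
                     "An artefact became part of a collection of similar artefacts",
                     "validated by RepStudy", ["D4.II.2"]),
    IndicatorMapping("I.3.6", "Artefacts", "Usage",
                     "An artefact is referred to by another artefact",
                     "validated by RepStudy", ["D4.II.2"]),
]

def smis_for_gmi(gmi_id):
    """Return all specific maturing indicators aligned to a given GMI."""
    return sorted({smi for m in mappings if m.gmi_id == gmi_id for smi in m.smi_ids})

def gmis_for_smi(smi_id):
    """Reverse lookup: which GMIs does a given SMI concretise?"""
    return sorted(m.gmi_id for m in mappings if smi_id in m.smi_ids)
```

The reverse lookup makes visible a point used repeatedly below: a single SMI (here D4.II.2) can concretise several GMIs at once.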
• ID I.3.6
• Level 1 Artefacts
• Level 2 Usage
• General Maturing Indicator (GMI) An artefact is referred to by another artefact
• Level of Justification validated by RepStudy
• SMI D4.II.2 Process-related knowledge has reached phase II when a personal task attachment or subtask is being added to an abstractor in a public task pattern
o This indicator is used for judging the maturity of resources that are recommended to users in the KISSmir prototype
• Description of mapping: An abstractor in a task pattern is a special kind of artefact that points to other artefacts, namely useful resources. This means that the SMI D4.II.2 is a direct concretisation of the GMI I.3.6:
o A personal task attachment or subtask is an instance of an artefact
o An abstractor in a public task pattern is an instance of another artefact
o The fact that an artefact is being added to an abstractor implies that the artefact is then referred to by the abstractor

• ID I.3.1
• Level 1 Artefacts
• Level 2 Usage
• General Maturing Indicator (GMI) An artefact has achieved a high degree of awareness among others
• Level of Justification individual proposal (FZI)
• SMI D4.II.5 Process-related knowledge increases its maturity when a task pattern and its abstractors and/or problem/solution statements are more widely used by everyone
o This indicator is used for judging the maturity of resources that are recommended to users in the KISSmir prototype
• Description of mapping: A task pattern and its abstractors are created and/or “filled” with useful resources by end users. Such end-user contributions to a task pattern can be taken up by other users by selecting them from the task pattern and adding them to a user’s personal task. SMI D4.II.5 is a direct concretisation of GMI I.3.1 as follows:
o A task pattern and its abstractors is an instance of an artefact (in fact, a task pattern is an artefact, each abstractor is an artefact and each resource that the abstractor references is an artefact; in the mapping, we consider the whole task pattern)
o Being more widely used by everyone implies having achieved a high degree of awareness among others
• ID I.3.2
• Level 1 Artefacts
• Level 2 Usage
• General Maturing Indicator (GMI) An artefact is used widely
• Level of Justification individual proposal (FZI)
• SMIs
o D4.II.5 Process-related knowledge increases its maturity when a task pattern and its abstractors and/or problem/solution statements are more widely used by everyone
o D4.II.1 Process-related knowledge has reached this phase when a public task pattern has been used by several users
o These indicators are used for judging the maturity of resources that are recommended to users in the KISSmir prototype
• Description of mapping: A task pattern and its abstractors are created and/or “filled” with useful resources by end users. Such end-user contributions to a task pattern can be taken up by other users by selecting them from the task pattern and adding them to a user’s personal task.
o SMI D4.II.5 is a direct concretisation of GMI I.3.2 by much the same argumentation as it is a concretisation of GMI I.3.1 (it is even easier to see, since “is used widely” maps directly to “more widely used”)
o SMI D4.II.1 is also a concretisation of GMI I.3.2 because
 A public task pattern is an instance of an artefact
 Having been used by several users implies that the artefact is used (at least “rather”) widely

• ID I.1.1.x
• Level 1 Artefacts
• Level 2 Artefact as such
• Level 3 Artefact fulfils certain quality characteristics
• General Maturing Indicator (GMI) An artefact has changed its degree (score) of … (readability, structuredness or formality)
• Level of Justification individual proposal (FZI)
• SMI D4.III.1 Process-related knowledge has reached this phase when task patterns / process models have been approved internally after consolidation (insufficiently used resources have been removed from a task pattern, abstractors of a task pattern have been renamed and polished or removed, similar subtask abstractors have been merged, problem or solution statements have been cleaned up / merged, and quality has been checked)
o This indicator is used for judging the validity of changes to process-related artefacts (task patterns / process models) that are based on suggestions from a “process mining” component of the KISSmir prototype
• Description of mapping: Task patterns can be edited by end users and thus grow in an uncontrolled way. This may lead to a low degree of structure and hence readability. The SMI D4.III.1 is a concretisation of the GMI I.1.1.x (I.1.1.1, I.1.1.2 and I.1.1.3) as follows:
o Task patterns (and process models) are instances of artefacts
o The consolidation described in the SMI (cleaning up, merging, polishing, renaming) implies a changed (in fact, increased) degree of readability, structuredness and formality
• ID IV.1.4
• Level 1 Sociofacts
• Level 2 Process/task (knowledge)
• General Maturing Indicator (GMI) A process was internally agreed or standardised
• Level of Justification validated by RepStudy
• SMI D4.III.1 Process-related knowledge has reached this phase when task patterns / process models have been approved internally after consolidation (insufficiently used resources have been removed from a task pattern, abstractors of a task pattern have been renamed and polished or removed, similar subtask abstractors have been merged, problem or solution statements have been cleaned up / merged, and quality has been checked)
o This indicator is used for judging the validity of changes to process-related artefacts (task patterns / process models) that are based on suggestions from a “process mining” component of the KISSmir prototype
• Description of mapping: SMI D4.III.1 is a concretisation of GMI IV.1.4 (and others) because
o Task patterns / process models are representations of a process
o The fact that they have been approved internally after consolidation implies that they have been internally agreed

• ID I.4.2
• Level 1 Artefacts
• Level 2 Rating & legitimation
• General Maturing Indicator (GMI) An artefact has been recommended or approved by management
• Level of Justification individual proposal (FZI)
• SMI D4.IV.1 Process-related knowledge has reached this phase when, after an analysis of usage of a task pattern, the underlying process model has been adapted, e.g. a frequently used subtask abstractor was added as a new activity to the model
o This indicator is used for judging the validity of changes to process-related artefacts (task patterns / process models) that are based on suggestions from a “process mining” component of the KISSmir prototype
• Description of mapping: The analysis of usage of a task pattern (as well as analysis of task execution patterns, not mentioned here) may lead to the conclusion that e.g. certain subtasks are executed frequently enough to be included in the formal process model (the process skeleton) that underlies the KISSmir environment. The inclusion of a subtask into the process model can be seen as approval or recommendation (by management). That is, under the assumption that management is involved in any updates to the process model, we can see the SMI D4.IV.1 as a concretisation of the GMI I.4.2 as follows:
o A frequently used subtask abstractor is an instance of an artefact
o The process of adapting the underlying process model (e.g. adding a new activity) is an instance of recommendation or approval by management
• ID I.4.3
• Level 1 Artefacts
• Level 2 Rating & legitimation
• General Maturing Indicator (GMI) An artefact has become part of a guideline or has become standard
• Level of Justification validated by RepStudy
• SMI D4.IV.1 Process-related knowledge has reached this phase when, after an analysis of usage of a task pattern, the underlying process model has been adapted, e.g. a frequently used subtask abstractor was added as a new activity to the model
o This indicator is used for judging the validity of changes to process-related artefacts (task patterns / process models) that are based on suggestions from a “process mining” component of the KISSmir prototype
• Description of mapping: The analysis of usage of a task pattern (as well as analysis of task execution patterns, not mentioned here) may lead to the conclusion that e.g. certain subtasks are executed frequently enough to be included in the formal process model (the process skeleton) that underlies the KISSmir environment. If we view a process model as a guideline (in the KISSmir case, where the process is executed with the support of a workflow engine, it exerts rather strong guidance), then the SMI D4.IV.1 is a concretisation of the GMI I.4.3 as follows:
o The task pattern and its resources/subtasks is an instance of an artefact
o A process model is an instance of a guideline
o Adding (a feature of) a task pattern to a process model thus means that it becomes part of a guideline
12.4.2 Evaluation data
Case descriptions
1. Case A:
o Name: Peter Nicolasia
o Nationality: South African
o Degree: Bachelor of Commerce in Business Management
o Final degree university: University of South Africa (UNISA)
o Additional information: -
2. Case B:
o Name: Susan Fisher
o Nationality: US
o Degree: Bachelor of Business Administration (BBA) in “Management”
o Final degree university: Davenport University, USA
o Additional information: student has been working in Switzerland for 4 years
3. Case C:
o Name: Urs Frenacher
o Nationality: Swiss
o Degree: Bachelor of Law
o Final degree university: University of Bern
o Additional information: -
4. Case D:
o Name: Andrea Andanti
o Nationality: Swiss
o Degree: none
o Final degree university: FHNW
o Additional information: student is studying BÖK (PT) in Brugg, plans to finish Bachelor degree in September 2012
5. Case E:
o Name: Rayanta Nara
o Nationality: Eritrean
o Degree: BSc in Business Administration
o Final degree university: Wirtschaftsuniversität Wien
o Additional information: student is an asylum seeker; asks if it is possible for her not to pay the semester fee
6. Case F:
o Name: Matthieu Rambaud
o Nationality: Canadian and French
o Degree: Bachelor in Business Administration
o Final degree university: Université du Québec à Montréal
o Additional information: -
Walkthrough table (columns: Activity | Problem (object) | Expected solution | Resources to be selected / used | Indicators | Observations, remarks)

• Check approval of the university
o Problem: Degree accepted? (“Three-year vs four-year bachelors”)
o Expected solution: 3-year bachelor of commerce (w/o honours) cannot be accepted (see Anabin about South African bachelor of commerce degrees)
o Resources: Anabin
• Check availability of matriculation number
• Check completeness of certificates
• Determine study fee
• Accept application formally
• Reject application
o Resources: Rejection letter template
o Indicators: In how many tasks has the document been used? Was the document added to the task pattern by a reputable/trusted person? How well do the document contents match the current task?
Knowledge flow (per case)
o How easy do you find it to apply the provided knowledge to new cases? Why?
o Was it helpful?
o Which information was useful?
o Which information was missing?
o How could the support / information be improved?
Usability (per case):
o How easy was it to find information?
o What could be improved to make contribution easier?
o What could be improved to make navigation/consumption easier?
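The indicator questions attached to resources in the walkthrough table (usage count, contributor trust, content match) effectively describe a relevance score for recommended resources. The sketch below shows one way such a score could be combined; the weights and the saturation point are purely illustrative assumptions, not the ranking actually used by KISSmir:

```python
def resource_score(usage_count, trusted_contributor, content_match):
    """Combine the three walkthrough indicators into one score in [0, 1].

    usage_count: in how many tasks the document has been used
    trusted_contributor: was it added by a reputable/trusted person?
    content_match: how well the contents match the current task (0..1)

    The weights (0.4 / 0.2 / 0.4) and the saturation at 10 uses are
    illustrative assumptions, not MATURE/KISSmir values.
    """
    usage = min(usage_count, 10) / 10           # saturate usage at 10 tasks
    trust = 1.0 if trusted_contributor else 0.0
    return 0.4 * usage + 0.2 * trust + 0.4 * content_match

# A well-matching, trusted, frequently used template ranks above a
# rarely used, untrusted one with the same content match.
assert resource_score(10, True, 0.9) > resource_score(1, False, 0.9)
```

Saturating the usage count keeps a single heavily reused template from drowning out the content-match signal, which the walkthrough observations suggest is the indicator participants actually relied on.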
12.1.1.1 Post-walkthrough questionnaire
12.1.1.2 Complete walkthrough results
Case A

• Check approval of the university
o Problem: Degree accepted? (“Three-year vs four-year bachelors”)
o Expected solution: 3-year bachelor of commerce (w/o honours) cannot be accepted (see Anabin about South African bachelor of commerce degrees)
o Resources: Anabin
o Observations: Participant went directly to Anabin to check approval. Did not recognize the problem and would have accepted the student (the problem description was not sufficient to recognize the relation to the case)
• Check availability of matriculation number
• Check completeness of certificates
• Determine study fee
• Accept application formally
• Reject application
o Resources: Rejection letter template
o Indicators: In how many tasks has the document been used? Was the document added to the task pattern by a reputable/trusted person? How well do the document contents match the current task?

Knowledge flow (per case)
o How easy do you find it to apply the provided knowledge to new cases? Why?
o Was it helpful?
o Which information was useful?
o Which information was missing?
o How could the support / information be improved?
 Indicators were interesting for general information, not so much for resource selection
 Participant found it hard to imagine a situation (in her work context) with more than three templates/resources to choose from
- Missing information (content) about BSc degrees in the task pattern
- Simpler to understand than Evento, easy to remember where information/functionality could be found
- Information should be easy to find the next time (easy to remember where to look)
- For the functionality seen so far, doing it the ordinary way (paper-based) would be sufficient -> like this, it consumes more time than it saves
Case B

• Check approval of the university
o Problem: University not in Anabin list
o Expected solution: a) call CRUS; b) ask student for proof of accreditation
o Observations: Problem was found immediately; the solution (“ask student for proof…”) was considered useful, so the problem could be solved
• Check availability of matriculation number
• Check completeness of certificates
• Determine study fee
o Problem: Workflow proposes 7500 study fee. How to tell the system that 700 would be correct? (“Change fee” (or something like that…))
o Expected solution: Change the description of the task
o Indicators: How often has a problem been used? How well does the problem description match the current case?
o Observations: The problem was not found because its name (“Wrong counted semester fee”) did not seem to match the situation.  None of the indicators would help, but a proper naming of the problem would
• Accept application formally
o Resources: Acceptance letter template
o Observations: Participant always sends a letter to students that confirms receipt of their application documents; then each case is double-checked with the dean before the acceptance letter is sent
• Reject application

Knowledge flow (per case)
o How easy do you find it to apply the provided knowledge to new cases? Why?
o Was it helpful?
o Which information was useful? Which information was missing?
o How could the support / information be improved?

Remarks:
- Remark about process model: it should start with “check approval” because this is typically most critical and, if the university is not approved, quickly leads to closing the case (with a rejection)
- Participant states that she has created her own form for capturing criteria that need to be discussed in the interview and that later allow her to trace and justify decisions (e.g. about semester fees)
- Participant could well imagine using the quick note functionality of KISSmir for capturing such information
Case C

• Check approval of the university
• Check availability of matriculation number
• Check completeness of certificates
o Problem: Degree in different area (“Student has degree in a complete different area”)
o Expected solution: Accept, but needs to do pre-masters (see acceptance letter templates)
o Indicators (selecting the problem): How often has a problem been used? How well does the problem description match the current case?
o Indicators (selecting the solution): How many times has a solution been picked out of all available ones for a given problem? How well does the solution match the current case context? How many times has a solution been changed?
o Observations: Problem/solution were found.  Usefulness of the indicators depends on whether there is useful info about the case against which the problem descriptions can be matched.  None was found useful because the solution was clear anyway.  Participant would (in real life) not have looked at the problem, since each such case is discussed with the dean anyway
• Determine study fee
• Accept application formally
o Resources: Acceptance letter template (pre-masters)
o Indicators: In how many tasks has the document been used? Was the document added to the task pattern by a reputable/trusted person? How well do the document contents match the current task?
o Observations: Another name for the resource (e.g. “Acceptance letter pre-masters”) would be very helpful. Usefulness of the chosen indicator will again depend on what is known about the context
• Reject application

Knowledge flow (per case)
o How easy do you find it to apply the provided knowledge to new cases? Why?
o Was it helpful?
o Which information was useful? Which information was missing?
o How could the support / information be improved?

Remarks:
- Including more fields in the initial entry form (e.g. adding the field of study, thus allowing better recommendations) was considered a bad idea because of the high effort of entering it
- The participant repeatedly used phrases like “I have my own…”, “I’m doing this a bit differently…”, indicating that she will follow her own way in many situations and views the sharing of resources/experience between secretaries critically
Case D

• Check approval of the university
• Check availability of matriculation number
• Check completeness of certificates
o Problem: Bachelor degree missing (“Bachelor degree is still missing because the student is still studying”)
o Expected solution: Can hand in later
o Indicators: How often has a problem been used? How well does the problem description match the current case?
o Observations: In reality, participant would not have looked at the problem (clear to her anyway). Indicators would not be needed here because the problem is easy to find by its name
• Determine study fee
• Accept application formally
o Resources: Acceptance letter template (with conditions)
o Indicators: In how many tasks has the document been used? Was the document added to the task pattern by a reputable person? How well do the document contents match the current task?
o Observations: Right letter template was recognized, but not immediately.  Indicator was expected to work well if the problem was captured (i.e. assigned to the task) previously
• Reject application

Knowledge flow (per case)
o How easy do you find it to apply the provided knowledge to new cases? Why?
o Was it helpful?
o Which information was useful?
o Which information was missing?
o How could the support / information be improved?
Usability (per case):
o How easy was it to find information?
o What could be improved to make contribution easier?
o What could be improved to make navigation/consumption easier?

Remarks:
- Process model: here, participant would start with the activity “check completeness of certificates” because it is obviously problematic in this case
- Participant would like to get a reminder (via a task or email) about having to ask the student for the certificate later
Case E

• Check approval of the university
• Check availability of matriculation number
• Check completeness of certificates
• Determine study fee
o Problems: Asylum seeker expects to get all costs paid. How to handle scholarship requests?
o Expected solution: Fee cannot be paid, scholarship request: ask for proof of financial situation, then forward to Markus or Ruedi
o Indicators (selecting the problem): How often has a problem been used? How well does the problem description match the current case?
o Indicators (selecting the solution): How many times has a solution been picked out of all available ones for a given problem? How well does the solution match the current case context? How many times has a solution been changed?
o Observations: Participant would pick both solutions provided (first ask dean for special cases, then ask student for proof of financial situation). Contribution of participant to the task pattern (on her own initiative): problem “When to accept asylum seekers” with solution “depends on status – status ‘N’ should not be accepted”
• Accept application formally
• Reject application

Knowledge flow (per case)
o How easy do you find it to apply the provided knowledge to new cases? Why?
o Was it helpful?
o Which information was useful?
o Which information was missing?
o How could the support / information be improved?
Usability (per case):
o How easy was it to find information?
o What could be improved to make contribution easier?
o What could be improved to make navigation/consumption easier?

Remarks:
- Tool is easy to handle
- It is useful, especially if integrated with Evento; if not, it might be too much overhead in many cases
- Collecting problems/experience in this way is a good idea, seems useful
- Participant said she was motivated to invest time into documenting problems, but also stated that a minimal effort would be crucial for this to happen
- Some information has strange labels and/or names, which makes it hard to select the right one at the beginning; however, it was easy to get used to these labels later
Case F

• Check approval of the university
• Check availability of matriculation number
• Check completeness of certificates
• Determine study fee
o Problem: More than one nationality (“Student have more then one nationalities”)
o Expected solution: Use French nationality to determine the fee (i.e. 700 CHF)
o Indicators: How often has a problem been used? How well does the problem description match the current case?
o Observations: Problem was easily found; the problem was known to the participant and the solution was clear to her. An interesting additional problem (that appeared recently) was mentioned by both the participant and the dean: universities with a certain ranking (H+/-) and how to deal with students coming from these
• Accept application formally
• Reject application

Knowledge flow (per case)
o How easy do you find it to apply the provided knowledge to new cases? Why?
o Was it helpful?
o Which information was useful?
o Which information was missing?
o How could the support / information be improved?
Usability (per case):
o How easy was it to find information?
o What could be improved to make contribution easier?
o What could be improved to make navigation/consumption easier?
12.1.1.3 Complete interview results
Participant 1
a) in order not to forget anything
d) additional activities: check grades, check level of English, check work experience (in general, check acceptance criteria) => remark from dean: FHNW cannot give a degree to students who do not have work experience (students can be accepted without, but need to acquire the experience during the studies)
c) hard to get an overview of the resources in a folder, esp. regarding the activity to which the resource belongs
d) should be transparent
e) specifics of the study programme make it necessary to adapt e.g. the letters, also to keep flexibility for special cases, but it is hard to tell because of lack of experience
c) depends on effort: if it is much effort to capture problems in the tool, then a Word document would be preferable
d) use formal guidelines for justification
Participant 2
a) emails and tasks stay as a reminder for things to do that otherwise would have to be kept in mind (certain chance of forgetting something because several cases are handled at the same time)
c) appreciates the structure because of the parallel handling of cases (ticking off tasks helps to keep track of finished/unfinished work)
d) The most important steps are covered; there may be others, but I cannot currently think of one
f) see d)
a) it is (more) difficult to find resources in the task pattern
b) makes work more efficient because resources are found more easily
c) offered resources should change according to the situation (see b))
d) If I like something for myself, I would also like to recommend it to others
e) Templates (e.g.) are currently in Evento (and being used), nothing more than "template offers" is needed
a) saves time in finding the right one
b) I'm always doing just one task, don't need to see the other problems
c) see b)
d) will help to work efficiently, to handle cases more consistently
e) would like to share with others
f) some cases need to be discussed face-to-face; but, if a solution is found, it should be published in the task pattern
12.5 Data from Connexions Northumberland
12.5.1 Indicator alignment results
• ID I.1.1.2
• Level 1 Artefacts
• Level 2 Artefact characteristics
• Level 3 Artefact quality characteristics
• General Maturing Indicator (GMI) An artefact has changed its degree (score) of structuredness
• Level of Justification Validated by Wikipedia study
• Specific Maturing Indicator (SMI)
o Topic tags are further developed towards concepts, e.g. by adding synonyms or a description
• Description of mapping:
o Topic tags are specific artefacts created by the users of the system. By adding synonyms and descriptions, these artefacts get more structured, from simple keywords to more complex semantic descriptions.

• ID I.1.1.3
• Level 1 Artefacts
• Level 2 Artefact characteristics
• Level 3 Artefact quality characteristics
• General Maturing Indicator (GMI) An artefact has changed its degree (score) of formality
• Level of Justification Individual proposal by FZI
• Specific Maturing Indicator (SMI)
o A person is (several times) tagged with a certain concept
o A topic tag moved from the "prototypical concept" category to a specific place in the ontology
• Description of mapping:
o Based on our research, we have conceptualized the maturing process of ontology artefacts in the ontology maturing process model (see D1.1). Moving from emergent tags to heavy-weight ontologies, the level of formality (as defined by the expressiveness of the underlying formalism) increases. At the beginning, just syntactical strings are used as tags; these evolve into shared tags (reused by others) and further into concepts that are part of a taxonomy defined by broader and narrower relationships.
• Questions in questionnaire:
o Questions 1 – 12: To what extent do you agree with the statement “profile A/B/C/D represents this person accurately”?
o Questions 1 – 12: If you rated profile A/B/C/D as “slightly agree”, “agree” or “fully agree” for accuracy, please explain why in the box below.
o Question 16: If a tag is moved from “latest topics” to another section of the Topic List, then there is a better understanding of the tag.
o Question 17: If a tag is moved from “latest topic” to another section of the Topic List, then I can retrieve the tag more easily.
o Question 18: When a tag is moved from “latest topic” to another section of the Topic List, then the tag is less ambiguous.
• ID I.2.1.3
• Level 1 Artefacts
• Level 2 Creation context and editing
• Level 3 Creator
• General Maturing Indicator (GMI) An artefact has been created/edited/co-developed by a diverse group
• Level of Justification Individual proposal by FZI
• Specific Maturing Indicator (SMI)
o A person is tagged by many different users
• Description of mapping:
o The specific maturing indicator refers to person profiles as artefacts, which are co-developed through tag assignments by multiple users.
• Questions in questionnaire:
o Question 13: I consider a person profile more accurate if many different people have tagged it.
o Question 14: I consider a person profile more complete if many different people have tagged it.
o Question 15: I consider a person profile more useful if many different people have tagged it.

• ID I.2.3.4
• Level 1 Artefacts
• Level 2 Creation context and editing
• Level 3 Creation process
• General Maturing Indicator (GMI) An artefact has not been changed for a long period after intensive editing
• Level of Justification Validated by RepStudy
• Specific Maturing Indicator (SMI)
o An ontology element has not been changed for a long time after a period of intensive editing
o A person profile is often modified and then stable
• Description of mapping:
o An ontology element is a special type of artefact.
o A person profile is a special type of artefact. If restricted to a certain time period, “often modified” corresponds to “intensive editing”. “Stable” can be seen as an undefined “long period” of time, so that the specific maturing indicator is a specialization.
• ID I.2.3.6
• Level 1 Artefacts
• Level 2 Creation context and editing
• Level 3 Creation process
• General Maturing Indicator (GMI) An artefact is edited intensively within a short period of time
• Level of Justification Individual proposal by FZI, used in D3
• Specific Maturing Indicator (SMI)
o The whole ontology is edited intensively in a short period of time, i.e. gardening activity takes place
• Description of mapping:
o The ontology is a specialization of “artefact”.
• Questions in questionnaire:
o Question 21: Have you done any gardening/editing activities?
 If yes, please give some examples
 If yes, please state in which situation you did gardening/editing:
• planned session in a group
• unplanned session in a group
• planned session on your own
• unplanned session on your own
• when you saw obvious errors or disorder
• when you followed recommendations
• other (please write down)
 If no, please give a short explanation as to why you have not done any gardening/editing
o Question 22: What triggers you most to use SOBOLEO for tagging? Please give an example and a reason
• ID I.3.3
• Level 1 Artefacts
• Level 2 Usage
• General Maturing Indicator (GMI) An artefact was selected from a range of artefacts
• Level of Justification Validated by RepStudy
• Specific Maturing Indicator (SMI)
o A topic tag is reused for annotation by the "inventor" of the topic tag
• Description of mapping:
o Existing topic tags in the people tagging Instantiation are reused by selecting an existing tag that was suggested by the system based on what the user has started to type. Therefore, the reuse corresponds to a “selection from a range of artefacts”.
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
ID I.3.5
Level 1 Artefacts
Level 2 Usage
General Maturing Indicator (GMI) An artefact was made accessible to a different group of
individuals
Level of Justification Validated by RepStudy
Specific Maturing Indicator (SMI)
o A person is annotated with additional tags at a later stage by the same user
Description of mapping:
o The special maturing indicator can be seen as a specialization of the GMI under
the assumption that additional tags are assigned with sharing in mind. Through
assigning more tags, it becomes (more) accessible to others, i.e., the group of
individuals is enlarged. This is not true in general as social tagging systems can be
appropriated in different ways, i.e., they can also be used in a more
individualistic way.
ID I.3.6
Level 1 Artefacts
Level 2 Usage
General Maturing Indicator (GMI) An artefact is referred to by another artefact
Level of Justification Validated by RepStudy
Specific Maturing Indicator (SMI)
o A person is annotated with additional tags at a later stage by the same user
Description of mapping:
o This specific maturing indicator corresponds to person profiles as specializations of artefact. These refer to tags, which are another specialization of artefact.
ID I.3.9
Level 1 Artefacts
Level 2 Usage
General Maturing Indicator (GMI) An artefact has been used by an individual
Level of Justification Validated by RepStudy
Specific Maturing Indicator (SMI)
o A person is annotated with additional tags at a later stage by the same user
o A topic tag is reused for annotation by the "inventor" of the topic tag
o Topic tags are reused in the community
Description of mapping:
o Adding tags to an existing profile corresponds to a reuse of the person profile.
o A reuse of a tag is a reuse of an artefact.
ID I.3.10
Level 1 Artefacts
Level 2 Usage
General Maturing Indicator (GMI) An artefact was changed
Level of Justification Validated by APStudy
Specific Maturing Indicator (SMI)
o A person is annotated with additional tags at a later stage by the same user
o Topic tags are further developed towards concepts; e.g. adding synonyms or
description
o The whole ontology is edited intensively in a short period of time, i.e. gardening
activity takes place
Description of mapping:
o An annotation with tags is a change to a person profile (which is an artefact).
o Adding synonyms to tags is a change to tags (which is an artefact).
o Editing is a change to an ontology (which is an artefact).
ID I.4.6
Level 1 Artefacts
Level 2 Rating & legitimation
General Maturing Indicator (GMI) An artefact has been assessed by an individual
Level of Justification Validated by RepStudy
Specific Maturing Indicator (SMI)
o Topic tags are reused in the community
Description of mapping:
o Reusing an artefact that has been contributed by others implies that it has been
assessed as useful by the tagger, i.e., the specific maturing indicator is a
specialization of the GMI.
ID II.4.1
Level 1 Individual capabilities
Level 2 Rating, assessment
General Maturing Indicator (GMI) An individual has been rated with respect to expertise
Level of Justification Individual proposal by FZI, used in D3
Specific Maturing Indicator (SMI)
o A person is annotated with additional tags at a later stage by the same user
Description of mapping:
o This mapping depends on the appropriation of a tagging system. While tags can be
used for different purposes and thus can have different semantics, for the case of
people tagging and based on the results of the formative evaluation, it is safe to
assume that if we restrict the indicator to tags denoting topics, tagging is a (weak)
form of rating a person’s expertise.
Questions in questionnaire:
o Questions 7 – 12: To what extent do you agree with the statement “profile C/D represents this person accurately”?
o Questions 7 – 12: If you rated profile C/D as “slightly agree”, “agree” or “fully agree” for accuracy, please explain why in the box below.
o Question 13: I consider a person profile more accurate if many different people have tagged it.
o Question 14: I consider a person profile more complete if many different people have tagged it.
o Question 15: I consider a person profile more useful if many different people have tagged it.
ID IV.2.1
Level 1 Sociofacts
Level 2 Quality of social network
General Maturing Indicator (GMI) An individual changed its degree of networkedness
Level of Justification Individual proposal by FZI
Specific Maturing Indicator (SMI)
o An individual changed its degree of networkedness
Description of mapping:
o Identical
Questions in questionnaire:
o Question 19: Do you think using SOBOLEO has helped you to increase the number of
colleagues in your professional network?
o Question 20: Have you built up more relevant contacts for your work practice by
using SOBOLEO, the people tagging tool?
12.5.2 Evaluation data
Using SOBOLEO leaflet
Using SOBOLEO
Over 100 staff have now been trained to use this system across igen. This is a
great opportunity to find out more about your colleagues and to share expertise
and learn from each other. Also, over the next few months the project team need
to see how you use it. In order to support this, as a minimum we are asking you
to please complete at least one of the following tasks at least once a
week.
Ideally we’d like you to visit the site twice a week and do more than one
task.
Task one Log on to SOBOLEO and have a look around
Northumberland – use the link from the front page of the intranet under the light blue tab called ‘links’, or go directly from Start Menu – All Programs.
Igen – use this link: http://octopus35.perimeter.fzi.de:8080/ Save it in your favourites so you can find it easily.
Remember your username is your email address and your password is your initials followed by 4pt
--------------------------------------------------------------------------------
Task two Do some searching and browsing
On the home page search for a relevant subject. Can’t find it? Add the information – chances are others will be looking for it too.
Can’t think of anything to search for? Then click Browse Topics to look at new documents and web pages
Click on Browse People to look up your own profile – anything changed?
Look up a colleague’s profile – preferably someone you don’t know that well.
If you find someone who you could usefully speak to why not use the email
button and introduce yourself?
----------------------------------------------------------------------------------
Task three Do some Tagging
Click on Tag People in the menu and: Tag yourself with a new topic OR Tag a
colleague with a new topic.
Click the tabs for web pages and office documents on the same page to Tag
a new web page and/or Upload a document from your own system and tag it.
----------------------------------------------------------------------------------
Task four Spend some time editing/gardening
In the Edit Topic List page, do some ‘gardening’!
Create a new main topic for the list that would suit one of the latest topics, or topics that you can go on and add now, and move them to their new home.
Add a description to the topics you created and an alternative label.
Move 2 topics from latest topics to the main topic list under the relevant
heading.
Click on the ‘improve it’ button if you’d like to tidy up but are not sure
what to do.
Use the chat log to discuss the list with your colleagues.
Look at ‘hot topics’ – are any of them related to you?
----------------------------------------------------------------------------------
Getting help
If you have any problems, or want to go over any of the features of the site
again, please get in touch with Isabel (01670 798244) or Caron (01670 798232)
and we’ll be pleased to help you!
A work-through-alone presentation is available on the:
• SOBOLEO site under the topic Using SOBOLEO
• Northumberland intranet (and circulated by email)
• igen intranet (and circulated by email)
SOBOLEO Practice Tasks
SOBOLEO Practice Tasks
If you have any problems, or want to go over any of the features of the site,
please get in touch with Isabel (01670 798244) or Caron (01670 798232) and
we’ll be pleased to help you!
Log on to SOBOLEO – here is the web link again: http://octopus35.perimeter.fzi.de:8080/
Remember your username is your email address and your password is your
initials followed by 4pt
Searching and browsing
On the home page search for a subject by typing a search term in the search
box. For example you could search for ‘Foundation Learning’, ‘IT’, ‘Labour
Market Information’ or ‘High Schools’
Click on Browse People to look up your own profile.
Click Browse Topics to look at documents and web pages.
Tagging
Click on Tag People in the menu and: Tag yourself with a new topic. Tag your
colleagues too. Remember you can tag people with the same topic more than
once.
Click the tabs for web pages and office documents on the same page to:
Tag two new web pages.
Upload a document and tag it.
Editing or ‘Gardening’
In the Edit Topic List page, do some ‘gardening’!
Remember that all topics you have just added to the system are in a
folder called ‘latest topics’ found right at the bottom of the list.
Choose one that you added and add a description and an alternative label.
Create a new main topic for one of these new topics, or add it to an existing
main topic that makes sense.
Use the chat log to discuss the list with your colleagues – IS ANYONE THERE?
Look at ‘hot topics’ – are any of them related to you?
Well done and thanks!
12.1.1.4 Questionnaire
NOTE: Before you begin, please understand that all information shared will be held in strict
confidence and your identity will not be divulged in any way.
We ask you to complete a short questionnaire about the comprehensiveness of Soboleo, the
people tagging tool. The goal of this questionnaire is to establish how useful people tagging is
to your work practice.
The questionnaire consists of two parts. The first part has four different person profile
examples, for which we want you to rate accuracy, completeness and usefulness. The second
part consists of ten additional questions.
You will need approximately 5 to 10 minutes to complete the questionnaire.
Thank you for your help. It supports scientific research!
12.5.3 Person profile example A
1. To what extent do you agree with the statement “profile A represents this person
accurately”?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree* / Agree* / Fully agree*
If you rated profile A as “slightly agree”, “agree” or “fully agree” for accuracy,
please explain why in the box below.
2. To what extent do you agree with the statement “profile A is complete”?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree* / Agree* / Fully agree*
If you rated profile A as “slightly agree”, “agree” or “fully agree” for
completeness, please explain why in the box below.
3. To what extent do you agree with the statement “profile A is useful”?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree* / Agree* / Fully agree*
If you rated profile A as “slightly agree”, “agree” or “fully agree” for usefulness,
please explain why in the box below.
12.5.4 Person profile example B
4. To what extent do you agree with the statement “profile B represents this person
accurately”?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree* / Agree* / Fully agree*
If you rated profile B as “slightly agree”, “agree” or “fully agree” for accuracy,
please explain why in the box below.
5. To what extent do you agree with the statement “profile B is complete”?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree* / Agree* / Fully agree*
If you rated profile B as “slightly agree”, “agree” or “fully agree” for
completeness, please explain why in the box below.
6. To what extent do you agree with the statement “profile B is useful”?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree* / Agree* / Fully agree*
If you rated profile B as “slightly agree”, “agree” or “fully agree” for usefulness,
please explain why in the box below.
12.5.5 Person profile example C
7. To what extent do you agree with the statement “profile C represents this person
accurately”?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree* / Agree* / Fully agree*
If you rated profile C as “slightly agree”, “agree” or “fully agree” for accuracy,
please explain why in the box below.
8. To what extent do you agree with the statement “profile C is complete”?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree* / Agree* / Fully agree*
If you rated profile C as “slightly agree”, “agree” or “fully agree” for
completeness, please explain why in the box below.
9. To what extent do you agree with the statement “profile C is useful”?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree* / Agree* / Fully agree*
If you rated profile C as “slightly agree”, “agree” or “fully agree” for usefulness,
please explain why in the box below.
12.5.6 Person profile example D
10. To what extent do you agree with the statement “profile D represents this person
accurately”?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree* / Agree* / Fully agree*
If you rated profile D as “slightly agree”, “agree” or “fully agree” for accuracy,
please explain why in the box below.
11. To what extent do you agree with the statement “profile D is complete”?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree* / Agree* / Fully agree*
If you rated profile D as “slightly agree”, “agree” or “fully agree” for
completeness, please explain why in the box below.
12. To what extent do you agree with the statement “profile D is useful”?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree* / Agree* / Fully agree*
If you rated profile D as “slightly agree”, “agree” or “fully agree” for usefulness,
please explain why in the box below.
Please answer the remaining questions based on your experience with the Soboleo people tagging tool:
13. I consider a person profile more accurate if many different people have tagged it.
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree / Agree / Fully agree
14. I consider a person profile more complete if many different people have tagged it.
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree / Agree / Fully agree
15. I consider a person profile more useful if many different people have tagged it.
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree / Agree / Fully agree
16. If a tag is moved from “latest topics” to another section of the Topic List, then there is a
better understanding of the tag.
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree / Agree / Fully agree
17. If a tag is moved from “latest topic” to another section of the Topic List, then I can retrieve
the tag more easily.
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree / Agree / Fully agree
18. When a tag is moved from “latest topic” to another section of the Topic List, then the tag is
less ambiguous.
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree / Agree / Fully agree
19. Do you think using Soboleo has helped you to increase the number of colleagues in your
professional network?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree / Agree / Fully agree
20. Have you built up more relevant contacts for your work practice by using Soboleo, the
people tagging tool?
Fully disagree / Disagree / Slightly disagree / No preference / Slightly agree / Agree / Fully agree
21. Have you done any gardening/editing activities? (yes / no)
• If yes, please give some examples
• If yes, please state in which situation you did gardening/editing?
i. planned session in a group
ii. unplanned session in a group
iii. planned session on your own
iv. unplanned session on your own
v. when you saw obvious errors or disorder
vi. when you followed recommendations
vii. other (please write down)
• If no, please give a short explanation as to why you have not done any gardening/editing?
22. What triggers you most to use Soboleo for tagging? Please give an example and a reason
Data from the questionnaire
Reports from igen
MATURE Project: Soboleo
Report from Connexions Northumberland by Caron Pearson and Isabel Taylor
Overview
We became involved in the MATURE Project in 2009 as a result of contact made with Warwick IER in
the context of a separate project. Isabel Taylor and I worked with Jenny Bimrose, Alan Brown,
Graham Atwell, Simone Braun and Andreas Schmidt to develop and test the people tagging tool,
Soboleo. This was initially with Connexions staff in igen Northumberland and latterly with IAG staff
across the igen group in Yorkshire. This report outlines the activity that we organised and led, as well
as our comments and feedback on the project now that our work has concluded.
Activity
Initially Isabel and I were introduced to and then worked on the first version of Soboleo. We made
suggestions to the development team about appropriate terminology for guidance and information
workers and the user friendliness of the system.
We introduced Soboleo to a test group of igen staff in Northumberland and collected their feedback
which again helped to develop the system. We had some interesting and lively discussions about
people tagging, editing and the viability of such a radically different system being used in our
organisation.
The project was then suspended for 6 months due to the changing circumstances in our industry.
Phase One of Training
In May 2010 we resumed our work and began training all igen staff in Northumberland. All staff attended training sessions, each lasting half a day. In total, 41 staff were introduced to the system. See Appendix 1.
The aim of the training was that:
By the end of this session we hope that you will understand what Soboleo is, why we are using it
and, through hands on experience, you’ll feel able to use all of the program’s functions.
It was made clear to participants that this session was the beginning of the process of building
familiarity with Soboleo;
After this session continued use of Soboleo is required. We provide some tasks to do which will
help cement your knowledge. Then we hope you will use it because it is fun and useful!
Examples from Evaluation feedback from Northumberland (for full results see appendix 2).
What participants liked about Soboleo
“Accessible for all; many possibilities for practical use. More immediate than existing intranet”
“The tagging and connecting staff to specific roles will be very informative – when everyone is
tagged”.
“Having autonomy of adding documents and relevant websites”
What participants thought could be improved
“Possibly more colour / graphics. scrolling is a bit unwieldy”
“speed”
To add an UNDO button in case of mistakes.
How participants thought it could help them in the future
“Signposting to individuals with specific competencies”
“help me find information more quickly and efficiently”
“Looking for additional information. Sharing knowledge across the organisation, updates by the PAs
and the workforce of Connexions”.
On a scale of 1-5 with 5 indicating very confident and 1 not confident at all:
Participants’ confidence in searching for topics was rated highly – 56% rated 4, none indicated 1
Participants’ confidence in tagging people and uploading documents was also good - 63% rated 4,
none rated less than 3
Participants’ confidence in gardening and chatting was less so – 26% rated this as 3, 31% as 4.
Further comments included
“The advantage of this program is that staff can be in control of it. However, I think this could also be
a disadvantage!? It would be hoped that staff would behave responsibly in the adding and editing of
information!? Also who would be responsible for removing out of date information?”
“Only concern is the potentially, vast amount of good information and not being able to find it.”
Overall, evaluation feedback was very positive. Staff found the system easy to use and they were attracted to the idea of adding information and people tagging. Despite a few reservations, they liked the idea of being able to garden/edit themselves, but they found performing the function itself more challenging. A recurring theme in feedback was the blandness of the interface and the response speed of the system. There was general agreement that, once the system was in general use, it would be helpful.
Phase Two of Training
From January 2012, training was expanded to cover more staff from igen ltd in Yorkshire. A further
66 staff were trained in 6 sessions - see Appendix 3. The aims and processes of the training were
exactly the same as those used in Northumberland.
Examples from Evaluation feedback from Yorkshire (for full results see appendix 4).
What participants liked about Soboleo
“It has a ‘personal touch’ and allows professionals to link knowledge with their colleagues.”
“Very similar to Facebook – easy to use and understand.”
“All the info being available in one place for us all to share and comment on”
“Find out what knowledge colleagues have on work and non-work / leisure”
“It is a good way to communicate with colleagues from other locations”
“Good use of inter-connected topics with reasonably fast arrival at required topic”.
What participants thought could be improved
“Can be a bit slow”
“I think the website should allow you to see who has added things, pictures should be on the site”
“Simpler process for editing and gardening topics”
“Concerned that others can change my entries with out my knowledge. Will only be as good as the
information put on it and feel it could quickly become unwieldy
If it worked correctly and didn’t keep stalling would help”
“A-Z of names - click on letter to get to that letter. Add YOS titles, add phone numbers. Problems if
topics disappearing. Add ability to comment on documents, websites, topics. Ability to ask for
information that isn’t already a topic. Who knows about …..? “
“There should be a notice to pop up to indicate someone has sent you a message on the chat system.”
“Too easy for random delete to happen. Seems to be time consuming in its maintenance. Entries
(topic heading etc) are too specific – broader recognition by the search engine. “
How participants thought it could help them in the future
“Sharing of good practice –a) efficient use of the resource (not reinvent the wheel). b) Identification
of key people as resource.”
“If it is kept updated it could help me to keep track of opportunities in Leeds. Would be good to be able to set an expiry date for some posts, e.g. time limited courses that end 6 weeks later.”
“Linking professionals and expertise.”
“Timesaving when looking up info or how to contact the right person for info”
“Depends on what is entered onto it.”
“Sharing information in the company.”
“Not having to always bother someone for answers”
“Not sure yet. Ask the right person to get the right answer?”
Further comments
‘Staff maybe wary of spending time adding docs etc unless time is allocated during working hours.’
‘I’m not sure how this system would help in my role however; I would probably use it for research
purposes. I’m also not keen on other people being able to delete ‘my’ topics. ‘
On a scale of 1-5 with 5 indicating very confident and 1 not confident at all:
Participants’ confidence in searching for topics was rated highly – 56% rated 4, none indicated 1
Participants’ confidence in tagging people and uploading documents - 63% rated 4, none rated less
than 3
Participants’ confidence in gardening and chatting – 26% rated this as 3, 31% as 4
A further online self-help PowerPoint and practice tasks sheet was circulated to all staff who had not been able to attend the training – approximately 100 in all. To gather feedback, an online survey was set up and responses to this are recorded in Appendix 5.
Trainers’ Comments
Overall the Soboleo system was enthusiastically greeted and deemed user friendly. The concept of
people tagging in a work context is very new and again, seemed popular. Workers said they liked the
idea of transferring social technology to the working arena.
The circumstances in which we were introducing Soboleo were not ideal. The demise of the
Connexions Service ran parallel with most of this work. Had the service continued, with an
interagency working ethos, we would have been able to see the benefits of people tagging far more
clearly. Also, encouraging staff to use the system, which is separate from the client record system and Outlook, adds to the challenge.
Searching, browsing and tagging were easily taken on board but editing/gardening was more difficult
for most people to grasp. This was in part due to some difficulties we had with the system – being
unable to use the cut and paste facility, losing the page (sent back to log in), having to refresh page in
order to view changes, the endless scrolling to move topics into the main list from latest topics etc.
Participants understood that gardening was vital so that people could find the information in the
system, but many found this too unwieldy.
Participants remarked that the interface of Soboleo could be brighter and more use could be made of
personal logos to ‘brighten up the screen’. The log in button could also be more prominent. Several
remarked upon the fact that they would like to be able to set themselves a more secure password in
order to better protect their work on the system.
Some parts of the organisation have been using other IT systems to share information. These systems are already well embedded into their work practices, and this may affect the successful take-up of Soboleo within these areas of the company, as staff may be unwilling to learn and use another system, especially one they view as slow.
The training raised the issue of the range of IT skills across the company as staff in some areas were
more IT literate and confident in their use of new programs while others struggled with some of the
new concepts they were introduced to, particularly editing. We feel this will also have an effect on
usage of the system.
February 2012
Soboleo Training Northumberland
Appendix 1
Trainers: Caron Pearson and Isabel Taylor
12th July Blyth
14th July Blyth
19th July Blyth
21st July Blyth
Brenda Donnison
Andy Oliver
Ann Clark
Viv Shanks
Eileen Reid
Lindsay Taggart
Ian McIntosh
Claire March
Liam Paxton
Heather Corbett
Gill Burridge
Kim Fenney
Linda Collins-McCluskey
Michael Haley
Kerri Riddell
Imelda Walker
Michelle Harvey
Susan Slassor
Kevin Hogg
Tony Mulholland
Wendy Nicol
Ian Yarrow
Andrea Johnson
20th July Alnwick/Berwick
10th August Hexham
20th September Blyth
Susan Kent
Margaret Morey
Ruth Ingleby
Josie Oliver
Maureen Stewart
Stephen Edminson
Susan Blenkinsop
Noel Keighley
Rachel Mc Creesh
Gill Bromley
Kathleen Brown
Brenny Matterson
Pamela Cornfoot
Alison Kennedy
Jill Hedley
Jackie Farrell
Lindsey Hunter
Alison Smith
Unable to attend
Carol Barker
Sally Weir
Zoe Olden
Jacqui Wood
Brendan McCreesh
41 trained at 20/9/11
Northumberland Soboleo Training
Appendix 2
July/August/September 2011
Evaluation Summary
1) What do you like about this software?
 I can see many potential uses for this system as it is given time to develop.
 Simple to use and understand
 Potentially very useful
 Easy to view and use. Potential to support a number of activities
 Accessible for all; many possibilities for practical use. More immediate than existing intranet
 Yes I will use Sobeleo. Will be useful when I have a question and knowing who else I can ask in the event no one in office.
 Could be really informative and current
 Useful resource for work and creating ‘contacts’ and tagging web sites.
 Easy to use. Love the idea of sharing info and resources
 Very easy to use
 I like its functionality
 This is a great way to gain knowledge from people who are in the same ‘business’ on a daily basis
 Quite straightforward, reasonable easy to use
 Seems easy to use and navigate around
 Finding web pages
 Easy to navigate
 It could be very useful when the whole company starts to use it
 The tagging and connecting staff to specific roles will be very informative – when everyone is tagged.
 Able to store information / websites / documents. Colleagues. Find in one place and ease of access.
 It allows you to see what others do / their roles.
 It seems simple to use. A good way of everyone adding useful information to one central place.
 Seems fairly easy to use.
 More user friendly than our intranet.
 Like the fact it’s ‘managed’ by staff.
 Being able to tag yourself and others – seeing who would be more suitable for different subjects.
 The concept behind it. Easy ** **
 Useful information although needs to be populated with more!
 That’s difficult at the moment
 Easy to use
 The idea of sharing info is a good one.
 User friendly
 The software seems to be a good informative way of sharing information with colleagues.
 It appears to be a helpful resource of information
 Useful way to focus on topics.
 Having autonomy of adding documents and relevant websites.
 I think it could / will be very useful.
 Could work very well across the organisation.
 Central place to share information. Practitioners able to put info on as we find it.
 Appears to be a relatively easy way to access a wide range of information.
 Fairly straightforward to tag and find people. More complicated to add information.
2) How do you think it could be improved?
 Needs to be care re: gardening and organisation of topics
 Possibly more colour / graphics. scrolling is a bit unwieldy
 Easier navigation
 Keeping an eye on the gardening aspects especially the latest info section could get quite a long list
 Editing ‘pen’ function didn’t appear to work
 Would have to use often to be able to comment
 Being able to download documents
 In my opinion it could look more polished and professional
 The colours could be brighter. When in the editing topic list could the latest topics be in a separate box. Scrolling up and down can be confusion.
 Brighter graphics
 Few tweeks with documents and topics
 More visually pleasing – could the 2 colours be any more dull?
 Couldn’t move a topic out of ‘latest topics’ to make it a ‘main topic’.
 Adding pics, forums – discussion forums – for people tagged in same
 Not sure until start to use system
 Be able to create specific groups. Encouragement to be used as an info sharing site too ie to share group sessions
 The cut and paste to edit topics is a bit time consuming. Maybe it could be simplified?
 If we were able to have faster internet access. But presumably this would be linked to funding.
 Hard to say at moment. Once teething problems ironed out I think it will be a helpful ‘tool’ especially for PAs.
 I’m not sure what could be improved at this stage.
 I think the users will be difficult to convince to buy into it. That will be a bigger issue than any of the software.
 Cross referencing with other sites. Visually? Could be a bit more up to date, needs more colour!
 Could be made easier to use
 Speed
 Speeded up
 Linking each subject – though this maybe come easier with use.
 I think using it regularly would highlight any problems.
 Faster system would help.
 Not sure at the moment. However, it would be useful to have a facility where documents/information can be highlighted to indicate the time it has been on the site to ensure things are kept up to date.
 To add an UNDO button in case of mistakes.
 RSS feeds would be useful. ‘Notes’ feature could be an idea so people could keep ideas / thoughts all together but obviously every user would need a separate page.
 More interesting to look at visual, graphics.
 Not sure at this point.
 Quite slow to use? Computers at HQ.
3) How do you think it could help you in the future?
 As a source of information and support
 Not too sure as yet, but will think about it
 Sharing of information especially as we are all working remotely a lot of the time
 First reference point for key information / good practice
 Keep in touch with PA roles, specialisms. Links to websites/documents used by others – good way to access a wider pool of knowledge.
 Information quickly
 Find current information and share. Use in interview if you wanted someone with knowledge to sit in?
 Once the software is being populated and used by practitioners it will be more relevant and useful in every day work.
 As a resource base – could potentially replace ‘favourite’ on our desktop.
 Not sure that it will. I’d probably still ring / email colleagues. I think it would be more useful in a larger organisation.
 It could help organise contacts
 Keep knowledge up to date
 Useful to look / resource information, as opposed to always having to email out requests
 Good to have information in one place
 Not sure on Reception
 Sharing information quickly and easily especially about different parts of the county.
 Improving knowledge and practice by sharing with colleagues
 Will help with links to colleges / schools / training providers who is doing what.
 Provide me with useful up to date info, websites web pages and topics to use with students and other young people
 It allows you to see what others do / their roles. Get info / knowledge on any issues I may require for a yp.
 Using it to find colleague with knowledge in subject area. Looking at useful website that others use.
 Signposting to individuals with specific competencies.
 Be able to access certain information more quickly / immediately (if it is on system)
 As a member of admin staff, not sure it would help hugely, but could be helpful as a ‘Help Desk’ for other admin staff?
 I think it would help me find information more quickly and efficiently than asking around the office.
 Will help with recording work and accessing help
 helping me know who has knowledge / skill / experience within the organisation and who to contact for advice
 Information sharing
 Sharing useful sites
 Looking for additional information. Sharing knowledge across the organisation, updates by the PAs and the workforce of Connexions.
 Not sure. Hopefully will use it to get info but seems very hard work at the moment.
 Reduce emails re: FYI
 If new information was added this would benefit me and colleagues alike.
Knowledge of the expertise of colleagues within the company
Cut down on mail box sizes.
Sharing of knowledge / improving professional practice / ensure CPD / involve
everyone in sharing and updating each other.
Sharing information.
Hopefully more info will be uploaded and the site would give easy access.
Sharing information
Updating me with info on topics I am unsure about from people who are more
involved than me.
Accessing information and contacting colleagues for help and advice
4) As a result of the training today, how confident do you feel about:
(please mark on the line with a cross, 5 being very confident and 1 being not confident at all)
a. finding people, web pages, documents and topics?
1----------------2----------------3-----------------4---------------------5
1 =
1+ =
2 = 1
2+ = 1
3 = 8
3+ = 7
4 = 17
4+ = 6
5 = 1
Comment:



Add pictures
No problems here
I enjoyed it really interesting, but will need to practice to be able to use it.
I’ll become more confident with use (to work system really needs people to buy into
idea and use it) > general comment! sorry
Thank you!!! (oops this comment should have gone to the end)
Soboleo was a lot simpler than I expected, very user friendly and fun.
Fairly straightforward
Edit Topic – does not save new additions / alterations made to original description
box.
Once familiar with Soboleo I was able to browse quite fluently and easily.
Think with more practice will find my way round it much more easily.
Need to practice
Looks good and easy to use, unsure of the time we’ll have to actually use it – but
only time will tell. Concerned re: keeping it tidy – could become addictive.
Needs lots of practice!
Am confident now, but might not be when I have to work on this myself!
Good quality training and easy to ask if unsure about something.
b. Tagging people and uploading web pages and documents?
1----------------2----------------3-----------------4---------------------5
1 = 1
1+ =
2 =
2+ =
3 = 7
3+ = 4
4 = 19
4+ = 7
5 = 2
Comment


No problems. Worked well.
Pretty confident at tagging people but would need to practise uploading docs and
web pages once the system allows this.
Could be seen and used more for amusement than serious work, but I think that
phase will pass quickly
Quite straightforward when done it a few times.
Very easy to tag people. Would probably take a little practice to get used to
uploading documents
It was quite simple to get the hang of
Tagging seemed easy
Need to practice
Concerns re: quality of documents if written by PAs. I like the website option –
takes the onus off us.
Am confident now, but might not be when I have to work on this myself!
Need practice
c. Gardening and chatting?
1----------------2----------------3-----------------4---------------------5
1 =
1+ =
2 = 2
2+ = 2
3 = 10
3+ = 7
4 = 12
4+ = 4
5 = 1
Comment


Didn’t get round to chatting.
Chatting very simple but editing / gardening somewhat more complicated.
This area got more confusing. Further use will help.
Chatting is easy. Gardening – think I need more practice on this!
Once teething problems sorted. At present, not able to move topics around as we
are meant to do.
Some of the gardening tools were causing problems but it may have been the
desktop computer.
I found this difficult
Need to practice
Not sure I like the idea someone could move or change something I’d written or
filed.
Streamline is essential
Am confident now, but might not be when I have to work on this myself!
Need practice
Do you think that your confidence will increase in terms of using Soboleo
once you have completed the practice tasks?
Yes x 20
Definitely x 2
I hope so – finding time though?!
Probably yes
Yes and then with continued use
Yes once using on a regular basis will become easier
Yes confidence will improve with use and the more information you can access.
Yes as will reinforce what I’ve learnt.
Yes. Practice makes perfect 
Probably
Yes I feel my confidence will grow the more I use the programme.
Absolutely!
Hope so
Yes. As with everything practice makes perfect! The more you use the software
the more confident you will become with it.
Possibly
Yes confidence would increase – again unsure on resources re: time to do it.
Yes need time to spend / explore further.
Yes certainly.
Any other comments?
Good to be involved with something innovative related to IT for once. Very
pleasant and interesting morning. Well delivered!
Enjoyed the training – nice pace and tasks etc
Photos would be useful especially if rolled out to other parts of igen.
Think the SMS box could be really useful as I work alone – particularly info re
interviews.
Thank you for an interactive/interesting session.
Not sure I’ll use this after the practice tasks and I’d prefer to have had the option
not to do this training when we have so much work at the moment.
Think we have to keep using it and updating / editing the site. Been rather slow to
use.
Only concern is the potentially, one vast amount of good information and not being
able to find it. Not keen on my photo on profile but happy to have a ‘symbol’ to
‘represent’ me!
Useful tool, as long as people keep using it and more information is regularly added
/ amended.
The advantage of this program is that staff can be in control of it. However, I think
this could also be a disadvantage!? It would be hoped that staff would behave
responsibly in the adding and editing of information!?
Also who would be
responsible for removing out of date information?
Thank you for this training
Can see great relevance of the system for training and development purposes.
Well delivered training.
Thank you!



Thank you for the training. Well delivered. Once again the IT capacity of our
system can be an issue re: training events such as this.
Thank you both for this training. I appreciate there were teething problems beyond
your control.
‘Edit’ and ‘Imprac’ would not present me with a drop down box.
Appendix 3
Soboleo Training North Yorkshire, Leeds and Doncaster
Trainers: Caron Pearson and Isabel Taylor
Total 66
8th December Northallerton 13
Rachel Warters
Mark Corcoran
Helen Jukes
Bev Dawson
Lisa Pratt
Karen Morgan
Alison Fletcher
Sarah Barrett
Linda Donaghy
Liz Bryan
Thelma Thomas
Barbara Beresford
Sally Leck
13th January Leeds am 19
Louise Graham
Sonia Madhvani
Michael Davies
Shannel Hamilton
Andrea Webb
Raj Gill
Tanis Yeomans
Carwen Jones
Leon Walcott
Thomas Marsden
Pam Whittam
Nigel Binks
Lucy McPate
Amy Tolliday
Nick Hart
Glyn Dean
Sharon Kumar
Charles Birch
Karen Umpleby
13th January Leeds pm 14
Sam Harper
Steph Herbert
Wendy Farrar
Ann Gallagher
Jade Broderick
Sarah Keane
Dale Clement
Dave Goldthorpe
Joy Pollard
Sam Hampson
Emma Carlin Marshall
Saikha Pinnu
Angela Britten
Jason Hopely
18th January Doncaster 7
Pete Hardwick
Berni Hall
Kevin Norburn
Mike Sokolow
Hayley Thurley
Tracey Curle
Ruth Puckett
27th January Leeds 13
Liz Green
Mandy Cummings
Zoe Tolman
Mary Mills
Mouzma Hanif
Michelle Peacock
Heather Aung
Laura Dooher
Judy Dixon
Rachael Childs
Alex Ryan
Emma Merrick
Louise Baker
Soboleo Training Yorkshire 2012
Appendix 4
Evaluation Summary
1) What do you like about this software?
Sharing documents with colleagues
The ability to share information with colleagues
Clearly laid out. The notion of searching for specific help (personal or printed) is sound.
I can see the benefits and why it has been developed. I just don’t see it being a useful
‘tool’ for us.
Good use of inter-connected topics with reasonably fast arrival at required topic.
It seems out of date – very slow.
The freedom to add and edit information.
Easy to use.
Useful in some way such as easy access to information / people will abuse the system
and use as social network.
Quite easy to use / files/documents can be easily stored and shared.
It is a good way to communicate with colleagues from other locations.
Very easy to use and navigate through the pages.
Seems fairly easy to use.
Easy to use.
It’s user-friendly and easy to get around the site.
Easy to use.
It’s easy to use.
Easy to ‘see’ where you are! Navigation through it is clear.
That it was a very user friendly software and was easy to access.
Find out what knowledge colleagues have on work and non-work / leisure
Easy to use.
You can add info that you find useful. Colleagues can be found quickly that may
have useful expertise
It’s quite easy to navigate around
All the info being available in one place for us all to share and comment on
It’s pretty easy to use and find what you want
I like the way you can assess the info easily
One common storage/information sharing tool for the whole contract
How you can store found information in one place
Not sure
That you can contact work colleagues
The chat option
Fairly user friendly – good means of contacting Cx workers up and down UK.
Chat option (could be private though)
Very similar to Facebook – easy to use and understand.
It has a lot of potential to help me to keep track of information about new
developments in Leeds.
Straight forward to use. Sharing best practice and what works well.
Good facility to share info/expertise with colleagues
Effective way of sharing and storing information
Could be useful for sharing information
Anybody is able to update the system and add subject
Concept very good
The idea of the software is good for 1 group for information. Not sure how often I
would use it though
It seems to have a real useful app
Not sure yet
It seems easy to use once we get used to it.
I think it is a great idea and user friendly
Simplicity and ability to store good practice and information. Contact people who
are specialist and have lots of tags.
The simplicity, easy user interface
Could be a great way to find a lot of info in one spot
Easy to use
Easy to navigate
I’m sure it will be helpful

Unable to fully evaluate because of slow IT systems

Cross boundary aspect – being able to discuss with other igen workers, share
information easily with headings that suit us.

Easy to navigate, lots of functions.

Good – will be better as more people add information and websites.

It has a ‘personal touch’ and allows professionals to link knowledge with their
colleagues.

Easy to access using tabs.
2) How do you think it could be improved?

Quicker system – currently too slow. Tag more than one person at a time. More
security options. Nicer, up to date appearance. Link to websites by tagging
The look of the software could be brighter more colourful. The speed and efficiency
would need to increase. It would be beneficial to have a check before items can be
deleted.
Too easy for random delete to happen. Seems to be time consuming in its maintenance. Entries (topic heading etc) are too specific – broader recognition by the search engine.
In its current form it is too slow and also it is open to abuse. This needs to be thought
about carefully before it can be used.
Multiple tags in one go, rather than individual tagging for the same topic.
Improve security (changeable passwords) / Make the site more ‘user friendly’!
The speed could be improved. Multi tagging would be useful.
Alerts when new things are added to your areas of interest etc.
Somebody monitoring and ‘gardening’ in each organisation. / Images can be added,
usual stimulation.
Ensuring the management of information is sustained to ensure no out of date resources
are on the website.
There should be a notice to pop up to indicate someone has sent you a message on the
chat system.
N/A
I don’t know yet.
Added notifications for when someone has tagged you onto a subject.
None – it's great!
A-Z of names - click on letter to get to that letter. Add YOS titles, add phone numbers.
Problems if topics disappearing. Add ability to comment on documents, websites,
topics. Ability to ask for information that isn’t already a topic. Who knows about …..
Ability to see who is online – have chat, ie private chat.
A part for requesting information, to show who’s online/logged in. Private chat,
searches should not be case sensitive.
Speed. Names should be surname order.
Having more topics on the software.
(1) Quick search facility for names. (2) Reminder – if someone tags you on a subject
and you don’t want to be tagged. Reminder you can delete.
Concerned that others can change my entries without my knowledge. Will only be
as good as the information put on it and feel it could quickly become unwieldy
Website be improved so works ‘faster’ – quite a lot of time spent waiting for pages
to load today.
Password not at all secure at the moment. Dislike that others can delete my entries
without my knowledge or permission.
See who’s tagged things
Not too sure at moment need to spend more time on the Soboleo web page.
More focus on ‘stuck cases’ / peer vision – somebody would need to monitor and
quality assure as it may lead to information overload otherwise.
Passwords, been able to tag people to web addresses and sites and info
Difficult to say at this stage as I feel I need to start using it first
If it worked correctly and didn’t keep stalling would help.
So you can’t delete the things others put on so easily. To limit chat to specific users
you want to chat with. If it was monitored
Could be faster
It’s too early to tell what can be improved, I guess time will tell.
I think the website should allow you to see who has added things, pictures should
be on the site.
It is too slow. Should be able to create groups of people. Chat should integrate
with email so you don’t have to be on the system to be able to receive a chat
message. Make the interface more intuitive. Be able to invite other people from
Children Leeds to use it, to enable multi agency working.
To have hover information to come up over bars to explain what they do. Photos to
become familiar with who everyone on the system is.
Email notifications would be useful to know if I have been tagged. Password
change?
Simpler process for editing and gardening topics
Facility to change password – more secure!
Tagging people shouldn’t be
anonymous. Email alerts.
Many of the functions did not work today and the training/inputting did not flow
very well.
Linked to Linkedin; see who has tagged; need password change; more relevant
info; prompt needed by Admin
I would need to work on it in a live working environment
Open up to all related agencies.
There should be an expiry time on the latest topic list as due to our busy work
schedules we may not have time to delete everything. The ratings as on expert is
dependent upon how much knowledge the person has who tags you. You should
only be able to tag once. Also you should only be able to delete what you add. You
should be alerted if you have been tagged.
Tasks user connection to server to client
Add profile pics; to trace who has tagged you into what
IT access was poor so need this sorting out to be able to use.

Cannot comment


It is very slow – but there were IT problems on the day.

N/A

Can be a bit slow.

It’s a bit slow at the moment. Could do with a bit more information uploaded by
the staff registered on it.

Speeded up.

Unsure.
3) How do you think it could help you in the future?


Researching specialities within the company.
It would probably take longer to share information than our present system so do
not see the present software as a form of help at the moment except online chats.
Not sure yet. Ask the right person to get the right answer?
I don’t think it can. There are other ways of doing the same, or at least similar,
things in a more effective way.
Difficult to say until utilised properly and more in-depth knowledge gained.
Signposting to individuals with queries regarding particular topics.
Not sure. Maybe improve flow of information within igen.
Good practice within the team division and for housing useful information.
Not having to always bother someone for answers. / Easier information sharing
between organisations.
Storage of documents – easily accessible to all / Encourage information and encourage
best practice.
Sharing ideas/finding out about different resources.
By giving me numeracy documentation from various people/companies across the UK.
To share information between teams in igen.
Useful work contacts and networking tool to share good practise.
By uploading our training courses on the system and letting colleagues know about
it.
Searching for who knows what.
Sharing information in the company.
Quick (easy!) links to info and who does what.
Yes as it could help share knowledge from other people.
Sharing of knowledge.
I would like to use the system to store and find local information specific to my
work.
Could be a good way to share information with colleagues
Just sharing information from other practitioners – knowing it’s come from a good
source.
Timesaving when looking up info or how to contact the right person for info.
When searching for information, people
Look up at topics, title, finding out more information, finding out more about people
who are related to the topic
Improved information sharing and networking
Quick search to other linked things and to find info
Once we start using it and putting useful information on it I think it will be useful
Not sure that it will help me
Depends on what is entered onto it.
Again useful for contacting Cx workers in other parts of the country (esp when yp
move from Leeds to use other Cx services in the UK)
Not sure until I give the system a proper go
Be able to share good information with other workers and colleagues
If it is kept updated it could help me to keep track of opportunities in Leeds. Would
be good to be able to set an expiry date for some posts, eg time limited courses
that end 6 weeks later.
Information sharing ie on training, news reports, current hot topics to help our
young people.
Good facility to share info/expertise with colleagues
Could help to identify relevant services or to seek out opportunities re: work and
training
Sharing info/resources
Could help find information about topics/activities in other City’s
Space to share all info across multi agencies
Could see who else covers specialist topics/work and gain knowledge from them.
It would help me in terms of careers support for young people
Possibly when it has built up with more info relevant to Leeds
Linking professionals and expertise.
Finding relevant work information
To develop an excellent network of resources, links and information and provide
access to people who are ‘experts’ in areas.
Information sharing
Easy access to info sharing?
Help update knowledge and share ideas with colleagues
Information finding, networking with colleagues.
Not sure due to pending situation next year and motivating staff

In view of possible termination of contract, my personal view is that this may not be
the best time to introduce this to our team

a) Sharing of good practice – efficient use of the resource (not reinvent the wheel).
b) Identification of key people as resource

Not sure at present – see how it develops.

Support with the role and if looking for a certain website / information.

Allow staff to draw on their colleagues areas of expertise without them being
present. Allows different regions to share expertise – this may be difficult to do
without this resource.

Not really sure it can.

Not sure really – time will tell.
Do you think that your confidence will increase in terms of using Soboleo
once you have completed the practice tasks?
Yes x 3
Experience suggests so. Time will tell.
I am not sure it will make much difference, but thanks all the same 
Yes, practice makes perfect.
Pretty confident already in the use of this.
The more I use the easier it will become.
I hope so …..
Yes x 3
Yes – useful to become more confident with using it.
Yes x 2
Yes – absolutely.
Hope so.
is listed. When going into topics, interesting people – doesn’t show yourself. Would
be useful to show yourself so you don’t keep editing yourself again.
Staff maybe wary of spending time adding docs etc unless time is allocated during
working hours.
Yes x 6
Yes absolutely x 2
Probably
No
Possibly.
None - thank you for the training.
We have 2 different systems already in use, don’t know if I would have time to do a
third.
Yes when I put it into practice I will become a confident user.
Yes x 8
Yes. However, I am unsure how effective this tool will be due to time restrictions.
No further than use today
Hopefully
Yes I expect to feel more confident once I have had chance to practice.
Stay same
Yes x 2

Yes – better to work with it than have presentations. Trial and error – playing around with it etc.
Probably.
As a result of the training today, how confident do you feel about:
(please mark on the line with a cross, 5 being very confident and 1
being not confident at all)
a. finding people, web pages, documents and topics? (totals)
1----------------2----------------3-----------------4---------------------5
1 =
1+ =
2 = 4
2+ = 3
3 = 9
3+ = 5
4 = 19
4+ = 1
5 = 14
Comment:

2 rating due to access issues not presentation

Thanks for showing us this – interesting to be aware of this system – it undoubtedly
has lots of potential.

Only reason not rated 5 is IT access difficulty and reduced time. I am sure this will
be fine when I sit down again to do it and my own log-in works.

Have practised today – will need to continue as system very slow today



People seems very easy. Documents not so.
The Soboleo interface is quite straightforward and easy to use.
Soboleo could be very useful as a tool used by an individual. Could become
cumbersome if used at an organisational level ie who uploads what, who checks
accuracy, who updates and removes out of date information.
This activity is self explanatory – similar to searching on the internet.
This seems very straight forward.
Easy to use.
For me, it’s a need to do practice!
Easy to find, tabs at the top really helped.
Need more practice.
Info that is relevant to me will need to be uploaded.
Yes I will be able to share info.
Very enjoyable: Good to be able to try out the website.
Searching topics is simple and easy to use
Straight forward
Couldn’t complete task 3 today so will have to practice this for a while.
Very good training session
N/A
The more I use it I will hopefully get the hang of it more.
b. Tagging people and uploading web pages and documents? (totals)
1----------------2----------------3-----------------4---------------------5
1 =
1+ =
2 = 3
2+ = 1
3 = 12
3+ = 2
4 = 15
4+ = 2
5 = 17
Comment
i. You should be able to tag websites to people. Tag more than one person at a time.
ii. Would be helpful to tag websites to other people
iii. The Soboleo interface is quite straightforward and easy to use.
iv. Tagging useful for targeted requirements. Web pages – possible duplication of what a search engine does. System very slow.
v. This activity was self-explanatory.
vi. Seems straight forward.
vii. Easy to use.
viii. For me, it's a need to do practice!
ix. Easy to tag and fast to access.
x. Need more practice.
xi. Worried don't know exactly how or understand the link, but pretty
xii. Didn't have a chance to upload documents otherwise it would be a 5
xiii. Again found this a bit slow
xiv. I thought this was not very intuitive
xv. Will need to practice uploading web pages
xvi. Needs further practise
xvii. Straight forward
xviii. Fine.
xix. Have practised today – will need to continue as system very slow today
c. Gardening and chatting? (totals)
1----------------2----------------3-----------------4---------------------5
1 = 2
1+ =
2 = 3
2+ = 1
3 = 10
3+ = 1
4 = 13
4+ = 2
5 = 7
Comment

Easy enough, but I can see why some people may struggle with this part.
This task was confusing! I got there in the end 
Missed out on this part of the training
Maybe need a little more practice but got the hang of it – in the session.
Gardening with time permitting
Easy to use.
Why are all logs duplicated.
For me, it’s a need to do practice!
Will benefit from using the extra information you are emailing as editing not
possible as PCs having difficulties
Struggled a bit today with the editing
Seemed a complex system
Fairly straightforward
Passwords need to be changed. Chat private option. Faster browsing – may attach
a search engine – google etc.
Didn’t really get to try it as system went down
Problems with the system
Chatting is fine. The explanation of some of the gardening is confusing.
Not fully trained as yet
Needs further practise
Straight forward
I would be confident doing this, but this could be time consuming if people just add
stuff and don’t edit as they go along ie ending up with a very latest topics list.
Didn’t get much chance to do this.

Have practised today – will need to continue as system very slow today
Any other comments?
I’m not sure how this system would help in my role however, I would probably use
it for research purposes. I’m also not keen on other people being able to delete
‘my’ topics.
Would be concerned over the ease as to which other people can delete information I
had created.
Isabel was a great help. Always good to have a knowledgeable trainer.
As I said before, I can see how this system will benefit some individuals and
organisations although I don’t think it will help us here.
Easy to see on an individual basis how this might be useful. Concerns how the
system would benefit the organisation if implemented at an organisational level.
The PowerPoint could have been slightly more interactive – Again, this seemed out
of date.
Maybe someone to train rest of staff at igen.
NB: Shared workspace – plus topic specific areas eg for sustainability group so
add all relevant docs and discussions. Can be quite slow when searching. Search is
too specific – should not be case specific – possibly add similar words for
misspelling – most relevant first.
The trainers were very helpful and informative.
We could do with changing our passwords
Keep us updated with any more changes
Thank you!
None
Thanks.
Training may be pitched at too low a level.
Feel I need to spend a lot more time going through it again. Practice makes
perfect!!
Lovely training facilitators
Thanks for the training
Thanks trainers clear – coped well with IT difficulties. Thank you.
Major problems:
Duplication of resources and topics
Lack of control about what is added and what is deleted
Speed of input
Soboleo
Appendix 5
Summary of responses from online training evaluation (as of 1/3/12).
As a ‘catch all’ we circulated, with self help training materials, a link to an online
questionnaire at www.surveymonkey.com. The questions are almost identical to
those asked in the evaluation proforma given out at the training sessions.
To date, 9 people have responded and their views are recorded here. The survey
remains open, and more responses may be added. View these by logging on to the site and signing in using the username soboleo1 and password projectend.
1 As a result the training, practice tasks and your use of Soboleo, how
confident do you now feel at finding people, web pages and documents? (5
being very confident and 1 being not confident at all)
1 – 0
2 – 0
3 – 0
4 – 4
5 – 5
2 As a result of the training, practice tasks and your use of Soboleo, how
confident do you now feel about uploading web pages and documents? (5
being very confident and 1 being not confident at all)
1 – 0
2 – 0
3 – 0
4 – 4
5 – 5
3 As a result of the training, practice tasks and your use of Soboleo, how
confident do you now feel about uploading web pages and documents? (5
being very confident and 1 being not confident at all)
1 – 0
2 – 0
3 – 3
4 – 5
5 – 1
4 As a result of the training, practice tasks and your use of Soboleo, how
confident do you now feel about editing the topic list, or 'gardening'? (5
being very confident, 1 being not confident at all)
1 – 0
2 – 1
3 – 1
4 – 6
5 – 1
5 What is your opinion of Soboleo? Tick all that apply
It is easy to use. 7
It's a useful place to share and/or find information or resources. 4
It's a good place to find out about colleagues' jobs and specialisms. 5
It's a good place to find help or support from colleagues. 2
It's a good place to offer information, help or support to colleagues. 2
6 Please indicate below which of the following you have done since
completing the training and practice tasks.
I logged on to Soboleo and browsed. 9
I logged on to Soboleo and uploaded a web page or document 2
I logged on to Soboleo and tagged myself, or another person 7
I logged on to Soboleo and did some editing or gardening. 2
I logged on to Soboleo to look for some specific information. 1
I logged onto Soboleo and as a result contacted a colleague for help, support or
information. 0
7 Now that the formal training is over, will you continue to use Soboleo?
Yes 4
No 5
Additional answers for the questions 7, 8, 10 and 11
For question seven we collected 16 answers, four participants did not want to answer. Here are the
results:
• From what I know of this colleague's work and experience the profile seems to be quite accurate, but I do not know enough about her work to say if the profile is completely accurate.
• I think this profile is much more comprehensive than A and B and provides a good insight into the individual, their expertise and links. I currently line manage the individual and therefore feel it is a fair representation of their role. There are one or two additions that could still be made.
• No mention of role or base
• The profile says who Pamela is, including organisation, and email contact details.
• This profile has a lot more info available, and appears to be comprehensively complete.
• Because there is more information in person profile C
• More info here as long as it is accurate.
• As I am not familiar with this person I do not know how accurate the information is.
• Details of the person's experience and expertise are in the profile which is helpful if you have a query about this area of work.
• I know Pamela is an LDD specialist within the Northumberland team and links to Tyne Met College. I don't know about every aspect of her work so this is the only reason I didn't put Strongly Agree.
• The profile has lots of information showing a wide variety of areas of knowledge and expertise.
• There is lots of information on this page and tags.
• Details shown appear to match the role of the person.
• I currently work with these people so know that the statements are fairly accurate.
• There is lots of information which suggests it is a better representation of the person.
• More info than other profiles
For question eight we collected 16 answers, five participants did not want to answer. Here are the
results:
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
From what I know of this colleague's work and experience the profile seems to be reasonably
complete, but I do not know enough about her work to say if the profile is absolutely
complete.
Please see response above
As above
Pamela's profile is complete because it includes not just contact details, but also what Pamela
is associated with, including organisations and related people.
as above
Diffrent elements have been completed - not just people tagging but also other features.
More info
I don't know this person and who has tagged them so I am not able to know if this is complete
or not.
The person has been tagged with various subjects which indicates a complete profile
There is a lot of information on here including some useful related documents
Each area has information in it.
most sections are complete with information in them and wesbites and documents
Shows more information than previous profiles.
as above
It appears to be complete, there is lots of information available.
As above
336
For question 10 we collected 16 answers, four participants did not want to answer. Here are the
results:
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
My knowledge of this colleague suggests that the profile is quite accurate.
This is a fairly comprensive profile for the individual with a number of links. I think it is still
not fully complete. As a colleague I think the indivudal has slighter wider scope of knowledge
than is captured here
No role indicated No base indicated
Inclues contact and organisation details
appears to be well filled out and contain a lot of knowledge about the person
Lots of information is included about this person.
more info
again, i am not familiar with this person so am unsure of its accuracy
I know this person
Generally accurate but not sure why tagged with ''adult information'' as does not do this area
of work. Also have no idea who Charles Birch is and he is down as a related tag person
whereas there is no mention of other development team members who would be related people
A lot of tags reflects lots of areas and expertise and knowledge and information. A wide
variety gives a more accurate desription of person rather than one item
This person has lots of tags but not enough and this may show that this profile may only be
slightly accurate.
Clear lists, but items in box below name appear rather random.
This seems to be a true reflection of the above persons roles and responsibilities within the
organisation.
Same as previous
More info thatn others but don't know without knowing the person
For question 11 we collected 16 answers, four participants did not want to answer. Here are the
results:
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
My knowledge of this colleague suggests that the profile is quite complete.However I may not
know sufficient about this colleague's knowledge and experience to say that the profile is fully
complete.
Please see answer above
As above
Includes not just contact details, but also what his associations, and specialisms are.
as above
There are lots of ways of learning mroe about this person's area of expertise and interests.
More info
more info available
tagged with various subjects and related doc
The profile appears to be as complete as I would expect it to be!
Once again every section is complete.
Most boxes are complete but not enough related documents
Gives an idea of person's role
as above
Same as previous
as above
337
Answers for question 22 about what triggers the use of SOBOLEO
• I have only explored Soboleo when prompted.
• If I have sourced a specific piece of information from somebody and feel they have expertise that others would benefit from
• build up business contacts
• To alert new information
• Nothing - I find that I get everything I need using both Insight and the internet as well as contact work, when working with or needing to contact other agencies
• i suppose when i wnat to shar info with other people that i know may use the system, i would consider tagging to make aware
• I dont really use tagging to tag other people as i feel that people should only be able to tag and de tag themselves. As if i have very little or no knowledge of a topic and then i ask someone about it i may assume they are an expert when really their knowledge may be very basic. If i tag them someone in the future, whom may also have basic knowledge and want more specialist knowledege, may waste their time contacting theperson who has been tagged. There fore a person should be able to judge and tag if they are an expert.
• to find out what people are doing and what they know. To share information more effectively.
• Training
• have not tagged anyone
• no real reason. It is only useful if kept up to date and if people only tagged others accurately and the other person was aware of this so they have the option to change the tags.
• If I would like to highlight that myself or a colleague have a specialisation in a particular area
• If I came across someone with an expertise that was not known this would trigger use of the system.
• When coming aross new areas of work, events or discovering the role of someone in the organisation
• By tagging can locate information and expertise quickly and more tags on profile indicates the graeter expertise.
• I don't know.
• N/A
• Wanting to share with others useful information and linked people.
• As part of my training
• If someone has been helpful with regards to a particular topic then it is important for others to know that.
• Only used when required by this process as don't find it useful
Answers for question 21 about gardening/editing activities
We also collected 13 examples from the 17 users who answered the question positively. Three did not answer at all and seven answered negatively. The following list shows the 13 examples:
• Yes tidied up some of the sections and reallocated especially relating to more areas of responsibility
• Streamlining topics
• Homework follow up exercise from January training session
• general clearing up of my progile and adding or taking away preferences, wrongly related people
• I have moved topics from latest topics to a topis already inteh main list and I have created topics in the main list.
• Adding comments to others profiles - done a while ago unsure of whom or where.
• I had a go at gardening but didn't really understand it.
• removing erroneous tags adding a document
• created topics, added sub topics, labels etc.
• I have edited and added labels, descriptions and chatted on system.
• I can't remember specific examples.
• Only as part of my training.
• as specified on the activity sheet given and on training
One user also stated that a different situation applied to him or her; the example given by this user is “Homework follow up exercise from January training session”. Finally, the following list shows the seven explanations from those who answered question 21 negatively:
• I think that I have failed to engage with the concept of 'gardening', that is to say that I do not properly understand it. It feels a little like tidying someone else's garden rather than my own.
• lack of time
• no time
• The system was not working
• I find the system confusing and I have not had the time to play with the system.
• System was not responding at the time
• Unfortunately when I tried this the system was not fully running, but I will try again now the problems have been resolved.
Additional answers for questions 1–12
For question one we collected seven answers; one participant did not want to answer. Here are the results:
• Profile A accurately says who the person is and who they work for.
• slightly agree - i feel the profile could have more to it
• Because the person may not hav ebeen fully tagged with all their capabilities, only a small part of them and these may or may not actuall ybe true as anyone can tag anyone else without the other person knowing.
• It shows me how to contact the person and where they work
• Details of where the person works are in the profile and how she can be contacted.
• The main thing I associate Vikki with is Leeds Learning Links - Leeds Foundation Learning is a bit broader than this but still accurate.
• Information it shows looks accurate, eg email address
For question two we collected three answers; one participant did not want to answer. Here are the results:
• Profile A does not include related documents or acitivity tags which may enhance the completeness of the profile.
• as above
• It gives basic details such as where the person works but does not give any in depth info such as activity Tags but does indicate that it would be a good person to contact for FL
For question three we collected seven answers; three participants did not want to answer. Here are the results:
• It provides some basic information and a starting point but is not complete
• Profile A is useful, because it not only says who the person is, but who they work for, and other related people who may also be useful to contact in relation to person A.
• It is useful to know what others can do and who else you can turn to for specialist suppor tif they have been tagged accurately. If they have not been tagged accurately, then this is not useful at all.
• It gives me a contact for FL
• I can see a contact email for the person if i need to get in contact. I can see what is her area of expertise/specialism or area she is interested in. I can also use her profile to see others she may be related to, who also may be able to help me.
• I agree it is useful because if I didn't know who this person was it would give me a rough idea of her role.
• Could use it to contact Vikky and start to appreciate her role as linked to Michelle Peacock on the REAL project. Thus giving more than just her name. Contains information that might be useful professionally.
For question four we collected three answers; two participants did not want to answer. Here are the results:
• The profile only says who Gavin is, but does not clearly say who he works for, aside from 'igen group'
• it seems that the profile could be filled out a bit more accurately as there are no documents attached to the profile
• informs me of the person's job role
For question five we collected two answers; two participants did not want to answer. Here are the results:
• as above
• contact details but no real context
For question six we collected four answers; four participants did not want to answer. Here are the results:
• Information is very limited but it does at least provide a name and a link
• The profile is not complete because it is not clear which agency gavin works for - there are no activity tags or supporting documents to clarify Gavin's role any further.
• I would contact the person if I had a query regarding functional skills in Maths
• Slightly agree as it gives some information about who he is and what he does but not a full picture.
For question seven we collected 16 answers; four participants did not want to answer. Here are the results:
• From what I know of this colleague's work and experience the profile seems to be quite accurate, but I do not know enough about her work to say if the profile is completely accurate.
• I think this profile is much more comprehensive than A and B and provides a good insight into the individual their expertise and links. I currently line manage the individual and therefore feel it is a fair represnetation of their role. There are one of two addtions that could still be made
• No mention of role or base
• The profile says who pamela is, including organisation, and email contact details.
• this profile has a lot more info available, and appears to be comprehensively complete
• Because there is more information in person profile C
• more info here as long as it is accurate.
• as i am not familiar with this person I do not know how accurate the information is
• Details of the person's experience and experise are in the profile which is helpful if you have a query about this area of work.
• I know Pamela is an LDD specialist within the Northumberland team and links to Tyne Met College. I dont know about every aspect of her work so this is the only reason I didnt put Strongly Agree
• Ther profile has lots of information showing a wide variety of areas of knowledge and expertise.
• There are lots of information on this page and tags
• Details shown appear to match the role of the person.
• I currently work with these people so know that the statements are fairly accurate.
• There is lots of information which suggests it is a better representation of th person
• More info than other profiles
For question eight we collected 16 answers; five participants did not want to answer. Here are the results:
• From what I know of this colleague's work and experience the profile seems to be reasonably complete, but I do not know enough about her work to say if the profile is absolutely complete.
• Please see response above
• As above
• Pamela's profile is complete because it includes not just contact details, but also what Pamela is associated with, including organisations and related people.
• as above
• Diffrent elements have been completed - not just people tagging but also other features.
• More info
• I don't know this person and who has tagged them so I am not able to know if this is complete or not.
• The person has been tagged with various subjects which indicates a complete profile
• There is a lot of information on here including some useful related documents
• Each area has information in it.
• most sections are complete with information in them and wesbites and documents
• Shows more information than previous profiles.
• as above
• It appears to be complete, there is lots of information available.
• As above
For question nine we collected 16 answers; five participants did not want to answer. Here are the results:
• This colleague has listed some specific knowledge and experience which may be useful to me in my own work in that it covers gaps in my own knowledge.
• Please see response above
• As above
• Pamela's profile is useful because it profiles details of related people, activity tags and what pamela is associated with.
• as there is more info available you can gain a better picture of what the person is like and where their work interests may lie. you can then use this to help with knowledge sharing etc
• It is more useful becaue you have more information to draw on.
• more info
• There is more info so must be more useful
• I would find this usefull if I were to search for a subject that she has been tagged in
• This gives a good idea of Pamela's specialist knowledge
• The profile is useful as it has all areas completed and resources to access.
• There is a wider variety of information that can be used in documents and websites
• Gives more information that could be used in different ways. Direct contact, viewing attached documents etc.
• as above
• Due to the extent of information you have a good chance of finding what you are looking for with this profile.
• As above
For question 10 we collected 16 answers; four participants did not want to answer. Here are the results:
• My knowledge of this colleague suggests that the profile is quite accurate.
• This is a fairly comprensive profile for the individual with a number of links. I think it is still not fully complete. As a colleague I think the indivudal has slighter wider scope of knowledge than is captured here
• No role indicated No base indicated
• Inclues contact and organisation details
• appears to be well filled out and contain a lot of knowledge about the person
• Lots of information is included about this person.
• more info
• again, i am not familiar with this person so am unsure of its accuracy
• I know this person
• Generally accurate but not sure why tagged with ''adult information'' as does not do this area of work. Also have no idea who Charles Birch is and he is down as a related tag person whereas there is no mention of other development team members who would be related people
• A lot of tags reflects lots of areas and expertise and knowledge and information. A wide variety gives a more accurate desription of person rather than one item
• This person has lots of tags but not enough and this may show that this profile may only be slightly accurate.
• Clear lists, but items in box below name appear rather random.
• This seems to be a true reflection of the above persons roles and responsibilities within the organisation.
• Same as previous
• More info thatn others but don't know without knowing the person
For question 11 we collected 16 answers; four participants did not want to answer. Here are the results:
• My knowledge of this colleague suggests that the profile is quite complete. However I may not know sufficient about this colleague's knowledge and experience to say that the profile is fully complete.
• Please see answer above
• As above
• Includes not just contact details, but also what his associations, and specialisms are.
• as above
• There are lots of ways of learning mroe about this person's area of expertise and interests.
• More info
• more info available
• tagged with various subjects and related doc
• The profile appears to be as complete as I would expect it to be!
• Once again every section is complete.
• Most boxes are complete but not enough related documents
• Gives an idea of person's role
• as above
• Same as previous
• as above
For question 12 we collected 15 answers; five participants did not want to answer. Here are the results:
• This profile indicates that the colleague has some knowledge that I do not.
• Please see answer above
• As above
• The profile is useful, because each section is fully completed to show contacts, tags, specialisms and related contacts
• It is the most useful profile because it contains the most, varied information.
• More info
• more info available
• lots of subjects tagged
• Useful as you can see the areas that the person is involved in.
• It is useful as contains not only tags but website links.
• Yes could be useful as has lots of links.
• Recorded information does not make role clear.
• as above
• Same as previous
• as above
12.2 Data from Connexions Kent
12.5.7 Indicator alignment results
Mapping of Maturing Related Questions in Pre-usage Questionnaire and Post-usage
Questionnaire to GMI
• ID: I.2.3.3
• Level 1: Artefacts
• Level 2: Creation context and editing
• General Maturing Indicator (GMI): An artefact has been the subject of many discussions
• Level of Justification: validated by RepStudy
• SMI:
o A discussion/dialogue about a resource is continued
• Description of mapping:
o A continuous discussion about the artefact is implied.
• Questions in Pre-usage Questionnaire:
o 12. I discuss relevant resources with my colleagues
• Questions in Post-usage Questionnaire:
o 8. Discussing relevant resources with my colleagues
• ID: I.2.3.8
• Level 1: Artefacts
• Level 2: Creation context and editing
• General Maturing Indicator (GMI): An artefact was changed in type
• Level of Justification: validated by APStudy
• SMI:
o A Collection has been exported for clients or certain purposes
• Description of mapping:
o The demonstrator allows collections to be exported as PDFs, which means a change in type.
• Questions in Pre-usage Questionnaire: n/a
• Questions in Post-usage Questionnaire: n/a
• ID: I.3.10
• Level 1: Artefacts
• Level 2: Usage
• General Maturing Indicator (GMI): An artefact was changed
• Level of Justification: validated by APStudy
• SMIs:
o A new digital resource is added to a private collection
o A private collection has been removed by the owner
o A private collection has been renamed
o A resource has been deleted from a private collection
o A resource in a private collection has been renamed
o An existing digital resource is added to a private collection
o A new resource has been added to a shared collection
o An existing digital resource is added to a subscribed/shared collection
o A resource has been deleted from a shared/subscribed collection
o A resource in a shared/subscribed collection has been renamed
o A shared collection has been removed by the owner
o A shared collection has been restructured
o A shared/subscribed collection has been re-structured
o A shared/subscribed collection has been renamed
o A resource has been deleted from a shared/subscribed collection
o A shared collection has been removed by the owner
o A shared/subscribed collection has been re-structured
o A shared/subscribed collection has been renamed
o A resource in a shared/subscribed collection has been renamed
• Description of mapping:
o Either the resource in a collection was changed or the collection itself was renamed or deleted.
• Questions in Pre-usage Questionnaire:
o 6. I store relevant results in collections on my desktop or laptop
o 11. I maintain my private collections and continuously add materials
• Questions in Post-usage Questionnaire:
o 5. Storing relevant results in the ‘collections’
o 7. Maintaining private collections and continuously adding materials/resources
• ID: I.3.3
• Level 1: Artefacts
• Level 2: Usage
• General Maturing Indicator (GMI): An artefact was selected from a range of artefacts
• Level of Justification: validated by RepStudy
• SMIs:
o A resource is selected from a range of resources provided by search
o A resource of a private collection has been viewed
• Description of mapping:
o The user found an interesting resource either through the search functionality or in collections.
• Questions in Pre-usage Questionnaire:
o 1. I search for colleagues to ask for help
o 2. I search on the internet for relevant information
o 3. I search on my own desktop for relevant information
o 4. I search in other resources for relevant information (paper based copies…)
• Questions in Post-usage Questionnaire:
o 1. Searching for colleagues to ask for help
o 2. Searching on the internet for relevant information
o 3. Searching on my own desktop for relevant information
• ID: I.3.4
• Level 1: Artefacts
• Level 2: Usage
• General Maturing Indicator (GMI): An artefact became part of a collection of similar artefacts
• Level of Justification: validated by RepStudy
• SMIs:
o A resource has been associated with additional tags at a later stage
o A new resource has been added to a shared collection
o A resource has been added to more than one collection by different persons
• Description of mapping:
o Tagging multiple resources with the same tags aggregates the resources into sets of resources. By adding a resource to a specific collection, a certain similarity is assumed (content, author, target group, context).
• Questions in Pre-usage Questionnaire:
o 6. I store relevant results in collections on my desktop or laptop
o 16. My colleagues and I have a common taxonomy/classification for tagging (or labelling) resources
• Questions in Post-usage Questionnaire:
o 5. Storing relevant results in the ‘collections’
o 11. Creating a common taxonomy/classification for tagging (or labelling) resources
• ID: I.3.6
• Level 1: Artefacts
• Level 2: Usage
• General Maturing Indicator (GMI): An artefact is referred to by another artefact
• Level of Justification: validated by RepStudy
• SMIs:
o A new digital resource is added to a private collection
o A user has started a discussion about a collection
o A user has started a discussion about a digital resource
o A new resource has been added to a shared collection
o An existing digital resource is added to a subscribed/shared collection
• Description of mapping:
o References are built either directly in discussions or indirectly by adding resources to private or shared collections.
• Questions in Pre-usage Questionnaire:
o 5. I take individual notes that I revisit at later points in time
o 6. I store relevant results in collections on my desktop or laptop
o 11. I maintain my private collections and continuously add materials
o 16. My colleagues and I have a common taxonomy/classification for tagging (or labelling) resources
• Questions in Post-usage Questionnaire:
o 4. Taking individual notes that I revisit later
o 5. Storing relevant results in the ‘collections’
o 7. Maintaining private collections and continuously adding materials/resources
o 11. Creating a common taxonomy/classification for tagging (or labelling) resources
• ID: I.3.9
• Level 1: Artefacts
• Level 2: Usage
• General Maturing Indicator (GMI): An artefact has been used by an individual
• Level of Justification: validated by RepStudy
• SMIs:
o A resource of a private collection has been viewed
o A high rated resource has been opened
o A shared collection is subscribed by many different users
• Description of mapping:
o Opening or viewing a resource is understood as using it.
• Questions in Pre-usage Questionnaire:
o 1. I search for colleagues to ask for help
o 2. I search on the internet for relevant information
o 3. I search on my own desktop for relevant information
o 4. I search in other resources for relevant information (paper based copies…)
o 6. I store relevant results in collections on my desktop or laptop
• Questions in Post-usage Questionnaire:
o 1. Searching for colleagues to ask for help
o 2. Searching on the internet for relevant information
o 3. Searching on my own desktop for relevant information
o 5. Storing relevant results in the ‘collections’
• ID: I.4.3
• Level 1: Artefacts
• Level 2: Rating & legitimation
• General Maturing Indicator (GMI): An artefact has become part of a guideline or has become standard
• Level of Justification: validated by RepStudy
• SMI:
o A Collection has been exported for clients or certain purposes
• Description of mapping:
o If a collection is exported to PDF, this means the collection is considered to be useful for a certain target group as a guideline or reference text.
• Questions in Pre-usage Questionnaire: n/a
• Questions in Post-usage Questionnaire: n/a
• ID: I.4.6
• Level 1: Artefacts
• Level 2: Rating & legitimation
• General Maturing Indicator (GMI): An artefact has been assessed by an individual
• Level of Justification: validated by RepStudy
• SMIs:
o A resource has been rated by a highly reputable person
o A resource has been rated by an individual
o A rating of a resource has been changed to a higher rating
o A rating of a resource has been changed to a lower value
o A shared collection has been unsubscribed by many different users
• Description of mapping:
o A rating is an assessment, so assessment can be observed through ratings. Moreover, where the artefact is a collection, people have unsubscribed from it after assessing it.
• Questions in Pre-usage Questionnaire:
o 9. I make relevance judgements for digital documents in order to highlight the most interesting resources and find them at a later date
o 10. I make relevance judgements for paper-based resources in order to highlight the most interesting resources and find them at a later date
• Questions in Post-usage Questionnaire:
o xxx
• ID: II.1.3
• Level 1: Individual capabilities
• Level 2: Individual activities
• General Maturing Indicator (GMI): An individual has contributed to a discussion
• Level of Justification: validated by RepStudy
• SMI:
o A discussion/dialogue about a resource is continued
• Description of mapping:
o Continuing a discussion is contributing to it.
• Questions in Pre-usage Questionnaire:
o 12. I discuss relevant resources with my colleagues
• Questions in Post-usage Questionnaire:
o 8. Discussing relevant resources with my colleagues
• ID: II.1.5
• Level 1: Individual capabilities
• Level 2: Individual activities
• General Maturing Indicator (GMI): An individual has significant professional experience
• Level of Justification: validated by RepStudy
• SMIs:
o A person creates many new shared tags related to a particular topic
o A person creates many shared collections related to a particular topic
o Many people subscribe to collections from a certain person
• Description of mapping:
o Experience can manifest itself in providing a lot of information to others.
• Questions in Pre-usage Questionnaire:
o 7. I add keywords or tags to my digital resources in order to find them at a later date
o 8. I add keywords or tags to my paper-based resources in order to find them again at a later date
o 13. I share my private digital collections with colleagues
o 14. I share my private paper-based collections with colleagues
o 15. I share my private notes with colleagues
o 16. My colleagues and I have a common taxonomy/classification for tagging (or labelling) resources
o 17. My colleagues and I maintain common digital collections of information materials
• Questions in Post-usage Questionnaire:
o 6. Adding keywords or tags to my resources in order to find them later
o 9. Sharing private digital collections with colleagues
o 10. Sharing my private notes with colleagues
o 11. Creating a common taxonomy/classification for tagging (or labelling) resources
o 12. Maintaining common digital collections of information and materials with colleagues
• ID: II.3.1
• Level 1: Individual capabilities
• Level 2: Individual - group
• General Maturing Indicator (GMI): An individual has a central role within a social network
• Level of Justification: validated by RepStudy
• SMIs:
o Many people contribute to collections from a certain person
o Many people subscribe to collections from a certain person
• Description of mapping:
o The social network considered here is the network of people who exchange knowledge artefacts and knowledge via particular collections of a specific person.
• Questions in Pre-usage Questionnaire:
o 13. I share my private digital collections with colleagues
o 14. I share my private paper-based collections with colleagues
o 15. I share my private notes with colleagues
o 17. My colleagues and I maintain common digital collections of information materials
• Questions in Post-usage Questionnaire:
o 9. Sharing private digital collections with colleagues
o 10. Sharing my private notes with colleagues
o 12. Maintaining common digital collections of information and materials with colleagues
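Each alignment entry above follows the same regular structure: an ID, two levels, a General Maturing Indicator, a level of justification, one or more Specific Maturing Indicators, and the mapped pre- and post-usage questionnaire items. As a purely illustrative sketch (the class and field names below are our own invention, not part of any MATURE demonstrator software), such an entry could be captured in a small data structure:

```python
# Illustrative only: models one indicator-alignment entry, using I.2.3.3 from
# the listing above as example data. Names are invented for this sketch.
from dataclasses import dataclass, field
from typing import List


@dataclass
class IndicatorMapping:
    """One General Maturing Indicator (GMI) together with its Specific
    Maturing Indicators (SMIs) and the questionnaire items mapped to it."""
    id: str
    level1: str
    level2: str
    gmi: str
    justification: str
    smis: List[str]
    pre_usage_questions: List[str] = field(default_factory=list)
    post_usage_questions: List[str] = field(default_factory=list)


entry = IndicatorMapping(
    id="I.2.3.3",
    level1="Artefacts",
    level2="Creation context and editing",
    gmi="An artefact has been the subject of many discussions",
    justification="validated by RepStudy",
    smis=["A discussion/dialogue about a resource is continued"],
    pre_usage_questions=["12. I discuss relevant resources with my colleagues"],
    post_usage_questions=["8. Discussing relevant resources with my colleagues"],
)
```

Collecting all entries in such a structure would, for instance, allow the pre- and post-usage questionnaire items belonging to any GMI to be looked up programmatically when aligning questionnaire results with the indicators.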
Appendix: Mapping of Focus Group questions, hypotheses and Knowledge Maturing Indicators

Focus Group questions
(The hypothesis associated with each question, from the Using Connexions Kent Instantiation…, is shown in square brackets.)

1. Give one example of how you have used the demonstrator.

2. How was this different from the way you would have completed this task without the support of the demonstrator?

3. Do you think that the demonstrator has helped you to think more creatively about:
a. How LMI could be used? Definitely Yes / Yes / Unsure / No / Definitely no
b. How LMI could be integrated more into IAG sessions? Definitely Yes / Yes / Unsure / No / Definitely no
Please say more about your answer:
[Phase Ia - leads to a more effective generation of ideas]

4. Thinking about the ‘search’ tool in the demonstrator, has this tool been useful or not? Definitely Yes / Yes / Unsure / No / Definitely no
Please say more about your answer:
[Phase Ib - makes it easier for a person to identify emerging knowledge]

5. Has using the ‘search’ tool in the demonstrator made it easier or harder to:
a. Locate LMI? Definitely Yes / Yes / Unsure / No / Definitely no
b. Identify new sources? Definitely Yes / Yes / Unsure / No / Definitely no
Please say more about your answer:
[Phase Ib - makes it easier for a person to identify emerging knowledge]

6. Thinking about creating and using ‘collections’ in the demonstrator, has this tool been useful or not? Definitely Yes / Yes / Unsure / No / Definitely no
Please say more about your answer (i.e. did this relate to tagging/labelling of sources, organisation of sources, accessibility of sources, commitment to creating a collection).
[Phase Ib - makes it easier for a person to identify emerging knowledge]

7. Has the ‘collections’ tool made it harder or easier to:
a. Collect LMI? Definitely Yes / Yes / Unsure / No / Definitely no
b. Collate LMI? Definitely Yes / Yes / Unsure / No / Definitely no
c. Identify new LMI? Definitely Yes / Yes / Unsure / No / Definitely no
Please provide some more information about your answers to this question:
[Phase Ib - makes it easier for a person to identify emerging knowledge]

8. Have you created a collection with a colleague/s? Definitely Yes / Yes / Unsure / No / Definitely no
Please explain (i.e. how was this discussed and agreed; what was it created for: a joint project, interest etc.).
[Phase II - leads to a more effective distribution of knowledge]

9. Have you shared your collections and/or subscribed to collections created by other colleagues? Definitely Yes / Yes / Unsure / No / Definitely no
Please explain (i.e. with colleagues in the same office or different offices within the organisation).
[Phase II - leads to a more effective sharing of knowledge]

10. Have you used or shared the collections with, for example, other colleagues, careers co-ordinators in schools or pupils/students? Definitely Yes / Yes / Unsure / No / Definitely no
Please give an example.
[Phase II - leads to a more effective sharing of knowledge]

11. As a result of using the demonstrator, do you think that you have more awareness of what LMI your colleagues are interested in and/or researching? Definitely Yes / Yes / Unsure / No / Definitely no
Please explain your answer.
[Phase II - increases awareness of activities/topics in other communities (of practice)]

12. Thinking about how you have used information from the ‘search’ and ‘collection’ tools to create information/pages in the wiki, how has the demonstrator helped you to develop LMI for different purposes, such as presentations, sessions with students, information/leaflets, or other purposes (please specify)?

13. Again thinking about the creation of information/pages in the wiki, have you been able to combine information from a range of sources from the ‘search’ and/or ‘collection’ tool and present it in different formats (i.e. created more information sheets, handouts, presentations etc.)?

14. Do you feel more confident in your ability to:
a. Identify new knowledge on the labour market? Definitely Yes / Yes / Unsure / No / Definitely no
b. Assess the quality or reliability of labour market information and sources? Definitely Yes / Yes / Unsure / No / Definitely no
Please explain your answer further.
[Questions 12-14: Phase III - leads to a more effective formalisation of knowledge; Phase II-V - leads to increased exchange/creation/use of boundary objects; Phase V - makes it easier to identify solid/reliable/sedimented knowledge]

15. Do you think that by using the demonstrator you have increased your knowledge of, for example, a particular topic, local labour market, educational courses and qualifications? Definitely Yes / Yes / Unsure / No / Definitely no
Please explain your answer further or give an example.

16. Do you feel more motivated to develop your understanding of LMI for IAG by engaging in information searching, collecting, collating and tagging? Definitely Yes / Yes / Unsure / No / Definitely no
Please explain your answer further.
[Guidance/using prototype - Guidance: using prototype increases motivation to engage in Knowledge Maturing activities]

17. Overall, do you think that the demonstrator has been successful in:
a. Supporting the collection and development of LMI for practice? Definitely Yes / Yes / Unsure / No / Definitely no
b. Increasing efficiency of researching the labour market? Definitely Yes / Yes / Unsure / No / Definitely no
c. Reducing individual effort in researching the labour market? Definitely Yes / Yes / Unsure / No / Definitely no
d. Retaining and developing organisational knowledge? Definitely Yes / Yes / Unsure / No / Definitely no
[Using prototype - reduces time to proficiency, increases retaining of existing knowledge]

18. Are there any further comments or remarks you would like to make about the MATURE demonstrator?
12.5.8 Evaluation data
Pre-usage Questionnaire
Post-usage Questionnaire
12.6
Data from Structuralia
12.6.1 Indicator alignment results
Appendix: Mapping of Maturing Related Questions in the Structuralia Knowledge Maturing
Questionnaire to GMI
Part I: Current practices of knowledge creation and knowledge sharing
Part II: Perceived need for improvement
ID I.2.3.3
Level 1 Artefacts
Level 2 Creation context and editing
General Maturing Indicator (GMI) An artefact has been the subject of many discussions
Level of Justification validated by RepStudy
SMI
o A discussion/dialogue about a resource is continued
Description of mapping:
o A continuous discussion about the artefact is implied.
Questions in Part I
o 12. Discussing with my colleagues about relevant resources
Questions in Part II
o 8. Discussing with my colleagues about relevant resources
ID I.2.3.8
Level 1 Artefacts
Level 2 Creation context and editing
General Maturing Indicator (GMI) An artefact was changed in type
Level of Justification validated by APStudy
SMI
o A Collection has been exported for clients or certain purposes
Description of mapping:
o The demonstrator allows collections to be exported to PDFs, which means a change in type.
Questions in Part I
n/a
Questions in Part II
n/a
ID I.3.10
Level 1 Artefacts
Level 2 Usage
General Maturing Indicator (GMI) An artefact was changed
Level of Justification validated by APStudy
SMIs
o A new digital resource is added to a private collection
o A private collection has been removed by the owner
o A private collection has been renamed
o A resource has been deleted from a private collection
o A resource in a private collection has been renamed
o An existing digital resource is added to a private collection
o A new resource has been added to a shared collection
o An existing digital resource is added to a subscribed/shared collection
o A resource has been deleted from a shared/subscribed collection
o A resource in a shared/subscribed collection has been renamed
o A shared collection has been removed by the owner
o A shared collection has been restructured
o A shared/subscribed collection has been re-structured
o A shared/subscribed collection has been renamed
Description of mapping:
o Either the resource in a collection was changed or the collection itself was
renamed or deleted.
Questions in Part I
o Storing relevant results in collections on my desktop or laptop
o Maintaining private collections and continuously adding materials
Questions in Part II
o Storing relevant results in collections on my desktop or laptop
o Maintaining private collections and continuously adding materials
ID I.3.3
Level 1 Artefacts
Level 2 Usage
General Maturing Indicator (GMI) An artefact was selected from a range of artefacts
Level of Justification validated by RepStudy
SMIs
o A resource is selected from a range of resources provided by search
o A resource of a private collection has been viewed
Description of mapping:
o The user found an interesting resource either through the search functionality or in collections.
Questions in Part I
o Searching on the internet for relevant information
Questions in Part II
o Searching on the internet for relevant information
ID I.3.4
Level 1 Artefacts
Level 2 Usage
General Maturing Indicator (GMI) An artefact became part of a collection of similar
artefacts
Level of Justification validated by RepStudy
SMI
o A resource has been associated with additional tags at later stage
o A new resource has been added to a shared collection
o A resource has been added to more than one collection by different persons
Description of mapping:
o Tagging multiple resources with the same tags aggregates the resources into sets of resources. By adding a resource to a specific collection, a certain similarity is assumed (content, author, target group, context).
Questions in Part I
o Storing relevant results in collections on my desktop or laptop
o Creating a common taxonomy/classification for tagging (or labelling) resources
Questions in Part II
o
Storing relevant results in collections on my desktop or laptop
o
Creating a common taxonomy/classification for tagging (or labelling) resources
ID I.3.6
Level 1 Artefacts
Level 2 Usage
General Maturing Indicator (GMI) An artefact is referred to by another artefact
Level of Justification validated by RepStudy
SMIs
o A new digital resource is added to a private collection
o A user has started a discussion about a collection
o A user has started a discussion about a digital resource
o A new resource has been added to a shared collection
o An existing digital resource is added to a subscribed/shared collection
Description of mapping:
o References are built either directly in discussions or indirectly by adding resources to private or shared collections.
Questions in Part I
o Storing relevant results in collections on my desktop or laptop
o Maintaining private collections and continuously adding materials
o Creating a common taxonomy/classification for tagging (or labelling) resources
Questions in Part II
o Storing relevant results in collections on my desktop or laptop
o Maintaining private collections and continuously adding materials
o Creating a common taxonomy/classification for tagging (or labelling) resources
ID I.3.9
Level 1 Artefacts
Level 2 Usage
General Maturing Indicator (GMI) An artefact has been used by an individual
Level of Justification validated by RepStudy
SMIs
o A resource of a private collection has been viewed
o A high rated resource has been opened
o A shared collection is subscribed by many different users
Description of mapping:
o Opening or viewing a resource is understood as using it.
Questions in Part I
o Searching on the internet for relevant information
o Storing relevant results in collections on my desktop or laptop
Questions in Part II
o Searching on the internet for relevant information
o Storing relevant results in collections on my desktop or laptop
ID I.4.3
Level 1 Artefacts
Level 2 Rating & legitimation
General Maturing Indicator (GMI) An artefact has become part of a guideline or has
become standard
Level of Justification validated by RepStudy
SMI
o A Collection has been exported for clients or certain purposes
Description of mapping:
o If a collection is exported to PDF, the collection is considered to be useful for a certain target group as a guideline or reference text.
Questions in Part I
n/a
Questions in Part II
n/a
ID I.4.6
Level 1 Artefacts
Level 2 Rating & legitimation
General Maturing Indicator (GMI) An artefact has been assessed by an individual
Level of Justification validated by RepStudy
SMIs
o A resource has been rated by a highly reputable person
o A resource has been rated by an individual
o A rating of a resource has been changed to a higher rating
o A rating of a resource has been changed to a lower value
o A shared collection has been unsubscribed by many different users
Description of mapping:
o A rating is an assessment, so assessments can be observed through ratings. Moreover, where the artefact is a collection, people have unsubscribed from it after assessing it.
Questions in Part I
o Making relevance judgements for digital documents in order to highlight the most
interesting resources and find them at a later date
Questions in Part II
o Making relevance judgements for digital documents in order to highlight the most
interesting resources and find them at a later date
ID II.1.3
Level 1 Individual capabilities
Level 2 Individual activities
General Maturing Indicator (GMI) An individual has contributed to a discussion
Level of Justification validated by RepStudy
SMI
o A discussion/dialogue about a resource is continued
Description of mapping:
o Continuing a discussion is contributing to it.
Questions in Part I
o Discussing with my colleagues about relevant resources
Questions in Part II
o Discussing with my colleagues about relevant resources
ID II.1.5
Level 1 Individual capabilities
Level 2 Individual activities
General Maturing Indicator (GMI) An individual has significant professional experience
Level of Justification validated by RepStudy
SMIs
o A person creates many new shared tags related to a particular topic
o A person creates many shared collections related to a particular topic
o Many people subscribe to collections from a certain person
Description of mapping:
o Experience can manifest itself in providing a lot of information to others.
Questions in Part I
o Adding keywords or tags to my digital resources in order to find them at a later
date
o Sharing private digital collections with colleagues
o Sharing my private notes with colleagues
o Creating a common taxonomy/classification for tagging (or labelling) resources
o Maintaining common digital collections of information and materials with
colleagues
Questions in Part II
o Adding keywords or tags to my digital resources in order to find them at a later
date
o Sharing private digital collections with colleagues
o Sharing my private notes with colleagues
o Creating a common taxonomy/classification for tagging (or labelling) resources
o Maintaining common digital collections of information and materials with
colleagues
ID II.3.1
Level 1 Individual capabilities
Level 2 Individual - group
General Maturing Indicator (GMI) An individual has a central role within a social network
Level of Justification validated by RepStudy
SMIs
o Many people contribute to collections from a certain person
o Many people subscribe to collections from a certain person
Description of mapping:
o The social network considered here is the network of people who exchange
knowledge artefacts and knowledge via particular collections of a specific person.
Questions in Part I
o Sharing private digital collections with colleagues
o Sharing my private notes with colleagues
o Maintaining common digital collections of information and materials with
colleagues
Questions in Part II
o Sharing private digital collections with colleagues
o Sharing my private notes with colleagues
o Maintaining common digital collections of information and materials with
colleagues
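The mapping entries above all share one structure: an ID, two levels, a General Maturing Indicator, a level of justification, a set of Specific Maturing Indicators, and the questionnaire items in Parts I and II that provide evidence for it. As an illustrative sketch only (the class and field names below are our own, not part of any MATURE schema or tool), such an entry could be represented and queried like this:

```python
from dataclasses import dataclass

@dataclass
class MaturingIndicatorMapping:
    """One entry of the GMI-to-questionnaire mapping (field names are illustrative)."""
    indicator_id: str
    level1: str
    level2: str
    gmi: str                 # General Maturing Indicator
    justification: str       # level of justification
    smis: list               # Specific Maturing Indicators
    part_i_questions: list   # Part I: current practices
    part_ii_questions: list  # Part II: perceived need for improvement

# Entry II.3.1, transcribed from the mapping above
entry = MaturingIndicatorMapping(
    indicator_id="II.3.1",
    level1="Individual capabilities",
    level2="Individual - group",
    gmi="An individual has a central role within a social network",
    justification="validated by RepStudy",
    smis=[
        "Many people contribute to collections from a certain person",
        "Many people subscribe to collections from a certain person",
    ],
    part_i_questions=[
        "Sharing private digital collections with colleagues",
        "Sharing my private notes with colleagues",
        "Maintaining common digital collections of information and materials with colleagues",
    ],
    part_ii_questions=[
        "Sharing private digital collections with colleagues",
        "Sharing my private notes with colleagues",
        "Maintaining common digital collections of information and materials with colleagues",
    ],
)

def questions_for(entries, gmi_substring):
    """Find the Part I questionnaire items that provide evidence for a given GMI."""
    return [q for e in entries if gmi_substring in e.gmi for q in e.part_i_questions]
```

Representing the appendix this way makes the alignment between log-based SMIs and self-report questionnaire items checkable mechanically, e.g. to list which items back a given indicator.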
12.6.2 Evaluation data
Appendix: Structuralia Knowledge Maturing Questionnaire
Questionnaire STRUCTURALIA
Name: ____________________
Please indicate for each of the following activities to which extent these are typical
for your own work.
Part I: Current practices of knowledge creation and knowledge sharing
(Response options: Do not reply / Very typical / Typical / Rather untypical / Untypical)
1. I search on the internet for relevant information ☐ ☐ ☐ ☐ ☐
2. I store relevant results in collections on my desktop or laptop ☐ ☐ ☐ ☐ ☐
3. I add keywords or tags to my digital resources in order to find them at a later date ☐ ☐ ☐ ☐ ☐
4. I make relevance judgements for digital documents in order to highlight the most interesting resources and find them at a later date ☐ ☐ ☐ ☐ ☐
5. I maintain my private collections and continuously add materials ☐ ☐ ☐ ☐ ☐
6. I discuss relevant resources with my colleagues ☐ ☐ ☐ ☐ ☐
7. I share my private digital collections with colleagues ☐ ☐ ☐ ☐ ☐
8. I share my private notes with colleagues ☐ ☐ ☐ ☐ ☐
9. My colleagues and I have a common taxonomy/classification for tagging (or labelling) resources ☐ ☐ ☐ ☐ ☐
10. My colleagues and I maintain common digital collections of information materials ☐ ☐ ☐ ☐ ☐
Additional comments (overall system): ____________________________________________________
In the following, a couple of activities are described that are intended to be
supported with the MATURE demonstrator tool.
Please indicate for each of these activities whether you think the
demonstrator supports them well or whether improvements are needed.
Part II: Perceived need for improvement
(Response options: Do not reply / Works well / Needs some improvement / Needs a lot of improvement / Not crucial for my work)
1. Searching on the internet for relevant information ☐ ☐ ☐ ☐ ☐
2. Storing relevant results in collections on my desktop or laptop ☐ ☐ ☐ ☐ ☐
3. Adding keywords or tags to my digital resources in order to find them at a later date ☐ ☐ ☐ ☐ ☐
4. Making relevance judgements for digital documents in order to highlight the most interesting resources and find them at a later date ☐ ☐ ☐ ☐ ☐
5. Maintaining private collections and continuously adding materials ☐ ☐ ☐ ☐ ☐
6. Discussing with my colleagues about relevant resources ☐ ☐ ☐ ☐ ☐
7. Sharing private digital collections with colleagues ☐ ☐ ☐ ☐ ☐
8. Sharing my private notes with colleagues ☐ ☐ ☐ ☐ ☐
9. Creating a common taxonomy/classification for tagging (or labelling) resources ☐ ☐ ☐ ☐ ☐
10. Maintaining common digital collections of information and materials with colleagues ☐ ☐ ☐ ☐ ☐
Question: Any other activity?
___________________________________________________________
12.2.1.1
Appendix: Usability Questionnaire I
Feedback for software usability
You will be asked some questions about the usability of the software. First, we ask about the overall system, and then the same questions for each individual widget. It should take about 10 to 15 minutes to fill out the questionnaire.
Answers:
0 = Do not reply, 1 = Strongly disagree, 2 = Disagree, 3 = Somewhat agree, 4 = Agree, 5 = Strongly agree
*Compulsory
1. Overall System
1.1 I think that I would like to use this system frequently *
1.2 I found the system unnecessarily complex *
1.3 I thought the system was easy to use *
1.4 I think that I would need the support of a technical person to be able to use this system *
1.5 I found the various functions in this system were well integrated *
1.6 I thought there was too much inconsistency in this system *
1.7 I would imagine that most people would learn to use this system very quickly *
1.8 I found the system very cumbersome to use *
1.9 I felt very confident using the system *
1.10 I needed to learn a lot of things before I could get going with this system *
Additional comments?
2. Collection Widget
Please provide your opinion about the Collection widget!
2.1 I think that I would like to use the collection widget frequently *
2.2 I found the collection widget unnecessarily complex *
2.3 I thought the collection widget was easy to use *
2.4 I think that I would need the support of a technical person to be able to use the collection widget *
2.5 I found the various functions in the collection widget were well integrated *
2.6 I thought there was too much inconsistency in the collection widget *
2.7 I would imagine that most people would learn to use the collection widget very quickly *
2.8 I found the collection widget very cumbersome to use *
2.9 I felt very confident using the collection widget *
2.10 I needed to learn a lot of things before I could get going with the collection widget *
Additional comments?
3. Search Widget
Please provide your opinion about the Search widget!
3.1 I think that I would like to use the search widget frequently *
3.2 I found the search widget unnecessarily complex *
3.3 I thought the search widget was easy to use *
3.4 I think that I would need the support of a technical person to be able to use the search widget *
3.5 I found the various functions in the search widget were well integrated *
3.6 I thought there was too much inconsistency in the search widget *
3.7 I would imagine that most people would learn to use the search widget very quickly *
3.8 I found the search widget very cumbersome to use *
3.9 I felt very confident using the search widget *
3.10 I needed to learn a lot of things before I could get going with the search widget *
Additional comments?
4. Tag Editor Widget
Please provide your opinion about the Tag-Editor widget!
4.1 I think that I would like to use the Tag-Editor frequently *
4.2 I found the Tag-Editor unnecessarily complex *
4.3 I thought the Tag-Editor was easy to use *
4.4 I think that I would need the support of a technical person to be able to use the Tag-Editor *
4.5 I found the various functions in the Tag-Editor were well integrated *
4.6 I thought there was too much inconsistency in the Tag-Editor *
4.7 I would imagine that most people would learn to use the Tag-Editor very quickly *
4.8 I found the Tag-Editor very cumbersome to use *
4.9 I felt very confident using the Tag-Editor *
4.10 I needed to learn a lot of things before I could get going with the Tag-Editor *
Additional comments?
12.2.1.2
Appendix: Usability Questionnaire II
5. Tag Cloud Widget
Please provide your opinion about the Tag Cloud-Widget!
5.1 I think that I would like to use the Tag Cloud-Widget frequently *
5.2 I found the Tag Cloud-Widget unnecessarily complex *
5.3 I thought the Tag Cloud-Widget was easy to use *
5.4 I think that I would need the support of a technical person to be able to use the Tag Cloud-Widget *
5.5 I found the various functions in the Tag Cloud-Widget were well integrated *
5.6 I thought there was too much inconsistency in the Tag Cloud-Widget *
5.7 I would imagine that most people would learn to use the Tag Cloud-Widget very quickly *
5.8 I found the Tag Cloud-Widget very cumbersome to use *
5.9 I felt very confident using the Tag Cloud-Widget *
5.10 I needed to learn a lot of things before I could get going with the Tag Cloud-Widget *
Additional comments?
6. Tagging Widget
Please provide your opinion about the Tagging Widget!
6.1 I think that I would like to use the Tagging Widget frequently *
6.2 I found the Tagging Widget unnecessarily complex *
6.3 I thought the Tagging Widget was easy to use *
6.4 I think that I would need the support of a technical person to be able to use the Tagging Widget *
6.5 I found the various functions in the Tagging Widget were well integrated *
6.6 I thought there was too much inconsistency in the Tagging Widget *
6.7 I would imagine that most people would learn to use the Tagging Widget very quickly *
6.8 I found the Tagging Widget very cumbersome to use *
6.9 I felt very confident using the Tagging Widget *
6.10 I needed to learn a lot of things before I could get going with the Tagging Widget *
Additional comments?
7. MatureFox Firefox Plugin
Please provide your opinion about the MatureFox Firefox Plugin!
7.1 I think that I would like to use the Firefox plugin frequently *
7.2 I found the Firefox plugin unnecessarily complex *
7.3 I thought the Firefox plugin was easy to use *
7.4 I think that I would need the support of a technical person to be able to use the Firefox plugin *
7.5 I found the various functions in the Firefox plugin were well integrated *
7.6 I thought there was too much inconsistency in the Firefox plugin *
7.7 I would imagine that most people would learn to use the Firefox plugin very quickly *
7.8 I found the Firefox plugin very cumbersome to use *
7.9 I felt very confident using the Firefox plugin *
7.10 I needed to learn a lot of things before I could get going with the Firefox plugin *
Additional comments?
8. Demographic information
This is the last page; as is typical, we collect some demographic data.
Age: <30 / 30-39 / 40-49 / 50-59 / 60 or older
Gender: Male / Female
12.7
Using the typology to provide examples of Learning Factors involved in
"Continuous Social Learning in Knowledge Networks"
As noted earlier (section 2.2.5), in addition to the innovative focus on KM phases and GMIs
prioritised in the Summative Evaluation, LTRI also wanted to offer an approach to taking a rich-textual view of the overarching goal of the project, namely facilitating "Continuous Social Learning in Knowledge Networks".
Knowledge Networks". The reason for this was that LTRI deemed it necessary to have a Summative
Evaluation checklist (called a typology) which in the main seeks to serve as an explanatory, analytical
frame as well as a starting point for discussion about attendant issues, rather than provide a definitive
map of the project. The case study of the MATURE people tagging demonstrator was used to
elaborate on our typology in a real work-based context (the BJET paper, see Appendix 1). The
Learning Factor nodes presented in section 2.2.5 are now used to analyse the reported conclusion
sections (and where interesting points were raised the other sections) of the six Summative Evaluation
studies. The approach is highly selective and not intended to be systematic; instead we hope you will
agree that the analysis below gives a rich-textual overview of the six Summative Evaluation studies,
but situated in a coherent conceptual typology. Within each branch of Learning Factors we now
provide at least one example taken from the six studies and where relevant a brief discussion. The
analysis presented in this section has been submitted to a high-quality international conference (Cook,
submitted).
2a.
individual self-efficacy (confidence and commitment)
General example from Study 5, section 8.5: “This topic [the general concept of Knowledge Maturing]
mattered a great deal to the participants (as an object of pressing concern) but, in the UK, it also had
deeper significance and meaning for partners as it was bound up with the sense of identity and
imagined futures for all those working in the career guidance field.”
Discussion: this quote illustrates that individual self-efficacy can be shaped by events beyond the
organisation.
2ai.
feedback
•
Example from Study 1, section 4.3.2: “As far as the “training” aspect was concerned, the tool
laid open few, but interesting “knowledge gaps”, i.e. the new colleague was able to learn about new
problems through using the tool – although the oral explanations of her predecessor and the experience
she had already gathered with her first matriculation cases covered about 70% of the problems that we
had hidden in our artificial cases.”
•
Discussion: Feedback from the tool did help, but verbal feedback and prior experience were
deemed more important. Thus a ‘blended’ approach seems to work here.
2aii.
support
•
Example from Study 3, section 6.2.1.3: “At this final workshop, although the careers advisers
had identified materials to load on to the system they did not feel confident in uploading the
materials.”
•
Discussion: Support can build confidence; in this example insufficient support was provided.
2aiii.
challenge
•
Example from Study 3, section 6.4.2: “The collections tool was found by all users to be
advantageous to their work, particularly in collecting, collating and identifying new LMI. Echoing
others comments, one user said that: “I really like the idea of sharing […] avoids duplication of work,
but in reality sharing maybe a challenge, as I have to attribute author, who updates information and
who takes credit?”. This raised issues around working in a culture with an organisational policy of
accreditation. Issues around ownership and intellectual property were debated.”
•
Example from Study 4, section 7.6: “Teacher, acting more like a tutor, is providing support
and guidance to the students leaving them to choose how something can be done. The mentioned
changes and developments represent challenge for the teachers, the students and as well for the whole
system, requiring some procedures of change management, specially aimed to address the general
resistance to change.”
• Discussion: the examples above present challenge negatively, or at least as an increase in workload. But if change is managed, this could be seen as a positive challenge.
2aiv.
value of the work
•
Example from Study 6, section 9.4: “The projects [UWAR] won three National Career ICT
Awards and are seen as representing outstanding practice, and the ICT developments are at the heart of
the reshaped service.”
•
Discussion: involvement in the MATURE project is seen as having a positive value and being
of value to changing career guidance practice.
2b.
acts of self-regulation
2bi.
competence (perceived self-efficacy, overlap with 2(a))
•
Example from Study 5, section 8.5: “Expressing and appropriating ideas: developing a greater
awareness of the issue of innovation, learning, development and Knowledge Maturing in careers
guidance through dialogical exchange.”
• Discussion: These refer to the Knowledge Maturing Phase Model phases Ia and Ib, but usefully set them in the context of careers guidance.
•
Example from Study 4, section 7.6: “…the discussion widget and the tag editor allowed users
to create a shared meaning and a common vocabulary, represented in a collaboratively created
ontology. Thus, it supports many important Knowledge Maturing activities, as "Find relevant digital
resources", "Keep up-to-date with organisation-related knowledge", "Familiarise oneself with new
information" … "Reorganise information at individual or organisational level", "Share and release
digital resources", "Communicate with people", and "Assess, verify and rate information" (cf.
Deliverable D1.2 and D2.3/D3.3).”
•
Discussion: Many of the widgets and tools described above are aimed at helping users build
up personal knowledge. This may be done through dialogue with others (and hence there is an overlap
with 2bii & 2diii).
2bii.
relatedness (sense of being a part of the activity)
•
Example from Study 4, section 7.6: “The use of the MATURE tools implies a change in the
relationship between students and teachers, where students take an active role, collaboratively
contributing to course materials.”
•
Example from Study 5, section 8.5: “These partnerships shared concern for the future of the
profession and Knowledge Maturing processes offered the prospect of contributing both to reshaping
of daily work activities and in helping shape the future of the profession. The importance of these
partnerships and the relevance of MATURE Knowledge Maturing processes were therefore significant
for partners’ professional identities, sense-making and imagined futures and the channels for sharing
knowledge was through overlapping and inter-locking personal networks, which were in part
facilitated by the MATURE project.”
•
Discussion: The notion that ‘MATURE Knowledge Maturing processes were therefore
significant for partners’ professional identities’ provides a potential vehicle for developing a sense of
being a part of the activity.
2biii.
acceptance (social approval)
•
Example from Study 5, section 8.5: “Distributing in communities: the dialogue with partners
resulted in shared understandings whereby partners became actively aware of new possibilities and
‘imagined futures’. These ideas were subsequently discussed with other individuals and organisations
within the broader community of interest of careers guidance.”
•
Discussion: This refers to the Knowledge Maturing Phase Model phase II, and puts the
concept into the context of careers guidance practice in the sense that the notion of ‘imagined futures’
provides a goal or discussion mechanism for potential social approval. This can be seen as a
precursor to 2di building connections (adding new people to the network so that there are resources
available when a learning need arises).
2c. cognitive load
2ci. intrinsic (inherent nature of the materials and learners’ prior knowledge)
•
Example from Study 2, section 5.3.12.2: “The main driver for this evaluation has been the
validation of key assumptions underlying our ontology maturing model (as described in D1.1) and the
SOBOLEO tool. The key idea behind these tools is that instead of expert groups specifying the
vocabulary and updating it periodically in larger time frames, every user of the system continuously
contributes to the evolution of the vocabulary by adding new tags, reusing them, and consolidating
them towards a shared vocabulary. The same applies not only to the vocabulary, but also to person
profiles, which are incrementally built by the users of the system. The success of such an evolutionary
development within everyday work processes depends on key assumptions that have been evaluated as
part of the Summative Evaluation. Results for the transition from phase Ia to Ib [expressing ideas to
appropriating ideas] are that individuals need to reuse their own tags so that they are not one-time
keywords, but rather expressions of the individual’s vocabulary. SMI2 data has shown that individuals
do use their own tags at a later stage. Likewise, person profiles are also not constructed in a single
step, but refined by the users of the system (SMI1). Thus it would seem that these assumptions hold in
this context (where tags are reused by the originator: SMI1 & SMI2).”
•
Discussion: the support given by the people tagging tool at Connexions Northumberland (UK)
for intrinsic individual cognitive load, in terms of the inherent nature of the materials and of fitting in
with users’ prior knowledge, appears to be good. Specifically, SMI2 (“A topic tag is reused
for annotation by the "inventor" of the topic tag”) data, which shows that individuals do use their own
tags at a later stage, is a direct indicator of the tool enabling users to build on their own prior
knowledge and thus assisting new learning that builds on prior (often intrinsic) experience.
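The tag-reuse indicators discussed here lend themselves to a simple log analysis. As a hedged illustration only (the event format and function names below are assumptions for the sketch, not the project’s actual SOBOLEO data model), reuse by a tag’s "inventor" (an SMI2-style signal) and uptake by other users (an SMI3-style signal, discussed under 2diii below) could be derived from a chronological tagging log like this:

```python
from collections import defaultdict

def tag_reuse_indicators(tag_events):
    """Classify tags by reuse pattern, given chronological (user, tag) events."""
    inventor = {}                     # tag -> first user who applied it
    applications = defaultdict(list)  # tag -> all users who applied it, in order
    for user, tag in tag_events:
        inventor.setdefault(tag, user)
        applications[tag].append(user)
    # SMI2-style: the tag's "inventor" applied it again at a later stage
    reused_by_inventor = {t for t, users in applications.items()
                          if users.count(inventor[t]) > 1}
    # SMI3-style: at least one other user took the tag up
    reused_by_others = {t for t, users in applications.items()
                        if any(u != inventor[t] for u in users)}
    return reused_by_inventor, reused_by_others

# Hypothetical log: "ann" invents and reuses "cv-advice"; "bob" takes it up
events = [("ann", "cv-advice"), ("ann", "cv-advice"),
          ("bob", "cv-advice"), ("bob", "apprenticeships")]
by_inventor, by_others = tag_reuse_indicators(events)
```

In this sketch, "cv-advice" would count towards both indicators, while "apprenticeships" (applied once, by one user) counts towards neither: the one-time keywords mentioned in the quote above are exactly the tags that appear in neither set.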
•
Example from Study 4, section 7.6: “According to the results, the users mostly used the
Instantiation for searching and viewing various resources thereby extending their knowledge.”
•
Discussion: another useful example of the tool allowing the learner at Structuralia (Spain) to
‘extend their knowledge’.
2cii. extraneous (improper instructional design)
•
Example from Study 3, section 6.2.1.2: “At this final workshop [at Connexions Kent, UK],
although the careers advisers had identified materials to load on to the system they did not feel
confident in uploading the materials. Only a few users had added materials since May, which others
took advantage of downloading, but the export function only provided links to the documents, rather
than a collation of materials as had been expected. Users had expected collections to be collated into a
document rather than a list of PDFs, but this was not technically possible and should have been
communicated to users. The PowerPoint preview was also not enabled because it would have been too
difficult to implement, but was expected by the users. They also found that the web search facility was
not working. Confidence in using the MATURE Firefox widget and dragging URLs into collections
was low, as users were intermittently unable to view others’ website ratings and tags. Users reported
that they were unsure the information would be stored, would be retrievable or that it could be shared
with colleagues. This happened as a result of the collections created during the formative evaluation
not being added to the updated system.”
•
Discussion: It appears that the level of the users may have been incorrectly estimated and/or the
system is improperly designed from an instructional perspective; this leads to extraneous cognitive
load for users.
•
Example from Study 4, section 7.6: “Problems had occurred with the Tag Editor (ontology
creation) and the Tagging Widget (tagging resources). Not being able to create a tag consisting of
more than one word or to delete a tag has probably negatively affected the usability scores (for more
information see Section 7.3.2) … More extensive training for users which would show how work
processes can be supported and how the widgets can be used could be helpful in achieving better
acceptance of the overall system and individual widgets in the future.”
•
Discussion: there may be a need for guidance / scaffolding to be built into the system
presented at Structuralia (Spain).
•
Example from Study 6, section 9.4: “The software development underpinning the MATURE
project demonstrator [i.e. Study 3 at Connexions Kent, UK] was undertaken with the MATURE
developers and was one where the process was very valuable for the MATURE developers, but much
less relevant for the users themselves than the activities with which they engaged in Part 1. At the end
of the process the users could see the potential value of the system, but given that they already had a
well-functioning KM system developed as outlined in Part 1, it was decided that the final usability
trials of the system would take place elsewhere … The final report associated with the evaluation of
the MATURE project demonstrator, produced jointly by the development team and the UWAR team,
emphasised how problems were encountered in addressing a range of technical issues, from a series of
bugs (in relation to logging on; problems with MatureFox; missing/disappearing collections,
collections which could not be subscribed to; problems with display of tags, clickable tags) and user
requirements taking a long time to be fulfilled. From the developers’ perspective the situational
constraints came as a surprise: Internet access was poor at Connexions Kent; hardware issues at
Connexions Kent were unforeseen; geographically distributed users added an extra layer of
complexity; challenge for installing the system (the developers had fixed on their approach, but
probably a web-based tool would have worked better here); and the political situation became much
more challenging as it became apparent that there would be a major reorganisation with the loss of
many jobs. There were also challenges to using the system due to the complex installation, logging on,
manipulation of windows and navigation of system and some features were disabled (discussion and
tag editor) when they had been accessible in earlier installs and this confused users. Usability issues
were magnified due to the software being very different to what users were used to before (mainly
office tools): e.g., re-sizing windows, tagging, MatureFox.”
•
Discussion: This example speaks for itself and provides yet more evidence to support the point
made above; namely, that the level of the users may have been incorrectly estimated and/or the system
is improperly designed from an instructional perspective, which leads to extraneous cognitive load for
users.
2ciii. germane (appropriate instructional design motivates)
•
Example from Study 4, section 7.6: “The Instantiation [Structuralia, Spain] already supports
well some of the most typical users’ Knowledge Maturing activities (e.g. “Storing relevant results in
collections on my desktop or laptop”, “Maintaining private collections and continuously adding
materials” and “Searching on the internet for relevant information”) … The Collection Widget and the
Discussion Widget, two of the most important widgets, were perceived easy to use, which can clearly
contribute to the maturing of artefacts (Collection Widget) and sociofacts (Discussion Widget).”
•
Discussion: appropriate design, but more evidence is required in terms of motivating learners.
2d. personal learning networks (group or distributed self-regulation)
2di. building connections (adding new people to the network so that there are resources available when a learning need arises);
•
Example from Study 2, section 5.3.9: “Both questions lead to the conclusion that SOBOLEO
did not help to increase the number of colleagues in the professional network. Also the participants
state that they did not build up more relevant contacts for their work practice with the help of
SOBOLEO. Summing up we do not find support for SMI 11 with the questionnaire.”
•
Discussion: SMI 11 is “An individual changed its degree of networkedness”. This may be an
important finding (if negative) given that the project’s focus is ‘continuous social learning in
knowledge networks’. However, it should be noted that, at least in the case of SOBOLEO, while the
tool did not help users gain new contacts, social learning could still have happened, as Study 2’s other
SMIs indicate different results (see example below). Connexions Northumberland also closed at the
end of March, and this could have affected the result as well. One question, posed by a member of the
Study 2 team, is: “SOBOLEO did not help to get new contacts, but is something completely different
from social learning?” In fact, this is not the case in our typology: building personal learning
networks involves (amongst other things, like 2a–c above) a process of building connections by
tagging new people in your network so that there are resources available when a learning need arises.
The fact that applying the typology can surface this issue seems to LTRI a positive indicator that the
typology is providing a useful analytical tool. It could in the future help multi-disciplinary research
teams “Learn” (see Figure 2.2): a process of learning and problematisation about experiences and
constraints in the context. This is an exploratory phase in which design teams investigate the key
features of the target context and involve target users as much as possible, e.g. through observations
and interviews prior to implementations and user tests. What we are saying here is that such a process
needs a mutually agreed typology: a checklist used as the joint basis for negotiating shared
project-team understanding and as a structure for recording formally negotiated agreement. Hence the
scare quotes on “Learn” above, as it is the researchers as much as anyone who have to learn from
others in the project and from users. This is not a new idea, but it remains an old problem (particularly
if we take into account the issues raised above in 2cii, extraneous (improper instructional design)).
•
Example from Study 4, section 7.6: “The strength of the MATURE widgets is that they
address new ways of student collaboration.”
•
Discussion: this appears to be a (positive) design aspiration not necessarily borne out by the
evaluation data.
•
Example from Study 5, section 8.5: “The MATURE project team members and their
partnerships also had strong overlapping personal and professional networks and the partnerships
acted as a form of ‘bridging social capital’ across the career guidance field as a whole (which
sometimes operates within distinct ‘silos’). The MATURE tools and approaches also operated at the
boundaries between different communities and were used to extend and deepen the communication
between communities, thus making possible productive communication and ‘boundary crossing’ of
knowledge” … “One strand of the partnership dialogue expanded upon with partners with a particular
interest in the TEBOs [Technology Enhanced Boundary Objects] was the argument that effective
learning about key aspects of guidance practice could follow from engagement in authentic activities
that embedded models of support for practice which were made more visible and manipulatable
through interactive software tools (TEBOs), software-based resources which supported knowledge
sharing across organisational boundaries.”
•
Discussion: It was in fact the ‘people’ in the UWAR team, with MATURE and related tools
playing a small part, who acted as mediators for ‘bridging social capital’ across the career guidance
field as a whole. The so-called TEBOs are an interesting idea that was not ready for Summative
Evaluation due to contractual delays; they are therefore an area for future exploration. Cook, Pachler
and Bachmair (2012) have explored the use of Social Networked Sites and mobile technology for
bridging social capital from a theoretical perspective, and this may be of relevance to TEBOs. Cook et al. (2012)
discusses scaffolding access to ‘cultural resources’ facilitated by digital media from a wide perspective
(e.g. scaffolding access to learning resources, health information, cultural events, employment
opportunities, etc.). Key concepts are defined, particularly forms of ‘capital’ through the lens of the
following question: how can we enable learning activities in formal and informal contexts undertaken
by individuals and groups to become linked through scaffolding as a bridging activity mediated by
network and mobile technology? Tentative conclusions include that some research suggests that, in
Higher Education, Facebook for example provides affordances that can help reduce barriers that
students with lower self-esteem might experience in forming the kinds of large, heterogeneous
networks that are sources of social capital. ‘Trust’ is a key issue in this respect. Thus there appears to
be considerable potential for network and mobile media in terms of sustainability in the integration of
informal and formal institutional dimensions of learning.
2dii. maintaining connections (keeping in touch with relevant persons);
•
Example from Study 5, section 8.5: “Formalising was embarked upon through a deepening of
the collective understanding about the possibilities of knowledge sharing and further development,
which were then translated into a range of structured documents available from the partners’
organisations.”
•
Discussion: Formalising refers to the Knowledge Maturing Phase Model phase III, and the
above gives a broad perspective on this from a careers guidance viewpoint of a common understanding
of documents. However, maintaining connections (i.e. keeping in touch with relevant persons) is only
implicit in this activity; perhaps this is a shortfall of the phase model (it should be made explicit)?
•
Example from Study 2, section 5.3.9: “SMI 4 was investigated because it should shed light on
the interesting indicator, if a person is several times tagged with a certain concept. Study 2 observed
that confirmations for tags were almost not used. The mean number of tags per user is three, with 30%
of all users in the system (298 users), respectively with more than 40% of users, which participated in
the training phase (212 users). We managed to show person profile maturing for four different person
profiles and can therefore support this SMI. Additionally we get support for the GMI evaluation from
the questionnaire.”
•
Discussion: How could the system provide support so that person profiles show maturing
more often? Is this a question for future work? Indeed, as the Study 2 team point out in section 5.3.12:
“Research about the degree of networkedness was not successful. We need therefore a longer period of
investigation and additional support, e.g. visualisations that show people-topic-connections. Also
motivational aspects like feedback mechanisms to support participation could be helpful (see
D2.2/D3.2)”.
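The SMI4 indicator discussed above (a person being tagged with the same concept by several individuals) is also amenable to a simple log analysis. As a hedged sketch only (the event format and function name are assumptions for illustration, not the actual system’s data model), confirmations of person-profile tags, including the diversity of taggers relevant to SMI5, could be counted like this:

```python
from collections import defaultdict

def profile_confirmations(people_tag_events):
    """Count distinct taggers per (person, concept) pair from chronological
    (tagger, tagged_person, concept) events; a pair applied by more than one
    distinct tagger counts as a 'confirmed' (SMI4-style) profile entry."""
    taggers = defaultdict(set)  # (person, concept) -> set of distinct taggers
    for tagger, person, concept in people_tag_events:
        taggers[(person, concept)].add(tagger)
    return {pc: len(ts) for pc, ts in taggers.items() if len(ts) > 1}

# Hypothetical log: two advisers independently tag "carol" with the same topic
events = [("ann", "carol", "labour-market-info"),
          ("bob", "carol", "labour-market-info"),
          ("bob", "dave", "cv-advice")]
confirmed = profile_confirmations(events)
```

In this sketch only Carol’s "labour-market-info" entry is confirmed; Dave’s single-tagger entry is not, which mirrors the report’s observation that confirmations were rarely used and that only a few person profiles showed maturing.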
•
Example from Study 4, section 7.6: “Teacher, acting more like a tutor, is providing support and
guidance to the students leaving them to choose how something can be done. The mentioned changes
and developments represent challenge for the teachers, the students and as well for the whole system,
requiring some procedures of change management, specially aimed to address the general resistance to
change.”
•
Discussion: the changing nature of learning, from formal instruction towards more informal
and loosely coupled networks of learning, needs more research.
2diii. activating connections (with selected persons for the purpose of learning)
•
Example from Study 5, section 8.5: “Ad-hoc learning was realised as some partners engaged
with innovative practices using experimental semi-formalised structures and resources to gain
experience and collaborated with the MATURE team to help develop potential boundary objects that
could help facilitate Knowledge Maturing processes across a wider community of interest. These
boundary objects in some cases were being developed as carriers of more explicit training and
development for practitioners … Dialogue about Knowledge Maturing processes had resulted in
partner development, including in many cases partners developing their ‘readiness to mature
knowledge’ of how technology might support innovation, learning and development in guidance
practice. Many partners also appreciated that a challenge for the future is whether social software tools
can produce artefacts and scaffolding to take participants to higher levels of understanding about
improving their contextualised practice.”
•
Discussion: There may be a need for scaffolding or guidance. This is a topic for future work.
•
Example from Study 2, section 5.3.12.2: “Phase Ib-II. [Appropriation to Consolidation in
communities] is crucial for entering the community consolidation phase, i.e. the take-up of tags by the
community, which manifests in the reuse of tags by others. This has been observed with SMI3 (tags
are reused by other users).”
•
Discussion: A useful concept but more work is needed.
•
Example from Study 4, section 7.6: “The MATURE tools provide improvements in the
domain of learning experience, enabling the students to actively shape their learning setting. The use
of the MATURE tools implies a change in the relationship between students and teachers, where
students take an active role, collaboratively contributing to course materials.”
•
Discussion: The widget-based tool was successfully used for learning at Structuralia (Spain).
•
Example from Study 4, section 7.6: “Thus, it supports many important Knowledge Maturing
activities, as "Find relevant digital resources", "Keep up-to-date with organisation-related knowledge",
"Familiarise oneself with new information", "Reorganise information at individual or organisational
level", "Share and release digital resources", "Communicate with people", and "Assess, verify and rate
information" (cf. Deliverable D1.2 and D2.3/D3.3).”
•
Discussion: Examples of how MATURE tools (could) facilitate activation of connections.
•
Example from Study 5, section 8.5: “The Knowledge Maturing processes linked to the
development work with TEBOs was seen as a potential way of getting individual practitioners to
interact more readily with learning resources for understanding LMI and understanding the conceptual
challenges in interpreting the output of TEBOs: graphs; labour market predictions; charts; employment
data; financial models, etc.; and supporting practitioners in how to visualise, analyse and utilise LMI
in new ways in the guidance process they offer to their clients. This development work was seen as
illustrative of a Knowledge Maturing process with the potential to support learning through the
dynamic visualisation of data and relationships and the consolidation, representation and
transformation of knowledge.”
•
Discussion: the above example provides a vision of how it could look with the help of (yet to
be fully developed) TEBOs.
2div. aggregated trustworthiness (perceived credibility) = social validation + authority and trustee +
profiles
•
Example from Study 2, section 5.3.12.2: “Phase Ib-II. Phase II and II-III (vocabulary
development). The collaborative consolidation and formalisation depends on sufficient user activities
in terms of adding additional details to tags like description or synonyms (SMI6) moving from the
unsorted “latest topics” section to the hierarchy (SMI7), gardening activities (SMI8), which all have
been observed in the evaluation. Convergence could also be observed because of the stability periods
in the analysis of SMI8.”
•
Example from Study 2, section 5.3.12.2: “Phase II and II-III (person profiles converge and
capture collective opinion). Similar to the vocabulary, also the evolution of person profiles requires a
sufficient level of activity with respect to affirmation of existing tags (SMI4) by a diverse group of
individuals (SMI5).”
•
Discussion: the evolution of person profiles seems related to aggregated trustworthiness and
needs further work.
•
Example from Study 5, section 8.5, “One avenue explored (within and beyond the MATURE
project itself) was to engage in a dialogue with guidance practitioners about the use of Labour Market
Information (LMI) in the development of prototype TEBOs. In these cases the Knowledge Maturing
processes needed to be extended to building an understanding of how TEBOs may be used in ways
that are empowering for practitioners, and ultimately for clients too.”
•
Discussion: TEBOs here would be trusted tools for aggregating trustworthiness.