eLearning Papers 40 - Open Education Europa
January 2015
Assessment, certification, and quality
assurance in open learning
Editorial
Assessment, certification, and quality assurance in open learning
In-depth
Quality Assurance for OER : Current State of the Art and the TIPS Framework
http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_In-Depth_40_1
Students as evaluators of open educational resources
http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_In-Depth_40_2
Student’s Quality perception and learning outcomes when using an open accessible eLearning-resource
http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_In-Depth_40_3
From the field
An Assessment-Recognition Matrix for Analysing Institutional Practices in the Recognition of Open Learning
http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_From-field_40_1
Peer-review Platform for Astronomy Education Activities
http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_From-field_40_2
Seven features of smart learning analytics - lessons learned from four years of research with learning analytics
http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_From-field_40_3
Quality assurance in online learning: The contribution of computational linguistics analysis to criterion referenced assessment
http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_From-field_40_4
eLearning Papers is a digital publication on eLearning by openeducationeuropa.eu, a portal created by the
European Commission to promote the use of ICT in education and training. Edited by P.A.U. Education, S.L.
E-mail: editorialteam[at]openeducationeuropa[dot]eu, ISSN 1887-1542
The texts published in this journal, unless otherwise indicated, are subject to a Creative Commons Attribution-Noncommercial-NoDerivativeWorks 3.0 Unported licence. They may be copied, distributed and broadcast provided
that the author and the e-journal that publishes them, eLearning Papers, are cited. Commercial use and derivative
works are not permitted. The full licence can be consulted on http://creativecommons.org/licenses/by-nc-nd/3.0/
Editorial
Open Learning and its Future of Assessment, Certification and Quality Assurance
Open learning is scaling up again after the ODL peak of the 1990s. Thanks to ongoing changes in societies and working life, and to the technology-enabled globalisation of education, open education has more potential users today than ever before. The major question is how to implement it to achieve the best learning results.
Many initiatives have taken place in recent years. The ongoing debate on Open Educational Resources led to the influential 2012 Paris OER Declaration by UNESCO. In 2013, the European Commission published the "Opening up Education" communication, calling for "Innovative teaching and learning for all through new Technologies and Open Educational Resources". And this year, the "Declaration of Crete", calling for "Re-establishing Openness as Default", was approved in a joint workshop of the International Community for Open Research and Education (ICORE) and the Open Education Consortium (OEC) at the international LINQ conference 2014.

Today open learning is being introduced in many educational systems and sectors throughout Europe thanks to major flagship initiatives such as Open Discovery Space and Inspiring Science Education, involving all 28 EU member states and beyond. The proof of concept and potential benefits will be demonstrated and evaluated in the coming years, requiring a strong focus on assessment, on certification and, in particular, on the key quality dimension in open learning. Currently the vision of open learning is being applied and amended for opening up education, combining innovations and quality (Stracke 2014).
This issue of eLearning Papers presents a collection of in-depth articles and reports from the field on "Assessment, certification, and quality assurance in open learning". These papers provide a comprehensive glimpse of what is taking place in open learning today.
Christian M. Stracke, Managing Director of TELIT Research Institute
Tapio Koskinen, eLearning Papers, Director of the Editorial Board
In-depth
Quality Assurance for OER :
Current State of the Art and the TIPS Framework
Authors
Paul Kawachi FRSA
[email protected]
Open Education Network
United Kingdom
We present validated quality assurance criteria as guidelines for creating and improving
open educational resources (OER). We have reviewed all 45 known related frameworks
in the literature, and built an open collation of 205 criteria. Through several rounds
of international workshops, questionnaires, surveys and referrals, these have been
examined by more than 200 OER experts and teachers around the world to produce a
practical framework consisting of 38 key criteria. Through a grounded theory approach,
these are distributed among four levels ; the teaching aspects, information content
aspects, presentation aspects, and the system technical aspects - giving us the acronym
TIPS - for a highly validated (content validity index > 0.80 according to Lawshe) framework
as guidelines for determining and improving the quality of OER. These key criteria can
be helpful to creators of OER, or easily applied as a rubric to assess or improve existing
OER by reusers. All the methods and data are in the free-of-cost open-access domain.
Tags
Quality assurance ; Criteria ; Validation ; OER

1. Introduction
Open educational resources (OER) offer an unprecedented opportunity to develop
learning materials for the developing world. The present study focuses on the creation and
improvement of OER by teachers, through Guidelines for quality assurance assessment.
The rationale for developing these Guidelines for teachers as creators of their own OER is
essentially to broaden the author-base to involve teachers as reflective practitioners. Good
quality OER can widen informal access to education through independent study and widen
formal access through prior learning. Good quality OER can also prevent dropout from
formal education through offering remedial study resources. They therefore provide indirect
cost benefits to the institution, community and governments. Moreover creating OER can
empower the teacher as author, raise their self-esteem and social status, and help raise the
profile of the school. Dhanarajan & Abeywardena (2013, pp.9-10) found that teachers' lack of skills was a leading barrier to creating OER, and that inability to locate quality OER was a leading barrier to reusing OER. In order to expand the OER author base, guidelines
may be helpful which offer suggestions to teachers as potential authors. The current project
deals with developing an instrument which consists of a checklist of criteria as suggestions
to be considered by teachers when designing OER. The resulting criteria present ideas to
teachers as prospective creators of OER : offering ways they could reflect upon in order to
develop a culture of quality within their own respective local communities of practice. We
also expect institutions supporting development and use of OER to adopt these Guidelines
in their internal quality assurance practices. By developing and offering these Guidelines,
we are interested in nurturing the idea of quality as a culture.
Developing a culture of quality through teachers' continuous professional reflection may be a better way forward than simply aiming to store an individual teacher's own lesson materials digitally and more or less permanently.
Defining quality in absolute terms is elusive because it depends upon whose perspective we choose to adopt. However, quality has been fairly well defined by Harvey & Green (1993) along five dimensions, with Fitness for Purpose as the dimension most relevant to quality for open educational resources (OER); Cost Efficiency and Transformative Learning are also relevant, while the other two dimensions are not concerned with education. These are given in Box 1 below. The key dimension for quality of OER is thus Fitness for Purpose, which indicates that the purpose needs to be defined, and this depends on whose perspective we adopt.
Box 1 : Dimensions of Quality
(i) Achieving Exceptional Excellence : surpassing some preset criterion-referenced standard
(ii) Achieving Perfection : focusing on first making a machine that is successful 100% of the time, rather than trial-and-error or envisaging improving it later on
(iii) Achieving Fitness for Purpose : satisfying the aims or reasons for producing the item, according to the judgements of the various stakeholders - particularly the consumers
(iv) Achieving Value for Money : focusing on relative efficiency, and the (immediate output, mid-term outcome, and long-term impact) effectiveness
(v) Achieving Transformation : enhancing and empowering the consumer, eg equipping the student with the 21st-century knowledge-creative skills
According to the third dimension, quality is Fitness for Purpose; we are grappling here with the issue of whose purpose, and therefore we suggest a practical way forward to accommodate the different perspectives. The challenge is illustrated by, eg, an
OER highly rated as excellent quality by students in their remedial
learning, but which teachers elsewhere find terribly difficult to
adapt, change the language, and relocalise to another culture
and context. So, on one level (let’s call this the basic or ground
level with students in class) the OER is high quality, but on
another higher level (of the teachers as reusers and translators)
this same OER is low quality and unusable. The global institution
and OER experts (say at the highest level) would rate this OER
more critically because of the difficulty to remix. To simplify
the challenge, we draw three levels of localisation each with its
own specific quality criteria : (i) the upper-most level-1 of the
repository containing the internationalised OER that have been
standardised by OER experts and like a textbook are almost
context-free, (ii) the intermediate level-2 of readily adaptable
OER, and then (iii) the ground level-3 of the fully localised OER
used by actual students. Briefly, the upper-most level-1 is the
most restrictive interpretation of quality by OER experts and
institutions, the intermediate level-2 is complex involving ease
of adapting through re-contextualising OER by teachers, and
the ground level-3 is quality in the hearts and minds of the
students learning with the fully localised OER version. Very few
if any studies have yet gathered feedback from students about
their achieving improved learning using OER, and while we are
waiting for our impact studies to be completed, the present
study here reports on quality perceived at the other two levels ;at level-1 of the OER experts, and at level-2 of the teachers. The
three levels are shown in Figure 1, as a pyramid-like structure.
Figure 1. The OER localisation processes
These three levels were originally designed to visualise the
processes of localisation and internationalisation, according
to the level of the reusers : depending on whether they were
the intended end-users (notably the student learning), were
the intermediate users (the providers, teachers, or translators),
or were the storekeeper users (the repositories, portals and
institutions). Here these three levels are employed to illustrate
their three respective views on quality.
Figure 1 shows more OER at the base of the pyramid structure, to represent the reality that there are many versions, eg one version in each context, while at the higher intermediate level-2 there are fewer, and even fewer at the highest level-1.
An example here would be a national curriculum textbook at
the repository level-1, lesson plans at the teacher level-2, and
individualised interpretations to each student in his or her
native language at level-3. The teacher enjoys some autonomy
within the four walls of the classroom, and can use humour,
exaggeration, gestures etc to convey the teaching points. But
if a student asks to record the lesson for later revision study,
the teacher could be advised to use clearer language, without
idiomatic or local slang (this copy would be level-2 for sharing
with others). And when the teacher writes all this up into a
publishable textbook the product would be up at level-1.
2. Methods
To date a total of 45 quality assurance frameworks relevant to
this project have been found in the literature. Of these, there
are 19 that are useful, and these have been examined in detail,
to harvest points that could help formulate criteria for our OER
Guidelines. These 19 other frameworks are by Baya’a, Shehade
& Baya’a (2009), Latchem (2012), Sreedher (2009), Ehlers (2012),
Achieve (2011), Camilleri & Tannhäuser (2012), The Quality
Matters Program (2011), Bakken & Bridges (2011), McGill (2012),
Khanna & Basak (2013), Kwak (2009), Frydenberg (2002), The
SREB - Southern Regional Education Board (2001), Merisotis
& Phipps (2000), Sloan (2013), Jung & Lee (2014), Mhlanga
(2010), Williams, Kear & Rosewell (2012), and Leacock & Nesbit
(2007). They are covered in the discussion later. From these a
total of 205 criteria are suggested for OER quality assurance
(given in full by Kawachi, 2014a). Initially we hoped that these
would suggest common categories across all the frameworks,
but the ad hoc designs meant that no alignment was possible.
Instead therefore the five Domains of Learning template was
adopted (see Kawachi, 2014b) onto which suggestions could be
positioned in some collated manner, to avoid duplication and to
facilitate our review and collection processes.
The comprehensive list of 205 criteria for OER quality was
collated onto the Domains of Learning scaffold, and discussed
at length with OER experts and teachers globally. Specifically a
Regional Consultation Meeting was held in Hyderabad, India, on
13-15th March 2013 at Maulana Azad National Urdu University,
and an International Workshop was held in Islamabad, Pakistan,
on the 1st October 2013 at Allama Iqbal Open University.
Other face-to-face and online discussions were held at other
universities around the world.
The various consultations and feedback discussion resulted in
these 205 criteria being reduced to 65 criteria (these are given
in Kawachi, 2013). Many of these criteria were in technical or complex English, which teachers in developing countries might find inaccessible. Feedback conversations also asked for the Domains of Learning scaffold to be simplified and re-categorised for ease of use by teachers. The outcome was four groups of criteria which, through a grounded theory approach, were subsequently labelled as (T) Teaching and Learning Processes, (I) Information and Material Content, (P) Presentation, Product and Format, and (S) System Technical and Technology, giving us the acronym TIPS. These four groups are presented in Figure 2 as layers of quality concerns.
Figure 2. The four layers of the TIPS Framework
These four layers comprising the TIPS Framework are presented in easily accessible English in a pamphlet (available at http://www.open-ed.net/oer-quality/tips.pdf) for teachers to use in the field. The Framework has also been translated into other local languages. After publishing this initial version-1.0 (Kawachi, 2013), further studies were undertaken, in both field tests and surveys, to improve utility, confidence and reliability, specifically involving Content Validation according to Lawshe (1975). We also included Wave Analysis according to
Leslie (1972). Wave Analysis is a method to increase confidence
in survey data being complete and comprehensive. Where
successive waves show similar distributions of response ratings,
then confidence is increased.
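The paper does not specify how similarity between waves was judged; as one illustrative reading of this idea, the sketch below compares the distribution of ratings across hypothetical response waves. The counts and the similarity measure are assumptions for illustration only.

```python
# Illustrative sketch of wave analysis: compare the distribution of ratings
# across successive survey waves. The counts below are hypothetical, not the
# study's data; the paper does not state how "similar" was judged.

RATINGS = ("Essential", "Useful", "Not necessary")

# Each wave: counts of responses per rating category, in RATINGS order.
waves = [
    (18, 9, 3),   # wave 1
    (16, 10, 4),  # wave 2
    (17, 8, 5),   # wave 3
]

def proportions(counts):
    total = sum(counts)
    return [c / total for c in counts]

wave_props = [proportions(w) for w in waves]

# One simple similarity check: the largest difference, over all rating
# categories, between any later wave's proportion and the first wave's.
max_shift = max(
    abs(p - q)
    for later in wave_props[1:]
    for p, q in zip(later, wave_props[0])
)

print("Proportions per wave:", [[round(p, 2) for p in w] for w in wave_props])
print(f"Largest shift from wave 1: {max_shift:.2f}")
# A small shift across waves is read as increased confidence that the
# responses are complete and representative (Leslie, 1972).
```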
Content Validity is a term with an imprecise meaning : according
to Fitzpatrick (1983) it can refer to (i) how well the items cover
the whole field, (ii) how well the user’s interpretations or
responses to the items cover the whole field, (iii) the overall
relevance of all the items, (iv) the overall relevance of the user’s
interpretations, (v) the clarity of the content domain definitions,
and/or (vi) the technical quality of each and all the items. The
first two concern the adequacies of the sampling, and come
under Construct Validity. Notwithstanding that Content Validity
is an imprecise term, it can be measured quantitatively by asking content experts to rank each item as (i) Essential, (ii) Not essential but useful, or (iii) Not necessary, according to Lawshe (1975). Those items ranked as not necessary are likely to be discarded. Among a large number N of experts, the number who rank the item as essential, N E, is used to calculate the Content Validity Ratio for each item as shown in Box 2 below. This formula gives a Ratio of zero if only half the experts rank the item as essential, and a positive Ratio between zero and one if more than half the experts rank the item as essential.
N of experts :    5    6    7    8    9   10   11   12   13   14
minimum CVR  :  .99  .99  .99  .75  .68  .62  .59  .56  .54  .51

N of experts :   15   20   25   30   35   40   45   50   55   60
minimum CVR  :  .49  .42  .37  .33  .31  .29  .27  .26  .26  .25

Table 1. The Minimum Averaged Value CVR for a Criterion to be Retained
To determine the valid criteria of quality as Fitness for Purpose,
we surveyed individual OER experts, separately from individual
teacher-practitioners, to discover their respective sets of
criteria. In each case the individual was invited by personal
email. Of the three arbitrary levels (see Figure 3 below), we
thus are surveying level-1 of the OER-experts, and level-2 of the
teachers.
BOX 2: The Content Validity Ratio CVR (from Lawshe, 1975)
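The formula itself did not survive transcription of Box 2; reconstructed here from the surrounding description and from Lawshe (1975), with N experts of whom N_E rank the item as Essential:

\[ \mathrm{CVR} \;=\; \frac{N_E - N/2}{N/2} \]

This gives CVR = 0 when exactly half the experts rate the item as essential, rising towards 1 as agreement approaches unanimity, matching the behaviour described in the text.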
For relatively small groups of experts, the average Ratio for
each item retained in the instrument should be close to one to
decide the specific item has content validity with a probability
of p<0.05. For larger groups of experts, the likelihood decreases
that co-agreement as essential occurred by chance, and the
Ratio value can be lower while still reaching a probability
of p<0.05, with these values (corrected and extended from
Lawshe, 1975) shown in Table 1 below for various group sizes.
Items obtaining the minimum value, or above, are retained in
the instrument. Then the average Content Validity Ratio over
all items is termed the Content Validity Index. Generally the
instrument should have an Index of 0.80 or above to be judged
as having content validity. Some outliers can be discarded on
the basis of a low ranking by the experts, while others can be
retained despite a low ranking provided there is some other
procedure supporting their inclusion. The important point here
is that for increasing numbers of respondents, the number of
retained items or criteria (that are above the cut-off threshold)
increases.
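To make the retention arithmetic concrete, the following is a minimal sketch of the calculation described above. The panel size, the Essential counts and the item labels are illustrative placeholders, not the study's survey data; only the 0.31 threshold is taken from Table 1 (for 35 experts).

```python
# Sketch of the Content Validation arithmetic described above (Lawshe, 1975).
# The counts below are illustrative placeholders, not the study's survey data.

def cvr(essential_count: int, n_experts: int) -> float:
    """Content Validity Ratio: 0 when half the experts rate an item Essential,
    approaching 1 as agreement becomes unanimous."""
    return (essential_count - n_experts / 2) / (n_experts / 2)

# Hypothetical ratings: item label -> number of experts (out of n) rating it Essential.
n = 35
essential_counts = {"C-13": 33, "C-28": 32, "C-37": 34, "C-40": 31, "C-44": 33, "C-51": 30}

ratios = {item: cvr(count, n) for item, count in essential_counts.items()}

# Retain items whose CVR meets the minimum value for this panel size (Table 1),
# then average the retained ratios to obtain the Content Validity Index (CVI).
minimum_cvr = 0.31          # threshold for 35 experts, from Table 1
retained = {item: r for item, r in ratios.items() if r >= minimum_cvr}
cvi = sum(retained.values()) / len(retained)

print(f"Retained {len(retained)} of {len(ratios)} items; CVI = {cvi:.2f}")
```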
Other frameworks have used anonymous surveys through
mass mailings, and such an approach could be open to mischievous
responses, as well as to careless completion. Therefore we set
out to invite each respondent by personal email individually. In
order to ascertain if any difference in returns occurred, the same
survey was administered through anonymous mass mailings to
OER discussion forums.
In the event, three sets of surveys were performed in parallel ;- (i) Set-1 of OER experts who were invited individually by personal email, (ii) Set-2 of OER online communities invited by posting a message to the respective group discussion forum, and (iii) Set-3 of school teachers at-the-chalkface who were each invited by
personal email. A copy of the survey instrument is available at
http://www.open-ed.net/oer-quality/survey.pdf. Those authors
who presented a paper on OER quality to the 2013 Seventh
Pan-Commonwealth Forum on Open Learning PCF7 http://
pcfpapers.colfinder.org were added to the OER experts list and
also invited to respond.
3. Results
This study examined other frameworks discovered in the literature which might be relevant to developing this Project Framework of quality criteria for OER. Some meta-analysis of the main categories is given here. Those frameworks which seemed worthwhile reviewing for suggestions were preselected and redrawn with categories numbered for easy reference. At the
beginning of this Project, there was a plan to gather as far as
possible the available published frameworks of QA criteria, and
then somehow to collate these frameworks by putting them
alongside each other in juxtaposition to see commonalities : the
similarities and differences. As these were gathered it became
apparent that most were ad hoc subjective listings of whatever
came to the mind of the author at the time of writing - with little
if any overarching organisation. Therefore recourse was made to
the comprehensive framework of all the educational objectives
consisting of the five Domains of Learning. Then the criteria
from other frameworks were positioned onto this Domains of
Learning scaffold, prior to being reviewed, compartmentalised
and then labelled through a grounded theory approach.
This review of the literature covered frameworks for evaluating the quality of learning objects, e-learning, web-based learning environments, courses and programmes, and other relevant materials. Some share sufficient fittingness to be considered for adoption into the criteria of the Project Framework.
Four criteria were suggested by Baya’a, Shehade & Baya’a (2009)
for evaluating web-based learning environments ; Usability
(Purpose, Homepage, Navigation, Design, Enjoyment,
Readability), Content (Authority, Accuracy, Relevance,
Sufficiency, Appropriateness), Educational Value (Learning
activities, Activity plan, Resources, Communication, Feedback,
Rubric, Help tools), and Vividness (Links, Updating). Latchem
(2012, pp.81-86) gives criteria for quality assurance of
Immediate Outputs, Short- or medium-term Outcomes, Long-term Impacts, and also Inputs. The most relevant categories and
criteria are shown in Table 5 below. All the items from Latchem
Outcomes are included into the final Framework. Sreedher
(2009) gives five criteria areas in an interesting Quality Assurance
of Multimedia Learning Materials (QAMLM) framework based
on the ADDIE model of instructional design. The ADDIE model is
a process consisting of five stages ;- Analysis, Design,
Development, Implementation, and Evaluation. It can be used
iteratively, and has some relevant shared fit with creating OER.
The QAMLM is re-drawn as Appendix 3. Ehlers (2012) gives
seven criteria areas for quality assurance of e-learning courses
as follows ;- (i) Information about + organization of programme,
(ii) Target Audience Orientation, (iii) Quality of Content, (iv)
Programme/ Course Design, (v) Media Design, (vi) Technology,
and (vii) Evaluation & Review. The second of these concerns
Needs Analysis which may be problematic in OER, and also the
last on evaluation can be difficult where students give
anonymous feedback as social tags. Achieve (2011) gives eight
criteria areas in a framework called Achieve-OER-Evaluation to
assess OER quality according to the USA common core state
standards for curricula, as follows ;- (i) Degree of Alignment to
Standards, (ii) Quality of Explanation of the Subject Matter, (iii)
Utility of Materials Designed to Support Teaching, (iv) Quality of
Assessment, (v) Quality of Technological Interactivity, (vi)
Quality of Instructional Tasks and Practice Exercises, (vii)
Opportunities for Deeper Learning, and (viii) Assurance of
Accessibility. The Achieve rubric is used by the Institute for the Study of Knowledge Management in Education (ISKME), which runs the repository OER Commons. The technical language
used is intractable and a barrier to adoption. Camilleri &
Tannhäuser (2012, drawn from pp.17-19) give eight dimensions
as technical criteria and two as pedagogical criteria, as follows ;- (i) Compatibility with a Standard, (ii) Flexibility and Expandability,
(iii) Customization and Inclusiveness, (iv) Autonomy of the users
during the interaction with the multimedia resources, (v)
Comprehensibility of the graphic interface, (vi) Comprehensibility
of learning contents, (vii) Motivation, engagement and
attractiveness of the OER modules and/or learning resources,
(viii) Availability of reporting tools (e-Portfolio), (ix) Cognitive : Interaction between the OER and Learner, and (x) Didactic. The Quality Matters Program (2011) could not be reviewed in the same detail, since the full QMP document is not open access. Bakken & Bridges (2011)
give five criteria areas for online primary and secondary school
courseware, as follows ;- (i) Content, (ii) Instructional Design,
(iii) Student Assessment, (iv) Technology, and (v) Course
Evaluation and Support. These are international standards and
could be useful for adopting in creating OER for school-level
student end-users. Binns & Otto (2003) give four criteria areas
as the quality assurance framework for distance education, as
follows ;- Products, Processes, Production and delivery, and
general Philosophy of the institution. These four areas were
earlier suggested by Norman (1984), and Robinson (1993) has
reported these four used in Uganda together with the various
components under each category (both cited in Binns & Otto,
2003, pp.36-38). The four-P framework may be relevant to
developing regions where OER are used in face-to-face
classrooms. McGill (2012) gives five criteria areas for determining
the quality of OER, as follows ;- (i) Accuracy, (ii) Reputation of
Author / Institution, (iii) Standard of Technical Production, (iv)
Accessibility, and (v) Fitness of Purpose. This framework is
advocated by the institution-group HEA and JISC. They only
lastly give consideration to the students and the OER being fit
for use. Khanna & Basak (2013) give six criteria areas, as follows
;- (i) Pedagogical, (ii) Technological, (iii) Managerial, (iv)
Academic, (v) Financial, and (vi) Ethical. This set is interesting
since they also give five levels of depth to these areas :- (1 -
highest) IT infrastructure - services and networking, (2)
Management support systems, (3) Open content development
and maintenance, (4) Open (online / public) teaching and
learning, and (5) Learner assessment and evaluation. The six
areas of Khanna & Basak (2013) are taken from Khan (2001,
p.77) who gives eight, as follows ;- (i) Institutional, (ii)
Pedagogical, (iii) Technological, (iv) Interface Design, (v)
Evaluation, (vi) Management, (vii) Resource Support, and (viii)
Ethical. Kwak (2009) gives twelve criteria areas in a framework
that has ISO-9001 certification, as follows ;- (i) Needs Analysis,
(ii) Teaching Design, (iii) Learning Content, (iv) Teaching-Learning
Strategy, (v) Interactivity, (vi) Support System, (vii) Evaluation,
(viii) Feedback, (ix) Reusability, (x) Metadata, (xi) Ethics, and (xii)
Copyright. Frydenberg (2002) gives nine criteria areas as
domains of e-learning quality, as follows ;- (i) Institutional
Commitment, (ii) Technology, (iii) Student Services, (iv)
Instructional Design and Course Development, (v) Instruction
and Instructors, (vi) Delivery, (vii) Finances, (viii) Regulatory and
Legal Compliance, and (ix) Evaluation. These were labelled as
Domains. There was no discussion beyond noting these nine
were harvested from the literature. The SREB - Southern
Regional Education Board (2001) gives three criteria areas for
K6-12 web-based courses, as follows ;- (i) Curriculum, Instruction
and Student Assessment, (ii) Management, and (iii) Evaluation
of Delivered Courses. Of note they call for e-learning courses to
impart the higher-order critical thinking skills to school children.
Merisotis & Phipps (2000) give seven criteria areas, as follows
;- (i) Institutional Support, (ii) Course Development, (iii)
Teaching/Learning, (iv) Course Structure, (v) Student Support,
(vi) Faculty Support, and (vii) Evaluation and Assessment. Their
criteria are very closely similar to those of the Sloan Consortium
(Sloan, 2013). Sloan (2013) give nine criteria areas, as follows ;(i) Institutional Support, (ii) Technology Support, (iii) Course
Development and Instructional Design, (iv) Course Structure, (v)
Teaching and Learning, (vi) Social and Student Engagement, (vii)
Faculty Support, (viii) Student Support, and (ix) Evaluation and
Assessment. Jung & Lee (2014) give eleven criteria areas, as
follows ;- (i) Infrastructure, (ii) Quality Assurance, (iii)
Institutional Vision & Support, (iv) Finance & Partnership, (v)
OER Development, (vi) Learning Content, (vii) Learning Support,
(viii) Student Support, (ix) Learning Outcomes, (x) Return on
Investment, and (xi) Research & Development. These are taken
here from their as-yet-unpublished survey asking any
respondents to rank their institutional level of practice against
each criterion. Mhlanga (2010) gives thirteen criteria areas, as
follows ;- (i) Policy and Planning, (ii) Learners, (iii) Programme
Development, (iv) Course Design, (v) Course Materials, (vi)
Assessment, (vii) Learner Support, (viii) Human Resource
Strategy, (ix) Management and Administration, (x) Collaborative
Relationships, (xi) Quality Assurance, (xii) Advocacy and
Information Dissemination, and (xiii) Results. These criteria are
intended for maintaining all aspects of quality in open schools,
covering more aspects than other frameworks for course quality.
Williams, Kear & Rosewell (2012) give six criteria areas, as
follows ;- i) Strategic Management, (ii) Curriculum Design, (iii)
Course Design, (iv), Course Delivery, (v) Staff Support, and (vi)
Student Support. These criteria are given in good detail for
benchmarking the quality of e-learning programmes. Leacock &
Nesbit (2007) give nine criteria areas, as follows ;- (i) content
quality, (ii) learning goal alignment, (iii) feedback and adaptation,
(iv) motivation, (v) presentation design, (vi) interaction usability,
(vii) accessibility, (viii) reusability, and (ix) standards compliance.
In the present study, the wave analysis results show that those
OER experts at level-1 (shown here in Figure 3) gave different
ratings from those teachers at level-2 (Figure 4), confirming
that these levels hold different perspectives of what constitutes
quality. Of note, the OER experts at level-1 were technically
more critical and rejected most of those criteria related to
classroom pedagogy out-of-hand as not relevant to OER
specifically, although being relevant to educational resources
more widely. The teachers at level-2 were more generous, mindful of fellow teachers who might need to incorporate pedagogic good practices more consciously into their OER. Both groups ranged across the age spectrum and were spread geographically around the world, and both showed a balance by gender (M/F = 38/31, with one respondent giving no data).
The Content Validation analysis was performed many times as the number of respondents increased over time. The visible outcome was that with higher numbers of respondents the minimum Content Validity Ratio was reduced, according to Lawshe (1975) and Table 1, while still surpassing the cut-off threshold for validity. With only 32 respondents among the OER experts at level-1, there were six criteria rated highest as essential and collectively reaching the CVR E = 0.80 threshold. (These six are C-13, C-28, C-37, C-40, C-44, and C-51.) Then with 35 respondents by the final closure of the survey, the number of criteria had increased to 8 while still reaching beyond the threshold at CVR E = 0.84. Similarly, the earlier analysis involving 19 teacher respondents at level-2 gave 24 criteria retained at the cut-off threshold of CVR E = 0.80, which increased with the final 21 respondents to give 38 criteria retained. These 38 criteria incorporated the 8 criteria indicated by the OER experts at level-1. They are listed in Table 2 below as the essential valid criteria for the TIPS Framework version-2.0.
Figure 3. Ratings by OER Experts show similar distributions
Somewhat aside from these two surveys of individual OER
experts at level-1 and of individual teachers at level-2, a
third anonymous survey was performed, and collected 14
respondents giving 17 retained criteria with collective CVR E =
0.81. These anonymous findings are fairly similar to those by
the OER experts, but they tended to be younger respondents
and more generous, approaching the views of the teachers at
level-2. These results suggest that anonymous surveys could be valid, although less precise, with a wider spectrum of ratings and a higher incidence of spurious or incomplete returns that warranted inspecting each return one-by-one before treating it in the Content Validation analysis. Caution should be applied to
interpreting rating results from anonymous surveys.
Figure 4. Ratings by Teachers show similar distributions, although
differing slightly from those by the OER Experts
Less rigorously, if we were to accept that most respondents
do not know that items scored as ‘Useful’ according to Lawshe
(1975) are discarded, and the analysis is re-performed using
all the items scored as either ‘Essential’ or ‘Useful’ then this
gives CVR E+U . In fact our conversations indicated this was
highly likely to have been the case, with few understanding the
method of Lawshe. The CVR E+U was considerably higher for each criterion, and the average CVR E+U over all the criteria items C-1 to C-65, without discarding the lower-scoring items (this average is the overall Content Validity Index CVI E+U for the instrument), is 0.94, which is clearly > 0.80 and indicates that the original TIPS Framework was also valid at p < 0.05.
The Content Validation processes resulted in criteria rated as
Essential, or as Useful, or as Not Necessary, for teachers in their
efforts to build their own OER. As the numbers of respondents
increased, and where these showed good co-agreement with
each other, the number of criteria that reach and surpass the cut-off point at CVI > 0.80 for instrument validity at p<0.05 increases slightly. The surveys were finally closed on the 5th June with a total of 70 respondents, and all the analyses were re-computed. The basic effect of having larger numbers is that, according to Table 1, the cut-off level for CVR E is lower and the number of retained items may be larger. Table 2 below gives the final 38 criteria that can be reasonably
retained. They are presented here with the original labels C-1 to C-65 of the 65 criteria of version-1.0. These 38 criteria constitute the new revised TIPS Framework version-2.0 (Kawachi, 2014c).
T : Teaching and learning processes
C-1   Consider giving a study guide for how to use your OER, with an advance organiser, and navigational aids
C-2   Use a learner-centred approach
C-3   Use up-to-date appropriate and authentic pedagogy
C-6   You should clearly state the reason and purpose of the OER, its relevance and importance
C-7   It should be aligned to local wants and needs, and anticipate the current and future needs of the student
C-10  Bear in mind your aim to support learner autonomy, independence, learner resilience and self-reliance
C-12  You should adopt a gender-free and user-friendly conversational style in the active voice
C-13  Don't use difficult or complex language, and do check the readability to ensure it is appropriate to age/level
C-14  Include learning activities, which recycle new information and foster the skills of learning to learn
C-15  Say why any task-work is needed, with real-world relevance to the student, keeping in mind the work needed to achieve the intended benefit
C-18  Stimulate the intrinsic motivation to learn, eg through arousing curiosity with surprising anecdotes
C-21  Monitor the completion rate, student satisfaction and whether the student recommends your OER to others
C-23  Include a variety of self-assessments such as multiple-choice, concept questions, and comprehension tests
C-24  Provide a way for the student and other teachers to give you feedback and suggestions on how to improve
C-25  Link formative self-assessment to help-mechanisms
C-26  Try to offer learning support

I : Information and material content
C-28  Make sure that the knowledge and skills you want the student to learn are up-to-date, accurate and reliable. Consider asking a subject-matter expert for advice
C-29  Your perspective should support equality and equity, promoting social harmony, and be socially inclusive, law-abiding and non-discriminatory
C-30  All your content should be relevant and appropriate to purpose. Avoid superfluous material and distractions
C-32  Your content should be authentic, internally consistent and appropriately localised
C-34  Encourage student input to create localised content for situated learning : draw on their prior learning and experience, their empirical and indigenous knowledge
C-35  Try to keep your OER compact in size, while allowing it to stand-alone as a unit for studying by itself. Consider whether it is small enough to reuse in other disciplines
C-36  Add links to other materials to enrich your content

P : Presentation product and format
C-37  Be sure the open licence is clearly visible
C-40  Ensure your OER is easy to access and engage
C-44  Present your material in a clear, concise, and coherent way, taking care with sound quality
C-47  Put yourself in your student's position to design a pleasing attractive design, using white-space and colours effectively, to stimulate learning
C-48  Have some space for adding moderated feedback later on from your students
C-49  Consider whether your OER will be printed out, usable off-line, or is suitable for mobile use
C-51  Use open formats for delivery of OER to enable maximum reuse and remix
C-52  Consider suggesting which OER could come before your OER, and which OER could come afterwards in a learning pathway

S : System technical and technology
C-54  Consider adding metadata tags about the content to help you and others later on to find your OER
C-55  Give metadata tags for expected study duration, for expected level of difficulty, format, and size
C-56  Try to use only free sourceware/software, and this should be easily transmissible across platforms
C-57  Try to ensure your OER is easily adaptable, eg separate your computer code from your teaching content
C-59  Your OER should be easily portable and transmissible, and you should be able to keep an off-line copy
C-60  Your OER and the student's work should be easily transmitted to the student's own e-portfolio
C-62  Include a date of production, and date of next revision

Table 2 : The TIPS Framework of Quality Criteria for Teachers as Creators of OER

4. Conclusions
Many OER experts and teachers freely contributed their wisdom
in shaping and improving the TIPS Framework version-1.0
(Kawachi, 2013). In particular the OER-Asia-2014 Symposium
in Penang afforded the opportunity to present and discuss
the validation process. At that time there were comments on
the distinction between Fitness of Purpose, and Fitness for
Purpose (see the Keynote by Prasad, 2014). The point here is
that Fitness of Purpose refers more to the institutional level-1
and involves quality control or internal assessment, whereas Fitness for Purpose refers to the student learning at level-3, which is assessed externally, eg by interviews or examination grades. Findings from the validation processes here indicate that the quality criteria concerns do indeed differ between level-1 and level-2. However, impact studies at level-3, on the improved quality of learning achieved by students using OER and on the TIPS Framework in actual use by teachers, are not yet completed. It is likely that findings from impact studies will confirm that quality assurance concerns as Fitness of Purpose at the institutional level-1 are distinct from those at the cognitive learning level-3, but that the (relatively few) Fitness of Purpose criteria at the institutional level-1 will be subsumed within the (relatively many) Fitness for Purpose criteria of the quality of achieved learning at level-3. This would be consistent with the current validation findings, where the 8 criteria at level-1 are included within the 38 criteria at level-2.
Teachers will find these Guidelines helpful (i) to remind them of
aspects worthwhile considering in their creation of their own
OER, and (ii) to offer suggestions on how to judge the quality
of other OER they may find on the internet. In both cases of
authoring and of reusing OER, these Guidelines aim to stimulate
the gradual development of a culture of quality surrounding the
use, reuse and sharing of OER to generally improve teaching
and learning. As such, institutions should also be able to use
these Guidelines for their OER. To help in this respect a rubric
is designed based on these Guidelines to offer assistance in
Quality Improvement (discussed below) and faculty continuous
professional development through iterative self-appraisal, or in
other words for professional reflection-in-action.
There is furthermore a need to add social tagging (see eg Kawachi
& Yin, 2012) to somehow annotate OER with regard to users’
reflection on perceived quality. There remain wide concerns that
the quality of existing OER is dubious or difficult to discern. OER
usage and OER institutional origin do not adequately or reliably
indicate desirable quality. One way to build in quality assurance
and improvement is to add a feedback form at the end of each
OER. This could be done at the repository level-1 by the librarian
if funding permitted. Alternatively it could be built onto each
OER. In any case, there remains the task of knowing what points to ask any user to give feedback about. The current TIPS Framework
goes a long way to resolving this by suggesting those criteria
that cover quality concerns of teachers as OER users. A valid
Rubric can be constructed using the TIPS Framework criteria,
and attached to OER for quality assurance purposes.
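As one way of picturing such a rubric, the sketch below attaches a handful of TIPS criteria to an OER as a simple rating form and averages the responses. The item wordings are paraphrased from Table 2, while the data structure, the 1-5 scale and the aggregation are assumptions for illustration, not the paper's specification.

```python
# Hypothetical sketch of a feedback rubric built from TIPS criteria (Table 2).
# The 1-5 rating scale and the aggregation are illustrative design choices.
from statistics import mean

rubric_items = {
    "C-13": "Language is appropriate to the students' age/level",
    "C-37": "The open licence is clearly visible",
    "C-40": "The OER is easy to access and engage with",
    "C-44": "Material is presented clearly, concisely and coherently",
}

# Ratings collected from users of one OER: item label -> list of 1-5 scores.
responses = {
    "C-13": [5, 4, 5],
    "C-37": [3, 4, 4],
    "C-40": [5, 5, 4],
    "C-44": [4, 4, 5],
}

# Average score per criterion, plus an overall score for the resource.
per_item = {label: mean(scores) for label, scores in responses.items()}
overall = mean(per_item.values())

for label, score in per_item.items():
    print(f"{label} ({rubric_items[label]}): {score:.1f}")
print(f"Overall quality score: {overall:.1f} / 5")
```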
The explanation to Figure 3 above gives the teacher some ideas of how to use these Guidelines. The teacher normally keeps a diary-type learning journal of ideas to present in class, including any anecdotes, jokes, or emphasis, especially relating to the student's own culture and language. Making a video or recording of the lesson as an OER could be done at level-3 for the teacher's own reuse the following year (the TIPS Framework criteria C-2, C-7, C-18, C-32, C-40, and C-47 would be upper-most in the teacher's mind). If the teacher were intending this to be used in a different classroom context, the language would be clearer with fewer anecdotes, if for no other reason than that the teacher does not know the culture of the students in the nearby city, or believes there will be a diversity of ethnicities in the future class (the TIPS Framework criteria C-13, C-21, C-29, C-34, C-44, and C-57 would be upper-most in the teacher's mind).

And at the highest level, preparing a textbook as an OER for unknown faraway contexts would be at level-1 (the TIPS Framework criteria C-1, C-6, C-24, C-28, C-37, C-49, C-51, C-54, and C-56 would be upper-most in the teacher's mind). The suggested criteria that may be in the teacher's mind while creating his or her own OER indicate that not all 38 criteria of the TIPS Framework are intended to be applied at all times and at all levels. The teacher can pick and choose those which seem most suitable and Fit for Purpose in the situated context of each OER.
References:
Achieve (2011). Achieve-OER-Evaluation. Washington, DC : Achieve Inc. Retrieved January 5, 2013, from http://www.achieve.org/oer-rubrics

Bakken, B., & Bridges, B. (Eds.) (2011). National Standards for Quality Online Courses (version 2). Vienna, VA : International Association for K-12 Online Learning iNACOL. Retrieved January 5, 2013, from http://www.inacol.org/research/nationalstandards/iNACOL_CourseStandards_2011.pdf

Baya'a, N., Shehade, H.M., & Baya'a, A.R. (2009). A rubric for evaluating web-based learning environments. British Journal of Educational Technology, 40 (4), 761-763.

Binns, F., & Otto, A. (2006). Quality assurance in open distance education - Towards a culture of quality: A case study from the Kyambogo University, Uganda. In B.N. Koul, & A.S. Kanwar (Eds.), Perspectives on distance education : Towards a culture of quality, (pp.31-44). Vancouver, BC : Commonwealth of Learning. Retrieved September 10, 2012, from http://www.col.org/SiteCollectionDocuments/PS-QA_web.pdf

Camilleri, A.F., & Tannhäuser, A-C. (Eds.) (2012). Open Learning Recognition : Taking Open Educational Resources a Step Further. Brussels, Belgium : EFQUEL - European Foundation for Quality in e-Learning. Retrieved December 18, 2012, from http://cdn.efquel.org/wp-content/uploads/2012/12/Open-Learning-Recognition.pdf?a6409c

Dhanarajan, G., & Abeywardena, I.S. (2013). Higher education and open educational resources in Asia : An overview. In G. Dhanarajan, & D. Porter (Eds.), Open educational resources : An Asian perspective, (pp. 3-20). Vancouver, BC : Commonwealth of Learning. Retrieved July 7, 2014, from http://www.col.org/PublicationDocuments/pub_PS_OER_Asia_web.pdf

Ehlers, U-D. (2012). Partnerships for better e-learning in capacity building. Presentation for a wiki-based course on e-learning design. Retrieved February 8, 2013, from http://efquel.org/wp-content/uploads/2012/03/ECBCheck_Presentation_EN.pdf?a6409c

Fitzpatrick, A.R. (1983). The meaning of content validity. Applied Psychological Measurement, 7 (1), 3-13. Retrieved February 21, 2014, from http://conservancy.umn.edu/bitstream/11299/101621/1/v07n1p003.pdf

Frydenberg, J. (2002). Quality standards in e-learning : A matrix of analysis. International Review of Research in Open and Distance Learning, 3 (2). Retrieved November 2, 2012, from http://www.irrodl.org/index.php/irrodl/article/viewArticle/109/189

Harvey, L., & Green, D. (1993). Defining quality. Assessment and Evaluation in Higher Education, 18 (1), 9-34. Retrieved March 28, 2014, from http://www.tandfonline.com/doi/abs/10.1080/0260293930180102#.UzU8F9xLfTQ

Jung, I., & Lee, T. (2014). QA in e-ASEM OER project. Unpublished research. Drawn from https://www.surveymonkey.com/s/X5THFTF

Kawachi, P. (2014a). Criteria : Comprehensive collation of quality assurance criteria for OER. Retrieved July 12, 2014, from http://www.open-ed.net/oer-quality/criteria.pdf

Kawachi, P. (2014b). The domains of learning : Comprehensive taxonomy of educational objectives. Retrieved July 12, 2014, from http://www.open-ed.net/oer-quality/domains.pdf

Kawachi, P. (2014c). The TIPS quality assurance framework for creating open educational resources : Validation. Proceedings of the 2nd Regional Symposium on OER, (pp.183-191). Penang, 24-27 June. Retrieved July 4, 2014, from http://www.oerasia.org/proceedings

Kawachi, P. (2013). Quality assurance guidelines for open educational resources : TIPS framework, version 1.0. New Delhi, India : Commonwealth Educational Media Centre for Asia (CEMCA). Retrieved March 7, 2014, from http://cemca.org.in/ckfinder/userfiles/files/OERQ_TIPS_978-81-88770-07-6.pdf

Kawachi, P., & Yin, S. (2012). Tagging OER for skills profiling : User perspectives and interactions at no cost. Proceedings of the Regional Symposium on Open Educational Resources : An Asian Perspective on Policy and Practices, (pp.134-140). 19-21 September, Wawasan Open University, Penang, Malaysia. Retrieved January 23, 2014, from http://www.oerasia.org/symposium/OERAsia_Symposium_Penang_2012_Proceedings.pdf

Khan, B.H. (2001). A framework for e-learning. In B.H. Khan (Ed.), Web-based training, (pp.75-98). Englewood Cliffs, NJ : Educational Technology Publications.

Khanna, P., & Basak, P.C. (2013). An OER architecture framework : Need and design. International Review of Research in Open and Distance Learning, 14 (1), 65-83. Retrieved March 8, 2013, from http://www.irrodl.org/index.php/irrodl/article/view/1355/2445

Kwak, D.H. (2009). e-Learning QA strategy in Korea. Presentation to Conference in Thailand, 1 May. Retrieved April 20, 2013, from (search on the following) 202.29.13.241/stream/lifelong/Kwak%20/eLearningQA_Korea.ppt

Latchem, C. (2012). Quality assurance toolkit for open and distance non-formal education. Vancouver, BC : Commonwealth of Learning. Retrieved December 20, 2012, from http://www.col.org/PublicationDocuments/QA%20NFE_150.pdf

Lawshe, C.H. (1975). A quantitative approach to content validity. Personnel Psychology, 28 (4), 563-575. Retrieved February 21, 2014, from http://www.bwgriffin.com/gsu/courses/edur9131/content/Lawshe_content_valdity.pdf

Leacock, T.L., & Nesbit, J.C. (2007). A framework for evaluating the quality of multimedia learning resources. Educational Technology & Society, 10 (2), 44-59. Retrieved November 23, 2014, from http://www.ifets.info/journals/10_2/5.pdf

Leslie, L.L. (1972). Are high response rates essential to valid surveys ? Social Science Research, 1, 323-334.

McGill, L. (Ed.) (2012). Open educational resources infokit. Higher Education Academy & JISC. Retrieved December 20, 2012, from https://openeducationalresources.pbworks.com/w/page/24838164/Quality%20considerations

Merisotis, J.P., & Phipps, R.A. (2000). Quality on the line : Benchmarks for success in Internet-based distance education. Washington, DC : Institute for Higher Education Policy. Retrieved November 20, 2012, from http://www.ihep.org/assets/files/publications/m-r/QualityOnTheLine.pdf

Mhlanga, E. (2010). Quality criteria for maintaining quality in open schools. In L. Cameron (Ed.), Quality assurance toolkit for open schools, (pp.29-53). Vancouver, BC : Commonwealth of Learning. Retrieved April 4, 2014, from http://www.col.org/PublicationDocuments/pubQAOSToolkit.pdf

Prasad, V.S. (2014). Institutional frameworks for quality assurance of OER. Keynote Presentation, Proceedings of the 2nd Regional Symposium on OER, (pp.177-181). Penang, 24-27 June. Retrieved July 4, 2014, from http://www.oerasia.org/proceedings

Quality Matters Program (2011). Quality Matters QM Standards 2011-2013. Retrieved December 10, 2012, from http://www.qmprogram.org/files/QM_Standards_2011-2013.pdf

Sloan (2013). A quality scorecard for the administration of online education programs. Newburyport, MA : The Sloan Consortium. Retrieved October 8, 2013, from http://sloanconsortium.org/quality_scoreboard_online_program

SREB - Southern Regional Education Board (2001). Essential principles of quality : Guidelines for web-based courses for middle grades and high school students. Atlanta, GA : SREB Educational Technology Cooperative. Retrieved December 9, 2012, from http://info.sreb.org/programs/EdTech/pubs/EssentialPrincipals/EssentialPrinciples.pdf

Sreedher, R. (Ed.) (2009). Quality assurance of multimedia learning materials. New Delhi : Commonwealth Educational Media Centre for Asia. Retrieved March 3, 2013, from http://cemca.org.in/ckfinder/userfiles/files/QAMLM%201_0.pdf

Williams, K., Kear, K., & Rosewell, J. (2012). Quality assessment for e-learning : A benchmarking approach (2nd edn.). Heerlen, Netherlands : European Association of Distance Teaching Universities (EADTU). Retrieved May 26, 2014, from http://oro.open.ac.uk/34632/2/3D5D7C.pdf
Edition and production
Name of the publication: eLearning Papers
ISSN: 1887-1542
Publisher: openeducation.eu
Edited by: P.A.U. Education, S.L.
Postal address: c/Muntaner 262, 3r, 08021 Barcelona (Spain)
Phone: +34 933 670 400
Email: editorialteam[at]openeducationeuropa[dot]eu
Internet: www.openeducationeuropa.eu/en/elearning_papers
Copyrights
The texts published in this journal, unless otherwise indicated, are subject
to a Creative Commons Attribution-Noncommercial-NoDerivativeWorks
3.0 Unported licence. They may be copied, distributed and broadcast provided that the author and the e-journal that publishes them, eLearning
Papers, are cited. Commercial use and derivative works are not permitted. The full licence can be consulted on http://creativecommons.org/
licenses/by-nc-nd/3.0/
In-depth
Students as evaluators of open educational resources
Authors
Vivien Rolfe
[email protected]
Associate Head of Department of Biological, Biomedical and Analytical Science
University of the West of England
Bristol, UK

The global open education movement is thriving and growing, and as the term open educational resource (OER) reaches adolescence, critical questions are starting to emerge. Is the promise of providing open access to high-quality education being delivered? Startlingly absent from the literature is a thorough understanding of the views of OER student users, be they campus-based or open learners on the web. This paper presents the results of a mixed-method survey of student attitudes to examine their awareness of OER and understanding of quality. Student volunteers additionally worked with university staff to develop an OER evaluation matrix (OEREM) to aid selection and use of resources. Student selection of OER is based on a mixture of value-based and quality-based judgements, and ease of use may simply be driving their choices. Further research is required to understand how the resulting OEREM matrix can be used to support a more informed and critical selection, and also how such strategies can be used to assist open learners, who bring a diverse range of academic abilities and personal circumstances that influence their learning.
Tags
Student perceptions of OER; OER quality; OER movement; OER evaluation matrix; digital literacy

1. Introduction
As the term “open educational resources” (OERs) reaches adolescence (United Nations
Educational Scientific and Cultural Organisation, UNESCO, 2002), it cannot be disputed that
the global open education movement is thriving and growing. The notion of being able to access,
and share, free open resources has driven forward innovative practices and various policy
initiatives showing widespread endorsement from individuals, organisations and nations
(Cape Town Declaration, 2014; European Commission, 2012; UNESCO, 2012). These policies
and reports declare that OER will have transformational effects on education by crossing
geographic, professional and cultural boundaries. As part of these goals, the ‘quality’ of OERs
is heralded as one of the benefits of engaging with them:
• Quality can be improved and the cost of content development reduced by sharing and reusing. (Organisation for Economic Co-operation and Development, OECD, 2007).
• Open Educational Resources (OER) initiatives aspire to provide open access to high-quality education resources on a global scale. (Yuan, MacNeill & Kraan, 2008)
• And institutions are investing to improve the quality of teaching and learning. (UNESCO, 2011).
• OER gives us the previously unimaginable opportunity to use technology to maintain the quality of instructional materials while significantly cutting educational costs. (Wiley, Green & Soares, 2012).
• 'Opening up education' proposes actions towards more open learning environments to deliver education of higher quality and efficacy (European Commission, 2012).
So the notion of OER and quality are inextricably linked, but
seemingly the adolescence may well be a turbulent one,
with questions being raised as to how quality is indeed being
assured and whether the OER are having an impact. What we
do know is that producing high-quality resources is strategically
important as a lever for governments and policy makers to 'buy into' OER. The "increased efficiency and quality of learning
resources” was cited as a reason to engage in OER in a survey
of 82 education ministers from different countries (Hoosen,
2012). In a separate survey of executive staff from within a UK
higher education institution, those interviewed stated that the
quality assurance of OER was an important factor to consider
if an institution were to engage. "How could it be ensured that OER released by a university was high quality and met learner needs?" "The traditional academic quality processes govern the quality of awards, not content." "How would OERs released remain current and reflect the quality of information required by professional bodies?" (Rolfe & Fowler, 2012).
The need for good quality is also imperative in a practical sense.
In a US study of 2,144 faculty from a nationally representative
sample of US higher education providers, the need for trusted, quality teaching resources was cited as the second most important factor in choosing materials to use. In the survey, OER was deemed
to be of equivalent quality to that of traditional educational
resources (Allen & Seaman, 2014). An interesting observation
is that in much of the discourse, the ownership of quality
assurance is never clearly defined: “at a minimum, someone
must capture content, digitise it, check it for copyright issues,
resolve copyright issues, and provide quality assurance of the
final product" (Wiley, 2007). Yuan et al. (2008) outline three underlying models for regulating OER quality that have generally been adopted. Firstly, an institution-based approach implies that the reputation and brand persuade learners and users that the resources are of good quality; in this there is the assumption that internal quality checks are in place. In peer-review approaches, mirroring those used in journal publishing,
production teams would review the quality of OER to agreed
criteria. In an open approach, OER users decide through use
and/or comments whether a resource was useful and of quality
(Yuan et al, 2008).
Within the entire debate little thought has been given to the
role of students and open learners as gatekeepers of these
processes. In a recent review of OER quality issues, Camilleri,
Ehlers and Pawlowski (2014) suggested that quality must be the
responsibility of a range of stakeholders and at each stage of
the OER life cycle. The stakeholders could include policy makers,
education management and academic communities, formal and
informal learners. UNESCO also implicated users in the process,
suggesting that they could engage in assuring the quality of
content through commenting in social networks (UNESCO,
2011).
“Student bodies can encourage students to participate in the
social networking environments that have been created around
OER repositories, so that they play an active role in assuring the
quality of content by adding comments on what content they
are finding useful and why.” (UNESCO, 2011).
One would instantly question whether students would have
the critical ability to do so, and getting students to engage at
all, let alone in meaningful ways, would be a familiar challenge
to many educators as seen in other online learning sectors.
In observations of learner behaviours in massive online open
courses (MOOCs), many participants are inclined not to engage
in discussion boards and address the tasks set (Zutshi, O’Hare, &
Rodafinos, 2013), and a lack of confidence in exposing personal
views might result in participants tending to ‘lurk’ rather than
be active participants (Mackness, Mak, & Williams, 2010). The other problem with engaging open learners in critical processes is that we know startlingly little about them, with most of the research conducted to date focusing on traditionally
enrolled students. We know little about the motivations, digital
and critical literacies of informal open web learners.
2. Aim
The aim of this paper is to investigate student perceptions of OER and specifically to capture their perspectives on what they consider quality materials to be. The paper also describes work undertaken to develop a learning tool to assist students in making quality judgments and to help in their selection process. Aspects of this work have been presented at conferences (Hurt, Towlson & Rolfe, 2012; Hurt, Rolfe & Dyson, 2013).
3. Methodology
The research was conducted in two phases. Firstly, a
questionnaire was circulated to students in the Faculty of
Health and Life Sciences at De Montfort University in 2012 to
evaluate student attitudes toward OER (Survey available for
sharing: Hurt, 2014). The questionnaire comprised qualitative
and quantitative items, including Likert-scale questions to
capture opinion. Follow up interviews with a random selection
of twelve healthcare and science students provided a fuller
understanding of their views. The interviews lasted around 30 minutes and were recorded and transcribed; the content was captured in Microsoft Excel, clustered into themes, and presented as a narrative.
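As a purely illustrative sketch of this coding-and-counting step, and not the authors' actual workflow, coded transcript segments exported from a spreadsheet could be tallied by theme as follows; the file name and the "theme" and "quote" column names are hypothetical.

import pandas as pd

# Hypothetical export: one row per coded transcript segment, with the theme
# assigned to it and the quoted text (file and column names are illustrative only).
coded = pd.read_excel("interview_coding.xlsx")
theme_counts = coded.groupby("theme")["quote"].count().sort_values(ascending=False)
print(theme_counts)  # frequency of each theme across the twelve interviews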
Phase two of this work was in collaboration with the Kimberlin
Library at De Montfort University. Students were invited to
participate in focus groups and worked with library staff to
capture how they interacted with open online resources.
Participants were a mix of undergraduate and postgraduate
science students. The approach was to use a previously
developed evaluation matrix that students used to assist in the
critical appraisal of research articles (Mathers & Leigh, 2008).
Students first applied the original ISEM to a text document to become familiar with it. Students then applied the ISEM to a multimedia OER (e.g. a video available from YouTube).
The students wrote on the matrix with comments and
suggestions on how to make it more relevant to OER. The result
was the OER ‘Evaluation Matrix’ that students could use to
place judgement on the quality of open educational resources
that they might intend using in their studies. A brief evaluation
of the OEREM was then carried out with students and staff.
This research was ethically approved by the host institution.
4. Results
Student awareness and perceptions of OER
The questionnaire was available online and in paper form and
was made available to all students in the Faculty of Health and
Life Sciences. In total, 264 life science students participated
voluntarily as reported previously (Hurt et al, 2013) including
midwifery students, and students of forensic, healthcare,
nursing and biomedical science. Around one third had heard
of the term ‘open educational resource’ and they made a good
effort to define what they felt an OER to be in terms of open
licensing, being freely available resources, being accessible and
re-usable.
Perceived Positives, Engagement and Barriers to OER
The questionnaire revealed many perceived benefits and
barriers to using OER as reported in full (Hurt, 2014). One area
of exploration was how to encourage use. The survey asked
“what help do you think is required to get students to use
open educational resources?” 132 students answered and the
majority of responses suggested that OERs needed to be more
widely advertised in the institution (n=61, 46% responding
to that question), and others felt that the students needed
knowledge on how to use them (n=30, 23% responding).
Student Understanding of OER Quality
There were many aspects of the questionnaire that were
followed up through a series of student interviews. Semi-structured questions were developed partly to understand why students felt they needed more knowledge on how to use OER. For the purposes of this paper, only the discussions relating to those questions are presented here; in reality a much wider conversation was held, as reported (Hurt, 2014).
Students were asked what motivated them to seek out OERs
to assist with their studies. They were also asked how they
make judgements on the quality of resources. What emerged
was a blurring between what might be perceived as ‘quality’
and what might be perceived as ‘study value’. When asked
what motivates them to select OERs to assist with their studies,
students recognised the need to look for credible sources that
were up-to-date:
“An accredited website, like a ‘.org’ or something like that. You
try and get things from universities or from our point of view it’s
usually our core places like the Department of Health and NICE,
proper research institutions that research into the specific thing
that you’re looking for rather than just picking something off the
net that’s written by anyone”. (Midwifery Student).
“A lot of journal articles, unless something has not been clinically
updated or proven otherwise, you get the most up-to-date and
you aim for the last 10 years.” (Midwifery Student)
Further into the interview, students began to infer that they
applied a quality judgement depending on the purpose of their
study and what they wanted to achieve:
“It depends what it is for. If it is for general learning, say for
instance we’d had a lecture or you were reading an article and
there’s a term or a word you’re not familiar with, I wouldn’t be
too worried if I used something like Wikipedia if it’s just for what
does this mean, in other words just a quick definition. If it was
for an academic piece of work that I was submitting, then I’d
want something to be far more solid and sort of valid, research
based.” (Midwifery Student)
In discussing their choice and selection of academic resources, students reasoned that it was less to do with quality; rather, they placed a 'value' on the resource due to ease of use and clarity of content:

"If it's easy to understand. I like pictures."

"If they're about the subject you're learning, if they're good notes and they're easy to understand I think they'll be all right." (Biomedical Science Students)

There was also a value placed on immediacy of access to information and the need to be time effective.

"I think it does because the longer I spend finding it, I tend to lose my concentration and think about what I will do next." (Biomedical Science Student)

"…the thing is, you know, when trudging through books for example, you never really know what you're going to get without reading the index or the content and you still don't know what you're going to get until you've re-read that thing. Whereas on the internet or online you can just scan and have a quick look through and pick out so many key words for what you're actually looking for and get the right research that you need." (Midwifery Student)

Tools-Based Approaches to Judging Quality

What emerges is a picture that suggests that students are choosing OERs on the basis of 'value' as well as 'quality', which might suggest a skills deficit in the critical ability of students. Previous research by Mathers and Leigh (2008) revealed a gap between the critical ability of students to self-assess information and their perception of their own ability. In that study, nearly 69% of students agreed or strongly agreed that they had the ability to critically evaluate information, whereas in reality they were not performing so well (Mathers & Leigh, 2008). The authors developed an information evaluation tool (Information Source Evaluation Matrix, ISEM) that was piloted with students as previously reported, based on a matrix of five discrete criteria – the '5 Ws' (Towlson, Leigh & Mathers, 2010):

• Who is the author?
• What is the relevance of points made?
• Where is the context for points made?
• When was the source published?
• Why: what was the author's reason/purpose for writing the resource?

In the second phase of the present research, the aim was to develop an OER evaluation matrix (OEREM). In all, 38 students were involved in focus groups to develop the matrix. Students provided 33 annotations to the ISEM (critical appraisal) tool to hone it. They provided textual prompts in their own words that they felt were more applicable to OERs. They also commented on the inclusion of visual clues and some definitions of OERs as part of the tool. The resulting document was produced by students as an aid to selecting OER.

Focus Group Results

• Who is the author? Look them up.
• What is the relevance of points made?
• When was the OER produced?
• Why has the OER been produced?
The students confirmed that the resulting matrix was an effective
tool for evaluating the use of OERs. The matrix included brief
instructions as to how to use it, and explanations of what OER are and how to find them. One of the first issues raised was that most OER viewed in the focus groups did not include any of
the relevant information. Students commented that in looking
at multimedia resources such as OER it was difficult to even find
information on the author and date of publication by which to
evaluate it:
“…and this evaluating matrix can easily be used when
evaluating the writing article or essay. But when we are looking
animation it is difficult to find info that will be needed to make
sure it reliable.”
The use of the word 'task' in the "What" criterion was also
confusing to students in the context of an OER compared to a
journal article:
“The matrix is more for journal/article based evaluation as
these videos show no particular argument”.
One of the biggest difficulties encountered, and one that remained unresolved, was how to develop an appropriate question for evaluating the 'quality' of the multimedia design.
As a result of the focus groups, the matrix was developed in an iterative way. A final version of it was evaluated with a number
of stakeholders including library staff, academics and an IT
specialist. They all felt the matrix was useful with “a good visual
strategy for indicating evaluation”. Limitations included that
responses did not always fall into the clearly defined spaces, and
areas for improvement included the inclusion of “something to
do with the format of materials OERs usually Audio/Visual” and
also turning it into an online tool.
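As a minimal sketch of how the student-derived criteria above could be captured for the online version of the matrix suggested by the stakeholders, the following Python structure is purely illustrative: it is not part of the published OEREM, and all names (the classes, fields and the example resource) are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Criterion:
    question: str          # the prompt shown to the student
    prompt_hint: str = ""  # student-worded hint, e.g. "Look them up."
    notes: str = ""        # free-text evidence the student records

@dataclass
class OEREMChecklist:
    resource_title: str
    resource_url: str
    criteria: list = field(default_factory=lambda: [
        Criterion("Who is the author?", "Look them up."),
        Criterion("What is the relevance of points made?"),
        Criterion("When was the OER produced?"),
        Criterion("Why has the OER been produced?"),
    ])

    def unanswered(self):
        """Criteria the student has not yet recorded evidence for."""
        return [c.question for c in self.criteria if not c.notes]

# Example: evaluating a multimedia OER found on YouTube (details hypothetical)
checklist = OEREMChecklist("Cell division animation", "https://youtube.com/...")
checklist.criteria[0].notes = "University biology department channel"
print(checklist.unanswered())

A structure of this kind would also make it straightforward to add the extra criterion the students asked for, concerning the format and production quality of audio-visual materials.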
5. Discussion
The aim of this paper was to investigate the concept of engaging
students as ‘gatekeepers’ of OER quality. How to ensure and
assure the quality of OER is an area ill-resolved, despite the
fact that the notion of ‘quality’ is entrenched well within many
definitions of OER, and it presents a persuasive argument for
academic and political 'buy-in'. Part of the problem might be that there has been little insight into what the OER community actually means by quality, and whether this has different interpretations for different groups. Also, the responsibility for owning OER quality is not resolved, although suggestions are that it needs to engage a wide range of stakeholders (Camilleri et al., 2014), perhaps even using learners as part of the process (UNESCO, 2011).
With the interest in OER and the promise of quality learning,
there is surprisingly little empirical research that relates
the perspectives of university students on using OER, and a
significant gap in understanding activities of open / informal
learners (Bacsich et al, 2011). The present study addressed part
of these concerns by exploring university student views on OER.
Around one third of students were aware of the term ‘open
educational resource’. Students were also good at identifying
some of the key characteristics of OERs including minimal
copyright restrictions, free of fees, reusable and accessible
(Hurt et al, 2013). In a larger-scale survey of the UK student
body by the National Union of Students, data was gathered in
2012 via online survey with 2,807 respondents representing
150 institutions. Participating students were provided with a
definition of OER, and then questioned on levels of awareness.
Around one fifth of traditional students (full-time, campus-based) and a quarter of non-traditional students (over 25,
studying at the Open University or studying part time) claimed
to be aware of OER (National Union of Students, NUS, 2014).
In the NUS survey, students claimed the most positive benefit of OER was to "improve the quality of my learning experience"; however, the survey did not question more deeply the drivers and barriers to using OER, which would have revealed more
of a sector-wide strategic context for moving forward the
open education agenda. There are no studies that relate the
perceptions of open learners or users of OER content on the
web.
Student Acceptance of ‘Value’ and ‘Quality’
The student interviews revealed an interesting perspective
when they described their thought processes for selecting OER.
Their decision-making was based on both quality and value
propositions. They were describing OER in terms of the criteria
relating to quality – currency of resource, authenticity of author,
but they were also applying value judgments, based on ease
of use. Such behaviours are not new observations. In the 'technology acceptance model' proposed by Davis, Bagozzi and Warshaw it is the ease of use and perceived usefulness of the internet that primarily drive end-behaviour (Davis, Bagozzi & Warshaw, 1989). In a further evaluation of this model with
student learners, the perceived usefulness and ease of use of
the web was persuasive toward their changing behaviour and
use of the web in their studies and assignments (Sihombing,
2007). What isn’t clear is in terms of learning behaviour is
whether in our study some students are selecting OERs based
on ease of use because this relates to a lack of critical thought,
or whether critical thought processes are being over-ridden
by the enchantments of the internet. Gaps between students’
perception of their own critical ability and actual ability have
been reported so such questions should form the basis of further
research (Mathers & Leigh, 2008). These authors developed a
tool to assist students in critical selection of research papers
and this was a success. Others favour a more holistic strategy
by involving students where possible in their own learning
processes to ensure they are identifying their own working
standards and making their own critical judgments:
"The involvement of students in identifying standards and/or criteria to apply to their work and making judgments about the extent to which they have met these criteria and standards."
(Boud, 1991).
Boud sees self-assessment as critical to the effectiveness of learning and lifelong learning, and as part of a repertoire of academic skills (alongside feedback from lecturers and peer-assessment) that informs learning development. These all contribute to gaining confidence in making valid judgments and should therefore be built into courses so that the skills can be acquired. Boud's concern, which has been
“Care needs to be taken that it is not linked with activities
which undermine the engagement of students in determining
assessment criteria and in making their own decisions about
their work”. (Boud, 2005).
The present research was a case in point and therefore sought
to allow students to work in multi-disciplinary university teams
over a period of months to produce a student-centered learning
tool. The outcome was an OER Evaluation Matrix, and perhaps the aspects of it that remained unresolved, such as how to evaluate the production and accessibility of multimedia resources, are less significant considering that the students were part of an activity in which they were being encouraged to make decisions about their own work and approaches.
OEREM as a Quality Evaluation Tool
The OER Evaluation Matrix (OEREM, available to download: Hurt et al., 2014) that was developed defined four parameters that students felt were important for judging the quality of resources. The Organisation for Economic Co-operation and Development defines 'quality of content' as:
“Currency, relevance and accuracy of the information. Is the
content clear and concise and informed by scholarship, does
it completely demonstrate the concepts, how flexible is it”?
(Organisation for Economic Co-operation and Development,
2007).
Without placing any pre-defined thoughts in the minds of the
students participating in focus groups, aspects such as currency
and relevance were identified as parameters, alongside
authorship and the motivations for producing the resource.
There was a lack of consensus on whether an evaluation of the technological aspects of the OER in their multiple formats should be included, perhaps because of the difficulties in doing so, and perhaps because these would be construed as value rather than content quality judgments. One outcome of this work was that it informed the design of all future OERs from the group, with detailed title and credit pages included thereafter as
standard practice.
The OEREM was clearly aligned to the ‘quality’ of the resource
rather than its visual appeal. The question remains, would the
tool influence student choice of resource, or would ‘value’
– time, ease of use – override these decisions? A further
evaluation of the OEREM is required in order to determine this.
The OER Quality Dilemma
This paper outlines empirical research in which student perceptions of OER were examined, alongside work that engaged
students in a quality evaluation process. So where does this
leave the notion of OER quality and quality assurance, and how
does this mesh with the underlying philosophical benefit of OER
to “provide open access to high-quality education resources
on a global scale?" (Yuan et al., 2008). Does it matter that OER are not produced to the same rigorous quality assurance that is required of campus-based modules and programmes (Tannhäuser, 2012)? And, as we see elsewhere in relation to open courses such as MOOCs, the traditional academic norms and values of bricks-and-mortar institutions do not readily translate to open online learning, particularly in relation to the ethical values of equality and diversity (Rolfe, 2013).
The altruistic nature and global reach of the OER movement
might in itself create a problem. Randall Collins reflected on the need for conflict within intellectual pursuits as "the energy source of intellectual life" (Collins, 2000). In his work he outlines that creativity can become stagnant and remain unchallenged due to overarching social factors that prevent the critique and questioning of that very work. The necessity to infuse critical reflection into the OER arena is an urgent one, but the movement's ideological appeal may result in a distinct lack of criticality.
We therefore return to the argument by Camilleri et al. (2014) that quality approaches may be best served by holistic methods involving responsibility from all stakeholders at each stage of
the OER life cycle. Institutions and those releasing OER could
explore peer-review and quality-informing processes involving
the users of materials, although it is not clear what the diversity
of specific needs would be. Recognising and satisfying learner
needs in massive online open courses is known to be a challenge
with some participants feeling intimidated and overwhelmed
by the lack of structure, whilst others thrive (Kop, Fournier & Mak, 2011). Beyond academic skills and ability, other pressures fall on autonomous, self-directed learners, and one strategy adopted is to show patterns of activity followed by 'lurking' in order to meet the demands of everyday life (Kop & Carroll, 2012).
The present research is limited in that it provides the context
of one UK higher education institution in which there was a
reasonable level of OER activity within the faculty in question.
Mixed method approaches comprising questionnaire and
student interview provide a robust insight into student
awareness and perceptions of OER, and using a team of
researchers added validity to the analysis. Much of this research
was part of a Masters dissertation project and the student was
supervised by two academics including a Professor of Social
Science and another experienced in the evaluation of open
education.
The development of the OEREM was carried out in an iterative
way, and although feedback from stakeholders was deemed
positive, a longitudinal study of the impact of the matrix would provide insight into its effectiveness and whether it also acts as a decision-making tool. A major limitation of the study is that it involved university-based students and did not consider the views of other open learners who use OER and who are likely to offer a much more colourful and less polarised perspective.
6. Conclusions
In conclusion, the notion of OER quality assurance and
enhancement is complex and requires an exploration of how
to address it, particularly considering the growth of the global
OER movement and increasingly widespread adoption of open
practices.
The need to evaluate OER quality and the critical processes
implicit may conflict with the altruistic and philosophical
stances of ‘open’. These bridges need to be crossed because
policy-makers are looking for high quality learning resources to
inform their decision-making.
One approach to engaging students in OER quality control could be the use of the OEREM, although, as this paper shows, further research is required to evaluate its effectiveness and to question whether it assists with student selection of online learning materials or whether students will continue to choose on the basis of study value as well as quality. Research is certainly needed
to understand more about the communities of open learners
whose levels of literacy, learning behaviours and circumstances
are simply not well understood.
References

Allen, I. E., & Seaman, J. (2014). Opening the Curriculum: Open Educational Resources in U.S. Higher Education 2014. Babson Survey Research Group and Pearson. Retrieved November 1 2014 from http://www.onlinelearningsurvey.com/reports/openingthecurriculum2014.pdf

Bacsich, P., Phillips, B., & Bristow, S. (2011). Learner Use of Online Educational Resources for Learning (LUOREL). Final JISC Report. Retrieved November 1 2014 from http://www.jisc.ac.uk/media/documents/programmes/elearning/oer/LUOERLfinalreport.docx

Boud, D. (1991). HERDSA Green Guide No 5. Implementing student self-assessment (Second ed.). Campbelltown: The Higher Education Research and Development Society of Australasia (HERDSA).

Boud, D. (1995). Enhancing Learning Through Self-Assessment. London: Routledge Falmer.

Camilleri, A. F., Ehlers, U. D., & Pawlowski, J. (2014). State of the Art Review of Quality Issues related to Open Educational Resources (OER). European Commission Report. Retrieved November 1 2014 from http://is.jrc.ec.europa.eu/pages/EAP/documents/201405JRC88304.pdf

Cape Town Declaration (2014). The Cape Town Open Education Declaration. Retrieved November 1 2014 from http://www.capetowndeclaration.org

Collins, R. (2000). The sociology of philosophies. A global theory of intellectual change. Cambridge, MA: The Belknap Press.

Davis, F., Bagozzi, R. P., & Warshaw, P. R. (1989). User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Management Science, 35(8), 982-1003.

European Commission (2012). Opening Up Education. Retrieved November 1 2014 from http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52013DC0654

Hoosen, S. (2012). Survey on Governments' Open Educational Resources (OER) Policies. Report for COL, UNESCO. Retrieved November 1 2014 from http://www.col.org/resources/publications/Pages/detail.aspx?PID=408

Hurt, L. (2014). De Montfort University Student Perceptions and Understanding of Open Education Resources. Masters by Research Dissertation. Jorum deposit retrieved November 1 2014 from https://dspace.jorum.ac.uk/xmlui/handle/10949/19267

Hurt, L., Rolfe, V., & Dyson, S. (2013). Student awareness and perceptions of open educational resources. OER13 Annual Conference. Retrieved November 1 2014 from http://www.medev.ac.uk/oer13/63/view/

Hurt, L., Towlson, K., & Rolfe, V. E. (2012). Enabling Students to Evaluate and Make Effective use of Contemporary Open Education Resources. OER12 Annual Conference, p473–478. Retrieved November 1 2014 from http://www.open.ac.uk/score/files/score/file/Conference%20Proceedings%20Cambridge%202012.pdf

Hurt, L., Towlson, K., & Rolfe, V. E. (2014). OER Evaluation Matrix. Jorum deposit retrieved November 1 2014 from http://find.jorum.ac.uk/resources/19266

Kop, R., & Carroll, F. (2012). Cloud Computing and Creativity: Learning on a Massive Open Online Course. European Journal of Open, Distance and e-Learning: EURODL.

Kop, R., Fournier, H., & Mak, J. S. F. (2011). A pedagogy of abundance or a pedagogy to support human beings? Participant support on massive open online courses. The International Review of Research in Open and Distance Learning, 12(7), 74-93.

Mackness, J., Mak, S. F. J., & Williams, R. (2010). The Ideals and Reality of Participating in a MOOC. In Dirckinck-Holmfeld, L., Hodgson, V., Jones, C., De Laat, M., McConnell, D. and Ryberg, T. (eds.), Proceedings of the 7th International Conference on Networked Learning 2010, Denmark, 3-4 May 2010. Retrieved November 1 2014 from http://www.lancaster.ac.uk/fss/organisations/netlc/past/nlc2010/abstracts/PDFs/Mackness.pdf

Mathers, L., & Leigh, M. (2008). Facilitators and barriers to developing learning communities. Paper delivered at the higher education annual conference 'Transforming the student experience', Harrogate, 1–3 July 2008. Retrieved November 1 2014 from https://www.heacademy.ac.uk/node/3510

National Union of Students, NUS. (2014). Students' views on learning methods and open educational resources in higher education. HEA and Jisc Case Study. Retrieved November 1 2014 from https://www.heacademy.ac.uk/sites/default/files/resources/NUS-report.pdf

Organisation for Economic Co-operation and Development, OECD. (2007). Giving knowledge for free: The emergence of open educational resources. OECD Publishing. Retrieved November 1 2014 from http://www.oecd.org/edu/ceri/38654317.pdf

Rolfe, V. (2013). MOOCs and social responsibility toward learners. In OPEN-ED Open Education 2013, Park City, Utah, November 2013. Retrieved November 1 2014 from http://vivrolfe.com/uncategorized/mooc-research-on-student-experience-and-social-responsibility-towardlearners/

Rolfe, V., & Fowler, M. (2012). HEA/JISC Open Educational Resources; How institutional culture can change to adopt open practices. HEA Case Study. Retrieved November 1 2014 from https://www.heacademy.ac.uk/node/3672

Sihombing, S. O. (2007). Students Adoption of the Internet in Learning: Applying the Theory of Acceptance Model. Paper presented at National Conference "Inovasi Dalam Menghadapi Perubahan Lingkungan Bisnis", Universitas Trisakti, Jakarta.

Tannhäuser, A. (2012). General Reflection on Open Learning Recognition. In A. F. Camilleri & A. Tannhäuser (Eds.), Open Learning Recognition: Taking Open Educational Resources a Step Further (pp. 59–62). Brussels, Belgium: EFQUEL - European Foundation for Quality in e-Learning. Retrieved November 1 2014 from http://efquel.org/wp-content/uploads/2012/12/Open-Learning-Recognition.pdf

Towlson, K., Leigh, M., & Mathers, L. (2010). The Information Source Evaluation Matrix: a quick, easy and transferable content evaluation tool. SCONUL. Retrieved November 1 2014 from http://www.sconul.ac.uk/publication/the-information-source-evaluation-matrix-a-quick-easy-andtransferable-content

United Nations Educational Scientific and Cultural Organisation, UNESCO. (2002). Experts to Assess impact of Open Courseware for Higher Education. 1st Global OER Forum. Retrieved November 1 2014 from http://portal.unesco.org/ci/en/ev.php-URL_ID=2492&URL_DO=DO_TOPIC&URL_SECTION=201.html

United Nations Educational Scientific and Cultural Organisation, UNESCO. (2011). Guidelines for open educational resources (OER) in Higher Education. COL, UNESCO Operational Guide. Retrieved November 1 2014 from http://unesdoc.unesco.org/images/0021/002136/213605E.pdf

United Nations Educational Scientific and Cultural Organisation, UNESCO. (2012). Paris OER Declaration. 2012 World OER Congress UNESCO, Paris, June 20-22, 2012. Retrieved November 1 2014 from http://www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/CI/CI/pdf/Events/Paris%20OER%20Declaration_01.pdf

Wiley, D. (2007). On the Sustainability of Open Educational Resource Initiatives in Higher Education. Paper commissioned by the Organisation for Economic Co-operation and Development. Retrieved November 1 2014 from http://www.oecd.org/edu/ceri/38645447.pdf

Wiley, D., Green, C., & Soares, L. (2012). Dramatically Bringing down the Cost of Education with OER: How Open Education Resources Unlock the Door to Free Learning. Center for American Progress.

Yuan, L., MacNeill, S., & Kraan, W. (2008). Open Educational Resources – Opportunities and Challenges for Higher Education. Bolton: JISC CETIS.

Zutshi, S., O'Hare, S., & Rodafinos, A. (2013). Experiences in MOOCs: The Perspective of Students. American Journal of Distance Education, 27(4), 218-227.
In-depth
Student's Quality Perception and Learning Outcomes When Using an Open Accessible eLearning Resource
Authors
Bissinger, Kerstin
[email protected]
University of Bayreuth,
Didactics of Biology
Department
Researcher, PhD student
Bayreuth, Germany
Bogner, Franz X.
[email protected]
University of Bayreuth,
Didactics of Biology
Department
Professor, head of department
Bayreuth, Germany
The present study focuses on 10th graders’ perceptions of open educational resources
(OERs) by examining students’ rating of one exemplary open educational resource.
Individual cognitive load while working on the web-based module was monitored.
Additionally, we focus on learning outcomes monitored in a pre- and post-test design
using pen and paper questionnaires as well as digital workbooks. The completion of one exemplary task illustrates the potential of OERs to foster critical thinking. Using these different aspects permits us to evaluate the quality of the OER from a student's point of view. In summary, our results point to a positive perception of the OER, a cognitive achievement effect and the potential to support critical thinking, which together indicate good quality of the resource and suggest that our strategy is a viable approach to assessing the quality of OERs in general.
1. Introduction
Tags
Quality of OERs; student’s
perception; learning outcome;
rainforest; climate change
eLearning is regarded as a “new learning paradigm” (Yacob et al. 2012; Sun et al. 2008)
and is consequently entering more and more into today's education. These circumstances are
reflected in the Digital Agenda for Europe, where especially Action 68 requests to “mainstream
eLearning in national curricular for the modernisation of education”. Consequently,
requirements for digital literacy of educators and students lead to the establishment of
initiatives like Open Education Europa or the Open Discovery Space Project (ODS). Both
projects share an important word in their title: "Open". Open can refer to different aspects: education should be open to everyone, or everyone should be allowed to contribute to education regardless of place, time or social circumstances. In this sense open also refers to an additional aspect: "open educational resources" (OERs), meaning digital resources available to everyone. The idea is to motivate teachers to share their ideas across boundaries and
work together in order to promote a literate and responsible next generation of European
citizens. Consequently the main objective of ODS is to introduce changes in educational
practices and develop resource-based learning. As a starting point, pedagogical best practice
scenarios were developed and uploaded to a digital access point: the ODS portal. So far 6,092 teachers are connected within 395 communities and have access to 721,916 resources. The
evaluation of ODS mainly addresses teachers by monitoring their actions on the portal and
the implementation of questionnaires during workshops, coupled with more qualitative
direct evaluation through workshops, interviews and focus groups (Bissinger et al. 2014).
However, the OERs within the teacher communities of the
different projects are providing the basis for collaboration and
learning and thus their quality plays an important role for the
sustainable implementation of open education and learning.
Yet, defining quality in regard to eLearning and OERs is one of
the “central challenges for theory and practice” (Ehlers, 2004).
In order to define quality in this context Dondi (2009) suggests
to “consider the influence and visions of stakeholders on quality
perception”. Besides encouraging teachers to actively engage
in OER-communities by using, adapting and creating digital
resources, students are an important stakeholder group as well.
They are the actual end-user presumably profiting the most
by the integration of innovative learning methods leading to
computational literacy and the fostering of 21st century skills.
Consequently, it is important to examine how students are
affected by and perceive OERs as “quality seems to be in the
eye of the beholder” (Dondi 2009). Even within one stakeholder
group, differences concerning quality perception emerge
(Boonen 2005, cited in Dondi 2009), making the assessment of quality difficult. These difficulties have led to some complaints about a lack of documentation for eLearning as an "effective delivery mechanism" with respect, for instance, to learning outcomes (Manochehr, 2006). Up to now, some results exist in regard to students' perception of eLearning and the related learning outcomes: Morgil et al. (2004) found that students were pre-set to acquire knowledge "through the teacher" and consequently hesitant to use computer-assisted educational
applications; nevertheless, this perception changed after
participating in an eLearning class. Cox (2013) reported students’
attitudes being dependent “upon their overall experience of
using e-learning“. In regard to quality perception Ehlers (2004)
described four groups of quality perception from a learner’s
perspective which differ based on their preferences.
Our present study focuses on the implementation of an
exemplary OER which is publicly available on the ODS portal:
Tropical Rainforest and Climate Change. The OER tackles a
complex thematic area which is imparted through eLearning in
order to help students in a suitable manner to understand the
role of tropical rainforest in the context of climate change and
to establish their own opinions on this media-influenced topic.
Starting from a basic exploration of the ecosystem, students
analyse original data from Ecuador in order to find an informative
basis to detect climate change. They also calculate their own
carbon footprint and become acquainted with various actions
which can be implemented in their daily life in order to tackle
climate change. All these aspects aim to foster education for
sustainable development. Therefore it was officially recognised as a contribution to the UN decade of sustainable development. This learning resource has been viewed by 59 different teachers on the ODS portal since its launch in June 2014 and has been rated 5 out of 5 stars by them.
The current study presents the quality perception of students
reflected by their rating of tasks in regard to their cognitive load
(CL; Paas & Van Merriënboer, 1994) and perception of usefulness.
Additionally, we focus on their cognitive achievements, which provide an external quality criterion. We reflect on our results taking into account students' computer user self-efficacy (CUSE; Cassidy & Eachus, 2002).
2. Material and Methods
114 tenth graders (age 16.51; SD 1.459; 50.88% male, 49.12%
female) participated in a 1.5 hour learning programme
dealing with tropical rainforest and climate change. Students
followed the learning scenario available on the ODS portal
and worked with the OER "Bayreuth goes Ecuador", which is a website comprising several applications such as a video,
texts, an interactive animation, a carbon footprint calculator
and an analysing tool to examine two original datasets of an
interdisciplinary research group recorded in Ecuador. Students are guided through these activities by dividing the unit into three different tasks, each including five leading questions, which they answer directly on the website (and send to themselves via email) or in a digital workbook.
In order to analyse students’ quality perceptions concerning
this learning scenario, the participants were asked to rate how
appealing they found the three different tasks. School grades
were used for this purpose as students are familiar with this scale
ranging from 1 “pleased me very much (well done)” to 6 “pleased
me not at all (insufficient)". Additionally, we asked students to state their CL on a scale ranging from 1 (very low CL) to 9 (very high CL), with 5 as an anchor point defined as the CL of an everyday class. 82 students provided their opinion on these variables.
Measuring learning outcomes was based on a pre-, post-test
design (Figure 1). All tests contained 30 questions dealing with
tropical rainforests and climate change, which were covered by the eLearning programme and a previous preparatory hands-on circle. In order to consider different preferences in regard to computer usage, the post-questionnaire contained the 30-item CUSE scale (Cassidy & Eachus, 2002), which asks for students' self-assessment. Students rate statements concerning
typical computer implementations, problems or fears on a scale ranging from 1 (strongly disagree) to 6 (strongly agree) for each item. All 114 students answered these questionnaires.

Figure 1: Study design. A pen and paper questionnaire was used to identify students' existing previous knowledge two weeks prior to the intervention. Directly after implementing the OER "Bayreuth goes Ecuador", a post-test was assigned to examine students' knowledge acquisition.
Furthermore, the digital workbooks of these 114 students
were analysed regarding two exemplary tasks. Task A involved the shading of tropical rainforest regions on a map (example provided in Figure 3), while task B focused on the analysis of
original data and asked students to draw their own conclusions
concerning the existence of a temperature trend, its causes
and consequences. The latter task was analysed according to
Mayring’s qualitative content analysis (Mayring, 2004), and
statistical analysis was performed using SPSS Version 22 (IBM Corp., 2013).

3. Results
In general, students rated all three tasks within the fair ranges (means between 2.46 and 2.74), although some of them used the total spectrum for their feedback. This is reflected by written comments (in the digital workbooks) like "It was interesting and was fun." (male student, age 14), "...all [other] tasks were ok" (male student, age 15), and "really helpful, recommendable" (male student, age 16). Some students stated the tasks were "complex to handle..." (female student, age 17), which points to a cognitive load issue; here too, the complete range of the scale was used to rate the tasks. Generally, students rated all tasks below the CL of a usual lesson in their classes. Figure 2 illustrates these findings, showing boxplots of the ratings and the stated cognitive load. In regard to students' CUSE values, we found a rather self-confident group of students, with CUSE mean values ranging between 3 and 6 and a mean of 4.44 (SD = 0.704).
Figure 2: students’ perception of eLearning focusing on students’ rating (left boxplots) and their cognitive load (right boxplots). Students rate the
eLearning tasks within fair ranges while their cognitive load is mainly below a typical classroom experience.
The Kolmogorov-Smirnov test yields non-normality for all variables (age, grades, CL, gender, CUSE mean; p < 0.001). Consequently, non-parametric analyses were used for examining further interrelations. There is no connection between gender and the perception of the OER (neither grades nor CL), which is supported by non-significant Mann-Whitney U and Wilcoxon tests using gender as grouping variable (Table 1). Additionally, the computer user self-efficacy of male and female students does not differ significantly (Mann-Whitney U test: U = .332; p = .05).
The provided grades and the CL of each task correlate positively
(marked with asterisks in Table 2). Furthermore, CL also correlates
positively between the different tasks, and additionally the
grade of the first task correlates positively with the second and
third task (Table 2). The computer user self-efficacy of our students does not show any significant correlations, either with the provided grades or with the indicated CL or age.
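The analyses reported above were run in SPSS; as a minimal, purely illustrative sketch of the same non-parametric procedures, the following Python/SciPy snippet assumes the per-student data were exported to a CSV with hypothetical column names such as "gender", "T1_grade" or "CUSE_mean" (it is not the authors' actual syntax, and the plain one-sample K-S call only approximates SPSS's normality check).

import pandas as pd
from scipy import stats

df = pd.read_csv("ods_student_data.csv")  # hypothetical export of the questionnaire data

# Kolmogorov-Smirnov check against a normal distribution fitted to each variable
for col in ["age", "T1_grade", "T1_CL", "CUSE_mean"]:
    x = df[col].dropna()
    ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(col, round(ks.statistic, 3), round(ks.pvalue, 3))

# Mann-Whitney U test: do male and female students rate task 1 differently?
male = df.loc[df["gender"] == "m", "T1_grade"].dropna()
female = df.loc[df["gender"] == "f", "T1_grade"].dropna()
u = stats.mannwhitneyu(male, female, alternative="two-sided")
print("Mann-Whitney U:", u.statistic, "p =", round(u.pvalue, 3))

# Spearman rank correlations between ratings, cognitive load and CUSE
cols = ["T1_grade", "T1_CL", "T2_grade", "T2_CL", "T3_grade", "T3_CL", "CUSE_mean"]
rho, p = stats.spearmanr(df[cols], nan_policy="omit")
print(pd.DataFrame(rho, index=cols, columns=cols).round(3))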
                         T1_grade   T2_grade   T3_grade
Mann-Whitney U            768.500    702.000    719.500
Wilcoxon W               1363.500   1297.000   1895.500
Z                          -0.483     -1.131     -0.983
Asymp. Sig. (2-tailed)      0.629      0.258      0.326

                         T1_CL      T2_CL      T3_CL
Mann-Whitney U            794.500    657.500    763.500
Wilcoxon W               1970.500   1833.500   1939.500
Z                          -0.206     -1.511     -0.502
Asymp. Sig. (2-tailed)      0.837      0.131      0.616

Table 1: Gender equality. Statistical analysis yields no significant differences between male and female students. Both populations rate the OER similarly and need to invest a comparable mental effort, reflected by their cognitive load.

Figure 3: Exemplary task. Students were asked to shade tropical rainforest regions in their digital workbooks.
In regard to student’s learning outcomes, all 30 items showed a
Cronbach’s alpha of 0.77 and thus can be used reliably. Students
generally improved their knowledge by using the eLearning
resource. The maximal knowledge increase constituted 11 items
whereas the mean was 3.16 items with a standard deviation of
2.45 items. No gender effect was present (Mann Whitney U
test: U=-1.562, p=0.118) and no significant correlation between
age and knowledge gain was observed (Spearman Rho=-0.114;
p=0.229). Concerning Task A, an average of 4.17 correct regions (SD=2.04) were shaded in the pre-test. After participating in the scenario, students correctly shaded almost two additional regions in the post-test (mean 1.96, SD 1.828), depicting a significant (p<0.001) knowledge increase.
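As a minimal sketch of the reported reliability and knowledge-gain measures (not the authors' SPSS procedure), the following assumes the 30 knowledge items are coded 0/1 in columns of hypothetical pre- and post-test files.

import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

pre = pd.read_csv("pretest_items.csv")    # hypothetical file names, one column per item
post = pd.read_csv("posttest_items.csv")

print("alpha (post-test items):", round(cronbach_alpha(post), 2))

# Knowledge gain per student and a paired non-parametric pre/post comparison
gain = post.sum(axis=1) - pre.sum(axis=1)
print("mean gain:", round(gain.mean(), 2), "SD:", round(gain.std(ddof=1), 2))
w = stats.wilcoxon(post.sum(axis=1), pre.sum(axis=1))
print("Wilcoxon signed-rank: W =", w.statistic, "p =", w.pvalue)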
            Age    T1_grade  T1_CL   T2_grade  T2_CL   T3_grade  T3_CL   CUSE
Age      R  1.000  .034      -.088   -.053     .052    -.006     -.035   .156
         Sig.      .762      .434    .638      .642    .960      .753    .161
T1_grade R         1.000     .272*   .414**    -.028   .240*     .020    .131
         Sig.                .013    .000      .800    .030      .858    .241
T1_CL    R                   1.000   .090      .361**  .054      .495**  -.019
         Sig.                        .423      .001    .631      .000    .864
T2_grade R                           1.000     .228*   .211      .136    -.018
         Sig.                                  .039    .057      .222    .873
T2_CL    R                                     1.000   .114      .345**  .036
         Sig.                                          .308      .001    .725
T3_grade R                                             1.000     .367**  -.039
         Sig.                                                    .001    .725
T3_CL    R                                                       1.000   -.144
         Sig.                                                            .198
CUSE     R                                                               1.000

* Correlation significant (two-tailed) at 0.05 level.
** Correlation significant (two-tailed) at 0.01 level.

Table 2: Spearman correlations depict significant interrelations between ratings and cognitive load (marked with asterisks), but no significant correlations are present in regard to students' computer self-efficacy and the other variables.
The majority of students correctly recognized a rising temperature trend in the research data of task B, whereas only 7% did not report a trend. Additionally, some students assessed the magnitude of this temperature increase: 4% regarded the increase as huge over time, while 6% described the increase as very small (Figure 4). While reflecting on the reasons for this temperature trend, students provided four concepts: global climate change, the anthropogenic greenhouse effect, human land use and, more generally, the loss of the tropical CO2 repository. The two rather "general global reasons", climate change and the greenhouse effect, were mentioned dominantly, as shown in Figure 5. A similar pattern can be found by
examining the consequences which students suggested. Here
as well, global consequences like “melting of poles”, “changing
of seasons” or “sea level rise” were mentioned often although
students were explicitly asked to describe consequences for the
tropical rainforest ecosystem. The distribution of consequences
is shown in Figure 6. About 50% of the students focussed on
the actual task and described consequences for the tropical
ecosystem.
Figure 4: Students' statements on temperature trends. Students analysed original temperature data from Ecuador and mainly reported observing rising temperatures.
Figure 5: Reasons for temperature increase. Students reflected on reasons for rising temperatures and mainly blamed global problems for the trend.
Figure 6: Consequences of rising temperature in a tropical rainforest ecosystem. Students were asked to think of consequences for a local tropical rainforest ecosystem; however, almost half of them stated global consequences.
More than half of these students mentioned the loss of
biodiversity whereas 10% described the disturbance of the
ecosystem including all connected biological consequences
(adaptation, migration, speciation and extinction). The loss of habitat (mainly for animals) was also described by 10% of our participants.
4. Discussion
We found a generally positive perception of the implemented eLearning scenario "Tropical Rainforest and Climate Change". This possibly reflects students' familiarity and comfort with modern pedagogical practices. Even at home, technology is familiar and widespread: 85% of German households own a personal computer and 93% of people aged 16-24 years use the internet daily (figures provided by the federal statistical office). However, not all students rated the
tasks as pleasant which is reflected through the utilisation of
the complete spectrum of grades. This varying perception of
students is in line with Cox (2013) who described students’
attitudes as being related to their prior experiences, which
are individual and thus probably differ within our sample. However, computer user self-efficacy fails as a predictor of these different perceptions: we detected no significant correlations between CUSE values and any other variable, although our sample size was perhaps inadequate and the self-efficacy of our students was rather high. Interestingly, we found no
gender effects concerning the CUSE values which contradicts
the findings of the original study where Cassidy & Eachus
(2002) found significantly higher CSE values for male students.
A possible reason for these findings is the development of an
information and communication technology (ICT) culture. Since
the first study more than a decade has passed and students’
familiarity with ICT has risen and programmes to foster ICT
skills especially for females were launched within schools and
beyond. These efforts probably closed the gap between male
and female students. The federal statistical office reports that no significant gender differences exist for age groups below 45 years. So perhaps other factors, like interest
or motivation resulting from prior experiences might be more
suitable predictors for varying perceptions and thus could be
taken into account in future studies. Parker (2004) divides quality
into three categories: “learning source, learning process and
learning context”. In our study we might use these categories
to explain the quality of our OER. The "learning source", which refers to the learning materials (e.g. our website "Bayreuth goes Ecuador") as well as the infrastructure, teaching and support, can be assessed by the grades provided by the students. The quality of the "learning processes" can be reflected by the cognitive load of our students, as it is a measure of the cognitive processes which take place. The adequacy of the "learning context" (the
environment in which learning takes place) could be compared
to the CUSE values which were stated by the participants. Taking
these different dimensions into account we receive a subjective
quality perception of our stakeholders. Summarising our results
we tend to assess our OER as being of good quality reflected
by good grades, a low to medium cognitive load and a rather
medium to high CUSE value.
When examining the interrelations of task grades a significant
correlation between the first task and the rating of the
subsequent tasks was found. This could be interpreted as a
loss of motivation. If students did not like the first task (high =
bad mark), they did not like the others either. This might reflect
a general problem with the learning method as described
below. Sun et al. (2008) found seven variables as determining
an eLearners’ satisfaction namely “learner computer anxiety,
instructor attitude toward e-Learning, e-Learning course
flexibility, course quality, perceived usefulness, perceived ease
of use, and diversity in assessment”. Consequently, these
aspects need to be taken into account for the ongoing study.
Another crucial factor is the correlation between students' ratings and CL. Although CL was found to be below a typical classroom experience, we found a positive correlation between CL and grade. As in the German school rating system a high grade reflects a bad mark, we find that students who perceive a task as difficult rate it badly. Furthermore, the correlation
pinpoints a relation between the CL of the first and other tasks.
If students had difficulties with the first task, they are expected
to encounter difficulties with the other tasks as well. This needs
consideration as a general difficulty with the learning object and
thus the media internet and digital workbook. As the webpage
contains a variety of interactive information sources, these
findings are in accordance with Van Merriënboer and Ayres
(2005) who concluded “element interactivity [...] may be too
high to allow for efficient learning.
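To illustrate the kind of check behind this observation, here is a minimal sketch with purely hypothetical ratings (the variable names and values are ours, not the study’s data); it computes the grade–cognitive-load correlation described above.

from scipy.stats import pearsonr

# Hypothetical illustration only -- not the study's data.
# German grades: 1 = very good ... 6 = insufficient (a high grade is a bad mark).
grades = [2, 3, 1, 4, 2, 3, 5, 2, 3, 4]
# Cognitive-load ratings for the same tasks (higher = perceived as more demanding).
cognitive_load = [3, 4, 2, 6, 3, 4, 7, 2, 4, 5]

r, p = pearsonr(grades, cognitive_load)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # positive r: tasks felt to be harder get worse marks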
These interrelations create different types of perceptions, and thus it is not contradictory that students used the whole spectrum to state their opinion on our OER, which is in line with Boonen (2005, cited in Dondi 2009) as explained above. Concerning quality, these findings point to different target groups within our stakeholders who have different quality preferences and thus rated accordingly. Ehlers (2004) defined four quality-specific target groups: “The Individualist”, “the Result-Oriented“, “the Pragmatic” and “the Avantgardist”. All these groups have different preferences which are important for their quality perception and will consequently rate the same OER in a different manner. These preferences derive from different factors, for example learning styles, which should consequently be taken into account in future studies as well.
In order to obtain more impartial quality criteria we analysed students’ learning outcomes. Manochehr (2006) found eLearning to be more effective for particular learning styles than for others: in his study, students following the “Assimilator and Converger” learning style performed better and thus reached a higher knowledge increase than the other students. In our study, all students achieved a knowledge increase, both in the pre- and post-test knowledge questionnaire and
in the development of shading tropical rainforest regions. We found no relationship with gender, age or computer user self-efficacy, suggesting that our eLearning programme is effective for the complete target group and reflecting good quality with regard to the learning outcome.
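As an illustration of how such a pre/post comparison can be checked, the following sketch uses purely hypothetical scores (not the study’s data) and a paired t-test, one common way of testing for a knowledge increase.

from scipy.stats import ttest_rel

# Hypothetical illustration only -- not the study's data.
pre_test = [5, 7, 6, 4, 8, 5, 6, 7]    # knowledge scores before the eLearning module
post_test = [8, 9, 8, 6, 10, 7, 8, 9]  # knowledge scores after the module

t, p = ttest_rel(post_test, pre_test)
print(f"t = {t:.2f}, p = {p:.3f}")  # a significant positive t indicates a knowledge increase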
A knowledge increase through the implementation of an eLearning module has already been reported by other authors (Morgil et al., 2004; Cradler & Cradler, 1999), even leading to the statement that “Online e-Learning is an alternative to traditional face-to-face education” (Sun et al., 2008). However, this conclusion should not be drawn too quickly and is not yet broadly agreed on. Clark (2002) explained that it is “not the medium that causes learning. Rather it is the design of the lesson itself and the best use of instructional methods that make the difference”. Consequently, further studies are needed to find a broader consensus on this issue, especially as European policy makers promote this direction and late-adopting teachers need to become assured of this new learning paradigm.
Regarding the students’ competencies within our eLearning scenario, a majority was capable of correctly interpreting a plot of more than 40,000 data points. Nevertheless, the answers mostly followed a media-influenced pattern, tending to blame global climate change as the ultimate reason. However, a smaller group of our participants reflected in detail on reasons for climate change; when stating the greenhouse effect as a reason, this group clearly emphasised its anthropogenic origin. Additionally, the concept of land use included statements like “tropical rainforests get deforested for palm oil plantations (for our daily consumer goods) which leads to an increased carbon dioxide emission and thus an increased greenhouse effect” (female student, aged 17). These observations are congruent with
the statements provided concerning the consequences of climate change. Although specifically asked to report on consequences affecting the tropical rainforest ecosystem, the vast majority named global consequences, as in the following example: “The consequences of the raising temperature are rising sea levels and melting glaciers.” (male student, aged 17). Here as well, students with a focus on the actual task provided more elaborate statements on the loss of biodiversity, “which we have not even discovered completely and might contain important medical plants or interesting animals” (female student, aged 18). Statements like these might originate from a general interest in the environment or from a preparation lesson within the botanical garden. Future studies should analyse the correlation between environmental attitudes and knowledge conveyed through digital resources: this might help to detect special target groups of learners, which
could lead to tailored learning experiences and strategies. The loss of habitat (Lebensraum), which was mainly mentioned with regard to animals, reflects a classical phenomenon named “zoochauvinism” (Bozniak, 1994): students generally neglect plants as living organisms, both because plants are seldom used as examples in their education and because, at a quick glimpse, plants do not move and do not show obvious signs of life in daily perception. It is interesting to mention that we observed students picking animals first in the interactive animation and seldom examining plants out of their own interest. As students did not necessarily encounter plants in the eLearning module, it is not remarkable to find this common pattern. Interestingly, students who pinpointed the disturbance of the ecosystem included all possible biological consequences and thus did not evaluate the temperature development as something bad, but rather as an ongoing process which conveys evolutionary trade-offs. This interpretation might result from prior class teaching, as ecology, and thus adaptation and speciation, are topics of the 10th grade curriculum. Certainly, it is interesting to see students connecting their existing prior knowledge with their findings throughout our eLearning scenario. These observations are in line with Manochehr (2006) and Cradler & Cradler (1999), as both studies emphasise that eLearning promotes critical thinking and the ability to apply learning.
Acknowledgements
This study was completed within the framework of the Open Discovery Space Project (funded half-and-half by the European Union CIP PSP Grant Agreement No. 297229 and the University of Bayreuth).
We thank the students and teachers who supported this study
through their participation.
5. Conclusion
A successful implementation of any openly available eLearning scenario needs reflection with regard to quality, significant learning outcomes and the activation of critical thinking, in order to promote 21st century skills within the next generation of responsible citizens. Our study points in this direction. Furthermore, in our study students generally perceived the eLearning scenario as a good learning environment in which they could accomplish successful learning outcomes. However, one should take into account the computer self-efficacy of our learners, which was rather high; future studies should focus on more heterogeneous participants, analysing different sample groups in order to examine this factor. Although our study is just one example, as other studies also point to successful learning achievements, we tend to conclude that eLearning offers a good opportunity to pass on knowledge to today’s youth when using a high-quality OER. For assessing the quality of the resource, the combination of rating and cognitive load provided interesting insights and is easy to handle, which might be a good approach for future studies. Nevertheless, in a broader context, other variables such as learning styles and design improvements should be taken into account to further improve this way of learning and students’ perception.
References
Bissinger, K., Arnold, C.J. & Bogner, F.X. (2014). D5.2-2 The Revised Evaluation Plan. Open Discovery Space.

Bozniak, E.C. (1994). Challenges facing plant biology teaching programs. Plant Science Bulletin, 40, 42–26.

Cassidy, S., & Eachus, P. (2002). Developing the computer user self-efficacy (CUSE) scale: Investigating the relationship between computer self-efficacy, gender and experience with computers. Journal of Educational Computing Research, 26(2), 133-153.

Clark, R. (2002). Six principles of effective e-learning: What works and why. The E-Learning Developer’s Journal, 1-10.

Cox, M. J. (2013). Formal to informal learning with IT: research challenges and issues for e-learning. Journal of Computer Assisted Learning, 29(1), 85-105.

Cradler, R., & Cradler, J. (1999). Just in Time: Technology Innovation Challenge Grant Year 2 Evaluation Report for Blackfoot School District No. 55. San Mateo, CA: Educational Support Systems.

Dondi, C. (2009). Innovation and Quality in e-Learning: a European Perspective. Journal of Universal Computer Science, 15(7), 1427-1439.

Ehlers, U. D. (2004). Quality in e-learning from a learner’s perspective. European Journal for Distance and Open Learning, 2004-I. Online: http://www.eurodl.org/materials/contrib/2004/Online_Master_COPs.html

IBM Corp. (2013). IBM SPSS Statistics for Windows, Version 22.0. Armonk, NY: IBM Corp.

Manochehr, N. N. (2006). The influence of learning styles on learners in e-learning environments: An empirical study. Computers in Higher Education Economics Review, 18(1), 10-14.

Mayring, P. (2004). Qualitative content analysis. A companion to qualitative research, 266-269.

Morgil, İ., Arda, S., Seçken, N., Yavuz, S. & Özyalçin Oskay, Ö. (2004). The Influence of Computer-Assisted Education on Environmental Knowledge and Environmental Awareness. Chemistry Education: Research and Practice, 5(2), 99-110.

Paas, F. G., & Van Merriënboer, J. J. (1994). Instructional control of cognitive load in the training of complex cognitive tasks. Educational Psychology Review, 6(4), 351-371.

Parker, N. K. (2004). The quality dilemma in online education. Theory and practice of online learning, 16.

Sun, P. C., Tsai, R. J., Finger, G., Chen, Y. Y., & Yeh, D. (2008). What drives a successful e-Learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education, 50(4), 1183-1202.

Van Merriënboer, J. J., & Ayres, P. (2005). Research on cognitive load theory and its design implications for e-learning. Educational Technology Research and Development, 53(3), 5-13.

Yacob, A., Kadir, A. Z. A., Zainudin, O., & Zurairah, A. (2012). Student Awareness Towards E-Learning In Education. Procedia-Social and Behavioral Sciences, 67, 93-101.
From the field
An Assessment-Recognition Matrix for Analysing Institutional
Practices in the Recognition of Open Learning
Authors
Gabi Witthaus
[email protected]
Research Associate
Institute of Learning Innovation,
University of Leicester, UK.
Mark Childs
[email protected]
Visiting Fellow
Institute of Learning Innovation,
University of Leicester, UK.
Bernard Nkuyubwatsi
[email protected]
PhD student
Institute of Learning Innovation,
University of Leicester, UK
Grainne Conole
[email protected]
Professor of Learning Innovation
and Director of the Institute of
Learning Innovation
University of Leicester, UK
Andreia Inamorato dos Santos *
[email protected]
Research Fellow
European Commission, Joint
Research Centre (JRC), Institute
for Prospective Technological
Studies (IPTS), Information
Society Unit, Sevilla, Spain
Yves Punie *
[email protected]
Project Leader
European Commission, Joint
Research Centre (JRC), Institute
for Prospective Technological
Studies (IPTS), Information
Society Unit, Sevilla, Spain
This paper shares some of the findings of the OpenCred study, conducted by the
Institute of Learning Innovation at the University of Leicester in collaboration with
the European Commission’s Institute for Prospective Technological Studies (IPTS),
and funded by the IPTS. It describes a range of initiatives by higher education and
professional training institutions in Europe in which non-formal, open learning
achievements are recognised. Recognition of learning is almost always conferred in
consideration of the type of assessment used, and so a matrix has been developed
to show the relationship between these two features. The vertical axis of the matrix
comprises a five-level hierarchy of formality of recognition (from no recognition to
full recognition in line with the European Credit Transfer and Accumulation System),
while the horizontal axis represents a five-level hierarchy for robustness of assessment
(from no assessment to formal examinations). Examples of European open education
initiatives are discussed and plotted on the assessment-recognition matrix. The paper
concludes with a summary of the tensions between the assessment procedures used
and the recognition awarded, and offers recommendations for institutions wishing to
evaluate the nature of recognition awarded to open learners. It also identifies further
areas in which the framework could develop.
1. Introduction
OpenCred is part of the OpenEdu1 project of the IPTS, which is exploring institutional
strategies on openness in higher education, with the aim of supporting the design of suitable
policies at a European level. This paper therefore focuses on practices for recognition of open
learning in Europe. Open Educational Resources (OER) and Massive Open Online Courses
(MOOCs) have emerged in recent years and are triggering a mindset change in institutions.
Generalisations concerning what learners require from participation in open education are
premature, in that insufficient research has been conducted regarding, for example, the
degree to which formal recognition of learning is important to these learners. While badging
may prove motivating for some learners, formal recognition of learning may be the main
goal for others. Certificates for open learning achievements vary in terms of their level of
formality of recognition, depending largely on how they are linked to assessment. This paper
looks into emerging practices around the issuing of certificates for open learning in Europe,
and the relationship between assessment and recognition.
Tags
Recognition of open learning,
open education, assessment,
MOOCs, badges
* Disclaimer: The views expressed are purely those of the authors and may not in any circumstances be regarded as stating an official
position of the European Commission.
1 http://is.jrc.ec.europa.eu/pages/EAP/OpenEdu.html
2. MOOCs and the issuing of certificates
The Open Education Europa (2014) ‘European MOOCs
Scoreboard’ indicates that there are currently over 800
MOOCs being offered by Europe-based institutions. At a recent
gathering of the Eurotech alliance, which comprises higher
education institutions from Switzerland, Denmark, Germany
and the Netherlands, a debate on MOOCs and the future of
education was held, and recognition of credits was highlighted
as one of the key challenges in the EU (Eurotech Universities,
2014). The report points out (Ibid) that recognition of open
learning is not just an add-on to the established procedures
of recognition of prior learning; it requires a substantial shift
in mindset, particularly on the part of educational institutions,
where traditionally the roles of teaching, content provision,
assessment and credentialisation (the awarding of diplomas or
degrees) have all been bundled together. The recognition of nonformal open learning achievements requires an ‘unbundling’
of services provided by these institutions (Camilleri and
Tannhäuser, 2013, p.96; Gaebel, 2014, p.28), which can conflict
with the requirements of national quality bodies.
The Trends Report by the Open Education Special Interest Group
(2014) looks at the aspects that would need to be considered in
order to recognise learning achievements in open, non-formal
education. Where MOOC providers offer certificates, Verstelle
et al (2014, p.25) recommend considering two key aspects in
order to determine the value of these certificates:
1. Is the certificate merely proof of attendance or does it
provide evidence of learning? If the latter, how robust
was the assessment? (Multiple-choice questions with
automated marking at one end of the range; the completion
of an examination under supervision at the other).
2. To what degree is the student’s identity validated, and how
much supervision is provided? The report identifies four
levels for these two intertwined elements:
a) No validation of identity – the MOOC relies on the honour
of the student,
b) Online validation by facial recognition or keystroke
tracking,
c) Online monitoring, which requires a moderator/ proctor
to have a 360-degree view of the student’s room transmitted
via a webcam. The Trends report notes that some institutions
would not accept online proctoring as a qualifying examination
environment, regarding it as being prone to fraud, although it
is increasingly being seen as legitimate (Verstelle et al., 2014,
p.25),
d) Attendance at a physical examination site.
Similarly, a report published by the Norwegian Ministry of
Education (Kjeldstad et al. 2014) proposes that for the awarding
of formal academic credit, proof of learning will need to be
demonstrated via examination, and the importance of validation
of the identity of the examinee is stressed (Ibid, section 8.4).
The report (Ibid) lists the following situations in which validation
of identity is required:
• A student wants transfer of credits obtained in a MOOC conducted by a foreign provider to a degree at a local institution,
• A student wants their achievements in a MOOC studied with a foreign provider to be validated as part of the admission process to higher education in a local institution,
• A local institution offers a MOOC and awards credits for successful completion of assessment tasks,
• An employee wants to include their participation in a MOOC in their documentation of competence when applying for a job.
These reports underline the importance of linking the means of
assessment with the nature of recognition awarded to learners.
Other aspects also need to be considered, as noted above by
the Eurotech alliance, but for the purposes of this paper we will
consider the relationship between assessment and recognition
practices currently observed within institutional initiatives in
Europe.
3. The OpenCred Study
This study is mainly based on publicly available information from
open education websites, working groups, projects and studies
across Europe. The aim was to identify any general principles
regarding recognition of open learning that could inform
discussions in the field and support developers and learners,
by clarifying the range of options and models for recognition of
open learning that existed and might be replicated.
During the study, it became apparent that the concept of ‘open’
was rather blurred, and that it can mean different things to
different developers and learners. Furthermore, although non-formal learning was the focus of the study, it is often difficult
to distinguish between formal and non-formal learning. (See
glossary for definitions used in the study.) Rather than impose
limitations on how open or non-formal a course must be to be
included, we included any course or initiative promoted under
the banner of open education.
The initiatives identified ranged considerably in the degree
to which open learning was formally recognised, with some
offering no recognition at all, or simply a completion certificate
or badge, and others offering exemption from examinations or
courses, or ECTS credits. They also varied enormously in relation
to the criteria identified in the Trends Report for robustness of
assessment described above, both in terms of the nature of the
assessment, and in the degree to which the learner’s identity
was verified. Treating assessment as a separate category from recognition could appear to be a purely academic exercise, since in most discussions about open education the two tend to be inextricably intertwined; however, there is variance in the degree to which they are linked, as two similar assessment processes may not lead to the same sort of award. Treating them as two discrete categories enables numerical values to be separately ascribed to both assessment (assigning a higher value to assessment processes that appear to be more robust) and recognition (assigning higher values to more formal credentials), and so provides an opportunity to test how strong this link actually is.
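The idea of ascribing separate numerical values and then testing the strength of the link can be made concrete in a few lines; the sketch below uses invented example pairs (not the initiatives analysed later in this paper) and a rank correlation as one possible measure.

from scipy.stats import spearmanr

# Invented example pairs (recognition level, assessment level), each on a 0-4 scale.
initiatives = {
    "course A": (1, 1),
    "course B": (2, 4),
    "course C": (3, 2),
    "course D": (4, 4),
}

recognition = [r for r, a in initiatives.values()]
assessment = [a for r, a in initiatives.values()]
rho, p = spearmanr(recognition, assessment)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")  # strength of the assessment-recognition link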
A proposed hierarchy of descriptors for the formality of
recognition of open learning in European open education
initiatives is shown in Table 1. This set of descriptors was
developed with reference to the discussion of MOOCs and
credits in the reports by the Dutch Open Education Special
Interest Group (2014) and the NVAO group (2014, p.6-7),
also from the Netherlands. A value has been provided for
each descriptor, and these values will be used to organise the
information about institutional open education initiatives in the
following section.
Table 1: Formality of recognition

Level 0: No formal recognition

Level 1: Unauthenticated completion certificate/statement of accomplishment or badge showing proof of participation or completion2

Level 2: Authenticated certificate or badge which either (a) contains limited/no information on the nature of the course, the nature of the learner’s achievement and the nature of the assessment process used, or (b) indicates that the learner’s identity was verified online but there was no supervision during assessment (as is typical in Coursera MOOCs with Signature Track)3

Level 3: Certificate providing exemption from a specified entrance exam; certificate conferring between 1 and 4 ECTS credits

Level 4: Certificate conferring a minimum of 5 ECTS credits; certificate providing exemption from a specified module/course or part of a qualification at the issuing institution; continuing professional development (CPD) credits; certificate from an accredited institution which ‘(a) formally and clearly states on whose authority it was issued, provides information on the content, level and study load, states that the holder has achieved the desired learning objectives, provides information on the testing methods employed and lists the credits obtained, according to a standard international system or in some other acceptable format, (b) is demonstrably and clearly based on authentication [i.e. student’s identity is verified] and (c) states that the examinations have been administered under supervision and specifies the nature of this supervision.’ (NVAO 2014, p.9)

2 http://bluebox.ippt.pan.pl/~vkoval/vk_files/coursera/Game_Theory_II_130707.jpg
3 For example, see this sample certificate on Coursera’s website: https://s3.amazonaws.com/coursera/specializations/jhudatascience/cert_icon.png
Table 2 lists the proposed descriptors for robustness of assessment.

Table 2: Robustness of assessment

Level 0: No assessment

Level 1: Record of completion of activities; self-assessment; assessment with automated checking, e.g. multiple-choice questions (MCQs), submission of programming code, or acceptance of a submission of text on the basis of word count (no verification of identity); peer assessment (no verification of identity)

Level 2: Online examination with verified identity and no real-time supervision, e.g. Coursera’s Signature Track4 or Accredible’s5 ‘self-proctoring’ (in which a recording is made of the student’s screen and face while the examination is in progress, and is compressed into a 2-minute time-lapse video embedded in the certificate)

Level 3: Submission of coursework and/or performance of practical tasks where the student is personally known to the examiner (the context may be either face-to-face or online; the assumption is that inconsistencies in performance style will be picked up and this minimises the likelihood of cheating; this is common practice in traditional online courses, e.g. online MBA programmes); online examination with identity verification and real-time proctoring (e.g. ProctorU6, Proctor2Me7 or Remote Proctor8, which has a panel of proctors check individual examination recordings)

Level 4: On-site examination (including on-site challenge exams); recognition of prior learning (RPL) conducted by recognised expert(s) (e.g. based on portfolio submission and/or interview – requires a relatively low candidate-to-assessor ratio and hence generally not scalable to open initiatives)

4 https://www.coursera.org/signature/guidebook

The open education initiatives discussed below were selected from those included in the European-wide desk research undertaken for the OpenCred study. The selection was a stratified sample, in that it aimed to provide a good range of different combinations of formality of recognition versus robustness of assessment, but within each stratum examples were chosen randomly. The initiatives are organised according to formality of recognition, as per Table 1, starting with Level 1. After that we will map selected initiatives onto a matrix in order to test the degree to which the means of assessment is linked to the nature of recognition awarded to learners.

Examples of Level 1 recognition initiatives (unauthenticated certificates or badges)
The first MOOC in Croatia9 was convened by the organisation
CARNet (Croatian Academic Research Network) and was on the
subject of creating courses in Moodle. It began in January 2014,
and had 440 participants (CARNet 2014). Learners could earn
one or more of the three badges offered: Attendant, Designer
and Distinguished Attendant. Peer assessment was used to
ascertain whether participants qualified for badges. At the end
of the course, 80 participants obtained the Attendant badge,
over 70 obtained the Designer badge and around 70 achieved
the Distinguished Attendant badge. Learners’ feedback
indicated that obtaining badges motivated them to learn, and
several individuals obtained all three badges. (In summary,
this initiative has a ‘formality of recognition’ level of 1 and a
‘robustness of assessment’ level of 1, according to Tables 1 and
2 respectively. These levels will be represented as R1; A1, and
this convention will be used for all the remaining initiatives in
this section. The MOOC will be referred to as ‘Moodle’ for short
in Figures 1 and 2 below.)
The openHPI10 platform for free, online learning, which
launched in 2012, is an initiative of the Hasso Plattner Institute
based at the University of Potsdam, Germany. They specialise
in IT and computer programming topics and claim to have
offered the first German language MOOC. This focused on the
technological functionality of the Internet. 11,000 learners
participated, of whom 1,662 received a certificate of successful
completion (Allgaier, 2013). A ‘graded record of achievement’
is offered to candidates on ‘successful completion’ of openHPI
courses. ‘Successful participation means that you earn at least
50% of the sum of maximum possible points for homework
and final exam. The final exam will weigh 50%. The record of
achievement will be issued in the name you used to register
at open HPI’ (openHPI 2012-2014). Certificates also indicate
whether the learner’s results fall within the top 5, 10 or 20% of
the class (Meinel and Willems 2013, p.6) (Short name ‘Internet’:
R1; A1.)
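The quoted grading rule amounts to a simple weighted threshold. The sketch below is one literal reading of it; the function name, weighting and example numbers are our own interpretation, not an official openHPI specification.

def earns_record_of_achievement(homework_pts, homework_max, exam_pts, exam_max):
    """One literal reading of the quoted openHPI rule (our interpretation):
    the final exam weighs 50%, and at least 50% of the maximum possible
    points are required for a record of achievement."""
    # Normalise so that homework and the final exam each contribute up to 50 points.
    total = 50 * homework_pts / homework_max + 50 * exam_pts / exam_max
    return total >= 50

print(earns_record_of_achievement(30, 60, 40, 80))  # True: 25 + 25 = 50 points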
In France, the National Ministry of Education launched a national
portal for MOOCs through the France Université Numérique
5 https://accredible.com/
6 http://www.proctoru.com
7 http://www.procwise.com
8 http://www.softwaresecure.com/product/remote-proctor-now/
9 http://www.carnet.hr/loomen/moodle_mooc
10 https://open.hpi.de/
(FUN)11 in October 2013. MOOCs offered via this portal are
required to adhere to a set of quality standards and guidelines.
The guidelines suggest that recognition should be given for
attendance and participation, rather than for achievement of
learning objectives, citing the difficulties involved in supervising
online assessment. The perspective of FUN is that assessment
in MOOCs can only be conducted through automation or peer
assessment, and both have limitations: automation provides
assessment of only superficial information, and the answers
can also be easily disseminated amongst participants leading to
high potential for cheating, while peer assessment is ‘a trade-off
between workload imposed on participants and the precision
of the evaluation’ (Cisel 2013, pp.19-25). The use of badges is
recommended, mainly as a way of encouraging participation.
Badges can be awarded automatically for completing tasks and
can act as a gradual record of completion. Cisel (2013, p.28)
concludes that badges ‘are mainly used today to encourage
participants to interact on forums, but could have a growing
importance in the process of reward for work done over the
years.’ In fact, most of the MOOCs currently available on the
FUN platform appear to offer unverified completion certificates
(which have the same status as unverified badges in Table 1
above): one such example is ‘From Manager to Leader 2.0’12.
(Short name ‘Leader’: R1; A1.)
These are summarised in Table 3.

Table 3: Open education initiatives in sample that award Level 1 recognition

Name of course (translated into English where applicable) | Code in Figure 1 | Formality of recognition | Robustness of assessment
Creating Courses in Moodle (CARNet) | Moodle | 1 | 1
The Technological Functionality of the Internet (openHPI) | Internet | 1 | 1
From Manager to Leader 2.0 (FUN) | Leader | 1 | 1
Examples of Level 2 recognition initiatives
(authenticated certificates; no credits)
Since Coursera courses offer ‘verified certificates’ to students
who complete a course on the Signature Track, the Coursera
MOOCs by European providers that offer this option fall under
Level 2 in the formality of recognition hierarchy. The University
of London was an early MOOC adopter on the Coursera
platform. During one of the iterations of the ‘English Common
Law: Structure and Principles’ MOOC, the course leader received
several emails of thanks from students, some of which included
mention of how they were using their verified certificate to gain
credits from other universities. Unfortunately for the purposes
of this study, however, the email correspondence with those
students and the related data has been deleted in keeping with
data protection requirements (Lockley 2014). Nevertheless, this
anecdotal evidence indicates that even a relatively low level of
formal recognition offered by a MOOC provider may lead to
more substantial recognition by other institutions.
The Copenhagen Business School in Denmark offers a MOOC
on ‘Social Entrepreneurship’ via Coursera which also awards a
Coursera verified certificate. (Short name ‘SocEnt’: R2; A2.)
A French commercial provider, the First Finance Institute, has
established a ‘Business MOOC platform’ which claims to have
over 50,000 members. Authenticated certificates are awarded
for exams taken at Pearson centres around the world. The
MOOCs are offered free of charge for the first four weeks, after
which students are invited to continue for ‘an optional week 5’
for a small fee ($29 for students and $59 for professionals) –
which includes the assessment and certification (First Business
MOOC 2014). The certificate does not confer academic credit;
nevertheless according to the organisation’s website, some
students say they will add their MOOC experience to their CVs.
An example of an upcoming MOOC on this platform is the ‘Wall
Street MOOC’. (Short name ‘Wall’: R2; A4.)
11 http://www.france-universite-numerique.fr/moocs.html
12 https://www.france-universite-numerique-mooc.fr/courses/CNAM/01002S02/Trimestre_1_2015/about
These are summarised in Table 4.

Table 4: Open education initiatives in sample that award Level 2 recognition

Name of course | Code in Figure 1 | Formality of recognition | Robustness of assessment
English Common Law: Structure and Principles (University of London) | Law | 2 | 2
Social Entrepreneurship (Copenhagen Business School) | SocEnt | 2 | 2
Wall Street MOOC (FFI) | Wall | 2 | 4
Examples of Level 3 recognition initiatives
(authenticated certificates; fewer than 5
ECTS credits; exemption from entrance
exam)
Many of the European MOOC-providing institutions that are
promoting their offerings under the umbrella of the OpenUpEd
portal13 award formal certificates, which they describe on their
website as ‘official credits that can count towards obtaining a
degree (i.e., ECTS).’14
Università Telematica Internazionale, UNINETTUNO, in Italy,
provides the vast majority of the courses listed on the OpenupEd
portal (104 out of 160), covering a very wide range of subjects.
All include self-evaluation exercises and peer to peer reviewing
of exercises, and a ‘Students Activities Tracking’ system that
generates graphics, reports and statistics on learners’ activities.
Learners may opt to take a final examination at UNINETTUNO
headquarters or at designated national and international
centres. Learners who want to get ECTS credits for these
MOOCs need to enrol in the corresponding course offered by
this university. Then, a tutor is assigned to the enrolled student,
whose learning activities are also recorded. A final exam is
administered to the MOOC participants, and those enrolled
students who pass the exam are awarded ECTS credits. An
example is the MOOC ‘Measurement Theory’, which leads to 2
ECTS credits. (Short name ‘Measure’: R3; A1.)
Another example from the OpenUpEd portal comes from the
Portuguese Open University (Universidade Aberta), which offers
a MOOC on climate change in which students have the following
recognition options: ‘1) Certificate of course completion through
a peer assessment process. 2) Paid formal credit (4 ECTS), if
required by participants in a period of up to 3 months after the
course, pending subsequent formal assessment of the work in
the course and a face-to-face exam.’15 (Short name ‘Climate’:
For candidates who take the exam, the values are R3; A4.)
In Finland, the University of Helsinki’s Department of Computer
Science runs courses in which students are required to produce
programming code that is automatically assessed using the
institution’s TestMyCode (TMC) testing system. This made
the tasks easily adaptable to a MOOC format, and MOOCs
on programming have been running since 2012 (University
of Helsinki Department of Computer Science, nd). Of the 417
participants of the first cohort, 38 were subsequently accepted
into the Computer Science department and the department is
considering using attendance on the MOOC as an alternative to
passing an entrance exam (Vairimaa, 2013, pp.3-6). (Short name
‘Coding’: R3; A1.)
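Automated checking of programming exercises, as done by Helsinki’s TestMyCode system, can be pictured as instructor-written tests run against each submission; the sketch below is a generic illustration of that idea, not the actual TMC system.

import unittest

def student_mean(values):
    """A (hypothetical) student submission to be assessed automatically."""
    return sum(values) / len(values)

class TestSubmission(unittest.TestCase):
    # Instructor-written checks; the pass/fail result feeds the automated grade.
    def test_mean_of_integers(self):
        self.assertAlmostEqual(student_mean([1, 2, 3, 4]), 2.5)

    def test_mean_of_single_value(self):
        self.assertAlmostEqual(student_mean([7]), 7.0)

if __name__ == "__main__":
    unittest.main()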
These are summarised in Table 5.

Table 5: Open education initiatives in sample that award Level 3 recognition

Name of course | Code in Figure 1 | Formality of recognition | Robustness of assessment
Climate Change (Open University Portugal) | Climate | 3 | 4
Measurement Theory (UNINETTUNO) | Measure | 3 | 2
Coding (University of Helsinki) | Coding | 3 | 1
13 http://www.openuped.eu
14 http://www.openuped.eu/mooc-features/recognition-options
15 http://www.openuped.eu/courses/details/1/10
Examples of Level 4 recognition initiatives
(equivalent forms of formal recognition to
those used in formal education)
Many of the initiatives that award formal recognition at the
highest level tend to offer a range of recognition options for
students. In some cases, providing institutions offer special
recognition privileges for students who are enrolled in their fee-bearing programmes, and in others, open learners are given the
option to pay a fee for assessment and credits.
In Cyprus, the University of Nicosia recently engaged in MOOC
provision (MassiveOpenOnlineCourse.com 2014). Introduction
to Digital Currencies16 was offered in mid-2014 and was
taught by an expert on the concept of the Bitcoin. This MOOC
contributes 10 ECTS credits of a total of 90 ECTS credits for the
Master of Science in Digital Currency that is being developed
at this university (University of Nicosia 2014). Each module of
ten ECTS credits costs the student 1,470 Euro, apart from the
first module, which is offered for free as a MOOC. The formal
recognition option is available only to students enrolled on
the university’s MSc, while members of the public can achieve
badges and completion certificates. (Short name ‘Currencies’:
For enrolled students, the values are R4; A4.)
Vytautas Magnus University in Lithuania provides a non-MOOC
example in this category, in that teachers undergoing initial
teacher training can have their use of OER included with their
theoretical and practical achievements when applying for RPL
(Cedefop 2007). This enables them to achieve exemption from
certain courses, thereby reducing the duration of their formal
training period. This is a particularly interesting example because
of the lack of any formal assessment associated with OER. It is
the process of recognition of prior learning that creates a layer
of robust assessment and enables recognition for the learner.
(Short name ‘TeacherOER’: R4; A4.)
In Ireland, a draft policy document has been drawn up by the Institute of Technology Sligo (IT Sligo), which includes the intention to include non-formal open learning within their RPL procedures. IT Sligo is a member of the OER universitas (OERu)17, a global consortium of post-secondary institutions aiming to collaboratively provide courses leading to full credentialisation for learners at minimal cost. IT Sligo academics are currently developing modules in electronics and engineering for the OERu, and will be piloting the new RPL policy and procedures in these modules, including determining RPL on the basis of challenge exams (Institiuid Teicneolaiochta, Sligeach, 2014). Learners will be charged a ‘minimal’ fee to cover the cost of the assessment. Procedures for running the challenge examinations are being formulated and it is likely that online proctoring will be used (Clinch 2014). (Short name ‘Challenge’: R4; A3.)

These are summarised in Table 6.

Table 6: Open education initiatives in sample that award Level 4 recognition

Name of course or initiative | Code in Figure 1 | Formality of recognition | Robustness of assessment
Introduction to Digital Currencies (University of Nicosia) | Currencies | 4 | 4
Recognition of OER in Teacher Training (Vytautas Magnus University) | TeacherOER | 4 | 4
IT Sligo’s plans for challenge examinations | Challenge | 4 | 3

The assessment-recognition matrix

In order to compare and contrast the recognition opportunities for non-formal open learning described above, we will map them onto a matrix with formality of recognition on the vertical axis and robustness of assessment on the horizontal axis. Each of these axes has a spectrum of values from zero to four, represented by the descriptors in Tables 1 and 2. When the initiatives described above are mapped onto the matrix, the following picture emerges:

16 http://digitalcurrency.unic.ac.cy/free-introductory-mooc
17 http://oeru.org/
Figure 1: The OpenCred Assessment-Recognition Matrix
4. Discussion
The ‘Currencies’ MOOC and the ‘Moodle’ MOOC show an
expected pattern, where the level of formality of recognition
is commensurate with the level of robustness of assessment.
Similarly, the results for the ‘Climate’ MOOC are unsurprising,
in that a very robust form of assessment (onsite examination)
leads to a relatively high form of recognition (4 ECTS credits).
In fact, in almost all the cases, formality of recognition is
closely linked to the robustness of assessment, the levels being
identical or differing from each other by only one point (indicated
by the diagonal lines superimposed on the matrix in Figure 2).
Figure 2, showing which open learning initiatives differ by more than
one value within the parameters of robustness of assessment and
formality of recognition
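The pattern visible in Figure 2 can be reproduced directly from the level values listed in Tables 3 to 6; the short sketch below (our own illustration, not part of the original study) flags the initiatives whose two levels differ by more than one point.

# (recognition level, assessment level) pairs taken from Tables 3-6.
initiatives = {
    "Moodle": (1, 1), "Internet": (1, 1), "Leader": (1, 1),
    "Law": (2, 2), "SocEnt": (2, 2), "Wall": (2, 4),
    "Climate": (3, 4), "Measure": (3, 2), "Coding": (3, 1),
    "Currencies": (4, 4), "TeacherOER": (4, 4), "Challenge": (4, 3),
}

outliers = [name for name, (r, a) in initiatives.items() if abs(r - a) > 1]
print(outliers)  # ['Wall', 'Coding'] -- the two cases discussed below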
Only two cases fall outside of this close relationship between the level of formality of recognition and the level of robustness of assessment. In the case of the ‘Coding’
MOOC, 3 ECTS credits are awarded for a relatively non-robust
form of assessment (multiple-choice questions). In this case,
however, it appears that the institution has developed a
sophisticated automated marking system for the purposes
of checking programming code (TestMyCode), which gives
them the confidence to issue this relatively formal award.
The opposite anomaly is evident in the case of the Wall Street
MOOC (‘Wall’), where a very robust form of assessment, an
on-site examination, receives no formal academic credit. Since
the MOOC does not cover the full scope of content that is
covered on the corresponding certificate course offered at this
institution, learners who want recognition will need to pay the
fees and enrol on the full course. The MOOC therefore seems to
be functioning as a ‘taster’ for the full course. The institution’s
rationale for offering an examination is not clear – perhaps it is
to enable learners to self-evaluate their readiness for the full
course.
It is also worth noting that the ‘TeacherOER’ initiative, which
scores the highest possible points for both formality of
recognition and robustness of assessment on the basis of RPL
procedures, is typical of many OER and MOOC provisions used
in support of continuous professional development (CPD) or in-service training. In many CPD programmes, staff are relied upon
to accurately report their learning to their employer (Open
Education Special Interest Group 2014, p.25). Similar situations
occur where professional bodies require their members to undertake a certain amount of training per year (Open Education
Special Interest Group 2014, p.28). Usually, CPD takes place
within a closed environment and with a favourable ratio
between assessors and candidates, and this makes it difficult to
replicate at scale in open education. Nevertheless, the flexibility
of open education means it is an ideal way of helping candidates
and their employers meet the necessary requirements.
In addition to the open learning recognition initiatives described
earlier, a data point has been added to the matrix for a
‘traditional online MA module’ (the point labelled MA, which is
shared with the data point for IT Sligo’s ‘Challenge’ exams). This
is to show that the most robust form of assessment, physical,
onsite examinations, is not very common in traditional, closed,
online programmes offered by many European universities.
These programmes often only require students to submit
assignments, and there is limited or no checking of the students’
identity; nevertheless, full academic credentials are awarded to
successful students. It may be that the relatively low student-staff ratio, combined with the typical requirement that students
participate in forums and other online tasks throughout the
term helps staff to notice instances of cheating. However, one
possible consequence of the evolution of open, online courses
with relatively rigorous assessment options is that mainstream
online courses may come under pressure to upgrade the rigour
of their assessments.
5. Conclusions
The matrix was created to help clarify the distinctions between
different levels of assessment and recognition. The descriptors
and values assigned are open to debate, but in itself prompting
this debate seems to be a useful exercise, as this forces an analysis
of the various merits of, for example, badging, certification and
more formal awards. In discussing this with academics, it has
also proven useful as a tool for them to use in analysing their
institutions’ assessment and recognition practices.
Further to this, the development of the matrix provides the
basis for a series of observations concerning open educational
courses. The first of these arises from the process through
which the open educational courses were selected for inclusion
in the matrix. Many were excluded because they did not
contain sufficient information about assessment requirements
or recognition options. Even upon enrolling on several selected
MOOCs, we found that much of this information was still not
available.
The matrix also indicates that, in the main, robustness of
assessment and recognition of learning are very closely
linked for the majority of open learning initiatives. This raises
a contradiction in the argument that MOOCs represent an
opportunity for more accessible and inclusive educational
provision. Formal recognition requires robust assessment, and
robust assessment requires tutors to review performance and
students to have their identities validated. This all requires
financing. To the extent that these costs have to be passed on to
learners, or learners have to be enrolled on one of the providing
institution’s mainstream programmes to receive recognition,
MOOCs become that much less open and less inclusive. The
challenge for institutions is to overcome this low cost / high
value incompatibility in the most cost-effective way.
The matrix is also revealing in what it does not include. In the
interests of space, we excluded all cases with a ‘zero’ level of
recognition. This may give the false impression that most open
learning initiatives do have some form of recognition, but in
reality, a substantial number of initiatives were found which
offered no recognition at all. This may reflect the perception
among many stakeholders that online education is a poor second
to face-to-face education. As one learner interviewed for the
study stated:
No-one takes an online exam seriously. If employers see my
certificate and it says I did it online, they do not know that the
online exam was proctored and my identity was confirmed and
so on. But if they know that I went to the University … and took
an exam, that is much more serious. Then they know that I have
learnt something important.18

18 Andreas Schumm, learner on the ‘Data Structures and Algorithms’ MOOC by the University of Osnabrueck
For the formal recognition of open learning to be more easily
accepted, a wider awareness-raising process may need to
be implemented by employers and traditional educational
institutions.
The matrix may be useful in identifying where paths to resolving
the low cost/ high value incompatibility lie. One path identified
through this study (and there may be others) is in the use of
open education in CPD. Where self-reporting is accepted as a
valid form of assessment for CPD, open learning can meet the
needs of both learners and employers, in that it is flexible, wide-ranging in scope, and (generally) free.
As context is key to considering the interplay between
assessment and recognition, further work is needed in extending
the review of open courses to those outside of Europe as well
as exploring additional characteristics of open learning for
inclusion in the framework – for example, how ‘open’ is open,
and what forms of learning take place within them? This may
enable more precise modelling of different types and contexts
of open education to inform developers and learners about
what options are available for constructing courses, and which
examples already exist and, perhaps (considering the cost /
value incompatibility), those that can exist.
6. Glossary
Formal learning: Learning that occurs in an organised and
structured context (e.g. in an education or training institution
or on the job) and is explicitly designated as learning (in terms
of objectives, time or resources). Formal learning is intentional
from the learner’s point of view. It typically leads to validation
and certification (CEDEFOP 2009).
Informal learning: Learning resulting from daily activities related
to work, family or leisure. It is not organised or structured in
terms of objectives, time or learning support. Informal learning
is mostly unintentional from the learner’s perspective (CEDEFOP
2009).
Non-formal learning: Learning which is embedded in planned
activities not always explicitly designated as learning (in terms
of learning objectives, learning time or learning support), but
which contain an important learning element. Non-formal
learning is intentional from the learner’s point of view (CEDEFOP
2009).
Open learning: Open learning is an innovative movement in
education that emerged in the 1970s and evolved into fields of
practice and study. The term refers generally to activities that
either enhance learning opportunities within formal education
systems or broaden learning opportunities beyond formal
education systems (D’Antoni 2009, cited in Wikipedia).
References
Allgaier, H.J. (2013, August 28). Hasso Plattner Institute: MOOC Learners at openHPI Show a High Success Rate. idw news. Retrieved from https://idw-online.de/de/news548300.

Camilleri, A.F. and Tannhäuser, A.C. (2013). Chapter 4: Assessment and Recognition of Open Learning. In L. Squires and A. Meiszner (eds) Openness and Education. Bingley: Emerald Group Publishing Limited, pp.85-118.

CARNet (2014, January 28). MOOC Moodle. Translation by Google Translate. Retrieved from http://translate.google.com/translate?depth=1&hl=en&prev=/search%3Fq%3Dcarnet%2Bmooc%26client%3Dfirefox-a%26hs%3D571%26rls%3Dorg.mozilla:en-US:official%26channel%3Dsb&rurl=translate.google.co.uk&sl=hr&u=http://www.carnet.hr/loomen/moodle_mooc.

CEDEFOP (2007). Recognition and validation of non-formal and informal learning for VET teachers and trainers in the EU member state. Retrieved from http://www.cedefop.europa.eu/etv/Upload/Information_resources/Bookshop/480/5174_en.pdf

CEDEFOP (2009). European guidelines for validating non-formal and informal learning. Retrieved from http://www.cedefop.europa.eu/en/publications-and-resources/publications/4054

Cisel, M. (2013). Le Guide du MOOC. Retrieved from http://www.france-universite-numerique.fr/IMG/pdf/guide_mooc_complet_vf.pdf

Clinch, G. (2014). Personal Communication (June 2014).

D’Antoni, S. (2009). Open Educational Resources: reviewing initiatives and issues. Open Learning: The Journal of Open, Distance and e-Learning, 24(1, Special Issue), 4.

Eurotech Universities (2014, July 7). EuroTech Universities session animates debate on MOOCs and future of education at ESOF2014. Retrieved from http://eurotech-universities.eu/eurotech-universities-session-animates-debate-on-moocs-and-future-of-education-at-esof2014/.

Gaebel, M. (2014). MOOCs: Massive Open Online Courses. EUA occasional papers. Retrieved from http://www.eua.be/Libraries/Publication/MOOCs_Update_January_2014.sflb.ashx.

Institiuid Teicneolaiochta, Sligeach (2014). Procedure Title: Recognition of Prior Learning for Module Exemption/Credit and for Initial or Advanced Admission to a Programme. Institute of Technology Sligo.

Kjeldstad, B., Alvestrand, H., Elvestad, O.E., Ingebretsen, T., Melve, I., Bongo, M. et al. (2014). MOOCs for Norway: New digital learning methods in higher education. Retrieved from http://www.regjeringen.no/pages/38804804/PDFS/NOU201420140005000EN_PDFS.pdf

Lockley, P. (2014). Personal Communication (June 2014).

MassiveOpenOnlineCourse.com (2014). Two big trends meet, digital currencies MOOC: The University of Nicosia prepares to launch ‘Introduction to Digital Currencies’. Retrieved from http://massiveopenonlinecourses.com/tag/cyprus/.

Meinel, C. and Willems, C. (2013). openHPI: Das MOOC-Angebot des Hasso-Plattner-Instituts. Technische Berichte Nr. 79. Retrieved from http://opus.kobv.de/ubp/volltexte/2013/6680/pdf/tbhpi79.pdf.

NVAO (2014). MOOCs and online HE: A survey. Retrieved from http://www.nvao.net/page/downloads/NVAO_MOOCs_and_online_HE_A_survey_June_2014.pdf.

Open Education Europa (2014). European MOOCs scoreboard. Retrieved from http://www.openeducationeuropa.eu/en/european_scoreboard_moocs.

Open Education Special Interest Group (2014). Open Education Trend Report. SURF, Netherlands.

openHPI (2012-2014). FAQ. Retrieved from https://openhpi.de/pages/faq.

Poldoja, H. (2014, April 22). Open Education in Estonia: Presentation at the Boldic project webinar. Retrieved from http://www.slideshare.net/hanspoldoja/open-education-in-estonia.

University of Helsinki Department of Computer Science (nd). Mikä on MOOC? Retrieved from http://mooc.cs.helsinki.fi/content/mik%C3%A4-mooc.

University of Nicosia (2014). MSc. in Digital Currency. Retrieved from http://digitalcurrency.unic.ac.cy/admissions-registration/admissions-criteria.

University of Tartu (2014). Estimation of Measurement Uncertainty in Chemical Analysis. Retrieved from https://sisu.ut.ee/measurement/uncertainty.

Vairimaa, R. (2013). Joustava MOOC: Avoimet verkkokurssit käyvät vaikka pääsykokeen korvikkeeksi. Yliopistolainen 2, pp.3-6. Retrieved from http://palvelut.unigrafia.fi/yliopistolainen_2_2013/.

Verstelle, M., Schreuder, M. & Jelgerhuis, H. (2014). Recognition of MOOCs in the Education Sector. 2014 Open Education Trend Report, pp.24–25. Retrieved from https://www.surf.nl/en/knowledge-and-innovation/knowledge-base/2014/2014-open-education-trend-report.html.
From the field
Peer-review Platform for Astronomy Education Activities
Authors
Pedro Russo
[email protected] nl
astroEDU Managing Editor
Thilina Heenatigala
[email protected]
astroEDU Assistant Editor
Leiden Observatory / Leiden
University
The Netherlands
Edward Gomez
[email protected]
astroEDU Managing Editor
Las Cumbres Observatory
Global Telescope Network
(LCOGT)
California, USA
Linda Strubbe
[email protected]
astroEDU Editor in Chief
Canadian Institute for
Theoretical Astrophysics
Toronto, Canada
Tags
Astronomy Education, Open
Educational Resources, Web
technologies, Educational
Repositories
Hundreds of thousands of astronomy education activities exist, but their discoverability and quality are highly variable. The web platform for astronomy education activities, astroEDU, presented in this paper tries to solve these issues. Using the familiar peer-review workflow of scientific publications, astroEDU is improving standards of quality, visibility and accessibility, while providing credibility to these astronomy education activities. astroEDU targets activity guides, tutorials and other educational activities in the area of astronomy education, prepared by teachers, educators and other education specialists. Each of the astroEDU activities is peer-reviewed by an educator as well as an astronomer to ensure a high standard in terms of scientific content and educational value. All reviewed materials are then stored in a free open online database, enabling broad distribution in a range of different formats. In this way astroEDU is not just another web repository for educational resources but a mechanism for peer-reviewing and publishing high-quality astronomy education activities in an open access way. This paper provides an account of the implementation and first findings of the use of astroEDU.
1. Introduction
The amount of educational content freely available on the Web is large and growing fast. Many challenges have emerged for educators when looking for and comparing resources available online, most of them related to the discoverability, quality and openness of the resources. The Open Educational Resources (OERs) model (Hylén, 2006) addressed some of these challenges, offering a new, scalable, and potentially powerful vision of learning. OERs are teaching, learning and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use or re-purposing (Atkins et al., 2007). This concept has been further developed: making OERs cost-free to the end-user and allowing the end-user the freedom to Reuse, Revise (alter), Remix and Redistribute them, the 4R framework. This framework was initially presented by Wiley and expanded in detail by Hilton III et al. (2010) (Figure 1).
Figure 1. Open Educational Resources 4R framework: Reuse, Revise (Alter), Remix, Redistribute, as presented by Hilton III, Wiley, Stein & Johnson (2010)
The far-reaching impact of OERs on society is not yet widely understood, and their creation, use and evaluation remain challenging (Smith & Casserly, 2006). Not all of these challenges derive from the OER model itself; some stem from external factors, for example the level of Internet access (two thirds of the world's population still has no Internet access) or educational policies at different levels, within institutions and in government (Morley, 2012). Moreover, OER is seen as a potential threat to educational content held by publishing houses (OECD, 2007).
Nevertheless, the growing number of OERs has opened a new way for science education to produce, develop and distribute resources. The number of repositories that store these resources has been growing in recent years, each with a different emphasis. Below we present a summary of repositories which are specific to astronomy education activities and resources.
2. Science Education Repositories
Although there are thousands of educational repositories [10], archiving a variety of resource types, there are not many repositories of
educational resources specifically for astronomy. The table below gives an overview of existing repositories for astronomy education
in English.
Repository | Compadre (Physics and Astronomy Education Activities) | Galileo Teachers Training Program | Astronomical Society of the Pacific | NASA Wavelength | astroEDU
URL | www.compadre.org | www.galileoteachers.org | www.astrosociety.org/education/ | www.nasawavelength.org/ | www.iau.org/astroEDU
Type of review | Internal review | Internal review | Internal review | Internal review | Peer review
Open to submissions? | Yes | No | No | No | Yes
Multilingual support? | No | No | No | No | Planned
User registration required? | No | No | Yes | No | No
Library type | Repository (resource records) | Repository (resources in PDF) | Listing (lists of links to resources) | Repository (resource records) | Library
License | Various (from no license to copyrighted) | Various (from no license to copyrighted) | Various (from no license to copyrighted) | Various (from no license to copyrighted) | Creative Commons Attribution 3.0 Unported
Resource types | Student resources, teaching resources, games, A/V material, reference materials, courses, webpages | Teaching resources, tools, webpages | Tools, datasets, reference materials | Student resources, teaching resources, datasets, reference materials, news & events, webpages | Lesson plans
Connection to the curriculum | None | None | None | US National Science Education Standards | Yes (English version: UK, Australia, US)
Collection growing? | N/A | N/A | N/A | Yes | Yes
Reference | Deustua, 2004 | N/A | N/A | Smith, 2013 | Russo, Gomez, et al. 2014
Table 1. Overview of existing repositories for astronomy education activities in English (each row lists values for Compadre, the Galileo Teachers Training Program, the Astronomical Society of the Pacific, NASA Wavelength and astroEDU, in that order).
The information collated in those repositories is mostly well organized and of high quality; however, none of them fully satisfies the OER 4R model mentioned previously. Often the material cannot be revised or remixed; in most cases it can only be redistributed and reused. All of these repositories are available only in English, and the majority do not provide the original source text of materials, which would facilitate adaptation and translation, essential for an international platform.
The discoverability of resources is another problem. There is no evidence that teachers use repositories of educational material to find their resources. Some educators use generic search engines, such as Google, but the results of a Google search provide very little indication of the quality of the resources. Yet quality is one of the most important criteria for educators when they search for learning resources online (Brent, 2012).
3. astroEDU
Although OERs offer a good solution for sharing knowledge, particularly when open educational resources are put on the internet, ensuring that these OERs are also of high quality remains a challenge. To address this we propose astroEDU (www.iau.org/astroedu), an online platform for sharing astronomy OERs. astroEDU conforms to the 4R framework but adds a new component, review, to ensure the resources are of the highest quality. In this respect astroEDU extends the 4R model to a 5R model, where Review becomes the fifth 'R'. To address the need to review the resources we propose a review system similar to that used in academic knowledge creation and dissemination: the peer-review model.
Peer review was introduced to scholarly publication more
than 280 years ago, as a process to evaluate any work by one
or more people of similar competence to the producers of the
work (also called peers). In academia this constitutes a form of
self-regulation by qualified members of a profession within the
relevant field. Peer review methods are employed to maintain
standards of quality, improve performance, and provide
credibility. In academia, peer review is often used to determine
an academic paper’s suitability for publication (Scanlan 2013).
Interestingly, the peer-review process has not been widely used for the evaluation of science education products such as educational activities. Early attempts at using this methodology were made by Cafolla (2006) and Gold et al. (2012), with varying levels of success.
More recent attempts have been made by NASA Wavelength
and Climate Literacy and Energy Awareness Network (CLEAN) to
integrate peer-selection and internal peer-review. The resources
are selected by a panel or by team members and then reviewed
by a pre-appointed review board. These review methodologies
enhance the quality of the resources, but do not provide a
system for any educator to submit their resources, which is a
limitation of both platforms.
The innovative aspect of astroEDU is the use of peer review in a similar way to its use in scholarly publication. The suitability for publication of an activity is evaluated by two peer stakeholders: an educator and an astronomy researcher. In this way both the educational and the scientific accuracy of the activity are checked and reviewed.
For astroEDU the methodology used is anonymous peer review, also called blind review: pre-publication review by reviewers who are known to the editor but whose names are not given to the activity's author. The reviewers, in turn, do not know the author's identity, as any identifying information is stripped from the document before review. In this respect astroEDU's form of peer review is double-blind and free from bias (a simplified sketch of such a workflow follows Figure 2). Moreover, in the same way that peer-reviewed scholarly articles are the main metric for performance evaluation of scholars, astroEDU will provide a new metric to assess the quality of the work developed by educators.
Figure 2. Proposed new OER framework (5R): addition of Review, with the scientific and pedagogical quality of content checked (and improved) by community peers.
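The editorial workflow sketched above can be viewed, loosely, as a small state machine. The snippet below is only an illustration of that idea under our own assumptions; the status names and transitions are hypothetical and do not describe astroEDU's actual implementation.

```python
from enum import Enum

class ActivityStatus(Enum):
    SUBMITTED = "submitted"                    # received via e-mail or web form
    UNDER_REVIEW = "under_review"              # sent to one educator and one astronomer
    REVISIONS_REQUESTED = "revisions_requested"
    ACCEPTED = "accepted"
    PUBLISHED = "published"                    # DOI assigned, output formats generated
    REJECTED = "rejected"

# Allowed transitions in this simplified double-blind workflow (hypothetical).
TRANSITIONS = {
    ActivityStatus.SUBMITTED: {ActivityStatus.UNDER_REVIEW, ActivityStatus.REJECTED},
    ActivityStatus.UNDER_REVIEW: {ActivityStatus.REVISIONS_REQUESTED,
                                  ActivityStatus.ACCEPTED, ActivityStatus.REJECTED},
    ActivityStatus.REVISIONS_REQUESTED: {ActivityStatus.UNDER_REVIEW},
    ActivityStatus.ACCEPTED: {ActivityStatus.PUBLISHED},
}

def advance(current: ActivityStatus, new: ActivityStatus) -> ActivityStatus:
    """Move an activity to a new status, enforcing the workflow order."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move from {current.value} to {new.value}")
    return new

# Example: a submission goes out to review and comes back with revision requests.
status = advance(ActivityStatus.SUBMITTED, ActivityStatus.UNDER_REVIEW)
status = advance(status, ActivityStatus.REVISIONS_REQUESTED)
```

In practice such a guard would sit behind whatever editorial tooling handles submissions; here it simply makes the review stages explicit.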
To ensure a rigorous peer review of educational activities, an activity template was established and designed by the astroEDU editorial board. For this, specific learning outcomes need to be identified, as they define the logic of the activity and the
evaluation structure. For astroEDU, an educational taxonomy was established based on AGU (2013). astroEDU activities follow the standard components defined by several authors in science education (USU 2008). The components can be broken into four main areas: Objectives (What will your students learn?), Materials (What are the teaching instruments?), Processes (How will you teach your students?) and Assessment (How will you assess your students' learning?). Table 2 gives a detailed explanation of the different sections of an astroEDU activity; a sketch of one activity represented as structured data follows the table.
Sections of astroEDU activities.
Description
Activity title
Full title of the activity.
Keywords
Any words that relate to the subject, goals or audience of the activity. Note that most submissions will include a variety of keywords; the editor must ensure that relevant keywords are selected and added. Important for online search.
Age range
All age categories the activity applies to. The categories may change depending on the
reviewers’ and editorial board’s input
Education level
The education level will change depending on the reviewers’ and editorial board’s input.
Time
The time taken to complete the activity.
Group size
Defines whether the activity is for individual or group use. Can also provide information such as how many students per teacher.
Supervised for safety
Indicates whether the activity has steps that require adult supervision for safety, e.g. using scissors.
Cost
Estimated cost of any materials needed for the activity. astroEDU uses the Euro (€) as its currency.
Location
Suitable location to conduct the activity (for example indoors or outdoors).
List of material
List of items needed for the activity. Try to find materials which are easily and cheaply
available in most countries (or offer alternatives)
Overall Activity Goals
A short list of points outlining the general purpose of the activity, and why these are
important for students to learn. For example, “The overall goals of the activity are for
students to understand why we experience seasons, and to improve their ability to
communicate scientific findings. Seasons are important to understand because they affect
daily life and involve concepts about light that are important in other contexts as well.”
(More specific learning objectives are entered in the field "Learning Objectives")
Learning Objectives
Learning objectives are specific statements that define the expected goals of an activity in
terms of demonstrable skills or knowledge that will be acquired by a student as a result of
instruction. These are also known as: instructional objectives, learning outcomes, learning
goals. The demonstration of learning should be tied directly to "Evaluation". On the
following page you can find some additional information on how to formulate good learning
objectives: http://edutechwiki.unige.ch/en/Learning_objective. Use terminology listed on
the page. For example, “Students will use the concept of solar flux as a function of incidence
angle to explain why it is hot in summer and cold in winter in Toronto.”
Evaluation
Include ways to test the goals, learning objectives and key skills learned by the audience.
A way to assess the implementation of the activity and your performance should also be
included.
Background information
This section contains information that teachers will read prior to beginning the activity.
Necessary scientific background information needed to implement this activity. Limit each
topic to one paragraph and keep in mind what is actually necessary for the activity. Also
keep in mind the background of the teacher (e.g., explain concepts clearly, and do not use
inappropriately technical vocabulary).
Core skills
Indicates which of the following the activity involves: Asking questions, Developing and using models, Planning and carrying out investigations, Analysing and interpreting data, Using mathematics and computational thinking, Constructing explanations, Engaging in argument from evidence, Communicating information, or a combination of these.
Type of learning activity
Enquiry models are a pedagogical methodology for learning activities where the educational
activity "starts by posing questions, problems or scenarios, rather than simply presenting
established facts or portraying a smooth path to knowledge". There are several approaches
to enquiry-based instruction. These approaches include: Open-ended enquiry, Guided
enquiry, Structured enquiry, Confirmation or Verification, Fun Learning.
Brief Summary
One-paragraph short description of the activity. The description should give an introduction
to the activity as well as what to expect from completing the activity.
Full description of the activity:
Detailed step-by-step breakdown of the activity. Use graphics where possible to show the
steps.
Connection to school curriculum
Add the curriculum connection from the relevant country or region. The astroEDU editorial
board will help find further connections.
Table 2. astroEDU Educational Activities Taxonomy
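To make the template in Table 2 concrete, the following sketch shows how one activity might be represented as structured data. The field names and example values are our own illustration (the seasons example is borrowed from the table above); they are not astroEDU's internal schema.

```python
# Illustrative activity record following the Table 2 template
# (field names and example values are ours, not astroEDU's schema).
activity = {
    "title": "Why do we have seasons?",
    "keywords": ["seasons", "Earth", "solar flux", "primary school"],
    "age_range": "8-10",
    "education_level": "Primary",
    "time": "45 minutes",
    "group_size": "Groups of 3-4 students",
    "supervised_for_safety": False,
    "cost_eur": 5.00,
    "location": "Indoors",
    "materials": ["torch", "globe or ball", "ruler"],
    "overall_goals": "Students understand why we experience seasons.",
    "learning_objectives": [
        "Students will use solar flux as a function of incidence angle "
        "to explain why it is hot in summer and cold in winter."
    ],
    "evaluation": "Ask students to predict the season for a given Earth-Sun geometry.",
    "background_information": "One paragraph per topic, written for the teacher.",
    "core_skills": ["Developing and using models", "Constructing explanations"],
    "type_of_learning_activity": "Guided enquiry",
    "brief_summary": "A short hands-on model of the Earth-Sun system.",
    "full_description": "Step-by-step instructions, with graphics where possible.",
    "curriculum_connection": {"UK": "to be added", "Australia": "to be added", "US": "to be added"},
}
```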
4. astroEDU Technical Implementation
The publication workflow of astroEDU was designed to remove barriers to the creation, submission, use and re-use, and sharing of high-quality content. To achieve these goals, astroEDU uses off-the-shelf web technologies for the production and publication workflows.
Submission is done via e-mail or a web form (Typeform). Google documents and spreadsheets are used as collaborative tools within the editorial workflow, as described in Figure 3. Central to the philosophy of astroEDU is disseminating the best astronomy education resources. The majority of educator interaction with astroEDU will be searching and browsing for resources on the website. After review, activities are made available in many different formats: PDF, .doc, HTML and EPUB, including the source files (RTF) for future translations and remixes (a minimal conversion sketch follows Figure 3). These successful activities are then syndicated through educational resource repositories and sharing sites (for example Scientix (ref), TES (ref.) and OER). One of the main goals of astroEDU is to promote the use of excellent activities worldwide. That is the reason why all astroEDU activities are licensed under the Creative Commons Attribution 3.0 Unported license. All astroEDU activities are labeled with a Digital Object Identifier (DOI), to provide a form of persistent identification for future reference and an easy way for educators to reference their activities, just as in scholarly papers.
Figure 3. Production pipeline (to be produced)
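As a rough illustration of the format-conversion step, the sketch below turns a Markdown source file into HTML with the Python `markdown` package and into a very plain PDF with ReportLab (both listed in Table 3). It is a simplified example under our own assumptions, not the actual astroEDU pipeline, which also produces .doc and EPUB output and applies proper templates.

```python
# pip install markdown reportlab -- illustrative conversion only, not the astroEDU pipeline.
from xml.sax.saxutils import escape

import markdown
from reportlab.lib.pagesizes import A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import Paragraph, SimpleDocTemplate

def publish(md_path: str, html_path: str, pdf_path: str) -> None:
    """Convert one activity's Markdown source into HTML and a plain PDF."""
    with open(md_path, encoding="utf-8") as f:
        source = f.read()

    # HTML for the website.
    html = markdown.markdown(source)
    with open(html_path, "w", encoding="utf-8") as f:
        f.write(html)

    # A minimal PDF: one Paragraph per Markdown block, no styling or images.
    styles = getSampleStyleSheet()
    doc = SimpleDocTemplate(pdf_path, pagesize=A4)
    flowables = [Paragraph(escape(block), styles["Normal"])
                 for block in source.split("\n\n") if block.strip()]
    doc.build(flowables)

# Example usage (assuming an activity.md file exists):
# publish("activity.md", "activity.html", "activity.pdf")
```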
The front-end website uses several web technologies, mainly open-source software. Table 3 gives an overview of these technologies; a minimal model sketch follows the table.
Web Technology
Use in astroEDU
Reference
Django
Django is a high-level Python Web framework
that encourages rapid development and clean,
pragmatic design. Django was designed to handle
two challenges: the intensive deadlines of a
newsroom and the stringent requirements of
the experienced Web developers who wrote it.
It lets you build high-performing, elegant Web
applications quickly.
Forcier, J., Bissex, P., & Chun, W. (2009). Python
Web development with Django. Upper Saddle
River, NJ: Addison-Wesley.
Python
Python is a dynamic object-oriented programming
language that can be used for many kinds of
software development. It offers strong support
for integration with other languages and tools,
comes with extensive standard libraries, and can
be learned in a few days.
Forcier, J., Bissex, P., & Chun, W. (2009). Python
Web development with Django. Upper Saddle
River, NJ: Addison-Wesley.
MariaDB
MariaDB is a robust, scalable, and reliable SQL
server. MariaDB is a drop-in replacement for
MySQL.
MariaDB An enhanced, drop-in replacement for
MySQL. (n.d.). Retrieved May 22, 2014, from
https://mariadb.org/en/
NGINX
NGINX is a high performance, open source web
application accelerator that helps websites deliver
more content, faster, to its users.
Nginx news. (n.d.). Retrieved May 22, 2014, from
http://nginx.org/
Memcached
Memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.
Memcached. (n.d.). Retrieved May 22, 2014, from http://memcached.org/
Elasticsearch
Elasticsearch is a search server based on Lucene.
It provides a distributed, multitenant-capable full-text search engine with a RESTful web interface
and schema-free JSON documents.
Elasticsearch. (n.d.). Retrieved May 22, 2014, from
http://www.elasticsearch.org/overview/
Markdown and ReportLab
PDF and EPUB are digital formats optimised for
printing and e-books, respectively. astroEDU
activities are available in these formats to enable
broader use, leveraging technologies such as
Markdown and ReportLab.
Markdown: Syntax. (n.d.). Retrieved May 22, 2014, from http://daringfireball.net/projects/markdown/syntax
ReportLab open-source PDF Toolkit. (n.d.).
Retrieved May 22, 2014, from http://www.
reportlab.com/opensource
Table 3. Web technologies used to develop and implement astroEDU.
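To show how the pieces in Table 3 might fit together, here is a minimal Django model for an activity record. The fields are hypothetical, chosen only to mirror the activity template; astroEDU's real models are not published in this paper. Full-text search over such records would then be delegated to Elasticsearch, with pages cached by Memcached behind NGINX.

```python
# models.py (inside a Django app) -- a minimal, hypothetical sketch of an activity
# record; astroEDU's actual schema is not documented here.
from django.db import models

class Activity(models.Model):
    """One peer-reviewed astronomy education activity."""
    title = models.CharField(max_length=200)
    keywords = models.CharField(max_length=500, blank=True)
    age_range = models.CharField(max_length=50)
    doi = models.CharField(max_length=100, blank=True)      # assigned on publication
    brief_summary = models.TextField()
    source_markdown = models.TextField()                    # used to generate PDF/HTML/EPUB
    published = models.BooleanField(default=False)
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self) -> str:
        return self.title
```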
5. Conclusions & Future Work
astroEDU is an open-access platform for peer-reviewed
astronomy education activities and makes the best astronomy
activities accessible to educators around the world. As a
platform for educators to discover, review, distribute, improve,
and remix educational astronomy activities, astroEDU tries to
solve some past issues with educational activities in astronomy,
namely that high-quality educational activities are difficult
to find. As a proof-of-concept, the whole astroEDU process
demonstrates that through peer-review and publication in a
simple, elegant website, high-quality educational materials can
be made available in an open-access way. The initial results and feedback are positive: "[AstroEDU is] off to a promising start, with a pleasing range of activities suited to children of all ages and abilities" (Physics World, 2014).
The pedagogical impact of astroEDU will be measured over the next years, as more activities populate the repository and more educators use the materials. In the near future astroEDU will also explore new ways to review its content, mainly through classroom evaluation and post-publication evaluation. For classroom evaluation, some randomized evaluations will be run in schools in Wales (UK) and the Netherlands. Educators can also use the comments box attached to each activity, so that those who use an activity can discuss how it worked for them. astroEDU will also test new models of peer review, for example open peer review. The discoverability of educational material is another issue that will be addressed in the next development steps; using techniques such as Search Engine Optimization we expect to increase the number of users of the astroEDU activities. astroEDU is currently available in English, although it already welcomes submissions in any language, and it is anticipated that the platform will be offered in other languages in early 2015. Only a truly cross-platform and cross-language experience will be useful for educators and teachers around the world, and astroEDU will try to achieve that.
Acknowledgement:
astroEDU was developed with funding from the European Community's Seventh Framework Programme ([FP7/2007-2013]) under grant agreement n° 263325. astroEDU is a project of the International Astronomical Union's Office of Astronomy for Development. astroEDU development was supported by the International Astronomical Union, Universe Awareness, Leiden University, LCOGT and the European Union. We would like to thank Silvia Simionato, Alejandro Cárdenas-Avendaño, Bruno Rino and Jan Pomierny for their comments on this article.
References
Atkins, D. E., Brown, J. S., & Hammond, A. L. (2007). A review of the open educational resources (OER) movement: Achievements, challenges, and new opportunities (pp. 1-84). Creative Commons.
Brent, I., Gibbs, G., & Katarzyna, A. (2012). Defining openness: updating the concept of "open" for a connected world. JIME 2012: JIME Special Issue on Open Educational Resources.
Cafolla, R. (2006). Project MERLOT: Bringing peer review to web-based educational resources. Journal of Technology and Teacher Education, 14(2), 313-323.
Deustua, S. E. (2004). ComPADRE: Communities of Physics and Astronomy Digital Resources in Education. Mercury, 33(2), 19.
Gold, A. U., Ledley, T. S., Buhr, S. M., Fox, S., McCaffrey, M., Niepold, F., Manduca, C., & Lynds, S. E. (2012). Peer-Review of Digital Educational Resources—A Rigorous Review Process Developed by the Climate Literacy and Energy Awareness Network (CLEAN). Journal of Geoscience Education, 60(4), 295-308.
Hilton III, J., Wiley, D., Stein, J., & Johnson, A. (2010). The four 'R's of openness and ALMS analysis: frameworks for open educational resources. Open Learning, 25(1), 37-44.
Hylén, J. (2006). Open educational resources: Opportunities and challenges. Proceedings of Open Education, 49-63.
Morley, M. (2012). "Open Policy". Squire Morley, 'Open Policy' for Openness in Education course.
Organisation for Economic Co-operation and Development [OECD] (2007). Giving knowledge for free: The emergence of open educational resources. Paris: OECD Centre for Educational Research and Innovation. Available from: http://www.oecd.org/dataoecd/1/53/38484866.pdf [Accessed June 2014].
Physics World (2014). Web life: AstroEDU review. Physics World, March 2014.
Scanlan, S. (2013). NANO: New American Notes Online: An Interdisciplinary Academic Journal for Big Ideas in a Small World, Intro Peer Review. Retrieved 20 June 2014, from http://www.nanocrit.com/issues/issue-3-peer-review-new-possibilities/
Smith, D. A., Schwerin, T. G., Peticolas, L. M., Porcello, D., Kansa, E., Shipp, S. S., & Bartolone, L. (2013, December). NASA Wavelength: A Full Spectrum of NASA Resources for Earth and Space Science Education. In AGU Fall Meeting Abstracts (Vol. 1, p. 0705).
Smith, M. S., & Casserly, C. M. (2006). The promise of open educational resources. Change: The Magazine of Higher Learning, 38(5), 8-17.
Wiley, D. (n.d.). Defining the "Open" in Open Content. Retrieved 20 June 2011.
From the field
Seven features of smart learning analytics - lessons learned
from four years of research with learning analytics
Authors
Martin Ebner
[email protected]
Behnam Taraghi
[email protected]
Learning Analytics (LA) is an emerging field in which the analysis of large amounts of data helps us gain deeper insights into the learning process. This contribution points out that pure analysis of data is not enough. Building on our own experiences from the field, seven features of smart learning analytics are described. From our point of view, these are aspects that should be considered when deploying LA.
Anna Saranti
[email protected]
Social Learning, Computer
and Information Services
Graz University of
Technology
Graz, Austria
Sandra Schön
sandra.schoen@
salzburgresearch.at
Innovation Lab,
Salzburg Research
Forschungsgesellschaft,
Salzburg, Austria
Tags
Learning Analytics, aspects,
technology enhanced
learning
1. Introduction
Already back in 2006, Retalis et al. proposed their first thoughts on "Learning Analytics" and considered interaction analysis as a promising way to better understand the learner's behavior. A couple of years later, further activities were organized; in particular, Long and Siemens (Long & Siemens, 2011) predicted that the most important factor shaping the future of higher education would be big data and analytics. Since then, scientific conferences, various reports (e.g. Horizon Report, 2011) and public funding have referred to Learning Analytics. Nowadays the topic of Learning Analytics attracts many researchers worldwide. According to Siemens and Baker (Siemens & Baker, 2012), LA "is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs". Further research publications refined the definition with a stronger focus on students' activities (Duval, 2010) or proposed descriptive models and frameworks (cf. Siemens 2011; Elias 2011; Greller & Drachsler 2012; Cooper 2012; Chatti et al. 2012; Friesen 2013).
Within our own work and studies, we have applied LA in diverse contexts of learning in schools and higher education. At first glance, the difference between Educational Data Mining (EDM) and Learning Analytics is not obvious (Baker et al., 2012); therefore, much of the research of recent years was devoted to explaining why LA differs from EDM and why a new research field is necessary. Furthermore, the authors carried out several field studies using learning analytics (Schön et al., 2012; Ebner & Schön, 2013; Ebner et al., 2013; Taraghi et al., 2013; Taraghi et al., 2014a; Greller et al., 2014; Taraghi et al., 2014b). Against this background we have tried to formulate features that we consider crucial for a smart implementation of LA. From our point of view, these are effective for performance support in organizations as well as for learning support in classrooms. The aspects are independent of the context, but important for the support of learning and learners.
2. Seven features of smart Learning Analytics
Based on our literature review and our own experiences in the field, we propose a list of seven features of smart Learning Analytics.
1. Learning Awareness: Smart LA should support the awareness of learning. Even if it has components of assessment and controlling, LA is meant to support learning. It is important that each learner is informed about his/her current state in the learning process. Questions such as "Where do I achieve the greatest performance, and where or when do I not?" are at the centre of this aspect. Learning awareness reflects the idea that the learner is conscious of the evolution of his/her personal learning process and knows how to improve it according to the available data.
2. Privacy Awareness: Considering privacy is becoming a very important issue. Keeping personal data safe is recommended not only to software developers, but also to instructors, teachers, trainers and learners. Data confidentiality has to be guaranteed, and secure handling and transfer of personal data is a precondition for successful LA programs. Learners should be able to trust their learning environments.
3. Time Awareness: When we think of LA, we always think in terms of learning processes. The term process implies that we follow learning along a timeline. Therefore LA has to provide a way for learners as well as instructors, teachers or trainers to see how they are performing over a certain time period. They also have to understand that learning is not a snapshot, but a steadily growing process over time.
4. Visual Feedback: LA has to provide visualizations of the learning process. Graphics work as a feedback channel for learners (how did I perform until now?), for instructors, teachers and trainers (how did my class/group perform until now?) and finally also for administrators, developers and researchers (how did the program enhance the learning process?). Each illustration must be easily understandable and simple, otherwise learners or teachers will not find it helpful. From our perspective this is a very sophisticated task (a minimal sketch of such a learner-progress view follows this list).
5. Pedagogical Interventions: LA collects data about learners in order to analyze it. Different visualizations give instructors, teachers and trainers an idea of how their learners are currently performing. As Greller and Drachsler (2012) mentioned, LA must be embedded in a pedagogical approach. It has also been claimed that LA will prompt pedagogical interventions, which may lead to a change of pedagogical behavior (Van Harmelen & Workman, 2012). LA is strongly linked to pedagogical interventions and puts the educator at the center of the whole learning arrangement.
6. Big Data Centralism: An important reason why LA is powerful is the notion of data centralism. From a technical perspective, the main difference between former EDM software and LA can be seen in centralized big data. Thanks to web technologies it is nowadays possible to let people use the device of their choice while aggregating the produced learning data centrally. It does not matter whether learners are using a smartphone, a tablet or a computer; each single entry will be gathered in the same place. Consequently, the amount of data can become really big and therefore reliable for further research investigations.
7. Knowledge Structures Acquisition: The last feature of smart LA considers the new knowledge that emerges from the analysis of data and that is important for pedagogical scientists. New insights and perspectives may let us rethink how people learn. Knowledge structures can be derived and can influence the existing algorithms running in the background of LA software. Dynamic adaptation to the learner's needs is one of the big challenges closely related to the data gathered in the background.
Fig. 1 Seven features of smart Learning Analytics
3. Conclusion
Fig. 1 summarizes the seven features of LA that we suggest bearing in mind while implementing LA. The main issue is that the data itself will not lead to any valuable insights into how learning occurs or might happen. It is about the people who receive the data in an appropriate way in order to enhance their knowledge about the learning process. Learning Analytics should support learners in enhancing their performance, help educators get a better picture of their students' learning, and also support scientists in understanding how learning in a particular domain happens. The seven proposed features of smart LA support this development.
References
Baker, R. S., Duval, D., Stamper, J., Wiley, D., & Buckingham Shum, S. (2012). Panel: Educational Data Mining meets Learning Analytics. In Proceedings of the 2nd International Conference on Learning Analytics & Knowledge (LAK '12), New York, USA, p. 20.
Chatti, M. A., Dyckhoff, A. L., Schroeder, U., & Thüs, H. (2012). A reference model for learning analytics. International Journal of Technology Enhanced Learning, Special Issue on "State-of-the-Art in TEL", pp. 1-22. http://learntech.rwth-aachen.de/dl1139 (last visited July 2014)
Cooper, A. (2012). A Framework of Characteristics for Analytics. CETIS Analytics Series, Vol. 1, No. 7.
Duval, E. (2010). Attention Please! Learning Analytics for Visualization and Recommendation. In Proceedings of LAK11: 1st International Conference on Learning Analytics and Knowledge 2011. https://lirias.kuleuven.be/bitstream/123456789/315113/1/la2.pdf (last visited July 2014)
Ebner, M., & Schön, M. (2013). Why Learning Analytics in Primary Education Matters! Bulletin of the Technical Committee on Learning Technology, Karagiannidis, C. & Graf, S. (Eds.), Volume 15, Issue 2, April 2013, pp. 14-17.
Ebner, M., Schön, M., Taraghi, B., & Steyrer, M. (2013). Teachers Little Helper: Multi-Math-Coach. In Proceedings of the IADIS International Conference e-Learning 2013, Nunes, M. B. & McPherson, M. (Eds.), Prague, IADIS Press, pp. 183-190.
Elias, T. (2011). Learning Analytics: Definitions, Processes and Potential. Available at: http://learninganalytics.net/LearningAnalyticsDefinitionsProcessesPotential.pdf (last visited July 2014)
Friesen, N. (2013). Learning Analytics: Readiness and Rewards. Canadian Journal of Learning and Technology (CJLT), Vol. 39, No. 4. Available at: http://www.cjlt.ca/index.php/cjlt/article/view/774/379 (last visited July 2014)
Greller, W., & Drachsler, H. (2012). Translating Learning into Numbers: A Generic Framework for Learning Analytics. Educational Technology & Society, 15(3), pp. 42-57.
Greller, W., Ebner, M., & Schön, M. (2014). Learning Analytics: From Theory to Practice – Data Support for Learning and Teaching. In Computer Assisted Assessment – Research into E-Assessment, Kalz, M., & Ras, E. (Eds.), Springer Lecture Notes, pp. 322-322.
Horizon Report (2011). New Media Consortium. http://wp.nmc.org/horizon2011/sections/learning-analytics/ (last visited July 2014)
Retalis, S., Papasalouros, A., Psaromiligkos, Y., Siscos, S., & Kargidis, T. (2006). Towards Networked Learning Analytics – A concept and a tool. Networked Learning. http://www.lancaster.ac.uk/fss/organisations/netlc/past/nlc2006/abstracts/pdfs/P41%20Retalis.pdf (last visited July 2014)
Romero, C., & Ventura, S. (2007). Educational data mining: A survey from 1995 to 2005. Expert Systems with Applications, 33, 135-146. https://www.academia.edu/2662296/Educational_data_mining_A_survey_from_1995_to_2005 (last visited July 2014)
Schön, M., Ebner, M., & Kothmeier, G. (2012). It's Just About Learning the Multiplication Table. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK '12), Buckingham Shum, S., Gasevic, D., & Ferguson, R. (Eds.). ACM, New York, NY, USA, pp. 73-81.
Siemens, G. (2011). Learning Analytics: A foundation for informed change in higher education. Educause conference presentation. Available at: http://www.slideshare.net/gsiemens/learning-analytics-educause (last visited July 2011)
Siemens, G., & Baker, R. S. J. (2012). Learning Analytics and Educational Data Mining: Towards Communication and Collaboration. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK '12), pp. 252-254.
Taraghi, B., Softic, S., Ebner, M., & De Vocht, L. (2013). Learning Activities in Personal Learning Environment. In Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2013 (pp. 2466-2475). Chesapeake, VA: AACE.
Taraghi, B., Ebner, M., Saranti, A., & Schön, M. (2014a). On Using Markov Chain to Evidence the Learning Structures and Difficulty Levels of One Digit Multiplication. In Proceedings of the Fourth International Conference on Learning Analytics And Knowledge, ACM, New York, pp. 68-72.
Taraghi, B., Saranti, A., Ebner, M., & Schön, M. (2014b). Markov Chain and Classification of Difficulty Levels Enhances the Learning Path in One Digit Multiplication. In Learning and Collaboration Technologies: Designing and Developing Novel Learning Experiences, Panayiotis, Z., & Ioannou, A. (Eds.), Communications in Computer and Information Science, Vol. 439, pp. 79-87. Springer, New York.
Van Harmelen, M., & Workman, D. (2012). Analytics for Learning and Teaching. CETIS Analytics Series, Vol. 1, No. 3.
From the field
Quality assurance in online learning: The contribution of
computational linguistics analysis to criterion referenced
assessment
Authors
Lynette R. Goldberg
[email protected]
Lecturer, Bachelor of Dementia
Care
Asking students as a group what they have learned from an assessment and analysing
their responses using computational linguistics software can provide a necessary and
complementary perspective to criterion referenced assessment. This can ensure an
assessment is meaningful, relevant, and designed to optimise learning and assure
quality in online education.
Alison Canty
[email protected]
Course coordinator, Bachelor of
Dementia Care
Wicking Dementia Research
and Education Centre,
School of Medicine/Faculty
of Health, University of
Tasmania
Australia
Tags
criterion-referenced
assessment, computational
linguistic analysis,
Leximancer, rubrics
1. Introduction
Open learning advocates a barrier-free approach to education that includes learning via the
Internet and other forms of telecommunication (Bates, 2005). The University of Tasmania
(Australia) offers a newly-developed, fully online Bachelor of Dementia Care degree designed
specifically for adults who are working in aged care who wish to understand more about the
diseases of dementia, provide evidence-based care, and advance their qualifications (Kelder
et al., 2013; Kyndt, Dochy, Onghena, & Baert, 2013). These adult learners bring enriching life
experiences along with the challenge for their new learning and assessments to be problem-based, meaningful, and relevant to their work (Fischer & Heikkinen, 2010; Ross-Gordon,
2011).
Criterion-referenced assessments, or carefully-constructed rubrics, form an important part
of quality assurance in the online learning for this degree. This method of independently
assessing students’ learning against a clearly stated list of criteria is considered an integral
component of a student-centred approach to assessment (Biggs & Tang, 2011; Cordiner,
2011). It has replaced norm-referenced assessment, the previous mainstay of university
grading schemes, where students were compared with each other to generate performance
standards according to a normal distribution (Biggs & Tang 2011; Wakeford, 2006).
In addition to evaluation criteria, a well-developed rubric contains quality definitions, and a
scoring strategy so that students can derive comfort and a sense of fairness from judgments
of their performance that are transparent, publicly stated, and equitably applied (Biggs &
Tang, 2011; Cordiner, 2011; Ellis & Kelder, 2012; Kairuz & Bond, 2013; Reddy & Andrade,
2010). In a critical review of 20 published studies across higher education programs, Reddy
and Andrade (2010) stipulated that positively-used rubrics need to be (a) available to, and
possibly co-created by, students before any assignment; (b) focused on performance; (c)
developed to actively guide deeper learning and self-and peer- assessments; and (d) used to
enhance teaching and learning as well as to evaluate. Panadero
and Romero (2014) reiterated the value of rubrics in facilitating
the accuracy with which students assess their own work. Rubrics
also need to be valid and reliable, an area in which continued
work is needed (Cordiner, 2011; Humphry & Heldsinger, 2014;
Reddy & Andrade, 2010).
However, even with established content and construct validity,
rubrics may not provide an instructor with an overall perspective
of what students are learning and how they prefer to learn.
The inclusion of computational linguistics to analyse students’
feedback can provide a valuable complement to rubrics to
facilitate quality assurance in online learning and promote
mixed methods statistical analyses of effective education
strategies (Ivankova, 2014).
As an authentic case-based task promotes higher quality student
learning and can facilitate the social and cultural relevance of
learning about the neuroscience of dementia (Illes et al., 2010),
students in a recent unit on the Biology of Ageing and Dementia
completed a comparative case study. A detailed rubric assessed
their ability to: (a) administer three specific tests to two older
adults, one of whom had dementia, (b) analyse and interpret
information, citing relevant literature, and (c) write a clear,
informative and accurate summary of their findings. With one
exception, all of the 76 students passed the assessment, 22 with
a pass (50-59%), 13 with a credit (60-69%), 22 with a distinction
(70-79%), and 18 with a high distinction (> 80%). Students
then were asked “What did you learn from the case study?” A
computerised content analysis of their responses (n = 65; 4,427 words), using Leximancer software (https://www.leximancer.com/; Smith & Humphreys, 2006), provided an important over-arching thematic and conceptual analysis. Leximancer runs on Windows, MacOS, Linux and Unix platforms, with IE8+, Firefox 7+, Chrome 14+ and Safari 5+ as supported browsers. The software will still run on Windows XP; however, as Windows XP has passed end of life and is no longer supported by Microsoft, it is no longer officially supported (http://info.leximancer.com/operating-systems/). The software is not free;
currently an academic perpetual desktop license costs $1,500
(AUD) and an academic annual desktop license costs $750 (plus
GST for Australian residents). There are alternative digital tools
that can be used for content analysis (see Halper, 2011), but
Leximancer compares favourably with these products given its
validity study (Smith & Humphreys, 2006), published scholarly
applications, cost, installation, technical support, and user
friendliness (Bell, Campbell, & Goldberg, 2014).
In Leximancer analysis, concepts are viewed as related word
groups, e.g., the concept “knowledge” may contain the keywords
“learning” and “information.” The keywords are weighted as
to how often they occur in sentences containing the concept
compared to how often they occur elsewhere in the text. The
meaning, or sense, of the words in context can vary. Leximancer
analysis then clusters the identified concepts into higher level
themes and ranks the most important themes, relative to one
another, in percent.
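The weighting idea described above can be illustrated with a small, deliberately simplified Python sketch: a keyword is weighted by how often it appears in sentences containing the concept, relative to its frequency in the text as a whole. This is only an approximation of the general principle, not Leximancer's proprietary algorithm.

```python
# Simplified illustration of concept-keyword weighting (not Leximancer's algorithm):
# weight = P(keyword | sentences containing the concept) / P(keyword | all sentences).
from collections import Counter

def keyword_weights(sentences: list[list[str]], concept: str) -> dict[str, float]:
    concept_sents = [s for s in sentences if concept in s]
    in_concept = Counter(w for s in concept_sents for w in s)
    overall = Counter(w for s in sentences for w in s)
    n_concept, n_all = max(len(concept_sents), 1), len(sentences)
    return {
        w: (in_concept[w] / n_concept) / (overall[w] / n_all)
        for w in in_concept if w != concept
    }

# Toy example: the keyword "learning" co-occurs strongly with "knowledge".
sentences = [
    ["knowledge", "comes", "from", "learning"],
    ["learning", "needs", "information"],
    ["information", "is", "everywhere"],
    ["knowledge", "and", "information", "grow"],
]
print(keyword_weights(sentences, "knowledge"))
```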
The themes and underlying concepts that were identified from
students’ comments are shown in the visual map in Figure 1. The
themes were: Cognitive (100% - with the underlying concepts
of “ageing,” “assessment,” and “possibly;” sphere 1); Person
(35%, sphere 2); Required (33% - with the underlying concepts
of “providing,” “answers,” and “individuals;” sphere 3), Learned
(27%, sphere 4); Test (26%, sphere 5), and Interesting (24%,
sphere 6). When word ambiguity occurred, as, for example,
in the theme of “test,” the investigator was able to manually
check the related text blocks in isolation and in context to clarify
content meaning. This demonstrates an important feature
of Leximancer: it is automated but also features multiple
interactive windows that facilitate more detailed scrutiny (Bell
et al., 2014).
Figure 1. Concepts derived from students’ responses about the case
study, their connectivity, and the relative importance of the themes
(centred and in percent in each of the six spheres) under which any
additional concepts clustered.
In an objective way, this conceptual categorization of responses
illustrated students’ insight into (a) the cognitive and functional
differences between older adults with dementia and those
without dementia, and the importance of understanding
individual differences between people with dementia
(spheres 1-3), (b) the range of additional information gained
from the experiential task (sphere 4), (c) the challenge of
obtaining informed consent, choosing appropriate tests, and
administering and scoring them accurately (sphere 5), and the
interesting nature of the case study (sphere 6). Of interest,
no student mentioned the rubric in their responses. The
computational linguistics analysis confirmed that the case study
was a meaningful and relevant assessment activity that assisted
in grounding information students had learned in lectures and
readings (Hodge, 2014; Melkun, 2012) as demonstrated by the
following comments:
Student 40: It was interesting to see how cognitive impairment
can make tasks that seem quite simple to complete really
complex and difficult.
Student 24: It gave me insight into coping strategies used by
the person with dementia. It also gave me an opportunity to use
assessment tools which have not been part of my job description,
giving me greater insight into the overall assessment of people
with dementia.
Student 48: Being able to apply previous knowledge and
putting acquired learning into practice with a person with
dementia enhanced the learning process. Like any learning,
actually applying it to real life situations only enhances learning
outcomes and encourages critical thinking.
2. Conclusions
Asking students as a group what they have learned from an
assessment and analysing their responses using computational
linguistics software can provide a necessary, objective, and
complementary perspective to criterion referenced assessment.
The software is helpful in conducting an efficient analysis of
lengthy narrative data, rather than relying on isolated anecdotal
comments. It facilitates pattern recognition that can then be
used by instructors to support conclusions about learning
objectives and assignments, and gain insight into what students
are learning and their learning preferences, all of which are
essential aspects of the design and delivery of online education.
Thus it can be considered as one of a range of tools to ensure that an assessment is meaningful, relevant, and of high quality, and presented in a way that optimises learning.
References
Bates, A.W. (2005). Technology, e-learning and distance
education. London/New York: RoutledgeFalmer.
Bell, E., Campbell, S., & Goldberg, L.R. (2014 – in press).
Nursing identity and patient-centredness in scholarly health
services research: A computational analysis of PubMed
abstracts 1986-2013. BioMed Central – Health Services
Research.
Biggs, J., & Tang, C. (2011). Teaching for quality learning at
university (4th ed.). Maidenhead, Berkshire, England: Open
University Press/McGraw-Hill.
Cordiner, M. (2011). Guidelines for good assessment
practice (rev. ed.). Available at: http://www.teachinglearning.utas.edu.au/assessment.
Ellis, L., & Kelder, J. (2012). Individualised marks for group
work: Embedding an eportfolio criterion in a criterion
referenced assessment (CRA) rubric for group-work
assessment. Education for Information, 29, 219-227.
Fischer, K.W. & Heikkinen, K. (2010). The future of
educational neuroscience. In D. Sousa (Ed.), Mind, brain,
and education: Neuroscience implications for the classroom
(pp. 249-269). Bloomington IN: Solution Tree Press.
Halper, F. (2011). Predictive analytics: The Hurwitz victory
index report. In Hurwitz Victory Index. Needham, MA:
Hurwitz & Associates.
Hodge, S. (2014). Transformative learning as an "inter-practice" phenomenon. Adult Education Quarterly, 64(2),
165-181.
Humphry, S.M., & Heldsinger, S.A. (2014). Common
structural design features of rubrics may represent a threat
to validity. Educational Researcher, 43(5), 253-263.
Illes, J., Moser, M.A., McCormick, J.B., Racine, E., Blakeslee, S., Caplan, A., … & Weiss, S. (2010). Neurotalk: Improving the communication of neuroscience research. Nature Reviews Neuroscience, 11(1), 61-69. doi:10.1038/nrn2773.
Ivankova, N. (2014). Implementing quality criteria in
designing and conducting a sequential QUAN → QUAL
mixed methods study of student engagement with learning
applied research methods online. Journal of Mixed Methods
Research, 8(1), 25-51.
Kairuz, T., & Bond, J.A. (2013). Development of interpersonal
communication skills among pharmacy students: Evaluation
of an assessment rubric. Focus on Health Professional
Education: A Multidisciplinary Journal, 15(2), 18-29.
Kelder, J-A., Canty, A., Carr, A., Skalicky, J., Walls, J.,
Robinson, A., & Vickers, J. (2013). A learning place where
a high-risk student cohort can succeed: Curriculum,
assessment, and teacher recruitment. In S. Frielick, N.
Buissink-Smith, P. Wyse, J. Billot, J. Hallas, & E. Whitehead
(Eds). Research and Development in Higher Education: The
Place of Learning and Teaching, 36, 253-265.
Kyndt, E., Dochy, F., Onghena, P., & Baert, H. (2013). The
learning intentions of low-qualified employees: A multilevel
approach. Adult Education Quarterly, 63(2), 165-189.
Melkun, C.H. (2012). Nontraditional students online:
Composition, collaboration, and community. The Journal of
Continuing Higher Education, 60, 33-39.
Panadero, E., & Romero, M. (2014). To rubric or not to
rubric? The effects of self-assessment on self-regulation,
performance, and self-efficacy. Assessment in Education:
Principles, Policy & Practice, 21(2), 133-148.
Reddy, Y.M., & Andrade, H. (2010). A review of rubric use
in higher education. Assessment & Evaluation in Higher
Education, 35(4), 435-448.
Ross-Gordon, J.M. (2011). Research on adult learners:
Supporting the needs of a student population that is no
longer traditional. Peer Review, 13(1, winter), 1-6.
Smith, A.E., & Humphreys, M.S. (2006). Evaluation of
unsupervised semantic mapping of natural language
with Leximancer concept mapping. Behaviour Research
Methods, 38(2), 262-279.
Wakeford, R. (2006). Principles of student assessment. In
H. Fry, S. Ketteridge & S.A. Marshall (Eds.), Handbook for
teaching and learning in higher education – enhancing
academic practice (pp. 42-61). Oxton: RoutledgeFalmer.