Interdisciplinary Journal of E-Learning and Learning Objects
Volume 7, 2011
Analyzing Associations between the Different
Ratings Dimensions of the MERLOT Repository
Cristian Cechinel
Federal University of Pampa,
Bagé, Brazil
Salvador Sánchez-Alonso
University of Alcalá,
Alcalá de Henares, Spain
[email protected]
[email protected]
Abstract
As the dissemination of digital learning resources over the Internet continues to grow, existing repositories are searching for alternative ways to assess the quality of their materials. MERLOT, one of the most recognized learning object repositories available today, has adopted the peer-review approach as the cornerstone of quality evaluation for its learning objects. In that evaluation, experts in specific areas rate resources according to three predefined dimensions, and, after an extensive editing process, these ratings are published and used for the recommendation of materials among the community of users. In addition, MERLOT allows users to write comments and provide ratings for the learning resources, complementing its evaluation strategy with this more informal mechanism. The present work analyzes associations between the ratings given by users and by experts with the aim of discovering whether or not these two groups have similar impressions about the quality of the materials, as well as exploring the usefulness of this twofold strategy for establishing learning resource quality inside MERLOT.
Keywords: Learning objects, quality, ratings, MERLOT, peer-review, public-review
Introduction
As the dissemination and availability of learning resources on the Internet grows, the many existing repositories are searching for mechanisms to evaluate their catalogued materials in order to rank them and better serve the resource-seeking needs of their users. Learning object repositories (LORs) are potential aggregators of communities of practitioners (Brosnan, 2005; Han, Kortemeyer, Kramer, & Prummer, 2008; Monge, Ovelar, & Azpeitia, 2008), i.e., people who share interests and concerns about something they do and learn through their interactions (Wenger, 2006). Consequently, some repositories harness the features of such social environments by adopting quality-establishment strategies that rely on impressions of usage and on evaluations given by regular users and experts who are members of the
repository community. These are forms
Material published as part of this publication, either on-line or
of evaluative metadata (Vuorikari, Main print, is copyrighted by the Informing Science Institute.
nouselis, & Duval, 2008) that serve as
Permission to make digital or paper copy of part or all of these
the basis for properly ranking and recworks for personal or classroom use is granted without fee
provided that the copies are not made or distributed for profit
ommending resources for users.
or commercial advantage AND that copies 1) bear this notice
in full and 2) give the full citation on the first page. It is permissible to abstract these works so long as credit is given. To
copy in all other cases or to republish or to post on a server or
to redistribute to lists requires specific permission and payment
of a fee. Contact [email protected] to request
redistribution permission.
Editor: Janice Whatley

However, as LORs are distinct in many aspects (number of resources, granularity of the materials, locality of the learning objects, openness to users, subject areas, internal goals, audience) (McGreal, 2008), so too are the solutions they implement regarding this kind of community-based
assessment. For instance, in the eLera repository (E-Learning Research and Assessment Network - http://www.elera.net), members can create reviews of learning objects using the Learning Object Review Instrument (LORI) (Nesbit, Belfer, & Leacock, 2003), and experienced members can moderate teams of members in a collaborative online review process where reviewers discuss and compare their evaluations (Nesbit & Li, 2004). Members can also add resources to their personal bookmarks, allowing eLera to recommend materials not only by their associated ratings but also by their popularity. From a different perspective, the Connexions repository
(http://cnx.org) approaches quality through a system called lenses, which arranges resources according to evaluations provided by individuals and organizations (Kelty, Burrus, & Baraniuk, 2008). In this context, resources are explicitly endorsed by third parties and gain higher quality assurance as they accumulate more endorsements. Moreover, Connexions also provides mechanisms to sort materials by their number of accesses over time and by the ratings
given by the users. The MERLOT repository (Multimedia Educational Resource for Learning and
Online Teaching - http://www.merlot.org) introduced a post-publication peer-review model in
order to assure the quality of its catalogued resources (Cafolla, 2002). In MERLOT, after their
publication, materials are peer-reviewed and rated by experts in the resource domain. As it is not
possible to peer-review all resources, MERLOT also allows the community of users to provide
their own comments and ratings for the materials, as well as to bookmark their favorite resources
in the so-called Personal Collections. All this information is used by MERLOT to sort the materials during searches.
The case of MERLOT is particularly unusual in that ratings are gathered from two well-defined and distinct groups, the public and experts (it is important to mention that peer-reviewers in MERLOT are also members, which may cause some overlap of individuals between these two groups), which possibly come from distinct backgrounds and may have divergent opinions with respect to quality. In fact, the differences between these groups could be considered the strength of the adopted approach, since it provides complementary views of the same subject. Considering this, the present paper analyzes the existence of associations between the ratings
given by these two groups of evaluators in MERLOT in order to discover whether or not they
diverge about the quality assessment of the same materials, as well as to explore the usefulness of
such complementary evaluations towards the assurance of quality inside the repository.
The rest of this paper is structured as follows. The next section describes the features and differences between peer-review and public-review systems, and the third section presents how these
systems are applied in MERLOT. The fourth section reports and discusses the exploratory analysis performed to evaluate the existence of associations between the ratings given by the different
groups of evaluators in MERLOT. Finally, the fifth section presents conclusions and future work.
Peer-Review and Public-Review
Peer-review is conventionally known as the process of assessing a scientific paper or project idea
by critical examination of third parties that are experts in the same work domain. This system is
widespread in the process of publishing papers in journals and conferences, where the work under
evaluation is submitted to a chief-editor, who requests a group of fellow-experts to review it. The
review process provides advice about whether or not the article can be accepted for publishing,
and what further work is still required in the case of acceptance (Harnad, 2000). In the most widely adopted form of peer-review, the identity of the reviewers is hidden from the authors, as well
as from the other reviewers. The defenders of peer-reviewing claim that this kind of professional
approval serves as a way of assuring the quality of published papers. However, the system is not
free from criticism, and issues such as conflicts of interest, biases of the peers, time delays, and the difficulty of detecting fraud are often mentioned as possible shortcomings of the peer-review
process (Benos et al., 2007). Despite the controversies regarding its efficiency, the peer-review system remains the cornerstone of quality assurance in the academic field and has also entered the scene of educational resources through its implementation in MERLOT.
In contrast to peer-review systems, which are mainly associated with the scientific field, public review is widely diffused in many other areas, such as online vendors (e.g., Amazon - http://www.amazon.com, eBay - http://www.ebay.com) and several communities of interest (e.g., IMDb - http://www.imdb.com, YouTube - http://www.youtube.com, RYM - http://rateyourmusic.com). In these systems, users normally benefit from comments and ratings given by the community through the use of recommender systems (such as collaborative filters) which, based on the comparison of users' profiles and the correlation of personal tastes, provide personalized recommendations of items and products suggested to be of interest to the user (Resnick & Varian, 1997). In this kind of social system, the motivations and goals behind users' participation vary significantly, from the desire and need for social interaction to professional self-expression and reputation benefits (Peddibhotla & Subramani, 2007). Table 1 explores some other aspects that normally differentiate standard peer-review and public-review systems.
Table 1: Different aspects involving peer-review and public-review

ASPECT | PEER-REVIEW | PUBLIC-REVIEW
Evaluator background | Expert in the field domain | Non-expert
Existence of official criteria or metrics | Yes | No/Sometimes
Size of the community of evaluators | Restricted | Wide open
Common models | Pre-publication | Post-publication
Domain | Scientific field, journals, and funding calls | Online vendors, communities of interest
Motivation | Prestige, fame, to determine the quality and direction of research in a particular domain, obligation | Desire and need for social interaction, professional self-expression, reputation
Communication among evaluators | Not allowed | Encouraged
Selection of evaluators | Editor responsibility | None
Financial compensation | Normally none | None
Time taken for the evaluation | Typically slow | Typically fast
Level of formality | Formal process for editing and revision | Informal
Author's identity | Masked | Non-masked
Requirements to be a reviewer | To be an expert in the field and to be invited | Creation of a member's account
Reviews and Ratings in MERLOT
The Multimedia Educational Resource for Learning and Online Teaching (MERLOT) is an international initiative that allows users to catalogue educational resources with the aim of facilitating the use and sharing of online learning technologies. MERLOT has adopted a post-publication peer-review model (Cafolla, 2002), in which already catalogued materials are peer-reviewed by discipline experts who are members of a discipline community editorial board (e.g., Biology,
Business, Chemistry, Mathematics, Psychology). MERLOT's editorial boards decide on the process of selecting materials that are worth reviewing, and the assigned materials are then independently peer-reviewed by board members according to three main criteria: 1) Quality of Content, 2) Potential Effectiveness as a Teaching Tool, and 3) Ease of Use. After peer-reviewers report their evaluations, the editorial board's chief editor composes a single report and publishes it in the repository with the permission of the authors (Merlot, 2010a). In MERLOT, editorial boards are formed mostly through the nomination of individuals by MERLOT institutional partners, but also from volunteers who meet certain minimum criteria (e.g., being an instructor at an institution, being an expert in the scholarship of their field, possessing excellence in teaching) and participate in the training offered by MERLOT.
In addition to peer-review evaluations, MERLOT also allows the registered members of the
community to provide comments and ratings about the materials, complementing its strategy of
evaluation with an alternative and more informal mechanism. Besides the rating checkboxes and the form where members can fill in their comments, MERLOT's Add Comment section also asks whether the member has used the resource in the classroom and offers two extra fields where members can make technical remarks and state the time they spent reviewing the material. The ratings of users and peer-reviewers range from 1 to 5 (with 5 as the best rating). The use of the same rating scale for both kinds of evaluation allows us to contrast these ratings in order to evaluate possible correlations and the existence or absence of disagreement between the two groups of evaluators.
Exploratory Analysis
Data Sample
The method followed for the study reported here was the analysis of correlations between the ratings given by the two kinds of evaluators in MERLOT, namely peer-reviewers' ratings (PRR) and users' ratings (UR). Data from a total of 20,506 learning objects was gathered (September 2009) through a web crawler developed for that purpose, similar in functionality to the one reported by Biletskiy, Wojcenovic, and Baghi (2009). Most of the resources had neither peer reviews nor user ratings; from the collected data, only 3.38% presented both at least one peer review and at least one user rating (Table 2).
Table 2. Sample sizes of peer-reviewed and user-reviewed resources

SUBSET | SIZE | %
Total sample | 20,506 | 100
PRR > 0 | 2,595 | 12.65
UR > 0 | 2,510 | 12.24
PRR ∩ UR | 695 | 3.38
This sample containing ratings from both groups at the same time (PRR ∩ UR) was used in the
present study and the rest of the data was discarded.
Analyzing the Associations
Users ratings against overall peer-reviewers ratings
As neither sample followed a normal distribution, a non-parametric analysis was performed using Spearman's rank correlation (rs) to evaluate whether or not there is an association between the ratings of the two groups, i.e., whether the raters agree about the quality of the resources. Considering that MERLOT divides resources into different discipline categories, we split the collected sample according to these categories. This allows us to observe potential differences according to the background of the evaluators. As MERLOT allows users to catalogue materials in more than one discipline category, part of the split sample overlaps. However, we decided to keep the learning objects classified in more than one discipline because we considered this overlap relatively small (16%). Table 3 presents the results of this analysis.
Table 3. Correlation between users' ratings and peer-reviewers' ratings
considering the categories of disciplines

Discipline | Sample Size | PRR Average (std) | UR Average (std) | rS | P | S
All | 695 | 4.34 (0.70) | 4.29 (0.70) | 0.19 | 0.00 | Y
Arts | 25 | 4.14 (0.74) | 4.43 (0.58) | 0.20 | 0.33 | N
Business | 59 | 4.22 (0.79) | 4.15 (0.94) | 0.06 | 0.66 | N
Education | 167 | 4.41 (0.68) | 4.36 (0.72) | 0.16 | 0.04 | Y*
Humanities | 133 | 4.60 (0.51) | 4.40 (0.67) | 0.19 | 0.03 | Y
Mathematics & Statistics | 66 | 4.67 (0.52) | 4.25 (0.69) | 0.17 | 0.31 | N
Science & Technology | 285 | 4.21 (0.71) | 4.25 (0.72) | 0.26 | 0.00 | Y
Social Sciences | 73 | 4.20 (0.75) | 4.38 (0.60) | 0.20 | 0.09 | Y+
In Table 3, the column P gives the p-value obtained in the analysis. In the column S, N stands for no significant association between the ratings given by the two groups of evaluators, Y represents a significant association at the 99% level, Y* at the 95% level, and Y+ at the 90% level. The correlation coefficient (rs) indicates the strength of the association between the two groups of ratings, varying from -1 to 1, where 0 means no association (no agreement); the closer the coefficient is to 1, the stronger the association. As can be seen in Table 3, the disciplines of Arts, Business, and Mathematics & Statistics did not present any association between the ratings given by users and peer-reviewers. The ratings for the disciplines of Education, Humanities, Science & Technology, and Social Sciences, and for the overall sample, did present an association. Even though these associations exist, however, they are not strong, as the correlation coefficients are relatively small. Figure 1 illustrates the weakness of the association for the discipline of Science & Technology (selected because it presented the highest correlation coefficient of all disciplines).
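Spearman's rank correlation used in this analysis can be computed without a statistics package: rank both rating vectors (averaging tied ranks, which matters for discrete 1-5 ratings) and apply Pearson's formula to the ranks. The following is a minimal sketch with hypothetical rating vectors, not the authors' actual data or tooling:

```python
from typing import List

def ranks(xs: List[float]) -> List[float]:
    """Assign ranks, averaging over ties (relevant for discrete 1-5 ratings)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    rks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of equal values starting at position i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            rks[order[k]] = avg
        i = j + 1
    return rks

def spearman(x: List[float], y: List[float]) -> float:
    """Spearman's rs = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical peer-reviewer and user ratings for five resources:
prr = [4.5, 3.0, 5.0, 4.0, 3.5]
ur = [4.0, 3.5, 4.5, 5.0, 3.0]
print(round(spearman(prr, ur), 2))  # → 0.6
```

An rs near 0, as found for several disciplines in Table 3, would indicate that ordering resources by one group's ratings tells us little about the other group's ordering.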
As can be seen in Figure 1, it is not possible to observe patterns indicating concordance between the ratings of the two groups. (A strong correlation between the ratings would be suggested by the formation of a diagonal line, or by the agglomeration of dots in some region of the matrix, for instance.) In fact, we can observe several cases where users and peer-reviewers strongly disagree about the ratings. The weakness of the associations is also confirmed when we perform a linear regression analysis to explore more deeply the relationship between users' and peer-reviewers' ratings in the discipline of Science & Technology. Although it is possible to generate a linear prediction model at a 99% level of significance, the correlation coefficient remains small (0.28) and the model explains only 7.94% of the variance.
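The 7.94% figure is the coefficient of determination: a correlation of r = 0.28 between fitted and observed values gives r² = 0.28² ≈ 0.0784, i.e., roughly 7.94% of the variance explained. A least-squares fit of this kind can be sketched as follows (hypothetical data, closed-form slope and intercept; not the authors' actual procedure):

```python
from typing import List, Tuple

def linfit(x: List[float], y: List[float]) -> Tuple[float, float, float]:
    """Ordinary least squares for y ≈ slope*x + intercept; returns (slope, intercept, r2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = (sxy * sxy) / (sxx * syy)  # fraction of variance explained
    return slope, intercept, r2

# A correlation of 0.28 corresponds to r² ≈ 0.0784, i.e., ~7.94% of the variance:
print(round(0.28 ** 2, 4))  # → 0.0784
```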
Figure 1: Scatter plot matrix of ratings for the discipline of Science and Technology
At first glance, this exploratory analysis indicates that both groups of reviewers have different
impressions about the quality of the learning objects catalogued in MERLOT, thus serving as
complementary views of assessment inside the repository.
Users ratings against the three criteria of peer-reviewers ratings
As mentioned before, the peer-reviewer rating is composed of three main criteria: 1) Quality of
Content, 2) Potential Effectiveness as a Teaching Tool, and 3) Ease of use. As the existence (or
the absence) of associations could be specifically related to one of these dimensions, we ran the
same analysis to evaluate the associations between the users’ ratings and each one of the criteria
of peer-reviewers’ ratings. Table 4 presents the results of this analysis.
Table 4. Correlation between users' ratings and the three criteria
of peer-reviewers' ratings

Discipline | Quality of Content (rS, P, S) | Potential Effectiveness as a Teaching Tool (rS, P, S) | Ease of Use (rS, P, S) | Total Rating (S)
All | 0.21, 0.00, Y | 0.25, 0.00, Y | 0.19, 0.00, Y | Y
Arts | 0.21, 0.30, N | 0.14, 0.50, N | 0.41, 0.04, Y* | N
Business | 0.07, 0.61, N | 0.04, 0.73, N | 0.13, 0.32, N | N
Education | 0.11, 0.15, N | 0.30, 0.00, Y | 0.15, 0.05, Y* | Y*
Humanities | 0.27, 0.01, Y | 0.26, 0.00, Y | 0.15, 0.09, Y+ | Y
Mathematics & Statistics | 0.23, 0.06, Y* | 0.02, 0.87, N | 0.02, 0.85, N | N
Science & Technology | 0.25, 0.00, Y | 0.32, 0.00, Y | 0.25, 0.00, Y | Y
Social Sciences | 0.25, 0.03, Y* | 0.27, 0.05, Y* | 0.11, 0.33, N | Y+
From the table it can be seen that in some disciplines the associations encountered before do not persist across all three evaluation criteria. For instance, in the Education discipline, users' ratings are associated with only two of the three evaluation criteria, namely Potential Effectiveness as a Teaching Tool and Ease of Use, and in the discipline of Social Sciences users' ratings are associated only with the Quality of Content and Potential Effectiveness as a Teaching Tool criteria. Moreover, other disciplines that did not present any association between the two groups of ratings now present associations between users' ratings and some specific evaluation criteria. This is the case for the discipline of Mathematics & Statistics, which presented an association between the users'
ratings and the Quality of Content criterion, and for the discipline of Arts, which presented an association between users' ratings and the Ease of Use criterion.
It is interesting to highlight that the associations found in this analysis are slightly stronger than those found before (with just two exceptions). However, the correlation coefficients are still weak, which reinforces the initial conclusion that users and peer-reviewers have different impressions about quality. This weakness is again confirmed when we perform a linear regression analysis between users' ratings and the Ease of Use criterion in the discipline of Arts (the pair of data sets with the highest correlation coefficient, rs = 0.45). Here, the prediction model is generated at a 95% level of significance; however, the correlation coefficient is small (0.46) and the model explains only 21% of the variance.
As the evaluations given by users do not follow any pre-defined criteria, it is difficult to understand precisely what they refer to without an in-depth look at the textual information (comments) attached to them. Bearing this in mind, one of two situations may be occurring: 1) the impressions of quality that users have are not related to any of the criteria evaluated by peer-reviewers; or 2) users evaluate the same aspects as peer-reviewers but do not agree with the ratings given by the experts on those aspects. From our point of view, these two situations alternate depending on the discipline category and on the peer-review criterion under evaluation. Moreover, the fact that some associations observed in the first analysis present a stronger correlation coefficient in this second analysis may indicate that, for these cases, the second situation is occurring. However, these are all assumptions that require further investigation to be confirmed.
Conclusions and Future Work
The most important contribution of this paper is the indication that the two communities of evaluators in MERLOT are communicating different views regarding the quality of the learning objects refereed in the repository. Even though we have found associations between the users' ratings and the peer-reviewers' ratings in some disciplines, such associations are relatively weak and cannot confirm that users and experts agree about the quality of the evaluated learning resources. This reinforces the idea that peer-review and public-review approaches can be adopted in learning object repositories as complementary evaluation strategies that can serve both for the assurance and for the establishment of quality parameters for the further recommendation of materials.
MERLOT has been successfully used and recognized mostly because of its implementation of the peer-review system, which remains its cornerstone strategy for quality evaluation. However, the community of members and their ratings in MERLOT are naturally growing much faster than the community of peer-reviewers and their evaluations. During the 30 days of preparation of this paper, the number of new members increased by 1,426, whereas the number of peer-reviewers increased by only 5. Moreover, the number of new comments was approximately 6.5 times higher than the number of new peer reviews (MERLOT, 2010b). Such rapid growth makes it necessary to invest attention in exploring the potential of this expanding community, as was done, for instance, by Sicilia, Sanchez-Alonso, García-Barriocanal, and Rodríguez-García (2009).
Future work will qualitatively examine existing divergences between these two kinds of evaluations, as well as explore the ratings of groups of learning objects that are classified in more than
one category and evaluate the utilization of collaborative filtering in the recommendation of
learning objects inside MERLOT.
References
Benos, D. J., Bashari, E., Chaves, J. M., Gaggar, A., Kapoor, N., LaFrance, M., . . . Zotov, A. (2007). The
ups and downs of peer review. Advances in Physiology Education, 31, 145-152.
Biletskiy, Y., Wojcenovic, M., & Baghi, H. (2009). Focused crawling for downloading learning objects - An architectural perspective. Interdisciplinary Journal of E-Learning and Learning Objects, 5, 169-180. Retrieved from http://www.ijello.org/Volume5/IJELLOv5p169-180Biletskiy416.pdf
Brosnan, K. (2005). Developing and sustaining a national learning-object sharing network: A social capital
theory perspective. In J. B. Williams, & M. A. Goldberg (Eds.), Proceedings of The ASCILITE 2005
Conference, 105-114. Brisbane, Australia.
Cafolla, R. (2002). Project Merlot: Bringing peer review to web-based educational resources. Proceedings
of the USA Society for Information Technology and Teacher Education International Conference, 614–
618.
Han, P., Kortemeyer, G., Kramer, B. J., & Prummer, C. (2008). Exposure and support of latent social networks among learning object repository users. Journal of Universal Computer Science (JUCS), 14(10),
1717-1738.
Harnad, S. (2000). The invisible hand of peer review. Exploit Interactive, 5(April). Retrieved from
http://www.exploit-lib.org/issue5/peer-review
Kelty, C. M., Burrus, C. S., & Baraniuk, R. G. (2008). Peer review anew: Three principles and a case study
in postpublication quality assurance. Proceedings of the IEEE, 96(6), 1000–1011.
McGreal, R. (2008). A typology of learning object repositories. In H. H. Adelsberger, Kinshuk, J. M. Pawlowski, & D. G. Sampson (Eds.), International handbooks on information systems: Handbook on information technologies for education and training (pp. 5-28). Heidelberg: Springer.
Merlot. (2010a). About us: MERLOT Peer Review Process. Retrieved from
http://taste.merlot.org/peerreviewprocess.html
Merlot. (2010b). What’s New. Retrieved from http://www.merlot.org/merlot/whatsNew.htm
Monge, S., Ovelar R., & Azpeitia, I. (2008). Repository 2.0: Social dynamics to support community building in learning object repositories. Interdisciplinary Journal of E-Learning and Learning Objects, 4,
191-204. Retrieved from http://www.ijello.org/Volume4/IJELLOv4p191-204Monge.pdf
Nesbit, J. C., & Li, J. (2004). Web-based tools for learning object evaluation. Proceeding of the International Conference on Education and Information Systems: Technologies and Applications. Orlando,
Florida.
Nesbit, J. C., Belfer, K., & Leacock, T. (2003). Learning Object Review Instrument (LORI). E-Learning
Research and Assessment Network. Retrieved from
http://www.elera.net/eLera/Home/Articles/LORI%201.5.pdf
Peddibhotla, N., & Subramani, M. R. (2007). Contributing to public document repositories: A critical mass theory perspective. Organization Studies, 28(3), 327-346.
Resnick, P., & Varian, H. R. (1997). Recommender systems. Communications of the ACM, 40(3), 56-58.
Sicilia, M-A., Sanchez-Alonso, S., García-Barriocanal, E., & Rodríguez-García, D. (2009). Exploring
structural prestige in learning object repositories: Some insights from examining references in MERLOT. Proceedings of the International Conference on Intelligent Networking and Collaborative Systems, 212-218.
Vuorikari, R., Manouselis, N., & Duval, E. (2008). Using metadata for storing, sharing and reusing evaluations for social recommendations: The case of learning resources. In D. Goh & S. Foo (Eds.), Social
information retrieval systems: Emerging technologies and applications for searching the web effectively (pp. 87-107). New York, NY: Idea Group.
Wenger, E. (2006). Communities of practice: A brief introduction. Retrieved from
http://www.ewenger.com/theory/communities_of_practice_intro.htm
Biographies
M.Sc. Cristian Cechinel is a professor in the Computer Engineering program of the Federal University of Pampa. He obtained his Bachelor's and Master's degrees in Computer Science from the Federal University of Santa Catarina, and he is currently pursuing a Ph.D. on learning object quality at the Computer Science Department of the University of Alcalá. His research focuses on learning technologies and artificial intelligence.
Dr. Salvador Sanchez-Alonso is an associate professor in the Computer Science Department of the University of Alcalá and a senior member of the Information Engineering research unit of the same university. He previously worked as an assistant professor at the Pontifical University of Salamanca for 7 years during different periods, and as a software engineer at a software solutions company during 2000 and 2001. He earned a Ph.D. in Computer Science from the Polytechnic University of Madrid in 2005 with research on learning object metadata design for better machine "understandability". His current research interests include learning technologies, the Semantic Web, and computer science education.
Interdisciplinary Journal of E-Learning and Learning Objects
Volume 7, 2011
Facilitation of Formative Assessments using
Clickers in a University Physics Course
David M. Majerich
Institute for Schools and
Society, College of Education,
Temple University,
Philadelphia, PA, USA
Judith C. Stull
Department of Sociology,
LaSalle University,
Philadelphia, PA, USA
[email protected]
[email protected]
Susan Jansen Varnum
and Tiffany Gilles
Department of Chemistry,
Temple University,
Philadelphia, PA, USA
Joseph P. Ducette
Department of Psychological
Studies in Education,
Temple University,
Philadelphia, PA, USA
[email protected];
[email protected]
[email protected]
Abstract
This study provides an empirical analysis of the integration of clickers, used to facilitate formative assessments, in a university physics course. The sample consisted of students from two consecutive semesters of the same physics course, where one group used clickers and the other did
not. Data included pre- and post-attitudinal and behavioral surveys, physics and mathematics
pre-tests, two course examinations, and one cumulative final examination. The clicker group
completed seven clicker episodes (weekly multiple choice questions and in-class discussion of
results). On average, students who participated in clicker episodes achieved significantly higher
scores on the cumulative final examination compared to the other group. Regression analysis was
used to control for differences among the students and to quantify the effect of clicker use. The regression results indicate that, controlling for all of the entered variables, each additional clicker episode a student responded to raised the final grade by 1.756 points. Thus, if a student took all seven of the "clicker quizzes," the final grade would have been 12.3 points higher, a difference of a full letter grade. Interestingly, how well the student did on these "clicker quizzes"
never proved significant in the regression analyses. In an analysis of the residuals, grades were more important to those who performed better than expected as compared to those who performed less well than predicted. In sum, using clicker episodes appeared to result in improved achievement, but more research is needed to support these findings.
Editor: Heinz Dreher
Facilitation of Formative Assessments using Clickers
Keywords: student response system, formative assessment, clickers, learning objects, clicker
episodes
Introduction
Faculty in the science education community are being charged to replace traditional methods of
teaching in the large lecture hall with more learner-centered, student-engaged, interactive strategies informed by what is now known about how many students learn (Bransford, Brown, &
Cocking, 2000). While the traditional methods of teaching have long been associated with disconnecting the students from both the instructor and course material, causing students to assume a
more passive role in the learning process, encouraging memorization over conceptual understanding of course material, and treating students as if they learn course material at the same time and
in the same way, these methods are common in many of today’s lecture halls (Mintzes & Leonard, 2006; Sunal, Wright, & Day, 2004). To better prepare students with the skills needed for success in the 21st century (Floden, Gallagher, Wong, & Roseman, 1995; Partnership for 21st Century Skills, 2010), interactive instructional technologies have been shown to assist faculty in creating active learning environments whereby students learn by doing, receive feedback during the learning trajectory, construct new knowledge and improve skills, and continually refine their understandings of course material (Bereiter & Scardamalia, 1993; Hmelo &
Williams, 1998; Mintzes & Leonard, 2006). While supporting research has shown increased student achievement and improved behavioral outcomes for students who are actively engaged with
the course content and increased dialogue and interaction with the instructor and peers (Crouch &
Mazur, 2001; Mintzes & Leonard, 2006; Slater, Prather, & Zeilik, 2006), “a key instructional implication from the research on learning is that students need multiple opportunities to think deeply
and purposefully about the content and to gain feedback on their learning” (Ueckert & Gess-Newsome, 2006, p. 147). Options available to instructors that have been used to engage students
and promote an active learning environment in the large lecture hall are Audience Paced Feedback, Classroom Communication Systems, Personal Response Systems, Electronic Voting Systems, Student Response Systems, Audience Response Systems, voting-machines, and zappers
(MacArthur & Jones, 2008). Each of these systems has also been referred to as ‘clickers’ (Duncan, 2005; MacArthur & Jones, 2008). In the most fundamental sense, clickers are radio-frequency, battery-powered, hand-held devices that are part of an electronic polling system. The predominant research indicates that clicker use promotes student discussion, increases engagement and feedback, and improves attitudes toward science (Cutts, 2004; Draper & Brown, 2004; Duncan, 2005; Latessa & Mouw, 2005). However, an extensive 2009 review of the
literature revealed a paucity of empirical peer-reviewed evidence to support the claims that the
technique can be used to improve student achievement (Mayer et al., 2009). Although several
research efforts report positive effects of clicker use on students’ achievement (Addison, Wright,
& Milner, 2009; Hoffman & Goodwin, 2006; Kennedy & Cutts, 2005; Watkins & Sabella, 2008),
the empirical evidence suggested by Mayer et al. (2009) that is needed to corroborate existing
results and substantiate any claims for using clickers requires additional studies. This study aims
to provide evidence from university physics classes.
Review of Related Literature
General Clicker Device Features and Uses
In general, clicker devices have a keypad (alpha, numeric, or alpha/numeric buttons) resembling a
television remote control device or small electronic calculator. Using presentation software, the
instructor poses a question (multiple choice or true-false formats). Students respond by selecting
their answer choice and using the corresponding button on their devices. After students submit
Majerich, Stull, Jansen Varnum, Gilles, & Ducette
their responses, a receiver and related clicker software collect and tabulate the students’ electronic
signals. A computer is used to display the collective results graphically at the front of the classroom. Clicker systems save the responses from each student, and the data can also be exported to
spreadsheet software. For some examples of the specific features of clicker devices, as well as related technology and software compatibility requirements, see Barber and Njus (2007) and Burnstein and Lederman (2003).
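The collect-and-tabulate step described above can be sketched in a few lines. This is an illustrative sketch only; the answer choices and counts below are invented, not data from the study.

```python
from collections import Counter

# Hypothetical clicker responses collected by the receiver for one
# multiple-choice question (choices A-D).
responses = ["A", "C", "C", "B", "C", "D", "A", "C"]

tally = Counter(responses)
total = len(responses)

# Percentage distribution of the kind displayed graphically
# at the front of the classroom.
distribution = {choice: round(100 * count / total, 1)
                for choice, count in sorted(tally.items())}
print(distribution)  # {'A': 25.0, 'B': 12.5, 'C': 50.0, 'D': 12.5}
```

The same per-student records could then be written out to spreadsheet software, as the clicker systems described here allow.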
‘Clickers’ (Duncan, 2005) are classified as a type of instructional technology that affords students
multiple opportunities to participate in their own learning processes, be actively engaged with the
course content and the instructor, and receive frequent assessment feedback in real time. Both the
students and the instructor can see the level of course material learned (Bergtrom, 2006; Duncan,
2005). Although clickers can be used for simply taking attendance or quizzing students to see if
they have prepared for class, it has been suggested that they are most effective when used to challenge students to think about their understanding of the material under discussion (Barber & Njus,
2007; Duncan, 2005).
In a comprehensive review of the literature and empirical research on the use of clicker questions,
Caldwell (2007) identified nine general strategies. Presented here in summary, clickers were used:

• to increase or manage interaction
• to assess student preparation and ensure accountability
• to find out more about students’ opinions and attitudes
• for formative assessment (e.g., assess students’ understanding, determine future direction of lecture)
• for quizzes and tests
• to do practice problems
• to guide thinking, review, or teach
• to conduct experiments on or illustrate human responses; and,
• to make the lecture fun (pp. 10-11).
When clickers were used as a type of formative assessment, the results revealed students’ misunderstandings of the course material (Wood, 2004), determined students’ readiness to continue
after solving a problem (Poulis, Masses, Robens, & Gilbert, 1998), and afforded opportunities to
self-assess their understandings at the end of class (Halloran, 1995). Although these studies have
shown improved student learning from clicker use, comparisons among these studies and others
(MacArthur & Jones, 2008) are compromised as researchers have used different methods or failed
to report them at all. This study aims to contribute evidence from university physics classes when
clicker-based quizzes were used as formative assessments, whereby a protocol for the formative
assessment procedure is made explicit.
Learning Objects and Interoperability
“Learning object” (LO) is a broad conceptual term that can possess multiple meanings (Bergtrom,
2006; Thompson & Yonekura, 2005). LOs can include animation and video segments all the way
to complete modules or lessons (Thompson & Yonekura, 2005). While a broad conceptualization
of LOs advanced by Harman and Koohang (2006) focused on online discussion boards, Bergtrom
(2006) extended their conceptualization to include clickers. Clickers “are learning objects in the
same sense that performance art is art – they have learning (or in the case of art, aesthetic) value
despite their transience” (p. 108), even though clickers are confined to the in-class presentation.
For the purpose of this paper, the conceptualization of LOs advanced by Harman and Koohang
(2006), and extended by Bergtrom (2006) to include clickers, is used.
Contrary to the general idea that learning objects are interacted with individually or with a group,
the use of clickers requires instructor facilitation. For the present study, clicker quizzes were presented to the students as PowerPoint™ slides. In the most fundamental sense, each clicker quiz slide could be an interactive LO mediated by the instructor (Bergtrom, 2006). Students were presented with a slide, asked to solve a physics problem, and then submitted their answers using their InterWrite™ Personal Response System clicker devices. However, clickers are more than the hand-held device students use to send their “votes” (answers) to a computer-based analysis-report-display system. As the results are discussed by the instructor and the students, the instructor guides the students towards knowledge and skills that parallel the knowledge and skills used
by experts in the field (Mintzes, Wandersee, & Novak, 1998). During the discussion, both the
instructor and students can see whether learning has occurred. For the purpose of the present study, a “clicker
episode” begins with the presentation of a clicker question and ends when the students understand
the principles underlying the question.
In this research, each clicker quiz was a multiple choice question from a specific topic. When
clicker questions are used, they “allow the assembly/disassembly of broad subject matter into
component structural elements, ideas, concepts, and ways of thinking” (Bergtrom, 2006, p. 2).
The content for each of the clicker quizzes is self-contained, i.e., instructional message aligned to
a specific learning objective, and could be used at any time during the teaching-learning sequence
to assess students’ understanding of material presented to them. Furthermore, the ‘chunking’ of
broad topics into multiple subtopics creates learning objects of fine granularity (Bergtrom, 2006;
Fournier-Viger, Najjar, Mayer, & Nkambou, 2006). Since clickers are meant to be managed and
administered throughout the learning process to support student learning and inform how instruction needs to be changed in order to accommodate students’ needs, their use can facilitate formative assessment (Bergtrom, 2006; Crumrine & Demers, 2007). Clickers have an educational purpose (McGreal, 2004) and have pedagogical value, are learner-centered, and can be contextualized by the students (Duncan, 2005; Harman & Koohang, 2005). In addition, the LOs can be encapsulated, stored, and reused in appropriate contexts.
Science Education Reform and Assessment
Discussions of the role of assessments frequently take center stage in the arena of science education reform debates. As delineated in the National Science Education Standards (NRC, 1996),
“assessments provide an operational definition of standards, in that they define in measurable
terms what teachers should teach and students should learn” (pp. 5-6). Furthermore, “when students engage in assessments they should learn from those assessments” (p. 6). Extended to colleges and universities undergoing undergraduate science education reform (Siebert & McIntosh,
2001), this perspective suggests that teaching, assessment, and curriculum are mutually reinforcing and need to be aligned in order to optimize learning experiences and maximize student learning outcomes. While the curriculum is already established in many college and university
courses, and if assessment and learning are two sides of the same coin (NRC, 1996), it would
seem reasonable that administering frequent assessments, analyzing their results, and sharing
them with students, could inform changes to instruction needed in order to accommodate learners’ needs for continued learning.
As generally understood, assessment is used by most instructors to determine what learning has
occurred when compared to course expectations and is the basis for the assignment of grades to
overall achievement. This type of assessment is summative and is the measurement of achievement at the end of a teaching-learning sequence. Assessment is formative when frequent evidence during the students’ learning process is gathered and analyzed, where the results inform
changes needed to instruction in order to meet students’ needs, and provide students with feedback about their learning (Black & Wiliam, 1998). The results of formative assessments have
been described as providing ‘snapshots’ of what students know and can do at specific junctures in
their learning processes (Treagust, Duit, & Fraser, 1996). Where traditional assessments have
been criticized for merely gauging what students know instead of probing what students know
(McClymer & Knoles, 1992; McDermott, 1991; Mintzes & Leonard, 2006; Pride, Voos, &
McDermott, 1997), feedback from formative assessments can align teaching with learning (Black
& Wiliam, 1998; Yorke, 2003). The feedback to the instructor illuminates both changes needed
in instruction and the degree to which instruction was successful. Feedback to the students helps highlight problem areas and provides reinforcement for continued learning. Since formative assessment has been identified as a key predictor of student achievement (Black & Wiliam, 1998;
Bransford et al., 2000), its use has been recommended for integration into curriculum as part of
the learning process whereby students can self-regulate their own learning (Nicol & Macfarlane-Dick, 2006).
Six-Stage Model of Formative Assessment
The model of a formative assessment that informs this present study is derived from the theoretical perspective described by Yorke (2003). The model is dynamic, recurs throughout the teaching-learning sequence, and has been modified and used elsewhere (Stull, Schiller, Jansen Varnum, & Ducette, 2008). The model is conceptualized as having six stages. Specifically, the instructor develops a lesson and related assessment based on the students’ preparedness and prior
knowledge (Stage 1). The instructor presents the lesson (Stage 2). The instructor administers an
assessment (Stage 3). Together the instructor and students consider the assessment results (Stage
4). Dialogue between the instructor and students begins (Stage 5). Thereafter, the instructor determines if reinstruction is warranted for the previously taught lesson or proceeds to the next lesson (Stage 6). In this model, formative assessment is theorized and the connections between roles
of the instructor and students are made explicit. While the instructor determines when and how
often to administer formative assessments and modify instruction to optimize learning experiences to accommodate students’ needs, the key is to provide sufficient informative feedback to
students so that students can chart their development, adjust their learning styles, and maximize
their learning (Yorke, 2003). While some recommended that formative assessment must be continuous (Brown, 1999; Heritage, Kim, Vendlinski, & Herman, 2009), it has been suggested that it
“can be very occasional, yet still embody the essential supportiveness towards student learning”
(Yorke, 2003, p. 479).
It has been argued that the time has come for formative assessments to receive greater prominence in the learning process (Black & Wiliam, 1998; Bransford et al., 2000; Layng, Strikeleather, & Twyman, 2004); however, in reality, a combination of formative and summative assessments should be incorporated into the overall teaching-learning sequence (Black & Wiliam,
1998). In doing so, the instructor must switch roles from being a supporter of learning to judging
the students’ overall achievement (Ramsden, 1992). It is through the analysis of frequent formative assessment results and continued dialogue with the students that the instructor will gain a
sense of when the shift in roles needs to occur.
Methodology
Learning Environment
This study was conducted to determine the effect that increased feedback from clicker episodes
(formative assessment) had on students’ physics achievement (summative assessment) for students who used clickers when compared to students who were nonusers.
Students who enrolled in this physics course were mostly science and health profession majors
and took this course to fulfill either a university core requirement or a major requirement. Taught
in the large lecture hall, enrollment numbers generally range between 150 and 250 students per
course. While all students are taught together during the lecture by the same instructor, the students are required to register for recitation and laboratory sections which generally have 25-40
students and are taught by other instructors. The lecture textbook is used as a curriculum guide
and is the source for course content and assigned problem sets. Lecture examinations require that
students recall knowledge (facts, concepts, theories, laws, principles, generalizations, models),
but mostly solve problems. The cumulative final examination requires that students also recall
knowledge, but mainly to solve problems.
Methods and Subjects
This study was conducted at a large, public, urban university in the mid-Atlantic region. Data
were obtained from two fifteen-week introductory physics courses that met twice a week for 80
minute periods over two semesters taught by the same instructor. In the fall and spring semesters
of the course, respectively, 157 and 152 students participated. The fall semester course was traditionally taught and the following spring semester course had clicker episodes (formative assessments) integrated into the instruction. Each learning object episode began with a multiple-choice
question associated with a specific course topic, followed by a discussion of the results. The results of the clicker-based questions were collected, tabulated, and results displayed for students at
the beginning of the next scheduled class. Problem areas were identified and provided the topic
for discussion for the instructor and students. Based on the discussion, the instructor made appropriate adjustments to the instruction. In the end, the spring semester students (clicker group)
completed a total of seven formative assessments during weeks 5-7, 9-11, and 13.
In addition to the “clicker quizzes”, attitudinal and behavioral surveys were administered at the
beginning and the end of the semester as well as pre/post tests which included physics and mathematics questions (week 1). Among the attitudinal data collected were the students’ perceptions about the usefulness of class activities (group work and group grade, student-led whole
class discussions, model making, descriptions of reasoning, investigations, presentations, selfevaluation of learning, decisions about course activities, assessments results modifying what is
taught) and hours spent on activities (watching television, playing computer games, socializing,
doing jobs at home, playing sports, working, attending classes, and doing homework). Students
took two course examinations (weeks 6 & 11) and a cumulative final examination (week 15).
The cumulative final examination contained a set of questions representative of the major course
topics discussed over the semester. The fall semester students comprised the control group. All
protocols in this study were approved by the university’s institutional review board.
Results
Equivalent Groups
Both groups suffered the loss of students. The attrition rates for the control and clicker groups
were 20.4% and 23.0%, respectively; however, the difference of proportions was not significant.
It is expected that the more challenged students have a higher probability of withdrawing from
the class. Accounting for self-selection bias, it is acknowledged that the groups’ content and skill
sets should be better at the end of the course than at the beginning.
Maximum possible points for the physics and mathematics pretests were 7 and 25 points, respectively. Pretest scores were determined by applying a two-point rubric (1=correct solution;
0=incorrect or no solution) to students’ solutions. Points for each pretest were summed separately. Pretest scores were converted to percentages based on the number of correct solutions
divided by the total number of questions. Results of the pretests given at the beginning of the
semester revealed the control group’s pretest physics percentage scores (M=31.4%, SD=11.3%)
were higher than the clicker group (M=30.7%, SD=11.3%), but the difference was not statistically significant. The clicker group’s pretest mathematics percentage scores (M=57.3%,
SD=23.1%) were higher than the control group’s (M=56.8%, SD=21.5%), but again the difference was not statistically significant. Based on these results, the groups were considered equivalent.
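The rubric-to-percentage conversion described above can be sketched as follows; the student answer pattern is hypothetical.

```python
# Convert summed two-point-rubric scores (1=correct, 0=incorrect or no
# solution) to a percentage, as done for the physics (7-item) and
# mathematics (25-item) pretests.
def pretest_percentage(item_scores):
    """item_scores: list of 0/1 rubric points, one per question."""
    return 100.0 * sum(item_scores) / len(item_scores)

# A hypothetical student who solved 2 of the 7 physics items correctly:
physics = [1, 0, 0, 1, 0, 0, 0]
print(round(pretest_percentage(physics), 1))  # 28.6
```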
Regression Analyses
Regression analysis assumes interval-level variables; however, categorical variables can be accommodated. When the dependent variable is nominal, discriminant function analysis is the appropriate form to use; if the dependent variable is ordinal, logistic regression is the correct form. When categorical variables (either nominal or ordinal) serve as independent or predictor variables, the appropriate technique is to decompose each variable into “dummy” variables where the option is either “present” or “not present.” For example, if social class were categorized into “upper,” “middle,” and “lower,” three separate variables would be made and then, to avoid statistical problems (such as a variable being “forced” out of the analysis), up to two could be entered into the analysis. Each coefficient is interpreted in comparison to the group or groups excluded from the analysis (Gujarati, 1988).
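The dummy-coding scheme just described can be sketched directly, using the social-class example from the text; the category labels and observations are illustrative.

```python
# Decompose a categorical predictor into dummy variables, leaving one
# category ("lower") out as the reference group so that at most two of
# the three dummies enter the regression.
social_class = ["upper", "middle", "lower", "middle", "upper"]
categories = ["upper", "middle"]  # "lower" excluded as the reference

dummies = {
    c: [1 if value == c else 0 for value in social_class]
    for c in categories
}
print(dummies["upper"])   # [1, 0, 0, 0, 1]
print(dummies["middle"])  # [0, 1, 0, 1, 0]
```

Each dummy's coefficient is then read as the difference relative to the omitted "lower" group.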
While the use of ANOVA in this context will determine if one group is different from another,
that is all it will tell (Knoke, Bohrnstedt, & Mee, 2002). Regression analysis allows for testing a
more complex model, one in which confounding differences between or among groups can be
addressed as predictor and control variables can be entered. Also, the size of the effect, i.e., the change in the dependent variable produced by a one-unit change in the predictor, can be captured. Unstandardized coefficients are necessary as they are the only means of capturing these “effect size” contributions of the variables. The standardized coefficients give only the strength of a relationship. There may
be a very strong relationship between two variables, but the size of the effect may be very small.
Regression analysis was used to control for differences among students and to quantify the effect
of clicker use. In the model for predicting the students’ physics achievement, the dependent variable was the student’s final examination score and the independent variables were the physics/mathematics pretest score, the number of clicker quizzes taken, whether the course was a required one, the number of different types of assessments the student had previously experienced,
the number of hours per week the student reported working, and whether the student was male. In
specifying the model, the percentage of correct answers on each quiz was entered, but never proved significant. These percentages were also averaged over all of the quizzes and then entered into the regression analyses; this too never proved significant. They were therefore dropped from the analyses. Table 1 summarizes the means and standard deviations.
Table 1. Means and Proportions of Central Tendencies and Dispersion for Variables Predicting Students’ Physics Achievement (N=329)

Variable | M | SD | Minimum | Maximum
Physics/Mathematics pretest score, 0-200 | 87.90 | 27.96 | 12.50 | 166.67
Number of clicker episodes taken, 1-7 | 1.79 | 2.42 | 0 | 7.00
Was this a required course? (1=Yes, 0=No) | .92 | .27 | |
Number of types of assessments student had experienced, 0-9 | 8.18 | 2.12 | 0 | 9.00
Number of hours student works at a job per week, 0-40 | 5.05 | 6.13 | 0 | 40.00
Male student dummy (1=Yes, 0=No, female student) | .63 | .48 | |
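The regression model specified above (final examination score regressed on the pretest score, clicker episodes, and the control variables) can be sketched as an ordinary least squares fit. The data below are synthetic and fabricated purely for illustration; only the variable names and the 1.756 per-episode effect (built into the simulated scores) come from the article, so the sketch does not reproduce the published fit.

```python
import numpy as np

# Synthetic data on the article's variables (all values fabricated).
rng = np.random.default_rng(0)
n = 300
pretest  = rng.uniform(12.5, 166.7, n)   # physics/mathematics pretest score
episodes = rng.integers(0, 8, n)         # clicker episodes taken (0-7)
required = rng.integers(0, 2, n)         # required-course dummy
n_assess = rng.integers(0, 10, n)        # types of assessments experienced
hours    = rng.uniform(0, 40, n)         # hours worked per week
male     = rng.integers(0, 2, n)         # male-student dummy

# Simulate final-exam scores with a built-in per-episode effect of 1.756.
final = 58.0 + 1.756 * episodes - 1.88 * n_assess + rng.normal(0, 10, n)

# OLS via least squares: intercept plus the six predictors.
X = np.column_stack([np.ones(n), pretest, episodes, required,
                     n_assess, hours, male])
coef, *_ = np.linalg.lstsq(X, final, rcond=None)
print(round(float(coef[2]), 2))  # recovered clicker-episode effect, near 1.756
```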
Of these variables, the pretest score was included to control for differences in students’ abilities.
Whether the course was a required one was included to control for student interest variations. The
number of different types of activities experienced by the student in class that they thought were
helpful was included. The student’s gender was also included as a control as research has shown
that males have a greater interest in physics than do females (Halpern, Benbow, Geary, Gur,
Hyde, & Gernsbacher, 2007; Marsh, Trautwein, Lüdtke, Köller, & Baumert, 2005; Seymour &
Hewitt, 1997). Lastly, the number of hours per week the student worked was included to control
for SES differences and the amount of time the student could devote to studying. Table 2 summarizes the results of the regression analysis.
Table 2. Summary of Regression Analysis for Variables Predicting Students’ Physics Achievement

Model | B (Unstandardized) | Beta (Standardized) | t | Sig.
Physics/Mathematics pretest score | | -.059 | -.952 | ns
Number of clicker episodes taken | 1.756 | .230 | 3.298 | .001
Was this a required course? | -1.799 | -.030 | -.485 | ns
Number of types of assessments student had experienced | -1.881 | -.413 | -5.895 | .000
Number of hours student works at a job per week | -.109 | -.036 | -.594 | ns
Male student dummy | -.624 | -.017 | -.277 | ns
(Constant) | 58.142 | | 9.567 |
The R square equaled .33, indicating that the included variables explained 33% of the variation in
the dependent variable. In the end, two variables proved significant – the number of clicker episodes and the number of different types of assessments the student had experienced. In all, there
were seven clicker episodes. The regression results indicate that, controlling for all of the entered variables, each additional clicker episode the student took raised the final grade by 1.756
points. Thus, if a student took all seven of the “clicker quizzes,” the final grade would have been
12.3 points higher, a difference of a grade. Interestingly, how well the student did on these “clicker quizzes” never proved significant. The number of different types of assessments the student
had experienced (e.g., group work done with one grade assigned to the group; participation in
whole-class discussions where instructor talked less than the students; descriptions written of student’s own reasoning; investigative activities performed including data collection and analysis;
presentations designed and made by students to learn class concepts; the extent of student’s own
learning evaluated; decisions about course activities voiced by students; student assessment results modified what was taught and how) negatively related to how well they did on the final exam. Perhaps what is needed is consistency in assessing learning.
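The per-episode effect just reported implies the 12.3-point difference cited earlier; the arithmetic is simply the coefficient times the number of quizzes:

```python
# The reported per-episode effect applied over all seven clicker quizzes.
b_episode = 1.756        # unstandardized coefficient from Table 2
episodes_taken = 7       # all seven "clicker quizzes"

predicted_gain = b_episode * episodes_taken
print(round(predicted_gain, 1))  # 12.3 points, as reported in the text
```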
Different instructional strategies resonate with students. To understand who benefitted more from
the inclusion of these clicker episodes into the course, a residual analysis was performed. First,
the actual final course score was subtracted from what was predicted in the regression analysis
and then the students were separated into those who did well above what was predicted (70th percentile and above), those in the middle of the distribution, and those who performed well below
expectations (30th percentile and below). It should be noted that those in the “better than predicted” group need not have done well. They may have actually been in the lower part of the
grade distribution. What is important is that they did significantly above what was predicted in
the regression analyses. Also, students in the “worse than predicted” group may have earned good
grades, but their grades were not as high as what was predicted in the regression analyses. In using the regression in this manner, differences in preexisting abilities are “controlled for.” The coefficients capture the “value added” by what was done in the course. See Figure 1 for further details.
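The residual analysis described above can be sketched as follows. The scores are hypothetical, the percentile cutoffs use a simple nearest-rank rule rather than a statistics package, and the sign convention here (actual minus predicted, so a positive residual means better than predicted) mirrors the grouping described in the text.

```python
# Hypothetical actual and regression-predicted final course scores.
actual    = [72, 65, 88, 55, 91, 60, 78, 83, 49, 70]
predicted = [70, 70, 80, 60, 85, 64, 77, 80, 55, 69]
residuals = [a - p for a, p in zip(actual, predicted)]

# Nearest-rank 30th/70th percentile cutoffs on the residuals.
sorted_r = sorted(residuals)
low_cut  = sorted_r[int(0.3 * len(sorted_r))]
high_cut = sorted_r[int(0.7 * len(sorted_r))]

# Classify each student relative to the regression prediction.
groups = ["above expectations" if r >= high_cut
          else "below expectations" if r <= low_cut
          else "middle"
          for r in residuals]
print(list(zip(residuals, groups)))
```

Note that a student in the "above expectations" group may still hold a low absolute grade; only the gap between actual and predicted performance matters here, exactly as the text cautions.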
Figure 1. Regression plot
On both the pre-attitudinal and behavioral surveys, the students were asked about how they allocated their time in an average week. The categories were watching television and videos, playing
computer games, socializing or talking with friends outside of school, doing jobs at home, playing sports, working, attending classes, and doing homework or other class related activities. Chi
square tests indicate no statistically significant difference between those who did better than predicted and those who did worse than predicted. The students were also asked about how important grades were to them. On this variable, “How important are good grades to you?,” students
selected among the following responses: “Not important, Somewhat important, Important, or
Very important.” The two groups did differ on this variable; grades were more important to those
who did better than expected than those who did not as shown in Figure 2.
Figure 2. Distribution of Student Responses to How Important Grades Are by
Whether They Were Well Below Expectations or Were Well Above Expectations
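The chi-square comparisons reported above can be sketched by hand for the grade-importance item. The contingency counts below are hypothetical, not the study's data; they merely illustrate the computation and a significant result like the one shown in Figure 2.

```python
# Chi-square test of independence for group (rows) by response
# category (columns: Not, Somewhat, Important, Very important).
rows = {
    "better than predicted": [2, 5, 14, 29],   # hypothetical counts
    "worse than predicted":  [6, 12, 15, 12],  # hypothetical counts
}

col_totals = [sum(c) for c in zip(*rows.values())]
grand = sum(col_totals)

chi2 = 0.0
for counts in rows.values():
    row_total = sum(counts)
    for observed, col_total in zip(counts, col_totals):
        expected = row_total * col_total / grand
        chi2 += (observed - expected) ** 2 / expected

critical = 7.815  # chi-square critical value for df = 3, alpha = .05
print(chi2 > critical)  # True for these counts: the distributions differ
```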
Conclusion
While there is an abundance of anecdotal information that advocates the use of clickers to improve student achievement in the science classroom, this study offered results to substantiate the
claim. It is apparent that integrating clicker episodes, in this case weekly formative assessments
consisting of multiple choice questions with in-class discussion of results, had a significant effect
on student achievement. On average, students who used clickers achieved significantly higher
scores on the cumulative final examination compared to the other group. The regression results
quantified the effect. In sum, using clicker episodes did prove to be positively associated with
improved achievement, but this is offered with caution as learning is a complex process and more
data are needed on students’ attitudes and behaviors.
However, there are some unresolved issues still to be addressed. While the students in both
classes performed equally well on the first examination without using clickers, lower scores on
the second examination were obtained by the students who used clickers. To what extent are
there delayed effects on students’ learning and their metacognitive learning when using clicker
episodes? How lasting are the effects of clicker use? Does clicker use apply equally well to all
learning situations? These issues need to be studied as additional empirical evidence is gathered
to support the use of clickers to improve student achievement and to corroborate anecdotal information about their use. In using clickers, instructors can uncover more about their students and
what their students know about themselves during the learning process, and can be better informed about changes to instruction needed to promote student learning and achievement in the
science classroom.
References
Addison, S., Wright, A., & Milner, R. (2009). Using clickers to improve student engagement and performance in an introductory biochemistry class. Biochemistry and Molecular Biology Education, 37(2), 84-91.
Barber, M., & Njus, D. (2007). Special feature: Clicker reviews. CBE-Life Science Education, 6, 1-20.
Bereiter, C., & Scardamalia, M. (1993). Surpassing ourselves: An inquiry into the nature and implications
of expertise. Chicago, IL: Open Court Publishing.
Bergtrom, G. (2006). Clicker sets as learning objects. Interdisciplinary Journal of Knowledge and Learning
Objects, 2, 105-110. Retrieved from http://ijello.org/Volume2/v2p105-110Bergtrom.pdf
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7-74.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain, mind, experience, and
school. Washington, DC: National Academy Press.
Brown, S. (1999). Institutional strategies for assessment. In S. Brown & A. Glasner (Eds.), Assessment matters in higher education: Choosing and using diverse approaches (pp. 3-13). London: Routledge Press.
Burnstein, R. A., & Lederman, L. A. (2003). Comparison of different commercial wireless keypad systems.
Physics Teacher, 41, 272-275.
Caldwell. J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE-Life
Science Education, 6, 9-19.
Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69, 970-977.
Crumrine, T., & Demers, C. (2007). Formative assessment: Redirecting the plan. The Science Teacher,
74(6), 64-68.
20
Majerich, Stull, Jansen Varnum, Gilles, & Ducette
Cutts, Q. (2004, November/December). Using an electronic voting system to promote active reflection on
coursework feedback. Paper presented at the 4th International Conference in Education, Melbourne,
Australia.
Draper, S. W., & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system.
Journal of Computer Assisted Learning, 20(4), 81-94.
Duncan, D. (2005). Clickers in the classroom: How to enhance science teaching using classroom response
systems. San Francisco: Pearson/Addison-Wesley.
Floden, R., Gallagher, J., Wong, D., & Roseman, J. (1995, April). Seminar on the goals of higher education. Paper presented at the annual conference of the American Association of Science, Atlanta, GA.
Fournier-Viger, P., Najjar, M., Mayers, A., & Nkambou, R. (2006). A cognitive and logic based model for
building glass-box learning objects. Interdisciplinary Journal of E-Learning and Learning Objects, 2,
77-94. Retrieved from http://www.ijello.org/Volume2/v2p077-094Fournier-Viger.pdf
Gujarati, J. (1988) Basic econometrics. New York, NY: McGraw-Hill.
Halloran, L. (1995). A comparison of two methods of teaching computer managed instruction and keypad
questions versus traditional classroom lecture. Computers in Nursing, 13(6), 285-288.
Halpern, D. F., Benbow, C. P., Geary, D. C., Gur, R. C., Hyde, J. S., & Gernsbacher, M. A. (2007). The
science of sex differences in science and mathematics. Psychological Science in Public Interest, 8, 1–
51.
Harman, K., & Koohang, A. (2005). Discussion board: A learning object. Interdisciplinary Journal of
Knowledge and Learning Objects, 1, 67-76. Retrieved from http://ijklo.org/Volume1/v1p067077Harman.pdf
Heritage, M., Kim, J., Vendlinski, T., & Herman, J. (2009). From evidence to action: A seamless process in
formative assessment? Educational Measurement: Issues and Practice, 28(3), 24-31.
Hmelo, C., & Williams, S. M. (1998). Learning through problem solving. The Journal of the Learning Sciences, 7(3 & 4).
Hoffman, C., & Goodwin, S. (2006). A clicker for your thoughts: Technology for active learning. New Library World, 107(1228/1229), 422-433.
Kennedy, G. E., & Cutts, Q. I. (2005). The association between students’ use of an electronic voting system
and their learning outcomes. Journal of Computer Assisted Learning, 21, 260-268.
Knoke, D., Bohrnstedt, G., & Mee, A. (2002). Statistics for social data analysis. Belmont, CA: Thomson/Wadsworth.
Latessa, R., & Mouw, D. (2005). Use of an audience response system to augment interactive learning.
Family Medicine, 37(1), 12-14.
Layng, T., Strikeleather, J., & Twyman, J. (2004, May). Scientific formative evaluation: The role of individual learners in generating and predicting successful educational outcomes. Paper presented at the
National Invitational Conference on The Scientific Basis of Educational Productivity, sponsored by the
American Psychological Association and the Laboratory for Student Success, Arlington, VA.
MacArthur J. R., & Jones, L. L. (2008). A review of literature reports of clickers applicable to college chemistry classrooms. Chemistry Education Research and Practice, 9, 187-195.
Marsh, H. W., Trautwein, U., Lüdtke, O., Köller, O., & Baumert, J. (2005). Academic self-concept, interest, grades, and standardized test scores: Reciprocal effects models of causal ordering. Child Development, 76, 397–416.
Mayer, R. E., Stull, A., DeLeeuw, K., Ameroth, K., Bimber, B., Chun, D., . . . Zhang, H. (2009). Clickers
in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Education Psychology, 3, 51-57.
21
Facilitation of Formative Assessments using Clickers
McClymer, J. F., & Knoles, L. S. (1992). Ersatz learning, inauthentic testing. Excellence in College Teaching, 3, 33-50.
McDermott, L. C. (1991). Millikan lecture 1990: What we teach and what is learned – Closing the gap.
American Journal of Physics, 59(4), 301-315.
McGreal, R. (2004). Learning objects: A practical definition. International Journal of Instructional Technology and Distance Learning, 1(9). Retrieved from http://www.itdl.org/journal/sep_04/article02.htm
Mintzes, J. J., & Leonard, W. H. (2006). Handbook of college science teaching. Arlington, VA: NSTA
Press.
Mintzes, J. J., Wandersee, J. H., & Novak, J. D. (1998). Teaching science for understanding: A human constructivist view. San Diego, CA: Academic Press.
NRC (National Research Council). (1996). National science education standards. Washington, DC: National Academy Press.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model
and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
Partnership for 21st Century Skills. (2010). 21st Century Learning Environments. Retrieved from
http://www.p21.org/documents/le_white_paper-1.pdf
Poulis, J., Masses, C., Robens, W., & Gilbert, M. (1998). Physics lecturing with audience paced feedback.
American Journal of Physics, 66(5), 439-441.
Pride, T. O., Vokos, S., & McDermott, L. C. (1997). The challenge of matching learning assessments to
teaching goals: An example from the work-energy and impulse-momentum theorems. American Journal of Physics, 66(2), 147-157.
Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.
Seymour, E., & Hewitt, N. M. (1997). Talking about leaving: Why undergraduates leave the sciences.
Boulder, CO: Westview Press.
Siebert, E. D., & McIntosh, W. J. (2001). Pathways to the science education standards. Arlington, VA:
NSTA Press.
Slater, T. F, Prather, E. E., & Zeilik, M. (2006). Strategies for interactive engagement in large lecture science survey classes. In J. J. Mintzes & W. H. Leonard (Eds.), Handbook of college science teaching
(pp. 45-54). Arlington, VA: NSTA Press.
Stull, J., Schiller, J., Jansen Varnum, S., & Ducette, J. (2008). The use of formative assessment in university level mathematics courses. Journal of Research in Education, 18, 58-67.
Sunal, D. W., Wright, E. L., & Day, J. B. (2004). Reform in undergraduate science teaching for the 21st
century. Greenwich, CT: Information Age Publishing.
Thompson, K., & Yonekura, F. (2005). Practical guidelines for learning object granularity from one higher
education setting. Interdisciplinary Journal of Knowledge and Learning Objects, 1, 163-179. Retrieved
from http://ijklo.org/Volume1/v1p163-179Thompson.pdf
Treagust, D.F., Duit, R. & Fraser, B (Eds.). (1996). Improving teaching and learning in science and
mathematics. New York: Teachers College Press.
Ueckert, C., & Gess-Newsome, J. (2006). Active learning in the college science classroom. In J. J. Mintzes
& W. H. Leonard (Eds.), Handbook of college science teaching (pp. 147-154). Arlington, VA: NSTA
Press.
Watkins, E., & Sabella, M. (2008). Examining the effectiveness of clickers in promoting learning by tracking the evolution of student responses. Proceedings of the 2008 Physics Education Research Conference, Edmonton, Alberta, CA, 1064, 223-226. doi: 10.1063/1.3021260
Wood, W. B. (2004). Clickers: A teaching gimmick that works. Developmental Cell, 6(6), 796-798.
22
Majerich, Stull, Jansen Varnum, Gilles, & Ducette
Yorke, M. (2003). Formative assessment in higher education: Moves towards theory and the enhancement
of pedagogic practice. Higher Education, 4(54), 477-501.
Biographies
David M. Majerich, Ed.D., is a science education researcher at the Institute for Schools and Society and an adjunct professor of science education at Temple University. His research interests include improving the way that science is taught using demonstrations in the large lecture hall, and studying the effects that infusing research-based models of teaching into science methods courses has on beginning teachers' understanding of science content and their perceptions about their ability to teach science. He holds an Ed.D. from Temple University.
Judith C. Stull, Ph.D., is an Associate Professor of Sociology at LaSalle University. Her teaching has concentrated on statistics and research methodology. For the past 15 years she has been involved in research on improving teaching and learning, and she has been a PI and co-PI on federally and state-funded education grants. She holds a Ph.D. from Boston College.
Susan Jansen Varnum, Ph.D., is a Professor of Chemistry at Temple University. She has published widely in scientific journals and has graduated 13 Ph.D. students. An analytical chemist, Dr. Varnum has directed numerous federally and locally funded projects designed to improve the educational opportunities of diverse student populations. She holds a Ph.D. from the University of Missouri.
Tiffany Gilles received her B.S. degree in Chemistry in 2005 and her M.Ed. degree in Educational Psychology in 2009. She has worked with science and science education faculty in developing experimental protocols for formative assessment in university-level science classes. In addition, she has participated in multiple curriculum design projects, including developing activities for chemistry teacher professional development and a city-wide after-school program. She is interested in science instruction, has significant experience in laboratory research, has two publications in the science literature, and has given numerous professional presentations in science and science education.
Joseph P. Ducette, Ph.D., is a professor in the Department of Psychological Studies in Education, Temple University. He holds a B.A. in Psychology from Wisconsin State University and a Ph.D. in Experimental Psychology from Cornell University. His research interests include attribution theory, diversity, urban education, early intervention, and learning disabilities.
Interdisciplinary Journal of E-Learning and Learning Objects
Volume 7, 2011
Modeling the Macro-Behavior of
Learning Object Repositories
Xavier Ochoa
Escuela Superior Politécnica del Litoral, Guayaquil, Ecuador
[email protected]
Abstract
Learning Object Repositories (LOR) are the result of the activity of hundreds or thousands of contributing individuals. It has been shown in previous work by the author (Ochoa & Duval, 2008) that LORs exhibit an interesting macro-behavior, mostly governed by long-tailed distributions. The shape of these distributions provides valuable information for the management and operation of LORs. However, the reason why these distributions appear is not known. This work proposes a simple model to explain this macro-behavior as the consequence of the very simple micro-behavior of individual contributors, more specifically their number, production rate, and lifetime in the repository. The model is formally presented and successfully validated against data from existing LORs. While simple, this model seems to explain most of the large-scale measurements as a function of the small-scale interactions. Finally, this work discusses the implications that this model has for the planning and maintenance of new and existing LORs.
Keywords: Learning Object Repositories, complex systems, long-tailed distributions
Introduction
The publication of learning materials in online repositories is usually regarded as a simple process. To publish, the contributor provides or uploads the material (or a reference to it), fills in some metadata about the material, and the material is then available in the repository for others to find and reuse. The contributor can repeat this process for as many materials as desired, for as long as he or she is interested in providing content to the repository.
These seemingly simple processes that determine the micro-behavior of contributors and consumers give rise to complex macro-behavior at the repository level once the contributions and preferences of hundreds or thousands of individuals are aggregated (Ochoa & Duval, 2008). For example, some learning object repositories grow linearly while others, with a similar number of contributors, grow exponentially. Also, the number of objects published by a given contributor is distributed differently depending on the kind of repository, but always follows a long-tailed distribution (Anderson, 2006).
Unfortunately, no research is available about how the micro-behavior of individuals is related to the observed macro-behavior of Learning Object Repositories. The fields of Bibliometrics and Scientometrics have been studying a similar problem: the process of paper publication in different venues (journals, conferences, repositories, etc.). In these fields, several models
have been proposed to attempt to explain the patterns observed in the data. For example, de Solla Price (1976) proposed "cumulative advantage" as a model to explain the inverse-power-law distribution, also known as the Lotka distribution (Coile, 1977), observed in the number of papers published by a scientist in a given field. Egghe and Rousseau (1995) and Egghe (2005) refined this notion with the "success breeds success" model. However, the models used for scientific publication cannot be transferred to learning object publication, because one of their main characteristics, the increasing rate of production observed in the most successful scientific contributors, has not been observed among learning material contributors (Ochoa & Duval, 2008). Nonetheless, the methodologies used to establish and validate these models are borrowed and re-used in the present study.
The present work proposes an initial model to explain the macro-behavior of LORs based on the characteristics of their contributor base. This paper is structured as follows: the modeling section presents previously unexplained characteristics of Learning Object Repositories that this work proposes to model; the next section formally defines and explains the model; the validation section studies the model, comparing its predictions against empirical data; and the paper ends with a discussion of the relevance of this model and of further research needed to improve it.
Modeling the Publication Process
In a previous work (Ochoa & Duval, 2008), several characteristics of the publication of learning objects were measured. That work used data collected from several sources:
• three Learning Object Repositories (LORp): Ariadne, Connexions, and Maricopa Exchange;
• three Learning Object Referatories (LORf): Merlot, Intute, and Ferl First;
• two Open Courseware sites (OCW): MIT OCW and OpenLearn; and
• one Learning Management System (LMS): SIDWeb.
The findings of that work can be summarized as follows:
• LORp and LORf grow linearly in the number of objects in two stages (bi-phase linearly), but OCW and LMS grow exponentially.
• Most LORp and LORf also grow bi-phase linearly in the number of contributors; OCW and LMS grow exponentially.
• The number of objects published by a given author follows a Lotka distribution with exponential decay in the case of LORp and LORf; OCW and LMS present a Weibull distribution.
• The rate at which contributors publish materials follows a Log-Normal distribution for all the repositories studied.
• The lifetime of the contributors (the time during which a contributor remains actively publishing material) is distributed exponentially for LORp and LORf, and according to a Weibull distribution in the case of OCW and LMS.
While these findings provided information about how to manage repositories, the quantitative study did not explain the connection between those measurements, or the reason why they are found in the first place. For example, Connexions, a LORp, shows linear growth in the number of objects but exponential growth in the number of contributors. The study also does not explain how the behavior of the contributors (publication rate and lifetime) is related to the behavior of the repository (repository growth and distribution of contributions).
This work tries to formulate a model that can reproduce the observed results with the smallest number of initial parameters. The objective of this model is to understand the relation between the micro-behavior (contributors publishing learning objects at a given rate during a given time) and the macro-behavior (repositories growing linearly, publication distributions having a heavy tail). Finally, this model could help us to adjust the initial parameters and simulate the macro-behavior that a hypothetical repository would have. For example, the model will help us to know what types of initial factors give rise to exponential growth.
This model is inspired by the ideas of Huber (2002). Huber modeled the distribution of the number of patents published among inventors using four variables: the Frequency (publication rate), the Career Duration (lifetime), the Poissonness (the degree to which they conform to a Poisson distribution), and the Randomness. While we use some of his ideas, the methodology used in this paper expands that model in two principal ways: 1) our model is capable of generating non-Lotka distributions, and 2) the predictive scope of our model is larger, including the growth function and the total size.
Definitions
The model is based on three factors. Two of them are directly related to the micro-behavior of the contributor: the rate of publication and the lifetime. The third factor is related to the number of contributors that a repository has at a given time. These factors are defined as follows:
• Publication Rate Distribution (PRD): This specifies how talent or capability is distributed among the contributor population. Mathematically, PRD(x) is a random variable that represents the probability that a contributor publishes one object every x days on average. For all the repositories studied in Ochoa and Duval (2008), the Log-Normal is a good approximation of this distribution, although any distribution can be used to test "what-if" scenarios.
• Lifetime Distribution (LTD): This specifies the amount of time that different contributors will be active in the repository. Mathematically, LTD(x) is a random variable that represents the probability that a contributor stays active in the repository for x days. Ochoa and Duval (2008) found that the Exponential, Log-Normal, and Weibull distributions seem to represent different types of contributor engagement.
• Contributor Growth Function (CGF): This is a repository-related factor that, for now, cannot be predicted. Mathematically, CGF(x) is a function that represents the number of contributors that the repository has after x days. Ochoa and Duval (2008) found that Bi-phase Linear and Exponential functions are a good approximation of the contributor growth.
While the initial factors can be formally defined (as distribution functions or growth functions), the process of deriving a formal model involves non-linear calculations (Huber, 2002) that make it unfeasible to obtain an exact mathematical solution (the resulting distribution) that can be easily interpreted. Therefore, a numerical computation is used to run the model. This approach, while less formal, is flexible enough to accommodate a greater range of initial factors.
The construction of the model can be described as follows:
1. The period of time, measured in days, over which the model is run is selected.
2. The Contributor Growth Function (CGF) is used to calculate the size of the contributor population at the end of that period.
3. A virtual population of contributors of the calculated size is created.
4. For each contributor, the two basic characteristics, publication rate and lifetime, are assigned:
4.1. First, a publication rate value, generated randomly from the Publication Rate Distribution (PRD), is assigned to each contributor.
4.2. Second, a lifetime value, generated randomly from the Lifetime Distribution (LTD), is also assigned to each contributor.
5. Once the virtual contributors' parameters have been set, each contributor is assigned a starting date. The number of contributor slots for each day is extracted from a discrete version of the CGF, and each contributor is assigned randomly to one of those slots. If the start date plus the lifetime of a contributor exceeds the final date of the simulation, the lifetime is truncated to fit inside the simulation period.
Once the simulated population has been created, the model is run. A Poisson process is used to simulate the discrete publication of learning materials; the lambda parameter required by the Poisson process is replaced by the contributor's publication rate, and the process is run for each day of the contributor's lifetime. The result of the model is a list containing the contributors, the number of objects that those contributors have published, and the dates on which those publications took place. From these data, the macro-behavior of the simulated repository can be extracted in the same way as for real repositories.
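As a concrete illustration, the construction steps and the Poisson run can be sketched in a few lines of numpy. This is our own minimal sketch, not the author's code: the function and parameter names are invented here, and the factor values are the Connexions-like ones from Table 1.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_repository(n_days, cgf, prd_sample, ltd_sample):
    """Run one simulated repository (construction steps 1-5 plus the Poisson run).

    cgf(day)       -> cumulative number of contributors after `day` days
    prd_sample(n)  -> n publication rates, in objects per day
    ltd_sample(n)  -> n contributor lifetimes, in days
    Returns the number of objects published by each contributor.
    """
    n_contributors = int(cgf(n_days))            # step 2: final population size
    rates = prd_sample(n_contributors)           # step 4.1: publication rates
    lifetimes = ltd_sample(n_contributors)       # step 4.2: lifetimes
    starts = rng.integers(0, n_days, size=n_contributors)  # step 5: start dates
    active = np.minimum(lifetimes, n_days - starts)        # truncate lifetimes
    # Poisson run: expected output per contributor is rate * active days.
    return rng.poisson(rates * active)

# Connexions-like initial factors (cf. Table 1).
published = simulate_repository(
    n_days=2000,
    cgf=lambda d: 5 * np.exp(1.2e-3 * d),                     # exponential CGF
    prd_sample=lambda n: rng.lognormal(-4.11, 1.36, size=n),  # Log-Normal PRD
    ltd_sample=lambda n: rng.exponential(1 / 0.0012, size=n), # Exponential LTD
)
print(published.size, int(published.sum()))  # contributors, repository size
```

From the per-contributor counts and publication dates, the contribution distribution and growth function of the simulated repository can then be extracted exactly as for a real one.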
In formal terms (Equation 1), the random variable N, representing the number of objects published by each contributor, is equal to PRD, the random variable representing the rate of production of the contributor, multiplied by LTD, the random variable representing the lifetime of the contributor in the repository. Given that solving the multiplication of random variables often involves the use of the Mellin transform (Epstein, 1948), and that the result is not always easily interpretable (Huber, 2002), this multiplication is solved through computational methods. Equation 2 shows the resulting distribution of N. The probability of publishing k objects is the combined probability of each contributor publishing k objects. Given that the production of a contributor is considered independent of the production of any other contributor, the combination of probabilities is converted into a product over the Nc contributors. To calculate the probability with which the ith contributor publishes k objects, we use the formula of the Poisson process with production rate Ri and lifetime Li randomly extracted from their corresponding distributions. This formula calculates the probability that the contributor publishes exactly k objects during her lifetime.
N = PRD × LTD    (1)

P(N = k) = ∏(i=1..Nc) P(Ni = k),  where P(Ni = k) = (Ri·Li)^k · e^(-Ri·Li) / k!    (2)
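Numerically, the per-contributor term in Equation 2 is simply the Poisson probability mass function with mean Ri·Li. A quick sketch, with illustrative values of our own choosing:

```python
import math

def p_publish_k(rate, lifetime, k):
    """P(contributor publishes exactly k objects): Poisson with mean rate*lifetime."""
    lam = rate * lifetime
    return lam ** k * math.exp(-lam) / math.factorial(k)

# A contributor who publishes one object every 50 days on average (rate 0.02)
# and stays active for 200 days has an expected output of lam = 4 objects.
probs = [p_publish_k(0.02, 200, k) for k in range(20)]
print(round(sum(probs), 6))  # the probabilities over k sum to (nearly) 1
```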
Model Validation
To validate the model, the simulated results are compared with the data extracted from real repositories. Three characteristics of the repositories are compared: 1) the distribution of the number of publications among contributors (N), 2) the shape of the content growth function (GF), and 3) the final size of the repository (S).
The repositories used for this evaluation are a subset of the repositories used in Ochoa and Duval (2008): Ariadne, Connexions, and Maricopa Exchange, representing the Learning Object Repositories (LORp); Merlot, representing the Learning Object Referatories (LORf); MIT OCW, representing the Open Courseware sites (OCW); and SIDWeb, the Learning Management System used at the Escuela Superior Politécnica del Litoral, representing the LMS. These data were captured between the 5th and the 8th of November 2007. To perform the evaluation, the initial factors were
taken from the data extracted from these repositories; they are presented in Table 1. These factors were fed into the model and used to run the simulation. To obtain a statistically meaningful comparison between the real data and the output of the model, 100 Monte-Carlo simulation runs were generated for each repository.
Table 1. Initial factors extracted from the empirical data of the studied repositories

| Repository | PRD(x) | LTD(x) | CGF |
|---|---|---|---|
| Ariadne (LORp) | Log-Normal (μlog=-3.25, σlog=1.27) | Exponential (λ=0.0010) | Bi-Phase Linear (slope1=0.02, slope2=0.06, breakpoint=1277) |
| Connexions (LORp) | Log-Normal (μlog=-4.11, σlog=1.36) | Exponential (λ=0.0012) | Exponential (λ=1.2x10^-3) |
| Maricopa (LORp) | Log-Normal (μlog=-5.18, σlog=0.95) | Exponential (λ=0.0012) | Bi-Phase Linear (slope1=0.06, slope2=0.28, breakpoint=1095) |
| Merlot (LORf) | Log-Normal (μlog=-2.47, σlog=1.11) | Exponential (λ=0.0015) | Bi-Phase Linear (slope1=0.12, slope2=0.54, breakpoint=401) |
| MIT (OCW) | Log-Normal (μlog=-1.68, σlog=1.07) | Weibull (k=1.72, λ=325) | Exponential (λ=3.7x10^-3) |
| SIDWeb (LMS) | Log-Normal (μlog=-2.57, σlog=0.96) | Weibull (k=1.21, λ=588) | Exponential (λ=1.8x10^-3) |
First, the distribution of publications (N) in the empirical and simulated data is compared. To obtain a meaningful comparison, the parameters of the distribution of the simulated data are estimated with the same methodology used to obtain the empirical measures (Ochoa & Duval, 2008). As expected, each simulated data set was assigned slightly different parameter values; however, these values were normally distributed. A simple t-test was applied to establish whether it is reasonable to assume that the parameters fitted to the empirical data set belong to the same population as the simulated parameters. If all the parameters of the empirical distribution belong to the same population as the simulated ones, it can be concluded that the empirical and simulated data sets have the same distribution. The p-values for the t-tests are provided in Table 2, together with the mean values of the simulated parameters.
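This per-parameter check can be sketched as a one-sample t-test of the empirically fitted value against the parameter estimates from the simulated runs. The numbers below are our own made-up illustration, not the paper's data:

```python
import math
import statistics

def one_sample_t(sample, popmean):
    """t statistic for H0: the mean of `sample` equals `popmean`."""
    n = len(sample)
    sd = statistics.stdev(sample)  # sample standard deviation
    return (statistics.mean(sample) - popmean) / (sd / math.sqrt(n))

# Hypothetical alpha estimates from 100 Monte-Carlo runs, scattered around
# 1.42, tested against an empirical alpha of 1.41.
sim_alphas = [1.42 + 0.2 * math.sin(7.0 * i) for i in range(100)]
t = one_sample_t(sim_alphas, popmean=1.41)
# |t| below the two-sided 5% critical value (about 1.98 for 99 degrees of
# freedom) means the empirical value is consistent with the simulated runs.
print(abs(t) < 1.98)
```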
For LORs, the model is able to accurately simulate the alpha value for all the repositories. The alpha parameter basically determines the general shape of the Lotka distribution. The rate parameter, on the other hand, has a more subtle effect: it determines the slight reduction in the probability of finding very productive contributors. The model does not seem able to consistently reproduce this value; the subtle effect that determines the exact value of the rate parameter is most probably lost in the simplifications of the model. An example of the simulation of the Connexions repository is presented in Figure 1.
Figure 1. Comparison between the Empirical and Simulated Distribution of the Contribution (N)
Table 2. Comparison between the empirical and simulated distribution of the number of objects published per contributor (N)

| Repository | N empirical | N simulated (average) | p-values |
|---|---|---|---|
| Ariadne (LORp) | Lotka exp. cut-off (α=1.57, λ=0.011) | Lotka exp. cut-off (α=1.58, λ=0.001) | p-α: 0.60; p-λ: 0.02 |
| Connexions (LORp) | Lotka exp. cut-off (α=1.35, λ=0.0094) | Lotka exp. cut-off (α=1.42, λ=0.0002) | p-α: 0.31; p-λ: 0.07 |
| Maricopa (LORp) | Lotka exp. cut-off (α=2.12, λ=0.0067) | Lotka exp. cut-off (α=2.39, λ=0.04) | p-α: 0.60; p-λ: 0.02 |
| Merlot (LORf) | Lotka exp. cut-off (α=1.88, λ=0.0006) | Lotka exp. cut-off (α=1.76, λ=0.002) | p-α: 0.28; p-λ: 0.10 |
| MIT (OCW) | Weibull (k=1.07, λ=40.5) | Weibull (k=0.68, λ=35) | p-k: 0.00; p-λ: 0.22 |
| SIDWeb (LMS) | Weibull (k=0.52, λ=17.14) | Weibull (k=0.60, λ=19) | p-k: 0.21; p-λ: 0.55 |
The shape of the MIT OCW Weibull distribution of publications seems to present a major challenge for the model. The almost horizontal head of the distribution cannot be accurately simulated with the current calculations, and the shape parameter is vastly underestimated. However, the tail of the distribution is reasonably matched by the simulated values, and the scale parameter is correctly estimated. The comparison between one simulation run and the empirical data can be seen in Figure 1. The model can, nonetheless, reproduce less extreme Weibull distributions, as can be seen in the estimation of the SIDWeb parameters.
The next step in validating the model is to compare the shape of the content growth function (GF) and the final size of the repository (S). For the GF evaluation, the daily simulated production of objects was counted across contributors. First, the count was fitted with the same methodology and functions used to obtain the empirical results in Ochoa and Duval (2008); the number of times that the correct function (e.g., Bi-Phase Linear) was selected as the best-fitting alternative was then counted. For the S evaluation, the total number of objects produced in each simulated data set was counted. The distribution of the final size follows a left-skewed distribution, so the Empirical Cumulative Distribution Function (ECDF) was used to calculate the chances that the empirical size came from the same population. The results of these evaluations are presented in Table 3.
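The size check can be expressed directly as an ECDF lookup. A sketch with invented numbers (the simulated sizes below are placeholders of our own, not the study's outputs):

```python
def ecdf_two_sided_p(sim_sizes, empirical_size):
    """ECDF-based two-sided plausibility of `empirical_size` under the
    simulated final sizes: twice the smaller tail fraction."""
    n = len(sim_sizes)
    below = sum(s <= empirical_size for s in sim_sizes) / n
    return 2.0 * min(below, 1.0 - below)

# Hypothetical simulated final sizes versus an empirical size of 5,134
# objects (cf. Connexions in Table 3).
sim_sizes = [4800 + 25 * i for i in range(40)]  # 4,800 .. 5,775
print(ecdf_two_sided_p(sim_sizes, 5134))
```

A value near 1 means the empirical size sits in the middle of the simulated sizes; a value near 0 means it falls in an extreme tail.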
The simulated growth functions agree with the empirical measurements most of the time (>50%). When the contributor base growth function is also Bi-Phase Linear (Ariadne, Maricopa, Merlot), the accuracy of the prediction is high (90% or higher). However, when an exponential contributor growth function is involved in the calculation (Connexions, MIT OCW, and SIDWeb), the identification rate decreases (60-80%). It is interesting to note that, thanks to the variability in the lifetime, exponential contributor growth does not necessarily mean exponential growth in the number of objects. However, as the misidentification rate shows, when there is exponential growth in the number of contributors, exponential growth in the number of objects is a viable
outcome. These effects can be observed in Figure 2, where a graphical representation of randomly simulated growth functions is presented.
The actual parameters of the Growth Function (GF) are not analyzed, as they varied widely from simulation to simulation. The implication of this variation is not clear; it could be that natural variation creates several types of growth from the same contributor population, or that this model, in its simplicity, does not take into account some relation between lifetime and production rate that is responsible for the shape of the function. More research is needed to settle this question.
Table 3. Validation of the Growth Function (GF) and the final Size (S) of the repositories

| Repository | GF Empirical | % GF Simulated = GF Empirical | S Empirical | Average S Simulated (p-value) |
|---|---|---|---|---|
| Ariadne (LORp) | Bi-Phase Linear | 100% | 4,875 | 5,516 (0.48) |
| Connexions (LORp) | Bi-Phase Linear | 73% | 5,134 | 6,052 (0.50) |
| Maricopa (LORp) | Bi-Phase Linear | 100% | 2,221 | 3,105 (0.36) |
| Merlot (LORf) | Bi-Phase Linear | 98% | 18,110 | 20,389 (0.61) |
| MIT (OCW) | Exponential | 65% | 53,880 | 48,320 (0.52) |
| SIDWeb (LMS) | Exponential | 76% | 21,675 | 25,443 (0.20) |
Finally, the final number of produced objects is compared when the simulations have been run for the same period of time as measured in the empirical data sets. As can be seen in Table 3, the size values for all the repositories were estimated correctly, even for repositories where the simulated and empirical publication distributions do not completely match (OCW). The reason for this resilience is that the tail of the distribution (or the head, in the case of OCW) is responsible for only a small fraction of the objects. If the simulation can match the head (or the middle section, in the case of OCW), where most of the objects are published, the total simulated output is similar to that of the original repository. These results support the use of this model to calculate growth and required capacity.
Figure 2. Comparison between the Empirical Growth Function (left)
and the Simulated Growth Function (right)
Conclusions and Implications
The model presented makes the simple assumption that the only variables that affect the characteristics of a repository are how frequently the contributors publish material (publication rate),
how much time they persist in their publication efforts (lifetime), and at what rate they arrive at
the repository (contributor growth function). The model combines those variables through a computational simulation that is capable of predicting other repository characteristics, such as the distribution of publications among contributors, the shape of the content growth function, and the
final size of the repository.
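The three factors above are enough to sketch the model as a small Monte Carlo simulation. The following fragment is an illustrative reconstruction, not the simulation code used in this study; the exponential distributions and the parameter values are assumptions made for the example.

```python
import random

def simulate_repository(months, arrivals, mean_rate=0.5, mean_lifetime=6, seed=1):
    """Toy version of the three-factor model: `arrivals(t)` is the
    contributor growth function (new contributors in month t); each
    contributor draws a publication rate (objects/month) and a lifetime
    (months of activity), and publishes until the lifetime expires."""
    rng = random.Random(seed)
    active = []                      # [rate, months_left] per contributor
    cumulative, total = [], 0.0
    for t in range(months):
        for _ in range(arrivals(t)):
            rate = rng.expovariate(1.0 / mean_rate)
            life = 1 + int(rng.expovariate(1.0 / mean_lifetime))
            active.append([rate, life])
        total += sum(rate for rate, _ in active)   # expected monthly output
        for contributor in active:
            contributor[1] -= 1
        active = [c for c in active if c[1] > 0]
        cumulative.append(total)
    return cumulative

linear = simulate_repository(48, lambda t: 10)               # constant arrival
expo = simulate_repository(48, lambda t: int(2 * 1.1 ** t))  # growing arrival
```

Under a constant arrival function the cumulative output settles into roughly linear growth once contributor departures balance arrivals, whereas a growing arrival function yields the kind of exponential content growth reported for MIT OCW and SIDWeb in Table 3.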
The model has been evaluated with the data presented in the analysis sections. From this evaluation, it can be concluded that the simple model is capable of simulating quite well the characteristics observed in real repositories based only on the initial factors. However, the simplicity of the model shows when it tries to simulate repositories with special characteristics, for example, repositories like OCW that have a small, low-publishing community. Nonetheless, the model can be used as it is to predict the future growth of current repositories or to simulate repositories with characteristics not seen naturally, for example, to explore what the publication distributions would look like if the publication rate were uniformly distributed. Improvements of this
model to include special cases, as well as interactions between the factors, are an interesting topic
for further research.
The most important implication that the development of this model has for LOR administrators and managers is that it provides a tool that can be used to predict the growth and behavior of the repository.
For example, based on the observed growth in the number of contributors, their rate of publication
and current lifetime, the model can be used to predict the number of objects that the repository
will have in the future. The model also shows that if the lifetime distribution is altered, the
growth will be immediately affected. For example, if the repository could retain its contributors
for longer periods of time, its growth could change from linear to exponential.
Finally, the most important characteristic of the proposed model is its testability. It would be easy
to construct competing models and test if they predict different characteristics of the repositories
and can handle special cases that the current model cannot. This testability provides a way to
measure progress in efforts to understand the nature and workings of the learning object publication process.
Biography
Xavier Ochoa is a professor at the Faculty of Electrical and Computer
Engineering at Escuela Superior Politécnica del Litoral (ESPOL) in
Guayaquil, Ecuador. He coordinates the research group on Teaching
and Learning Technologies at the Information Technology Center
(CTI) at ESPOL. He is also involved in the coordination of the Latin
American Community on Learning Objects (LACLO), the ARIADNE
Foundation, and several regional projects. His main research interests
revolve around measuring the Learning Object economy and its impact
on learning. More information at http://ariadne.cti.espol.edu.ec/xavier
Interdisciplinary Journal of E-Learning and Learning Objects
Volume 7, 2011
An Agent-based Federated
Learning Object Search Service
Carla Fillmann Barcelos and João Carlos Gluz
Interdisciplinary Program in Applied Computer Science (PIPCA) –
Vale do Rio dos Sinos University (UNISINOS), São Leopoldo, RS, Brasil
[email protected]; [email protected]
Rosa Maria Vicari
Informatics Institute, Federal University of Rio Grande do Sul
(UFRGS), Porto Alegre, RS, Brasil
[email protected]
Abstract
The cataloging process represents one of the greatest issues with respect to the use of learning objects because it is through this process that appropriate objects can be found by search engines. Incorrect cataloging makes the search process ineffective, and this situation is aggravated
when the objects are distributed and maintained in several distinct repositories. The present work
proposes the creation of an agent-based federated catalog of learning objects: the AgCAT system.
This system is part of the MILOS infrastructure, which will provide the computational support to
the OBAA metadata proposal, a Brazilian initiative to create a new learning object metadata standard, able to support the requirements of multi-platform adaptability, compatibility with current
standards, special needs accessibility, and technological independence of hardware and software
platforms. The paper presents the functional structure and organization of the AgCAT system,
showing its architecture, the main aspects of its prototype, and the main results obtained until now.
Keywords: Search Service, Learning Objects, Software Agents.
Introduction
The Brazilian Ministry of Education provides free digital pedagogical content by means of the
Virtual and Interactive Net for Education program (RIVED, 2009), distributing these objects
through the International Base of Educational Objects repository (BIOE, 2010). The main goal of
these programs is to aid in the development and distribution of electronic educational material by
using Learning Objects (LO) as the foremost technology to publish and disseminate such material. The material is formed by educational activities, which may contain multimedia resources, animations, and simulations. Locating a particular object in a repository is a difficult problem that depends on the proper indexation and cataloging of its material. This process corresponds to filling in the LO metadata with correct information. Metadata is information that describes the characteristics of certain documents, material, or LO. The main purpose of metadata is to be understood and used by people or software agents in cataloging, searching, and similar tasks (Taylor, 2003).
The cataloging and indexation process represents one of the greatest issues to locating educational contents, such as learning objects, because it is through this process that these objects can
be found through search engines. Incorrect LO cataloging or indexation causes inefficacy in
search processes. This situation is aggravated when LO are distributed and maintained in several
distinct repositories. The increase of LO production in Brazil (and around the world) by several
different institutions has shown the risk that the material remains unused by the general community, or at least sees very restricted use, limited to the members of the producing institution, unless a unified search mechanism exists that is capable of finding LO across the repositories of all these institutions. Currently there is no standard infrastructure that supports a unified search and retrieval of educational resources such as LO (CORDRA Management Group, 2009).
To address this situation, the present work proposes the creation of an agent-based federated
catalog of learning objects (AgCAT). The general objective of this system is to provide an infrastructure of federated LO catalogs that are able to help in the search and retrieval of these educational resources. The system will make intensive use of technologies from Distributed Artificial
Intelligence (DAI) and Multi-Agent Systems (MAS) research fields (Weiss, 1999; Wooldridge,
2002), seeking to optimize the LO search process. The system will use several protocols and
technologies to harvest metadata from LO repositories and digital libraries. Several AgCAT systems can also be federated, forming a federation of LO catalogs. The search for LO in the federation is transparent for its users. A query made in any federated AgCAT system is transparently
propagated to all other AgCAT systems in the federation. Therefore, apart from communication
delay, a query in any AgCAT system is equivalent to the same query in any other federated system. Only the search propagation protocol must be supported by each federated AgCAT system.
The administration and management of each federated AgCAT system is completely independent
from the other federated systems, allowing for different institutions to be included easily in the
federation.
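The propagation scheme just described can be illustrated with a toy federation. The class below is only a sketch of the idea; the names and the visited-set mechanism are our illustration, not the actual AgCAT search propagation protocol, which is FIPA-based.

```python
class AgCATNode:
    """Toy federated catalog node: answers a query from its local catalog
    and forwards the query once to every federated peer."""
    def __init__(self, name, catalog):
        self.name = name
        self.catalog = catalog        # {object_id: metadata_title}
        self.peers = []

    def federate(self, other):
        """Federation is symmetric: both nodes learn about each other."""
        self.peers.append(other)
        other.peers.append(self)

    def search(self, term, visited=None):
        visited = visited or set()
        if self.name in visited:
            return {}                 # this node already answered: stop
        visited.add(self.name)
        hits = {oid: t for oid, t in self.catalog.items()
                if term in t.lower()}
        for peer in self.peers:
            hits.update(peer.search(term, visited))
        return hits

a = AgCATNode("A", {"lo1": "Fractions tutorial"})
b = AgCATNode("B", {"lo2": "Fractions quiz"})
c = AgCATNode("C", {"lo3": "Algebra drill"})
a.federate(b)
b.federate(c)
```

Issuing the same query at any node of a connected federation returns the same set of hits; the visited set plays the role of the propagation protocol, guaranteeing that each federated system answers a given query only once.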
This work presents the functional structure and organization of the AgCAT system, showing the
system’s architecture, aspects of its prototype, and main results obtained until now. The next two
sections present a literature review of the main topics related to the present work, focusing on the metadata standards supported by AgCAT and the multi-agent technology that supports
the system. The following section describes the multi-agent architecture of the system, the organization of its agents, particular details about the formation of the directory federation, and the
metadata harvesting process. The last section presents the prototype of the system, and its first
results.
Metadata
For the Learning Technology Standard Committee (LTSC) at the Institute of Electrical and Electronics Engineers (IEEE, 2002), a learning object is any entity, digital or not, that can be used,
reused or referenced during a learning process. A learning object is digital or non-digital (mockup, image, film, etc.) content that can be used for an educational purpose, including, internally or
through association, suggestions of contexts in which it should be used. Such a view is also
adopted in the present study, despite being restricted to the case of digital entities.
The main property of LOs is their reusability. Such a characteristic can be achieved through
modularity, interoperability, and recovery. Modularity describes the degree of separation and subsequent recombination of LO components. Interoperability is the ability to operate in heterogeneous platforms. Recovery is related to the ability to be found due to its description of properties
and functionalities. These characteristics guide the efforts of several research groups and entities
aiming to propose standardizations to enable development and use of LO worldwide. Within this
context, the following initiatives stand out: IEEE LTSC (IEEE, 2002), IMS Global Learning Consortium (IMS, 2009), and Advanced Distributed Learning Initiative (ADL, 2001).
The purpose of adopting open standards is to obtain platform independence with respect to the exhibition or execution of objects, enabling the use of different operating systems and hardware platforms to make object content available. A metadata LO standard can be seen as a specification of a heading that provides information about the object. The data elements that comprise this heading are the metadata regarding the LO. Therefore, such a standard does not interfere with learning object content or rules, as it only groups metadata. For this reason, these standards have been widely used in distance learning resources such as the CAREO (Canadian), ABED, and RIVED (Brazilian) repositories, and in several standardization initiatives, for example, the ADL/SCORM certification. In this respect, the Brazilian government, through the Secretary of Distance Learning of the Ministry of Education and Culture (MEC), brought forth an initiative to generate a diversity of interactive multimedia educational resources in the form of Learning Objects, which has already resulted in the development and publishing of hundreds of didactic resources for computer and general use. These resources were developed by several teams of teachers and students at higher education institutions and are available in the BIOE of MEC (BIOE, 2010) and RIVED (RIVED, 2009) repositories.
Among the metadata standardization initiatives, the IEEE-LOM (Learning Object Metadata) is considered an open and internationally recognized standard, which facilitates the search, evaluation, construction, and use of LO. The IEEE-LOM is specified by the norm IEEE Std 1484.12.1-2002 and provides a data model for the metadata, normally codified in XML. This standard aims at specifying the syntax and semantics of information (metadata) concerning LOs.
These specifications enable cataloging educational material (through their metadata), considering
the diversity of cultural and linguistic contexts of LO creation and reuse. Thus, the objective is to
ensure efficient ways of identification, (re)use, management, interoperability, sharing, integration,
and recovery of these objects.
In practice, the IEEE-LOM standard defines a library of metadata that can be freely combined to
create the information heading of LOs. According to this standard, if the heading of some object is formed only by elements defined in the IEEE-LOM standard, then the object is considered “strictly complying with the standard.” Otherwise, if, in addition to data elements defined in the IEEE-LOM standard, the heading has other types of elements (extensions of IEEE-LOM), then the object is considered only as “complying with the standard.”
In addition to re-usability, another important aspect that can be used to evaluate and compare
metadata standards for learning objects (or metadata standards for other types of objects and content) is the degree of coverage of the information stored in metadata in relation to the applications
intended for the objects. Within this context, there is a low degree of coverage offered by the
Dublin Core Metadata Initiative (DCMI) unqualified metadata standard for the Web contents of
educational and pedagogical applications, when compared with the IEEE-LOM standard.
On the other hand, within the context of multimedia content conversion and adaptation into different platforms of digital content availability, such as Web, mobile devices, digital TV, and
game consoles, there is a relative lack of coverage offered by the IEEE-LOM standard compared
to the MPEG-7 standard or other similar standards of metadata for multimedia content. Similarly,
there was also a lack of coverage in the current metadata standards for Learning Objects, including IEEE-LOM, IMS-LOM, ADL, and DCMI relative to accessibility requirements for people
with special needs and also relative to specific educational issues within the Brazilian context.
The project OBAA (Learning Objects supported by agents) was developed by the UFRGS University in partnership with the UNISINOS University in response to a request by MEC and the
Brazilian Ministries of Communication and Science and Technology for research projects capable
of dealing with multi-platform interoperability issues of digital content in the educational context.
The main goal of the research was to establish a standardized specification of the technical and
functional requirements of a platform for production, editing, management, and distribution of
interactive digital content, chiefly LOs, to be used in education applications. The specification
should allow the interoperability of the contents among Web and Digital TV environments. The
objective of the OBAA project was to answer this need, also considering mobile devices and accessibility requirements. It reached these goals relying extensively on the convergence among the
multi-agent systems, Learning Objects and ubiquitous computing technologies, allowing the authoring, storage, and recovery of LOs in varied contexts and through different digital platforms.
The OBAA metadata proposal (Viccari et al., 2010) is one of the main results of the OBAA project and it defines an extension of the IEEE-LOM standard. This proposal provides several new
metadata, which allow objects’ interoperability among multiple digital platforms beyond the Web
platform, supporting new platforms like Digital TV and mobile devices. It also provides specific
metadata for accessibility and pedagogical issues.
The proposed metadata are intended to ensure freedom for the developer of pedagogical content so that the professional encounters no technological restrictions. The proposed set of metadata establishes a wide structure for cataloging, enabling different forms of application according to the needs of each LO designer. The metadata proposal followed the interoperability and functional requirements presented below:
• Adaptability: enables the same description of an object to be used in an inter-operable manner, adapting to the characteristics of each platform, that is, the system presents a different interface according to the device. Initially supported by the Web, Digital TV (DTV), and mobile platforms.
• Compatibility: the metadata standard should maintain compatibility with the current panorama of international standards, since it is important to interact with services previously developed through international standards.
• Accessibility: considering the right of universal access to knowledge, it is necessary to enable access to LOs by all members of society, including citizens with special needs.
• Technological Independence: the metadata standard should be flexible in order to support technological innovation, allowing extensions without losing compatibility with previously developed content.
The OBAA metadata proposal is an extension of the IEEE-LOM 1484.12.1 metadata standard,
adding new pedagogical requirements in addition to technological foundations to enable use of
LO in DTV and mobile devices. New metadata elements were added to technical and educational
metadata categories of IEEE-LOM, and two new categories of metadata were created: accessibility and multimedia segmentation metadata (see Figure 1).
Figure 1: Metadata Groups of the OBAA Proposal.
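As a rough illustration of how an OBAA heading extends a plain IEEE-LOM one, the sketch below models a record as a Python dict. All element and category names are illustrative stand-ins rather than the normative OBAA identifiers, and the conformance check compares category names only; a real check would inspect individual elements.

```python
# Hypothetical OBAA-style record: standard LOM categories plus the two new
# OBAA groups (accessibility and multimedia segmentation).
obaa_record = {
    "general": {"title": "Fractions tutorial", "language": "pt-BR"},
    "technical": {
        "format": "video/mp4",
        # OBAA-style extension inside an existing LOM category:
        "supported_platforms": ["web", "dtv", "mobile"],
    },
    "educational": {"interactivity_type": "active"},
    # categories added by the OBAA proposal:
    "accessibility": {"has_visual_alternative": True, "caption_language": "pt-BR"},
    "segment_information": [
        {"start": "00:00", "end": "02:30", "title": "Introduction"},
    ],
}

LOM_CATEGORIES = {"general", "lifecycle", "metametadata", "technical",
                  "educational", "rights", "relation", "annotation",
                  "classification"}

def complies_strictly(record, categories=LOM_CATEGORIES):
    """'Strictly complying' headings use only IEEE-LOM categories; records
    with extension categories, like OBAA ones, are merely 'complying'."""
    return set(record) <= categories

strict = complies_strictly(obaa_record)   # False: OBAA adds two categories
```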
Agents and Multiagent Systems
Artificial agents are computational entities that have autonomous behavior, belong to an environment, and can communicate with other agents in the same environment, whether artificial or
human. According to the definition by Wooldridge (2002), an agent is a computer system situated
in a particular environment that is capable of autonomous execution in order to attain its objectives. Agents are characterized, among other things, by autonomy, proactive behavior, and the
ability to communicate and work together forming multi-agent systems. Autonomy implies that
agents can carry out complex tasks independently. By being proactive, agents can take the initiative to accomplish a given task even without an explicit stimulus from a user. As they are communicative, they can interact with other entities to get help with their tasks and goals.
The BDI (Beliefs – Desires – Intentions) cognitive model for agents assumes that the intentions
of agents are derived from beliefs and desires, and that the behavior of the agent is clearly implied
by its intentions. The BDI model is one of the cognition models of the Mental State approach for
agent modeling. In this model, the set of beliefs represent provisional knowledge of the agent,
which can change with the passing of the time. Beliefs define what the agent knows about the
environment, what it knows about other agents, and what it knows about itself. Beliefs are specified by logical properties concerning other agents, the environment, and about the agent itself.
Agents should update their beliefs to reflect changes detected (perceived) in other agents, the environment, and themselves. They must maintain the consistency of the beliefs after this update.
Desires specify the state of affairs the agent eventually wants to bring about. One particular state
of affairs is specified by a logical property to be held in this future state and by a list of attributes
that define the admissibility criteria of the desire. The admissibility criteria attributes specify the
agent’s beliefs regarding the desire. They define, at least, the priority of the desire, the ability of the
agent to achieve the desire, and the estimated possibility of the desire to become true. We believe
that the purpose of the agent, in the cognitive model of agents that we are using, is explicitly
stated as the set of highest-priority desires of the agent.
The fact that an agent has a desire does not mean it will act to satisfy it. Acts are governed by intentions that are characterized by a choice of a state of affairs to achieve and a commitment to
make this choice (here we follow the definition of Cohen & Levesque, 1990). Intentions are related to desires by admissibility criteria attributes. The agent will choose those desires that are
possible, according to these attributes and to the agent’s current base of beliefs. It is important to
note that intentions are also beliefs of the agent. One particular intention is a commitment by the agent to reach a specific possible future, that is, the agent believes that the state of affairs
it wants to achieve does not hold now and that the agent must work to reach that state. It means
that before an agent decides what to do, it will be engaged in a reasoning process, confronting its
desires with its possibilities, defining its intentions, and then planning its actions in respect to this
intention.
In other words, an intention poses a decision problem (or a planning problem) for the agent. The
agent should solve this problem and decide the course of actions, or plan of actions, to be followed in order to achieve the intention. A plan of actions is composed of a set of actions structured by sequence, iteration, and test/choice order relations (operators). These plans do not need
to be fully specified from the beginning; they can be partial and the agent can start to follow the
plan and reassess or complete it during execution.
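The deliberation process just described, confronting desires with possibilities to select intentions, can be condensed into a few lines. The attribute names below (priority, achievable) are illustrative stand-ins for the admissibility criteria attributes.

```python
def deliberate(beliefs, desires):
    """Pick the intention: the highest-priority desire that is admissible,
    i.e., not already believed satisfied and believed achievable."""
    admissible = [d for d in desires
                  if d["goal"] not in beliefs      # state does not hold yet
                  and d["achievable"]]             # agent believes it can act
    if not admissible:
        return None
    return max(admissible, key=lambda d: d["priority"])

beliefs = {"catalog_online"}
desires = [
    {"goal": "catalog_online", "priority": 9, "achievable": True},      # holds already
    {"goal": "metadata_synced", "priority": 5, "achievable": True},
    {"goal": "all_repos_mirrored", "priority": 7, "achievable": False},
]
intention = deliberate(beliefs, desires)   # -> the "metadata_synced" desire
```

The selected desire becomes an intention, the commitment the agent then plans for; a fuller model would also re-deliberate as the base of beliefs changes.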
The interaction of the agent with its environment is done by actions and perceptions. An action is
an alteration in the external environment caused directly by the agent. From an intentional point
of view, it also represents a way to attain an end (intention). Therefore, internally, the agent
should know (believe) the basic effects produced by possible actions and what the relations of
these actions to their intentions are. Agents detect properties in the environment, or more commonly, changes in these properties, through perceptions. These changes may occur independently of the agent, or they can be caused by actions executed by the agent or by other agents, but the only way the agent has to detect them is through its perceptions. Perceptions produce updates in the base of beliefs of the agent, yet the exact update produced by a particular perception depends on
the current state of beliefs of the agent.
Agents form the basic element of computation in multi-agent systems that can be simply defined
as systems formed by several agents working together. The fundamental characteristic of the
agents in a multi-agent system is the ability that the agents have to communicate. It is through
communication with other agents that a particular agent seeks to achieve its goals. Communication here is understood as occurring at the knowledge level where it is assumed that agents exchange knowledge (or more specifically beliefs) with each other. The traditional theory of agent
communication follows the epistemological and linguistic basis provided by the Speech Act Theory of Searle, which defines an intentional semantics of communication centered on the perspective of speaker agents. Beliefs are communicated between agents by the use of communicative
acts (or illocutionary acts), which are actions, from the point of view of the speaker agent, and
perceptions, from the point of view of the hearer agent, destined only for communication purposes. There are several distinct kinds (forces) of communicative acts, depending on the intended
purpose of the acts. The most common acts are assertive acts intended to make the other agent
believe in a particular assertion, directive acts intended to make the other agent execute an action,
commissive acts intended to make the other agent carry out a commitment, and other similar acts.
To be of value for communication purposes, these acts must be represented in a particular language and this language must be the same for all agents in a multi-agent system. This language is
called the Agent Communication Language (ACL) of the system (Chaib-draa & Dignum, 2002; Labrou, Finin, & Pen, 1999). The most widely used ACL today is the FIPA-ACL language, specified
by FIPA (Foundation for Intelligent Physical Agents) (FIPA, 2002). FIPA is an international nonprofit organization aimed at developing software standards for agent-based systems. More recently, FIPA has become an IEEE working group, keeping the same goal of development of
communication standards for agents. The set of all specifications of FIPA is divided into five distinct categories: (a) Agent Communication, (b) Agent Message Transport, (c) Agent Management, (d) Abstract Architecture, and (e) Applications. The FIPA-ACL language specifications
(FIPA, 2002) and the remaining specifications defined in the Agent Communication category are
the main specifications of the FIPA model for multi-agent systems. Specifications defined in the
Agent Management category are also important for this model and used in the present work because they establish a reference model for the creation, registration, location, communication, migration, and extinguishing of agents.
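To make the message structure concrete, the sketch below writes a FIPA-ACL exchange as plain dicts. The field names follow the FIPA-ACL message structure, and query-ref and inform are actual FIPA performatives, but the agent names and content are invented for illustration.

```python
# A directive act: the Finder asks the Librarian for objects matching a title.
query = {
    "performative": "query-ref",   # kind (force) of the communicative act
    "sender": "Finder@agcat-1",
    "receiver": "Librarian@agcat-1",
    "language": "fipa-sl",         # content language
    "ontology": "obaa-metadata",   # vocabulary the content relies on
    "content": '((any ?lo (title ?lo "Fractions")))',
}

def reply_to(msg, performative, content):
    """Build the answering act, swapping the conversation roles."""
    return {**msg, "performative": performative, "content": content,
            "sender": msg["receiver"], "receiver": msg["sender"]}

# An assertive act: the Librarian informs the Finder of a matching object.
answer = reply_to(query, "inform", "(lo-123)")
```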
The JADE agent platform (Bellifemine, Caire, & Greenwood, 2007) is a software development
environment based on FIPA standards that simplifies the development of these systems, handling
several issues related to the communication and the life cycle of agents, as well as helping in debugging activities like the execution monitoring of agents. JADE contributed to the dissemination
of the use of FIPA specifications due to the fact that it provides a set of software abstractions and
software tools that grant developers the ability to apply FIPA specifications without a deep technical knowledge of these specifications. The JADE platform offers the following set of graphical
tools to aid programmers in debugging and monitoring agents: RMA (Remote Management
Agent), Dummy agent, Sniffer agent, Introspector agent, the DF (Directory Facility) agent, and
LogManager agent.
The FIPA multi-agent systems management model is implemented by two JADE services: the
AMS (Agent Management System), which supervises the general operation of the platform, and
the DF, responsible for the directory service (yellow pages) of the platform. Both services are
implemented as agents of the JADE platform. The DF agent registers any agent wishing to offer its services in the platform. It also allows agents to search for other agents and their services in the system. Additionally, the DF accepts requests from agents that want to be notified whenever a service registration or modification is made. Multiple DF agents can be launched simultaneously in order to distribute the yellow pages service over various domains. If necessary, DF
agents located in different JADE platforms can be integrated into a federation (can be federated)
of DF agents, allowing the spread of some agent requests to the whole federation of DF agents.
AgCAT Architecture
The AgCAT catalog service is part of a general infrastructure for agents known as MILOS
(Multi-agent Infrastructure for Learning Object Support) (Viccari et al., 2010) that is being designed by our research group in order to provide various types of services to support the life-cycle
of OBAA-compatible LO, including localization, authoring, use, management, content adaptation,
and conversion for different devices. The main goal of MILOS is to support all requirements and
functionalities specified in the OBAA metadata proposal. The architecture of MILOS is divided
into three main layers of abstraction (see Figure 2):
• Ontology Layer: this layer is responsible for the specification of knowledge that will be shared among all agents involved in the infrastructure.
• Agent Layer: this layer implements, through a set of multi-agent systems, the several operations related to the LO life-cycle.
• Interface Facility Layer: this layer implements the communication services necessary for MILOS agents to inter-operate with web servers, virtual learning environments, LO repositories, databases, directory services, and other types of educational legacy applications.
[Figure 2 depicts the three layers of the MILOS infrastructure: the Ontology layer (the OBAA metadata OWL ontology plus ontologies for learning domains and for educational applications, accessed through the JENA framework and an OWL-API interface); the Agent layer (authoring, pedagogical support, search, and management agents running over the FIPA/JADE middleware); and the Interface Facilities layer (connections to VLEs, LO repositories, LDAP, SQL, and the Web).]
Figure 2: MILOS Infrastructure
The Ontology Layer, besides the OBAA metadata ontology, contains the ontologies for the learning domains and educational applications supported by MILOS.
The Agent Layer implements the support for activities of authoring, adaptation, management, publishing, localization, and use of LO compatible with OBAA. These activities were distributed
in four large multi-agent systems:
• Search System: supports searching for LO.
• Pedagogical Support System: supports the pedagogical use of LO in educational contexts.
• Authoring System: supports LO authoring activities, including aid for platform adaptation.
• Management System: supports the storing, managing, publishing, and distribution of LO in distinct platforms.
The Interface Facilities Layer provides the interface facilities that will allow non-MILOS learning
environments and educational applications to gain access to MILOS agents and also permit
MILOS agents to have access to LO repositories, directory services, databases, and Web servers.
The AgCAT system contributes to the MILOS infrastructure in the context of the Search System,
being a prototype of this system. AgCAT agents will be responsible for obtaining, cataloging, and
searching LO metadata for MILOS agents and users. The architecture of the AgCAT system is
formed by three types of software agents (see Figure 3):
• Finder agent: this agent provides the search service to AgCAT users and other MILOS agents.
Barcelos, Gluz, & Vicari
• Librarian agent: this agent obtains metadata from LO repositories and stores these metadata in the local catalog database.
• InterLibrarian agent: this agent is responsible for establishing the federation of LO catalogs.
[Figure 3 depicts the AgCAT architecture: the Finder, Librarian, and InterLibrarian agents, the local catalog, the OBAA metadata ontology, an LO repository, and the connections from the InterLibrarian agent to federated AgCAT systems and from the Finder agent to other MILOS agents.]
Figure 3: AgCAT Architecture
Search of Learning Objects
The Finder agent provides the search service to AgCAT users, allowing them to look for educational content in distinct repositories. By means of a Web interface, the Finder agent allows the retrieval of objects registered by the Librarian agent or by other catalogs in the federation. Besides its Web interface, the Finder agent also provides a FIPA-ACL (FIPA, 2002) interface, answering FIPA query messages from other agents (Figure 4).
The search mechanism implemented by the Finder agent permits objects to be found through logical expressions whose basic predicates compare the values of metadata elements using relational operators. These expressions allow the conjunction (AND), disjunction (OR), and negation (NOT) of the basic predicates. The mechanism translates these expressions into equivalent SPARQL queries, which are used to retrieve the appropriate metadata from the local database or from other catalogs federated with the current AgCAT system. A user-friendly version of these logical expressions is provided by the Web interface of the agent. This interface also provides a basic search page where a user can seek out an object by specifying some of its properties. Other agents can also search for Learning Objects by using FIPA-ACL query messages. The content language of these queries can be direct SPARQL or the user-friendly version adopted in the Web interface.
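The translation from the Finder's logical expressions into SPARQL can be sketched as follows; the class name, the method names, and the obaa: prefix URI are illustrative assumptions, not the actual Finder implementation.

```java
import java.util.List;

// Illustrative sketch: translating a conjunction of basic metadata
// predicates into a SPARQL query, as the Finder agent is described
// to do. All names and the obaa: prefix URI are assumptions.
public class QueryTranslator {

    // A basic predicate compares a metadata element with a value.
    public record Predicate(String element, String op, String value) {}

    // Build a SPARQL query whose FILTER is the AND of all predicates.
    public static String toSparql(List<Predicate> conjunction) {
        StringBuilder filter = new StringBuilder();
        StringBuilder patterns = new StringBuilder();
        for (int i = 0; i < conjunction.size(); i++) {
            Predicate p = conjunction.get(i);
            String var = "?v" + i;
            patterns.append("  ?lo obaa:").append(p.element())
                    .append(" ").append(var).append(" .\n");
            if (i > 0) filter.append(" && ");
            filter.append(var).append(" ").append(p.op())
                  .append(" \"").append(p.value()).append("\"");
        }
        return "PREFIX obaa: <http://example.org/obaa#>\n"
             + "SELECT ?lo WHERE {\n" + patterns
             + "  FILTER (" + filter + ")\n}";
    }

    public static void main(String[] args) {
        System.out.println(toSparql(List.of(
            new Predicate("title", "=", "eletrodinamica"),
            new Predicate("language", "=", "pt-BR"))));
    }
}
```

Disjunction and negation would add OR (`||`) and NOT (`!`) to the FILTER expression in the same way.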
[Figure 4 depicts the Finder agent interfaces: the agent runs over the FIPA/JADE middleware, reads the local catalog, answers other agents through the FIPA-Query protocol, and supports manual search through its Web interface.]
Figure 4: Finder Agent Interface.
Federated Search
The third agent used in the AgCAT system is the InterLibrarian agent, which implements the
federation of LO catalogs. This agent interacts with other InterLibrarian agents situated in remote
FIPA platforms. The agent is configured with a list of federated InterLibrarian agents. A federated InterLibrarian agent is a recognized agent that does not belong to the current FIPA platform.
The InterLibrarian agent in the local platform will accept a propagated query from federated agents, checking with the local Finder agent whether there are objects that satisfy the query. Independently of the result of the local query, the InterLibrarian will also propagate the query to the other federated InterLibrarian agents. The local InterLibrarian controls the destination and origin of queries, redirecting each response to the appropriate querying agent. The local Finder agent can also use the service of the InterLibrarian, asking it to propagate a query to the other federated agents.
The InterLibrarian agents from various FIPA platforms can be integrated into a federation, making it possible to propagate a search from one AgCAT system to the other systems throughout the federation. The federation provides a single distributed yellow-pages service for Learning Objects. Figure 5 shows an example of a federation of directories covering some LO repositories.
Queries passed on to other agents in the federation are enclosed in FIPA propagate messages, along with auxiliary parameters. The x-search-id parameter is mandatory and uniquely identifies the query during the entire search process. Other parameters permit some fine-tuning of the search process, including the maximum number of responses returned by the search (x-max-results) and the maximum depth of the tree of federated agents formed during the search (x-max-depth). Whenever an agent receives a request to perform a search, it allows the search to be propagated to the other agents of the federation only if x-max-depth is greater than 1 and it has not already received a query with the same x-search-id. When the search is propagated to other agents, the value of the x-search-id parameter is left unchanged and the value of x-max-depth is decreased by one.
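The propagation rule above can be sketched in Java; the class and method names are assumptions for illustration, but the decision logic follows the text: forward only when x-max-depth is greater than 1 and the x-search-id has not been seen before, decrementing the depth on each hop.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the InterLibrarian propagation rule.
// Class and method names are assumptions, not the actual agent code.
public class PropagationRule {
    private final Set<String> seenSearchIds = new HashSet<>();

    // Returns the x-max-depth to forward with, or -1 if the query
    // must not be propagated further.
    public int shouldPropagate(String searchId, int maxDepth) {
        boolean firstTime = seenSearchIds.add(searchId); // false if already seen
        if (!firstTime || maxDepth <= 1) return -1;
        return maxDepth - 1; // x-search-id unchanged, depth decremented
    }

    public static void main(String[] args) {
        PropagationRule rule = new PropagationRule();
        System.out.println(rule.shouldPropagate("q-1", 3)); // 2
        System.out.println(rule.shouldPropagate("q-1", 3)); // -1 (duplicate id)
        System.out.println(rule.shouldPropagate("q-2", 1)); // -1 (depth exhausted)
    }
}
```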
[Figure 5 depicts an AgCAT federation covering the BIOE, CESTA, and LUME repositories: each FIPA/JADE platform hosts Librarian and Finder agents, and the InterLibrarian agents exchange FIPA propagate messages over HTTP-MTP across the Internet.]
Figure 5: AgCAT Federation Example.
To transport FIPA-ACL messages across distinct FIPA platforms, as in the case of a federation, it is necessary to use an MTP (Message Transfer Protocol). The current version of the JADE platform provides support for two MTPs: HTTP (Hypertext Transfer Protocol) and IIOP (Internet Inter-ORB Protocol). HTTP has become the standard MTP for cross-platform communication, replacing IIOP (Grimshaw, 2009). By default, the HTTP-MTP is activated only on the main container, and no MTP is activated on the other containers. It creates a server socket in the main container, which waits for connections via HTTP. Communication between agents in the same platform makes use of RMI (Remote Method Invocation).
Metadata Storing and Harvesting
The main function of the Librarian agent is to keep a local database of LO metadata, forming the local catalog of metadata. The Librarian agent is able to obtain metadata from LO repositories, digital libraries, and other metadata catalog services. To do so, this agent maintains a list of the repositories under its responsibility, as well as configuration information about the protocol that must be used to extract metadata from each repository. It also provides a Web interface to its users, allowing the manual cataloging of LO metadata. Through this interface, Librarian agent users can register, edit, or delete any LO metadata stored in the local catalog database. The Librarian agent periodically checks whether there is new or updated metadata in the repositories; any new or updated metadata is passed on to the local catalog database. To be fully conformant with the OBAA ontology, the local database will be implemented as a database of RDF triples, compliant with the RDF representation of OWL objects. SPARQL will be the preferred query interface to this database, permitting arbitrary logical queries on the metadata stored in the database. Figure 6 shows the main interfaces of this agent, including the user interface.
[Figure 6 depicts the Librarian agent interfaces: the agent runs over the FIPA/JADE middleware, feeds the local catalog, harvests LO repositories via OAI-PMH, accesses a digital library via Z39.50, and supports manual registration through its Web interface.]
Figure 6: Librarian Agent Interfaces
To acquire metadata from LO repositories, it is necessary to use some kind of metadata harvesting protocol. There are several options for metadata harvesting, ranging from generic protocols based on directory services, like LDAP (Lightweight Directory Access Protocol), or database technologies, like SQL queries over ODBC connections, through older protocols like ANSI/NISO Z39.50, suitable for accessing the information of digital libraries, to standard harvesting protocols like OAI-PMH (Open Archives Initiative Protocol for Metadata Harvesting) (OAI, 2009), specifically designed for the retrieval of metadata. Initially, the Librarian agent will support the OAI-PMH and LDAP protocols. In the future, we also intend to add functionality letting the Librarian agent access other types of catalogs of digital content, such as library catalogs that can be accessed through the ANSI/NISO Z39.50 protocol.
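A minimal sketch of the repository list the Librarian agent is described to maintain, assuming a simple per-repository protocol configuration; the class, enum, and method names are illustrative, not part of the actual agent.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: each repository under the Librarian's
// responsibility is configured with the protocol used to harvest
// its metadata. All names here are assumptions.
public class HarvestConfig {
    public enum Protocol { OAI_PMH, LDAP, Z39_50 }

    private final Map<String, Protocol> repositories = new LinkedHashMap<>();

    public void register(String repositoryUrl, Protocol protocol) {
        repositories.put(repositoryUrl, protocol);
    }

    // Dispatch harvesting according to the configured protocol.
    public String harvest(String repositoryUrl) {
        Protocol p = repositories.get(repositoryUrl);
        if (p == null) return "unknown repository";
        return switch (p) {
            case OAI_PMH -> "harvest via OAI-PMH: " + repositoryUrl;
            case LDAP    -> "harvest via LDAP: " + repositoryUrl;
            case Z39_50  -> "harvest via Z39.50 (planned): " + repositoryUrl;
        };
    }

    public static void main(String[] args) {
        HarvestConfig cfg = new HarvestConfig();
        cfg.register("http://objetoseducacionais2.mec.gov.br", Protocol.OAI_PMH);
        System.out.println(cfg.harvest("http://objetoseducacionais2.mec.gov.br"));
    }
}
```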
The OAI-PMH is a protocol designed to provide interoperability between digital repositories, defining how metadata can be acquired from them. Its main function is to facilitate the sharing of the metadata existing in repositories that support the protocol. The OAI standard (OAI, 2009) defines two basic entities: the data provider and the service provider. The data provider's function is to search for metadata in databases and make it available to service providers through the OAI-PMH protocol. Service providers, in turn, harvest that metadata for their users (OAI, 2009). In this context, the term harvesting refers to the search for metadata objects in repositories.
According to these definitions, the Librarian agent is a service provider using OAI-PMH to harvest metadata from LO repositories that support this protocol. OAI-PMH requests are sent by service providers to data providers in order to harvest the metadata under the responsibility of a data provider. The response to an HTTP request sent by the harvester consists of an XML document containing the metadata available in the searched repository. The OAI-PMH defines six verbs used to specify the conditions of a query, which are represented as parameters in an HTTP request. The request message is formed by the repository URL combined with the request verb.
Below is an example of a query, using the OAI-PMH, to the BIOE repository of the Brazilian Ministry of Education:
http://objetoseducacionais2.mec.gov.br/oai/request?verb=ListRecords&metadataPrefix=oai_dc
Here the part before the question mark is the repository URL, ListRecords is the verb, and metadataPrefix=oai_dc is an argument. The verb ListRecords requests that all records available in the repository be retrieved, and the metadataPrefix argument specifies that only Dublin Core metadata must be returned.
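The request format above can be sketched as a small URL builder; the helper class and method names are assumptions, but the output reproduces the ListRecords example from the text.

```java
// Illustrative sketch: composing an OAI-PMH request URL from a base
// repository URL, a verb, and arguments. The helper name is an
// assumption for illustration.
public class OaiRequest {
    public static String build(String baseUrl, String verb, String... args) {
        StringBuilder url = new StringBuilder(baseUrl)
            .append("?verb=").append(verb);
        for (String a : args) url.append("&").append(a);
        return url.toString();
    }

    public static void main(String[] args) {
        // Reproduces the ListRecords example from the text.
        System.out.println(build(
            "http://objetoseducacionais2.mec.gov.br/oai/request",
            "ListRecords", "metadataPrefix=oai_dc"));
    }
}
```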
The AgCAT Prototype
The AgCAT prototype was developed in Java with the help of the JADE framework. Only the Librarian and the Finder agents were developed for the first version of the prototype. The user interface of these agents was restricted to a local (non-web) graphical interface. This interface makes it possible to configure the agents, perform the manual registration of LOs in the local database, or search for LOs by the values of their metadata. The functionality of the InterLibrarian agent was mapped onto the Directory Facilitator (DF) service of FIPA. The local metadata database was stored in the DF database, allowing access not only to metadata in the local DF, but also to metadata from remote DFs through the federation of DFs. The standard FIPA DF service already supports a kind of distributed yellow pages, formed by the federation of DF services from several FIPA platforms. A search for a particular agent (or a particular object) is distributed over all DF services in the federation. This federation of DF services is similar to the federation of InterLibrarian agents in AgCAT systems, granting a fast proof-of-concept implementation of the federation.
Development of the Prototype
The development of the prototype started with the creation of a simple software agent able to register its services in a FIPA platform and its DF service. The development of the Librarian agent
was incremental, based on the addition of functionality to the initial agent. First, its
user interface was added, which allowed the registration of LO metadata directly to the DF service. Through this interface, it was also possible to delete previously registered objects and to edit
these objects. Finally, functionality was implemented to harvest metadata information from LO
repositories using the OAI-PMH protocol. The Finder agent was implemented soon after the Librarian agent became operational. This agent basically performs a search in the DF service for
LO entries that satisfy the specified items in the search interface.
This type of solution offered some technical advantages, mostly because it makes use of a standardized service that already supports the concept of a federation of directories. But for this solution to work, it is necessary to map the concept of an LO onto something that can be registered in a FIPA DF. The FIPA DF service only registers information about agents and their associated services. According to the IEEE-LOM definition, an LO is not necessarily an agent or a service, but a digital (or non-digital) entity that can be used for educational purposes. Thus it was necessary to find out how to map LO metadata elements onto structures that can be registered in the DF service. Interpreting an LO as an agent, even though possible, is a technically complex solution to this problem, because it implies defining and implementing unnecessary agent functionality in LOs. Thus, for cataloging purposes, it was assumed that an LO could be interpreted as a service offered by a given agent if the following guidelines are followed: 1) the type of service indicates that this is an LO entry and not a common agent service, 2) the name of the service provides a unique identifier for the LO, and 3) the other information about the service (its properties) details the metadata of the LO. A given LO is thus represented as a “service” offered by an agent. The type of service is fixed and identified by the label lo-md-entry, indicating that what is being offered by the service is actually an LO entry composed of the metadata records represented as the properties of the service. The service name corresponds to a unique identifier for the LO within the agent associated with the service.
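A minimal sketch of this mapping, using a plain data structure instead of JADE's actual DF descriptions; the class name and the metadata element names are illustrative, while the lo-md-entry label and the service-name-as-identifier convention come from the text.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: an LO represented as an agent "service" whose
// type is the fixed label lo-md-entry, whose name is a unique LO
// identifier, and whose properties carry the metadata elements.
// A real implementation would use JADE's DFAgentDescription; this
// plain class is an assumption for illustration.
public class LoServiceEntry {
    public static final String SERVICE_TYPE = "lo-md-entry";

    public final String loId;                         // service name = unique LO id
    public final Map<String, String> metadata = new LinkedHashMap<>();

    public LoServiceEntry(String loId) { this.loId = loId; }

    public LoServiceEntry with(String element, String value) {
        metadata.put(element, value);                 // one service property per element
        return this;
    }

    public static void main(String[] args) {
        LoServiceEntry entry = new LoServiceEntry("cesta-0042")
            .with("general.title", "Eletrodinâmica")
            .with("general.language", "pt-BR");
        System.out.println(SERVICE_TYPE + " / " + entry.loId + " / " + entry.metadata);
    }
}
```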
Using this technique, all metadata elements from OBAA were mapped onto agent services. In order to address the various metadata standards, the compatibility metadata profiles defined in OBAA were used, which allow mapping all IEEE-LOM and Dublin Core metadata onto OBAA metadata.
First Results
The initial tests of the AgCAT system were focused on the validation of the system's search features and on the verification of the robustness of the FIPA DF service when used as a catalog of LOs. Concerning the search tests, the Librarian agent imported metadata from the CESTA repository at UFRGS, which contained 400 Learning Objects at the time the tests were executed. Figure 7 illustrates the main user interface of the Finder agent.
Figure 7: User Interface of the Finder Agent
An example search for a specific subject is shown below. In this case, the basic search features of the Finder agent are used to search for a particular word held in the title, description, or keywords (subject) metadata of the object. The search item specified in the example (see Figure 7) is the word “eletrodinâmica” (electrodynamics). Figure 8 shows the result of this particular query.
Figure 8: Search Results Obtained for the Example Query
In order to verify the robustness of the FIPA DF service, a series of tests was performed in which a fixed number of LOs was registered in the service. The time required for each registration was recorded in every test and compared across tests. The JADE implementation of the FIPA DF service was tested with 500, 600, 700, 800, 900, 1000, 1500, and 2000 LOs. These objects were stored in files that could be read by the Librarian agent. In each of these tests, the Librarian agent imported the metadata from one of these files and registered it in the DF service, recording the time needed to complete the registration. Figure 9 shows the total time necessary to register the metadata.
These tests showed, at least for registration purposes, that the JADE implementation of the FIPA DF service is able to store the metadata of a reasonable number of Learning Objects, and that the time needed for the registration process grows linearly. However, further tests are needed to check whether this linearity is maintained in search and update processes as well.
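The test procedure can be sketched as a simple timing harness; the in-memory registry standing in for the FIPA DF service, and all names, are assumptions of this sketch.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the robustness test described in the text:
// register N learning-object entries and record the total time, for
// increasing N, so that the growth of registration time can be
// inspected for linearity. The in-memory list stands in for the DF.
public class RegistrationBenchmark {
    public static long timeRegistrations(int n) {
        List<String> registry = new ArrayList<>();   // stand-in for the DF service
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) registry.add("lo-" + i);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        for (int n : new int[] {500, 1000, 1500, 2000}) {
            System.out.println(n + " LOs: " + timeRegistrations(n) + " ns");
        }
    }
}
```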
[Figure 9 plots the total registration time in milliseconds (scale 0-3000 ms) against the number of registered LOs: 500, 600, 700, 800, 900, 1000, 1500, and 2000.]
Figure 9: DF Service Registration Time Tests
Conclusion
The AgCAT system forms the core of the MILOS multi-agent system responsible for searching repositories of learning objects. Consequently, it provides basic and advanced search facilities to the other agents of MILOS as well as to its users.
The initial prototype of AgCAT demonstrated that it is possible to create a metadata searching
service based entirely on agent technology. This was an important result, because it demonstrated
the feasibility of the strategy adopted to create an agent-based federated catalog of learning objects.
However, several important features of the AgCAT system were not supported by the prototype.
As a consequence, the prototype is continually being developed to support all requirements of this
system.
The most important features currently being developed are the full support of federated catalogs based on the InterLibrarian agent, the parsing of SPARQL queries by the Finder agent, and the incorporation, in the Librarian agent, of the ability to harvest metadata through customizable LDAP requests or SQL queries. In particular, the need for an InterLibrarian agent in the AgCAT system was a direct consequence of the inability of the FIPA DF service to catalog anything other than agents and services; the InterLibrarian agent will overcome this limitation of FIPA platforms.
References
ADL. (2001). Sharable Content Object Reference Model (SCORM) Version 1.2: The SCORM Overview.
Alexandria: ADLnet. Available at http://www.adlnet.org
Bellifemine, F., Caire, G., & Greenwood, D. (2007). Developing multi-agent systems with JADE. John
Wiley & Sons.
BIOE. (2010). BIOE – Banco Internacional de Objetos Educacionais. Retrieved August, 2010, from
http://objetoseducacionais2.mec.gov.br/
Chaib-draa, B., & Dignum, F. (2002). Trends in agent communication language. Computational Intelligence, 2(5), 1-14.
Cohen, P., & Levesque, H. (1990). Intention is choice with commitment. Artificial Intelligence, 42, 213-261.
CORDRA Management Group. (2009). An introduction to CORDRA - Content Object Repository Discovery and Registration/Resolution Architecture. Retrieved August, 2009, from
http://cordra.net/introduction/
FIPA. (2002). FIPA Communicative Act Library Specification, Std. SC00037J. Retrieved July, 2009, from
http://www.fipa.org/specs/fipa00037/
Grimshaw, D. (2009). Tutorial 4: Using the HTTP MTP for inter-platform communication. Retrieved June,
2009, from http://jade.tilab.com/doc/tutorials/JADEAdmin/HttpMtpTutorial.html
IEEE. (2002). Standard for learning object metadata - IEEE 1484.12.1-2002. Learning Technology Standards Committee of the IEEE.
IMS. (2009). Instructional management systems - Global learning consortium. Retrieved July, 2009, from
http://www.imsglobal.org/
Labrou, Y., Finin, T., & Pen, Y. (1999). Agent communication languages: The current landscape. IEEE
Intelligent Systems, March-April, 45-52.
OAI. (2009). Open Archives Initiative - Standards for web content interoperability. Retrieved August,
2009, from http://www.openarchives.org.
RIVED (2009). RIVED - Rede Interativa Virtual de Educação. Retrieved August, 2009, from
http://rived.mec.gov.br/site_objeto_lis.php
Taylor, C. (2003). An introduction to metadata. Queensland University, Australia. Retrieved July, 2009,
from http://www.library.uq.edu.au/iad/ctmeta4.html
Viccari, R., Gluz, J. C., Passerino, L. M., Santos, E., Primo, T., Rossi, L., . . . Roesler, V. (2010) The
OBAA proposal for learning objects supported by agents. Proceedings of MASEIE Workshop –
AAMAS 2010. Toronto, Canada.
Weiss, G. (Ed.). (1999). Multiagent systems: A modern approach to distributed artificial intelligence. The MIT Press.
Wooldridge, M. (2002). An introduction to multiagent systems. John Wiley & Sons.
Biographies
Carla Fillman Barcelos
Computer Engineering at Vale do Rio dos Sinos University (UNISINOS), Brazil
João Carlos Gluz
PhD in Computer Science from the Federal University of Rio Grande do Sul, Brazil. Associate Professor at Vale do Rio dos Sinos University (UNISINOS), Brazil.
Rosa Maria Vicari
PhD in Computer Science at University of Coimbra.
Associate Professor at Federal University of Rio Grande do Sul
(UFRGS), Brazil.
Interdisciplinary Journal of E-Learning and Learning Objects
Volume 7, 2011
An Approach toward a Software Factory
for the Development of Educational Materials
under the Paradigm of WBE
Rubén Peredo Valderrama
Superior School of Computer Sciences of National Polytechnic
Institute, Mexico City, Mexico
[email protected]
Alejandro Canales Cruz
National Autonomous University of Mexico, Mexico City, Mexico
[email protected]
Iván Peredo Valderrama
Superior School of Computer Sciences of National Polytechnic
Institute, Mexico City, Mexico
Abstract
The Software Factory described in this paper is an organizational structure specialized in the production of educational materials based on software components, according to educational specifications and requirements defined externally by the end users (learners, tutors, universities, instructional designers, etc.). The Software Factory applies manufacturing techniques and principles of Domain Engineering to software development, to mimic the benefits of traditional manufacturing. The software components obtained are called Intelligent Reusable Learning - Components Object Oriented (IRLCOO), a special type of Sharable Content Object (SCO) according to the Sharable Content Object Reference Model (SCORM). These software components consume Web Services (WS) to produce reusable and interoperable learning content.
Keywords: Software Factory, WBE, IRLCOO, SCORM.
Introduction
Material published as part of this publication, either on-line or in print, is copyrighted by the Informing Science Institute. Permission to make digital or paper copy of part or all of these works for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage AND that copies 1) bear this notice in full and 2) give the full citation on the first page. It is permissible to abstract these works so long as credit is given. To copy in all other cases or to republish or to post on a server or to redistribute to lists requires specific permission and payment of a fee. Contact [email protected] to request redistribution permission.

Editor: Janice Whatley

The Industrial Revolution created high expectations in all aspects of society; many industries experienced a dramatic transformation in their methods and tools. Possibly, the automotive industry is the most notable example: cars are end products, assembled from components supplied by a large number of suppliers, with highly specialized methods and tools. But, curiously, software engineering seems not to have made use of the latest technological advances in software production. Programmers have tools, such as computers, Integrated Development Environments (IDE), libraries, and frameworks, for doing
crafted work. But an industry based on craftsmanship is restricted by its methods and tools, and
the means of production are overrun. Software is a generic meta-product that can be used to create entire families of instances of similar software (Szyperski, 1998). The software industry confronts a global demand for high-level meta-products and needs an innovative technology platform to face this growing demand. The critical innovation lies in resolving two chronic problems: complexity and change. Four areas address these two problems: systematic reuse, development by assembly, model-driven development, and process frameworks (Greenfield, Short, Cook, Kent, & Crupi, 2004). The term Software Factory has been used to describe large commercial efforts to automate software development along similar lines. There are different definitions of the term; the following has been chosen as a working definition: “A software factory is a software product line that configures extensible tools, processes, and content using a software factory template based on a software factory schema to automate the development and maintenance of variants of an archetypical product by adapting, assembling, and configuring framework-based components” (Greenfield et al., 2004, p. 163). The Software Factory is a class of software production lines used to build production lines for concrete products. The main objective is to produce custom applications from a set of reusable components rather than starting from scratch.
Background
In traditional education, students can interact directly with the teacher. Unfortunately, the costs of traditional education have been increasing and the management of learning resources is not very flexible; another disadvantage of this paradigm is that students cannot advance at their own pace of learning. Virtual education is seen as a good answer to the growth of the population and of information, prompting an intensive search for technological and pedagogical solutions in the teaching/learning process. The urgent global demand for reusable, practical, high-quality learning resources involves learning technologies in the processes of delivery and innovation, making these topics a high priority in the development of Web-Based Education (WBE) systems. Research in WBE systems is centered on the accessibility, durability, interoperability, and reusability of didactic materials and virtual education environments. The main initiatives are the Open Knowledge Initiative (OKI) of MIT (http://www.okiproject.org/), the Advanced Distributed Learning (ADL) initiative (http://www.adlnet.gov/Pages/Default.aspx), and the Global IMS Learning Consortium (http://www.imsproject.org). ADL is the leading initiative in terms of standards and tools.
The Semantic Web (SW) comprises techniques that promise to radically improve the current Web, which is primarily designed for human beings and not for computers. SW technologies propose a new vision for managing and processing information by computer; the basic principle is the creation and use of semantic metadata. The core of the SW is the use of ontologies, defined as follows: “Ontology is an explicit and formal specification of a conceptualisation of a domain of interest” (Gruber, 1993, p. 1).
The IEEE 1484 LTSA specification is a high-level architecture for information technology-supported learning, education, and training systems that describes the high-level system design and the components of these systems (Learning Technology Standards Committee, 2001).
In general, this standard is accepted in the WBE community, and the purpose of this paper is to show an Agents and Components Oriented Architecture (ACOA) for WBE systems development based on the IEEE 1484 LTSA specification, using ACOA components in the production line to assemble learning resources. This idea arises as an answer to a group of needs and limitations that current educational systems have not been able to satisfy in an appropriate form. Such is the case of authoring and evaluation systems that have tried to compensate for professors' technical deficiencies during the creation and publication of courses inside a WBE system, but in most cases these systems do not fulfill the current demands on educational software. On the technological side these demands concern quality, accessibility, flexibility, reuse, adaptability, interoperability, reduced delivery times, and high cost; on the pedagogical side, there is a lack of pedagogical models in course (learning content) development that support the learner inside a WBE environment.
The Intelligent Reusable Learning - Components Object Oriented (IRLCOO) are part of ACOA, which is based on the IEEE 1484 LTSA specification (Learning Technology Standards Committee, 2001) and uses open standards such as XML (World Wide Web Consortium, 2006) and the Resource Description Framework (RDF) (World Wide Web Consortium, 2010), ensuring that learning content is interoperable with various learning management technologies, mainly those based on ADL and SCORM (http://www.adlnet.gov/Pages/Default.aspx).
The WBE paradigm has grown in recent years due to the increase in the number of students and of available learning resources suitable for a wide range of personal needs, backgrounds, expectations, skills, and levels. Therefore, the delivery process is very important, because it produces learning content and presents it to the learner in multimedia format. Nowadays, approaches to this process focus on new paradigms to produce and deliver quality content for online learning experiences, trying to develop, revise, and upgrade the learning content in an efficient way. The work described in this paper is based on a special type of labeled material called IRLCOO, developed by Peredo, Balladares, and Sheremetov (2005). The IRLCOO represent a kind of learning content characterized by rich multimedia, high interactivity, and intense feedback, supported by a standard interface and functionality.
ACOA proposes enhancements in layers 1, 3, and 5 of the IEEE 1484 LTSA specification. The paper is organized as follows: the next section introduces the IEEE 1484 LTSA specification layers; the third section describes ACOA and shows the software development pattern based on IRLCOO (layer 3); the fourth section describes SiDeC 2.0, the Evaluation System 2.0, and VoiceXML components based on our architecture; the fifth section depicts the Semantic Web platform; the sixth section depicts the Software Factory for the development of educational materials. Finally, conclusions are presented.
IEEE 1484 LTSA
The Learning Technology Standards Committee (2001) of the IEEE Computer Society has proposed the IEEE 1484 LTSA specification. This Standard is composed of five layers:
1. Learner and Environment Interactions: concerns the learner's acquisition, transfer, exchange, formulation, discovery, etc., of knowledge and/or information through interaction with the environment.
2. Learner-Related Design Features: concerns the effect learners have on the design of learning technology systems.
3. System Components: describes the component-based architecture, as identified in human-centered and pervasive features.
4. Implementation Perspectives and Priorities: describes learning technology systems from a variety of perspectives, by reference to subsets of the system components layer.
5. Operational Components and Interoperability (codings, APIs, protocols): describes the generic "plug-n-play" (interoperable) components and interfaces of an information technology-based learning technology architecture, as identified in the stakeholder perspectives.
Agents and Components Oriented Architecture
C. Szyperski (1998) defines a software component as “a unit of composition with contractually
specified interfaces and explicit context dependencies. A software component can be deployed
independently and is subject to composition by third parties" (p. 41). Components are widely seen by software engineers as a key technology for addressing the so-called "software crisis". The Software Industrial Revolution is based upon Component-Based Software Engineering (CBSE).
The reasons that explain the relevance of Component-Oriented Programming (COP) are the high level of abstraction offered by this paradigm and the current trends for authoring reusable component libraries, which support the development of applications for different domains. Additionally, according to Wang and Qian (2005), COP pursues three major goals: conquering complexity, managing change, and reusability. Components are used as composition units that are adapted, assembled, and configured, and the tools developed are used to create new components or to update existing ones.
The enhanced ACOA is based on layer 3 of the IEEE 1484 LTSA specification. This architecture is presented in Figure 1 and consists of four processes (learner entity, evaluation, coach, and delivery), two stores (learner records and learning resources), and fourteen information workflows.
The coach process has been divided into two subprocesses: coach and virtual coach. This was
done because we considered that this process has to adapt to the learner’s individual needs in a
responsive way during the learning process. A number of decisions about sequencing, activities, examples, etc., can be made manually by the coach, while in other cases these decisions can be made automatically by the virtual coach (Canales, Peña, Peredo, Sossa, & Gutiérrez, 2007).
IRLCOO platform
Flash is a multimedia platform with a powerful programming language called ActionScript (AS) 3.0 (Learn ActionScript, 2009). This language is completely object-oriented and enables the design of client components that deliver multimedia content. At Run-Time it loads multimedia components and offers a programmable environment that adapts to the student's needs. The components use different levels inside the virtual machine of the Flash Player (FP). The IRLCOO are tailored to the learner's needs in a number of ways. The latest versions of the components were developed with Flex and AS 3.0. Flex was originally released as a J2EE JSP tag library that compiles a tag-based language called MXML and an object-oriented language called AS 3.0 straight into Flash applications, creating binary SWF applications on the server side; the Flex compiler is still a J2EE application, but it now compiles applications for the client. A Flex application uses prebuilt components, custom components, a rich class library, MXML, and AS 3.0. Flex is a cross-platform framework for the development and deployment of Rich Internet Applications (RIA) based on the Adobe Flash platform.
The IRLCOO are based on the composition pattern in order to build complex systems that are constituted of several smaller components. This allows developers to manage the components through a common interface. The IRLCOO platform possesses common prebuilt communication based on the following Application Programming Interfaces (APIs) developed for the Learning Management System (LMS): the Multi-Agent System (MAS) and different frameworks such as AJAX (Grane, Pascarello, & James, 2005), Hibernate (Peak & Heudecker, 2005), Struts (Holmes, 2006), etc., together with dynamic load at Run-Time.
Valderrama, Cruz, & Valderrama
Figure 1. Components Oriented Architecture.
IRLCOO and Web Services
The WebService component gives access to the operations of SOAP-compliant Web Services
from the IRLCOO. The component enables access to remote methods offered by an LMS through the Simple Object Access Protocol (SOAP). This gives a WS the ability to accept parameters and return a result to the script from the IRLCOO. The components discover and invoke WS using SOAP and Universal Description, Discovery, and Integration (UDDI) via middleware and a JUDDI server. Placing a middleware layer between a consuming client and a service provider dramatically increases the options for writing more dynamic clients, reducing the need for dependencies inside the clients. The following code shows a request for the "Notes" from the learning content to the middleware. The WebService prebuilt component is used to invoke the Web Services based on the industry-standard SOAP message (World Wide Web Consortium, n.d.). Finally, the ADL-SCORM communication API consists of a collection of standard methods to communicate the client with the LMS (http://www.adlnet.gov/Pages/Default.aspx); the standard methods call the Communication Adapter based on JavaScript, and the IRLCOO use the ExternalInterface class to call the Communication Adapter. The answer (a URL) is used to call the WS from the IRLCOO using the WebService prebuilt component:
// Calling WS from a client IRLCOO with Flex
<mx:WebService id="WebService"
    wsdl="http://148.204.45.65:8080/juddi/inquiry?wsdl">
    <mx:operation name="Notes"
        resultFormat="object"
        result="Notes_result(event);"
        fault="Notes_fault(event);" />
</mx:WebService>
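For readers outside the Flash ecosystem, the same middleware round-trip can be sketched in TypeScript. The envelope builder below is generic SOAP 1.1 boilerplate; the "Notes" operation and the learnerId parameter are illustrative assumptions, not part of the actual platform API.

```typescript
// Build a minimal SOAP 1.1 envelope for an operation such as "Notes".
// Operation and parameter names are hypothetical examples.
function buildSoapEnvelope(operation: string, params: Record<string, string>): string {
  const body = Object.entries(params)
    .map(([k, v]) => `<${k}>${v}</${k}>`)
    .join("");
  return (
    `<?xml version="1.0" encoding="UTF-8"?>` +
    `<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">` +
    `<soap:Body><${operation}>${body}</${operation}></soap:Body>` +
    `</soap:Envelope>`
  );
}

// The client would POST this envelope to the endpoint URL returned by the
// JUDDI middleware lookup.
const envelope = buildSoapEnvelope("Notes", { learnerId: "A123" });
```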
Patterns used in the Software Factory
Patterns are forms for encapsulating knowledge, together with a strategy for solving a frequently occurring problem in a specific context. A pattern defines the relationship between a problem domain and a solution domain.
Composition pattern for IRLCOO
The IRLCOO components were built using the composition pattern, which provides a common interface to manage indivisible and compound components. The IRLCOO use a common interface via a Container_IRLCOO class. The class is intended as an abstract class, but AS 3.0 does not support abstract classes, so we define classes knowing that they will not be instantiated but will be extended by subclasses; abstract methods are defined as function declarations that throw an error if they are called. The Container_IRLCOO class defines default implementations for both Compound_IRLCOO and Indivisible_IRLCOO and overrides the necessary methods. The methods allow clients to build the compound system. The methods from the base class that are not relevant to the indivisible IRLCOO components are nevertheless overridden and implemented. The Container_IRLCOO is composed of indivisible and compound IRLCOO. The classes provide a recursive implementation of the iterator() method. Figure 2 shows the class diagram of the composition pattern based on IRLCOO.
Figure 2. Class diagram of the composite pattern based on IRLCOO.
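The abstract-class workaround described above can be sketched as follows, transliterated from AS 3.0 into TypeScript for brevity; the add() method and the example component names are illustrative assumptions, not the platform's actual API.

```typescript
// "Abstract" base class: like AS 3.0, the base provides default
// implementations, and methods meant to be abstract simply throw if called.
class ContainerIRLCOO {
  constructor(public name: string) {}
  add(child: ContainerIRLCOO): void {
    // Emulated abstract method: throws unless a subclass overrides it.
    throw new Error("abstract method: not supported by this component");
  }
  // Default (leaf) traversal: yields only this component.
  *iterator(): IterableIterator<ContainerIRLCOO> {
    yield this;
  }
}

// Indivisible (leaf) component: inherits the throwing default for add().
class IndivisibleIRLCOO extends ContainerIRLCOO {}

// Compound component: overrides the methods that make sense for containers
// and provides the recursive iterator() implementation.
class CompoundIRLCOO extends ContainerIRLCOO {
  private children: ContainerIRLCOO[] = [];
  add(child: ContainerIRLCOO): void {
    this.children.push(child);
  }
  *iterator(): IterableIterator<ContainerIRLCOO> {
    yield this;
    for (const c of this.children) yield* c.iterator();
  }
}

// Clients manage indivisible and compound components through one interface.
const lesson = new CompoundIRLCOO("lesson");
lesson.add(new IndivisibleIRLCOO("video"));
lesson.add(new IndivisibleIRLCOO("quiz"));
const names = [...lesson.iterator()].map(c => c.name); // ["lesson", "video", "quiz"]
```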
Model-View-Controller pattern with IRLCOO
The Model-View-Controller (MVC) pattern separates an application into three elements with non-overlapping responsibilities. MVC is a compound pattern that consists of several patterns; in our case we use two basic patterns: the observer and the composition pattern, the latter covered in the previous section. The observer pattern is used in order to keep the view updated. Figure 3 shows the MVC architecture implemented with Struts 2.0 (http://struts.apache.org/).
Observer pattern with IRLCOO
The Observer pattern has an object called the subject; the subject maintains a list of subscribers and automatically notifies them of state changes. An IRLCOO component subscribes to begin the service and continues until it unsubscribes; the WebService component sends out the state, setting up the IRLCOO component to receive the information from a single source. Figure 3 shows the Observer pattern implemented.
Figure 3. The MVC Architecture implemented with Struts 2 and IRLCOO.
Meta data with IRLCOO
Ontologies standardize and provide meaning to our materials under the WBE paradigm, making the educational materials understandable to the machine. The IRLCOO components have semantic markup and enable reasoning based on the JENA inference engine. The ontology middleware supports creation and maintenance. The semantic markup is created automatically by the semantic_markup subsystem, based on XML and RDF. Figure 4 depicts the IRLCOO with meta data.
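As an illustration, the semantic markup for one IRLCOO carrying learner metrics such as time and completed activities might be serialized as RDF/XML along these lines; the namespace and property names are hypothetical, not the platform's actual vocabulary:

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:irlcoo="http://example.org/irlcoo#">
  <rdf:Description rdf:about="http://example.org/irlcoo/lesson-01">
    <irlcoo:title>Conceptual Maps Lesson</irlcoo:title>
    <irlcoo:timeSpent
        rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">300</irlcoo:timeSpent>
    <irlcoo:completedActivities
        rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">4</irlcoo:completedActivities>
  </rdf:Description>
</rdf:RDF>
```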
Systems Developed Based on IRLCOO
The IRLCOO were built internally using the composition pattern that provides a common interface simplifying its use and reusability. The systems use the MVC pattern; the IRLCOO use the
View as the external interface of the application. The students interact with the systems through
the View dynamically built with IRLCOO. The latest versions are based on the Flex framework; the IRLCOO are composed of prebuilt components, custom components, a rich class library, Multimedia eXperience Markup Language (MXML), and AS 3.0. The framework allows users to build RIAs based on FP. The system makes it possible to add a voice-based interface to the educational materials using VoiceXML, enabling a richer interaction with the student. Hibernate is a project that aims to be a complete solution to the problem of managing persistent data in Java; Object/Relational Mapping (ORM) is the name given to automated solutions to this mismatch problem (Peak & Heudecker, 2005). This system implements persistent data using Hibernate on the server side (https://www.hibernate.org/). The IRLCOO components allow separation of control and content to maximize reusability.
SiDeC 1.0 was used to build Web-based courseware from the stored IRLCOO (Learning Resources). The SiDeC lesson templates were based on the cognitive theories of Conceptual Maps (CM) and Problem-Based Learning (BPL) (Barell, 2006). The new SiDeC 2.0 version supports the new IRLCOO with AS 3.0 and Flex 3.0 to provide RIA online courses. The IRLCOO take
learner’s metrics, for example, time, learning tracking, and completed activities, with the purpose
of tailoring their learning experiences; the metrics are serialized as Semantic Web data using JENA (http://jena.sourceforge.net/). These IRLCOO are compliant with the specifications for learning items of the SCORM 2004 Models (Content Aggregation, Sequencing and Navigation, and
Run Time Environment) (http://www.adlnet.gov/Pages/Default.aspx). The IRLCOO use meta data that represent the configuration files at Run-Time; the imsmanifest.xml file describes the control/content to the LMS. In our case we use three imsmanifest.xml files to implement the functionality of the SCORM manifest, with the purpose of maximizing the reusability of the IRLCOO components in the system: imsmanifest_container.xml is the configuration file for the container; imsmanifest_content/imsmanifest_evaluation are the configuration files for the content/evaluation and the respective multimedia resources for each; and imsmanifest_navigation is the configuration file for the navigation, which is responsible for the course's sequence. SiDeC 2.0 added new lesson templates based on the Case Method (Ellet, 2007), Project-Based Learning (PBL) (Uskov & Uskov, 2003), and a Structured Open (SO) approach.
Figure 4. IRLCOO with meta data.
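For orientation, a SCORM-style manifest ties an organization of items to the resources that implement them. The hypothetical sketch below suggests what an imsmanifest_container.xml might look like; all identifiers and the href are illustrative assumptions:

```xml
<manifest identifier="container-manifest"
          xmlns="http://www.imsglobal.org/xsd/imscp_v1p1"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_v1p3">
  <organizations default="ORG-1">
    <organization identifier="ORG-1">
      <title>Course Container</title>
      <item identifier="ITEM-1" identifierref="RES-1">
        <title>IRLCOO Container</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="RES-1" type="webcontent"
              adlcp:scormType="sco" href="container.swf"/>
  </resources>
</manifest>
```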
The Evaluation System 2.0 (ES 2.0) is designed using the same model used for the SiDeC 2.0; the
principal difference is that the ES 2.0 performs an analysis of the learner’s profile using the MAS
and Semantic Web, constructed at Run Time during the teaching/learning process with the learner’s experiences. The metrics were collected by IRLCOO and serialized as Semantic Web data
using JENA. The purpose of the analysis is to offer personalized feedback and dynamic reconfiguration of the student's educational materials, dynamically modifying the sequence of the course, personalized to the learner's needs, based on the learning process design and the obtained results. The ES 2.0 invokes the MAS; the Coach Agent exchanges facts as input and output with the inference engine via the StudentTracker and provides facts to the query engine. The inference engine uses facts and ontologies to derive additional factual knowledge that is implied, implementing an assistance system based on the Semantic Web.
Semantic Web Platform
Tim Berners-Lee's vision for the future of the Web has two parts: the first is to make the Web a more collaborative environment; the second is to make the Web understandable and, thus, processable by machines. His original vision clearly involved more than retrieving Hyper Text
Markup Language (HTML) pages from Web servers. There are relationships between resources
that are not currently captured on the Web. The technology to capture such relationships is the Resource Description Framework (RDF). The key idea is that the original vision encompassed additional meta data above and
beyond what is currently in the Web. This additional meta data is needed for machines to be able
to process information on the Web (Daconta, Obrst, & Smith, 2003).
The Semantic Web development environment used is the following: Java 1.6 Software Development Kit (SDK) (Oracle, 2009), the Jena Semantic Web Framework (http://jena.sourceforge.net/),
Protégé Ontology Editor 4.0 Alpha (Protégé 4, 2009), Ontology Reasoner Pellet 1.5.2 (Clark &
Parsia, 2009) and Java Agent Development Framework 3.7 (JADE) (http://jade.tilab.com/). The
tools use the Java programming language. JENA was selected as the Semantic Web framework. It
is a Java framework for building Semantic Web applications that provides a programmatic environment for RDF, RDFS, OWL, and SPARQL and includes a rule-based inference engine (http://jena.sourceforge.net/).
StudentTracker
The StudentTracker integrates several different online sources of information about the student, for example, time, learning tracking, and completed activities, throughout the student's session via the IRLCOO on the client side. The information is combined into a Semantic Web data model in which the StudentTracker can navigate, query, and search across all the sources as a single model. The objective of the StudentTracker is to combine information from several different online sources, offering alternatives for tailoring the learning resources at Run-Time. Figure 5 shows
the Semantic Web Platform, where the StudentTracker is being executed through the Coach
Agent.
Figure 5. Semantic Web Platform.
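Once the session metrics are in a single RDF model, the StudentTracker's queries can be expressed in SPARQL, which JENA supports. A hypothetical query over such a model (property names assumed, matching no actual schema) might look like:

```sparql
# Retrieve time spent and completed activities for each tracked learner.
PREFIX irlcoo: <http://example.org/irlcoo#>
SELECT ?learner ?time ?completed
WHERE {
  ?learner irlcoo:timeSpent ?time ;
           irlcoo:completedActivities ?completed .
}
```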
The Architecture of the Software Factory
The industrialization of software development has moved the industry towards maturation. Industries customize and assemble standard components to create comparable but distinct products; they standardize, integrate, and automate production processes and build and configure tools to automate recurring tasks, minimizing costs and risks through production lines. Our software factory builds families of contents and evaluations, maximizing reusability. Assembling IRLCOO that consume WS means that the developed tools reduce the building complexity and the amount of handwritten code. Figure 6 shows the architecture of the software factory.
Figure 6. The architecture of the software factory.
Limitations
The main limitation of the proposed system lies in its software components being based on AS 3.0 and the Flash platform. A software component is a unit of composition with contractually specified interfaces and explicit context dependencies; here the main context dependency is the virtual machine that executes AS 3.0 code, the Flash Player. The Flash Player nevertheless provides the cross-platform Run Time Environment that was used to build the IRLCOO components.
Conclusions
The authoring, evaluation, and VoiceXML systems were created under the same IRLCOO-based architecture, pursuing the goals of conquering complexity, managing change, and reusability, while keeping in mind the separation of control and content. This architecture integrates ACOA, the MAS, the IRLCOO, and the Semantic Web Platform. The approach has focused on reusability, accessibility, durability, and interoperability of the learning contents, which are built as IRLCOO, to be fundamental pieces in the delivery of learning content. Our communication model is composed of the LMS communication API, AJAX, the Struts framework, Hibernate, IRLCOO, WS, the Semantic Web, and a JUDDI server. It provides new development capabilities for WBE systems because its integral technologies are complementary. SiDeC 2.0 and the ES 2.0 were developed under this model to help automate learning content development and reduce its complexity. The Semantic Web Platform helps to build intelligent and adaptive systems (with bidirectional communication) according to the learner's needs. The ADL schema manages dynamic sequencing, composition, content separation, and navigation in the Run Time Environment (RTE). The proposal has the same advantages as ADL and adds the possibility of building Web and desktop applications using the same learning and evaluation components already generated. The Software Factory schema has automated the development and maintenance of variants of IRLCOO and the production of fixed IRLCOO for adapting, assembling, and configuring ACOA. Finally, the IEEE 1484 LTSA architecture is improved by collecting certain student metrics in an automatic way, additionally adding support for the Semantic Web.
Acknowledgments
The authors of this paper would like to thank the Superior School of Computer Sciences and the Computer Science Research Center of the Instituto Politécnico Nacional (IPN), and the Universidad Nacional Autónoma de México (UNAM), for the partial support of this work within project IPN-SIP 20101278.
References
Barell, J. (2006). Problem-based learning: An inquiry approach. Thousand Oaks: Corwin Press.
Canales, A., Peña, A., Peredo, R., Sossa, H., & Gutiérrez, A. (2007). Adaptive and intelligent web based
education system: Towards an integral architecture and framework. Expert Systems with Applications,
33(4), 1076–1089.
Clark & Parsia. (2009). Pellet: OWL 2 Reasoner for Java. Retrieved January 7, 2009, from
http://clarkparsia.com/pellet/
Daconta, M. C., Obrst, L. J., & Smith, K. T. (2003). The semantic Web: A guide to the future of XML, Web
services, and knowledge management. Indianapolis: John Wiley & Sons.
Ellet, W. (2007). The case study handbook: How to read, discuss, and write persuasively about cases. Boston: Harvard Business Press.
Grane, D., Pascarello, E., & James, D. (2005). Ajax in action. Greenwich, CT: Manning Publications.
Greenfield, J., Short, K., Cook, S., Kent, S. & Crupi, J. (2004). Software factories: Assembling applications
with patterns, models, frameworks, and tools. Indianapolis: Wiley.
Gruber, T. (1993). A translation approach to portable ontologies. Knowledge Acquisition, 5(2), 199-220.
Holmes, J. (2006). Struts: The complete reference (2nd ed.). California: McGraw Hill – Osborne Media.
Learn ActionScript. (2009). Adobe Developer Connection/ActionScript Technology Center. Retrieved January 10, 2009, from http://www.adobe.com/devnet/actionscript.html
Learning Technology Standards Committee. (2001). IEEE P1484.1/D9, 2001-11-30 Draft Standard for
Learning Technology — Learning Technology Systems Architecture (LTSA). Retrieved February 15,
2005, from http://ltsc.ieee.org/wg1/files/IEEE_1484_01_D09_LTSA.pdf
Oracle. (2009). Java SE Downloads. Retrieved January 3, 2009, from
http://www.oracle.com/technetwork/java/javase/downloads/index.html
Peak, P., & Heudecker, N. (2005). Hibernate quickly. Greenwich, CT: Manning Publications.
Peredo, R., Balladares, L., & Sheremetov, L. (2005). Development of intelligent reusable learning objects
for web-based education systems. Expert Systems with Applications, 28(2), 273-283.
Protégé 4. (2009). Protégé. Retrieved January 7, 2009, from
http://protege.stanford.edu/download/protege/4.0/installanywhere/
Szyperski, C. (1998). Component software. Beyond object-oriented programming (2nd ed.). New York:
Addison-Wesley Professional.
Uskov, V., & Uskov, M. (2003). Reusable learning objects approach to Web-based education. International
Journal of Computer and Applications, 25(3), 188-197.
Wang, A., & Qian, K. (2005). Component-oriented programming. Georgia: John Wiley & Sons.
World Wide Web Consortium. (n.d.). Latest SOAP versions. Retrieved January 10, 2010, from
http://www.w3.org/TR/soap/
World Wide Web Consortium. (2006). Extensible Markup Language (XML). Retrieved December 10,
2006, from http://www.w3.org/XML/
World Wide Web Consortium. (2010). Resource Description Framework (RDF). Retrieved November 20,
2006, from http://www.w3.org/RDF/
Biographies
Rubén PEREDO‐VALDERRAMA received the B. Sc. degree from
the Escuela Superior de Ingeniería Mecánica y Eléctrica, ESIME, the
M. Sc. degree from the Centro de Investigación en Computación,
(CIC), Instituto Politécnico Nacional, IPN. He is currently a candidate for a Ph.D. degree at CIC – IPN. His main research lines are Web-based education, the Semantic Web, multi-agent systems, and multimedia. In 1999, he joined CIC – IPN, where he worked in the area of Artificial Intelligence (AI). He then joined ESCOM – IPN, where he now works in the area of Computer Systems Engineering. Currently, he is a level-1 member of the SNI and has several publications in indexed international, national, and institutional journals. He is the author of a book chapter published by Springer and has several other publications in international and national conference proceedings.
Alejandro CANALES‐CRUZ graduated in communication and
electronic engineering and obtained his MSc in microelectronic science
from the Instituto Politécnico Nacional, Mexico. He received his PhD
in computer science from the Centro de Investigación en Computación
of the Instituto Politécnico Nacional, Mexico. His main research lines
are Web-based education, mobile collaborative learning, artificial intelligence based on multi-agent systems and design of secure software.
He has been the author or co-author of multiple papers published in international journals, as well as chapters of scientific books and papers in peer-reviewed conference proceedings.
Iván PEREDO‐VALDERRAMA received the B. Sc. degree from
the Universidad Autónoma Metropolitana, UAM, the M. Sc. degree
from the Centro de Investigación en Computación, (CIC), Instituto
Politécnico Nacional, IPN. His main research lines are Web-based
education, the Semantic Web, multi-agent systems, and multimedia. He has been the author or co-author of multiple papers published in international journals.
Interdisciplinary Journal of E-Learning and Learning Objects
Volume 7, 2011
How Do Students View Asynchronous Online
Discussions As A Learning Experience?
Penny Bassett
Victoria University, Melbourne, Australia
[email protected]
Abstract
This case study explores students' perceptions of the value of asynchronous online discussions.
Postgraduate business students participated in asynchronous online discussions that are part of a
blended approach to teaching, learning, and assessment. The students were required to read three
refereed journal articles on global human resource information system (HRIS) implementations,
write annotated bibliographies, discuss online in groups of four, and reflect on the learning experience. Students' reflections are examined to identify factors that make this a worthwhile learning experience. These reflections indicate that students valued the learning experience and viewed
the online environment as an inclusive place in which to collaborate and discuss with their peers.
Planning, preparation, and structure were found to be factors relating to the effectiveness of the
online discussion, with timing generally being viewed as positive but seen as an issue by some
students. Overall, students valued this learning and assessment strategy believing that it increased
their knowledge and understanding of HRIS.
Keywords: Asynchronous online discussion; collaboration; student reflections; blended learning
Introduction
One of the challenges in many Australian universities is to provide learning and assessment experiences that cater for an increasingly diverse student population. This study has been undertaken in one of the most culturally diverse Australian universities with a philosophy that the curriculum should be inclusive and “equally relevant to local and international students” (Woodley
& Pearce, 2007, p.2). In this paper it is contended that an asynchronous online discussion, in a
blended approach to learning and assessment, can facilitate an inclusive learning environment.
A vital component of the blended approach is integration of learning and assessment activities,
with each complementing the other in achieving learning outcomes. In relation to online discussions, this is supported by Zhu (2006) and Groves and O’Donoghue (2009) who contend that
there should be alignment with curriculum objectives. The online activity in this study is part of a
blended approach to learning and assessment designed to develop knowledge, understanding and
critical skills for use in subsequent assignments.
Previous studies found that, while many
English as a second language (ESL) students lack confidence and are hesitant to
participate in face-to-face discussions,
these factors are alleviated in collaborative asynchronous online discussions
(Al-Salman, 2009; Gerbic, 2010). An
aim of learning is to enable students to
work together in sharing knowledge and developing understanding. Palloff and Pratt (1999, as
cited in Vonderwell, 2003) believe that this collaboration leads to the development of new meaning, with Macdonald (2008, p.115) pointing out that student-centered learning online includes "e-investigation, e-writing and e-collaborating."
Other researchers have examined various factors that relate to the success and limitations of asynchronous online discussions (for example, Andresen, 2009; Gerbic, 2006). These include structure, timing, group size, instructor presence/absence, incentives, collaboration, and the online environment itself.
Using a single case-study research methodology, this research explores asynchronous online discussions as a learning and assessment strategy from the students’ perspectives. Feedback from
student reflections, submitted as a component of the online discussion assessment requirements,
was examined to identify whether student learning was enhanced through this learning experience
and what factors contributed to this.
Context and Background
Over a period of many years postgraduate students in the subject human resource information
systems (HRIS) have shown a reluctance to discuss in face-to-face learning situations, so it was
decided to include an online, asynchronous discussion in the curriculum. The structure and guidelines have been reviewed and refined as it has evolved into a well-structured learning and assessment strategy in a blended curriculum.
There has been an increasing number of ESL students in the subject, and they now outnumber the
Australians who are native English speakers. In 2010 there was a ratio of 80:10 ESL to native
English speakers in HRIS. An ongoing area of concern is that during the classroom discussions
responses are often directed to the lecturer rather than to each other, and there is limited participation. It is also difficult to get Australian students to interact or work collaboratively with international students.
Macdonald (2008) states that with online discussions it is possible to assess individual contributions. While the online discussion is a collaborative activity, in this study the assessment is
marked individually and consists of an annotated bibliography, contributions to the discussion
demonstrating understanding of the topics, and a short reflection on the learning experience. The
rationale for this is that all students are required to read and annotate three articles that provide
the basis for the discussion and this and the reflections are obviously individual activities. Previously, students collaborated on all aspects of this assignment, dividing the tasks, particularly the
annotated bibliography. In some cases students read only the article they annotated which subsequently affected the quality of the discussions.
The structure of this online discussion has evolved over many years from a relatively loosely
structured activity with few instructions on how to engage in the discussions to a finely structured
assessment related learning experience including clearly stated criteria, guidelines, and timelines,
with the individual contributions to the online discussion being assessed. According to Gerbic
(2010) students need to be motivated to participate in online discussions, with well-planned and
structured learning and assessment activities. This is supported by Vonderwell, Liang and Alderman (2007), who suggest that the structure of a discussion influences student participation and
subsequently how they value the assessment in the online learning environment. Further, Yeh
and Buskirk (2005, as cited in Hew, Cheung & Ng, 2010) found that assessing the discussion encouraged students to engage in the learning activity. Andresen (2009, p.252) also states that
“many learners need an incentive to participate,” suggesting that the degree of participation be
included in the assessment.
Alignment with curriculum objectives and integration of learning and assessment activities are
essential factors in blended learning (Groves & O'Donoghue, 2009; Zhu, 2006). Learning outcomes and assessment guidelines in this subject are explicitly written to demonstrate alignment
with the curriculum objectives in the online discussion and other learning and assessment activities in the subject (see the Appendix).
The students are randomly organized into groups of four or five, although sometimes groups fall
to three because of student attrition. While the Du, Zhang, Olinzock, and Adams’ (2009) study
suggests that groups of three to four students are preferable, four was found to be the optimal size
in this case, with groups of three usually having less in-depth, shorter discussions and often more
difficulties with collaboration.
Gerbic (2010) found that preparation for online discussions and explanation of the link between
assessment and learning provides an incentive for effective online participation. While the assumption was made that students would be familiar with using web-based materials, in fact they
had seldom used interactive web-based learning materials and had to be given practice sessions.
Bonk and Zhang (2008) emphasize it is important that students are provided with rules and guidance on online discussion participation, as well as training if required.
Groups were given time in class to introduce themselves, select questions, and discuss how they
would organize their time. As well, they were given the opportunity to practice participating in an
online asynchronous threaded discussion based on a short journal article. They were encouraged
to engage in a discussion rather than simply posting statements, and it was emphasized that the
subsequent online discussions would be assessed on demonstration of knowledge, participation
and facilitating the on-going discussion through phrases, such as What do you think? I agree (or
disagree) but would like to add, etc. Also, students were required to open and close one discussion thread each, as in the ‘real’ online discussion.
A decision was made to have no lecturer participation in the online discussion. However, the discussion was teacher facilitated in terms of providing the structure through the questions posed but
with the discussion and each question being peer-facilitated, similar to Ng, Cheung, and Hew’s
(2009) study. When the online discussions were first incorporated into the subject, the lecturer
participated, commenting regularly on student contributions. This inhibited discussion, resulting
in answers being directed to the lecturer rather than fellow students.
Vonderwell et al.'s (2007) study showed mixed findings on instructor participation: one group of students was found to direct their responses to the instructors. Zhu (2006) comments that the
instructor’s presence may negatively affect student participation and this had been noticed in relation to earlier online discussions in the same subject. Indeed, Gerbic (2010, p.133) suggests that
the students take responsibility for the discussion in the absence of the lecturer, “creating a democratic space.” As mentioned previously, in face-to-face discussions and lectures it was observed that many students were reluctant to discuss, and this is one of the main reasons the online
learning and assessment strategy was adopted.
Gerbic (2010) found that shy students felt less inhibited in the online discussion environment,
while Al-Salman (2009, p.12) believes that “online communication allows more intimidated people to participate.” In an asynchronous online discussion students have time to read and research
before posting answers (Gerbic, 2010). It was observed in earlier iterations of this learning activity that international students found the online environment a safe one in which their research and
ideas are respected. This aligns with Hassam's (2007, p. 81) contention that “developing students' self-confidence and allowing that knowledge to surface will encourage the kind of critical thinking overseas students are too often accused of lacking.”
This case study explores how a culturally diverse group of students perceive asynchronous online
discussions as a learning experience, particularly in terms of collaboration and structure.
Asynchronous Online Discussions
Methodology
A qualitative, single case study methodology was used as the research aimed to examine students’
perceptions of asynchronous online discussions as a learning strategy. Yin (2009) suggests that
this methodology is suited to situations where the research is providing an explanation for a
‘how’ question in a contemporary context; in this case, how students perceive a learning activity
in an online environment.
In an online asynchronous discussion assignment, students were required to read and annotate
three refereed journal articles, discuss online over a five day period, and reflect on the learning
experience. Discussions on a minimum of three questions based on the readings were mandatory
with students having the choice of basing the fourth question on the articles or to respond to an
open question that invited them to discuss their relevant professional experiences. In groups of
four each student had to open one discussion thread on the first day of the discussion, participate
in the discussion, and provide a summary closing as well as participate in the other questions selected by the specific group. Marks were allocated individually for an annotated bibliography, the
discussion, and the reflection. Details of the assignment are shown in the Appendix.
It was emphasized to students that their reflections would provide feedback that would be used to
improve the learning experience and assessment. While submission of the reflection was mandatory, students were informed that they would receive full marks for this component of the assessment if they reflected on either positive or negative aspects of the learning experience rather than
on the content of the articles. As the reflection is subjective, marks were lost only for non-submission of the reflection.
The Participants
The 90 students who participated in this study were enrolled in HRIS, a post-graduate unit at Victoria University, Melbourne, Australia, in 2010. Of these, 72 were international students for whom English was their second language, 8 were domestic students with English as their second language, and 10 were domestic students whose first language was English. Online discussion groups of
four were randomly organized using the Blackboard group manager tool.
Tools
The online discussions took place in the Blackboard learning management system.
Data Collection and Analysis
Data was collected from student reflections in response to the assessment requirement to “reflect on the value of this on-line discussion as a learning experience (a personal reflection of no more than half a page).” The open statement was used in order to give students flexibility in their responses as well as to collect a broad range of qualitative data (Yin, 2010).
The reflections were analyzed in order to identify themes and percentages were calculated for
each theme.
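The tally-and-percentage step described above can be sketched as follows. This is an illustrative reconstruction only: the theme labels and counts are back-calculated from the percentages reported in the Results, not the study's actual coding data.

```python
# Minimal sketch of the theme-tally procedure: occurrences of each theme
# across the 90 reflections are counted and reported as rounded percentages.
# Counts here are inferred from the reported percentages (illustrative only).
def theme_percentages(theme_counts, n_students):
    """Return each theme's share of respondents as a whole-number percent."""
    return {theme: round(100 * count / n_students)
            for theme, count in theme_counts.items()}

counts = {
    "collaboration": 59,        # reported as 66% of 90
    "structure": 16,            # reported as 18%
    "inclusion": 16,            # reported as 18%
    "timing (positive)": 20,    # reported as 22%
    "timing (negative)": 10,    # reported as 11%
    "invalid or missing": 21,   # reported as 23%
}
print(theme_percentages(counts, 90))
```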
Results
The results in Table 1 show that the majority of students (66%) positively valued the collaborative group work. Sixteen students (18%) expressed positive views about the structure. While 22% of the students commented favorably on timing, 11% found the timekeeping of other students an issue. Sixteen participants (18%) reflected that they felt the online discussion provided an inclusive and safe environment. Twenty-one participants (23%) either completed the reflection incorrectly or did not submit one.
Collaborative Learning
Both local and international students valued collaborating online and believed they learnt from
each other’s views and experiences.
It permits people to share their ideas with others and this enables us to learn from each
other and, at the same time, it is also an opportunity to discuss different points of view,
which is very enrichful.
Students also appreciated seeing other students' perspectives, believing this stimulated their thinking and extended their knowledge through interaction with their peers.
This particular exercise has encouraged exchanging of ideas and sharing of experiences
among our team members. It has promoted working in a team and stimulated our critical
thinking. Furthermore, this activity allowed us to know each team member better and appreciate our differences in opinion and interpretation of the articles. Coming from diverse
backgrounds, each one of us has different views that contributed to an engaging and fruitful
discussion. This activity has enabled us to learn more from each other’s experiences and
made us think beyond classroom lectures and textbooks.
Furthermore, the open question for students with professional experience proved a learning experience for some groups.
Fellow group members shared their experiences by comparing the factors pointed from the
articles to their previous organizations’ practices. Each member was able to share unique
experiences, as we were from different industry backgrounds. By scrutinizing the different
experiences shared, we noted that different organizations had varied approaches to HR.
Also, as none of us have in-depth knowledge or firsthand experience working in HR departments, I feel that the discussions have given us much to reflect about this subject. Such
as how organizations feel towards taking a strategic approach to HR, the challenges met by
project teams implementing HRIS, the factors that can affect HRIS implementations, just to
name a few points. I will now be better able to apply ourselves, with the knowledge gained
through these discussions if I find myself working in an HRIS implementation.
In any collaborative exercise there will be some issues and this often includes lack of preparation,
as was the case in one group.
I felt that another problem with the discussion was that certain group members did not read
both articles and as such were unsure of how to properly discuss questions.
While it is obvious that the majority of students valued the online discussion as a learning experience, in terms of collaboration one international student put it in a nutshell in the following comment.
I felt valued and supported by the other members of the group I enjoyed learning from my
group members, who always seemed to set a good example and always had the answer. I
finished the assignment on an emotional high.
While inclusion was identified as a separate theme, many of the comments related to the inclusive nature of the collaboration.
Structure
The structure of the learning and assessment experience was commented on by sixteen students
and viewed as an incentive to collaboration and participation, as shown in the following reflection
by a local student.
Although I have completed an online discussion previously in another subject, this one was
more worthwhile as the approach of delivering and commenting on information was very
structured. This assisted all group members in planning their responses and encouraged all
to make valuable contributions that drew on and discussed the previous ideas that were
presented.
An international student believed that the structure had improved his communication skills.
There were clear instructions set and I have discovered that individually, this structure has
enabled me to be a more effective communicator.
The preparation for the online discussion was commented on, with students appreciating the
chance to clarify the issues.
The online discussion was a new experience for me. Getting used to the system took a bit of
time. However, with the ‘dummy’ discussion that was set up for class time, it was easy to
carry out the real discussion.
While assessment was not reflected upon, it was part of the structure and obviously motivated the students to engage in the discussions.
Timing
There were thirty reflections on timing, mostly on the positive value of a structured assignment
with timelines. However, some students felt there was insufficient time and that other students did
not respect the timeline, leading to feelings of frustration.
One student compared the traditional classroom discussion to the online discussion, believing that
it was intellectually stimulating and allowed more time for developing insight.
It provides a platform for people to share their knowledge and experience freely without
lots of time constraints. Compared to traditional discussion approaches, I think online discussion could be intelligence stimulating. First of all, without pressure on thinking, it is
easier for people to think thoroughly and provide relevant as well as insightful ideas. In
traditional approach, people are likely to be stressful in creative thinking within a limited
time.
The other positive aspect was flexibility, as students could communicate online rather than at a specific venue.
Another positive aspect is that we were not limited to a specific time and place (just in the
classroom) and had the freedom to participate at the place and time we are comfortable, of
course within a timeframe.
While there were few negative responses, some reflections included statements such as:
One member left their responses till quite late and I think missed valuable opportunities to
expand on some discussions.
In common with many collaborative exercises there will be some students who participate in a
limited manner, but overall the timing of the asynchronous online discussion was perceived as
satisfactory.
Inclusive Learning Environment
With a majority of ESL students in this subject, there was a relatively small number of reflections on how their confidence increased in the online environment. However, an international student's
reflection on this is typical of other comments.
It is very useful because it helps me as an international student to give me time to type organized sentences in a good English. If this discussion in the class, then I could not say everything I want to say as it is provided in the e-discussion because we are usually shy to
speak quickly which will leads us to speak with no English grammar.
While some students were more confident in the online discussion, one local student found the
lack of debate interesting but obviously conformed to the group norm in her responses.
I wonder how much ‘culture’ influences the discussion. I wonder if it is culturally acceptable to openly disagree with another student’s point of view. Even without the cultural
overlay, I found it very difficult to disagree with another student’s point of view as I thought
I might inadvertently offend another person. I think every person in my group (without exception) agreed with each other across the three articles and the five questions.
As mentioned previously, many of the reflections on collaboration indicated that the online discussion was inclusive, increasing knowledge and understanding through the exchange of ideas in
a more flexible learning environment.
Discussion
Learning from each other and developing understanding were valued by the majority of students
in the collaborative online context. Groves and O’Donoghue (2009, p.148) observed, with some
reservations, that an asynchronous exercise “seemed to enhance the learning experience of the
students.” In most reflections collaboration was specifically linked to the learning experience:
learning from each other; self-insight about information systems issues; transfer of learning to the
work situation. All of these factors indicate that students view the online environment as an inclusive one.
A number of students appreciated the structure, with one student favorably comparing this online
discussion to another one she had participated in. This supports Vonderwell et al.’s (2007, p.321)
finding that “structure is an essential factor in the design, implementation and assessment of asynchronous discussions.” As Bonk and Zhang (2008) point out, rules and guidelines are essential
and in this study the practice discussion was useful in highlighting the need for structure and
preparation.
While timelines were incorporated into the design of the discussion, time management has always been an issue in this learning exercise. Fewer students reflected on timing, but those that did
commented favorably about the flexibility of time, stating that it allowed them to consider the
questions and concepts and thus discuss more critically or creatively. Gerbic's (2006) study supports this, finding that there was time for students to research and consider their responses before
engaging in the discussion. However some students had concerns about group members’ time
management, with this affecting the quality of the discussion. This factor has been noted more in
relation to less structured asynchronous discussions that are not related to assessment (Groves &
O’Donoghue, 2009).
There was a low response rate for inclusion, a concept that was narrowly defined in this study.
However, a number of reflections related to both inclusion and collaboration illustrated that the
online discussion increased ESL students’ confidence. They felt less shy and had time to think
about and outline their contributions, alleviating the fear of a poorly phrased answer. While this
may limit the spontaneity of responses it also provided an opportunity for clarifying concepts for
a more insightful reply and allowed them to interact with their group more comfortably. As Hassam (2007) explains, it is essential to develop international students’ confidence to enable them to
become critical thinkers.
In regard to inclusion and collaboration, an interesting cultural factor emerged with an Australian
student finding that there was a reluctance to disagree with any responses. She was in a group
with three international students and the hesitation to contribute what might be perceived as negative comments could relate to Ng et al.’s (2009) similar findings with a group of Asian students.
Conclusion
The online discussion was designed as part of a blended approach to learning, and, while it appears to be a successful learning strategy that students valued, the present study did not explore
the link between the various components of the curriculum. The exercise itself certainly integrated learning and assessment activities in achieving the learning outcomes (Groves &
O’Donoghue, 2009; Zhu, 2006), but further research is required to explore whether the linkage
between the various learning and assessment components of this HRIS subject was realized. The
themes identified were similar to those found in previous studies; that is, collaboration, structure,
timing, and inclusive learning experiences.
In this research, the students highly valued the collaborative nature of the discussion in the online
environment. Collaboration and its many facets will be the topic of the next study on online discussions; in particular, the concept of inclusion and its relationship to collaboration will be explored. While themes were identified and enumerated in the present study, in the future formal
coding and analysis of the qualitative data will be used in order to develop a greater understanding of the nature of online collaboration.
Guidelines, structure, and timing are obvious factors that affect participation and learning in this study, appearing to support the view that a well-structured asynchronous online discussion with clear guidelines and outcomes facilitates learning and collaboration.
In a culturally diverse class both Australian and international students had positive reflections on
the learning experience and, it is contended, this was partly due to the fact that while the discussions were collaborative the students were assessed individually. A tentative conclusion can be
made that the collaborative nature of the online discussions facilitated an inclusive learning experience for all students. There was a noticeable lack of disagreement in one group, and it would
be interesting to investigate cultural factors relating to inclusiveness and intercultural communication in a future study.
Overall, the asynchronous online discussion was perceived as a valuable learning strategy that
resulted in increased confidence to interact in a collaborative online environment. Most students
believed their knowledge, understanding, and critical ability were enhanced in this inclusive context. The following reflection from one of the students sums this up.
On reflection there were a number of positive outcomes and a few barriers to a full and
frank discussion. The first positive was that this was a new and interesting exercise; in short
it was an enjoyable exercise. The process also helped develop relationships with fellow students and it was good to see different perspectives after reading the same article.
In conclusion, this study provides the basis for further research on asynchronous online discussion. In particular, a more rigorous examination of collaboration and inclusion in the online
learning environment from a broader perspective is required.
References
Al-Salman, S. M. (2009). The role of the asynchronous discussion forum in online communication. Journal
of Instruction Delivery Systems, 23(2), 8-13.
Andresen, M. A. (2009). Asynchronous discussion forums: Success factors, outcomes, assessments and
limitations. Educational Technology and Society, 12(1), 249-257.
Bonk, C. J., & Zhang, K. (2008). Empowering online learning: 100+ Activities for reading, reflecting, displaying, and doing. Hoboken, USA: Jossey-Bass.
Du, J., Zhang, K., Olinzock, A., & Adams, J. (2008). Graduate students’ perspectives on the meaningful
nature of online discussions. Journal of Interactive Learning Research, 19(1), 21-36.
Gerbic, P. (2006). Chinese learners and online discussions: New opportunities for multicultural classrooms.
Research and Practice in Technology Enhanced Learning, 1(3), 221-237.
Gerbic, P. (2010). Getting the blend right in new learning environments: A complementary approach to
online discussions. Education and Information Technologies, 15, 125-137.
Groves, M., & O’Donoghue, J. (2009). Reflections of students in their use of asynchronous online seminars. Educational Technology and Society, 12(3), 143-149.
Hassam, A. (2007). Speaking for Australia: cross-cultural dialogue and international education. Australian
Journal of Education, 51(1), 72-83.
Hew, K. F., Cheung, W. S., & Ng, C. S. L. (2010). Student contribution in asynchronous online discussion:
A review of the research and empirical exploration. Instructional Science, 38, 571-606.
Macdonald, J. (2008). Blended learning and online tutoring: Planning learner support and activity design.
Aldershot, UK: Gower Publishing.
Ng, C. S. L., Cheung, W. S., & Hew, K. F. (2009). Sustaining asynchronous online discussions: Contributing factors and peer facilitation techniques. Journal of Educational Computing Research, 41(4), 477-511.
Vonderwell, S. (2003). An examination of asynchronous communication experiences and perspectives of
students in an online course: A case study. Internet and Higher Education, 6, 77-90.
Vonderwell, S., Liang, X., & Alderman, K. (2007). Asynchronous discussions and assessment in online
learning. Journal of Research on Technology in Education, 39(3), 309-328.
Woodley, C., & Pearce, A. (2007). A toolkit for internationalizing the curriculum at VU. Retrieved November 2010, from http://tls.vu.edu.au/vucollege/staff/internationalisation.html
Yin, R. K. (2009). Case study research: Design and methods. Thousand Oaks, California: Sage Publications.
Yin, R. K. (2010). Qualitative research from start to finish. New York: Guilford Publications.
Zhu, E. (2006). Interaction and cognitive engagement: An analysis of four asynchronous online discussions. Instructional Science, 34, 451-480.
Appendix
Online Discussion Assignment
Aim
This assignment is aimed at familiarizing you with academic theory relating to implementation of
human resource information systems. It is envisaged that you will use these references, together
with additional references, in your research assignments.
Learning outcomes
On successful completion of the assessment you will be able to:
• demonstrate knowledge of the administrative and strategic value of an HRIS;
• demonstrate knowledge of the change management issues involved in the implementation of an HRIS;
• show evidence of enhanced personal knowledge of HRIS issues, based on academic articles.
Readings:
Dong, L. (2008). Exploring the impact of top management support of enterprise systems implementations outcomes. Business Process Management Journal, 14(2), 204-218.

Tansley, C., & Newell, S. (2007). Project social capital, leadership and trust: A study of HRIS development. Journal of Managerial Psychology, 22(4), 350-368.

Tansley, C., Newell, S., & Williams, H. (2001). Effecting HRM-style practices through an integrated human resource information system: An e-greenfield site? Personnel Review, 30(3), 351-370.
Discussion questions will be posted in Blackboard on 13 August. The discussion will close on 17
August.
You will be randomly assigned to discussion groups of four/five and will need to discuss
four/five questions: at least one on each of the above articles. Each member of the group should
start one (and only one) discussion thread. Each member of the group must end the discussion
they opened with a summary.
Your discussions will be individually assessed on:
• Effective participation, including commencing early and participating in the discussions throughout the discussion period.
• Ability to discuss rather than simply post answers.
• Demonstration of conceptual understanding.
Reflection and annotated bibliography
• Reflect on the value of this on-line discussion as a learning experience (a personal reflection of no more than half a page). No references required.
• Write an annotated bibliography* of the three references.
*Use the following website for guidelines on writing an annotated bibliography:
http://www.lc.unsw.edu.au/onlib/annotated_bib.html
Marking criteria
Reflection: 5 marks
Annotated bibliography: 15 marks
On-line discussion: 40 marks
TOTAL MARKS: /60 ÷ 4 = /15%
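The conversion on the last line can be read as: the three components sum to 60 raw marks, which are divided by 4 to give the assignment's 15% weighting of the subject grade. A one-line sketch of this arithmetic (the function name is illustrative, not part of the assignment spec):

```python
# Scale raw marks out of 60 down to the assignment's 15% weighting.
def assignment_weighting(reflection, bibliography, discussion):
    """Convert raw marks (max 5 + 15 + 40 = 60) to a percentage out of 15."""
    return (reflection + bibliography + discussion) / 4

print(assignment_weighting(5, 15, 40))  # full marks -> 15.0
```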
Biography
Penny Bassett is a lecturer in Management and Information Systems
at Victoria University in Melbourne, Australia. She has research interests in internationalization of the curriculum and e-learning, having
participated in projects in both areas. Penny has been recognized for
her contribution to teaching and learning in being awarded Victoria
University’s Vice-Chancellor’s Award for Excellence in Teaching and
Learning and an Australian Learning and Teaching Council (ALTC)
Citation for Outstanding Contributions to Student Learning.
Interdisciplinary Journal of E-Learning and Learning Objects
Volume 7, 2011
Factors that Influence Student E-learning
Participation in a UK Higher Education Institution
Kay I. Penny
Edinburgh Napier University, Edinburgh, Scotland, UK
[email protected]
Abstract
E-learning involves the use of information and communication technologies to deliver teaching
and learning and is becoming increasingly important in the delivery of higher education. An
online questionnaire survey was designed to gather information on students’ participation and
opinions of the use of e-learning in a UK higher education institution, and the results show that
different student groups are more likely to participate regularly in certain types of study activities
than others. An exploratory factor analysis reveals three underlying factors which may be used to
classify the different types of e-learning activities, namely, information and communication use,
general educational use, and the use of specialised software. These three factors, which represent the different applications of e-learning, should be considered individually in terms of design, delivery, and management of e-learning support systems, and provision of training for both staff and students.
Keywords: E-learning Participation, Information and Communication Technologies, Higher
Education.
Introduction
E-learning is a concept derived from the use of information and communication technologies
(ICT) to deliver teaching and learning. A common definition states that e-learning in higher education is a technique to enhance learning and teaching experiences and is used to educate students
with or without their instructors through any type of digital media (Christie & Ferdos, 2004). E-learning has also been defined as learning and teaching facilitated online through network technologies (Garrison & Anderson, 2003) and described as utilising many ICT technologies (Kahiigi, Ekenberg, Hansson, Tusubira, & Danielson, 2008; Laurillard, 2004). E-learning can either
be used to replace traditional face-to-face teaching completely, or only partially, for example, the
use of ICT is sometimes introduced as an additional resource alongside traditional teaching methods. A major advantage of ICT is that accessing online learning resources is flexible and fast and
has no geographical barriers (Concannon, Flynn & Campbell, 2005; Sivapalan & Cregan, 2005).
According to Dalsgaard (2008), e-learning technology offers a wide range of opportunities for development of education, and the major advantages of the use of e-learning are independence of time and space and individuality, e.g., courses can be adapted to the individual student and materials can be reused or rearranged.

Material published as part of this publication, either on-line or in print, is copyrighted by the Informing Science Institute. Permission to make digital or paper copy of part or all of these works for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage AND that copies 1) bear this notice in full and 2) give the full citation on the first page. It is permissible to abstract these works so long as credit is given. To copy in all other cases or to republish or to post on a server or to redistribute to lists requires specific permission and payment of a fee. Contact [email protected] to request redistribution permission.
Editor: Alex Koohang

The higher education sectors have been concentrating on increasing the use of online applications of e-learning by using the internet to enhance education (Arabasz & Baker,
2003). With the rapid growth of e-learning, computers are now used by students in many different
educational processes and are considered to be valuable tools to enhance learning in higher education. Wenger (1998) has argued that participation is an intrinsic part of learning; hence a key
challenge for e-learning is to enhance student participation (Bento & Schuster, 2003). It is believed that learner participation may be enhanced by the use of computer-mediated media in both
traditional and e-learning settings (Haythornwaite, 2002; Leidner & Jarvenpaa, 1995). Online
learner participation has been defined as a process of learning by taking part and maintaining relations with others, a complex process comprising doing, communicating, thinking, feeling and
belonging, which occurs both online and offline (Hrastinski, 2008). Hrastinski (2009) provides a
review of the literature in the area of online learner participation and claims that participation and
learning are intricately interrelated and that, in order for learners to take full advantage, the participation experience needs to be satisfactory.
Davies and Graff’s (2005) study measured students’ access to communication areas and the group
area and used this measure to represent the degree of participation. Their findings concluded that
students who failed in at least one module interacted less frequently than students who passed all
their modules. Another study by Sivapalan and Cregan (2005) found that students who demonstrated an active participation in online activities scored better marks. It has also been suggested
that participation has a positive influence on learner satisfaction (Alavi & Dufner, 2005) and retention rates (Rovai, 2002).
Vonderwell and Zachariah (2005) found that online learner participation is influenced by technology and interface characteristics, content area experience, student roles and instructional tasks,
and information overload.
The literature shows that online participation is associated with student achievement, and the motivation behind this study is to try to determine if particular groups of students are not making
sufficient use of online learning, so that these groups of students may be further encouraged to
use online activities in order to enhance their overall learning experience.
The aims of this study are (1) to describe students' usage of various types of e-learning activities
in higher education; (2) to investigate whether any demographic or study-related factors impact
on how regularly students participate in e-learning activities; (3) to determine underlying constructs or classifications of the different types of e-learning participation; and (4) to determine
which demographic or study-related student characteristics are independently associated with the
underlying constructs of e-learning participation.
Methods
A questionnaire survey was designed to gather information on students' experiences and opinions of the use of e-learning. Ethics approval was sought from Edinburgh Napier University Business School and was granted in October 2009. The survey was first piloted with a small group of staff and students at the university in order to check and refine the content of the questionnaire. Following the pilot, all students enrolled for study at Edinburgh Napier University were contacted in November 2009 and invited to participate in the survey via a pop-up window when they next logged onto the learning management system, WebCT. An invitation to participate and a link to the online survey were also posted on the student portal internet page. The online questionnaire survey was administered using SurveyMonkey.com (http://www.surveymonkey.com). In an attempt to maximise the response rate, students were contacted by email two weeks after the start of the survey: students who had already responded were thanked, and those who had not were reminded of the invitation to take part. The survey responses were collected over a three-week period during November 2009.
Penny
Questionnaire
Students were asked to provide some demographic information, such as age and gender, and some details regarding their studies, e.g., school of study, year of study, and type of degree. The questionnaire included a section on computer use, consisting of questions asking students whether they have access to a computer outside the university, to the internet, and to a high-speed internet connection. Respondents were asked to estimate the number of hours spent per week on computer and internet use, in total and for educational purposes only. The next section of the questionnaire asked students to provide details of how often they used a computer or the internet for various tasks related to their studies, e.g., for preparing essays or using certain types of software, and how often they used the internet, e.g., for contacting lecturers and tutors or for participating in online discussions. The data for these questions were collected using a five-point Likert scale with the options never, occasionally, sometimes, quite often, and regularly.
The results presented in this paper are part of a larger questionnaire study which also gathered
information on informatics skills, satisfaction with university ICT provision, and attitudes and
opinions on the use of e-learning.
Data Analysis
Summary statistics are presented to describe the sample of respondents; this includes a breakdown according to age, gender, level of study, type of degree, and school of study. Mann-Whitney tests and Kruskal-Wallis tests were used to test for differences in the number of hours spent per week using a computer and the internet, for all purposes and for educational purposes only, in order to search for differences between student gender, age group, school, and level of undergraduate study. Chi-square tests for independence were carried out to test for associations between student gender, age group, school, and level of undergraduate study and student usage of the various applications of e-learning at university.
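A chi-square test of independence of this kind can be sketched as follows; the 2 x 5 contingency table (gender against the five-point usage scale) is invented for illustration and is not taken from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# invented counts: rows are male/female, columns are the five response
# options never ... regularly (NOT the study's data)
observed = np.array([
    [20, 30, 45, 60, 110],   # male
    [55, 70, 80, 95, 177],   # female
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```

Note that the test is computed over all five response categories, matching the analysis described later in the Results, even when only the "regularly" percentages are reported.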
An exploratory factor analysis (Everitt & Dunn, 2001), with the principal components method of extraction, was used to identify any underlying themes or classifications of the different types and usages of e-learning. The purpose of the exploratory factor analysis was to reduce the data set to a smaller set of summary variables, or factors, such that each factor comprises multiple e-learning measures that contribute to the same e-learning construct or theme. The Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett's test of sphericity were calculated to assess the suitability of carrying out a factor analysis on these data.
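A minimal sketch of this extraction step on simulated data: principal components are taken from the correlation matrix, factors with eigenvalues above 1 are retained, and a varimax rotation (applied later in the paper) is included for completeness. The KMO and Bartlett checks are omitted here, and all data values are invented.

```python
import numpy as np


def varimax(loadings, tol=1e-6, max_iter=100):
    """Classic varimax rotation of a (variables x factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        var_new = s.sum()
        if var_new - var_old < tol:
            break
        var_old = var_new
    return loadings @ rotation


rng = np.random.default_rng(0)

# simulate 200 respondents x 6 Likert-style items driven by two themes
theme1 = rng.normal(size=(200, 1))
theme2 = rng.normal(size=(200, 1))
data = (np.hstack([theme1, theme1, theme1, theme2, theme2, theme2])
        + rng.normal(scale=0.5, size=(200, 6)))

# principal-components extraction from the correlation matrix
corr = np.corrcoef(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

keep = eigvals > 1.0                    # Kaiser criterion: eigenvalue > 1
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
rotated = varimax(loadings)             # rotate to aid interpretation

print("factors retained:", int(keep.sum()))
```

With two strong simulated themes, the eigenvalue-greater-than-1 rule retains two factors, mirroring how the paper's three factors are retained.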
This was followed by logistic regression modelling (Hosmer & Lemeshow, 2000) to determine
which student characteristics are associated with regular use of each of the underlying constructs
or themes of e-learning participation as determined by the preceding factor analysis.
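Under the hood, a logistic regression of this kind can be fitted by Newton-Raphson iteration and its coefficients exponentiated to give odds ratios; the sketch below uses simulated data with one hypothetical binary predictor and is an illustration of the technique, not a re-analysis of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 500
male = rng.integers(0, 2, size=n).astype(float)   # hypothetical predictor
true_logit = -0.2 + 0.7 * male                    # assumed true model
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Newton-Raphson fit of a logistic regression with an intercept
X = np.column_stack([np.ones(n), male])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))           # fitted probabilities
    grad = X.T @ (y - p)                          # score vector
    hess = X.T @ (X * (p * (1.0 - p))[:, None])   # observed information
    beta += np.linalg.solve(hess, grad)

# an odds ratio above 1 means the predictor raises the odds of regular use
odds_ratio = float(np.exp(beta[1]))
print(f"estimated odds ratio: {odds_ratio:.2f}")
```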
Statistical analysis was carried out using SPSS version 16, and in each part of the analysis or for
each statistical test, all available data were included, i.e., respondents were excluded from the
analysis only if they had a missing value on at least one of the variables included in that particular
component of the analysis.
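This per-analysis complete-case rule can be sketched with pandas; the column names and values below are invented for illustration.

```python
import numpy as np
import pandas as pd

# toy respondent data with scattered missing values (invented)
df = pd.DataFrame({
    "gender": ["M", "F", "F", None, "M"],
    "hours_computer": [15.0, 10.0, np.nan, 8.0, 20.0],
    "hours_internet": [20.0, 17.0, 25.0, np.nan, 30.0],
})

# analysis 1 (gender vs computer hours): complete cases on those two only
a1 = df.dropna(subset=["gender", "hours_computer"])

# analysis 2 (gender vs internet hours): a different subset may qualify
a2 = df.dropna(subset=["gender", "hours_internet"])

print(len(a1), len(a2))  # each analysis keeps its own maximal subset
```

Because each test drops only the rows missing on its own variables, different analyses can legitimately report different sample sizes, as in the paper.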
Results
Questionnaire Response
A total of 746 students responded to the online questionnaire survey. Several students did not
fully complete the questionnaire survey and appear to have suffered from respondent fatigue towards the end of the questionnaire. Only 19 students (2.5%) did not provide their age, and four students (0.5%) chose not to provide their gender. Approximately 10% of students who started
the questionnaire did not answer the questions asking students to estimate how many hours per
week they spend using a computer and on the internet for all purposes and for educational purposes only. Around 13% did not respond to questions asking how often they use a computer or
the internet for a range of tasks connected with their university studies, e.g., 13.1% did not answer how often they use a computer for writing essays, reports, or other types of written papers.
Descriptive Statistics: Sample Respondents
The median age of respondents was 23 years (25th percentile = 20 years; 75th percentile = 29
years). The youngest respondent was 16 years of age, and the oldest aged 80 years. The majority
of respondents were female (64.3%) (Table 1).
Table 1. Sample Statistics

Age: median 23 years (25th, 75th percentiles: 20, 29)

                                                           n (%)
Gender:          Male                                      265 (35.7%)
                 Female                                    477 (64.3%)
Faculty:         Engineering, Computing & Creative Ind.    207 (27.8%)
                 Health, Life & Social Studies             269 (36.2%)
                 Business School                           268 (36.0%)
Mode of Study:   Full-time                                 638 (86.4%)
                 Part-time                                 100 (13.6%)
Level of Study:  Postgraduate                              134 (18.1%)
                 Undergraduate                             607 (81.9%)
Two hundred and seven respondents (27.8%) were participating in programmes of study within
the School of Engineering, Computing and Creative Industries; 36.2% within the School of
Health, Life and Social Studies; and 36.0% within the Business School. Most respondents were
studying at undergraduate level (81.9%); of these undergraduates, 27.8%, 24.2%, 27.3%, and 20.6% were in years 1, 2, 3, and 4 respectively. The majority of students were studying full-time (86.4%), and the remaining 13.6% were studying part-time.
Respondents were asked where their family or permanent home was situated; the majority were
from Scotland, 28.2% from Edinburgh and 30.2% from elsewhere in Scotland. Only 7.8% stated
that their family homes were elsewhere in the UK, and one third (33.7%) were from families
based outside the UK. Almost half of respondents (47.9%) live in their family homes during term
time and travel to and from university on a daily basis.
Usage of ICT
The vast majority of respondents said they had unlimited use of a computer or laptop at home
(97.5%), and 96.9% had internet access at home. Of those who had internet access at home, only
8 students (1.2%) had dial-up internet access, whereas all the others had a high speed internet
connection, for example, via broadband or cable.
Respondents were asked to estimate how many hours per week they spend using a computer, excluding internet use, and how many hours using the internet. Summary statistics for full-time students are presented
in Table 2. Nonparametric Kruskal-Wallis or Mann-Whitney tests were carried out to determine
whether any differences exist in the numbers of hours spent using a computer between full-time
students according to their university school, year of study, age group, and gender.
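These nonparametric comparisons can be sketched with scipy; the weekly-hours data below are simulated for illustration and do not come from the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(2)

# hypothetical weekly hours for two genders (two-group comparison)
male_hours = rng.gamma(shape=3.0, scale=5.0, size=120)
female_hours = rng.gamma(shape=2.0, scale=5.0, size=180)
u_stat, p_gender = mannwhitneyu(male_hours, female_hours,
                                alternative="two-sided")

# hypothetical weekly hours for three faculties (multi-group comparison)
faculty_hours = [rng.gamma(3.0, 5.0, 90),
                 rng.gamma(2.0, 5.0, 100),
                 rng.gamma(2.5, 5.0, 95)]
h_stat, p_faculty = kruskal(*faculty_hours)

print(f"Mann-Whitney p = {p_gender:.4f}; Kruskal-Wallis p = {p_faculty:.4f}")
```

The Mann-Whitney test handles the two-group comparisons (e.g., gender), while Kruskal-Wallis generalises to three or more groups (e.g., faculty or year of study), matching the tests reported in Table 2.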
Male students spent significantly longer per week (p < 0.001) using a computer, excluding the internet, for all purposes (median = 15 hours) compared to females (median = 10 hours); however, the time spent per week, excluding the internet, for educational purposes did not differ significantly between males (median = 10 hours) and females (median = 8 hours). When
considering internet use only, no significant difference was found in the number of hours per
week spent on all purposes between the genders; however, female students spent longer per week
on the internet for educational purposes (median = 10 hours) compared to males (median = 8
hours) (p = 0.030).
Students aged 25 years or over spent significantly longer using a computer, excluding the internet, for all purposes (median = 15 hours per week) than the younger students (median = 10 hours
per week) (p < 0.001). Students in the oldest age group also spent significantly longer using the
computer, excluding the internet, for educational purposes only (p < 0.001), spending a median of 10 hours per week compared with 9 hours for those aged 21 – 24 years and only 6
hours for those aged 16 – 20 years. When comparing the time spent on the internet for all purposes, students in the 21 – 24 year age group spent longer per week (median = 20 hours) compared to those students aged 16 - 20 years (median = 15 hours) and those aged 25 years or over
(median = 15 hours) (p= 0.001). When using the internet for educational purposes, students in the
youngest age group spent less time (median = 7 hours per week) than the older age groups (median = 10 hours per week for both groups) (p = 0.001).
Differences exist across the three faculties in the number of hours spent per week using a computer, excluding internet use, for all purposes (p = 0.001); students based in the School of Engineering, Computing and Creative Industries spend longer (median = 15 hours) than those in the Business School (median = 12 hours) and the School of Health, Life and
Social Studies (median = 10 hours). However, there was no significant difference in the number
of hours spent, excluding internet use, for educational purposes between the three faculties. When
using the internet for all purposes, students in the School of Health, Life and Social Studies
spend less time per week (median = 15 hours) than those in the other two schools who spent 20
hours on average per week (p = 0.022). However, no significant difference was found in the
length of time spent by students on the internet for educational purposes (p = 0.092); students in
Engineering, Computing and Creative Industries reported spending 8 hours a week on the internet
for educational purposes on average, whilst students in the other two schools spent 10 hours a
week on average.
When comparing the four undergraduate years of study (Table 2) for full-time students only, no
significant differences were found in the length of internet use and the use of a computer for educational purposes only. However, excluding internet use, fourth year undergraduates reported
spending longer on a computer for all purposes (median = 14 hours per week) compared to a median of 10 hours per week for years one, two, and three (p = 0.010).
Chi-square tests for independence were carried out to investigate if any differences exist in the
frequency of computer and internet usage for particular study activities between genders, age
groups, faculties, and level of study. Respondents were asked to rate how often they used a computer for each activity on a five-point scale labelled never, occasionally, sometimes, quite often
and regularly. Although these tests were calculated over the 5 categories of response, for brevity,
only the percentages who regularly used a computer for each activity are reported in Tables 3 and 4. Since this scale is subjective and does not quantify the time spent on each activity, both full-time and part-time students are included in the analysis of results presented in Tables 3 and 4.
Since numerous chi-square tests have been carried out, the potential for Type I error is increased; therefore, a Bonferroni correction was applied when considering the results. To achieve an overall 5% significance level over all tests presented in Tables 3 and 4, only results with p ≤ 0.001 are considered statistically significant.
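The arithmetic behind this threshold is simply alpha divided by the number of tests; the test count of 50 used below is an assumption for illustration, since the paper gives only the resulting 0.001 cut-off.

```python
# Bonferroni correction: with m tests each judged at level alpha/m, the
# family-wise Type I error is held at (at most) alpha. m = 50 is an
# illustrative assumption, not a figure stated in the paper.
alpha = 0.05
m_tests = 50
threshold = alpha / m_tests
print(f"per-test significance threshold: {threshold:.3f}")
```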
Table 2. Hours spent per week using a computer by full-time students
Median (25th, 75th percentiles)

                          All Full-time   Male            Female          p-value
Excluding internet use:
  All purposes            10 (5, 20)      15 (8, 25)      10 (5, 20)      < 0.001 †
  Education purposes      9 (4, 15)       10 (4, 18)      8 (4, 15)       0.594 †
Internet use only:
  All purposes            20 (10, 30)     20 (10, 30)     17 (10, 25)     0.154 †
  Education purposes      10 (5, 15)      8 (4, 15)       10 (5, 15)      0.030 †

                          Aged 16 – 20    Aged 21 – 24    Aged 25 or over   p-value
Excluding internet use:
  All purposes            10 (5, 15)      10 (5, 21)      15 (7, 29)        < 0.001 ◊
  Education purposes      6 (3, 10)       9 (4, 15)       10 (5, 20)        < 0.001 ◊
Internet use only:
  All purposes            15 (10, 25)     20 (12, 30)     15 (10, 25)       0.001 ◊
  Education purposes      7 (4, 13)       10 (5, 16)      10 (5, 15)        0.001 ◊

                          Eng., Comp. &   Health, Life &   Business
                          Creative Ind.   Social Studies   School          p-value
Excluding internet use:
  All purposes            15 (7, 25)      10 (5, 19)       12 (5, 20)      0.001 ◊
  Education purposes      10 (4, 20)      7 (4, 15)        10 (5, 15)      0.060 ◊
Internet use only:
  All purposes            20 (10, 30)     15 (10, 25)      20 (10, 30)     0.022 ◊
  Education purposes      8 (5, 15)       10 (5, 15)       10 (5, 17)      0.092 ◊

Year of Study             First           Second          Third           Fourth          p-value
Excluding internet use:
  All purposes            10 (5, 20)      10 (6, 20)      10 (5, 21)      14 (7, 25)      0.010 ◊
  Education purposes      7 (4, 14)       10 (5, 15)      8 (4, 15)       10 (5, 20)      0.118 ◊
Internet use only:
  All purposes            15 (10, 25)     20 (10, 25)     20 (12, 30)     20 (10, 30)     0.057 ◊
  Education purposes      10 (4, 15)      10 (5, 15)      8 (4, 15)       10 (5, 15)      0.708 ◊

† Mann-Whitney test; ◊ Kruskal-Wallis test
Male respondents (76.5%) reported more regular use of a computer for preparing essays than females (62.2%) (p = 0.001), and 42.9% of males reported regular use for preparing presentations compared to 37.8% of females (p = 0.001). However, females reported more regular use for drawing (17.1%) compared to only 9.8% of males (p < 0.001). Male respondents reported regular use of the internet in connection with their studies more often than females: 76.9% of males regularly download departmental materials compared to 63.0% of females (p < 0.001);
40.9% of males regularly submit coursework online compared to only 25.4% of females (p =
0.001); and 27.9% of males compared to 20.4% of females regularly search for study related information (p < 0.001).
Table 3. Participation in Study Activities according to Age and Gender
(% reporting regular use; χ2 test p-values computed over all five response categories)

                             Male      Female    χ2 test p
Computer use:
  Essays                     76.5%     62.2%     0.001
  Presentations              42.9%     37.8%     0.001
  Reading                    53.4%     39.9%     0.025
  Drawing                    9.8%      17.1%     < 0.001
  Spreadsheet                16.3%     18.3%     0.016
  Statistics/ maths          8.4%      7.9%      0.011
  Image/ video               13.2%     13.6%     0.004
Internet use:
  Contact lecturers          36.2%     25.2%     0.002
  Contact students           25.9%     17.0%     0.003
  Dept web pages             38.2%     31.0%     0.278
  Download dept materials    76.9%     63.0%     < 0.001
  Additional materials       65.4%     50.4%     0.007
  Submit coursework          40.9%     25.4%     0.001
  Online discussion          14.2%     9.3%      0.155
  Study-related information  27.9%     20.4%     < 0.001

                             Age Group in Years
                             16 – 20   21 – 24   25 or over   χ2 test p
Computer use:
  Essays                     66.5%     73.1%     75.1%        0.365
  Presentations              40.7%     42.1%     41.4%        0.104
  Reading                    43.4%     53.0%     50.0%        0.229
  Drawing                    11.5%     13.0%     12.3%        0.183
  Spreadsheet                12.1%     17.1%     20.8%        0.143
  Statistics/ maths          8.8%      10.3%     5.1%         0.001
  Image/ video               11.5%     19.1%     10.2%        0.001
Internet use:
  Contact lecturers          21.4%     36.6%     37.1%        0.010
  Contact students           17.6%     25.5%     24.5%        0.020
  Dept web pages             28.2%     34.9%     42.6%        0.257
  Download dept materials    72.5%     72.2%     72.9%        0.828
  Additional materials       50.0%     63.3%     65.4%        0.022
  Submit coursework          27.6%     35.7%     41.7%        0.090
  Online discussion          7.7%      11.2%     17.4%        0.001
  Study-related information  20.3%     25.9%     28.8%        0.209
Table 4. Participation in Study Activities according to School and Year of Study
(% reporting regular use; χ2 test p-values computed over all five response categories)

                             Eng., Comp. &   Health, Life &   Business
                             Creative Ind.   Social Studies   School     χ2 test p
Computer use:
  Essays                     58.1%           74.9%            78.3%      < 0.001
  Presentations              36.3%           34.7%            51.7%      < 0.001
  Reading                    37.3%           55.9%            49.6%      0.008
  Drawing                    23.5%           6.8%             9.6%       < 0.001
  Spreadsheet                16.8%           16.0%            18.3%      < 0.001
  Statistics/ maths          8.0%            10.1%            6.5%       0.087
  Image/ video               15.6%           11.4%            14.0%      < 0.001
Internet use:
  Contact lecturers          29.1%           38.1%            28.7%      0.219
  Contact students           18.4%           24.3%            24.3%      0.025
  Dept web pages             27.5%           42.7%            34.4%      0.055
  Download dept materials    55.3%           78.2%            78.3%      < 0.001
  Additional materials       53.4%           63.6%            61.3%      0.157
  Submit coursework          26.0%           48.7%            28.6%      < 0.001
  Online discussion          8.9%            20.9%            6.5%       < 0.001
  Study-related information  19.0%           29.4%            25.7%      0.019

                             Year of Study
                             First    Second   Third    Fourth   χ2 test p
Computer use:
  Essays                     68.5%    76.0%    68.3%    71.8%    0.382
  Presentations              39.9%    41.9%    46.9%    33.6%    < 0.001
  Reading                    46.9%    51.2%    43.4%    51.8%    0.253
  Drawing                    9.1%     10.1%    13.1%    17.3%    0.007
  Spreadsheet                14.0%    17.1%    22.1%    11.8%    < 0.001
  Statistics/ maths          5.6%     14.7%    8.4%     7.3%     < 0.001
  Image/ video               15.4%    13.2%    15.2%    12.8%    0.243
Internet use:
  Contact lecturers          28.0%    31.8%    33.8%    35.5%    0.089
  Contact students           18.2%    24.0%    26.9%    22.7%    0.836
  Dept web pages             32.2%    39.4%    37.9%    40.7%    0.927
  Download dept materials    76.1%    72.9%    74.5%    66.4%    0.837
  Additional materials       48.3%    62.8%    60.4%    61.8%    0.210
  Submit coursework          34.5%    34.6%    33.6%    34.5%    0.504
  Online discussion          11.9%    12.5%    12.4%    6.4%     0.748
  Study-related information  25.4%    23.3%    29.0%    25.5%    0.666
When comparing computer and internet study activities between age groups, the 21 - 24 year age
group reported more regular use of mathematics/ statistics packages (10.3%) and for image/ video
processing (19.1%) compared to those aged 16 - 20 years (8.8% and 11.5% respectively) and
those aged 25 years or over (5.1% and 10.2% respectively) (p = 0.001). However, those aged 25
years or over reported more regular participation in online discussions (17.4%) compared to those
aged 16 - 20 years (7.7%) and those aged 21 - 24 years (11.2%) (p = 0.001).
Eight of the chi-square tests comparing participation in study activities between the three faculties are significant (p < 0.001) (Table 4). Respondents studying in the School of Engineering, Computing and Creative Industries reported more regularly using a computer for drawing (23.5%) and image or video processing (15.6%), but reported preparing essays and downloading departmental materials less often than those in the other faculties. Students in the Business School reported regular use of a spreadsheet (18.3%) and of a computer to prepare presentations (51.7%) more often than students based in the other two schools (p < 0.001). Students in Health, Life and Social Studies used the internet more regularly for submitting coursework (48.7%) and participating in online discussions (20.9%).
There were very few significant differences in the use of ICT between the different years of study at undergraduate level, although those in the third year of study reported more regular use of a computer to prepare presentations (46.9%) and of a spreadsheet (22.1%), and second-year students reported more regularly using mathematics or statistics software (14.7%) (all three results statistically significant at the 0.1% level).
Factor Analysis
A factor analysis was carried out on the fifteen e-learning participation scores (as listed in Tables
3 and 4) to determine any underlying factors or themes which make up the overall student participation in educational usage of computer and internet applications. The Kaiser-Meyer-Olkin
measure of sampling adequacy (0.837) indicated that a factor analysis is appropriate, and Bartlett's test of sphericity indicated that the correlation matrix is not equal to the identity matrix (p < 0.001), i.e., that the relationships among the variables are strong enough to make a factor analysis appropriate.
The principal components method was used to extract the three factors, each of which has an eigenvalue greater than 1, followed by a varimax rotation to aid interpretation of the components.
The three factors explained a total of 51.4% of the variability in scores and can be explained as
factors relating to participation in (1) information and communication technologies, (2) general
educational tools, and (3) technical/ specialised computer software packages. The rotated component matrix is presented in Table 5; component loadings greater than 0.5 identify the activities contributing to each factor.
Factor 1 can be interpreted as participation in information and communication technologies in
connection with university studies and has high loadings for contacting lecturers and students,
searching for information in departmental and university web pages, participating in online discussions, and submitting assessments online. Factor 2 represents the usage of general educational
tools such as writing up work, preparing presentations, reading, and downloading digital teaching
materials. Factor 3 represents the use of more technical or specialised software for drawing or
constructing, processing images or videos, using statistical or mathematical software and using a
spreadsheet. These 15 different types of computer and internet tools and applications, which are
commonly used in a higher education environment, can be classified into these three distinct factors.
Student Characteristics Associated with the Three ICT Factors
Logistic regression modelling was used to determine which student characteristics are associated with regularly using (1) information and communication technologies, (2) general educational tools, and (3) specialised computer software packages. The dependent variable is the factor score, converted onto a binary scale: coded as zero (0) if the factor score was less than 0, representing less regular use, and as one (1) if the factor score was greater than or equal to 0, representing regular use of the corresponding factor theme. Since the factor scores
were standardised to have mean 0, this ensures roughly equal numbers of students in each of the
two categories. The three final logistic regression models are presented in Tables 6 - 8.
Table 5. Loadings for each of the three factors retained

Participation in activities in connection with                 Factor
university studies                                      1         2         3
writing essays, reports, other papers                 0.012     0.736     0.152
preparing presentations                               0.039     0.534     0.455
reading digital education materials                   0.133     0.680     0.175
software for drawing/ constructing                    0.027    -0.042     0.754
using spreadsheet                                     0.116     0.153     0.710
statistical or math software                          0.072     0.091     0.680
image/ video software                                 0.101     0.065     0.645
contact lecturers                                     0.599     0.358     0.139
contact students                                      0.647     0.273     0.102
search for information on dept web pages              0.517     0.451     0.013
download materials from WebCT or lecturer's webpage   0.293     0.628    -0.099
find additional teaching materials or info            0.364     0.617    -0.009
submit coursework or assessments                      0.673     0.022     0.016
online discussions connected to studies               0.817    -0.054     0.128
find study-related information                        0.634     0.286     0.113
Student age, gender, and school of study were found to be independently associated with regular
use of a computer for information and communication purposes in connection with university
studies (Table 6). Compared to students aged 16 – 20 years, older students have a significantly increased odds ratio (OR) of reporting regular information and communication usage;
those aged 21 – 24 years have an OR of 2.56 (95% confidence interval is 1.65 to 3.97) (p <
0.001), and those aged 25 years or over have an OR of 2.17 (95% confidence interval is 1.43 to
3.31) (p < 0.001). Males are significantly less likely than females to use a computer for information and communication purposes (OR = 0.40; 95% confidence interval is 0.21 to 0.79) (p =
0.008). Students based in the School of Health, Life and Social Studies have an increased odds
ratio (OR = 1.86; 95% confidence interval is 1.02 to 3.41) of reporting regular information and
communication usage compared to students based in the School of Engineering, Computing and
Creative Industries. An interaction term between school and gender is also included in the model.
Although males generally report less regular usage of information and communication, this is not
the case for male students based in the Business School; these male students are more likely to
report regular usage.
Table 6. Model 1: Factors Associated with Information and Communication Use

Factor                                   N     OR (95% CI)         Wald (df)   p-value
Age Group:                                                         19.8 (2)    < 0.001
  16 – 20 years                          180   1.00
  21 – 24 years                          204   2.56 (1.65, 3.97)   17.5 (1)    < 0.001
  25 years or over                       224   2.17 (1.43, 3.31)   13.0 (1)    < 0.001
Gender:
  Female                                 395   1.00
  Male                                   213   0.40 (0.21, 0.79)   7.0 (1)     0.008
School:                                                            19.9 (2)    < 0.001
  Eng, Comp & Creative Ind               165   1.00
  Health, Life & Social Studies          226   1.86 (1.02, 3.41)   4.0 (1)     0.044
  Business School                        217   0.65 (0.35, 1.21)   1.8 (1)     0.177
School * Gender Interaction:                                       6.5 (2)     0.038
  Eng, Comp & Creative Ind * Female      59    1.00
  Health, Life & Social Studies * Male   41    1.03 (0.39, 2.69)   0.0 (1)     0.958
  Business School * Male                 66    2.76 (1.13, 6.75)   5.0 (1)     0.026
Student gender and school of study were found to be independently associated with reporting
regular usage of a computer for general educational use (Table 7). Males were significantly less
likely to report regular usage compared to females (OR = 0.50; 95% confidence interval is 0.27 to
0.95) (p = 0.035), and students in the Business School were more than twice as likely to report
usage compared to students in Engineering, Computing and Creative Industries (OR = 2.20; 95%
confidence interval is 1.17 to 4.13) (p = 0.014). However, the interaction term between school
and gender shows that this trend does not hold for male students based in the School of Health,
Life and Social Studies, as this group of male students are more likely to report usage for general
educational use.
Table 7. Model 2: Factors Associated with General Educational Use

Factor                                   N     OR (95% CI)         Wald (df)   p-value
Gender:
  Female                                 401   1.00
  Male                                   219   0.50 (0.27, 0.95)   4.4 (1)     0.035
School:                                                            11.3 (2)    0.004
  Eng, Comp & Creative Ind.              169   1.00
  Health, Life & Social Studies          229   1.03 (0.57, 1.87)   0.12 (1)    0.912
  Business School                        222   2.20 (1.17, 4.13)   6.0 (1)     0.014
School * Gender Interaction:                                       6.7 (2)     0.035
  Eng, Comp & Creative Ind * Female      60    1.00
  Health, Life & Social Studies * Male   43    2.57 (1.00, 6.59)   3.9 (1)     0.050
  Business School * Male                 67    0.80 (0.33, 1.92)   0.3 (1)     0.614
Student gender and location of a student’s permanent home were independently associated with
regular use of more specialised software (Table 8). Males were almost twice as likely as females
(p < 0.001) to regularly use specialised software (OR = 1.99, 95% confidence interval is 1.41 to
2.81). Also, students whose permanent home address is outside the UK were more likely to regularly use specialised software (OR = 1.91) compared to students whose permanent home is in Edinburgh (p = 0.003).
Table 8. Model 3: Factors Associated with Specialised Software Use

Factor                    N     OR (95% CI)         Wald (df)   p-value
Permanent Home:                                     15.0 (3)    0.002
  Edinburgh               174   1.00
  Elsewhere in Scotland   192   0.90 (0.59, 1.39)   0.2 (1)     0.635
  Elsewhere in UK         51    1.38 (0.73, 2.61)   1.0 (1)     0.321
  Outside the UK          203   1.91 (1.25, 2.92)   8.9 (1)     0.003
Gender:
  Female                  401   1.00
  Male                    219   1.99 (1.41, 2.81)   15.3 (1)    < 0.001
Discussion
The vast majority of students had unlimited use of a computer (97.5%) and internet access
(96.9%) at home. Although males reported spending longer per week on average than females
using a computer for all purposes excluding the internet, females spent longer using the internet
for educational purposes than males. These findings confirm those of Adamus, Kerres, Getto, and Engelhardt (2009) and of Cuadrado-Garcia, Ruiz-Molina, and Montoro-Pons (2010): although males are more inclined to use computers than females, females tend to prefer communicative activities. Also, Bruestle et al. (2009) argue that e-learning, through its flexible and interactive learning approach, is particularly well suited to women.
Student age was also associated with the length of time spent on a computer; those aged 25 years
or over spent longer using a computer excluding internet use than the younger students, although
those aged 21 – 24 years spent longer using the internet for all purposes and those aged 16 – 20
years spent less time using the internet for educational purposes. These differences in time spent
between genders and age groups may be due to differences in motivation between genders or age
groups, or these differences could be partly due to computer literacy or computing experience of
the different groups of students, i.e., it may be that some students are less computer literate and
therefore spend longer than more literate students to complete a similar task.
Students based in the School of Engineering, Computing and Creative Industries spent significantly longer using a computer, excluding internet use, than students in the other faculties; this may at least partly be explained by the programmes of study within this particular school, and it is not surprising that, for example, computing students spend longer using a computer than students based in other schools. Students in the School of Health, Life and Social Studies reported spending less time using the internet; this school includes nursing students, many of whom will be involved in practical placements or studies and, hence, may have less time available to surf the internet. When
comparing full-time undergraduate students, those in fourth year spent significantly longer using
a computer excluding internet use than those in earlier years; it may be that fourth year students
are more motivated to complete their studies with a good degree pass, and many may be working
on writing up a dissertation project.
The chi-square test results (Tables 3 and 4) indicate many differences between genders and age
groups in regular use of a computer for various study activities; generally males reported participating in many of the study activities more often than females, and those aged 25 years or over
tended to participate more often than the younger age groups. Differences in participation of the
various activities also exist between the different schools, e.g., students in the Business School
more regularly use a computer for preparing essays and presentations, and students in Engineering, Computing and Creative Industries reported using a computer for drawing or image or video
processing more often. Fewer differences exist between the years of undergraduate study; those
students in third year reported using a computer for preparing presentations and using a spreadsheet more often, and those in second year reported using mathematics or statistics software more
often than those in other years of study.
An exploratory factor analysis revealed three underlying factors which represent the overall student participation in computer and internet applications for educational use (Table 5). Although
these three factors account for only 51.4% of the variability in the data, this technique has enabled
the 15 individual study activities to be reduced to 3 distinct factors, each of which contains activities that are related to one another and fall within one type of participation activity. Hrastinski
(2009) suggests that online learner participation moves beyond conceptualising participation as
writing and should include terms such as doing and belonging, and Hrastinski emphasises that
students learn both online, e.g., by computer-mediated communication with peers and teachers,
and offline, e.g., by reading course literature. The three factors determined in this paper confirm
that e-learning participation should not be measured by one type of activity alone and should be
viewed in terms of different constructs or themes of e-learning participation, namely, information
and communication, usage of general educational tools, and use of specialised software.
Logistic regression modelling was used to determine which student characteristics were associated with regular participation in each of the three distinct factors or themes (Tables 6 - 8). Students aged over 20 years, female students, and those based in the School of Health, Life and Social Studies were more likely to use a computer for information and communication. However, the interaction term shows that this general trend does not apply to students based in the Business School: within that school, males are more likely than females to use a computer for information and communication.
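The role of an interaction term of this kind can be illustrated with the standard logistic model, in which the log-odds of participation are a linear function of the predictors. The coefficients below are purely illustrative values chosen to mimic the reversal described above; they are not the fitted estimates from this study.

```python
import math

def logistic_probability(log_odds):
    """Convert log-odds to a probability via the logistic function."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# Illustrative (not fitted) coefficients for a model with a
# gender x school interaction term:
#   log-odds = b0 + b1*female + b2*business_school + b3*(female*business_school)
b0, b1, b2, b3 = -0.5, 0.8, 0.3, -1.2

def log_odds(female, business_school):
    return b0 + b1 * female + b2 * business_school + b3 * female * business_school

# Outside the Business School, females have the higher probability ...
p_female_other = logistic_probability(log_odds(1, 0))
p_male_other = logistic_probability(log_odds(0, 0))

# ... but within the Business School the interaction reverses the trend.
p_female_business = logistic_probability(log_odds(1, 1))
p_male_business = logistic_probability(log_odds(0, 1))

print(p_female_other > p_male_other)        # True
print(p_female_business < p_male_business)  # True
```

A negative interaction coefficient that outweighs the main gender effect is exactly the pattern reported for the Business School: the overall female advantage disappears, and reverses, within that one group.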
Female students and those based in the Business School were found to be more likely to regularly
use a computer for general educational use, except for male students based in the School of
Health, Life and Social Studies, who were more likely than females to report using a computer for
general educational purposes.
Male students or students whose permanent home is outside the UK are more likely to use specialised software than female students or those from the UK. This finding may suggest that more
male than female students choose to study modules with a high technical or mathematical content
and also that students from outside the UK are more likely to study modules which involve the
use of specialised mathematical, statistical or technical software.
The findings presented in this paper are based on a sample of students from a UK university and
will therefore be subject to sampling variability. However, the sample includes respondents from
a wide range of ages, from both genders, and from each of the Schools; hence, the sample is believed to be reasonably representative of the student population.
Although the large number of statistical tests carried out inflates the overall Type I error, a Bonferroni correction has been applied to the significance level when interpreting these tests in order to control the overall Type I error rate.
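As a rough illustration of the correction applied here: a Bonferroni adjustment divides the family-wise significance level by the number of tests performed, so each individual test is judged against a stricter threshold. The function names and example p-values below are illustrative only, not values taken from this study.

```python
def bonferroni_threshold(alpha, m):
    """Per-test significance threshold when m tests share level alpha."""
    return alpha / m

def significant_after_bonferroni(p_values, alpha=0.05):
    """Flag which p-values remain significant after the correction."""
    threshold = bonferroni_threshold(alpha, len(p_values))
    return [p < threshold for p in p_values]

# Fifteen tests at alpha = 0.05 give a per-test cut-off of 0.05/15,
# roughly 0.0033, so only very small p-values survive the correction.
print(round(bonferroni_threshold(0.05, 15), 4))          # 0.0033
print(significant_after_bonferroni([0.001, 0.02, 0.04])) # [True, False, False]
```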
Conclusions
The descriptive statistics and statistical tests presented in this paper confirm that differences exist
between students, mainly according to gender, age group, and School of study, in their participation in the different types of e-learning activities. The use of factor analysis has enabled computer
usage in higher education to be classified into three types of e-learning activity: for the purposes
of information and communication, general educational use, and the use of specialised software.
The logistic regression models build on the initial statistical test results by confirming that usages
of the different types of e-learning activities, as determined using factor analysis, are associated
with faculty of study, gender, and other demographic variables.
It is proposed that all e-learning use in higher education can be incorporated into one of these
three constructs or themes of e-learning activity. Providers and teachers of higher education may
find it useful to consider each of these three themes of e-learning activity on an individual basis
when designing, developing, or managing ICT systems and also when considering the training
needs of both staff and student groups.
It is known that students who participate in online activities most often are more likely to be
higher achievers in their educational studies (Davies & Graff, 2005; Sivapalan & Cregan, 2005).
The findings reported in this paper will aid in targeting resources to encourage those groups of students who currently participate least often in e-learning activities, with the aim of enhancing student engagement and student learning.
Acknowledgement
I wish to thank all students who completed the online questionnaire survey, both for their time
contribution and for providing information relating to their e-learning participation.
References
Adamus, T., Kerres, M., Getto, B., & Engelhardt, N. (2009). Gender and e-tutoring – A concept for gender sensitive e-tutor training programs. 5th European Symposium on Gender and ICT Digital Cultures: Participation – Empowerment – Diversity, March 5-7, 2009 – University of Bremen. Retrieved February 11, 2011, from http://www.informatik.uni-bremen.de/soteg/gict2009/proceedings/GICT2009_Adamus.pdf
Alavi, M., & Dufner, D. (2005). Technology-mediated collaborative learning: A research perspective. In
S.R. Hiltz & R. Goldman (Eds.), Learning together online: Research on asynchronous learning networks (pp. 191-213). Mahwah, NJ: Lawrence Erlbaum.
Arabasz, P., & Baker, M.B. (2003). Evolving campus support models for e-learning courses. ECAR Educause Center for Applied Research. Retrieved October 3, 2010, from
http://net.educause.edu/ir/library/pdf/ecar_so/ers/ERS0303/EKF0303.pdf
Bento, R., & Schuster, C. (2003). Participation: The online challenge. In A. Aggarwal (Ed.), Web-based
education: Learning from experience (pp. 156-164). Hershey, PA: Idea Group Publishing.
Bruestle, P., Haubner, D., Schinzel, B., Holthaus, M., Remmele, B., Schirmir, D., & Reips, U. D. (2009).
Doing e-learning/ doing gender? Examining the relationship between students’ gender concepts and elearning technology. 5th European Symposium on Gender & ICT Digital Cultures: Participation – Empowerment – Diversity, March 5-7, 2009 – University of Bremen. Retrieved February 11, 2011, from
http://www.informatik.uni-bremen.de/soteg/gict2009/proceedings/GICT2009_Bruestle.pdf
Christie, M. F., & Ferdos, F. (2004). The mutual impact of educational and information technologies:
Building a pedagogy of e-learning. Journal of Information Technology Impact, 4(1), 15-26.
Concannon, F., Flynn, A., & Campbell, M. (2005). What campus-based students think about the quality
and benefits of e-learning. British Journal of Educational Technology, 36(3), 501-512.
Cuadrado-Garcia, M., Ruiz-Molina, M., & Montoro-Pons, J. D. (2010). Are there gender differences in e-learning use and assessment? Evidence from an interuniversity online project in Europe. Procedia - Social and Behavioral Sciences, 2, 367-371.
Dalsgaard, C. (2008). Pedagogical quality in e-learning: Designing e-learning from a learning theoretical
approach. E-learning and Education. Retrieved September 24, 2010, from
http://eleed.campussource.de/archive/1/78
Davies, J., & Graff, M.O. (2005). Performance in e-learning: Online participation and student grades. British Journal of Educational Technology, 36(4), 657-663.
Everitt, B. S., & Dunn, G. (2001). Applied multivariate data analysis (2nd ed.). London: Arnold.
Garrison, D. R., & Anderson T. (2003). E-learning in the 21st century: A framework for research and practice. London: Routledge Falmer.
Haythornthwaite, C. (2002). Building social networks via computer networks: Creating and sustaining distributed learning communities. In K. A. Renninger & W. Shumar (Eds.), Building virtual communities: Learning and change in cyberspace (pp. 159-190). Cambridge: Cambridge University Press.
Hosmer, D. W., & Lemeshow, S. (2000). Applied logistic regression (2nd ed.). USA: John Wiley & Sons.
Hrastinski, S. (2008). What is online learner participation? A literature review. Computers & Education,
51, 1755-1765.
Hrastinski, S. (2009). A theory of online learning as online participation. Computers & Education, 52(1),
78-82.
Kahiigi, E. K., Ekenberg, L., Hansson, H., Tusubira, F. F., & Danielson, M. (2008). Exploring the e-learning state of art. The Electronic Journal of e-Learning, 6(2), 77-88. Retrieved February 24, 2010, from www.ejel.org
Laurillard, D. (2005). E-learning in higher education. In P. Ashwin (Ed.), Changing higher education: The
development of learning and teaching. London: Routledge Falmer.
Leidner, D. E., & Jarvenpaa, S. L. (1995). The use of information technology to enhance management
school education: A theoretical View. MIS Quarterly, 19(3), 265-291.
Rovai, A. (2002). Building sense of community at a distance. International Review of Research in Open and Distance Learning, 3(1), 1-16.
Sivapalan, S., & Cregan, P. (2005). Value of online resources for learning by distance education. CAL-laborate, 14, 23-27.
Vonderwell, S., & Zachariah, S. (2005). Factors that influence participation in online learning. Journal of
Research on Technology in Education, 38(2), 213-230.
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge: Cambridge
University Press.
Biography
Kay Penny studied at the University of Aberdeen, Scotland at both
undergraduate and postgraduate level, and was awarded a PhD in 1995.
Since 1999, she has held the post of Lecturer in Statistics at Edinburgh
Napier University in the UK, and has previously held research posts at
both the University of Aberdeen (1995 – 1997) and Edinburgh University (1997 – 1999).
Kay’s research interests include applied statistics, data mining, dealing with missing data, multivariate analysis, survey methods, and e-learning.
Interdisciplinary Journal of E-Learning and Learning Objects
Volume 7, 2011
Why Learners Choose Plagiarism:
A Review of Literature
Deanna Klein
Minot State University, Minot, North Dakota, USA
[email protected]
Abstract
Through a review of literature, this paper presents a theoretical foundation for understanding why
learners may choose to plagiarize both online and on ground. Ethical theories, social desirability,
perceptions of plagiarism, and demographics and academic dishonesty in relation to the reasons
learners choose to plagiarize are presented. Web sites that encourage plagiarism and online tools
that are available to detect plagiarism are discussed.
Keywords: Plagiarism, ethical theory, social desirability, perceptions of plagiarism, demographics, academic dishonesty, online plagiarism detection
Introduction
Many actions performed by learners in higher education could be considered dishonest. Central
Connecticut State University (2004) reports actions such as falsifying data, presenting another’s
words or ideas as one’s own, or cheating on assigned work as being dishonest. Godfrey and
Waugh (n.d.) describe dishonest practices as copying from previous assignments or from books,
inappropriate student collaboration on assignments, inappropriate assistance from relatives, inappropriate reference to crib notes, cheating during exams, and lying to faculty when missing deadlines.
According to McCabe & Trevino (1993, 1997, 2002), learner cheating is becoming a campus
norm, institutions of higher education are lacking an honor code and adequate penalties, and there is little chance that a learner will get “caught” – due in part to a lack of faculty support for academic integrity policies. McCabe supports these statements based on his involvement with many
research studies on academic integrity. McCabe has studied learner self-reported academic dishonesty involving 2,100 learners surveyed in 1999, faculty self-reported academic dishonesty
involving over 1,000 faculty members on 21 campuses in 1999, and the influence of honor codes
on academic dishonesty in 1990, 1995, and 1999 involving over 12,000 learners and 48 campuses
(Center for Academic Integrity, n.d.).

Material published as part of this publication, either on-line or in print, is copyrighted by the Informing Science Institute. Permission to make digital or paper copy of part or all of these works for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage AND that copies 1) bear this notice in full and 2) give the full citation on the first page. It is permissible to abstract these works so long as credit is given. To copy in all other cases or to republish or to post on a server or to redistribute to lists requires specific permission and payment of a fee. Contact [email protected] to request redistribution permission.

Editor: Mark Stansfield

As a result of these studies, McCabe reports one-third of the participating learners admitted to serious test cheating and half admitted to one or more instances of serious cheating on written assignments. One-third of the faculty reported that they were aware of learner cheating in their course in the last two years, but did nothing to address it (Center for Academic Integrity, n.d.). In reference to reported faculty’s cavalier attitude on cheating, McCabe finds, as suggested by learner reporting, that the engagement of cheating is higher in courses where learners know faculty members
are likely to ignore cheating (Center for Academic Integrity, n.d.). McCabe and Trevino (2002) argue that academic honor codes effectively reduce cheating. Surveys administered by McCabe
demonstrate a positive impact of honor codes and learner involvement on academic dishonesty.
Serious test cheating on campuses with honor codes is typically 1/3 to 1/2 lower than the level on
campuses that do not have honor codes. The level of serious cheating on written assignments is
1/4 to 1/3 lower (Center for Academic Integrity, n.d.).
Researchers believe that the act of plagiarism is growing in higher education (Anderson, 2001;
Ashworth, Bannister, & Thorne, 1997; Braumoeller & Gaines, 2001; Bushweller, 1999; Center
for Academic Integrity, 2001; Fain & Bates, 2002; Groark, Oblinger, & Choa, 2001). The advent
of the Internet has made a wealth of information available for learners to research for writing papers (Weinstein & Dobkin, 2002). Some learners are using the availability of information via the
Internet to improve the quality of their work; however, others are using it to simply “cut and
paste” information into the paper. Because there is such a range of information that is relatively
easy to access, learners can easily plagiarize the work of others. McKenzie (1999) reports on
teachers complaining that new technology is making it easier for learners to plagiarize. Under the
“new plagiarism,” as McKenzie refers to plagiarism using technology, learners are now able to
access and save numerous documents with little reading, effort, or originality as opposed to the
huge amount of time it took for learners to move words from an encyclopedia onto paper and change a few words in an effort to avoid plagiarism.
The purpose of this paper is to provide a theoretical foundation for understanding why learners
may choose to plagiarize. The paper is organized as follows: it presents a review of literature that includes ethical theory, social desirability, perceptions of plagiarism, and demographics and academic dishonesty, linking them to the reasons learners choose to plagiarize. The paper
concludes with a discussion on Web sites that encourage plagiarism and online tools that are
available to detect plagiarism.
Ethical Theory
Morality and Ethics
McShane and Von Glinow (2005) describe ethics as the study of moral values involving actions
that may be right or wrong or result in good or bad outcomes. Sullivan and Pecorino (2002) further classify ethics by ethical theory and ethical principle. They suggest that ethical theory takes
the most general point of view when interpreting ethical experience, obligations, or the role of
reason. Ethical principles, on the other hand, are general rules of conduct that emerge or are derived from ethical theory. When discussing ethical principles, one should consider moral intensity
and the ethical sensitivity as well (McShane & Von Glinow, 2005). Moral intensity measures the
degree to which the application of ethical principles is necessary. When the intensity of a moral
issue increases, a higher degree of ethical consideration is necessary. Ethical sensitivity deals
with a personal characteristic. The more ethically sensitive a person is, the better he/she is able to
recognize the presence and importance of an ethical issue.
According to Lyons (2005), many Americans are finding the moral and ethical climate to be troubling. According to Gallup’s annual Mood of the Nation poll, as taken in 2005, 59% of Americans are somewhat or very dissatisfied with the ethical and moral climate of the country (Lyons,
2005). Only 7% of the remaining 40% are very satisfied with the ethical and moral climate of the
country, and 33% are somewhat satisfied. Follow up interviews with some of the respondents
resulted in diverse reasons for the responses. However, it should be noted that one of the concerns
listed was plagiarism in the schools. The Gallup Poll also reports that younger respondents, those aged 18 to 29, were more satisfied with the ethical and moral climate of the country.
Sullivan and Pecorino (2002) consider morality to be a social phenomenon because moral behavior is based on situations in which humans are living with others. For example, inappropriate use
of another’s work is legally wrong in the United States and considered to be morally wrong by
many people in the country. However, China does not have such a law, and plagiarism does not have the same moral effect on Chinese learners studying in their own country.
Cultural Relativism
Cultural values have an important influence on personal ethical behavior (McShane & Von Glinow, 2005). For example, when referring to an institution of higher education, one would think
the cultural environment is fairly stable and is usually understood by learners at some point.
When the structure of the institution is further broken down, the culture within the major is even
more obvious to the learner. However, Ashworth, Freewood, and Macdonald (2003) imply that
this is a time of considerable change in higher education, and the changes contribute to the concern about increased plagiarism.
Changes in higher education include moving from elite status to a mass system. Assessment is shifting from formal proctored exams to a greater emphasis on coursework such
as term papers and projects (Ashworth et al., 2003). The advances in technology have changed
how learners work on assignments and papers (Scanlon & Neumann, 2002). Scanlon and Neumann (2002) recommend research in the area of the new and next generation of learners that have
been exposed to advanced technology from an early age to determine if the advances in technology have an influence on learners. Another change presented by Ashworth et al. (2003) is the increased focus on group-based learning. Given the ambiguity between collective and individual ownership, this method of learning may have an influence on the perception of academic honesty overall.
The culture of plagiarism itself may have been derived from such implementations as the copyright law, or the cultural history of the idea of individual originality, or contemporary cultural
variations (Ashworth et al., 2003). Although the copyright law was originally intended to restrict
competition among publishers, it has since evolved to protect the rights of authors. The idea of
individual originality in regard to plagiarism is to explore creative and unique ideas as authors
rather than repeatedly presenting materials from the existing literary world. The advances of
technology have been a common consideration when studying plagiarism. Safo (1994) sees technology simply as an author’s tool, just as the chainsaw, the laser printer, and earlier technologies, such as a carving device or pencil, are tools. An example of contemporary cultural variations
would be the perception Americans have of Chinese students being rote learners because the educational culture in China relies heavily on memorization (Pennycook, 1996). In America, this is
considered plagiarism if the memorized work is used as the student’s own work.
Utilitarian Theories
Utilitarianism is the idea of choosing the greatest good for the greatest number of people or to
seek the highest degree of satisfaction for those affected by our decisions – all people are considered morally equal (McShane & Von Glinow, 2005). The theory was developed by Bentham for
English lawmakers in order to encourage decision making for the common good rather than their
own social class (Sullivan & Pecorino, 2002). The utilitarian theory is problematic because it focuses on the consequences and not the process for the accomplishment (McShane & Von Glinow,
2005). If the focus is to achieve the greatest results, the ethical consideration in the process may
be overlooked, such as in writing. If earning a grade or a degree is considered the greatest good to
an individual or the family of the individual, perhaps choosing to plagiarize in order to achieve
the grade or degree is the option a utilitarian learner will choose.
Two types of utilitarian theory are act and rule (Sullivan & Pecorino, 2002). The difference is that
the act utilitarian only considers the single act or decision, and the rule utilitarian will look at the
overall or long term consequences of the decision. An example would be a learner contemplating
whether or not to plagiarize on a research paper. If the learner chooses to plagiarize, he or she will
believe the best work was submitted and the consequences will be good. However, the rule utilitarian would consider that the paper will be turned in to the teacher and the plagiarism may be
detected resulting in bad consequences.
Utilitarianism was established as a dominant ethical theory by Jeremy Bentham and was further developed by John Stuart Mill in the mid and late 19th century (Sullivan & Pecorino, 2002). Although the two agree on most of the foundation behind the utilitarian theory, Mill considers moral
implications along with the good consequences and the number of people it would be good for.
Mill would make a decision that might not have the same good or pleasurable results as another but that he understands to be the morally better decision. A learner following Mill’s theory would be less
inclined to plagiarize than one following Bentham’s theory.
Kantian Theories
Immanuel Kant was an influential philosopher of the 1700s whose philosophies have had a profound effect on ethics. Unlike the utilitarian theory, Kant’s theory on thinking and acting does place high value on ethics (McCormack, 2001). Kant assumes that some actions are simply wrong even if they produce the most good for the most people. Furthermore, Kant considers the fact that not all people know what makes them happy and that it is difficult to measure happiness.
Moral law is considered by Kant to be an instinctive sense, something that is part of our consciousness or even deeper than our consciousness (Sullivan & Pecorino, 2002). Moral law is the source of
human freedom and autonomy and is derived from human reason within oneself. Kant sees the
basis for the theory of good as what lies in the intention or the will of a person. In this case, the
decision or act is morally praiseworthy and done out of the sense of what is right rather than what
the consequences are (Sullivan & Pecorino, 2002).
Kant considers it a person’s duty to apply human reason to determine the right or rational thing to
do. Human reasoning is the search for universal laws that are central to human morality, the heart
of the demand for impartiality and justice. A legal system, however, has been structured so that partiality is avoided, just as the universal law is created to remove partial ideas and reasoning and to consider justice and morality.
Social Desirability
Bushweller (1999) reports that many educators consider the erosion of ethics in our self-centered
society as the reason why learners are increasingly cheating. Other educators consider the rise in
learner collaboration as a factor, while still others blame teachers for not caring or not bothering
to deal with cheating. Finally, some blame the parents who don’t hold their children accountable
if they are caught cheating. In reality, there are a number of social factors that could influence
learner cheating in higher education (Bushweller, 1999).
Several social theories may also influence why learners plagiarize. Cross and Brodt (2001) explain social projection theory as viewing people and places based on one’s own beliefs, knowledge, or experience rather than on anything objective about the person or place. In this case,
learners that plagiarize might anticipate that it occurs more often in higher education than it really
does, but they want to believe plagiarism is rampant in order to excuse their own behavior. Social
identity theory and self-categorization could also influence why learners plagiarize. The social
identity theory assumes that people’s perception of the world depends on the perception they have
of themselves (Haslam, Eggins, & Reynolds, 2003). People in this group will define themselves
based on the group they belong to and feel attached to. As an example, Young (2001) reports
in the Chronicle of Higher Education that learners do not see the process of cut and paste without
quoting or referencing as a problem. Today’s learners are accustomed to downloading music,
sharing files, and reading articles for free – making it seem acceptable to submit plagiarized
work. Young (2001) quotes Donald L. McCabe, a professor at Rutgers University, as saying, “A
typical attitude I hear from high school learners is ‘if it’s on the Internet, it’s public knowledge,
and I don’t have to cite it’” (p.1).
If academic integrity is expected in higher education, faculty must play a vital role in socializing
learners on an ethical culture (Lumpur, Jaya, Pinang, & Miri, 1995). Lumpur et al. (1995) describe a learner’s attitude about white collar crime. The learner’s position is that if Donald Trump
can get away with skipping out on billions in loan payments, then why can’t other business people get away with white collar crime or petty theft? This student has based his perception of what
is acceptable on the information he is getting in this world around him (McShane & Von Glinow,
2005). Therefore, Lumpur et al.’s (1995) suggestion of socializing learners into an ethical culture is important; they further suggest that prominent business leaders be included to send a clear message to the learners that unethical behavior is unacceptable and, furthermore, that ethical behavior
will be rewarded, even in this competitive business climate. Including both the instructor and a
business leader creates strong reinforcement on the importance of ethical behavior.
Peirce and Allshouse (1999) attribute cheating to peer pressure, insecurity, or striving for perfection. In a report by the Duke University Academic Integrity Assessment Committee (2001), a
learner was quoted as saying “if you don’t want cheating to go on here at Duke, you should work to have a more cooperative rather than competitive environment” (p. 12). This learner feels the
competitive nature of the institution adds pressure to cheat in order to succeed. Some learners
simply consider plagiarism as socially acceptable (Fain & Bates, 2002). The belief that everyone
or a large majority of peers frequently cheats or facilitates cheating makes the choice to cheat less
difficult (Central Connecticut State University, 2004). However, a 1994 study at University of
North Carolina-Chapel Hill (as cited by Peirce & Allshouse, 1999) revealed that 89% of the
freshmen surveyed indicated that they either disagreed or strongly disagreed with the statement,
“Academic cheating in college courses is an acceptable behavior under certain circumstances.”
Even though these learners do not condone cheating, it still exists.
Bandura (1986, as cited by McCabe and Trevino, 1993) explains social learning theory as human
behavior “learned through the influence of example” (p. 527). If learners are observing their peers
cheating with nonexistent or minimal punishment, the learner is more apt to cheat as well. Under
this theory, the burden to prove dishonesty rests on the professor, even though it is an unpleasant
situation for everyone involved.
Perceptions of Plagiarism
For several reasons, learners have differing perceptions of what plagiarism is. In some cases, the
learners have received ambiguous or conflicting education on plagiarism (Ashworth et al., 1997;
Heron, 2001; Lathrop & Foss, 2000; Peirce & Allshouse, 1999; Weiss & Bader, 2003). In other
cases it is social identity where learners are comparing themselves to others (McShane & Von
Glinow, 2005). If learners perceive “everyone” to be a cheater or perceive faculty not to care
about plagiarism, their perception of plagiarism may be skewed.
Public Perception of Plagiarism
Weiss and Bader (2003) report that the public perception of academic dishonesty in higher education is that it is a serious problem. Because public perception is so poor, they argue it will be difficult to change the perception where mistrust and disinterest are prevalent. Peirce and Allshouse
(1999) suggest that situations such as take-home tests, previous tests kept on file, and online services that practically beg learners to download ready-to-submit papers only exacerbate the public
perceptions on cheating. Another finding by Heberling (2002) indicates the public perception is that cheating takes place online more than in the on-ground classroom, whereas in reality academic dishonesty takes place in both environments.
The results of a three-study analysis by the Educational Testing Service (ETS, 1999) indicate the general
“public perception is that cheating is more prevalent and accepted today;” the respondents to the
surveys see cheating “in many facets of life: politics, business, home, and school,” and “collaborative environments like the Internet are making the definition of cheating even murkier” (p. 1).
ETS also reports that “56% of educators and 31% of the public (including parents, and learners)
say that they hear about cheating incidents. However, only 35% of educators and 41% of the public (including learners and parents) agree that there is a problem with cheating on tests” (p. 2).
The fact that these respondents know plagiarism is taking place but do not consider it a problem makes preventative approaches to the problem in higher education all the more important.
Learner Perception of Plagiarism
Many researchers argue that there is ambiguity on what is perceived as academic dishonesty
among learners (Ashworth et al., 1997; Heron, 2001; Lathrop & Foss, 2000; Peirce & Allshouse,
1999; Weiss & Bader, 2003). Learners have claimed that they don’t know what instructors
consider to be dishonest or cheating. An example of an area of ambiguity might include peer collaboration and knowing to what extent the collaboration is considered inappropriate (Weiss &
Bader, 2003). Lathrop and Foss (2000) agree that there is an inherent conflict between an instructor’s desire to assign collaborative work to learners for preparation for future careers and the need
to teach learners to do their own work. The point of crossing the line to cheating may differ by
each instructor (Williams, 2001).
Even though there is ambiguity among learners on what constitutes academic dishonesty, there is
also a cavalier attitude toward cheating by learners in higher education (ETS, 1999; McCabe,
n.d.; McCabe & Trevino, 1993, 1997). Research consistently reports that learners feel their cheating will not affect others (Weinstein & Dobkin, 2002). Some researchers argue that students understand plagiarism to be a victimless crime; the only person plagiarism cheats is oneself. Studies on self-reported plagiarism indicate that plagiarism is accepted among peers
(Gillespie, 2003), the likelihood of getting caught is slim, and if the learner does get caught, the
punishment will be minimal (Weinstein & Dobkin, 2002). Gibbs (1975, as cited by McCabe and
Trevino, 1993) suggests that learners will not be deterred from misconduct, in this case cheating,
unless they perceive they will get caught and that the punishment is perceived to be severe.
Learners will simply weigh the costs and benefits of plagiarizing based on their personal beliefs (Weinstein & Dobkin, 2002). The potential cost is a function of the probability of getting caught and the perceived punishment. The perceived benefit is based on the learner's perception of how much plagiarism
will improve his or her grade. Under this theory, faculty must establish policy, inform learners of
the policy, and enforce the policy with strict consequences in order to deter plagiarism in the
course.
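The cost–benefit weighing described above can be expressed as a toy expected-value rule. The function name and all numeric values below are illustrative assumptions, not figures from Weinstein and Dobkin's study:

```python
# Toy sketch of the deterrence model summarized above: a learner compares
# the perceived benefit of plagiarizing against the expected cost
# (probability of getting caught times the perceived punishment).
# All names and values here are illustrative assumptions.

def will_plagiarize(p_caught, perceived_punishment, perceived_grade_gain):
    """Plagiarize when the perceived benefit outweighs the expected cost."""
    expected_cost = p_caught * perceived_punishment
    return perceived_grade_gain > expected_cost

# When detection seems unlikely and punishment mild, a small grade gain
# is enough to tip the scale:
print(will_plagiarize(0.05, 10, 2))   # expected cost 0.5 < benefit 2 -> True
# Strict enforcement reverses the decision:
print(will_plagiarize(0.9, 10, 2))    # expected cost 9.0 > benefit 2 -> False
```

Under this reading, raising either the perceived probability of detection or the severity of the punishment lowers the incentive to plagiarize, which is precisely the policy recommendation drawn above.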
Learners who accept plagiarism as the “norm” are the people responsible for the future “civil society and the economy” (Gillespie, 2003, p. 30), and, unfortunately, this cavalier attitude does not end at graduation, but continues with resume fraud, crib notes for the CPR exam, and
altering of other learner scores (ETS, 1999). In 1993, Sims published an article on the relationship
between academic dishonesty and unethical business practices (as cited by Gillespie, 2003). Sixty
people were surveyed and 91% of the respondents admitted they had been dishonest in college
102
Klein
and 98% of the respondents admitted to dishonest work behaviors. The author of this study concludes that his data are consistent with the results of a 2001 study by Nonis and Swift (as cited by Gillespie, 2003), who found that many students view academic dishonesty as acceptable behavior and that learners who are dishonest in college are more likely to carry the dishonesty into the
work place.
For learners to have this cavalier attitude toward dishonesty is of concern because, in most cases,
institutions of higher education have a learner conduct code, and in many cases this code is published right on the course syllabus. What learners fail to understand is that the credibility of their alma mater and the value of their degree are at risk due to this behavior.
Demographics and Academic Dishonesty
Donald McCabe (as cited by ETS, 1999) reports that learners who self-report cheating in higher education are most often business or engineering majors, more often men, and learners with either a low or a high GPA. Cizek (1999) reviewed
research on academic dishonesty and concludes that, although studies over time indicate males
admit to academic dishonesty at a higher rate than women, the proportion of males and females
reporting is about equal. Also, Cizek (1999) concludes that females have admitted to academic
dishonesty as often as males under certain circumstances. In addition to gender, Cizek (1999) and
McCabe and Trevino (1997) have reported data on the impact age may have on the engagement
of academic dishonesty. In both cases, the researchers have found that the engagement of academic dishonesty decreases as age increases and nontraditional learners tend to cheat less than
traditional aged learners.
The Center for Academic Integrity (CAI) summarized a review of literature from studies conducted in 1990, 1993, 1995, 1999, and 2001 (Center for Academic Integrity, 2001). The result of
this review indicates a slow increase in academic dishonesty as learners progress through the
elementary and junior high grades. The peak age appears to be high school grades 11 and 12 with
the trend slowly declining as the learner progresses through college (Center for Academic Integrity, 2001). Additionally, this review by CAI is consistent with the argument presented by
McCabe and Trevino (1997) that engineering and business students report academic dishonesty at
a higher rate than other disciplines. On the other hand, in a study by Nowell and Laufer (1997),
computer science majors reported the highest level of academic dishonesty. CAI concurs with
McCabe and Trevino (1997) in that males report academic dishonesty at a higher rate than females. McCabe and Trevino indicate that the higher incidence of reporting among males may partly explain the rates in the engineering and business fields, as those fields tend to be male dominated. Finally, studies
show that as a GPA decreases, learners report a higher level of academic dishonesty (Center for
Academic Integrity, 2001). Researchers suggest that perhaps learners feel they have less to lose if
they cheat with a lower GPA than those with a higher GPA (Nowell & Laufer, 1997).
On Ground and Online Demographics
Learners plagiarize for many reasons. In some cases learners are overwhelmed by assignments,
procrastinate until they run out of time, or simply have too many responsibilities that contribute to
each of the aforementioned reasons. It is important to understand the demographics of learners
that report plagiarism in order to identify meaningful policies to encourage an environment of
academic integrity.
Clayton (2001) provides demographic statistics on national enrollment in higher education from
The Chronicle Almanac, 2001-2: The Nation. According to Clayton, “The Chronicle Almanac
reports 62.7% of students enrolled in two and four year institutions in fall 2000 under the age of
25 and 37.3% over 25. Analysis of the national full-time and part-time enrollment shows 72% of all students enrolled attend school full-time, while 28% attend part-time, and of the part-time students, 59.2% and 60.3% in two and four year institutions, respectively, are above the age of 25” (2001, p. 3).
Halsne and Gatta (2002) conducted a study comparing the learning characteristics of learners taking online courses with those of learners taking the same course on ground. In their literature review, Halsne
and Gatta (2002) identify learners who use technology for the delivery of courses in higher education as “more mature, more diverse, and display varying degree of readiness” (p. 1). Also, these
learners have “various commitments and cannot relinquish their current jobs for the sake of education” (Halsne & Gatta, 2002, p. 1). Other contributing variables include geographic restrictions
or restrictions by their work schedules. The findings of Halsne and Gatta’s own study show a
higher number of women taking online classes than men. The women responded that they work
outside of the house – primarily full-time. Heberling’s (2002) research found that online learners
are primarily married or divorced with children living at home. Online education provides working adults or adults with families with another delivery option that may be more conducive to
their specific lifestyles.
Kramarae’s study on women learning online (2001) also identifies an average online learner as a
woman, 34 years old, employed part-time, and with previous college credit. Kramarae reports that
many of the women have children and work on their courses either late at night or early in the
morning. The on ground learners were typically male, not married, under 25 years old, and had no
dependent children living at home. The typical on ground learner was enrolled full-time and employed only part-time.
Taken together, The Chronicle Almanac’s statistics, the data reported by Halsne and Gatta (2002), and the results from Kramarae’s (2001) study indicate that a higher percentage of slightly older learners with commitments beyond school take online courses for the flexibility the delivery offers. However, complicating that theory, Hurst (2001) reports that at many campuses up to 75% of the learners enrolled in online courses are also resident learners taking classes on campus.
Encouraging Plagiarism – Selling Papers Online
The issue of plagiarism has been around for many years. According to Standler (2000), the commercial sale of term papers dates back to the late 1960s, when people offered “academic research services.” The added benefit of the Internet is that the text does not even need to be rekeyed; a
simple copy/paste will suffice. Even though it is difficult to determine how common plagiarism is, one thing is certain: there is an abundance of paper mill sites available for learner access.
One anonymous writer told his story about how he makes a living writing papers for students
(Dante, 2010). In the past year, Dante (pseudonym) estimates he has written roughly 5000 pages
of scholarly literature for subjects that vary widely and include subjects such as history, cinema,
labor relations, and ethics. Dante has worked for an online company that generates tens of thousands of dollars a month by writing for cheating students. These students are willing to pay a
handsome fee for a quick turn-around and for Dante to follow their specific instruction. With a
custom paper, plagiarism is much more difficult to detect, and therefore, the student does not typically get caught.
Fain and Bates (2002) list many links to paper mill sites available for learner access. Some of these sites are free and some require a fee of different sorts. There are, however, no guarantees of
quality or validity. Also, if a learner doesn’t see a topic of choice, a paper can be custom written
for an excessive fee. For example, Chris Pap’s Essay Database (free), found at Chuckies College
Resources http://www.chuckiii.com/, offers papers for a variety of prices depending on how fast
the customer needs it. In one case, the prices range from $49.95 per page for an 8 to 23 hour response to $19.99 per page for a seven day or longer turnaround time. It should be noted, however,
that the Web site has the following quote posted at the bottom of the page, “Users of this website
are hereby advised that Student Network Resources, Inc. is the parent company of various sites
and also provides custom services for other websites. Any services provided by these websites
are governed by the Terms and conditions section of this website. Users are further advised that
Student Network Resources strictly prohibits the copying, reproduction, or plagiarism of any materials purchased from any of its websites. Violators may face civil and/or criminal penalties. The
copying or reproduction of this website without expressed written consent of Student Network
Resources is strictly prohibited.” Even though Chuckies College Resources publishes this statement, it is difficult to determine what the consequences would be and how likely violators are to get caught.
Coshe’s Reports found at http://www.cyberessays.com offers a variety of essays grouped in topics such as Art, Politics, English, or History. Prewritten example papers are available or custom
written papers can be purchased. Coshe’s report listed The Doctor found at
http://www.fas.harvard.edu/-dberger/papers/; however, the site is no longer available. A widely known site called SchoolSucks, found at http://www.schoolsucks.com/, promotes itself as the most popular term paper and free homework site. SchoolSucks has over 60,000 papers available for
$9.95 a page.
While Fain and Bates (2002) have primarily listed Internet paper mill sites, Weisbard (2004) has
included a list of sites from university course web pages and department sites that are accessible
via the WWW. Weisbard’s sites include Social Science Paper Publisher found at
http://www.sspp.net/. Weisbard reports that this site offers papers by undergraduates, graduate
learners, and faculty. The site was public until July 2003 when restricted access was noted, and in
November 2003 the site was no longer in operation. However, current access to
http://www.sspp.net is available under the name Study Services Provider Portal. The page now lists many online and distance education opportunities such as Phoenix Online and DeVry University. Access to the Social Science Paper Publisher site was not located.
Other sites listed by Weisbard (2004) are made available by the National Undergraduate Research
Clearinghouse, sponsored by the National Science Foundation and Missouri Western State College; Barbie: The Image of Us All, offered by the University of Virginia; Classics: Women in Antiquity, offered by Tufts University; and Peace Feminism in International Relations, offered by the
University of Denver. Ethics and Law on the Electronic Frontier offered by MIT is said to be “an
archive of exemplary papers written by learners over several years” (Weisbard, 2004, p. 2).
Even though there are numerous opportunities for learners to purchase a research paper, several
states have enacted laws to make it unlawful to sell term papers, essays, reports, and so on
(Standler, 2000). For this reason, most Web sites post a statement that use of their materials for cheating or plagiarism is not permitted. In the case of a state without a specific law regarding the unlawful sale of research papers, misuse can still result in legal action; the argument of
“aiding and abetting fraud in obtaining a college degree” (Standler, 2000; p. 6) has been used.
The 1972 State v. Saksnitt case (as cited by Standler, 2000), prosecuted in New York, is an example of legal action against a company charging $1.90 per page for a term paper from its stock and $3.85 per page for a custom written paper. The argument for legal action was that Saksnitt was aiding and abetting learners in fraudulent behavior in an attempt to earn a diploma or degree.
Electronic Detection Tools
Web sites are available for accessing electronic plagiarism detection tools. Electronic detection
can be used in a number of different ways (Weisbard, 2004). A good search engine such as
google.com or altavista.com can be utilized by submitting a stream of text from the paper, or a commercial plagiarism detection site may be utilized by submitting the entire document for plagiarism analysis.
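The first, manual technique of pasting distinctive snippets into a general search engine can be sketched as follows. The function and its parameter values are illustrative assumptions, not part of any tool named in this paper:

```python
# Sketch of the manual search-engine technique described above: sample an
# exact-phrase query at regular intervals from the suspect paper, so that
# distinctive wording can be checked against the open Web.
# Parameter values are illustrative assumptions.

def phrase_queries(text, words_per_query=8, step=40):
    """Take an 8-word phrase every 40 words and wrap it in quotation
    marks, producing exact-phrase queries for a search engine."""
    words = text.split()
    queries = []
    for start in range(0, len(words), step):
        chunk = words[start:start + words_per_query]
        if len(chunk) == words_per_query:
            queries.append('"' + " ".join(chunk) + '"')
    return queries

sample = " ".join("word%d" % i for i in range(100))
print(phrase_queries(sample))   # three quoted 8-word queries
```

Short exact-phrase queries of this kind are effective because an eight-word run of prose is rarely repeated by chance, which is why an instructor can detect copying with nothing more than a search engine.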
The Center for Intellectual Property (2005) at University of Maryland University College has
listed several detection services Web sites. One example is Copycatch at
http://www.copycatch.freserve.co.uk/. However, access to this site was not available. Essay Verification Engine at http://www.canexus.com/eve/index.shtml offers a plagiarism detection service
for $19.99 with unlimited submissions. The submitted papers are checked against EVE’s database, which includes only Internet sources. Glatt Plagiarism Services at
http://www.plagiarism.com/ offers a computer program that requires the learner to cut and paste
into the program their written assignment. The software will remove every fifth word and the
learner is to replace the word. The theory behind this method is that each individual has his/her
own writing style and, therefore, should be able to supply the missing word. Learners suspected of plagiarizing would need to provide the appropriate words in order to demonstrate the authenticity of the
submitted work. MyDropBox.com at http://www.mydropbox.com uses the Internet and institutional databases to detect plagiarism in a learner’s submitted paper. MyDropbox.com promotes
itself as the most comprehensive search of sources available.
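The fifth-word cloze idea attributed to Glatt above can be sketched in a few lines. This is a minimal illustration of the principle only; the function names and the scoring rule are hypothetical, not Glatt’s actual implementation:

```python
# Minimal sketch of the fifth-word cloze test described above: every fifth
# word is blanked out, and the writer must restore the missing words.
# Function names and the scoring rule are hypothetical illustrations.

def make_cloze(text, n=5):
    """Blank out every nth word; return the cloze text and the answer key."""
    words = text.split()
    answer_key = []
    for i in range(n - 1, len(words), n):
        answer_key.append(words[i])
        words[i] = "_____"
    return " ".join(words), answer_key

def cloze_score(responses, answer_key):
    """Fraction of blanks restored correctly; a genuine author is expected
    to score high on his or her own prose."""
    hits = sum(1 for r, a in zip(responses, answer_key) if r == a)
    return hits / len(answer_key) if answer_key else 1.0

cloze, key = make_cloze("the quick brown fox jumps over the lazy dog again")
print(cloze)   # every fifth word replaced by _____
```

The theory, as the paragraph above notes, is that an author’s familiarity with his or her own wording makes the blanks easy to fill, whereas someone who submitted another person’s prose would struggle.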
Turnitin.com will be the plagiarism detection Web site used for this study. Like MyDropbox.com, Turnitin.com claims to have the most comprehensive database of sources available for
plagiarism detection (Turnitin.com, n.d.). Groark, Oblinger, and Choa (2001) describe turnitin.com as a portal for users registered with the company. Users may be either faculty using the
site to detect plagiarism or learners who want to verify that they have properly used references and
properly cited the references. In either case, the paper will be checked against the turnitin.com
database of electronic documents. The database “contains learner papers, papers posted online,
material from academic Web sites, and documents indexed by major search engines” (p. 3). The
Web site for turnitin.com also includes journals and books in its list of available sources (Turnitin.com, n.d.). Additionally, the papers submitted to turnitin.com will remain in the database
further building the archive and enabling the detection of recycled learner papers; in turn, this should
prevent the recirculation of papers on campus (Groark et al., 2001; Turnitin.com, n.d.).
Turnitin.com has tried to keep the process of submitting a paper simple. The instructor or learner
submits the paper using a “proprietary search engine” (Groark et al., 2001). Turnitin.com responds using an “originality report.” This report indicates the probability of plagiarism in terms
of percentages. In addition to the percentage plagiarized, the source is given for the recipient to
verify the detected plagiarism. This process can be completed in 24 hours, depending on the
length of text and the level of demand (Groark et al., 2001; Turnitin.com, n.d.).
A technical review of plagiarism detection software was performed for the Joint Information Systems Committee at University of Bedfordshire located in Luton (Bull, Collins, Coughlin, and
Sharp, 2001). Bull et al. (2001) use a five star rating scale (excellent, good, acceptable, poor, and
unsatisfactory) to rate a variety of electronic detection sites. Turnitin.com is reportedly good in
developer’s stability, good in speed of response, excellent in clarity of reports, good in accuracy
of reports, and poor in reliability of software/service.
The Center for Intellectual Property (2002) at University of Maryland University College also
developed a Faculty and Administrators Guide to Detection Tools and Methods. Several detection sites, including turnitin.com, were profiled. Some limitations of plagiarism detection Web sites have been identified. For example, only electronic formats are searched; books and other learner works may not be available in electronic format. In addition, subscription literature databases are almost never accessible for electronic plagiarism detection. Detected words can be identified as plagiarism, but thoughts and ideas cannot be; in some cases, and for various reasons,
works marked as “plagiarized” may in fact not be plagiarized (Virtual Academic Integrity Laboratory, 2002, p. 3).
While there are some limitations with plagiarism detection Web sites, Braumoeller and Gaines (2001) found plagiarism software to be successful and to merit use in a wide variety of classroom
situations. Braumoeller and Gaines published a study using learners in a university setting as their
population. The learners submitted an approximately five page paper. The learners in one section
received a written warning regarding plagiarism, and the instructor also gave an oral warning
against plagiarism. The other group of learners was not warned. These authors then submitted the
papers to a program called Essay Verification Engine, or EVE, version 2.1.
After the learners in the study by Braumoeller and Gaines (2001) submitted their papers, but before they were informed the papers would be submitted for plagiarism detection, they were asked
to complete a survey. The focus of this survey was to determine if the written and oral warning
deterred learners from plagiarizing. This may be the reason the authors did not disclose specifics
on the survey, but used the information to compare learners’ estimates of plagiarism rates with the percentage of plagiarism detected by the detection site. Across both sections (plagiarism detection and survey results), 40% plagiarism was detected and reported at the blatant level, and the maximum estimate of casual plagiarism was 80%. Across both sections, the mean values for plagiarism were 60% original, 32% casual, and 8% blatant. This study by Braumoeller and
Gaines was conducted in 2001; it is important to mention that detection sites available today have
increased in sophistication in both technology and access to resources for the database.
Conclusion
Whether learners lack proper education on what accounts for plagiarism, have misperceptions of
what plagiarism is, or perceive a lack of consequences for their actions, learner plagiarism exists,
both online and on ground, and some fear it is becoming an academic norm. There are several
theories on why learners choose plagiarism over academic honesty. If the mood of the nation is
shifting to dissatisfaction with our society’s ethical and moral climate as literature indicates, the
question becomes what moral and ethical behaviors are shaping learner behavior in our country.
To further compound the issue, some argue that the culture of higher education is shifting from an elite status to a mass system, and that assessment is moving toward group-based learning and research paper requirements rather than the traditional proctored tests that more directly measure individual comprehension, a shift away from individual learner accountability.
In addition to overcoming the perception of people in our society compromising their ethical and
moral beliefs, there is now an abundance of electronic paper mill sites available for learners to
access existing research. While these sites print a disclosure indicating copying or reproduction
from their website is strictly prohibited, there is temptation for those who are overwhelmed by
assignments, procrastinate until they run out of time, or have too many responsibilities. However,
there are also electronic detection tools that both learners and faculty can use to instill awareness of plagiarism and to understand how it is detected and what constitutes academic dishonesty. Ultimately, it is important to educate students about what constitutes plagiarism, the repercussions of plagiarism, and
what tools are available to enhance research as opposed to perpetuating plagiarism.
References
Anderson, C. (2001). Online cheating: A new twist to an old problem. Student Affairs E-Journal, 2. Retrieved September 14, 2004 from http://www.studentaffairs.com/ejournal/Winter_2001/plagiarism.htm
Ashworth, P., Bannister, P., & Thorne, P. (1997). Guilty in whose eyes? University students’ perceptions of
cheating and plagiarism in academic work and assessment. Studies in Higher Education, 22(2).
Ashworth, P., Freewood, M. & Macdonald, R. (2003). The student life world and the meanings of plagiarism. Journal of Phenomenological Psychology, 34(2).
Braumoeller, B. & Gaines, B. (2001). Actions do speak louder than words: Deterring plagiarism with the
use of plagiarism-detection software. The American Political Science Association Online. Retrieved
September 14, 2004 from http://www.apsanet.org/PS/dec01/braumoeller.cfm
Bull, J., Collins, C., Coughlin, E., & Sharp, D. (2001). Technical review of plagiarism detection software
report: Prepared for the Joint Information Systems Committee, University of Luton. Retrieved February 4, 2005 from
http://online.northumbria.ac.uk/faculties/art/information_studies/Imri/Jiscpas/docs/jisc/luton.pdf
Bushweller, K. (1999). Generation of cheaters. The American School Board Journal. Retrieved February 4,
2005 from http://www.asbj.com/199904/0499coverstory.html
Center for Academic Integrity. (n.d.). Center for Academic Integrity – Research. Retrieved April 9, 2005
from http://www.academicintegrity.org/cai_research.asp
Center for Academic Integrity. (2001). Academic integrity: A research update. Retrieved April 9, 2005
from http://www.academicintegrity.org/mem_cai_pub.asp
Center for Intellectual Property. (2005). Current issues and resources – Plagiarism. University of Maryland University College. Retrieved January 10, 2005 from
http://www.umuc.edu/distance/odell/cip/links_plagiarism.htm.
Central Connecticut State University. (2004). Academic integrity – Faculty and student surveys. Retrieved
December 22, 2004 from http://www.ccsu.edu/AcademicIntegrity/FacultyandStudentSurveys.htm
Cizek, G. (1999). Cheating on tests: How to do it, detect it, and prevent it. Mahwah, New Jersey: Lawrence
Erlbaum.
Clayton, M. (2001). Who’s online? A look at demographics of online student populations. V Congress of
the Americas: Puebla, Mexico.
Cross, R., & Brodt, S. (2001). How assumptions of consensus undermine decision making. MIT Sloan
Management Review: Massachusetts Institute of Technology.
Dante, E. (2010). The shadow scholar. The Chronicle of Higher Education: The Chronicle Review. Retrieved December 01, 2010 from http://chronicle.com/article/article-content/125329/ll
Duke University Academic Integrity Assessment Committee. (2001). Renewing our shared responsibility:
Promoting academic integrity at Duke University.
Educational Testing Services. (1999). The Educational Testing Service/Ad Council campaign to discourage
academic cheating. Retrieved December 22, 2004 from http://www.glass-castle.com/clients/wwwnocheating-org/adcouncil/research/
Fain, M. & Bates, P. (2002). Cheating 101: Paper mills and you. Retrieved March 4, 2005 from
http://www2.sjsu.edu/ugs/curriculum/cheating.htm
Gillespie, K. (2003). The frequency and perceptions of academic dishonesty among graduate students: a
literature review and critical analysis. University of Wisconsin – Stout.
Godfrey, J. & Waugh, R. (n.d.) Students’ perceptions of cheating in Australian independent schools. Retrieved December 22, 2004 from http://edoz.com.au/educatoinaustralia/archive/features/cheat.html
Groark, M., Oblinger, D., & Choa, M. (2001). Term paper mills, anti-plagiarism tools, and academic integrity. EDUCAUSE Center for Applied Research (ECAR).
Halsne, A. & Gatta, L. (2002). Online versus traditionally-delivered instruction: A descriptive study of
learner characteristics in a community college setting. Retrieved November 28, 2004 from
http://www.westga.edu/~distance/ojdla/spring51/halsne51.html
Haslam, S., Eggins, R., & Reynolds, K. (2003). The ASPIRe model: Actualizing social and personal identity resources to enhance organizational outcomes. Journal of Occupational and Organizational Psychology, 76(1).
Heberling, M. (2002). Maintaining academic integrity in online education. Online Journal of Distance
Learning Administration, V(1). Retrieved September 14, 2004 from
http://www.westga.edu/~distance/ojdla/spring51/heberling51.html
Heron, J. L. (2001). Plagiarism, learning dishonesty or just plain cheating: The context and countermeasures in information systems teaching. Australian Journal of Education Technology, 17(3). Retrieved
December 22, 2004 from http://www.ascilite.org.au/ajet/ajet18/leheron.html
Hurst, F. (2001). The death of distance learning. EDUCAUSE Quarterly, 24(3).
Kramarae, C. (2001). The third shift women learning online. Washington, DC: American Association of
University Women Educational Foundation. Retrieved April 2, 2005 from
http://www.aauw.org/research/3rdshift.cfm
Lathrop, A., & Foss, K. (2000). Student cheating and plagiarism in the Internet era: A wakeup call. Libraries Unlimited Inc., Englewood, CO.
Lumpur, K., Jaya, P., Pinang, P., & Bahru, J. (1995). Cheating among business students: A challenge for
business leaders and educators. Journal of Management Education, 19(2). Retrieved September 14,
2004 from http://mgv.mim.edu.my/Articles/00417/960218.Htm
Lyons, L. (2005). Morality meter: Americans dissatisfied with ethical climate. Gallup Poll Organization.
Retrieved April 29, 2005 from http://www.gallup.com/poll/content/print.aspx?ci=15154
McCabe, D. (n.d.). Center for Academic Integrity: Research. Retrieved December 2, 2004 from
http://www.academicintegrity.org/cai_research.asp
McCabe, D., & Trevino, L. (1993). Academic dishonesty: Honor codes and other contextual influences.
The Journal of Higher Education, 64(5). Retrieved December 22, 2004 from http://www.jstor.org/
McCabe, D., & Trevino, L. (1997). Individual and contextual influences on academic dishonesty: A multicampus investigation. Research in Higher Education, 38.
McCabe, D., & Trevino, L. (2002). Honesty and honor codes. Academe. Retrieved April 5, 2005 from
http://www.aaup.org/publications/Academe/2002/02JF/02jfmcc.htm
McKenzie, J. (1999). The new plagiarism: Seven antidotes to prevent highway robbery in an electronic age.
From Now On: The Educational Journal, 7(8). Retrieved April 7, 2005 from
http://www.fno.org/may98/cov98may.html
McCormack, M. (2001). Immanuel Kant (1724 – 1804) Metaphysics. The Internet Encyclopedia of Philosophy. Retrieved April 28, 2005 from http://www.utm.edu/research/iep/k/kantmeta.htm
McShane, S. & Von Glinow, M. (2005). Organizational behavior: Emerging realities for the workplace
revolution. New York: McGraw Hill.
MyDropbox.com (n.d.). Retrieved April 22, 2005 from http://www.mydropbox.com/technology.html
Nowell, C., & Laufer, D. (1997). Undergraduate student cheating in the fields of business and economics.
The Journal of Economic Education, 28(1).
Peirce, A., & Allshouse, B. (1999). The influence of peer pressure on the reporting of academic dishonesty
in a survey. Academic dishonesty at University of North Carolina: A collaborative study. Retrieved on
December 22, 2004 from
http://www.unc.edu/~bmize/teaching/english_12/academic_dishonesty/peirce&allshouse.html
Pennycook, A. (1996). Borrowing others’ words: Text, ownership, memory, and plagiarism. TESOL Quarterly, 30.
Safo, P. (1994). The place of originality in the information age. Journal of Graphic Design, 12(1).
Scanlon, P. & Neumann, D. (2002). Internet plagiarism among college students. Retrieved from
http://www.rit.edu/~pmsgsl/Ethics%20in%20TechComm/Internet%20Plagiarism%20among%20Colle
ge%20Students.htm
Standler, R. (2000). Plagiarism in colleges in USA. Retrieved February 4, 2005 from
http://www.rbs2.com/plag.htm
Sullivan, S., & Pecorino, P. (2002). Ethics. Retrieved April 28, 2005 from
http://www2.sunysuffolk.edu/pecorip/SCCCWEB/ETEXTS/ETHICS/CONTENTS.htm
Turnitin.com (n.d.). Retrieved January 4, 2005 from www.turnitin.com
Virtual Academic Integrity Laboratory. (2002). Faculty detection tools and methods – Choosing a detection
tool. Retrieved February 4, 2005 from
http://www.umuc.edu/distance/odell/cip/vail/faculty/detection_tools/choosing.html
Weinstein, J. & Dobkin, C. (2002). Plagiarism in U.S. higher education: Estimating Internet plagiarism
rates and testing a means of deterrence. University of California, Berkeley.
Weisbard, P. (2004). Cheating, plagiarism (and other questionable practices), the Internet, and other electronic resources. Retrieved September 14, 2004 from
http://www.library.wisc.edu/libraries/WomensStudies/plag.htm
Weiss, D. H., & Bader, J. B. (2003) Undergraduate ethics at Homewood. Joint Curriculum Committee:
Krieger School of Arts and Sciences and Whiting School of Engineering.
Williams, J. (2001). Flexible assessment for flexible delivery: Online examinations that beat the cheats.
UniServe Science News, 18. Retrieved September 14, 2004 from
http://science.uniserve.edu.au/newsletter/vol18/williams.html
Young, J. (2001). The cat-and-mouse game of plagiarism detection. The Chronicle of Higher Education
Information Technology. Retrieved September 14, 2004 from
http://chronicle.com/prm/weekly/v47/i43/43a02601.htm
Biography
Dr. Deanna Klein is an associate professor in the College of Business,
Department of Business Information Technology. In addition to teaching in the classroom, Dr. Klein teaches online classes and instructs
continuing education workshops. She is a professional member of the
National Business Education Association (NBEA), International Association for Computer Information Systems (IACIS), Delta Kappa
Gamma Chapter and a member and past President of the Minot High
Marketing Advisory Board. Dr. Klein's current research interest is in
the areas of distance learning and Management Information Systems.
Interdisciplinary Journal of E-Learning and Learning Objects
Volume 7, 2011
Design of an Open Source Learning Objects
Authoring Tool – The LO Creator
Alex Koohang, Kevin Floyd, and Cody Stewart
Macon State College, Macon, Georgia, USA
[email protected]; [email protected];
[email protected]
Abstract
This paper presents the design and development of an Open Source Learning Objects Authoring
tool – the LO Creator. The LO Creator has two unique elements – simplicity of design and a free
style pedagogical design environment. The simplicity element may encourage the LO designer to
include appropriate user interface elements in the design process of learning objects. A free style
pedagogical design environment gives the LO designers the flexibility to design creative LOs using learning theories and principles appropriate for a chosen audience. Furthermore, the paper
discusses a systematic and methodical approach in designing and creating sound learning objects
using the LO Creator. Recommendations for further research are made.
Keywords: Open Source Software, learning objects authoring tool, LO Creator, user interface
design, learning theories/principles
Introduction
Authoring tools that allow creation of learning objects are emerging at a slow pace. There are
authoring tools available to the public for free download. Reload, for example, is an authoring
tool that facilitates “creation, sharing and reuse of learning objects using a range of pedagogical
approaches through the use of lesson plans” (See http://www.reload.ac.uk/). Another example is GLO maker, a free authoring tool for creating generative learning objects (See http://www.glomaker.org/). Although these authoring tools are free to download, they are not
Open Source (OS). Conversely, there are several OS authoring tools that are freely available to
anyone who would like to adapt, expand, modify, and/or enhance the software. Several examples
of these OS authoring tools are as follows:
• Xical - Xical is an Open Source player for media presentations, e-learning lectures, tutorials and webinars. It is designed and programmed in Flash and ActionScript and runs on Macromedia’s OS-agnostic Flash platform. (See http://xical.org/)
• eXe - The eXe project is an Open Source authoring application for creating Web educational content. (See http://exelearning.org/wiki)
• Multimedia Learning Object Authoring Tool - The Multimedia Learning Object Authoring Tool is an Open Source tool that “enables content experts to easily combine video, audio, images and texts into one synchronized learning object.” (See http://www.learningtools.arts.ubc.ca/mloat.htm)
• Xerte - Xerte is an Open Source server-based suite of tools that is “aimed at developers of interactive content who will create sophisticated content with some scripting.” (See http://www.nottingham.ac.uk/~cczjrt/Editor/)
Editor: William Housel
Most of these OS authoring tools have a steep learning curve, with little attention paid to simplicity and design flexibility. Therefore, these LO authoring tools may not be suitable for everyone wishing to create learning objects. While the task of developing OS Learning Objects authoring tools is slowly picking up among the communities of practice, two elements must be taken into consideration during the development process – (1) simplicity of design and (2) a free style pedagogical design environment.
Simplicity means that the authoring tool is simple and straightforward. It is uncomplicated and
the learning curve is nominal. Simplicity of the authoring tool may encourage the LO designer to
include appropriate user interface elements in the design process of learning objects.
A free style pedagogical design environment does not impose a set of pedagogical approaches to
be followed by the LO designer; rather it allows the flexibility for creativity of design using learning theories and principles suitable for a chosen audience.
The primary purpose of this paper is to present the birth of an Open Source Learning Objects authoring tool (the LO Creator) that emphasizes simplicity and flexibility for a free style pedagogical design environment for creating learning objects. Consistent with its purpose, this paper is
organized as follows. First, the concept of Open Source is explained. Secondly, a brief explanation of learning objects is presented. Next, the paper presents the Open Source Learning Objects
Authoring Tool – LO Creator from design to implementation. The discussion then turns to presenting a systematic and methodical approach in designing and creating sound learning objects
using the LO Creator. Conclusions and recommendations for future research complete the paper.
What is Open Source?
The GNU project (See http://www.gnu.org/) defines open source (OS) software as “a matter of the users’ freedom to run, copy, distribute, study, change and improve the software.” The GNU project’s freedom statements (0-3) regarding OS software are:
• Freedom 0 - The freedom to run the program, for any purpose.
• Freedom 1 - The freedom to study how the program works, and adapt it to your needs. Access to the source code is a precondition for this.
• Freedom 2 - The freedom to redistribute copies so you can help your neighbor.
• Freedom 3 - The freedom to improve the program, and release your improvements to the public, so that the whole community benefits. Access to the source code is a precondition for this. (For a thorough definition of free software visit http://www.gnu.org/philosophy/free-sw.html)
The Open Source Initiative (See http://www.opensource.org/) is an organization that ensures the
integrity of OS software by advancing a set of ten principles and values inherent to OS Software.
These principles and values are:
• Free redistribution
• Source code must be included and allowed for distribution
• Derived works – allowing for modification and derived works
• Integrity of the author's source code
• No discrimination against persons or groups
• No discrimination against fields of endeavor
• Distribution of license
• License must not be specific to a product
• License must not restrict other software
• License must be technology-neutral (For more information visit http://www.opensource.org/docs/osd)
There are many OS software products that are being used widely by millions of people. Several
examples of successful open software are GNU/Linux operating system (http://www.linux.org),
Apache HTTP server (http://www.apache.org), e-commerce platform osCommerce
(http://shops.oscommerce.com), Mozilla Firefox browser (http://www.mozilla.org), VoIP applications with Asterisk - PBX (http://www.asterisk.org/applications/pbx), and OpenOffice
(http://www.openoffice.org).
The many advantages of OS software are cited in the literature. For example, Coppola and Neelley (2004) delineated the following:
• The OS software evolves more rapidly and organically.
• Users’ needs are rapidly met as the OS software model harnesses collective expertise and contribution.
• New versions are released very often and rely on the community of users and developers to test them, resulting in superior quality software tested on more platforms, and in more environments, than most commercial software.
• The development “team” is often largely volunteers, distributed, many in numbers, and diverse. Often, paid members of the development team will manage the project and organize the work of the volunteers.
• Security is enhanced because the code is exposed to the world.
Coppola and Neelley (2004) believe that OS software offers the following benefits for open learning models:
• Freedom to choose
• Increased user access
• Increased user control
• Promoting the formation of a global community - communities of practice
• Advancing quality of teaching and learning
• Enhancing innovation in teaching and learning.
What are Learning Objects?
A learning object is referred to as a small, independent chunk of information that is self-contained,
interactive, and reusable. It is based on a clear instructional strategy (Wisconsin Online Resource
Center, n.d.).
IEEE’s Learning Technology Standards Committee (IEEE, 2002) stated that learning objects are “any entity, digital or non-digital, that can be used, re-used or referenced during technology-supported learning. Examples of technology-supported learning applications include computer-based training systems, interactive learning environments, and intelligent computer-aided instruction systems, distance learning systems, web-based learning systems and collaborative learning environments.”
Wiley (2000) believed that LOs are “any digital resource that can be reused to support learning.
This definition includes anything that can be delivered across the network on demand, be it large
or small.”
Harman and Koohang (2005) asserted that “a learning object is not merely a chunk of information
packaged to be used in instructional settings. A learning object, therefore, can include anything
that has pedagogical value - digital or non-digital such as a case study, a film, a simulation, an
audio, a video, an animation, a graphic image, a map, a book, or a discussion board so long as the
object can be contextualized by individual learners. The learner must be able to make meaningful
connections between the learning object and his/her experiences or knowledge he/she previously
mastered.” For the purpose of e-learning, a learning object is digital in nature.
The Open Source LO Creator
The OS LO Authoring Tool – the LO Creator – is the initiation of a much-needed task within the learning objects community. The intent is to create a learning objects authoring tool that includes the elements of simplicity and the flexibility of design, in hopes of building a community of practice to continue the endeavor. The OS LO Authoring Tool – the LO Creator – was developed and designed by a team of three volunteers: two IT professors and one IT student.
The Open Source LO Creator application consists of six main pages. They are:
• LO Creator Home Page (index.php)
• My Learning Objects page (myLearningObjs.php)
• My Learning Objects detail view page (learningObj.php)
• Create Learning Object (createLearningObj.php)
• Add and Edit Learning Objects Slide page (newslide.php)
• The Administrator Publish page (publish.php)
LO Creator Home Page (index.php)
Figure 1 shows a screenshot of the LO Creator home page. The function of the LO Creator home page is twofold: it allows new users to register (create a new account) by entering a username, e-mail address, and password, and it allows existing users to access their learning objects by entering a valid username and password. When a new user creates an account or an existing user is successfully authenticated, the user is redirected to his/her “My Learning Objects” page as shown in Figure 2.
Figure 1: The LO Creator Home page
Figure 2: My Learning Objects Page
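The register-or-login flow on the home page can be sketched as below. This is an illustrative Python sketch only: the LO Creator itself is written in PHP with a MySQL back end, so the function names, the in-memory `accounts` dictionary, and the salted-hash scheme are all assumptions, not the actual implementation.

```python
import hashlib
import secrets

# Hypothetical in-memory stand-in for the Accounts table described later in
# the paper; names and password handling here are illustrative assumptions.
accounts = {}

def register(username, email, password):
    """Create a new account, as on the LO Creator home page (index.php)."""
    if username in accounts:
        return False  # username is already taken
    salt = secrets.token_hex(8)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    accounts[username] = {"email": email, "salt": salt, "digest": digest}
    return True

def login(username, password):
    """Authenticate an existing user; on success the application would
    redirect to the user's My Learning Objects page (myLearningObjs.php)."""
    acct = accounts.get(username)
    if acct is None:
        return False
    digest = hashlib.sha256((acct["salt"] + password).encode()).hexdigest()
    return digest == acct["digest"]
```

Either path ends on the same page: a successful registration behaves like a successful authentication.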
My Learning Objects page (myLearningObjs.php)
The My Learning Objects page allows new users to create their first learning object as shown in
Figure 2. Existing users who have already created at least one learning object will see a bulleted
list of previously created objects.
My Learning Objects detail view page (learningObj.php)
When a user clicks on one of the existing learning objects, the learning object loads in a view
page with the following options:
• Add Before – Allows the user to add a new slide to the LO before the current slide.
• Add After – Allows the user to add a new slide to the LO after the current slide.
• Edit – Allows the user to edit the current slide. The slide opens in edit mode and the user has the ability to edit the content as well as make style changes to the slide.
• Delete – Allows the user to delete the current slide.
• Publish – Allows the user to submit the LO to the LO Creator Administrator. If the Administrator approves the LO, it is published to the World Wide Web for public access.
Users without existing learning objects or users who would like to create additional learning objects can click the “Create Learning Object” link that loads the Create Learning Object page.
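The Add Before, Add After, and Delete operations above can be sketched by modeling a learning object as an ordered list of slides. This is a minimal Python illustration; the function names are assumptions and not taken from the LO Creator source.

```python
# A learning object is modeled here as an ordered list of slides.

def add_before(slides, index, new_slide):
    """Insert new_slide immediately before the slide at `index`."""
    return slides[:index] + [new_slide] + slides[index:]

def add_after(slides, index, new_slide):
    """Insert new_slide immediately after the slide at `index`."""
    return slides[:index + 1] + [new_slide] + slides[index + 1:]

def delete_slide(slides, index):
    """Remove the slide at `index`."""
    return slides[:index] + slides[index + 1:]
```

For example, adding a slide after the first slide of `["intro", "summary"]` yields `["intro", <new slide>, "summary"]`.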
Create Learning Object (createLearningObj.php)
A new learning object is created by entering a name for the LO and clicking the “Create” button
as illustrated in Figure 3 (left screenshot). Next, content can be added to the object using the
built-in text editing tool.
Figure 3: Creating a New Learning Object
Add and Edit Learning Objects Slide page (newslide.php)
The built-in text editor provides basic text formatting features, list structures, and the ability to insert images and hyperlinks, as well as foreground and background color options. After a learning object is complete, the “Save Slide” option is clicked and the new object is saved to the user’s account with the LO name created in the previous step. The new LO becomes a part of the user’s account and is accessible from the user’s “My Learning Objects” page. At any time the LO author can modify, tweak, or redesign the learning object.
Administrator Publish page (publish.php)
Before a Learning Object can be made accessible to outside users, the object must be approved by
the Learning Objects Creator site administrator. The site administrator approves Learning Objects
by entering an administrative user name and password from the LO Home page. After successful
authentication, the administrator will be presented with a list of learning objects that are pending
approval. The following options are available:
• Approve – Selecting the approve option will mark the LO as approved. This will publish the LO on the server. The LO is now accessible to the public to view. The LO owner will be automatically notified by email; the email will include a link by which the approved and published LO is accessible.
• Decline – When selecting the decline option, the administrator provides a reason for the decline status. This information is automatically sent by email to the LO owner.
It is important to note that a LO may continue to be edited by the owner/author while it is pending
approval by the LO Creator administrator. After a LO is submitted for approval, the status of the
LO will be available to the owner, when they log in, from the My Learning Object detail view
page (learningObj.php). The three status options include:
• Pending Publish – the LO is pending administrator approval.
• Share – the LO has been approved and is accessible to the public. The owner can click on this status to view the URL of the published LO.
• Publish Declined – the LO has been declined by the administrator. The owner can click on this status to review why the LO was declined.
A LO is not accessible to the public until it has been approved by the administrator.
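The approval workflow and the three status values described above amount to a small state machine, sketched below in Python. The class and method names are illustrative assumptions (the actual tool is PHP), and the e-mail notification is reduced to returning the message text.

```python
# The three status values shown to the LO owner on the detail view page.
PENDING = "Pending Publish"
SHARED = "Share"
DECLINED = "Publish Declined"

class LearningObjectStatus:
    """Hypothetical sketch of the LO Creator publish/approve workflow."""

    def __init__(self, name, owner_email):
        self.name = name
        self.owner_email = owner_email
        self.status = None          # not yet submitted for approval
        self.url = None
        self.decline_reason = None

    def submit(self):
        # Submit to the administrator; the owner may keep editing while pending.
        self.status = PENDING

    def approve(self, url):
        # Administrator approval publishes the LO and notifies the owner.
        self.status = SHARED
        self.url = url
        return f"to {self.owner_email}: approved, published at {url}"

    def decline(self, reason):
        # The administrator must give a reason, which is mailed to the owner.
        self.status = DECLINED
        self.decline_reason = reason
        return f"to {self.owner_email}: declined - {reason}"

    def is_public(self):
        # A LO is not accessible to the public until it has been approved.
        return self.status == SHARED
```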
Development Environment
The LO Creator pages are written entirely in XHTML 1.1 and conform to XHTML 1.1 standards. They meet all W3C CSS standards and all three priority levels of the W3C accessibility guidelines. The layout is pure CSS, with no tables or other elements that can cause accessibility problems.
The open source PHP programming language was used. PHP runs on virtually any platform and
is appropriate for Web programming. It can be embedded into XHTML. In addition to being
open source, PHP has many advantages. These are speed, stability, security, and simplicity
(Pushman, 2000). The Structured Query Language (SQL) and Stored Procedures are used to
manage data within the MySQL relational database management system.
MySQL is a fast, robust, and portable relational database management system that works well
with PHP on all of the major operating systems. MySQL is available at no charge under an open
source license (the GPL) (Welling & Thomson, 2009).
The LO Creator database consists of three tables – Accounts, LearningObjects, and Slide. The Accounts table stores the username, password, and e-mail address for each user; the LearningObjects table stores the learning objects associated with each user; and the Slide table keeps track of all slides that make up the learning objects associated with each user account.
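The three-table layout can be sketched as follows. This uses SQLite purely for a self-contained illustration (the actual system uses MySQL with stored procedures), and the column names are assumptions, not the LO Creator's actual DDL.

```python
import sqlite3

# Illustrative schema for the Accounts, LearningObjects, and Slide tables;
# SQLite stands in for MySQL, and column names are assumed for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Accounts (
    username TEXT PRIMARY KEY,
    password TEXT NOT NULL,
    email    TEXT NOT NULL
);
CREATE TABLE LearningObjects (
    lo_id    INTEGER PRIMARY KEY,
    username TEXT NOT NULL REFERENCES Accounts(username),
    name     TEXT NOT NULL,
    status   TEXT
);
CREATE TABLE Slide (
    slide_id INTEGER PRIMARY KEY,
    lo_id    INTEGER NOT NULL REFERENCES LearningObjects(lo_id),
    position INTEGER NOT NULL,
    content  TEXT
);
""")

# One user owning one learning object that contains one slide.
conn.execute("INSERT INTO Accounts VALUES ('ada', 'hash', 'ada@example.com')")
conn.execute(
    "INSERT INTO LearningObjects (username, name) VALUES ('ada', 'Intro to CSS')")
conn.execute(
    "INSERT INTO Slide (lo_id, position, content) VALUES (1, 1, 'Welcome')")
```

The one-to-many chain (account → learning objects → slides) mirrors the description above.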
Because the attempt is to create a community of practice to continue this work, we chose to license the work with Creative Commons. Creative Commons is a not-for-profit organization that is “dedicated to making it easier for people to share and build upon the work of others, consistent with the rules of copyright.” (See http://creativecommons.org/). The license does not allow for commercial use. It allows the work to be modified by others within the community of practice and carries international jurisdiction.
Designing Learning Objects Using the LO Creator
Du Plessis and Koohang (2005) believed that creating a learning object is systematic and methodical. Following a sound structure “facilitates the quantitative evaluation of instructional activity and the ability to pinpoint unintended weakness in design and implementation” (Du Plessis & Koohang, 2005, p. x). The authors’ systematic and methodical approach to creating learning objects included seven phases focusing on the digital environment and its capacity to design, develop, and employ learning objects. The seven phases are conceptualizing, preparing, creating, tagging, storing, managing, and evaluating.
For the purpose of this paper we discuss only the three phases of conceptualizing, preparing, and creating learning objects as they relate to the LO Creator. However, one must note that tagging the learning object with metadata, storing it in a repository, managing it in a learning management system, and constantly evaluating the learning object are all parts of the systematic and methodical approach to creating the learning object.
The most important tasks in the conceptualization phase in designing a LO with the LO Creator
should include the following:
• Determining the audience – their assumed and required skills to negotiate the instructional content
• Evaluating various media types (if any) for inclusion in the learning object
• Being creative and innovative (adapted and modified from Du Plessis & Koohang, 2005)
Furthermore, the notion of simplicity and the flexibility for the free style pedagogical design environment embedded into the LO Creator ought to be noted in this phase.
From conceptualization, the designer moves into the preparation phase. In the preparation phase, the notion of platform independence (reusability, interoperability, and accessibility) becomes a vital issue. Reusability is the ability of the learning object to be used over and over in different instructional contexts. Reusability of a learning object depends upon interoperability and accessibility. Interoperability is the ability of the learning object to function in various environments
regardless of the platform. Accessibility is the ability of the learning object to be accessed by
learners in any location regardless of the learner experience, or the type of platform the learner
uses (Du Plessis & Koohang, 2005).
The notion of platform independence requires following a set of standards. These standards contribute to reusability, interoperability, and accessibility. A few examples of standards are:
• IEEE LO Metadata (LOM) Learning Technology Standards Committee (LTSC) P1484 (See http://www.ieeeltsc.org:8080/Plone)
• Advanced Distributed Learning (ADL) Initiative - Shareable Courseware Object Reference Model (SCORM) (See http://www.adlnet.gov/Pages/Default.aspx)
• The Dublin Core Metadata for Electronic Resources (See http://dublincore.org/)
After the preparation phase, attention turns to the design of the learning object. The three elements in the design phase of a learning object are granularity, instructional design theories/principles, and the user interface/usability.
Granularity
The content for the learning object being created by the LO Creator can be a concept, a
theory, or a view. Regardless, the content must have clear learning goals with clear educational value (Du Plessis & Koohang, 2005; Harman & Koohang, 2005). The success of the
learning object’s reusability depends upon its granularity. Wiley (1999, p.2) believed that
reusability and granularity represent "the two most important properties of learning objects."
Granularity of a learning object conveys the size and decomposability of the learning object.
Webopedia defines granularity as “the extent to which a system contains separate components (like granules). The more components in a system -- or the greater the granularity -- the
more flexible it is.” (See http://www.webopedia.com/TERM/G/granularity.html)
With granularity, objects as units of instruction are aggregated in multiple ways. An object can serve not only as one idea but can also be aggregated with other objects. IEEE (2002) Learning Object Metadata (LOM) states four scales for the aggregation level describing the functional granularity of a learning object. The aggregation levels are:
1. The smallest level of aggregation (raw media data or fragments)
2. A collection of level 1 learning objects (a lesson)
3. A collection of level 2 learning objects (a course)
4. The largest level of granularity (a set of courses leading to a certificate)
Instructional Design Theories/Principles
The literature has consistently reported that learning theories and principles should be included in the design of e-learning instruction (Egbert & Thomas, 2001; Koohang & Durante, 2003; Pimentel, 1999; Randall, 2001). More specifically, constructivism is deemed as an appropriate learning
theory for e-learning design (Harman & Koohang, 2005; Hung, 2001; Hung & Nichani, 2001;
Koohang, 2009; Koohang & Harman, 2005; Koohang, Riley, Smith, & Schreurs, 2009). Furthermore, Bannan-Ritland, Dabbagh, and Murphy (2000), and Du Plessis and Koohang (2007),
assert that constructivism theory is well-suited for designing learning objects.
Koohang, Riley, Smith, and Schreurs (2009) believe that fundamental design elements inherent to
constructivism learning theory are critical for securing learning in e-learning activities.
As mentioned earlier, the LO Creator’s free style environment does not impose a set of pedagogical approaches and allows the flexibility for creativity of design suitable for a chosen audience. We believe that, by taking advantage of this flexibility, the designer of the learning object can carefully embed these elements to secure learning when creating learning objects with the LO Creator.
These fundamental elements listed by Koohang et al. (2009) are as follows:
• Learners should be presented with a real-world situation
• Learners should be encouraged to develop their own goals and objectives in solving problems
• Learners should be allowed to do exploration
• Learners should have control of their own learning
• Learners should be asked to include and apply their own previous experience and knowledge in solving problems
• Learners should be encouraged to go beyond their own discipline - interrelatedness and interdisciplinary learning should be promoted
• Learners should be asked to reflect on what they have learned
• Learners should be required to give justification for their answers and go beyond what they have learned
• Learners should be required to go beyond what they have learned; scaffolding should be encouraged.
The learning object should also be designed so that it will go beyond the individual learners and
move into the collaborative environments where learners continue to construct knowledge
through collaboration/cooperation, presentation of multiple perspectives, representations of content/idea/concept, and social negotiation (Du Plessis & Koohang, 2007; Koohang et al., 2009).
User Interface/Usability
Koohang and Du Plessis (2004, p. 43) state, “All instruction occurs in some medium or an ensemble of media, ranging from mediation by air itself in direct face-to-face instruction, to instruction
via the Internet with mediation by digital technologies. The moment the learner has to manipulate
tools, equipment, or a system, usability is an essential issue. The usability properties are essential
for e-learning instructional design process and subsequently instruction and learning to be conducted effectively.”
Using the LO Creator, one must give attention to the properties listed below that are inherent to user interface design.
• Adequacy/Task Match – The content/information matches the topic of the learning object.
• Consistency – Consistency of appearance, user interface, and functional operation is present in all slides.
• Control – Learners are in control of their learning.
• Direction – Direction is provided (when needed).
• Feedback – Feedback is provided.
• Load time – Slides load quickly.
• Navigability – Users can easily navigate (where to go) throughout the learning object.
• Readability – The content/information is uncluttered and readable. This includes link visibility, high color contrast, and appropriate font type/size.
• Recognition – The key points are quickly recognized by the learner.
• Right-to-the-point information – The content/information is brief, short, and right to the point.
• Simplicity – The content/information is simple to understand. It is uncomplicated and straightforward.
• Visual Presentation – Visual presentation elements such as text boldfacing, italicizing, and underlining are used when necessary.
• Well-organized – The content/information is structured and well-organized. (Adapted and modified from Koohang & Du Plessis, 2004)
Conclusion
The primary purpose of this paper was to present the development of an Open Source Learning Objects authoring tool (the LO Creator), giving attention to two imperative elements – simplicity of design and a free style pedagogical design environment. The element of simplicity may encourage designers to comfortably include appropriate user interface elements in the design process of learning objects. In addition, the free style pedagogical design environment gives LO designers the flexibility to be creative in using appropriate learning theories and principles for a chosen audience.
The introductory remarks included a brief explanation of both OS software and learning objects. Next, the development, design, and implementation of the Open Source Learning Objects Authoring Tool – the LO Creator – were presented. The discussion then shifted to a systematic and methodical approach to designing and creating sound learning objects using the LO Creator, paying specific attention to the elements of simplicity and free style pedagogical design.
While this work is in its infancy, we believe that the elements of simplicity and the flexibility of the free style pedagogical design environment enable designers/authors to create pedagogically sound LOs. We also believe that this work must be continuously scaled and improved. Wiley (2005) asserted that decentralization improves scalability. Koohang and Harman (2007) further asserted that because decentralization is a unique characteristic of a community of practice (CoP), the CoP can enhance scalability. Therefore, we strongly recommend that the LO communities of practice take on the task of decentralizing this work in order to continue to build upon and strengthen the software.
The source code for the OS LO Authoring Tool – the LO Creator – will be available via http://InformingScience.org to the public and the LO community as a model to use, expand, adapt, modify, and/or enhance the software.
References
Bannan-Ritland, B., Dabbagh, N., & Murphy, K. (2000). Learning object systems as constructivist learning
environments: Related assumptions, theories, and applications. In D. A. Wiley (Ed.), The instructional
use of learning objects: Online version. Retrieved November 14, 2010 from,
http://reusability.org/read/chapters/bannan-ritland.doc
Coppola, C., & Neelley, E. (2004). Open source open learning: Why open source makes sense for education. Retrieved November 14, 2010 from
http://www.rsmart.com/assets/OpenSourceOpensLearningJuly2004.pdf
Du Plessis, J., & Koohang, A. (2005). Learning object: From conceptualization to utilization. Proceedings
of Knowledge Acquisition and Management Conference, 13, 38-46.
Du Plessis, J., & Koohang, A. (2007). Securing learning in learning object. International Journal of Innovation and Learning, 4(2), 197-208.
Egbert, J., & Thomas, M. (2001). The new frontier: A case study in applying instructional design for distance teacher education. Journal of Technology and Teacher Education, 9(3), 391-405.
Harman, K., & Koohang, A. (2005). Discussion board: A learning object. Interdisciplinary Journal of Knowledge and Learning Objects, 1, 67-77. Retrieved from http://www.ijello.org/Volume1/v1p067-077Harman.pdf
Hung, D. (2001). Design principles for web-based learning: Implications for Vygotskian thought. Educational Technology, 41(3), 33-41.
Hung D., & Nichani M. (2001). Constructivism and e-learning: Balancing between the individual and social levels of cognition. Educational Technology, 41(2), 40-44.
121
Design of an Open Source Learning Objects Authoring Tool
IEEE. (2002). IEEE standard for learning object metadata. 1484.12.1-2002.
Koohang, A. (2009). A learner-centered model for blended learning design. International Journal of Innovation and Learning, 6(1), 76-91.
Koohang, A., & Du Plessis, J. (2004). Architecting usability properties in the e-learning instructional design process. International Journal on E-Learning, 3(3), 38-44.
Koohang, A., & Durante, A. (2003). Learners’ perceptions toward the Web-based distance learning activities/assignments portion of an undergraduate hybrid instructional model. Journal of Information Technology Education, 2, 105-113. Retrieved November 14, 2010 from http://www.jite.org/documents/Vol2/v2p105-113-78.pdf
Koohang, A., & Harman, K. (2005). Open source: A metaphor for e-learning. Informing Science Journal,
8, 75-86. http://www.inform.nu/Articles/Vol8/v8p075-086Kooh.pdf
Koohang, A. & Harman, K. (2007). Advancing sustainability of open educational resources. Issues in Informing Science & Information Technology, 4, 535-544.
Koohang, A., Riley, L., Smith, T. & Schreurs, J. (2009). E-learning and constructivism: From theory to
application. Interdisciplinary Journal of E-Learning & Learning Objects, 5(1), 91-109. Retrieved from
http://www.ijello.org/Volume5/IJELLOv5p091-109Koohang655.pdf
Pimentel, J. (1999). Design of net-learning systems based on experiential learning. JALN 3(2). Retrieved
November 14, 2010 from, http://www.aln.org/publications/jaln/v3n2/v3n2_pimentel.asp
Pushman, J. (2000). Why PHP? Web developers journal. Retrieved November 14, 2010 from
http://www.webdevelopersjournal.com/articles/why_php.html
Randall, B. (2001). Effective web design and core communication issues: The missing components in Web-based distance education. Journal of Educational Multimedia and Hypermedia, 4, 357-367.
Welling, L., & Thomson, L. (2009). PHP and MySQL web development. Upper Saddle River, NJ: Addison-Wesley.
Wiley, D. (1999). The Post-LEGO learning object. Retrieved November 14, 2010 from
http://wiley.ed.usu.edu/docs/post-lego/
Wiley, D. (2000). Connecting learning objects to instructional design theory: A definition, a metaphor, and
a taxonomy. In D. A. Wiley (Ed.), The instructional use of learning objects: Online version. Retrieved
November 14, 2010 from, http://reusability.org/read/chapters/wiley.doc
Wiley, D. (2005). Thoughts from the Hewlett Open Ed Grantees meeting. Retrieved November 14, 2010
from http://opencontent.org/blog/archives/192
The Wisconsin Online Resource Center. (n.d.). What are learning objects? Retrieved November 14, 2010
from http://www.wisc-online.com/Info/FIPSE%20-%20What%20is%20a%20Learning%20Object.htm
Koohang, Floyd, & Stewart
Biographies
Alex Koohang is Peyton Anderson Eminent Scholar and Professor of
Information Technology in the School of Information Technology at
Macon State College. He is also the Dean of the School of Information
Technology at Macon State College. Dr. Koohang has been involved in
the development of online education, having initiated and administered
some of the earliest asynchronous learning networks. His current research interests are in the areas of e-learning, learning objects, open
education, and curriculum design.
Kevin Floyd is assistant professor of Information Technology in the
School of Information Technology at Macon State College. He teaches
in the areas of programming & application development, information
security, and IT integration. His current research interests are in the
areas of open source, accessibility, and information security.
Cody Stewart is a senior student in the School of Information Technology at Macon State College. Cody enjoys Web programming and plans to land a position as a programmer with a company. He also plans on attending graduate school. He has designed, developed, and implemented
information systems for various organizations, including Informing Science Institute.
Interdisciplinary Journal of E-Learning and Learning Objects
Volume 7, 2011
Exploring the Influence of Context on Attitudes
toward Web-Based Learning Tools (WBLTs)
and Learning Performance
Robin Kay
University of Ontario Institute of Technology,
Oshawa, Ontario, Canada
[email protected]
Abstract
The purpose of this study was to explore the influence of context on student attitudes toward
Web-Based Learning Tools (WBLTs) and learning performance. Survey data about attitudes and
quasi-experimental data on learning performance were collected from 832 middle and secondary
school students. Five contextual variables were assessed including subject area (math vs.
science), grade level (middle vs. secondary school), lesson plan format (teacher led vs. student
based), collaboration (pairs vs. individual), and technological problems experienced (minor vs.
major). Science-based WBLTs, higher grade levels, teacher-led lessons, and the absence of major
technological problems were associated with significantly higher student attitudes toward the
learning, design, and engagement value of WBLTs. Science-based WBLTs, higher grade levels,
teacher-led lessons, working alone, and the absence of software problems were associated with
significant gains in student learning performance. It is reasonable to conclude that the context in
which a WBLT is taught can significantly influence student attitudes and learning performance.
Keywords: evaluate, strategies, assess, quality, middle school, secondary school, high school,
learning object, web-based learning tools, math, science, collaboration
Introduction
Web-Based Learning Tools (WBLTs), also referred to as learning objects, are operationally defined in this study as “interactive web-based tools that support the learning of specific concepts
by enhancing, amplifying, and/or guiding the cognitive processes of learners.” This definition is
derived from previous attempts to delineate critical features of learning objects (Kay & Knaack,
2008a; 2009). The WBLTs used in this study enabled students to explore, manipulate variables,
apply concepts, or answer questions based on formal presentation of material targeting a relatively narrow concept.
Material published as part of this publication, either on-line or
in print, is copyrighted by the Informing Science Institute.
Permission to make digital or paper copy of part or all of these
works for personal or classroom use is granted without fee
provided that the copies are not made or distributed for profit
or commercial advantage AND that copies 1) bear this notice
in full and 2) give the full citation on the first page. It is permissible to abstract these works so long as credit is given. To
copy in all other cases or to republish or to post on a server or
to redistribute to lists requires specific permission and payment
of a fee. Contact [email protected] to request
redistribution permission.
Editor: Alex Koohang

Research over the past 10 years suggests that, overall, WBLTs have had a positive effect on student attitudes and learning performance (e.g., Akpinar & Bal, 2006; Docherty, Hoy, Topp, & Trinder, 2005; Kay & Knaack, 2007a, 2007b; Lim, Lee, & Richards, 2006; Nurmi & Jaakkola, 2006). A closer look at the data, though, reveals that WBLTs are not always accepted or beneficial. Student attitudes toward WBLTs and impact on learning performance vary according to teaching strategies used and individual characteristics of students. For example, teaching approach (e.g., coaching, scaffolding, preparation) can influence the impact of a WBLT (Kay, Knaack, & Muirhead, 2009; Liu & Bera, 2005; Schoner, Buzza, Harrigan, & Strampel, 2005; Van Merrienboer & Ayres, 2005). In addition, attributes such as gender, age, and computer comfort level affect student attitudes toward WBLTs and learning performance (De Salas & Ellis, 2006; Lim et al., 2006; Kay & Knaack, 2007b, 2008a, 2009).
One area, not yet studied in the domain of WBLTs, is the impact of learning context. According
to Bransford, Brown, & Cocking’s (2000) seminal work, “Learning is influenced in fundamental
ways by the context in which it takes place” (p. 25). Contextual factors such as the degree of collaboration, passive vs. active lesson plan design, and the impact of technology-based problems
during a lesson have not been explored to date. The purpose of the following study was to examine the influence of contextual variables on student attitudes toward WBLTs and learning performance.
Context and WBLTs
While the effect of contextual variables has not been formally evaluated, an informal content
analysis of 183 peer-reviewed articles (Kay, 2009) revealed five potentially important context-based factors including subject area, grade level, lesson plan format, collaboration, and technology-based problems.
Subject area
The majority of WBLT research in middle and secondary school environments has focused on
either mathematics or science (e.g., Akpinar & Bal, 2006; Kay & Knaack, 2007b, 2008; Kong &
Kwok, 2005; Liu & Bera, 2005; Nurmi & Jaakkola, 2006). Only one study has directly looked at
the impact of subject area and WBLTs. Kay & Knaack (2008b) reported that secondary school
students rated science-based WBLTs significantly higher than math-based WBLTs with respect
to learning, design, and engagement. In addition, learning performance was significantly higher
when science-based WBLTs were used. Kay & Knaack (2008b) were at a loss to explain why the
difference occurred but speculated that science-based WBLTs may focus on concepts that are
more concrete, visual, and easier for students to relate to than abstract mathematical ideas that
have less real-world application. It is possible that WBLTs are better suited to concepts that have
a personally meaningful context. Bransford et al.’s (2000) comprehensive analysis of how people
learn suggests that establishing context and meaning for concepts, as opposed to teaching disconnected facts, is critical for student success. More research is needed to confirm whether subject
area influences student attitudes and learning performance with respect to WBLTs.
Grade level
Research on grade level and the use of technology suggests that students in higher grades view
computers as tools for getting work done (e.g., word processing, programming, use of the Internet, and email), whereas students in lower grades tend to see computers as a source of entertainment (e.g., play games and use graphics software) (Colley, 2003; Colley & Comber, 2001; Comber, Colley, Hargreaves, & Dorn, 1997). Regarding WBLTs, Kay & Knaack (2007b, 2008a) reported that older students (Grade 12) were more positive about WBLTs and performed better than
younger students (grades 9 and 10). De Salas & Ellis (2006) added that second and third year university students were far more open to using WBLTs than first year students. WBLTs were originally created for higher education students (Haughey & Muirhead, 2005), and it is possible that
the range of cognitive skills required to use a WBLT (e.g., reading instructions, writing down
results, interpreting and digesting “what-if” scenarios, working independently) may overwhelm
younger students who are expecting to be entertained. More research is needed to determine
whether the effect of grade level on attitudes toward WBLTs and learning performance is robust.
Lesson plan format
Considerable debate exists about the optimum level of instructional guidance necessary for successful learning (Kirschner, Sweller, & Clark, 2006). Some researchers suggest that learning is
best supported when students are given the essential tools and required to construct understanding
themselves (e.g., Bruner, 1986; Steffe & Gale, 1995; Vannatta & Beyerbach, 2000; Vygotsky,
1978). This minimal level of instructional guidance is referred to by a variety of names including
discovery learning, problem-based learning, inquiry learning, and constructivist learning (Kirschner et al., 2006). Other researchers maintain that students need to be provided with direct instruction to learn effectively (e.g., Cronbach & Snow, 1977; Mayer, 2004; Sweller, 1988).
The constructivist approach has grown in popularity over the past 10 years; however, Kirschner et
al. (2006) provide considerable evidence to suggest that direct instruction is significantly more
effective, particularly when students have limited knowledge and understanding of the concepts
to be learned. To date, the role of the student and teacher in using WBLTs has not been examined. While WBLTs were designed for higher education students to work independently (e.g.,
Haughey & Muirhead, 2005), it is unclear whether middle and/or secondary students learn best
with a teacher-led as opposed to a student-based approach. If constructivist theorists are correct
(e.g., Bruner, 1986; Papert, 1980; Vygotsky, 1978), a student-based approach should result in
more effective learning. However, if Kirschner et al.’s (2006) analysis is accurate, a teacher-led
WBLT lesson plan would probably work best.
Collaboration
While access to computers and the Internet in schools has increased markedly in the past ten
years (Compton & Harwood, 2003; Organization for Economic Co-operation and Development
[OECD], 2006), average student-to-computer ratios range from 3:1 in the United States, 4:1 in
Australia, 5:1 in Canada and England, to 10:1 in many European countries (OECD, 2006).
Therefore, students may have to share computers and collaborate when using WBLTs. The benefits of collaborative learning are well documented (e.g., Johnson & Johnson, 1994, 1998; Kagan,
1997; Sharon, 1999); however, five key elements are necessary to ensure success:
positive interdependence, face-to-face interaction, individual accountability, social skills, and
group processing (Johnson & Johnson, 1994). Limited research has been conducted on the effect
of collaboration and the use of WBLTs. One study examining individual vs. paired use of WBLTs reported that collaboration was not significantly related to student attitudes toward WBLTs or
student performance (Kay, Knaack, & Muirhead, 2009). More research is needed to investigate
the role of collaboration and the use of WBLTs.
Technology problems
Numerous challenges have been documented regarding teachers' use of technology in the classroom, including excessive time required to learn new software (Eifler, Greene, & Carroll, 2001; Wepner, Ziomek, & Tao, 2003), fear and anxiety associated with using technology (Bullock, 2004; Doering, Hughes, & Huffman, 2003), limited technological skills (Eifler et al., 2001; Strudler & Wetzel, 1999), and insufficient access to software and hardware (Bartlett, 2002; Russell, Bebell, O'Dwyer, & O'Connor, 2003). Less has been written about the impact of technological challenges on student attitudes and learning during an actual lesson. One study noted that
60% of teachers experienced some level of software or hardware-based problems while WBLTs
were being used (Kay, Knaack, & Petrarca, 2009). No studies have examined the extent to which
hardware and software problems alter the learning experience associated with using WBLTs.
Because the use of WBLTs is partially dependent on computers that work well and software that
runs smoothly, it is important to examine both the frequency and magnitude of difficulties observed.
Purpose
The purpose of the current study was to examine the influence of five context-based variables on
student attitudes toward WBLTs and learning performance. The context-based variables assessed
included subject area (math vs. science), grade level (middle vs. secondary school), lesson plan
format (teacher vs. student based), collaboration (pairs vs. individual), and technology problems
(minor vs. major).
Method
Overview
Six key steps were followed in this study, based on Kay & Knaack’s (2009) extensive review of
WBLT research, to ensure high quality of data collection and analysis. These include using:
1. a large number of WBLTs;
2. a sizeable, diverse sample;
3. reliable and valid measurement tools;
4. a database of carefully pre-selected WBLTs;
5. pre-designed lesson plans created by experienced teachers; and
6. an enhanced measure of learning performance custom designed for each WBLT.
Sample
Students
The sample student population consisted of middle (n=442, 244 females, 198 males) and secondary (n=390, 197 females, 193 males) school students ranging from 11 to 17 years of age (M =
13.3, SD = 0.97). Students were enrolled in grades seven (n=228), eight (n=214), nine (n=348),
or ten (n=43). Seventy-six percent (n=627) of the students reported that their average mark was
70% or more in the subject area where the WBLT was used. In addition, over 75% of the students
agreed or strongly agreed that they were comfortable working with computers in school. Students were sampled from 33 different classes located in a suburban region of nearly 600,000
people.
Teachers
The sample teacher population included 28 teachers (8 males, 20 females) who taught mathematics (n=15) or science (n=13) in grades seven (n=9), eight (n=9), nine (n=7) or ten (n=2). Class
size ranged from 9 to 28 with a mean of 18 students (SD = 5.4). Teaching experience varied from
0.5 to 23 years with a mean of 7.1 (SD = 6.7). Twenty-three out of 28 teachers agreed that they
were good at and liked working with computers.
WBLT selection and lesson plan design
Four teachers were hired and trained for two days on how to select WBLTs and develop effective
lesson plans. WBLTs were chosen based on Kay & Knaack’s (2008b) multi-component model
for assessing WBLTs. Lesson plan design was based on previous research identifying successful
teaching strategies for using WBLTs (Kay, Knaack, & Muirhead, 2009). Key dimensions of a
good quality lesson plan included a guiding set of questions, a structured, well-organized plan for
using a WBLT, and time to consolidate concepts learned. Lessons were 70 minutes in duration and comprised an introduction (10 min), guiding questions and activities involving the use of a WBLT (50 min), and consolidation (10 min).
A database of 122 lesson plans and WBLTs was created over a period of two months (78 for mathematics and 44 for science). Twenty-two unique WBLTs from this database were used by
classroom teachers in this study. See Kay (2011) for links to all WBLTs, corresponding lesson
plans, and pre-post tests.
Procedure
Teachers from two boards of education were informed of the WBLT study by an educational
coordinator. Participation was voluntary and teachers could withdraw at any time. Note that no
teachers or students chose to withdraw from the study. Each participant received a full day of
training on implementing the pre-designed WBLT lesson plans. After the training session, they
were asked to use at least one WBLT in their classroom within the following three months. Email support was available for the duration of the study. All students in a given teacher's class participated in the WBLT lesson chosen by the teacher; however, survey and pre-post test data were collected only from those students with signed parental permission forms.
Research Design and Data Sources
Overview
A mixed methods research design was used in this study to assess the influence of five explanatory variables (subject area, grade level, lesson plan format, collaboration, and implementation
problems) on two response variables (student attitudes and change in learning performance). Student attitude data was collected using an online survey and compared within each context variable. Change in learning performance was attained using a quasi-experimental approach where
pre-test scores were subtracted from post-test scores. Change in learning performance was then
compared within each context variable. Subjects were neither selected nor assigned randomly; they volunteered to participate.
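The change-score computation described above (post-test minus pre-test, then compared within each context variable) can be sketched in a few lines. The record layout and field names (`pre`, `post`, `subject`) are hypothetical illustrations, not taken from the study's actual dataset:

```python
# Minimal sketch of the quasi-experimental change-score analysis:
# learning gain = post-test % minus pre-test %, then compared within
# one context variable. Records and field names are hypothetical.

def change_scores(records):
    """Return each student's learning gain (post minus pre, in %)."""
    return [r["post"] - r["pre"] for r in records]

def group_means(records, context_key):
    """Mean gain per level of one context variable (e.g., subject area)."""
    groups = {}
    for r in records:
        groups.setdefault(r[context_key], []).append(r["post"] - r["pre"])
    return {level: sum(g) / len(g) for level, g in groups.items()}

students = [
    {"pre": 40.0, "post": 70.0, "subject": "science"},
    {"pre": 55.0, "post": 75.0, "subject": "science"},
    {"pre": 50.0, "post": 65.0, "subject": "math"},
    {"pre": 45.0, "post": 55.0, "subject": "math"},
]

print(group_means(students, "subject"))  # → {'science': 25.0, 'math': 12.5}
```

The group means computed this way are what the context-variable comparisons in the Results section operate on.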
Explanatory variables
Five context-based explanatory variables were assessed including subject area, grade level, lesson
plan format, collaboration, and implementation problems. Subject area was either math (n=388)
or science (n=445). Grade level was either middle (n=442) or secondary school (n=391). Lesson
plan format was either teacher-led (n=227) where the instructor displayed the WBLT at the front
of the class using an LCD projector or student-based where students worked in a computer lab
(n=582). Collaboration was assessed by comparing students working in pairs (n=345) with those
working individually (n=176). Finally, technology-based problems were explored by comparing
classes where no/minor problems existed with classes where major problems occurred in two
areas: hardware and software.
Response variables
Two categories of response variables were used in this study: student attitudes toward WBLTs
and learning performance. Student attitudes towards WBLTs were assessed using the WBLT
Evaluation Scale for Students, which consisted of 13 seven-point Likert-scale items asking students about their perceptions of how much they had learned (learning construct - 5 items), the
design of the WBLT (design construct - 4 items), and how much they were engaged when using
the WBLT (engagement construct - 4 items). According to Kay & Knaack (2009), the scale displayed good internal reliability, construct validity, convergent validity, and predictive validity
(see the Appendix for the scale items).
Learning performance was assessed by asking students to complete pre- and post-tests based on
the content of the WBLT used in class. These tests were included with all pre-designed lesson
plans to match the learning goals of the WBLT. The difference between pre- and post-test scores
was used to determine changes in student performance on four possible knowledge areas including remembering, understanding, application, and analysis. One or more of these knowledge
areas, derived from the revised Bloom's Taxonomy (Anderson & Krathwohl, 2001), were assessed based on the learning goals of each specific WBLT lesson.
Research Questions
To investigate contextual differences and the effectiveness of WBLTs, the following questions
were addressed in the data analysis:
1. Does subject area (math vs. science) have a significant impact on student attitudes toward WBLTs and learning performance?
2. Does grade level (middle vs. secondary school) have a significant impact on student attitudes toward WBLTs and learning performance?
3. Does lesson plan format (teacher-led vs. student based) have a significant impact on student attitudes toward WBLTs and learning performance?
4. Does collaboration (pairs vs. individual) have a significant impact on student attitudes
toward WBLTs and learning performance?
5. Do technology-based problems (minor vs. major) have a significant impact on student attitudes toward WBLTs and learning performance?
Results
Subject Area
Attitudes toward WBLTs
A MANOVA was run for subject area (math vs. science) and the three student attitude constructs.
Gender, age, computer comfort level, subject area comfort level, and average grade in subject
area were entered as covariates to address the impact of potentially confounding variables. Hotelling’s T was significant (p<.001), so independent comparisons of WBLT quality constructs were
conducted. Student ratings of learning value (p <.001), design (p <.001), and engagement (p
<.001) for WBLTs were significantly higher in science than in mathematics. The effect sizes for
these differences based on Cohen’s d are considered moderate (Cohen, 1988, 1992) (Table 1).
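The omnibus statistic reported here is Hotelling's T. As a rough illustration of what such a multivariate test computes, a minimal two-sample Hotelling's T² on three attitude constructs is sketched below with synthetic data; note that the study's actual MANOVA also adjusted for covariates, which this sketch omits:

```python
import numpy as np

def hotelling_t2(x, y):
    """Two-sample Hotelling's T^2 test of a multivariate mean difference.

    x, y: (n_i, p) arrays (rows = students, columns = attitude
    constructs). Returns (T^2, equivalent F statistic).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2, p = len(x), len(y), x.shape[1]
    diff = x.mean(axis=0) - y.mean(axis=0)
    # Pooled within-group covariance matrix (each group weighted by df)
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s_pooled, diff)
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    return t2, f_stat

# Synthetic attitude scores loosely shaped like Table 1's means; the
# group sizes and spread are illustrative assumptions only.
rng = np.random.default_rng(0)
math_scores = rng.normal([23.9, 20.7, 18.8], 5.0, size=(60, 3))
science_scores = rng.normal([26.3, 21.9, 20.3], 5.0, size=(60, 3))
t2, f_stat = hotelling_t2(math_scores, science_scores)
```

When the F converted from T² is significant, univariate follow-up comparisons on each construct are warranted, mirroring the procedure described in the paragraph above.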
Learning performance
While a MANOVA is generally considered a better test when assessing multiple response variables, each WBLT had a unique set of learning goals that never targeted all four of the knowledge categories assessed. It would be unrealistic for any one WBLT to cover such a wide range of knowledge. A MANOVA requires data on all knowledge categories to run; therefore, a series of ANCOVAs was used to compare science and mathematics-based WBLTs on the four learning performance measures. Gender, age, computer comfort level, subject area comfort level, and average grade in subject area were entered as covariates to address the impact of potentially confounding variables.
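The logic of each ANCOVA (does group membership explain learning gains beyond the covariates?) can be expressed as a comparison of two least-squares fits: a covariate-only model versus one that adds the group term. This is a sketch under assumed, synthetic data, not the study's actual model specification or software:

```python
import numpy as np

def ancova_f(y, group, covariates):
    """F statistic for adding a binary group term to a covariate-only
    linear model, i.e., the ANCOVA test of a group effect adjusted for
    covariates. y: (n,) gains; group: (n,) 0/1; covariates: (n, k)."""
    y = np.asarray(y, float)
    n = len(y)

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        resid = y - design @ beta
        return resid @ resid

    intercept = np.ones((n, 1))
    reduced = np.hstack([intercept, covariates])  # covariates only
    full = np.hstack([reduced, np.asarray(group, float).reshape(-1, 1)])
    rss_r, rss_f = rss(reduced), rss(full)
    df_full = n - full.shape[1]
    # One numerator df (one added term); residual df from the full model
    return (rss_r - rss_f) / (rss_f / df_full)

# Synthetic data with an assumed group effect of 8 points on gains
rng = np.random.default_rng(1)
n = 200
comfort = rng.normal(0, 1, n)      # covariate (e.g., computer comfort)
group = rng.integers(0, 2, n)      # 0 = math, 1 = science
gains = 10 + 3 * comfort + 8 * group + rng.normal(0, 5, n)
f_stat = ancova_f(gains, group, comfort.reshape(-1, 1))
```

A large F here indicates the group explains variance in gains over and above the covariates, which is the inference each reported ANCOVA makes.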
Total learning performance (all four categories) was significantly higher when science-based WBLTs were used (p <.001). The effect size for this difference was considered moderate according to Cohen (1988, 1992). Learning performance was significantly higher for the analysis knowledge area (p <.005) when science was taught with WBLTs. The effect size for this difference based on Cohen's d is considered large (Cohen, 1988, 1992). Changes in remembering, understanding, and application knowledge areas showed no significant differences between science and mathematics (Table 1).
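Cohen's d is the standardized mean difference used as the effect-size measure throughout the results. The paper does not state which variant was computed; the common pooled-SD form sketched below reproduces the 0.37 reported for the learning construct in Table 1 (math 23.9, SD 6.5, n=388; science 26.3, SD 6.4, n=445), so it appears consistent:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: mean difference divided by the pooled standard
    deviation, with each group's variance weighted by its df."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Learning-construct row of Table 1 (science vs. math)
d = cohens_d(26.3, 6.4, 445, 23.9, 6.5, 388)
print(round(d, 2))  # → 0.37
```

Because the two SDs are nearly equal, the unequal group sizes barely change the pooled value; the equal-n computation gives essentially the same d.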
Table 1: Student Attitudes and Learning Performance as a Function of Subject Area

                                 Math          Science
                                 M (SD)        M (SD)        F           Cohen's d
Attitudes
  Learning                       23.9 (6.5)    26.3 (6.4)    22.3 ***    0.37
  Design                         20.7 (4.6)    21.9 (4.5)    13.6 ***    0.26
  Engagement                     18.8 (5.5)    20.3 (5.4)    13.0 ***    0.28
Learning Performance (% Change)
  Remembering                    20.9 (38.4)   30.7 (44.4)    3.8 ns     ---
  Understanding                  16.5 (31.9)   37.2 (44.7)    3.2 ns     ---
  Application                    14.9 (33.4)   18.8 (25.5)    0.9 ns     ---
  Analysis                       17.1 (40.8)   54.2 (46.6)   11.8 **     0.85
  Total                          18.0 (30.8)   28.8 (27.5)   19.0 **     0.37

* p < .05   ** p < .005   *** p < .001
Grade Level
Attitudes toward WBLTs
A MANOVA was run for grade level (middle vs. secondary students) and the three student attitude constructs. Gender, computer comfort level, subject area comfort level, and average grade in
subject area were entered as covariates. Hotelling’s T was significant (p <.001), so independent
comparisons of WBLT quality constructs were conducted. Student ratings of learning value (p
<.001), design (p<.001), and engagement (p <.001) for WBLTs were significantly higher when
WBLTs were used in secondary as opposed to middle school classrooms. The effect sizes for
these differences based on Cohen’s d are considered small to moderate (Cohen, 1988, 1992) (Table 2).
Learning performance
A series of ANCOVAs were used to compare middle and secondary school use of WBLTs on the
four learning performance measures. Gender, age, computer comfort level, subject area comfort
level, and average grade in subject area were entered as covariates. Total learning performance
(all four categories) was significantly higher when WBLTs were used by secondary students as
opposed to middle school students (p <.005). The effect size for this difference was considered large according to Cohen (1988, 1992). Changes in learning performance were significantly higher for remembering (p <.001), understanding (p <.005), application (p <.001), and analysis (p <.005) knowledge areas when WBLTs were used in secondary school classrooms. The effect sizes for these differences based on Cohen's d are considered moderate to large (Cohen, 1988, 1992) (Table 2).
Table 2: Student Attitudes and Learning Performance as a Function of Grade Level

                                 Middle School   Secondary School
                                 M (SD)          M (SD)          F           Cohen's d
Attitudes
  Learning                       24.0 (5.9)      26.4 (5.9)      34.5 **     0.40
  Design                         20.7 (4.2)      21.1 (4.2)      24.2 **     0.29
  Engagement                     18.7 (5.0)      20.7 (5.0)      29.8 **     0.30
Learning Performance (% Change)
  Remembering                    12.8 (43.5)     38.7 (41.0)     34.3 **     0.61
  Understanding                  24.6 (49.3)     42.3 (40.1)      8.3 *      0.39
  Application                     9.7 (31.5)     27.6 (27.5)     37.6 **     0.61
  Analysis                        2.8 (28.3)     44.8 (48.5)     11.8 *      1.06
  Total                          16.6 (29.5)     31.9 (27.5)     11.4 *      0.54

* p < .005   ** p < .001
Lesson Plan Format
Attitudes toward WBLTs
A MANOVA was run for lesson plan format (teacher led vs. student-based) and the three student
attitude constructs. Gender, age, computer comfort level, subject area comfort level, and average
grade in subject area were entered as covariates. Hotelling’s T was significant (p <.001), so independent comparisons of WBLT quality constructs were conducted. Student ratings of learning
value (p <.001), design (p <.001), and engagement (p <.005) for WBLTs were significantly higher with a teacher-led lesson plan format. The effect sizes for these differences based on Cohen’s d
are considered small to moderate (Cohen, 1988, 1992) (Table 3).
Learning performance and WBLTs
A series of ANCOVAs were used to compare teacher-led and student-based WBLT lesson plan
formats on the four learning performance measures. Gender, age, computer comfort level, subject area comfort level, and average grade in subject area were entered as covariates. Total learning performance (all four categories) was significantly higher when the WBLT lesson plan was
teacher-led as opposed to student-based (p <.001). The effect size for this difference was considered moderate according to Cohen (1988, 1992). Changes in learning performance were significantly higher for the understanding knowledge category (p <.005) when a teacher-led lesson plan format was used. The effect size for this difference, based on Cohen's d, is considered large (Cohen, 1988, 1992). Changes in the remembering, application, and analysis knowledge areas showed no significant differences with respect to lesson plan format (Table 3).
Table 3: Student Attitudes and Learning Performance as a Function of Lesson Plan Format

                                 Student Based   Teacher Led
                                 M (SD)          M (SD)          F           Cohen's d
Attitudes
  Learning                       24.6 (7.1)      26.9 (5.5)      22.4 **     0.38
  Design                         20.9 (5.2)      22.5 (3.5)      22.1 **     0.38
  Engagement                     19.2 (6.3)      20.5 (5.0)      10.5 *      0.25
Learning Performance (% Change)
  Remembering                    22.4 (42.4)     39.1 (44.4)      2.3 ns     ---
  Understanding                  27.3 (44.1)     56.2 (37.8)     20.8 **     0.70
  Application                    14.6 (31.4)     24.9 (29.4)      2.2 ns     ---
  Analysis                        2.6 (27.5)     45.4 (48.5)      2.4 ns     ---
  Total                          20.6 (29.1)     33.3 (29.1)     23.3 **     0.44

* p < .005   ** p < .001
Collaboration
Attitudes toward WBLTs
A MANOVA was run for collaboration (individual vs. working in pairs) and the three student
attitude constructs. Gender, age, computer comfort level, subject area comfort level, and average
grade in subject area were entered as covariates. Hotelling’s T was not significant, so it was concluded that there were no significant differences in student attitudes with respect to individual vs.
pairs’ use of WBLTs (Table 4).
Table 4: Student Attitudes and Learning Performance as a Function of Collaboration

                                 Individual      Pairs
                                 M (SD)          M (SD)          F           Cohen's d
Perceptions of
  Learning                       25.7 (6.0)      25.1 (5.9)       1.3 ns     ---
  Design                         21.6 (4.2)      21.2 (4.1)       1.0 ns     ---
  Engagement                     19.6 (5.3)      19.7 (5.3)       0.1 ns     ---
Learning Performance (% Change)
  Remembering                    49.3 (49.4)     15.9 (37.4)     18.3 **     0.76
  Understanding                  35.7 (41.0)     24.4 (44.5)      2.2 ns     ---
  Application                    20.0 (29.8)     16.3 (30.3)      2.2 ns     ---
  Total                          27.1 (31.7)     20.4 (28.4)      5.1 *      0.24

* p < .05   ** p < .001
Learning performance and WBLTs
A series of ANCOVAs were used to compare individual vs. pair use of WBLTs on the four learning performance measures. Gender, age, computer comfort level, subject area comfort level, and
average grade in subject area were entered as covariates. Total learning performance (all four
categories) was significantly higher when students worked on WBLTs individually vs. in pairs (p <.05). The effect size for this difference was considered small according to Cohen (1988, 1992). Changes in learning performance were significantly higher for the remembering knowledge area (p <.001) when WBLTs were used individually vs. when they were used in pairs. The effect size for this difference, based on Cohen's d, is considered large (Cohen, 1988, 1992). Changes in understanding and application knowledge areas showed no significant differences with respect to collaboration (Table 4).
Problems in Technology
Before analyzing the impact of technological problems, it is important to note that major software
and hardware problems occurred infrequently. Only five percent of all classes that used WBLTs
experienced major technological difficulties.
Hardware
A MANOVA was run for hardware problems (none/minor vs. major) and the three student attitude constructs. Gender, age, computer comfort level, subject area comfort level, and average
grade in subject area were entered as covariates. Hotelling’s T was significant (p <.001), so independent comparisons of WBLT quality constructs were analyzed. Student ratings of learning
value (p <.005) and design (p <.001) for WBLTs were significantly higher when fewer or no
hardware problems were experienced while using WBLTs. The effect sizes for these differences
based on Cohen’s d are considered small to moderate (Cohen, 1988, 1992) (Table 5). There were
no significant differences in student attitudes toward engagement with respect to magnitude of
hardware problems experienced.
An ANCOVA was used to assess learning performance as a function of hardware problems observed. Total learning performance was used because the sample size was too small to assess all
four learning performance categories. Gender, age, computer comfort level, subject area comfort
level, and average grade in subject area were entered as covariates. Total learning performance
was not significantly different for the no/minor vs. major hardware problem conditions.
Table 5: Student Attitudes and Learning Performance as a Function of Hardware Problems

                                 None or Minor   Major
                                 M (SD)          M (SD)          F           Cohen's d
Attitudes
  Learning                       25.6 (6.0)      22.9 (5.9)       7.0 *      0.45
  Design                         21.7 (4.0)      18.8 (4.0)      17.3 **     0.73
  Engagement                     19.8 (5.1)      19.1 (5.1)       0.7 ns     ---
Learning Performance (% Change)
  Total                          27.5 (31.3)     17.1 (18.5)      3.8 ns     ---

* p < .005   ** p < .001
Software
A MANOVA was run for software problems (none/minor vs. major) and the three student attitude
constructs. Gender, age, computer comfort level, subject area comfort level, and average grade in
subject area were entered as covariates. Hotelling’s T was significant (p <.001), so independent
comparisons of WBLT quality constructs were conducted. Student ratings of learning value (p <.05)
and engagement (p<.005) for WBLTs were significantly lower when major software problems
were experienced while using WBLTs. The effect sizes for these differences based on Cohen’s d
are considered small to moderate (Cohen, 1988, 1992) (Table 6). There were no significant differences in student attitudes toward WBLT design between none/minor vs. major software problem categories.
An ANCOVA was used to assess learning performance as a function of software problems observed. Total learning performance was used because the sample size was too small to assess all
four learning performance categories. Gender, age, computer comfort level, subject area comfort
level, and average grade in subject area were entered as covariates. Total learning performance
was significantly lower (p <.05) when major as opposed to no/minor software problems were experienced. The effect size for this difference based on Cohen’s d is considered moderate (Cohen,
1988, 1992) (Table 6).
Table 6: Student Attitudes and Learning Performance as a Function of Software Problems

                                   None or Minor   Major          F         Effect Size
                                   M (SD)          M (SD)                   (Cohen's d)
Perceptions of
  Learning                         25.6 (5.9)      21.8 (5.9)     5.9 *     0.64
  Design                           21.6 (4.0)      20.2 (4.0)     3.2 ns    ---
  Engagement                       19.9 (5.1)      17.1 (5.0)     8.2 **    0.56
Learning Performance (% Change)
  Total                            27.4 (30.4)     14.4 (30.9)    4.6 *     0.42

* p < .05   ** p < .005
Discussion
The purpose of this study was to explore the influence of context on student attitudes toward
WBLTs and learning performance. Five contextual variables were assessed including subject
area, grade level, lesson plan format, collaboration, and technology-based problems. The results
for each of these variables will be discussed in turn.
Subject Area (Math vs. Science)
Students who used science-based WBLTs in this study rated perceived learning, design, and engagement significantly higher than students who used math-based WBLTs. This finding was also
reported by Kay & Knaack (2008b). One possible explanation for this result is that students
simply like science better than mathematics; however, subject area comfort level was accounted
for as a covariate in the analysis. Kay & Knaack (2008b) suggest that meaningful context and
rich multimedia lead to more positive student attitudes toward WBLTs. Student attitudes toward
science-based WBLTs may be enhanced because these WBLTs provide more meaningful context
and richer visuals than the math-based WBLTs. Twelve of thirteen science-based WBLTs used
in this study offered rich context and/or hands-on participation in real-world tasks, whereas eight
of the nine math-based WBLTs focused solely on the concept being taught without a real-world
connection.
One could argue that differences between science- and math-based WBLTs were influenced by the
qualities of the tools used, not subject area. In other words, if math-based WBLTs were presented in a richer context with more enticing visuals, they might be just as popular as their
science-based counterparts. On the other hand, mathematics at the middle and secondary school
level may include concepts that are more routinely de-contextualized than those concepts learned
in science. For example, in this study, sample concepts for mathematics included adding and subtracting integers, probability with coin tossing, graphing co-ordinates, and line of best fit. It is
challenging to find real world meaning for these concepts, particularly for younger students. On
the other hand, science-based concepts in this study included building bridges, creating electrical
circuits, space stations, and lightning. These topics appear to be more grounded in the real
world.
It is worth noting that total learning performance was significantly higher when science-based as
opposed to math-based WBLTs were used, a finding also noted by Kay & Knaack (2008b). This
effect may have been partially influenced by positive attitudes students had toward science-based
WBLTs. An analysis of individual knowledge areas, though, revealed that the only category
showing a significant difference was analysis. Students who used science-based WBLTs improved significantly more when analytic knowledge was tested. It is conceivable that the interactive, constructive qualities of many of the science-based WBLTs, versus the more passive, direct instruction of the math-based WBLTs, may have contributed to significant gains in higher-level knowledge areas. Since this is a first study examining performance using Bloom's revised taxonomy (Anderson & Krathwohl, 2001), more research is needed to establish whether specific knowledge categories such as remembering, understanding, application, and analysis are differentially impacted by subject area.
Grade Level (Middle vs. Secondary School)
Secondary school students in this study rated WBLTs significantly higher than middle school
students with respect to learning, design, and engagement. These findings are consistent with
previous research (e.g., Kay & Knaack, 2007b, 2008a). One possible explanation may stem from
past studies suggesting that older students have different expectations than younger students when
using computers in the classroom (e.g., Colley, 2003; Colley & Comber, 2003; Comber et al.,
1997). Older students may be looking for a straight-forward, easy-to-use tool that will help them
learn. They are not expecting bells and whistles, so the rather conservative, but useful WBLTs
used in this study meet their expectations and are rated highly. On the other hand, younger middle school students might want entertainment and excitement. Most WBLTs in this study offer
good visual supports but are not designed to entertain; therefore middle school students may rate
them lower because they are not meeting expectations.
Grade level also had a significant impact on learning performance in favour of older students.
According to Cohen (1988, 1992), the size of effect for all four learning performance categories
was moderate to large. One possible explanation for the impact of grade on learning performance
might be related to student expectations and the range of cognitive skills required to use a WBLT,
including reading instructions, writing down results, interpreting and digesting “what-if” scenarios, and working independently. Younger students, expecting to be entertained, might be surprised and overwhelmed by how much effort is required to learn with WBLTs. Kirschner et al.
(2006) note younger students may need more direct scaffolding or instruction when learning new
concepts. The challenge of learning independently with a WBLT and acquiring new concepts
may cognitively overload middle school students and undermine learning.
Qualitative research in the form of interviews or focus groups would be helpful in trying to understand the differential expectations of students based on grade level and how these expectations
might influence subsequent learning performance.
Lesson Plan Format (Teacher-led vs. Student-Based)
In this study, student attitudes toward WBLTs and learning performance were significantly higher
for teacher-led as opposed to student-based lesson plans. The findings cannot be attributed to
gender, age, computer comfort, subject comfort, or average grade, because these variables were entered as covariates in the analysis. The results support the notion that students in this study, at an average age of 13 years old, needed more direct instruction and/or support when WBLTs were used (Kirschner et al., 2006). Teacher guidance may be more effective for young adolescents
acquiring a preliminary understanding of basic concepts.
Collaboration (Pairs vs. Individual Use)
There were no significant differences in attitudes toward WBLTs between students who worked alone and those who worked in pairs. However, students who worked individually significantly outperformed students who worked in pairs, particularly for the category of remembering. At first glance, these findings are somewhat counterintuitive, given the extensive research supporting the benefits of collaboration (e.g., Johnson & Johnson, 1994, 1998; Kagan, 1997; Sharon,
1999). The precise details of how students collaborated were not recorded and it is possible that
one or more of the essential conditions for successful co-operative learning (Johnson & Johnson,
1994) were not met. It is also possible that working with peers may have been more of a distraction than a benefit, especially with younger students and insufficient scaffolding. More research
is needed documenting the quality of collaboration and the use of WBLTs as well as the extent to
which students could inhibit each other's progress.
Technology-Based Problems (Hardware and Software)
When major hardware problems were experienced, student attitudes toward learning and design
were significantly lower, but learning performance was unaffected. Major software problems
were associated with significantly lower student attitudes toward WBLTs and reduced learning
performance. These results are not unexpected. Because WBLTs are, to a large extent, hardware
and software dependent, major technological challenges are likely to inhibit learning. An important finding is that hardware and software problems were experienced by only five percent of the
total sample. Even though technology-based problems can have a negative impact on student
perceptions and learning performance, they did not occur often; therefore, the overall effect of this
contextual variable is limited.
Educational Implications
This study is a first attempt to explore the potential impact of context on the use of WBLTs, so
strong recommendations for educators would be premature. However, the results suggest that
there are several factors that teachers should be cognizant of when they plan and implement
WBLT-based lessons.
Science-based WBLTs appear to be more attractive to students than math-based WBLTs. It is
speculated that the rich, meaningful context of science is more appealing than the decontextualized presentation used with mathematics. Teachers may have to supplement math-based WBLTs with added materials that build interest and meaning.
Teacher-led lessons may work better for younger students or for teaching fundamental concepts
because of potential cognitive overload. Students may have difficulty guiding their own learning,
using a new WBLT, and learning a set of new concepts simultaneously. Teacher guidance might
provide the necessary scaffolding and focus needed for students who are not yet able to guide
their own learning.
If students are collaborating while using WBLTs, it would be prudent to ensure that key elements
(face-to-face interaction, individual accountability, social skills, and group processing) outlined
by Johnson & Johnson (1994, 1998) are present. Otherwise students might distract each other,
thereby inhibiting performance.
Finally, major hardware and software problems can inhibit the acceptance of WBLTs, and to a
lesser extent learning performance, but fortunately these kinds of problems are rare. Teachers need
not worry excessively about the impact of technology-based challenges when WBLTs are used.
Caveats & Future Research
Considerable attention was directed toward method in this study. Well-tested, reliable, valid, and
comprehensive measurement tools were used to assess student perceptions and learning performance for a relatively wide range of systematically selected WBLTs. Because the exploration of
context-based variables and the use of WBLTs is new, more research is needed to establish consistency and to address unanswered questions.
It is critical to note that the results of this study are correlational. One cannot assume that any of the contextual variables actually caused student perceptions or learning performance to change, even though factors such as age, computer ability, and subject area knowledge
were entered as covariates. Experimental research is rare in education, so direct cause and effect
relationships are hard to ascertain. However, one promising new direction in future research
would be to collect detailed qualitative data in the form of interviews, focus groups, and/or open
ended questions to help explore why certain differences exist.
Regarding specific contextual variables assessed in the current study, several changes could be
made to gather more useful information. For example, a wider range of subject areas could be
tested to assess the impact of content. As well, WBLTs could be assessed in more detail to determine the impact of a meaningful context on student learning experiences. Regarding lesson
plan format, types of scaffolding could be manipulated to examine how much support is required
for successful learning and whether that required support varies as a function of grade level.
With respect to collaboration, clearer guidelines could be provided to students (see Johnson &
Johnson, 1994, 1998) to determine whether working in small groups can be an effective teaching
approach when using WBLTs.
One final suggestion would be to record think-aloud data while students are actually using the
WBLTs. Essentially, the think-aloud procedure offers a window into the internal talk of a subject
while he/she is learning. Ericsson & Simon (1980), in a detailed critique of the technique, concluded that “verbal reports, elicited with care and interpreted with full understanding of the circumstances under which they were obtained, are a valuable and thoroughly reliable source of information about cognitive processes” (p. 247).
Summary
The current study examined the influence of five context-based variables (subject area, grade level, lesson plan format, collaboration, and technology-based problems) on student attitudes toward
WBLTs and learning performance. Science-based WBLTs, higher grade levels, teacher-led lessons, and the absence of major technology-based problems were associated with significantly
higher attitudes toward WBLT learning, design, and engagement quality. The presence or absence of collaboration had no impact on student attitudes toward WBLTs. Science-based
WBLTs, higher grade levels, teacher-led lessons, working alone, and the absence of software
problems were associated with significantly higher learning performance. It is reasonable to conclude that the context of a WBLT learning environment can have a significant impact on student
attitudes and learning.
References
Akpinar, Y., & Bal, V. (2006). Student tools supported by collaboratively authored tasks: The case of work
learning unit. Journal of Interactive Learning Research, 17(2), 101-119.
Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A
Revision of Bloom’s taxonomy of educational objectives. New York: Longman.
Bartlett, A. (2002). Preparing preservice teachers to implement performance assessment and technology
through electronic portfolios. Action in Teacher Education, 24(1), 90-97.
Bower, M. (2005). Online assessment feedback: Competitive, individualistic, or…preferred form! Journal
of Computers in Mathematics and Science Teaching, 24(2), 121-147.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn. Washington, DC: National Academy Press.
Bruner, J. (1986). Actual minds, possible worlds. Cambridge, MA: Harvard University Press.
Bullock, D. (2004). Moving from theory to practice: An examination of the factors that preservice teachers
encounter as they attempt to gain experience teaching with technology during field placement experiences. Journal of Technology and Teacher Education, 12(2), 211-237.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155-159. doi: 10.1037/0033-2909.112.1.155
Colley, A. (2003). Gender differences in adolescents’ perceptions of the best and worst aspects of computing at school. Computers in Human Behavior, 19(6), 673-682. doi:10.1016/S0747-5632(03)00022-0
Colley, A., & Comber, C. (2003). Age and gender differences in computer use and attitudes among secondary school students: What has changed? Educational Research, 45(2), 155-165. doi:
10.1080/0013188032000103235
Comber, C., Colley, A., Hargreaves, D. J., & Dorn, L. (1997). The effects of age, gender and computer
experience upon computer attitudes. Educational Research, 39(2), 123-133. doi:
10.1080/0013188970390201
Compton, V., & Harwood, C. (2003). Enhancing technological practice: An assessment framework for
technology education in New Zealand. International Journal of Technology and Design Education,
13(1), 1-26.
Cronbach, L. J., & Snow, R. E. (1977). Aptitudes and instructional methods: A handbook for research on
interactions. New York: Irvington.
De Salas, K., & Ellis, L. (2006). The development and implementation of learning objects in a higher education. Interdisciplinary Journal of E-Learning and Learning Objects, 2006 (2), 1-22. Retrieved from
http://www.ijello.org/Volume2/v2p001-022deSalas.pdf
Docherty, C., Hoy, D., Topp, H., & Trinder, K. (2005). E-Learning techniques supporting problem based
learning in clinical simulation. International Journal of Medical Informatics, 74(7-8), 527-533.
doi:10.1016/j.ijmedinf.2005.03.00
Doering, A., Hughes, J. & Huffman, D. (2003). Preservice teachers: Are we thinking with technology?
Journal of Research on Technology in Education, 35(3), 342-361.
Eifler, K. Greene, T., & Carroll, J. (2001). Walking the talk is tough: From a single technology course to
infusion. The Educational Forum, 65(4), 366-375. doi: 10.1080/00131720108984518
Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data. Psychological Review, 87(3), 215-251.
Haughey, M., & Muirhead, B. (2005). Evaluating learning objects for schools. Australasian Journal of
Educational Technology, 21(4), 470-490. Retrieved from
http://www.ascilite.org.au/ajet/ajet21/haughey.html
Johnson, D. W., & Johnson, R. T. (1994). An overview of cooperative learning. In J. S. Thousand, R. A.
Villa, & A. I. Nevin (Eds.), Creativity and collaborative learning: A practical guide to empowering
students and teachers (pp 31-44). Baltimore, MD: Brookes.
Johnson, D. W., & Johnson, R. T. (1998). Learning together and alone. Cooperation, competition, and
individualization (5th ed.). Englewood Cliffs, NJ: Prentice-Hall.
Kagan, S. (1997). Cooperative learning (2nd ed.). San Jose Capistrano, CA: Resources for Teachers.
Kay, R. H. (2009). Understanding factors that influence the effectiveness of learning objects in secondary school classrooms. In L. T. W. Hin & R. Subramaniam (Eds.), Handbook of research on new media literacy at the K-12 level: Issues and challenges (pp. 419-435). Hershey, PA: Information Science
Reference.
Kay, R. H. (2011). Web-based learning tools used in contextual differences study. Retrieved from
http://faculty.uoit.ca/kay/res/context_diff/context.html
Kay, R. H., & Knaack, L. (2007a). Evaluating the learning in learning objects. Open Learning, 22(1), 5-28.
doi: 10.1080/02680510601100135
Kay, R. H., & Knaack, L. (2007b). Evaluating the use of learning objects for secondary school science.
Journal of Computers in Mathematics and Science Teaching, 26(4), 261-289.
Kay, R. H., & Knaack, L. (2008a). An examination of the impact of learning objects in secondary school.
Journal of Computer Assisted Learning, 24(6) 447-461.
Kay, R. H., & Knaack, L. (2008b). A formative analysis of individual differences in the effectiveness of
learning objects in secondary school. Computers & Education, 51(3), 1304-1320. doi:
10.1016/j.compedu.2008.01.001
Kay, R. H., & Knaack, L. (2009). Assessing learning, quality and engagement in learning objects: The learning object evaluation scale for students (LOES-S). Educational Technology Research and Development, 57(2), 147-168.
Kay, R. H., Knaack, L., & Muirhead, B. (2009). A formative analysis of instructional strategies for using
learning objects. Journal of Interactive Learning Research, 20(2), 295-315.
Kay, R. H., Knaack, L., & Petrarca, D. (2009). Exploring teacher perceptions of web-based learning tools.
Interdisciplinary Journal of E-Learning and Learning Objects, 5, 27-50. Available at
http://ijklo.org/Volume5/IJELLOv5p027-050Kay649.pdf
Kong, S. C., & Kwok, L. F. (2005). A cognitive tool for teaching the addition/subtraction of common fractions: A model of affordances. Computers and Education, 45(2), 245-265.
doi:10.1016/j.compedu.2004.12.002
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not
work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquirybased teaching. Educational Psychologist, 41(2), 75-86. doi: 10.1207/s15326985ep4102_1
Lim, C. P., Lee, S. L., & Richards, C. (2006). Developing interactive learning objects for a computing mathematics module. International Journal on E-Learning, 5(2), 221-244.
Liu, M., & Bera, S. (2005). An analysis of cognitive tool use patterns in a hypermedia learning environment. Educational Technology, Research and Development, 53(1), 5-21. doi: 10.1007/BF02504854
Mayer, R. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided
methods of instruction. American Psychologist, 59(1), 14–19. doi: 10.1037/0003-066X.59.1.1
Nurmi, S., & Jaakkola, T. (2006). Effectiveness of learning objects in various instructional settings. Learning, Media, and Technology, 31(3), 233-247. doi: 10.1080/17439880600893283
Organization for Economic Co-operation and Development (OECD). (2006). Education at a glance. Retrieved from
http://mt.educarchile.cl/MT/jjbrunner/archives/libros/OECD_Gl2006/Edu_glance2006.pdf
Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books.
Russell, M., Bebell, D., O’Dwyer, L., & O’Connor, K. (2003). Examining teacher technology use: Implications for preservice and inservice teacher preparation. Journal of Teacher Education, 54(4), 297-310.
doi: 10.1177/0022487103255985
Schoner, V., Buzza, D., Harrigan, K., & Strampel, K. (2005). Learning objects in use: ‘Lite’ assessment for
field studies. Journal of Online Learning and Teaching, 1(1), 1-18. Retrieved from
http://jolt.merlot.org/documents/vol1_no1_schoner_001.pdf
Sharon, S. (Ed.). (1999). Handbook of cooperative learning methods. Westport, CT: Praeger.
Steffe, L., & Gale, J. (Eds.). (1995). Constructivism in education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Strudler, N., & Wetzel, L. (1999). Lessons from exemplary colleges of education: Factors affecting technology integration in preservice programs. Educational Technology Research and Development, 47(4),
63-81. doi: 10.1007/BF02299598
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12, 257–
285.
Vannatta, R. A., & Beyerbach, B. (2000). Facilitating a constructivist vision of technology integration
among education faculty and preservice teachers. Journal of Research on Computing in Education,
33(2), 132-148.
Van Merrienboer, J. J. G., & Ayres, P. (2005). Research on cognitive load theory and its design implications for e-learning. Educational Technology Research and Development, 53(3), 5-13. doi: 10.1007/BF02504793
Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: Harvard University Press.
Wepner, S. B., Ziomek, N., & Tao L. (2003). Three teacher educators’ perspectives about the shifting responsibilities of infusing technology into the curriculum. Action in Teacher Education, 24(4), 53-63.
Appendix – WBLT Evaluation Scale
Learning
1. Working with the learning object helped me learn.
2. The feedback from the learning object helped me learn.
3. The graphics and animations from the learning object helped me learn.
4. The learning object helped teach me a new concept.
5. Overall, the learning object helped me learn.
Design
6. The help features in the learning object were useful.
7. The instructions in the learning object were easy to follow.
8. The learning object was easy to use.
9. The learning object was well organized.
Engagement
10. I liked the overall theme of the learning object.
11. I found the learning object engaging.
12. The learning object made learning fun.
13. I would like to use the learning object again.
All scale items used the following 7-point Likert scale
1 = Strongly Disagree
2 = Disagree
3 = Somewhat Disagree
4 = Neutral
5 = Somewhat Agree
6 = Agree
7 = Strongly Agree
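Construct scores for the scale above appear to be simple item sums, which is consistent with the score ranges reported earlier (learning out of 35 across items 1-5; design out of 28 across items 6-9; engagement out of 28 across items 10-13). A minimal scoring sketch, using one hypothetical student's responses:

```python
# One hypothetical student's ratings for the 13 items (1-7 Likert scale).
responses = [6, 5, 6, 5, 6, 4, 5, 6, 5, 6, 5, 6, 5]

# Item-to-construct mapping (0-based indices into the response list).
CONSTRUCTS = {
    "learning": range(0, 5),     # items 1-5
    "design": range(5, 9),       # items 6-9
    "engagement": range(9, 13),  # items 10-13
}

scores = {name: sum(responses[i] for i in items)
          for name, items in CONSTRUCTS.items()}
print(scores)
```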
Biography
Robin Kay, Ph.D. is an Associate Professor in the Faculty of Education at the University of Ontario Institute of Technology. He has published over 50 articles and chapters in the area of computers in education, presented numerous papers at 15 international conferences, refereed five prominent computer education journals, and taught computers, mathematics, and technology for over 20 years. Current projects
include research on laptop use in teacher education, classroom response systems, web-based learning tools, and factors that influence
how students learn with technology.
Interdisciplinary Journal of E-Learning and Learning Objects
Volume 7, 2011
Characteristics of an Equitable Instructional
Methodology for Courses in Interactive Media
Frank Kurzel
University of South Australia, Adelaide, SA, Australia
[email protected]
Abstract
This paper focuses on an action research study that was conducted to address difficulties with the development of multimedia applications. These difficulties were associated with the programming
(scripting) parts of the development environment that were required to create the interactive elements within them. Initially, a learning environment based on adaptive hypermedia was constructed to provide for students with different backgrounds. Unfortunately, a large amount of the
content that was developed became redundant when the development software changed.
Anecdotally, I was aware of these difficulties, but an analysis of questionnaire data that had been collected at the end of each course offering revealed a Difficulty factor that could be reduced to a value. When we looked at this figure for arts and then computing students, we found that arts students found these elements significantly more difficult than the computing students did. This in itself was expected, but their respective values provided a metric to use in future evaluations.
What followed was a longitudinal study that involved action research to resolve the difference in this metric, the hoped-for result being that students managed the development environment irrespective of their background. This involved presenting the framework for the development in a
more abstract way so that global commands could be planned by the group and then used within
individually created sections.
A project based instructional methodology suited this course and authentic projects were used.
Students were expected to engage in all aspects of the project, including the interactive elements.
We avoided the situation where the arts group member became responsible for the graphic design
alone. Peer review and peer assessment were embedded within the course to ensure that students
maintained their engagement and got meaningful feedback that could be included in their projects.
The instructional methods used resulted in an emphasis on all parts of the project, and a subsequent valuing of all the components required for its completion.
Keywords: game development, instructional methodologies, project based learning, peer assessment, peer review, Flash.
Material published as part of this publication, either on-line or
in print, is copyrighted by the Informing Science Institute.
Permission to make digital or paper copy of part or all of these
works for personal or classroom use is granted without fee
provided that the copies are not made or distributed for profit
or commercial advantage AND that copies 1) bear this notice
in full and 2) give the full citation on the first page. It is permissible to abstract these works so long as credit is given. To
copy in all other cases or to republish or to post on a server or
to redistribute to lists requires specific permission and payment
of a fee. Contact [email protected] to request
redistribution permission.
Introduction
It is the goal of tertiary institutions to provide an equal footing for all students irrespective of their background. Treating students as individuals with different skills and abilities becomes problematic with large classes, especially where the students come from different programs.
Editor: Rowan Wagner
Students come into the interactive media major course offerings from a range of areas, including computing, media arts, education, and others. With this in mind, I began my investigation into learning environments and instructional methodologies that could hopefully cater for these differences.
Universities have embraced online learning across all faculties, and the multimedia studies department of which I was a member had been involved in developing materials for courses prior to the advent of a university-wide system. Even with the introduction of a universal system, online
facilities, as Twigg (2003, p. 38) noted, “individualized faculty practice … and standardized the
student learning experience.” She outlined that we treated all students in the course as if their
learning needs, interests, and abilities were the same while the opposite needed to be done: we
needed to individualize student learning and standardize faculty practice.
The World Wide Web (WWW) was seen as a vehicle for the development of learning environments that could cater to the expectations and learning styles of students from different cultures
and backgrounds. Online course materials supported traditional lectures and tutorials initially;
some replaced them.
Background
Typically, the educational systems were coarse-grained in nature, which limited the reusability of
course materials and required rewrites when any changes were made. The instructional methodologies employed were tightly coupled to the content, making it difficult for the teacher to alter
the learning theory that might be employed in the delivery; for example, changing to a project
based instructional strategy required significant alterations to the materials. Personalising the instruction to take into account prior knowledge, or indeed learning style, was difficult.
Hypertext and hypermedia systems were thought of as the foundational technology to support
student centred learning, where the teacher could organise materials hypertextually and the student could make the decision on following the associative link. It was argued (Eklund & Brusilovsky, 1998) that Adaptive Hypermedia (AH) had the potential to individualise instruction in
higher education. Brusilovsky, Eklund, and Schwarz (1996) demonstrated personalising features
that could account for individual differences in knowledge. Brusilovsky (2000) used the term
concept when referring to the elementary pieces of knowledge within a domain. These were later
to be called fine-grained learning objects (Wiley, 2001).
Adaptive hypermedia was derived from Intelligent Tutoring Systems (ITS); these are formed with
an expert system and a communication module. The expert system accounts for the student history, the pedagogical model, and the domain knowledge. AH systems provide adaptive navigation support through hypertext/hypermedia pages by coupling and maintaining a student profile
with the domain model. Expressing this intersection with navigational notations provided a
mechanism for the individualised administration of content. Links could be hidden by the system
or annotated to provide student course information. An alternate feature known as adaptive presentation provided the ability to present different content.
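The adaptive navigation support described above — a student overlay model compared against the domain model to annotate or hide links — can be sketched as follows. The concept names, prerequisite graph, and annotation labels below are all hypothetical; real AH systems such as those cited use considerably richer models.

```python
from dataclasses import dataclass, field

# Minimal sketch of AH adaptive navigation: a student overlay model
# tracks mastered concepts; each page's link is annotated (or could be
# hidden) by comparing its prerequisites against that model.

@dataclass
class StudentModel:
    mastered: set = field(default_factory=set)

PREREQS = {  # illustrative domain model: concept -> prerequisites
    "variables": set(),
    "loops": {"variables"},
    "recursion": {"loops"},
}

def annotate_link(concept, student):
    """Return an AH-style annotation for a navigation link."""
    if concept in student.mastered:
        return "visited"
    missing = PREREQS[concept] - student.mastered
    if missing:
        return "not-ready"  # a stricter system might hide the link
    return "ready"

s = StudentModel(mastered={"variables"})
print([(c, annotate_link(c, s)) for c in PREREQS])
```

Intersecting the overlay model with the domain model in this way is what lets the system individualise the administration of content per student.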
Personalised learning environments based on AH offered a technology that might account for
some of the learning difficulties encountered by providing a hypermedia/multimedia learning
platform that had:
• adaptive characteristics driven by a student overlay model
• a range of tools to support student centred learning
• instructional modules constituting the domain knowledge.
This was the technological setting when I embarked on the action research to address anecdotal
differences in the way students developed the interactive elements within multimedia courses that
I was responsible for. Arts students generally managed the graphical elements of any group project work well but had difficulties with the interactive elements. A personalised adaptive learning
environment solution appeared to offer promise for realising the goal of providing individualised
instruction.
Course Characteristics
Courses in multimedia usually involve creating digital artifacts and computer applications using
development environments (software) that allow the structuring of:
• text – static and dynamic
• images – raster and vector
• animations – 2D and 3D
• sound – sound effects, voice-overs and music
• video with controls
• instructions (scripts) to control the application.
Initially, the software used was Macromedia Director, which used a movie metaphor where the
media elements were placed into a timeline that a play head processed. The programming language available to control the interactions with media elements like buttons and the navigation
along the main timeline was called Lingo. This was an event-driven, object-oriented language that used behaviours (series of functions or methods driven by user events) to produce the interactivity.
More recently, Flash has been used; Director was abandoned by Adobe after it purchased Macromedia. Flash, too, is timeline based, and its programming language is known as ActionScript. The latest version, Flash CS5, uses ActionScript 3; its predecessors were interpreted languages and did not employ the strict typing that exists in ActionScript 3. Flash has also integrated the notion of a class into the development environment, enabling movieClips (modules with their own timelines) to be associated with class definitions that define their behaviour.
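The class-per-movieClip pattern can be sketched outside Flash. The fragment below is TypeScript rather than ActionScript 3 (the event mechanism is simplified, and the names are illustrative, not Flash API calls), but it shows the same idea: a clip's behaviour is defined by its class and driven by user events.

```typescript
// Sketch of the ActionScript 3 class-per-movieClip idea, transposed to
// TypeScript. All names are illustrative.

type Handler = () => void;

// A minimal event target, standing in for a Flash display object.
class ClipBase {
  private handlers = new Map<string, Handler[]>();
  addEventListener(event: string, h: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(h);
    this.handlers.set(event, list);
  }
  dispatch(event: string): void {
    (this.handlers.get(event) ?? []).forEach((h) => h());
  }
}

// A "movieClip with its own timeline" whose behaviour lives in its class.
class PlayButton extends ClipBase {
  public playing = false;
  constructor() {
    super();
    // Behaviour: a method driven by a user event, as in AS3 behaviours.
    this.addEventListener("click", () => { this.playing = !this.playing; });
  }
}

const btn = new PlayButton();
btn.dispatch("click");
console.log(btn.playing); // true
```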
The goal of this technology is to provide an environment that enhances the communication between the computer and the human. In an instructional sense, Mayer (2001) talked of the multimedia principle, where "people learn better from words and pictures than from words alone." This was further elaborated into the Cognitive Theory of Multimedia Learning, where "multimedia narration and graphical images produce verbal and mental representations, which integrate with prior knowledge to construct new knowledge."
Interactivity is more than the user’s ability to navigate and control media components; it involves
a two-way communication between the interactive element and the user. Activities between humans that are interactive include games, conversations, and storytelling, to name a few. There are
different levels of interactivity ranging from simple page turning to immersive virtual reality. In
almost all instances the quality of an application is determined by its level of interactivity; that is,
the more interactive it is, the better it is. Shedroff (2000, p.283) also notes that interactivity is a
spectrum that proceeds from passive to active.
He outlines that there are a number of components to interactivity, each displaying a range of
values. These components include:
• feedback
• control – simple to sophisticated, audience control
• productivity – creation tools
• creativity, co-creativity
• communications – chats, forums, live documents
• adaptivity – personalisation, pseudo-intelligence, etc.
Equitable Instructional Methodology
Interactive multimedia, then, refers to the integration of multiple media items in an application, intended to increase the impact of the message by providing a range of two-way communication structures and responses approaching those that we would equate with activities between humans.
To create these interactions in games, for example, we typically need to involve the scripting
(programming) that is provided in the development environment. Given that the students come from non-programming backgrounds, this is always an issue. Courses in interactive multimedia
invariably include some scripting to establish the interactive elements that might involve:
• managing dynamic dialogue
• controlling navigation and media
• maintaining game and player profiles
• sensing interactions
• using profiles to dynamically alter outcomes
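The kinds of scripting tasks listed above can be sketched briefly. The following TypeScript fragment is illustrative only (course projects used Lingo or ActionScript, and every name here is hypothetical): it shows a player profile that other scripts consult to alter outcomes dynamically.

```typescript
// Illustrative sketch: a player profile maintained by the game scripts and
// used to dynamically alter outcomes. Names and rules are assumptions.

interface PlayerProfile {
  name: string;
  score: number;
  visited: Set<string>; // scenes the player has been through
}

function visit(profile: PlayerProfile, scene: string): void {
  profile.visited.add(scene);
}

// A profile-driven outcome: the ending depends on accumulated state.
function endingFor(profile: PlayerProfile): string {
  if (profile.score >= 100 && profile.visited.has("lab")) return "best";
  return profile.visited.has("lab") ? "neutral" : "incomplete";
}

const p: PlayerProfile = { name: "sam", score: 120, visited: new Set<string>() };
visit(p, "lab");
console.log(endingFor(p)); // "best"
```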
Another complication is in the structuring of the solution where all group members need to work
collaboratively in a constructive way so that what they produce can be combined to complete the
application. This generally involves group discussions about how the state of play can be stored
and communicated as the application is being used.
The Method
The methodology to account for these difficulties has involved action research; Baskerville
(1997) contends that action research involves social processes that can be "studied best by introducing changes into these processes and observing the effects of these changes." Further, the
researcher who is actively involved in the cyclic process linking theory and practice benefits from
the process. Figure 1 from Baskerville is taken from information systems research and illustrates
its cyclic nature.
Figure 1: Action Research Methodology (Baskerville 1997)
The course Creating Interactive Multimedia (INFT2001) is a capstone course in the interactive
multimedia stream of the media arts program. It has been the principal focus of my research and
aims to expose students to some of the techniques and skills used in constructing multimedia applications. Given that it has been offered on a yearly basis for a number of years, it has provided
the opportunity to perform a longitudinal study in an attempt to identify issues and subsequently address the disparity in the difficulty encountered in the production of a multimedia project.
With each iteration of this research, the instructional methodology was altered in some way in
response to an observed issue and/or some instructional goal that had been decided upon.
The course Interactive Design for Multimedia, which precedes Creating Interactive Multimedia in the program sequence, used an individual multi-phase project as the major assessment component.
At the end of the semester, a questionnaire was administered to collect voluntary feedback from
the students. This consisted of twenty-seven Likert Scale questions that were presented online
through an in-house online survey system. The response options, scored 1 to 5, were strongly agree / agree / neutral / disagree / strongly disagree.
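As the appendices note, some questionnaire items were reverse-scored (marked (R)). A minimal sketch of how such responses reduce to a factor mean follows; it is illustrative only (the study's analysis was run with statistical software, and the item ids here are hypothetical).

```typescript
// Sketch of reducing Likert items to a factor score.
// Responses are coded 1 (strongly agree) .. 5 (strongly disagree).

function reverse(score: number): number {
  return 6 - score; // 1<->5, 2<->4, 3 unchanged
}

// Mean of a factor's items, reversing those flagged (R).
function factorScore(
  responses: Record<string, number>,
  items: { id: string; reversed: boolean }[]
): number {
  const vals = items.map(({ id, reversed }) =>
    reversed ? reverse(responses[id]) : responses[id]
  );
  return vals.reduce((a, b) => a + b, 0) / vals.length;
}

const responses = { q3: 2, q4: 4, q14: 1 }; // q4 is reverse-scored
const difficulty = factorScore(responses, [
  { id: "q3", reversed: false },
  { id: "q4", reversed: true },
  { id: "q14", reversed: false },
]);
console.log(difficulty); // (2 + reverse(4) + 1) / 3, i.e. about 1.67
```

With this coding, a lower factor mean indicates stronger agreement, which matches how the Difficulty means are interpreted later in the paper.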
A factor analysis was applied to the questionnaire and it was apparent that a 3 factor resolution,
accounting for 45% of the overall variance, was appropriate; the factor loadings are shown in the
table in Appendix A. On examination of the questionnaire data discussed above, factors that required more consideration were:
• instructional methodology OK,
• group work,
• difficulty encountered.
In the Creating Interactive Multimedia course in the following semester, a similar instructional
strategy was used but the students were organised into formal groups of 4 to account for a range
of skills. An independent variable indicating the student’s program, e.g., computing or media arts,
was introduced into a similar questionnaire with 31 items, and a factor analysis was applied.
A two factor resolution accounting for 35% of the overall variance was appropriate in this case
(See Appendix B). The group work factor, which was now part of the methodology, disappeared.
On reducing the InstructionOK and Difficulty factors and then performing a one-way ANOVA,
the media arts students reported the work as relatively more difficult than the computing students
with means of 2.31 and 3.04, F(1,45)=6.17, p<.001. Given that this course is situated in the media
arts stream, this was significant. Both groups agreed on the acceptability of the instructional methodology, with means of 2.24 and 2.37 respectively. This provided my action research with a metric with which to consider and evaluate subsequent offerings. Changes could be made within the instruction
of the course and the mean for Difficulty calculated. The goal then was to establish a learning situation that allowed all students to feel equally comfortable with the development environment.
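The comparison above rests on a one-way ANOVA. As a rough illustration of the computation (not the study's actual analysis, which would have been run in a statistics package; the data below is a toy example), the F statistic for two groups can be computed as the ratio of between-group to within-group mean squares:

```typescript
// Illustrative one-way ANOVA F statistic for k groups.

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function oneWayF(groups: number[][]): number {
  const all = groups.reduce((a, g) => a.concat(g), [] as number[]);
  const grand = mean(all);
  const k = groups.length;
  const n = all.length;
  // Between-groups sum of squares
  const ssb = groups.reduce(
    (s, g) => s + g.length * (mean(g) - grand) ** 2, 0);
  // Within-groups sum of squares
  const ssw = groups.reduce(
    (s, g) => s + g.reduce((t, x) => t + (x - mean(g)) ** 2, 0), 0);
  const msb = ssb / (k - 1); // df between = k - 1
  const msw = ssw / (n - k); // df within = n - k
  return msb / msw;
}

// Toy data, not the study's: two small groups with different means.
console.log(oneWayF([[1, 2, 3], [4, 5, 6]]).toFixed(2)); // "13.50"
```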
Over the iterations of the course that followed, the Difficulty metrics for the two student groups moved closer together (see Table 1). In 2008 S2, both cohorts of students agreed equally with the notion that the development was difficult, and again in 2009 S2, but to a lesser degree. The difference in the InstructionOK factor remained insignificant in each iteration, suggesting that the project-based learning strategy was generally viewed favourably.
Table 1: Creating Interactive Media (INFT 2001) Factors 2006-2009

Factors                      2006 S2   2007 S1   2008 S1   2008 S2   2009 S2
InstructionOK  Media Arts      2.24      2.19      2.29      2.15      2.27
               Computing       2.37      2.34      2.52      2.44      1.89
Difficulty     Media Arts      2.31      2.39      2.24      2.24      2.68
               Computing       3.04      2.87      2.77      2.20      2.70
The variations in the instruction that might have accounted for this trend will be discussed in the
following sections.
Establishing a Learning Environment
One way of tackling this difference in the perception of Difficulty was to utilise a Learning Management System (LMS) that could enable a tailoring of the instruction to suit a range of student needs. A Multimedia Learning Environment (AMLE) was a Learning Environment (LE) designed and developed specifically to provide for these differences in students' backgrounds and multimedia knowledge. AMLE provided direction for both declarative and practical content
through annotated links (Brusilovsky, 2000) based on a competency model of the student.
The AMLE session viewer in Figure 2 (Kurzel, 2005) demonstrates a listing of concepts that constituted an Introduction to Multimedia. Notations provided information to the student as to whether the LE considered the concept suitable to be accessed. We chose not to employ link hiding, allowing the student to decide whether they wanted to access the information.
Figure 2: Session Viewer
Instructor-organised groupings of content, as demonstrated above, provided the scaffolding for
student interactions with the content. The LE also provided tools to group concepts and allow
students to establish their own personalised structures of information. Bookmarking was employed as an effective way of aggregating information. AMLE also provided tools, e.g., search
engine and glossary (see Figure 3) (Kurzel, 2005), to locate concepts where required; the Concept
Viewer then displayed the content in a range of media formats.
Figure 3: AMLE Tools
To provide for variations in the instructional methodology and to allow the instructor to change
from the expository learning framework inherent in the AMLE system, we proposed an architecture that introduced higher level instructional objects that could account for the groupings (sessions as discussed before) and, indeed, any other object that played some role in the methodology
(Allert, Dhraief, & Nejdl, 2002). A course then might be presented by the instructor in different
ways, e.g., Creating Interactive Multimedia (Problem Based Learning), or Creating Interactive
Multimedia (Expository). In theory, the changing of the methodology could also be extended to the student, because the metadata that supported these fine-grained learning objects already existed. (See Figure 4.)
So AMLE had the potential to provide a different learning experience for different students; this might have addressed some of the differences in how they viewed the development environment.
Figure 4: Instructional Framework
Ameliorating Difficulties
The first semester of 2007 provided another course offering and an opportunity to alter the instruction in an attempt to address this Difficulty factor. To avoid the situation where arts students
would work on the graphics elements alone and subsequently not be involved in the other aspects
of the projects, such as establishing the navigation or the interactive elements, all students needed
to have an understanding of how the whole project was structured. This is not to say that they still
could not have a major influence in their area of expertise; arts students could still take the major
responsibility for the design aspects, and computing students might be responsible for all the interactive elements. An additional individual report outlining the structure of the project was also
made part of the overall assessment strategy.
In an attempt not to hide aspects of the interactive techniques required, more was done to outline basic programming aspects through exemplars and discussion. More effort was made to allow students to reflect on script samples posted on the discussion forum and on the explanatory internal documentation encouraged within tutorial examples. Students were also encouraged to use the locally developed portal/learning environment (AMLE, discussed above), which contained a wealth of information and example scripts on Director techniques. The search and glossary features discussed previously could be used to access this content in a range of media formats.
When the 2007 course data was analysed, there was still a difference in the Difficulty factor, although it had decreased. The arts students reported the work as more difficult than the computing students, with respective means of 2.39 and 2.87; the lower the value, the more the group agreed with the Difficulty proposition. When the summative peer assessment data was also reviewed, it became apparent that the computing students (who perhaps were involved in the interactive elements of the project) were being valued more than the arts students for the work that they conducted. This end-of-course review involved rating the other group members' performance, including their own, focussing on the following:
• professional practice – attendance, punctuality, etc.
• involvement in group decision making and discussions
• knowledge of the project requirements
• quality of work produced.
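A rating scheme over these four criteria reduces naturally to a mean-of-means mark. The sketch below is hypothetical (the actual rubric, scale, and weighting used in the course are not given in the text):

```typescript
// Illustrative aggregation of peer-assessment ratings across four criteria.
// Criterion names and the 1..5 scale are assumptions, not the course rubric.

const CRITERIA = [
  "professionalPractice",
  "groupInvolvement",
  "projectKnowledge",
  "workQuality",
] as const;

type Rating = Record<(typeof CRITERIA)[number], number>; // assumed 1..5 scale

// Each member's overall mark: mean of criterion means over all raters.
function peerMark(ratings: Rating[]): number {
  const perCriterion = CRITERIA.map(
    (c) => ratings.reduce((s, r) => s + r[c], 0) / ratings.length
  );
  return perCriterion.reduce((a, b) => a + b, 0) / CRITERIA.length;
}

const received: Rating[] = [
  { professionalPractice: 4, groupInvolvement: 5, projectKnowledge: 4, workQuality: 5 },
  { professionalPractice: 5, groupInvolvement: 4, projectKnowledge: 4, workQuality: 4 },
];
console.log(peerMark(received)); // 4.375
```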
More was done to elaborate and highlight the skills and knowledge required and their worth to the
final production. A more extensive set of peer assessment questions was presented in the next
iteration of the course, offered in early 2008 to a small group of 20 students. One project group was unfortunately out of step with the others in terms of the quality of its design work for the project; a possible way to handle this, as Falchikov (1996) reported, might be to introduce peer review.
More Involvement and Engagement
In a further attempt to address this perceived Difficulty aspect of the course encountered by media
arts students, all students were asked to become more involved with the development aspects. Each student took control of an equal section of the project space and was asked to take responsibility for its design and production. In this case, the project involved
the creation of a 2D virtual environment where a computer forensic investigation was to be used
to solve a crime. The audience for this game was to be 15 year olds and all game tasks were to be
simulated.
The group decided on the language of the game play and arrived at words/actions that could be
consistently applied throughout the production to achieve the required functionality. They also
decided on how the game could be represented internally within the project. To solve the crime, for example, evidence would need to be collected, requiring the use of an appropriate handler, e.g., addToEvidence(Item). These abstractions were decided on by the group so that each member had some ownership of the game format and of the abstractions' possible use.
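The agreed abstractions amount to shared handlers over a common game state. A sketch of how addToEvidence might look follows; the state shape and the required-items rule are assumptions for illustration, not the students' implementation, and the original would have been written in Lingo rather than TypeScript.

```typescript
// Sketch of the group's agreed "game language": shared handlers over a
// common game state, callable from any member's section of the game.

interface GameState {
  evidence: string[];
  solved: boolean;
}

const state: GameState = { evidence: [], solved: false };

// Handler agreed by the group; addToEvidence is the name from the text.
function addToEvidence(item: string): void {
  if (!state.evidence.includes(item)) state.evidence.push(item);
  // Hypothetical rule: the crime is solved once all required items are held.
  const required = ["hardDrive", "emailLog", "fingerprint"];
  state.solved = required.every((r) => state.evidence.includes(r));
}

addToEvidence("hardDrive");
addToEvidence("emailLog");
addToEvidence("fingerprint");
console.log(state.solved); // true
```

Because every section calls the same handler, no member needs to know the full implementation, which is the point of the shared abstractions described above.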
The project was organised into components that could be worked on independently, and an agile
production model was pursued. Peer review was used with each assessable piece and the feedback from others could be acted upon before marking. The Difficulty factor was again reduced;
the means were almost identical (2.24 and 2.20). My interpretation of this was that both cohorts found the development difficult to the same degree.
Using Flash in Game Development
By 2009, Director had been superseded by Adobe Flash as the preferred development environment for 2D games and other multimedia artifacts. The content of the learning environment
AMLE had become redundant; there were some general concepts that could have been re-used
but the majority of it was specific to the development environment and needed a re-write. Most
declarative and practical concepts in the AMLE repository would have had an equivalent version
within the Flash development environment but the change would have been time consuming. My
judgement was that the biggest gains did not come from using AMLE but instead came from the
instructional decisions taken. The in-house, university-wide content delivery system still provided an efficient way of delivering the content, albeit without any notations to personalise the experience.
Again with this iteration of the course, a game was used as the basis of the project. The Credit Union Christmas Pageant offered sponsorship for the project, provided that the instruction used the Christmas Pageant, conducted in early November every year, as its theme. The design
and production of an interactive game for publishing on the website presented an authentic group
project for the students to work on. The game space was subsequently arrived at and divided
equally; students took responsibility for their section. The groups decided on this break up along
with the navigational structure of the game. A shell of the game was composed of individual
Flash files developed early in the project. Each student was then able to use game commands within their sections, e.g., addToInventory(item), to drive the game. The Game Object Model
(Amory & Seagram, 2007) was also used as a framework for the game structure. Most of the interactive elements were accounted for in practical sessions, but some groups also investigated
other interactive resources on the WWW.
Hanrahan and Isaacs (2001) have argued that self and peer assessment skills help students develop lifelong learning skills. The peer and self assessment used in the course provided a good match to the group project work that was being conducted (Wood & Kurzel, 2008); rubrics elaborating on what the students should consider were used to drive this peer assessment. When the
data was collected and the Difficulty factor reduced from the questionnaire data, the two student
groups reported the same level of difficulty (2.68 and 2.7).
Conclusion
The expectation that I have within any multimedia course offering is that all our students should
at least feel comfortable with the development environment and appreciate the process that is
used in the construction of multimedia applications: in particular, games. I argue that students
need this appreciation because they might find themselves working in groups in industry and be
placed in the position of project leader. Group project work is therefore a good match with the goals of this course.
This action research has convinced me that getting students to own equal sections of the project
and to then fully appreciate how group decisions on game actions could be used in their sections
is vital to their understanding of the process involved. Allowing arts students to avoid the interactive details of a game created the situation where they thought that the environment was difficult
to work with and that the work of the computing students was worth more than theirs.
The strategy pursued relied on the development environment having the ability to modularize the solution, so that students could work on components themselves and then bring them together when required. It also required the environment to enable the group to create a game language involving abstractions, e.g., addToInventory(item), addToScore(number), that could drive the game and yet not require knowledge of the full implementation details.
We cannot overestimate the value, in an instructional sense, of peer review and assessment.
Peer review provided a mechanism where students could see the expected standard of work required for assessment pieces and still have an opportunity to include changes in their work prior
to marking. The group projects worked on were all different and so could be discussed and reflected upon collectively. Comments and assessments from other students were taken seriously, and self-assessment helped students in the reflection on their own work.
Establishing rubrics that elaborated on the work required for the project helped students to appreciate all components of the work. Each section of the project, from specification to design to implementation, required an elaboration of the professional practice, the standard of work, and an understanding of the requirements of each assessable piece. This resulted in an emphasis on all the parts of the project and a subsequent valuing of all the components.
Where problems occurred with interactive components that were not addressed in class, students were encouraged to investigate other available sources. Having an abundance of support materials on the WWW for students to research helped in the development stage, because every suggested interactive element could not be handled by instruction alone. Lifelong learning skills are clearly associated with such investigations, and collaborating on solutions in group and class forums also helped resolve the problems encountered. Finally,
working on an authentic project and interacting with industry had the result of engaging the
groups and encouraging them to use everything at their disposal to meet deadlines to satisfy the
project requirements.
References
Allert, H., Dhraief, H., & Nejdl, W. (2002). Meta-level category 'role' in metadata standards for learning:
Instructional roles and instructional qualities of learning objects. COSIGN-2002, September 2002,
University of Augsburg.
Amory, A., & Seagram, R. (2007). Game object model version II: A theoretical framework for educational game development. South African Journal of Higher Education, 17(2), 206-217.
Baskerville, R. (1997). Distinguishing action research from participative case studies. Journal of Systems
and Information Technology, 1(1), 25-45.
Brusilovsky, P. (2000). Adaptive hypermedia: From intelligent tutoring systems to web-based education. Proceedings of Intelligent Tutoring Systems 2000, pp. 1-7.
Brusilovsky, P., Eklund, P., & Schwarz, E. (1996). A tool for developing adaptive electronic textbooks on the WWW. Proceedings of WebNet'96, World Conference of the Web Society, San Francisco, CA.
Eklund, J., & Brusilovsky, P. (1998). Individualising interaction in web-based instructional systems in
higher education. AUC Academic Conference, University of Melbourne, Australia.
Falchikov, N. (1996). Improving learning through critical peer feedback and reflection. Paper presented at
the Different Approaches: Theory and Practice in Higher Education. Proceedings of HERDSA Conference 1996, Perth WA.
Hanrahan, S. J., & Isaacs, G. (2001). Assessing self- and peer-assessment: The students' views. Higher Education Research and Development, 20(1), 53-70.
Kurzel, F. (2005). Customizing instruction. Proceedings of the Informing Science and IT Education Conference, InSITE 2005, Flagstaff, Arizona, June 16-19, 2005
Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.
Shedroff, N. (2000). Information interaction design: A unified field theory of design. In R. Jacobson (Ed.),
Information design, p.283. MIT Press.
Twigg, C. A. (2003). Improving learning and reducing costs - New models for online learning. EDUCAUSE Review, September/October
Wood, D., & Kurzel, F. (2008). Engaging students in reflective practice through a process of formative
peer review and peer assessment. Proceedings of the ATN Assessment Conference 2008: Engaging
Students in Assessment, Adelaide. Retrieved from
http://www.ojs.unisa.edu.au/index.php/atna/article/view/376/252
Wiley, D. A., II. (2001). Connecting learning objects to instructional design theory: A definition, a metaphor, and a taxonomy. In D. A. Wiley (Ed.), The instructional use of learning objects (pp. 4-5). Bloomington, IN:
Appendix A

Learning Environment Scale: Oblimin Rotation Loadings

Factor 1: Instructional Methodology is OK (eigenvalue 5.219, variance 20.878)

Item  Statement                                                                  Loading
11    I achieved more in this course than I thought I initially would              .791
19    The instructional methodology provided me with enough scope to
      display my skills                                                            .723
7     The assessment structure matched the structure of the course                 .715
2     The project based instruction in this course suited the way I like
      to learn                                                                     .659
12    The setting of weekly goals helped me focus on what needed to be
      achieved.                                                                    .646
23    I found being able to collaborate with my group in practical
      sessions very helpful                                                        .595
8     The project enabled me to demonstrate the skills that I brought to
      the group.                                                                   .582
18    I was really satisfied with what the group ended up achieving in
      the project work                                                             .581
13    I found the course initially challenging but managed to satisfy the
      project requirements.                                                        .546
22    I was given the opportunity to discuss and reflect on my learning            .545
20    The resources provided allowed me to satisfy the course requirements         .537
24    I enjoyed working on a project that was authentic.                           .520

Factor 2: Difficulty (eigenvalue 3.476, variance 13.903)

3     I preferred working on the graphical design aspects of the course            .763
4     I preferred working on the programming in the project (R)                    .723
14    An online helpdesk would have been helpful when I was working with
      Director.                                                                    .687
15    I have a good understanding of how to use Director to produce
      multimedia pieces (R)                                                        .621
17    I like to be able to choose between a number of different media
      formats representing content.                                                .619
10    I would have liked to have a discussion forum with only my group
      members                                                                      .519

(a) The response options, scored 1 to 5, were: strongly agree / agree / neutral / disagree / strongly disagree.
(b) Items scored in reverse are shown by (R).
(c) n = 50
Appendix B

Learning Environment Scale: Oblimin Rotation Loadings

Factor 1: Instructional Methodology is OK (eigenvalue 5.219, variance 20.878)

Item  Statement                                                                  Loading
11    I achieved more in this course than I thought I initially would              .791
19    The instructional methodology provided me with enough scope to
      display my skills                                                            .723
7     The assessment structure matched the structure of the course                 .715
2     The project based instruction in this course suited the way I like
      to learn                                                                     .659
12    The setting of weekly goals helped me focus on what needed to be
      achieved.                                                                    .646
23    I found being able to collaborate with my group in practical
      sessions very helpful                                                        .595
8     The project enabled me to demonstrate the skills that I brought to
      the group.                                                                   .582
18    I was really satisfied with what the group ended up achieving in
      the project work                                                             .581
13    I found the course initially challenging but managed to satisfy the
      project requirements.                                                        .546
22    I was given the opportunity to discuss and reflect on my learning            .545
20    The resources provided allowed me to satisfy the course requirements         .537
24    I enjoyed working on a project that was authentic.                           .520

Factor 2: Difficulty (eigenvalue 3.476, variance 13.903)

3     I preferred working on the graphical design aspects of the course            .763
4     I preferred working on the programming in the project (R)                    .723
14    An online helpdesk would have been helpful when I was working with
      Director.                                                                    .687
15    I have a good understanding of how to use Director to produce
      multimedia pieces (R)                                                        .621
17    I like to be able to choose between a number of different media
      formats representing content.                                                .619
10    I would have liked to have a discussion forum with only my group
      members                                                                      .519

(a) The response options, scored 1 to 5: strongly agree / agree / neutral / disagree / strongly disagree.
(b) Items scored in reverse are shown by (R).
(c) n = 50
Biography
Frank Kurzel is a lecturer in the School of Communication, International Studies and Language at the University of South Australia and
he is currently a Program Director for the Bachelor of Media Arts. He
has had extensive experience in Education, Computer Science and
Multimedia areas. His research interests include web-based instructional systems to support his teaching, and interactive environments.
He is also interested in instructional methodologies and enhancing his
teaching through project based learning focusing on game development.
Interdisciplinary Journal of E-Learning and Learning Objects
Volume 7, 2011
Keeping an Eye on the Screen:
Application Accessibility for Learning Objects
for Blind and Limited Vision Students
Cristiani de Oliveira Dias and Liliana Maria Passerino
Interdisciplinary Centre for New Technologies Education
[Centro Interdisciplinar de Novas Tecnologias na Educação
– CINTED], Federal University of Rio Grande do Sul,
Porto Alegre – RS, Brazil
[email protected]; [email protected]
João Carlos Gluz
Interdisciplinary Program in Applied Computer Science
(PIPCA), Vale do Rio dos Sinos University (UNISINOS),
São Leopoldo – RS, Brasil
[email protected]
Abstract
A new user profile has emerged with Web 2.0: users have evolved from mere information receivers to creators and content developers. In this new profile, users may generate and produce material and later share it with classmates and teachers through the Internet. Nowadays, one of the
main challenges teachers face is how to follow students through the digital world and beyond,
while making use of these digital resources in order to make classes approachable so that students
feel motivated to learn. Therefore we understand that teachers can interact with students by developing richer digital educational material that is able to accommodate all students, including
those with special educational requirements. This paper’s main goal is to analyze Learning Objects with a focus on accessibility issues and to recommend applications for the construction of
accessible Learning Objects for individuals with special educational requirements.
Keywords: Inclusive Education, Learning Objects, Accessibility, Visual Impairment, Assistive
Technology.
Introduction
We live in a society where it is expected that all people may participate in all sorts of social environments. At the same time, this society exalts an inclusive education in which every single person should have opportunities for choice and self-determination (Mittler, 2003). According to Mittler, an inclusive education does not imply merely placing all children in school, but transforming the
Editor: Marguerite Newcomb
Keeping an Eye on the Screen
schools in order to become more responsive to all student needs and to aid teachers into accepting
the responsibility for their students' learning. Students have evolved from mere information receivers into creators and content developers who may generate and produce material and later share it with classmates and teachers by means of the Internet. One of the main challenges for a teacher nowadays is how to follow students through a digital world, and to go beyond that by making use of these resources so that classes become approachable and students feel motivated to learn. We therefore understand that teachers can be part of this interaction by developing richer digital educational material that includes all students. Diversity in media can favor the inclusion of students with special educational needs, and the same goes for the adaptation of such digital educational material.
The main goal of this paper is to analyze Learning Objects (LO) (Wiley, 2001), with a focus on accessibility issues, and to recommend applications for the construction of accessible Learning Objects for individuals with special educational requirements.
The interest in this research came from a study on Learning Objects and their application in public schools, which received an incentive from the Brazilian Federal Government for the development of such educational material as an alternative approach to educational technology (RIVED/SEED, 2008).
Among several programs that promote the development of LO, the Interactive Web of Virtual Education (in Portuguese RIVED - Rede Interativa Virtual de Educação) is a Brazilian program that makes use of digital pedagogical resources to aid the development of students' reasoning abilities and critical thinking. RIVED aims to form a virtual and interactive educational web that associates the potential of computer science with new pedagogical approaches. RIVED is a program of the Secretariat of Distance Education (SEED) of the Brazilian Ministry of Education (MEC), the Ministry's department for distance learning in Brazil. The project aims to produce digital pedagogical content in the form of Learning Objects. RIVED started in 1997 when, in agreement with the United States, it began the development of technological material for education. The development staff was formed by educators from Brazil, Peru, and Venezuela, who worked together until 2003, producing 120 objects for a variety of high school disciplines.
The educational material produced by universities and other learning institutions reaches a great number of public schools; the allocation is carried out in electronic formats distributed by the Ministry of Education in each Brazilian state. In 2004 the development responsibilities were transferred to several universities in Brazil, transforming the project into a "Virtual Factory". This transfer caused a growth in the production of objects able to serve not only higher education, but also basic and technical education. All objects developed were distributed to public schools, allowing students and teachers to make use of them in class.
Other projects have also evolved to promote the use and production of LO. One example is the International Database of Educational Objects (in Portuguese BIOE - Banco Internacional de Objetos Educacionais) from Brazil. According to the Brazilian Ministry of Education (2009), this database is intended to become a portal that assists teachers. It makes freely available many educational resources in several kinds of media and formats (audio, video, animation/simulation, hypertext, educational software), ranging from basic education up to higher education in most knowledge areas.
Among other actions that have been implemented is the Teacher's Portal (http://portaldoprofessor.mec.gov.br), where teachers must register in order to create and share resources and material with colleagues, prepare lessons, and access institutions or school sites. Another continuous formation program is the Medias in Education project, created to instruct professionals in working with media resources so that they can develop didactic material in the classroom and thus become multipliers. Learning Objects are also being built based on agents in order to improve flexibility, adaptability, and interactivity within educational environments. In addition, a new Brazilian metadata standard for Learning Objects, called OBAA (Vicari et al., 2010), has been created; it may operate on diverse platforms such as the Web, mobile devices, or Digital TV.
Parallel to these programs that motivate the production of didactic material, the Brazilian Federal
Government has been promoting the school inclusion process since 2003 (SEESP, 2003). On the educational side, this is regarded as an important process aligned with the principle "educate for diversity". Keeping this principle in mind, we find ourselves facing a new paradigm that can be called the paradigm of inclusion.
Within an inclusive paradigmatic view, there is no room for considering the educational process
as being separate and segregated. In this case, the teacher needs to plan, select, and build the didactic material that will provide a base for the knowledge appropriation process of all students.
However, the development of inclusive didactic materials should meet accessibility criteria. As a
consequence, such criteria should also be used in the construction of learning objects.
Besides the social concepts of accessibility and inclusion, there are many other reasons to invest in the accessibility of a product, including legal and economic ones. By market logic, every enterprise that intends to expand its business needs to reach the largest possible number of people; making a product reachable to all sorts of people is a policy in search of market expansion and greater profits. In legal terms, Brazilian law decree #3,298 of 1999 defined accessibility in the federal public administration as the condition of approaching and using, with safety and autonomy, spaces, urban furnishings and equipment, facilities and sporting equipment, buildings, transportation, and the systems and means of communication (Brazil, 1999).
Considering what has been discussed, it is possible to perceive that public policy promoting the development of didactic material, as well as the enrollment of individuals with special needs in regular schools, requires better articulation so that the investment benefits everyone. One great challenge is how to develop LO that are accessible to the blind and to those with limited eyesight, that respect the principle of diversity and inclusion without becoming a resource for only a few students, and that teachers can use in different ways with the entire class.
For the development of Learning Objects there are standards that assure the portability of objects across different environments. However, these standards have been observed to be more concerned with technological aspects, relegating pedagogical aspects to a secondary level.
Generally, the construction of content is the focus of learning object development. However, content quality is not the only variable that should be regarded in this public policy. Since content constitutes a doorway to the educative process, we also need to think about how information is accessed and comprehended in a way that serves diversity in the classroom.
Considering these questions, several studies were carried out in the context of the Learning Objects Supported by Agents (OBAA) project (Vicari et al., 2010), examining the Learning Objects made available to classrooms through portals such as RIVED. These studies showed us that practically none of these objects conform to the minimum accessibility recommendations (Behar, Passerino, Dias, Frozi, & Silva, 2009). The present research aims to deliberate on how the adaptation process of Learning Objects can meet accessibility criteria and to propose a methodology for the adaptation of accessible LO. The presentation of the methodology takes into account its implications in terms of software engineering, as well as the expected educational results of its use, noting that the suggested methodology may also serve to orient computational developments in other areas, respecting diversity in order to assure accessibility to everyone.
Accessibility Criteria
To build a society of participation and equality whose principles include the effective interaction of all citizens, it becomes fundamental to establish an inclusion policy that acknowledges differences and in which everyone participates with equal rights according to their specific needs.
Consistent with Dias (2003), accessibility in digital material means that any person making use of computer technology should be able to interact with most kinds of content and must be able to fully comprehend the displayed information.
Therefore, accessibility does not mean only transforming graphic and interactive educational material into textual educational material for visually handicapped students. Accessibility implies transforming any product so that it becomes accessible without losing its properties and content.
Some criteria are important in the construction of Learning Objects and other educational material. Table 1 describes basic recommendations of the WCAG (Web Content Accessibility Guidelines, 2008) for Learning Object content adaptation.

Table 1: Basic WCAG recommendations. Source: W3C, 2008 (http://www.w3.org/WAI/intro/components.php)

Principle 1: Perceivable. Information and user interface components must be presented to users in a manner they can perceive.
- Text alternatives: if educational material is built with many images or animations, text alternatives must be supplied for any non-textual content.
- Time-based media: alternatives for time-based media must be supplied and must be available within the displayed material; for example, a video with subtitles or an audio track with a self-description in both media.
- Adaptable content: create content that can be presented in different manners (for example, a simpler layout) without losing information or structure.
- Distinguishable content: make audio and visual content easier for users to perceive, including separating the foreground from the background.

Principle 2: Operable. User interface components and navigation must be operable.
- Keyboard accessible: make all functionality available from the keyboard.
- Sufficient time: provide sufficient time for users to read and use content, and permit the student to adjust the time for executing activities.
- Navigable: provide ways to help users navigate, locate content, and determine where they are.

Principle 3: Understandable. Information and the operation of the user interface must be understandable to the user.

Principle 4: Robust. Content must be robust enough to be interpreted reliably by a wide variety of user agents, including assistive technologies.
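Applied to a Java-based LO such as the one adapted in this research, Principle 1 (text alternatives) can be met by attaching an accessible name and description to purely visual components, which screen readers reached through the Java Access Bridge announce in place of the pixels. The following Swing sketch is illustrative only; the component and image names are hypothetical, not taken from any actual LO:

```java
import javax.accessibility.AccessibleContext;
import javax.swing.ImageIcon;
import javax.swing.JLabel;

public class TextAlternativeExample {

    // Builds a purely visual component and supplies its text alternative
    // through the Java Accessibility API (WCAG Principle 1).
    public static JLabel labeledProductImage() {
        // Hypothetical image path, for illustration only.
        JLabel image = new JLabel(new ImageIcon("images/organic-apples.png"));
        AccessibleContext ctx = image.getAccessibleContext();
        ctx.setAccessibleName("Organic apples");
        ctx.setAccessibleDescription("Crate of organic apples for sale in the shop");
        return image;
    }

    public static void main(String[] args) {
        System.out.println(labeledProductImage().getAccessibleContext().getAccessibleName());
    }
}
```

Supplying the alternative in the original component, rather than in a separate "adapted" copy, keeps a single material usable by the whole class.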
In the paradigm of inclusive education, all technological resources used in the learning process, including Learning Objects (Wiley, 2001), also need to be accessible. Here, LO are understood as small reusable digital components to be used in education.
The concept of reusability is similar to the reuse of objects in object-oriented programming, where all or part of an object's code can be used in the development or adaptation of another program. Depending on the target public that will use the object, it may be remodeled to meet their needs (Takahashi, 1990).
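This reuse idea can be sketched in object-oriented code: an adapted version extends an existing component, inheriting its logic unchanged while adding behavior for a new audience. The classes below are purely illustrative and are not part of any real LO source code:

```java
// Base component from a hypothetical existing LO.
class PriceDisplay {
    protected final int cents;
    PriceDisplay(int cents) { this.cents = cents; }
    String render() { return String.format("R$ %d.%02d", cents / 100, cents % 100); }
}

// Adapted version: reuses the original rendering logic unchanged and adds a
// spoken-friendly form for screen-reader users.
class AccessiblePriceDisplay extends PriceDisplay {
    AccessiblePriceDisplay(int cents) { super(cents); }
    String renderSpoken() { return (cents / 100) + " reais and " + (cents % 100) + " centavos"; }
}

public class ReuseExample {
    public static void main(String[] args) {
        AccessiblePriceDisplay price = new AccessiblePriceDisplay(350);
        System.out.println(price.render());       // inherited, unchanged behavior
        System.out.println(price.renderSpoken()); // added, adapted behavior
    }
}
```

The original class is untouched, which is exactly the property that makes adaptation cheaper than rewriting.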
The present research started by looking for accessible Learning Objects but found only a few, and these did not actually implement the accessibility concepts previously mentioned. We believe that for Learning Objects to be accessible to those with special needs, they must be constructed with accessibility in mind from the start. This concern is not recent: several research projects approach the accessibility issue, showing the importance of adapting material to different student necessities (Sonza, 2008). However, most of the time these proposals are about how to re-adapt material to a certain requirement or limitation. One example is the educational material developed by the Technological Institute of Bento Gonçalves (in Portuguese Cefet-BG), available at http://www.bento.ifrs.edu.br/.
In short, materials that claim to be accessible are actually "new" versions, differentiated from the original educational material, that somehow exclude the handicapped student by not permitting him or her to interact with the same material. The "adapted" material therefore deprives the student of access to the full information.
Learning Objects Adaptation: First Approach
In order to achieve our research goals it was necessary to define a methodology for the adaptation of LO into accessible LO, and to validate this methodology on an existing LO by applying the resulting accessible LO with students with special needs. The accessibility adaptation process was centered on blind students. A qualitative research approach was chosen as the basis of the methodology, with a focus on the analysis of standards for the definition of requirements and with validation of the technological development through field research. The four stages of the methodology are presented below.
Stage 1: Choice of Learning Objects
This stage aims to show that Web page accessibility criteria can also be applied to LO. To do so, a case study with a particular object was developed. The selected object is a game entitled "Banca do Quincas" (Quincas' Shop), developed by the University of Salvador (UNIFACS), which won a prize in the 2007 RIVED Award contest. The main characteristic of this game is that it is an open, simulation-style LO in which it is possible to manage a shop that sells organic products (for more information see Santanchè (2008) at http://logames.sourceforge.net). The choice was also influenced by a partnership between research groups at the Federal University of Rio Grande do Sul (UFRGS) and the University of Salvador.
Stage 2: Initial Accessibility Evaluation of Learning Objects
In this methodology the validation of Learning Objects is continuous, forming part of the entire process from analysis to implementation of the object, following Granollers' (2004) User-Centered Design (UCD) methodology.
In order to evaluate the accessibility of the object before the restructuring process, an initial test was conducted with an individual with low vision (loss of central vision with retained peripheral vision). The evaluation found the object to be non-accessible.
Stage 3: Re-engineering Process of Learning Objects
According to Chikofsky and Cross (1990), re-engineering can be related to (a) forward engineering, the traditional process that starts from high-level abstractions and logical designs and arrives at the implementation of the system; (b) reverse engineering, which creates representations of a system in another form or at a higher level of abstraction; and (c) restructuring, which stands for the transformation of one representation into another at the same level of abstraction, preserving the external behavior of the system. Re-engineering is composed of reverse engineering followed by forward engineering or restructuring. In contrast with restructuring, re-engineering can involve modifications of functionality or of implementation techniques. According to Coleman (1996), the intention in re-engineering a chosen object is to create new functionalities from existing ones by modifying the source code of the object. Following these ideas, we chose scenario (b), in which an addition to the implementation of the system took place without affecting its functionality.
In our case, the implementation consisted of adding the Java Access Bridge, a communication bridge that allows screen reader software to communicate with the learning object. Initially the selected LO was used with blind and low-vision students without the adaptation, to identify its weak points and limitations with respect to accessibility. This experimentation facilitated the subsequent inclusion of the accessibility API, which incorporated further accessibility features into the LO.
Besides the accessibility API, we also used the Java Accessibility Utilities, a set of classes intended to help in the development of assistive technologies that access programs running on the Java virtual machine and understand the accessibility API (Sun, 2009a, 2009b, 2009c).
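To illustrate what this accessibility layer exposes, the sketch below walks a small Swing component tree through the standard javax.accessibility API, querying each component's AccessibleContext the way an assistive technology connected through the Java Access Bridge would. The component names are hypothetical, not the LO's actual interface:

```java
import javax.accessibility.Accessible;
import javax.accessibility.AccessibleContext;
import javax.swing.JButton;
import javax.swing.JPanel;

public class AccessibleTreeWalk {

    // A tiny component tree standing in for part of an LO interface.
    public static JPanel buildShopPanel() {
        JPanel panel = new JPanel();
        JButton buy = new JButton("Buy");
        buy.getAccessibleContext().setAccessibleDescription("Buy the selected product");
        panel.add(buy);
        return panel;
    }

    // Recursively collects role and name for every reachable component,
    // querying the same AccessibleContext that assistive technologies use.
    public static String describe(Accessible node) {
        AccessibleContext ctx = node.getAccessibleContext();
        StringBuilder sb = new StringBuilder();
        sb.append(ctx.getAccessibleRole()).append(": ")
          .append(ctx.getAccessibleName()).append('\n');
        for (int i = 0; i < ctx.getAccessibleChildrenCount(); i++) {
            sb.append(describe(ctx.getAccessibleChild(i)));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(describe(buildShopPanel()));
    }
}
```

A component whose accessible name comes back empty at this level is exactly one a screen reader cannot announce, which is how the weak points mentioned above show up in practice.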
We assumed that Learning Objects should be built from the perspective of object-oriented software development. Thus, we opted to develop the software using the Fusion method (Coleman, 1996). Fusion is an object-oriented software development process that supplies resources for analysis, design, and implementation. The process is divided into the following phases.
Analysis phase: the phase in which the expected behavior of the system is defined. In this phase we studied the object's original source code and the UML diagrams built by the initial development team.
Design phase: during this phase we created an entity-relationship (ER) diagram for the object (Figure 1) in order to verify the functionalities of the object and how it interacts with the user. The re-engineering process showed that the Table element (see Figure 1) mediates all interactions with users. In the resulting adapted LO, this Table element was reconstructed as a web form, now accessible to screen reader software.
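The principle behind that reconstruction can be sketched in Swing: an unlabeled visual element is replaced by an input field explicitly tied to its label in the accessibility tree, so a screen reader can announce what the field is for. The field and label below are illustrative, not the original Quincas Shop code:

```java
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTextField;

public class AccessibleFormExample {

    // Builds one labeled field of the hypothetical form that replaced the
    // purely visual Table element.
    public static JPanel quantityField() {
        JPanel form = new JPanel();
        JLabel label = new JLabel("Quantity to buy:");
        JTextField quantity = new JTextField(5);
        // setLabelFor links label and field in the accessibility tree, so
        // assistive technology announces the label when the field gets focus.
        label.setLabelFor(quantity);
        form.add(label);
        form.add(quantity);
        return form;
    }

    public static void main(String[] args) {
        JPanel form = quantityField();
        JLabel label = (JLabel) form.getComponent(0);
        System.out.println(label.getText() + " -> "
                + label.getLabelFor().getClass().getSimpleName());
    }
}
```

The same label serves sighted and blind students alike, keeping the adapted LO a single shared material rather than a separate version.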
Figure 1: Entity-Relationship Diagram of the LO Quincas' Shop
The object's metadata structure was also extended to support the OBAA accessibility metadata
(Vicari et al., 2010) in order to represent the accessibility information added to the object (Table
2).
Table 2: Part of the metadata developed in the OBAA project (Vicari et al., 2010).

Name | Description | Size | Values | Origin
10. Accessibility | The ability of the learning environment to adapt to the needs of each user/student | 1 | - | OBAA
10.1 Has Visual | Indicates whether the LO presents content with visual information | 1 | true, false | OBAA
10.2 Has Auditory | Indicates whether the LO presents content with auditory information | 1 | true, false | OBAA
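As a rough illustration of how an object might carry the two boolean fields of Table 2, the sketch below models them in plain Java. The class and its key/value serialization are our own illustration, not the OBAA standard's actual binding:

```java
import java.util.Locale;

public class ObaaAccessibility {
    private final boolean hasVisual;    // field 10.1: LO presents visual content
    private final boolean hasAuditory;  // field 10.2: LO presents auditory content

    public ObaaAccessibility(boolean hasVisual, boolean hasAuditory) {
        this.hasVisual = hasVisual;
        this.hasAuditory = hasAuditory;
    }

    // Renders the fields as simple key/value lines; the real OBAA standard
    // defines many more fields and a proper metadata binding.
    public String toMetadata() {
        return String.format(Locale.ROOT,
                "10.1 hasVisual=%b%n10.2 hasAuditory=%b", hasVisual, hasAuditory);
    }

    public static void main(String[] args) {
        // The adapted LO in this paper has both visual and auditory content.
        System.out.println(new ObaaAccessibility(true, true).toMetadata());
    }
}
```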
Implementation phase: in this phase the Java Access Bridge communication connection was implemented for the object.
Stage 4: Validation of Adapted Learning Objects
In the methodology we are following, the user is part of both the creation and the evaluation process of the LO. Users tested several versions of the adapted LO during the development phases and after development; the combination of all these tests forms the validation process of the adapted object.
Table 3 presents a synthesis of the validation process, including the initial evaluation performed in Stage 2.
We understand that validating the adaptation process with only a single individual is not enough, so these tests were conducted with five individuals with different visual conditions, including total blindness and two types of low vision (subnormal vision and peripheral-only vision). The individuals also had different technical skills, ranging from users with no knowledge of computers to users with intermediate knowledge. Two users were primary school students, one was a secondary school student, and the other two were primary school teachers who work with blind and low vision students in an inclusive school.
All tests were conducted on a notebook computer which, besides the learning object, contained several screen reader programs. The screen readers used in the tests were DOSVOX (Borges, 2006), ORCA (2009), and NVDA. These readers were selected because they are free, already known by the users, and used in schools. The test sessions were video recorded; their duration depended on the user's interest and knowledge. The observation protocol aimed to verify the accessibility level of the object, including all of its contents; it was not intended to check user competence.
Table 3: LO validation with the users.

Individual | Validation stage | Previous knowledge | Really accessible? (user's point of view)
A1 (only peripheral vision) | Initial evaluation, before the adaptation; LO with no adaptation resource. | Yes | No, because the Jaws screen reader could not read the Java application. This evaluation was done at the beginning of the research.
A2 (blind) | Validation during the middle of the adaptation process. | Yes | Not very accessible, as the audio was at a high level.
S (blind) | Last stage, validation with individuals. | Yes | Accessible; capable of being used in class with blind students.
A (subnormal vision) | Last stage, validation with individuals. | No | Approved the LO, yet needs more contact with computing material to adapt to the technologies.
M (blind) | Last stage, validation with individuals. | Yes | Accessible; approved the LO. Played the game again to improve performance.
This validation with users was essential for the adaptation of the LO. Without the blind and visually impaired subjects, we would not have had sufficient knowledge to apply the changes. Practical studies on accessibility are not enough unless there are users who use the adapted resources in their everyday tasks. Learning about screen readers and actually using screen readers are very different activities. The research reported here has shown us that developers, besides knowing the resources and languages, also need to know the target audience for which the LO is intended. Teachers who create educational material, in addition to knowing the content and features that will be used in class, also need to know the potentials and limitations of the students in their classes.
The interactions of the subjects with the LO are not reported in this article; this information is available in the original document (Dias, 2010). These interactions reaffirm the effectiveness of the methodology used in the creation and adaptation of accessible LO.
All four stages of the adaptation methodology can be synthesized in the workflow shown in Figure 2. The stages are continuous, and the validation of the adapted LO by testing subjects is a permanent task throughout all stages. This development and re-adaptation process can be employed either for the construction of LO or for the development of educational software.
Figure 2: Software development workflow.
Final Remarks
We believe that the results achieved with this research are important for the field of special education under an inclusive perspective. Such relevance is reaffirmed when we take into account that teachers are becoming producers of their own material and that students with special educational needs not only wish for diversified material, but also require inclusive material. Indeed, the process of adapting classroom material can be considered constant, as was identified during the interviews with teachers conducted for this research.
With the intention of assisting the development of LO, we set the goal of creating a methodology for adapting an existing LO into an accessible LO. To test the methodology we applied it to the Quincas' Shop LO, developed by the UNIFACS Salvador team. After the adaptation process the LO was used in two inclusive schools, where it was validated by blind and low vision subjects. The validation of the proposed methodology confirmed its feasibility and allowed us to infer the impact that the methodology can have on the educational process.
From this experience, we came to believe that these results will be useful for the development and adaptation of Learning Objects for students with special needs, particularly blind and low vision students. Other kinds of special needs are being considered in future work to extend the applicability of the methodology.
References
Behar, P., Passerino, L., Dias, C., Frozi, A. P., & Silva, K. K. (2009). Um Estudo sobre requisitos
pedagógicos para objetos de aprendizagem multi-plataforma. IFIP World Conference on Computer in
Education 2009. Bento Gonçalves, July.
Brazil. (1999). Presidência da Republica. Casa. Decreto Lei 3.298. Retrieved December 4th 1999 from
https://www.planalto.gov.br/ccivil/decreto/d3298.htm
Brazilian Ministry of Education. (2009). Ministerio da Educacao. Banco Internacional de Objetos Educacionais. [International Database of Educational Objects]. Brasilia, DF: MEC/SEESP, 2009.
Borges, A. (2006). Projeto Dosvox. [Dosvox Project]. Retrieved May 2008 from
http://intervox.nce.ufrj.br/dosvox
Chikofsky, E. J., & Cross, J. H. (1990). Reverse engineering and design recovery: A taxonomy. IEEE Software, 7(1), 13-18.
Coleman, D. (1996). Desenvolvimento orientado a objetos: o método fusion. [Object-oriented development:
the fusion method (1994)] Rio de Janeiro: Editora Campus.
Coll, C. (1996). Psicologia e Currículo. Uma aproximação psicopedagógica na elaboração do currículo
escolar. [Psicología y Curriculum. Una forma de psico-pedagogía del desarrollo curricular. Paidos
Iberica, Ediciones S. A. (15 Mar 1991)] Sao Paulo: Atica.
Dias, C. (2003). Usabilidade na Web: Criando portais acessíveis. Alta Books.
Dias, C. (2010). De olho na tela: requisitos de acessibilidade em objetos de aprendizagem para alunos
cegos e com limitação visual. [Keeping an Eye on the Screen: Application Accessibility for Learning
Objects for Blind and Limited Vision Students]. Dissertação de Mestrado, Universidade Federal do
Rio Grande do Sul. March.
Dias, C. & Passerino, L. M. (2008). Objetos de Aprendizagem e Acessibilidade: um estudo sobre objetos
acessíveis. XIX Simpósio Brasileiro em Informática na Educação. Fortaleza – November.
Granollers, T. (2004). Una metodología que integra la ingeniería del software, la interacción persona-ordenador y la accesibilidad en el contexto de equipos de desarrollo multidisciplinares. [A methodology integrating software engineering, human-computer interaction, and accessibility in the context of multidisciplinary development teams]. Tesis de doctorado, Universidad de Lleida, July.
Mittler P. (2003). Educação Inclusiva: contextos sociais. [Working Towards Inclusive Education: Social
Contexts. David Fulton Publishers, Jan 2001] Porto Alegre: ArtMed.
ORCA. (2009). Orca Screen Reader. Retrieved July 24th 2009 from http://live.gnome.org/Orca
RENAPI. (2009). Rede Nacional de Pesquisa e Inovação em Tecnologias Digitais. [National Web of Research and Digital Technology Innovation]. Retrieved March 10th 2009 from
http://bento.ifrs.edu.br/acessibilidade/index.php
RIVED/SEED. (2008). Rede Interativa Virtual de Educação [Interactive Web of Virtual Education]. Retrieved November 12th 2008 from http://www.rived.mec.gov.br/
Santanchè, A. (2008). Projeto Jogos e Objetos de Aprendizagem. [Project Games and LO]. Retrieved August 2008 from http://logames.sourceforge.net/
SEESP. (2003). Ministerio da Educacao. Diretrizes Nacionais para a Educação Especial na Educação
Básica. Brasilia, DF: MEC/SEESP, 2003.
Sonza, A. P. (2008). Ambientes Virtuais Acessíveis sob a perspectiva de Usuário com Limitação. Tese
Doutorado [Virtual Environments Accessible from the perspective of User with Limitation. PhD
thesis]. Universidade Federal do Rio Grande do Sul, Programa de Pós-Graduação em Educação, Porto
Alegre.
Sun. (2009a). Accessibility. Retrieved July 16th 2009 from http://www.sun.com/accessibility
Sun. (2009b). Developing Accessible JFC Applications - Test Cases. Retrieved July 16th 2009 from
http://www.sun.com/accessibility/docs/dev_access_apps.jsp#4
Sun. (2009c). Java Accessibility Quick Tips - Ensuring and Verifying Basic Application Accessibility. Retrieved July 16th 2009 from http://www.sun.com/accessibility/docs/java_access_tips.jsp
Takahashi, T. (1990) Programação Orientada a Objetos [Object-Oriented Programming]. Escola de Computação, São Paulo.
Vicari, R., Gluz, J. C., Passerino, L. M., Santos, E., Primo, T., Rossi, L., Bordignon, A., Behar, P., Filho,
R., & Roesler, V. (2010). The OBAA Proposal for Learning Objects Supported by Agents. Proceedings of MASEIE Workshop – AAMAS 2010. Toronto, Canada.
W3C. (2008). Web Accessibility Initiative. Retrieved April 6th 2008 from http://www.w3.org/WAI
Wiley, D. A. (2001). Connecting learning objects to instructional theory: A definition, a metaphor and a taxonomy. In D. A. Wiley (Ed.), The instructional use of learning objects. Retrieved May 2008 from http://www.reusability.org/read/chapters/wiley.doc
Wiley, D. A., & Nelson, L. M. (1998). The fundamental object. Retrieved August 20th 2008 from
http://wiley.ed.usu.edu/docs/fundamental.html
Biographies
Cristiani de Oliveira Dias holds a Master in Education from the Federal University of Rio Grande do Sul (UFRGS) in Brazil, a specialization in Informatics in Education from the Lutheran University of Brazil, and a Bachelor of Informatics from the Universidade da Região da Campanha. She is presently a PhD student in Computer Education at UFRGS, with a line of research in informatics in special education, working in groups that develop accessible Learning Objects and supporting software for distance learning focused on blind and limited-eyesight students.
Mail: [email protected]
Address: Av. Paulo Gama, 110, PGIE- room 340, CEP 90046-900 - Porto Alegre-RS Brazil
Phone/FAX: +55 -51 - 33083070
Liliana M. Passerino received her PhD in Computer Education in 2005 from the Federal University of Rio Grande do Sul (UFRGS), Brazil, and her MSc in Computer Science from UFRGS in 1992. She is currently a professor and researcher at UFRGS, Rio Grande do Sul, Brazil, working in post-graduate education and computer education. She is a Productivity Fellow of CNPq (National Research Council in Brazil), conducting research on inclusion, disability, and technology. Her main research is currently related to the development of educational technologies and methodologies in the field of visual impairment and pervasive developmental disorders, with a special focus on augmentative and alternative communication.
Mail: [email protected]
Address: Av. Paulo Gama, 110, PGIE- room 340, CEP 90046-900 - Porto Alegre-RS Brazil
Phone/FAX: +55 -51 - 33083070
João Carlos Gluz holds a PhD in Computer Science from the Federal University of Rio Grande do Sul, Brazil, and is an Associate Professor at Vale do Rio dos Sinos University (UNISINOS), Brazil.