Journal of Information Systems Technology & Planning
ISSN 1945-5259
Volume 5, Issue 12
Published and Sponsored by: Intellectbase International Consortium (IIC)
Editor-In-Chief
Dr. Maurice E. Dawson Jr., Alabama A&M University, USA
Contributing Editors
Dr. Khalid Alrawi, Associate Editor
Al-Ain University of Science and Technology, UAE
Dr. Jeffrey Siekpe, Associate Editor
Tennessee State University, USA
Dr. Frank Tsui, Associate Editor
Southern Polytechnic State University, USA
Mrs. Karina Dyer, Managing Editor
Intellectbase International Consortium, Australian Affiliate
Senior Advisory Board
Dr. Svetlana Peltsverger
Southern Polytechnic State University, USA
Dr. Kong-Cheng Wong
Governors State University, USA
Dr. Tehmina Khan
RMIT University, Australia
Dr. Sushil K. Misra
Concordia University, Canada
ISSN: 1945-5240 Print
ISSN: 1945-5267 Online
ISSN: 1945-5259 CD-ROM
Copyright ©2012 Intellectbase International Consortium (IIC). Permission to make digital or hard copies of all or part of this journal for
personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial use. All copies
must bear this notice and full citation. Permission from the Editor is required to post to servers, redistribute to lists, or utilize in a for-profit
or commercial use. Permission requests should be sent to Journal of Information Systems Technology & Planning (JISTP), 1615
Seventh Avenue North, Nashville, TN, 37208.
www.intellectbase.org
Published by Intellectbase International Consortium (IIC)
1615 Seventh Avenue North, Nashville, TN 37208, USA
Editor’s Message
My sincere gratitude goes to the Intellectbase International Consortium (IIC) program committee for their
hard work in producing Volume 5, Issue 12. In addition, I want to thank all of the Reviewers’ Task Panel
(RTP), Executive Editorial Board (EEB), Senior Advisory Board (SAB), and the Contributing & Managing
Editors (CME) for their efforts, which have made JISTP a successful and indexed academic Journal. They
work hard to review, comment and format the various research papers to fulfill accreditation standards. The
articles in this issue offer intellectual contributions and focus on the broadening of academic resources
and the continuous development and exchange of ideas among global research professionals.
This Journal covers general topics in Information Systems Technology (IST) and Planning which have
qualitative, quantitative or hybrid perspectives. JISTP examines contemporary topics of special interest to
practitioners, governments and academics that pertain to: organization infrastructure, management
information systems, technology analysis, strategic information systems, communication transformation and
decision support systems. Importantly, it highlights areas such as: (1) IST planning, technology innovation,
technology models and services; (2) Practitioners, academics and business leaders who have impact on
information systems adoption, both inside and outside of their organizations; and (3) Businesses that
examine their performance, strategies, processes and systems which provide further planning and
preparedness for rapid technological changes.
JISTP seeks research innovation & creativity and presents original topics. The goal of the Journal of
Information Systems Technology & Planning (JISTP) is to provide innovative research to the business,
government, and academic communities by helping to promote the interdisciplinary exchange of ideas on a
global scale. JISTP seeks international input in all aspects of the Journal, including content, authorship of
papers, readership, paper reviews, and Executive Editorial Board Membership. We continue to look for
individuals interested in becoming a reviewer for Intellectbase conference proceedings and Journals.
Potential reviewers should send a self-nomination to the editor at [email protected]. Reviewers may
also be asked to be part of the Executive Editorial Board (EEB) after they have established a positive record
of reviewing articles in their discipline. Also, I want to thank the Intellectbase International Consortium (IIC)
Team for their hard work in producing this Issue.
A COMMITMENT TO ACADEMIC EXCELLENCE
Articles published in the Journal of Information Systems Technology & Planning (JISTP) have
undergone rigorous blind review.
Intellectbase is one of the world's leading publishers of high-quality multi-disciplinary research in both
Academia and Industry. Intellectbase International Consortium has an unwavering commitment to providing
methodical Journal content and presenting it in a comprehensible format.
In the areas of integrity and journalistic excellence, Intellectbase maintains a high editorial standard.
Intellectbase publications are based on the most current research information available and are reviewed by
members of the Executive Editorial Board (EEB) and Reviewers’ Task Panel (RTP). When there is a lack of
research competence on a topic (conceptual or empirical), the EEB and RTP together provide extensive
feedback (based on what is known and accurate) to author(s).
For upcoming Intellectbase International Consortium (IIC) conferences, please visit the IIC website at:
www.intellectbase.org
Reviewers Task Panel and Executive Editorial Board
Dr. David White
Roosevelt University, USA
Dr. Dennis Taylor
RMIT University, Australia
Dr. Danka Radulovic
University of Belgrade, Serbia
Dr. Harrison C. Hartman
University of Georgia, USA
Dr. Sloan T. Letman, III
American Intercontinental University, USA
Dr. Sushil Misra
Concordia University, Canada
Dr. Jiri Strouhal
University of Economics-Prague, Czech Republic
Dr. Avis Smith
New York City College of Technology, USA
Dr. Joel Jolayemi
Tennessee State University, USA
Dr. Smaragda Papadopoulou
University of Ioannina, Greece
Dr. Xuefeng Wang
Taiyun Normal University, China
Dr. Burnette Hamil
Mississippi State University, USA
Dr. Jeanne Kuhler
Auburn University, USA
Dr. Alejandro Flores Castro
Universidad de Pacifico, Peru
Dr. Babalola J. Ogunkola
Olabisi Onabanjo University, Nigeria
Dr. Robert Robertson
Southern Utah University, USA
Dr. Debra Shiflett
American Intercontinental University, USA
Dr. Sonal Chawla
Panjab University, India
Dr. Cheaseth Seng
RMIT University, Australia
Dr. Jianjun Yin
Jackson State University, USA
Dr. R. Ivan Blanco
Texas State University – San Marcos, USA
Dr. Shikha Vyas-Doorgapersad
North-West University, South Africa
Dr. Tahir Husain
Memorial University of Newfoundland, Canada
Dr. James D. Williams
Kutztown University, USA
Dr. Jifu Wang
University of Houston Victoria, USA
Dr. Tehmina Khan
RMIT University, Australia
Dr. Janet Forney
Piedmont College, USA
Dr. Werner Heyns
Savell Bird & Axon, UK
Dr. Adnan Bahour
Zagazig University, Egypt
Dr. Mike Thomas
Humboldt State University, USA
Dr. Rodney Davis
Troy University, USA
Dr. William Ebomoyi
Chicago State University, USA
Dr. Mohsen Naser-Tavakolian
San Francisco State University, USA
Dr. Joselina Cheng
University of Central Oklahoma, USA
Reviewers Task Panel and Executive Editorial Board (Continued)
Dr. Mumbi Kariuki
Nipissing University, Canada
Dr. Khalid Alrawi
Al-Ain University of Science and Technology, UAE
Dr. Rafiuddin Ahmed
James Cook University, Australia
Dr. Natalie Housel
Tennessee State University, USA
Dr. Regina Schaefer
University of La Verne, USA
Dr. Nitya Karmakar
University of Western Sydney, Australia
Dr. Ademola Olatoye
Olabisi Onabanjo University, Nigeria
Dr. Anita King
University of South Alabama, USA
Dr. Dana Tesone
University of Central Florida, USA
Dr. Lloyd V. Dempster
Texas A & M University - Kingsville, USA
Dr. Farhad Simyar
Chicago State University, USA
Dr. Bijesh Tolia
Chicago State University, USA
Dr. John O'Shaughnessy
San Francisco State University, USA
Dr. John Elson
National University, USA
Dr. Stephen Kariuki
Nipissing University, Canada
Dr. Demi Chung
University of Sydney, Australia
Dr. Rose Mary Newton
University of Alabama, USA
Dr. James (Jim) Robbins
Trinity Washington University, USA
Dr. Mahmoud Al-Dalahmeh
University of Wollongong, Australia
Dr. Jeffrey (Jeff) Kim
University of Washington, USA
Dr. Shahnawaz Muhammed
Fayetteville State University, USA
Dr. Dorothea Gaulden
Sensible Solutions, USA
Dr. Brett Sims
Borough of Manhattan Community College, USA
Dr. Gerald Marquis
Tennessee State University, USA
Dr. Frank Tsui
Southern Polytechnic State University, USA
Ms. Katherine Leslie
American Intercontinental University, USA
Dr. John Tures
LaGrange College, USA
Dr. David Davis
The University of West Florida, USA
Dr. Mary Montgomery
Jacksonville State University, USA
Dr. Peter Ross
Mercer University, USA
Dr. Frank Cheng
Central Michigan University, USA
Dr. Van Reidhead
University of Texas-Pan American, USA
Dr. Vera Lim Mei-Lin
The University of Sydney, Australia
Dr. Denise Richardson
Bluefield State College, USA
Reviewers Task Panel and Executive Editorial Board (Continued)
Dr. Robin Latimer
Lamar University, USA
Dr. Reza Vaghefi
University of North Florida, USA
Ms. Alison Duggins
American Intercontinental University, USA
Dr. Jeffrey Siekpe
Tennessee State University, USA
Dr. Michael Alexander
University of Arkansas at Monticello, USA
Dr. Greg Gibbs
St. Bonaventure University, USA
Dr. Kehinde Alebiosu
Olabisi Onabanjo University, Nigeria
Dr. Mike Rippy
Troy University, USA
Dr. Gina Pipoli de Azambuja
Universidad de Pacifico, Peru
Dr. Steven Watts
Pepperdine University, USA
Dr. Andy Ju An Wang
Southern Polytechnic State University, USA
Dr. Ada Anyamene
Nnamdi Azikiwe University, Nigeria
Dr. Edilberto Raynes
Tennessee State University, USA
Dr. Nancy Miller
Governors State University, USA
Dr. Dobrivoje Radovanovic
University of Belgrade, Serbia
Dr. David F. Summers
University of Houston-Victoria, USA
Dr. George Romeo
Rowan University, USA
Dr. Robert Kitahara
Troy University – Southeast Region, USA
Dr. William Root
Augusta State University, USA
Dr. Brandon Hamilton
Hamilton's Solutions, USA
Dr. Natalie Weathers
Philadelphia University, USA
Dr. William Cheng
Troy University, USA
Dr. Linwei Niu
Claflin University, USA
Dr. Taida Kelly
Governors State University, USA
Dr. Nesa L’Abbe Wu
Eastern Michigan University, USA
Dr. Denise de la Rosa
Grand Valley State University, USA
Dr. Rena Ellzy
Tennessee State University, USA
Dr. Kimberly Johnson
Auburn University Montgomery, USA
Dr. Kathleen Quinn
Louisiana State University, USA
Dr. Sameer Vaidya
Texas Wesleyan University, USA
Dr. Josephine Ebomoyi
Northwestern Memorial Hospital, USA
Dr. Pamela Guimond
Governors State University, USA
Dr. Douglas Main
Eastern New Mexico University, USA
Dr. Vivian Kirby
Kennesaw State University, USA
Reviewers Task Panel and Executive Editorial Board (Continued)
Dr. Sonya Webb
Montgomery Public Schools, USA
Dr. Randall Allen
Southern Utah University, USA
Dr. Angela Williams
Alabama A&M University, USA
Dr. Claudine Jaenichen
Chapman University, USA
Dr. Carolyn Spillers Jewell
Fayetteville State University, USA
Dr. Richard Dane Holt
Eastern New Mexico University, USA
Dr. Kingsley Harbor
Jacksonville State University, USA
Dr. Barbara-Leigh Tonelli
Coastline Community College, USA
Dr. Joan Popkin
Tennessee State University, USA
Dr. William J. Carnes
Metropolitan State College of Denver, USA
Dr. Chris Myers
Texas A & M University – Commerce, USA
Dr. Faith Anyachebelu
Nnamdi Azikiwe University, Nigeria
Dr. Kevin Barksdale
Union University, USA
Dr. Donna Cooner
Colorado State University, USA
Dr. Michael Campbell
Florida A&M University, USA
Dr. Kenton Fleming
Southern Polytechnic State University, USA
Dr. Thomas Griffin
Nova Southeastern University, USA
Dr. Zoran Ilic
University of Belgrade, Serbia
Dr. James N. Holm
University of Houston-Victoria, USA
Dr. Edilberto A. Raynes
Tennessee State University, USA
Dr. Richard Dane Holt
Veterans' Administration, USA
Dr. Cerissa Stevenson
Colorado State University, USA
Dr. Rhonda Holt
New Mexico Christian Children's Home, USA
Dr. Donna Stringer
University of Houston-Victoria, USA
Dr. Yu-Wen Huang
Spalding University, USA
Dr. Lesley M. Mace
Auburn University Montgomery, USA
Dr. Christian V. Fugar
Dillard University, USA
Dr. Cynthia Summers
University of Houston-Victoria, USA
Dr. John M. Kagochi
University of Houston-Victoria, USA
Dr. Yong-Gyo Lee
University of Houston-Victoria, USA
Dr. Rehana Whatley
Oakwood University, USA
Dr. George Mansour
DeVry College of NY, USA
Dr. Venugopal Majur Shetty
Multimedia University, Malaysia
Reviewers Task Panel and Executive Editorial Board (Continued)
Dr. Peter Miller
Indiana Wesleyan University, USA
Dr. Carolyn S. Payne
Nova Southeastern University, USA
Dr. Ted Mitchell
University of Nevada, USA
Dr. Veronica Paz
Nova Southeastern University, USA
Dr. Alma Mintu-Wimsatt
Texas A & M University – Commerce, USA
Dr. Terence Perkins
Veterans' Administration, USA
Dr. Liz Mulig
University of Houston-Victoria, USA
Dr. Sue-Jen Lin
I-Shou University, China
Dr. Robert R. O'Connell Jr.
JSA Healthcare Corporation, USA
Dr. Kong-Cheng Wong
Governors State University, USA
Dr. P.N. Okorji
Nnamdi Azikiwe University, Nigeria
Dr. Azene Zenebe
Bowie State University, USA
Dr. James Ellzy
Tennessee State University, USA
Dr. Donn Bergman
Tennessee State University, USA
Dr. Padmini Banerjee
Delaware State University, USA
Dr. Yvonne Ellis
Columbus State University, USA
Dr. Aditi Mitra
University of Colorado, USA
Dr. Elizabeth Kunnu
Tennessee State University, USA
Dr. Myna German
Delaware State University, USA
Dr. Brian A. Griffith
Vanderbilt University, USA
Dr. Robin Oatis-Ballew
Tennessee State University, USA
Mr. Corey Teague
Middle Tennessee State University, USA
Dr. Dirk C. Gibson
University of New Mexico, USA
Dr. Joseph K. Mintah
Azusa Pacific University, USA
Dr. Susan McGrath-Champ
University of Sydney, Australia
Dr. Raymond R. Fletcher
Virginia State University, USA
Dr. Bruce Thomas
Athens State University, USA
Dr. Yvette Bolen
Athens State University, USA
Dr. William Seffens
Clark Atlanta University, USA
Dr. Svetlana Peltsverger
Southern Polytechnic State University, USA
Dr. Kathy Weldon
Lamar University, USA
Dr. Caroline Howard
TUI University, USA
Dr. Shahram Amiri
Stetson University, USA
Dr. Philip H. Siegel
Augusta State University, USA
Reviewers Task Panel and Executive Editorial Board (Continued)
Dr. Virgil Freeman
Northwest Missouri State University, USA
Dr. William A. Brown
Jackson State University, USA
Dr. Larry K. Bright
University of South Dakota, USA
Dr. M. N. Tripathi
Xavier Institute of Management, India
Dr. Barbara Mescher
University of Sydney, Australia
Dr. Ronald De Vera Barredo
Tennessee State University, USA
Dr. Jennifer G. Bailey
Bowie State University, USA
Dr. Samir T. Ishak
Grand Valley State University, USA
Dr. Julia Williams
University of Minnesota Duluth, USA
Dr. Stacie E. Putman-Yoquelet
Tennessee State University, USA
Mr. Prawet Ueatrongchit
University of the Thai Chamber of Commerce, Thailand
Dr. Curtis C. Howell
Georgia Southwestern University, USA
Dr. Stephen Szygenda
Southern Methodist University, USA
Dr. E. Kevin Buell
Augustana College, USA
Dr. Kiattisak Phongkusolchit
University of Tennessee at Martin, USA
Dr. Simon S. Mak
Southern Methodist University, USA
Dr. Reza Varjavand
Saint Xavier University, USA
Dr. Jay Sexton
Tennessee State University, USA
Dr. Stephynie C. Perkins
University of North Florida, USA
Dr. Katherine Smith
Texas A&M University, USA
Dr. Robert Robertson
Saint Leo University, USA
Dr. Michael D. Jones
Kirkwood Community College, USA
Dr. Kim Riordan
University of Minnesota Duluth, USA
Dr. Eileen J. Colon
Western Carolina University, USA
Mrs. Patcharee Chantanabubpha
University of the Thai Chamber of Commerce, Thailand
Mr. Jeff Eyanson
Azusa Pacific University, USA
Dr. Nelson C. Modeste
Tennessee State University, USA
Dr. Eleni Coukos Elder
Tennessee State University, USA
Mr. Wayne Brown
Florida Institute of Technology, USA
Dr. Brian Heshizer
Georgia Southwestern University, USA
Dr. Tina Y. Cardenas
Paine College, USA
Dr. Thomas K. Vogel
Stetson University, USA
Dr. Ramprasad Unni
Portland State University, USA
Dr. Hisham M. Haddad
Kennesaw State University, USA
Reviewers Task Panel and Executive Editorial Board (Continued)
Dr. Dev Prasad
University of Massachusetts Lowell, USA
Mrs. Donnette Bagot-Allen
Judy Piece – Monteserrat, USA
Dr. Murphy Smith
Texas A&M University, USA
Dr. Ya You
University of Central Florida, USA
Dr. Jasmin Hyunju Kwon
Middle Tennessee State University, USA
Dr. Christopher Brown
University of North Florida, USA
Dr. Nan Chuan Chen
Meiho Institute of Technology, China
Dr. L. Murphy Smith
Murray State University, USA
Dr. Zufni Yehiya
Tree Foundation - London, USA
Dr. Yajni Warnapala
Roger Williams University, USA
Dr. Sandra Davis
The University of West Florida, USA
Dr. Brad Dobner
Tennessee State University, USA
Dr. Katherine Taken Smith
Murray State University, USA
Dr. Ibrahim Kargbo
Coppin State University, USA
The Journal of Information Systems Technology & Planning (JISTP) is published semi-annually by Intellectbase
International Consortium (IIC). JISTP provides a forum for both academics and decision makers to advance their
presumptions in diverse disciplines that have an international orientation. Articles emerging in this Journal do not
necessarily represent the opinion of Intellectbase International Consortium (IIC) or any of the editors or reviewers.
JISTP is listed in Cabell's Directory of Publishing Opportunities in Computer Science – Business Information
Systems, ProQuest, Ulrich’s Directory and JournalSeek. In addition, JISTP is in the process of being listed in the
following databases: ABI Inform, CINAHL, ACADEMIC JOURNALS DATABASE and ABDC.
TABLE OF CONTENTS
EFFICIENT INTRUSION DETECTION BY USING MOBILE HANDHELD DEVICES FOR
WIRELESS NETWORKS
Somasheker Akkaladevi and Ajay K Katangur ...................................................................................... 1
ASPECTS OF INFORMATION SECURITY: PENETRATION TESTING IS CRUCIAL FOR
MAINTAINING SYSTEM SECURITY VIABILITY
Jack D. Shorter, James K. Smith and Richard A. Aukerman .............................................................. 13
IMPACT OF INFORMATION TECHNOLOGY ON ORGANIZATION AND MANAGEMENT THEORY
Dennis E. Pires ...................................................................................................................................... 23
IMPLEMENTING BUSINESS INTELLIGENCE IN SMALL ORGANIZATIONS USING THE
MOTIVATIONS-ATTRIBUTES-SKILLS-KNOWLEDGE INVERTED FUNNEL VALIDATION (MIFV)
Jeff Stevens, J. Thomas Prunier and Kurt Takamine .......................................................................... 34
ROA / ROI-BASED LOAD AND PERFORMANCE TESTING BEST PRACTICES: INCREASING
CUSTOMER SATISFACTION AND POSITIVE WORD-OF-MOUTH ADVERTISING
Doreen Sams and Phil Sams ................................................................................................................ 49
MEDIATION OF EMOTION BETWEEN USERS’ COGNITIVE ABSORPTION AND
SATISFACTION WITH A COMMERCIAL WEBSITE
Imen Elmezni and Jameleddine Gharbi ............................................................................................... 60
INVESTIGATING PERSONALITY TRAITS, INTERNET USE AND USER GENERATED
CONTENT ON THE INTERNET
Jeffrey S. Siekpe .................................................................................................................................... 72
WHY DO THEY CONSIDER THEMSELVES TO BE ‘GAMERS’?: THE 7ES OF BEING
GAMERS
M. O. Thirunarayanan and Manuel Vilchez .......................................................................................... 80
ACADEMIC INTEGRITY, ETHICAL PRINCIPLES, AND NEW TECHNOLOGIES
John Mankelwicz, Robert Kitahara and Frederick Westfall................................................................. 87
THE AUSTRALIAN CONSUMER LAW AND E-COMMERCE
Arthur Hoyle ......................................................................................................................................... 102
LIVE USB THUMB DRIVES FOR TEACHING LINUX SHELL SCRIPTING AND JAVA
PROGRAMMING
Penn Wu and Phillip Chang ................................................................................................................ 118
QUEUING THEORY AND LINEAR PROGRAMMING APPLICATIONS TO EMERGENCY
ROOM WAITING LINES
Melvin Ramos, Mario J. Córdova, Miguel Seguí and Rosario Ortiz-Rodríguez............................... 127
S. Akkaladevi and A. K. Katangur
JISTP - Volume 5, Issue 12 (2012), pp. 1-12
Full Article Available Online at: Intellectbase and EBSCOhost │ JISTP is indexed with Cabell’s, JournalSeek, etc.
JOURNAL OF INFORMATION SYSTEMS TECHNOLOGY & PLANNING
Journal Homepage: www.intellectbase.org/journals.php │ ©2012 Published by Intellectbase International Consortium, USA
EFFICIENT INTRUSION DETECTION BY USING MOBILE HANDHELD
DEVICES FOR WIRELESS NETWORKS
Somasheker Akkaladevi1 and Ajay K Katangur2
1Virginia State University, USA and 2Texas A&M University - Corpus Christi, USA
ABSTRACT
With the rapid growth of the internet and cutting-edge technology, there is a great need to automatically
detect a variety of intrusions. In recent years, the rapidly expanding area of mobile and wireless computing
applications has redefined the concept of network security. Even though wireless technology has opened a
new and exciting world, the biggest concern with both wireless and mobile computing applications is
security. The traditional approach of securing networks with firewalls, or even with stronger encryption
algorithm keys, is no longer effective. Intrusion Detection Systems (IDS) that use mobile agents are the
current trend and an efficient technique for detecting attacks on mobile wireless networks. The paper
provides an in-depth analysis of the weaknesses of wireless networks and discusses how an intrusion
detection system can be used to secure them. The paper reviews the foundations and methodologies of
intrusion detection systems and presents a way to secure these networks by using mobile agents. A mobile
agent is a type of agent with the ability to migrate from one host to another, where it can resume its
execution and collaborate in detection tasks. This technique still has some drawbacks. In this paper, we
propose an efficient approach to intrusion detection using mobile handheld devices.
Keywords: Mobile Agent, Network Security, Intrusion Detection Systems (IDS), Firewall, Intrusion,
Mobile Device.
INTRODUCTION
The security of data and of any computer system is always at risk. With the rapid growth of the internet and
cutting-edge technology, there is a great need to automatically detect a variety of intrusions. Computer
networks connected to the Internet are always exposed to intrusions. The intruder can access, modify or
delete critical information. Intrusion Detection Systems (IDS) can be used to identify and detect such
intrusions.
An intrusion can be defined as a set of events and actions that lead to the unauthorized modification of, or
access to, a particular system. The consequences of network intrusion can be denial of service, identity
theft, spam, etc. The purpose of an Intrusion Detection System (IDS) is to constantly monitor the computer
network, detect, where possible, any intrusions that have been perpetrated, and alert the concerned person
after the intrusion has been detected and recorded [Krishnun].
The Melissa virus, first found in 1999, used holes in Microsoft Outlook; Melissa shut down Internet mail
systems that became clogged with infected e-mails propagating from the worm. Once executed, the original
version of Melissa used a macro virus to spread to the first 50 addresses in the user’s Outlook address
book. The damage from this virus was estimated at $1.1 billion [Rahul]. The I LOVE YOU worm, first found
in 2000, spread quickly across the globe. Instead of sending a copy of the worm to the first 50 or 100
addresses in the host’s Outlook address book like Melissa, I LOVE YOU used every single address in the
host’s address book. The damage from this worm was estimated at $8.75 billion [R. P. Majuca]. The Code
Red worm, first found in 2001, exploited a vulnerability in Microsoft's Internet Information Server (IIS) web
servers to deface the host’s website, copy the command.com file, and rename it root.exe in the Web
server’s publicly accessible scripts directory. This would provide complete command-line control to anyone
who knew the Web server had been compromised. Code Red spread at a speed that overwhelmed network
administrators, as more than 359,000 servers became compromised in just over 14 hours. At its peak, more
than 2,000 servers were being compromised every single minute. Estimates are that Code Red
compromised more than 750,000 servers. Figure 1 shows the number of infected hosts over time [D.
Moore]. The growth of the curve between 11:00 and 16:30 UTC is exponential, which clearly shows the
speed of the infection. The damage from this worm was estimated at $2.6 billion. Similarly, other viruses
and worms such as NIMDA ($645 million damage), Klez ($18.9 billion damage), Sobig ($36.1 billion
damage) and Sasser ($14.8 billion damage) have caused huge losses to the IT industry.
Figure 1: The infection of Code Red worm against time
Even with highly secure firewall systems in place, all these viruses were able to penetrate the firewalls and
cause a great deal of destruction. Therefore, the traditional way of securing networks with firewalls or with
stronger encryption algorithm keys proves to be no longer effective. For example, the Internet worm known
as Code Red was intended to cause a disruption of service among Windows-based servers. It was detected
and caught on several occasions through the mobile computers of business travelers who used wireless
access to the Internet while attending conferences, where their laptops had an extremely high probability of
being infected by the worm [Sihan]. When, at a later stage, these laptops returned to their original base, i.e.
when they were connected back to their company network, the worm could spread from within, rendering
the firewall useless [Krishnun].
A mobile ad-hoc network (MANET) is a self-configuring network that comes together automatically from a
collection of mobile nodes without having to rely on a fixed infrastructure. Each node is therefore equipped
with a wireless transmitter and receiver to allow communication with every other node.
Expansion of information sharing has led to cloud storage and distribution through computer networks.
Internet transactions already use firewalls, cryptography, digital certificates, biometrics, and some forms
of IDS.
History has shown us that techniques such as encryption and authentication systems are not enough.
These can be the first line of defense. As the network system grows, complexity grows and the network
will need more defenses. An IDS can be considered as a second line of defense.
The main concern of an IDS is its responsibility for gathering and collecting activity information, which is
then analyzed to determine whether any intrusion has caused rules to be infringed. Once the IDS is certain
that an irregularity has occurred or that unusual activity has been recorded, it alerts the concerned person,
i.e. the system administrator. Even though numerous intrusion detection techniques have been developed
for wired networks, they do not adapt well to wireless networks because of their different characteristics.
INTRUSION DETECTION SYSTEM (IDS)
An Intrusion Detection System (IDS) is a software tool used to strengthen the security of information
and network communication systems. An IDS can be defined as a system that inspects all inbound and
outbound network activities and identifies suspicious patterns that may indicate either a system or
network attack from an intruder attempting to break in or compromise a system [Krishnun].
For example, a firewall connected to a network may block unauthorized hosts. This would reduce the
risk of an attack by restricting access to the network [Premaratne]. This includes reducing the paths that
an attacker can take to compromise a system by blocking unused and unsafe protocols and services.
This blocking can also reduce the visibility of vulnerabilities of the system to the attacker [Leversage].
On the other hand, an IDS actively monitors user traffic and behavior for anything suspicious.
If an attacker were to compromise an authorized host and launch an attack from the compromised host,
the firewall would be powerless. However, an IDS would be capable of detecting and reacting to the
suspicious behavior of the compromised host [Premaratne]. Due to this capability of an IDS, modern
network gateways combine the qualities of both firewalls and IDS for maximal security.
To combat attackers, intrusion-detection systems (IDSs) can offer additional security measures by
investigating configurations, logs, network traffic, and user actions to identify typical attack behavior.
However, an IDS must be distributed to work in a grid and cloud computing environment. It must
monitor each node and, when an attack occurs, alert other nodes in the environment. This kind of
communication requires compatibility between heterogeneous hosts, various communication
mechanisms, and permission control over system maintenance and updates [Vieira].
IDSs are divided based on models or data sources. Based on the model of detection, they can be
divided into misuse-based and anomaly-based. Based on data sources, they can be divided into host-based or network-based.
Misuse-Based and Anomaly-Based IDSs
Traditionally intrusion detection is done using misuse detection and anomaly detection methods.
Misuse detection techniques are based on expert systems, model-based reasoning systems, etc.
Misuse detection uses specific patterns of user behaviors that match well-known intrusion scenarios.
Anomaly detection techniques are based on historical patterns and also on statistical approaches.
Anomaly detection develops models of normal network behaviors, and new intrusions are detected by
evaluating significant deviations from the normal behavior. The advantage of anomaly detection is that
it may detect intrusions that have not been observed yet. Anomaly detection usually suffers from a high
false positive rate problem.
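The anomaly-detection idea described above can be illustrated with a minimal sketch: a baseline of "normal" traffic measurements is learned during training, and new observations are flagged when they deviate too far from it. The feature (packets per second), the z-score style test, and the threshold below are illustrative assumptions and not a model prescribed by the paper.

# Minimal sketch of statistical anomaly detection (illustrative only).
import statistics

class AnomalyDetector:
    def __init__(self, threshold=3.0):
        self.baseline = []          # observations collected during training
        self.threshold = threshold  # number of standard deviations tolerated

    def train(self, normal_observations):
        # Build the model of "normal" behavior from historical data.
        self.baseline = list(normal_observations)

    def is_anomalous(self, value):
        mean = statistics.mean(self.baseline)
        stdev = statistics.pstdev(self.baseline) or 1e-9
        # Flag a significant deviation from the learned normal behavior.
        return abs(value - mean) / stdev > self.threshold

# Example: packets per second observed on a link during normal operation
detector = AnomalyDetector()
detector.train([120, 130, 125, 118, 135, 128])
print(detector.is_anomalous(127))   # False: within the normal range
print(detector.is_anomalous(900))   # True: likely flood or scan

Note how this sketch also exhibits the weakness mentioned above: any legitimate but unusual burst of traffic would be reported, which is the source of the high false positive rate.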
Host-Based and Network-Based IDSs
A host-based IDS resides on the individual system being monitored or on a device in the network and
tracks changes made to important files, directories and other internal activities such as system calls.
A network-based IDS monitors and analyzes network traffic for attack patterns and suspicious behavior.
A sensor is required in each segment in which network traffic is to be monitored. These sensors can
sniff packets and use a data analyzer to analyze network, transport, and application protocols. When
a sensor detects a possible intrusion, alarms are raised and it will report to a central management
console, which will take care of the appropriate passive or active response. Communication between
the remote sensor and the management console should be secure to avoid interception or alteration by
the intruder.
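As a rough illustration of the signature-matching step such a sensor performs on sniffed traffic, the sketch below checks a packet payload against a small signature list and reports hits to a management console. The signature strings, their descriptions, and the print-based reporting channel are assumptions made for illustration only.

# Minimal sketch of a network-based sensor matching payloads against signatures.
SIGNATURES = {
    b"/root.exe":      "Code Red style probe",
    b"USER anonymous": "Suspicious FTP login attempt",
}

def report_to_console(alert):
    # In a real deployment this would be a secured channel to the central console.
    print(f"ALERT -> management console: {alert}")

def inspect_payload(payload: bytes):
    for pattern, description in SIGNATURES.items():
        if pattern in payload:
            report_to_console(description)

# Example: payload bytes handed to the sensor by whatever capture mechanism is in use
inspect_payload(b"GET /scripts/root.exe?/c+dir HTTP/1.0")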
The expansion of the networks has rendered conventional IDS insufficient. In order to mitigate these
deficiencies, Distributed Intrusion Detection Systems (DIDS) have been developed as a set of
disseminated sensors (or mobile agents) which collaborate in detection tasks [Macia]. The mobile
agents are small software components. They are light-weight and are capable of low to moderate
computations. However, current DIDS, built under a generally hierarchic architecture, display a lack of
scalability that makes the use of decentralized techniques mandatory [C. Kruegel].
The use of a substantial number of collaborating sensors, together with the volume of information they
generate and the growing speed of networks, hinders analysis and increases costs, making the use of
light, autonomous, detection-capable hardware mechanisms more and more appropriate [J. M.
Gonzalez]. Low-cost embedded devices with one or more sensors, interconnected through wired or
wireless networks integrated into the Internet, provide endless opportunities for monitoring and
controlling communication networks [Macia].
A distributed IDS consists of several IDS over a network, all of which communicate with each other or
are combined with a central server for monitoring the network. Carrying out intrusion detection from a
single central point in the network will not work: that approach depends on finding anomalies from
outside the network and it is also prone to DoS attacks. In this case, distributed intrusion detection using
various IDS is a promising solution.
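A minimal sketch of this arrangement is given below: every node runs its own sensor and shares alerts with its peers, so detection does not depend on a single central point. The node names and the in-process "transport" are illustrative assumptions.

# Hedged sketch of distributed sensors sharing alerts with their peers.
class PeerSensor:
    def __init__(self, name):
        self.name = name
        self.peers = []          # other PeerSensor instances on the network

    def raise_alert(self, description):
        print(f"[{self.name}] local alert: {description}")
        for peer in self.peers:  # broadcast so every node learns of the attack
            peer.receive(self.name, description)

    def receive(self, source, description):
        print(f"[{self.name}] alert relayed from {source}: {description}")

# Example: three cooperating sensors, with no single point of failure
a, b, c = PeerSensor("node-a"), PeerSensor("node-b"), PeerSensor("node-c")
a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]
a.raise_alert("UDP portscan observed")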
PROBLEMS WITH WIRELESS NETWORKS
A wireless network environment is greatly vulnerable to malicious attacks because of the use of a
wireless medium. With wireless transmission the data is wide open in the air, unlike a wired medium
where it flows on a wire. On a wired network an intruder must gain access to network wires by
bypassing firewalls and gateways, whereas on a wireless network attacks could come from anywhere
and any node on the wireless network could be targeted. Such attacks could result in the loss or
leakage of information, which could be very costly to the organization. The bottom line of using a
wireless ad-hoc network is that there is no clear defense line of security, and every node or access
point must be ready and prepared to secure the network against direct or indirect attacks from
outsiders.
Another issue with wireless access points (APs) or nodes is that they are uncontrolled units able to
operate on their own. If those units do not have adequate physical protection, they can very easily be
attacked in a matter of minutes, captured, compromised or hijacked. Zhang et al. mention that tracking
down a particular node in a global-scale network is not an easy task to perform, and attacks by a node
that has been compromised within the network are far more damaging and much harder to trace.
Therefore all wireless APs or nodes must be adjusted to behave in such a way that no peer is trusted.
The MAC protocols used in the wireless channel are easily attackable [Zhang]. In a contention-based
method, each wireless AP competes for control of the transmission channel every time a message is
sent, and the nodes must follow pre-defined protocols to avoid collisions; in a contention-free method,
each node must request from all the other nodes undisputed, exclusive use of the channel resource
when transmitting. Regardless of the MAC protocol in place, this sometimes results in a Denial of
Service (DoS) attack. Such an attack would not occur in a wired network, because the MAC layer and
the physical layer are segregated from the outside world, with the segregation occurring at layer 3 of
the gateway or firewall. This is definitely not the case with wireless ad-hoc networks, where every
wireless node is completely isolated and unprotected in the wireless communication medium.
Another reason why applications in wireless networks can be viewed as a weak point is that these
networks often make use of proxies and software agents that run in the base stations (BS) and in
in-between nodes to attain performance gains through traffic shaping, caching, or content transcoding.
Attacks could therefore target these proxies or software agents in order to steal sensitive information,
or simply coordinate a DoS attack by overflowing the cache with poor or fake references or by forcing
the content transcoder to compute futilely [Sihan].
Hence, to recapitulate, a wireless network is exposed to attacks because of its inability to effectively
secure its medium of communication, its inability to support central monitoring, and its dynamic,
frequently changing network topology. Therefore, it can be deduced that further in-depth research is
needed to cover these weaknesses in the area of wireless network communication.
LIMITATIONS OF IDS
It is always a challenging task to distinguish legitimate use of a system from a possible intrusion. When
an IDS incorrectly identifies an activity as a possible intrusion, it raises a false alarm, also referred to
as a false positive. Anomaly detection usually suffers from a large number of false alarms. This is
mainly because of the dynamic and unpredictable behavior of network activity. These detection
systems also need extensive training on normal behavior patterns in order to characterize them.
Major Problems with Current IDS
The major problems encountered by current IDS are:
1. Standalone-based IDS: In this type of IDS, the architecture is normally based upon each node
running separately in order to locate intrusions if they are perpetrated. This works efficiently in
encountering intrusions, but it cannot stop all of them, as it cannot accommodate signatures for the
different types of attacks that come out every day and it needs to be updated with new rules all the
time. Moreover, this type of IDS is very expensive and is not a suitable solution for small companies,
as it still cannot prevent all intrusions. Every decision is based upon the information collected at each
individual node, since all the nodes are independent and work individually, as the name “standalone”
implies. Besides being totally isolated, the nodes on a network do not know anything about the other
nodes on the same network; no data is exchanged, and hence no alert information is passed on.
Even though restricted by these limitations, this approach is more adaptable in situations where each
node can run an IDS on its own. It is much better suited to a flat (wired) network architecture and is
not suitable for wireless networks.
2. Distributed Intrusion Detection System (DIDS): An architecture based on DIDS for wireless networks
was presented in [Zhang]. In that architecture an IDS agent runs on top of each node, sensor or
agent, and the IDS agent is broken into six different modules. Figure 2 gives a clear illustration of the
six components of the IDS agent. In this architecture every single node has a crucial role to play:
each node has the responsibility for detecting any signs of intrusion and for contributing, individually
or collectively, to protecting the network. This is achieved through the different parts of the IDS agent
illustrated in Figure 2, where the “local data collection” module collects real-time data on both user
and system activities within the radio transmission range. The IDS also triggers a response if an
intrusion is detected. However, if an anomaly in the local data calls for a broader search, the
neighboring IDS agents collectively join in global intrusion detection actions (a minimal sketch of this
flow is given after Figure 2). These individually isolated IDS agents are thus linked together to form
the IDS system defending the mobile wireless network. This type of architecture offers a promising
choice for looking out for different types of intrusions, but the major problem with such systems is that
the nodes, sensors or agents have low processing capability and they are set up to look for a
predefined set of attacks. Once an attack which is not defined for that sensor makes it through, it can
proceed all the way along. Periodic updating of mobile agents or sensors is a difficult task to achieve
as a result of the low memory capabilities of these sensors.
This research work presents a new and innovative approach to solving the problem of intrusion
detection by using mobile handheld devices and some standalone hosts.
Figure 2: Conceptual Model for an IDS Agent
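The per-node agent behavior described above (local data collection, local detection, and escalation to neighbors when the local evidence is inconclusive) can be sketched as follows. The module boundaries, the placeholder detection logic, and the voting rule are assumptions for illustration and do not reproduce the exact six-module design of [Zhang].

# Hedged sketch of an IDS agent with local and cooperative (global) detection.
class IDSAgent:
    def __init__(self, name, neighbors=None):
        self.name = name
        self.neighbors = neighbors or []   # other IDSAgent instances in radio range

    def local_data_collection(self, activity):
        # In a real agent this would gather user/system activity in real time.
        return activity

    def local_detection(self, data):
        # Placeholder logic: returns "intrusion", "inconclusive", or "normal".
        if data.get("failed_logins", 0) > 10:
            return "intrusion"
        if data.get("failed_logins", 0) > 3:
            return "inconclusive"
        return "normal"

    def analyze(self, activity):
        data = self.local_data_collection(activity)
        verdict = self.local_detection(data)
        if verdict == "intrusion":
            print(f"[{self.name}] local response triggered")
        elif verdict == "inconclusive":
            # Cooperative detection: ask neighboring agents to weigh in.
            votes = [n.local_detection(data) for n in self.neighbors]
            if votes.count("intrusion") + votes.count("inconclusive") >= len(votes) / 2:
                print(f"[{self.name}] global response triggered with neighbors")

# Example: one agent escalates a suspicious but inconclusive observation
a, b = IDSAgent("node-a"), IDSAgent("node-b")
c = IDSAgent("node-c", neighbors=[a, b])
c.analyze({"failed_logins": 5})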
A NEW ARCHITECTURE FOR WIRELESS NETWORKS USING MOBILE
DEVICE BASED IDS
This research presents a solution to the problems outlined above. Rather than utilizing the large signature
database of a standalone commercial IDS, which can be very expensive (in the range of $50,000), this
concept uses multiple handheld devices, each comprising a small database of signatures related to certain
types of attacks. The work proposed in this paper is motivated by the fact that it is easier and less time
consuming to continuously update small signature databases from time to time than a large
complementary signature database. By doing this we can also improve the throughput of a signature-based
IDS, since a packet needs to be matched against fewer signatures in a small signature database compared
to one with a huge number of signatures. This idea is not new; in fact, it has been suggested in an
installation note to system administrators of Snort [J. L. Sarang]. To turn this idea into an effective one, we
need to address three major issues.
1. How to decide whether a given signature is likely to be helpful for possible attacks? We need
systematic guidelines on whether to remove a given signature. Currently, configuring the signature
database is still a manual trial-and-error process of disabling some signatures and adding them
back in after missing some attacks.
2. What to do if we make the wrong choice and classify a useful signature as unlikely and remove it
from the database? How to protect the network in this case?
3. What to do once a new service or protocol is added to the network? We cannot completely rely on
the administrators to remember to manually add the corresponding signatures to the database.
This process is labor-intensive and can be error prone.
We solve these problems by using small signature databases containing the most frequent attack
signatures on mobile devices, and a bigger complementary signature database containing thousands of
signatures used to update smaller databases from time to time using mobile agents on a stand-alone
computer. When distributing the signature database between multiple nodes, it is important to consider two
main aspects of the proposed model.
1. The signature database size on the mobile nodes may not always be equal; it depends on the
algorithm to update the small signature databases.
2. The size of the databases on nodes dynamically changes to optimize the detection rate on more
likely imminent threats.
In this research the IDS is implemented on mobile devices along with a couple of host computers.
Nowadays mobile devices pack a lot of power, with dual-core mobile processors already available in the
market. These mobile devices can do high-speed processing compared to devices from a couple of years
back. They come with high-speed processors, more memory, and long battery life. These specifications
make these devices an ideal choice for implementing an IDS. A new IDS model is presented as shown
in Figure 3.
Figure 3: Model of IDS
In Figure 3, SD refers to a small database of signatures on a mobile device, and LD refers to a relatively
large database of signatures available on a computer. The traffic from the outside world is first processed
at the firewall according to the rules specified on the firewall, and traffic not intended for this network is
dropped. The data is then propagated along the network, with the signatures available on the mobile
devices applied to it, and appropriate actions are taken according to the output from each mobile device.
Eventually the data passes through an inexpensive desktop-based IDS comprising a large database, which
catches the intrusions that were missed by the small databases on the mobile devices. The bidirectional
lines between the mobile devices and the computer show that signatures can be updated and configuration
messages exchanged between these devices. The mobile devices are regularly updated with new
signatures from the large database when the large database detects frequent intrusions of a given type. An
algorithm is provided below to handle the addition
and deletion of signatures to the mobile device database.
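The layered inspection flow of Figure 3 can be summarized in a small sketch: traffic that clears the firewall is checked against each mobile device's small database (SD) and finally against the desktop's large database (LD). The signature strings and database contents below are assumptions chosen only to make the flow concrete.

# Hedged sketch of the SD-then-LD inspection pipeline in Figure 3.
SMALL_DBS = [                      # one small signature set per mobile device
    {b"root.exe", b"cmd.exe"},
    {b"../..", b"%00"},
]
LARGE_DB = {b"root.exe", b"cmd.exe", b"../..", b"%00", b"teardrop", b"mstream"}

def inspect(packet_payload: bytes) -> str:
    for i, sd in enumerate(SMALL_DBS):              # fast checks on the mobile devices
        if any(sig in packet_payload for sig in sd):
            return f"dropped by mobile device {i} (SD match)"
    if any(sig in packet_payload for sig in LARGE_DB):
        return "dropped by desktop IDS (LD match)"  # catches what the SDs missed
    return "forwarded to the internal network"

print(inspect(b"GET /scripts/root.exe HTTP/1.0"))
print(inspect(b"GET /index.html HTTP/1.0"))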
IMPLEMENTATION AND RESULTS
The above-discussed model has been implemented using Snort [Sourcefire] as the signature-based
IDS. Snort stores its signatures in rule files referenced in the Snort configuration file.
An algorithm is developed for mobile devices to create the most-frequent-signature databases for the
various mobile-based IDS, as well as to generate the large database of signatures for the desktop-based
IDS. The same algorithm runs on all mobile-based and desktop-based IDS systems at certain intervals
to keep the signature databases constantly updated. This is done by removing signatures that are
no longer occurring frequently, and adding any signature detected as a frequent alert by the
desktop-based IDS. The algorithm shown in Figure 4 is developed to carry out this process.
N is the total number of current signatures
Get the signatures that are detected from the large signature database, LD.
for every intrusion whose signature is classified as int_sig in LD do
    if N <= Max_Sig and number of occurrences of int_sig >= Num_Occur
            and last detection time of int_sig >= Max_Time then
        delete the signature from LD
        add the signature to the mobile device SD
        N = N + 1
    endif
endfor
Restart the multiple and complementary IDS
Figure 4: Algorithm for updating the signature database on all devices
This algorithm is based on the following parameters:
N, the total number of current signatures;
Num_Occur, the minimum number of occurrences of an intrusion before its attack signature is considered
frequent enough to be added to the mobile device's small database;
Max_Time, the time after which an attack would be threatening to the network if no action to resolve it
were taken; and
Max_Sig, the maximum number of signatures that can be stored in the IDS.
In other words, besides the running count N, the algorithm accepts three input parameters: the minimum
attack frequency (Num_Occur), the time window within which observed attacks are considered valid and
threatening (Max_Time), and the maximum number of signatures acceptable in all IDS (Max_Sig).
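To make the update procedure of Figure 4 concrete, the following is a minimal Python sketch that follows the pseudocode above. The Signature record layout, the in-memory lists standing in for LD and SD, and the restart hook are assumptions, and the recency test reflects one reading of the "last detection time" condition rather than the paper's exact implementation.

# Hedged sketch of the Figure 4 signature-update algorithm.
import time
from dataclasses import dataclass

@dataclass
class Signature:
    rule: str
    occurrences: int
    last_detected: float   # epoch seconds of the most recent alert

def update_databases(large_db, small_db, max_sig, num_occur, max_time):
    """Move frequently triggered signatures from the large desktop database (LD)
    to the small mobile-device database (SD)."""
    n = len(small_db)                      # N: current number of signatures in SD
    now = time.time()
    for sig in list(large_db):             # every intrusion signature detected in LD
        recently_seen = (now - sig.last_detected) <= max_time
        if n <= max_sig and sig.occurrences >= num_occur and recently_seen:
            large_db.remove(sig)           # delete the signature from LD
            small_db.append(sig)           # add the signature to the mobile device SD
            n += 1
    restart_ids()                          # restart the multiple and complementary IDS

def restart_ids():
    # Placeholder: in practice this would reload Snort with regenerated rule files.
    pass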
EXPERIMENTAL SETUP
All the experiments were performed under the assumption that the attackers have more resources than
the IDS. This is the appropriate situation to consider, rather than the other way around. DoS attacks on
the target network were performed from very fast machines running quad-core processors. The effects
of having multiple smaller signature databases, and how effectively they help in improving the
throughput and decreasing the packet loss rate, were evaluated. A small packet loss rate directly leads
to a smaller possibility of missing real attacks that might be hidden in false-positive storms.
The systems considered are as follows:
Attacking System: Mac OS X with Intel quad-core 2.8 GHz processor, 8 GB of RAM, 10/100/1000 Mbps NIC
IDS system: Cisco firewall, 4 Apple iPhone 3GS devices running iOS 4, Mac OS X with Intel dual-core
2.4 GHz processor, 2 GB of RAM, 10/100 Mbps NIC
The network was initially trained by launching attacks against the target network. During the training
period, the attacking tools used were IDSwakeup [S. Aubert], Stick [Coretez], Sneeze [D. Bailey], and
Nikto2 [Sullo], to trigger alerts in the IDS system and create a baseline of the most frequent attacks on
the designed network.
IDSwakeup is designed to test the functionality of the IDS by generating some common attack
signatures to trigger IDS alarms. Stick is used with Snort configuration files to reverse engineer threats
and create packets with signatures in the same way as those detected by Snort as attacks. It can also
be used as an IDS evasion tool by generating a lot of traffic and camouflaging the real attacks in a
flood of false positives. Sneeze is a Perl-based tool that is very similar to Stick in terms of
functionalities. It distinguishes itself from Stick by the fact that it can accept Snort’s rules at runtime and
dynamically generate attack packets, whereas Stick needs to be configured with Snort’s rules at
compilation time. Nikto2 focuses on web application attacks by scanning and testing web servers and
their associated CGI scripts for thousands of potential vulnerabilities.
Table 1 shows the signature and the number of times it has been detected by the IDS during the
training phase.
Table 1: Occurrences of Signature Detections

Signature                                        Occurrences
ICMP Destination Unreachable Communication            22
TCP Portscan                                            8
UDP Portscan                                            5
ICMP Large ICMP Packet                                 10
Teardrop Attack                                         6
MISC gopher proxy                                       5
FINGER query                                           18
FTP command overflow attempt                           69
SNMP request udp                                       37
ICMP Echo Reply                                      2300
DDOS mstream client to handler                         31
BACKDOOR Q access                                      14
WEB-CGI search.cgi access                               8
TELNET SGI telnetd format bug                          13
BAD-TRAFFIC tcp port 0 traffic                         36
ANALYSIS
To test the performance of the designed IDS, we conducted our experiment using two different test
cases. In the first case, we manually enabled more than 4700 signatures and attacked the network
using Sneeze. In the second case, we automated the attack process and enabled only the most frequent
signatures (52 signatures were enabled). Table 2 shows the results of the tests with regard to their effect
on the packet drop rate (throughput). These values were averaged over 10 test case iterations.
Table 2: Throughput for the test cases considered

                               Test Case 1    Test Case 2
Packets received                   796977         866691
Packets analyzed                   783774         839538
Packets dropped by the IDS         103746          57874
% of Packets dropped               13.02%          6.67%
From Table 2 it is clearly evident that there is a difference in the performance of the IDS when using a
small signature database (52 signatures) compared to a large database (more than 4700). For the
system designed and developed, by reducing the size of the database by a factor of almost 91
(4700/52), we were able to decrease the percentage of dropped packets by 6.35 percentage points,
from 13.02% to 6.67%. This is a major improvement in reducing the possibility that a real worm attack
could sneak in amidst the packets dropped by the IDS system, while ensuring that all immediate and
dangerous threats and intrusions are detected by the IDS system.
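The figures quoted above follow directly from Table 2; the short calculation below is only a sanity check of those numbers.

# Quick check of the figures quoted above, using the values in Table 2.
reduction_factor = 4700 / 52        # size of the large database relative to a small one
drop_rate_change = 13.02 - 6.67     # change in the percentage of dropped packets
print(round(reduction_factor, 1))   # ~90.4, i.e. a factor of almost 91
print(round(drop_rate_change, 2))   # 6.35 percentage points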
CONCLUSION
This paper presents a new model of Intrusion detection using mobile agents in wireless networks. This
paper discussed the foundations and architecture of intrusion detection methods. Intrusion detection
techniques are continuously evolving, with the goal of detecting intrusions and thereby protecting and
improving our network security. Every detection technique has issues in fully detecting intrusions. A new
model of IDS for wireless networks, which is cost-effective in real-world situations compared to a
commercial IDS system, has been developed using low-cost mobile devices. The experiments performed
using the developed system in a real-world situation clearly showed a significant decrease in the packet
drop rate and, as a result, a significant improvement in detecting threats to the network. The ideas
presented constitute an important stepping stone for further research in IDS. The system can be further
improved by considering a combination of hardware-based IDS and distributed mobile agents at various points in
the network to improve on the efficiency of the IDS system.
REFERENCES
Coretez, G. Fun with Packets: Designing a Stick, Endeavor Systems, Inc. 2002.
C. Kruegel, F. Valeur, and G. Vigna, Intrusion Detection and Correlation: Challenges and Solutions.
New York: Springer-Verlag, 2005.
D. Bailey, "Sneeze," http://www.securiteam.com/tools/5DP0T0AB5G.html, 2011.
D. J. Leversage and E. J. Byres, “Estimating a system’s mean time to compromise,” IEEE Security
Privacy, vol. 6, no. 1, pp. 52–60, Jan./Feb. 2008.
D. Moore, C. Shannon, and J. Brown, “Code-Red: a case study on the spread and victims of an Internet
worm,” in Proceedings of Internet Measurement Workshop (IMW), Marseille, France, November
2002.
J. L. Sarang Dharmapurikar, "Fast and Scalable Pattern Matching for Content Filtering," presented at
Proceedings of the 2005 symposium on Architecture for networking and communications systems
ANCS '05, 2005.
J. M. Gonzalez, V. Paxson, and N. Weaver, “Shunting: A hardware/software architecture for flexible,
high-performance network intrusion prevention,” in Proc. ACM Comput. Commun. Security,
Alexandria, VA, 2007, pp. 139–149.
Krishnun Sansurooa, Intrusion Detection System (IDS) Techniques and Responses for Mobile Wireless
Network, Proceedings of the 5th Australian Information Security Management Conference,
December 2007
Macia-Perez, F.; Mora-Gimeno, F.; Marcos-Jorquera, D.; Gil-Martinez-Abarca, J.A.; Ramos-Morillo, H.;
Lorenzo-Fonseca, I.; , "Network Intrusion Detection System Embedded on a Smart Sensor,"
Industrial Electronics, IEEE Transactions on , vol.58, no.3, pp.722-732, March 2011
Premaratne, U.K.; Samarabandu, J.; Sidhu, T.S.; Beresh, R.; Jian-Cheng Tan; , "An Intrusion Detection
System for IEC61850 Automated Substations," Power Delivery, IEEE Transactions on , vol.25,
no.4, pp.2376-2383, Oct. 2010
Rahul Telang and Sunil Watta, An Empirical Analysis of the Impact of Software Vulnerability
Announcements on Firm Stock Price, IEEE Transactions on Software Engineering, Vol. 33, No. 8,
August 2007.
R. P. Majuca, W. Yurcik, and J. P. Kesan, The evolution of cyberinsurance. In ACM Computing
Research Repository (CoRR), Technical Report cs.CR/0601020, 2006
S. Aubert, "IDSwakeup," http://www.hsc.fr/ressources/outils/idswakeup/index.html.en, 2011
Sihan Qing, Weiping Wen, A survey and trends on Internet worms, Computers & Security, Volume
24, Issue 4, June 2005, Pages 334–346, Elsevier
Sourcefire,"Snort," http://www.snort.org, 2011, open source network intrusion prevention and detection
system (IDS/IPS) software
Sullo C, Lodge D "Nikto,” http://www.cirt.net/nikto2, 2011
Vieira, K.; Schulter, A.; Westphall, C.B.; Westphall, C.M.; , "Intrusion Detection for Grid and Cloud
Computing," IT Professional , vol.12, no.4, pp.38-43, July-Aug. 2010
Yongguang Zhang, W. Lee, and Y. Huang, Intrusion Detection Techniques for Mobile Wireless
Networks, ACM Wireless Networks Journal, Vol. 9, No. 5, pp. 545-556, Sep 2003.
J. D. Shorter, J. K. Smith and R. A. Aukerman
JISTP - Volume 5, Issue 12 (2012), pp. 13-22
Full Article Available Online at: Intellectbase and EBSCOhost │ JISTP is indexed with Cabell’s, JournalSeek, etc.
JOURNAL OF INFORMATION SYSTEMS TECHNOLOGY & PLANNING
Journal Homepage: www.intellectbase.org/journals.php │ ©2012 Published by Intellectbase International Consortium, USA
ASPECTS OF INFORMATION SECURITY: PENETRATION TESTING IS
CRUCIAL FOR MAINTAINING SYSTEM SECURITY VIABILITY
Jack D. Shorter, James K. Smith and Richard A. Aukerman
Texas A&M University - Kingsville, USA
ABSTRACT
Penetration testing is the practice of testing computer systems, networks or web applications
for their vulnerabilities and security weaknesses. These tests can either be conducted by
automated software or manually [6]. Wikipedia has defined penetration testing as “a method
of evaluating the security of a computer system or network by simulating an attack from a malicious
source...” [11]. There are many different levels of penetration tests that can be performed on an
organizations network and security infrastructure. These tests can range from simply attempting a brute
force password attack on a particular system, all the way up to a simulated attack including, but not
limited to social engineering.
Keywords: Penetration Testing, Black Box Penetration Testing, White Box Penetration Testing,
Scanning and Enumeration, Target Testing, Internal Testing, Blind Testing, Double Blind
Testing.
INTRODUCTION
Penetration testing is the practice of testing computer systems, networks or web applications for their
vulnerabilities and security weaknesses. These tests can either be conducted by automated software or
manually. The objectives of these tests are to provide valuable security information to the company
being tested, and to serve as a blueprint for the areas that need improvement. Besides security issues,
penetration testing is also performed to monitor an “organization's security policy compliance, its
employees' security awareness and the organization's ability to identify and respond to security
incidents”. Penetration tests are sometimes referred to as “White Hat attacks” due to the break-ins
being conducted by information systems personnel who were asked to supply this service [6].
There are countless guides, books, and resources available for the modern network administrator or
information technology executive to facilitate making the sensitive data on their networks and computer
systems secure. He or she can have a highly secure network by having a robust Acceptable Use Policy
(AUP), Security Policy (SP), along with well trained staff in both the information technology department
and the organization as a whole. The data on the computers within the network can be protected by the
most expensive up-to-date firewalls and encryption methods. The equipment can be physically
protected by video monitoring, multiple security guards, and locked doors that lead to man-trap
hallways. When all of this security is in place, how can a network administrator be confident that the
security measures that he has implemented to protect his organization's data are actually working? To
make sure that the strategies are effective, a penetration test should be performed at regular intervals
or when a change has been made to the organization's information systems or policies [13].
Penetration Testing Strategies
There are five basic types of penetration testing used for evaluation purposes: target testing,
external testing, internal testing, blind testing, and double blind testing [6].
Target testing is performed jointly by the organization's IT team and the penetration testing team. This
test is referred to as the “lights-turned on” approach since nothing is secretive and everyone can see
the test being performed.
External Testing is used to target a “company's externally visible servers or devices including domain
name servers (DNS), e-mail servers, Web servers or firewalls”. The objective of this test is to see if an
external user can access the company's information and, if so, to what extent they are able to
access it [6].
Internal Testing is somewhat similar to external testing as far as its purpose. However, the main reason
for performing an internal test is to see if any employees who have access to data are practicing acts
with malicious intent [6].
Blind Testing is a “strategy that simulates the actions and procedures of a real attacker by severely
limiting the information given to the person or team that's performing the test beforehand”. Blind Testing
can require a sufficient amount of time and can be very costly. Double Blind Testing takes Blind Testing
to another level. As few as one or two people are aware of any testing that is being conducted. Double
Blind Testing can be useful for testing “an organization's security monitoring and incident identification
as well as its response procedures” [6].
How & When
When conducting a penetration test, many questions are asked first, such as “Can a hacker get to our
internal systems and data from the Internet?” “Can you simulate real-world tactics and identify what an
automatic vulnerability scan misses?” “Are my web-hosting site and the service providers connected to my
network as secure as they say they are?” “Is my email traffic available for others to see?” These
questions can give business security directors nightmares. These nightmares trigger their need to
conduct a penetration test [12]. There are four steps that are usually taken when conducting a
penetration test. The four steps taken are Reconnaissance, Enumeration, Research and Evaluation,
and Penetration Testing Analysis. With reconnaissance, the purpose is to identify the system's
components and the data network. Enumeration is used to “Determine the application and network level
services in operation for all identified assets”. The key step in this entire procedure is the research and
evaluation aspect in which the vulnerabilities, bugs and configuration concerns are all addressed and
brought to the company’s attention. The final step is to decide what solutions are necessary to fix the
problems found by the penetration tests. “This is used to develop findings along with impact
descriptions and recommendations that take into account your individual business and network
environment” [12].
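As a rough illustration of the four-step flow just described, the sketch below organizes the steps as plain Python functions feeding a findings report. The host addresses, service names, and recommendation text are hypothetical placeholders, not data from the article; real engagements rely on dedicated tooling and an agreed scope.

```python
# Minimal sketch of the four-step penetration testing flow described above.
# All hosts, services, and findings are illustrative placeholders.

def reconnaissance(target: str) -> dict:
    """Identify the system's components and data network (placeholder)."""
    return {"target": target, "hosts": ["10.0.0.5", "10.0.0.7"]}

def enumeration(recon: dict) -> dict:
    """Determine the application and network level services for identified assets."""
    recon["services"] = {"10.0.0.5": ["http"], "10.0.0.7": ["ftp"]}
    return recon

def research_and_evaluation(enum: dict) -> list:
    """Flag vulnerabilities, bugs, and configuration concerns (placeholder)."""
    return [{"host": "10.0.0.5", "issue": "outdated web server"}]

def analysis(findings: list) -> list:
    """Attach impact descriptions and recommendations for the final report."""
    return [{**f, "recommendation": "apply vendor patches"} for f in findings]

if __name__ == "__main__":
    report = analysis(research_and_evaluation(enumeration(reconnaissance("example-client"))))
    for item in report:
        print(item)
```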
ADVANTAGES OF PENETRATION TESTING
Wikipedia has defined penetration testing as “a method of evaluating the security of a computer system
or network by simulating an attack from a malicious source...” [11]. There are many different levels of
penetration tests that can be performed on an organization's network and security infrastructure. These
tests can range from simply attempting a brute force password attack on a particular system, all the
way up to a simulated attack including, but not limited to, social engineering, white and black box
methodologies, and Internet-based denial of service (DoS) attacks. The advantages and insight that
these different testing methodologies provide an organization can be invaluable [13].
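As a minimal sketch of the simplest level mentioned above, the following shows what a dictionary-style brute force password test might look like. The check_login function, target host, usernames, and word list are hypothetical placeholders introduced only for illustration; they are not part of the article or any real tool.

```python
# Minimal sketch of a dictionary-based brute force password test.
# check_login(), TARGET_HOST, and the word list are hypothetical placeholders.
import itertools

TARGET_HOST = "test.example.org"   # assumed lab target, authorized for testing
USERNAMES = ["admin", "jsmith"]
WORDLIST = ["password", "letmein", "Spring2012", "admin123"]

def check_login(host: str, user: str, password: str) -> bool:
    """Placeholder for a real authentication attempt (SSH, web form, etc.)."""
    return False  # always fails in this sketch

def brute_force(host: str) -> list[tuple[str, str]]:
    """Try every user/password combination and record any that succeed."""
    found = []
    for user, password in itertools.product(USERNAMES, WORDLIST):
        if check_login(host, user, password):
            found.append((user, password))  # weak credential pair
    return found

if __name__ == "__main__":
    hits = brute_force(TARGET_HOST)
    print(f"{len(hits)} weak credential pair(s) found")
```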
First and foremost, a well-done penetration test will allow an organization's information technology (IT),
or information system (IS) departments to identify the vulnerabilities in their current security policies and
procedures. For instance, if the team performing the penetration test gained access to sensitive areas
through a social engineering technique, the target organization could then coach the users who were
responsible for the breach on the proper procedures for granting access to unknown or unauthorized
parties. Without the penetration test, the organization may never discover that the vulnerability was
present in the security procedures and it could have been exploited by a party with a much more
sinister motive.
There are two main types of penetration testing: aggressive and non-aggressive. With aggressive
penetration testing, the information that is gathered about the company’s vulnerabilities is then tested on
the system to ensure that each vulnerability is present. If the attempt is successful, the penetrator will then
document the result in an audit log. “In addition to running tests against identified potential
vulnerabilities, the Penetrator can also run buffer overflow, denial of service and brute force exploits to
test for additional vulnerabilities that cannot be found by the service probing and discovery process.” A
secured system should not be impacted by aggressive testing; however, if the system is less stable and
more vulnerable, it is possible that the system could crash [10].
With non-aggressive penetration testing, the procedure is designed to minimize the risk of any problems
occurring with the targeted system. As each service running on the targeted system is identified, the
penetrator attempts to find out information such as the version, updates, and patches that may have
been applied to the system. Once this information is gathered, it is compared to the information that the
penetrator has collected in its database of exploits and vulnerabilities. If a match is found, the results
are then added to the audit report. It should be noted that “nonaggressive testing is much faster, but it
is also more prone to reporting false positives” [10].
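A small sketch of the non-aggressive approach described above follows: identified service versions are compared against a local catalogue of known issues, and matches are written to an audit report. The service data and vulnerability catalogue are illustrative assumptions, not real advisories or the output of any particular scanner.

```python
# Sketch of non-aggressive testing: compare identified service versions
# against a catalogue of known vulnerabilities and report any matches.
# All hosts, versions, and findings below are illustrative placeholders.

# Services the tester has already identified on the target (assumed input).
identified_services = [
    {"host": "10.0.0.5", "service": "httpd", "version": "2.2.14"},
    {"host": "10.0.0.7", "service": "ftpd", "version": "1.3.3"},
]

# Hypothetical catalogue mapping (service, version) to known issues.
known_vulnerabilities = {
    ("httpd", "2.2.14"): "Outdated web server with published exploits",
}

def audit(services, catalogue):
    """Return audit report entries for services matching the catalogue."""
    report = []
    for svc in services:
        key = (svc["service"], svc["version"])
        if key in catalogue:
            report.append({**svc, "finding": catalogue[key]})
    return report

for entry in audit(identified_services, known_vulnerabilities):
    print(f'{entry["host"]}: {entry["service"]} {entry["version"]} - {entry["finding"]}')
```

Note that a lookup of this kind can only report what is already in the catalogue, which is consistent with the false negative weakness discussed later for automated scanners.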
BLACK BOX PENETRATION TESTING
The black box method of penetration testing involves recruiting a penetration testing team, often
referred to as a “red team” or “tiger team” to perform security assessments without any prior knowledge
of the existing network. Other than the initial contact to gain authorization to perform the test, there is no
contact between the tiger team and the target organization [8]. This method would be most effective if
the organization wanted to measure how vulnerable they were to a completely unknown attacker
attempting to gain access. The tiger team would be forced to go through the entire process of gathering
information on the company through various methods and then building on that information to
eventually make their simulated attack.
While the black box method can be effective in establishing a baseline of how secure an organization is
to outside attacks, it is highly subjective on the part of the team doing the assessment. This is because
there are countless ways of performing reconnaissance on the organization, and, due to restrictions on
time and money, a tiger team is not likely to utilize all of the methods and techniques that are available
not have considered, or even know about.
RECONNAISSANCE
Any well-planned attack, whether it is political, military, or personal in nature, will start with
reconnaissance of the target. There are multiple avenues through which a tiger team can
gather information about an organization's operations. Passive reconnaissance is the simplest and
least dangerous form of gathering information. It can simply involve sitting outside of a building with a
pair of binoculars or a high powered camera waiting for a UPS or FedEx truck to arrive. When the driver
unloads the truck the surveillance team can get a view of what type of computer systems the
organization uses because hardware vendors usually print their logos and photos of the equipment on
the shipping boxes. The tiger team now has a very good idea of the type of hardware the organization
utilizes to perform its business operations.
The surveillance team can then follow up by returning later to look through the dumpster. This
technique is called “dumpster-diving.” In the dumpsters they might find packing lists, installation
manuals, and software boxes that were contained in unmarked shipping materials. This information
would be considered extremely valuable to any attacker because they now have a complete picture of
this organization's hardware and software. This information coupled with the potential hardware
vulnerabilities discovered earlier provides the tiger team with nearly everything it needs to exploit the
security vulnerabilities of this organization [13].
Once successful reconnaissance has been performed on the target organization, the tiger team will
take the information gathered during this phase and compile a report on what actions were taken. This
report will be included in the final report with recommendations to the management of the organization
on what measures might be taken to mitigate the risk the reconnaissance methods uncovered. In the
case of the previous scenario the report might suggest that all discarded manuals, software boxes, and
reference materials be shredded or burned before disposal.
SCANNING AND ENUMERATION
Once initial surveillance has been performed on the target, a tiger team will then attempt to list and
identify the systems and specific services and resources the target provides [3]. The first step in this
process is to scan for all available systems on the network. Once available systems have been
found, the tiger team will attempt to enumerate the available systems. Successful enumeration will
include information such as a host name, its associated Internet protocol (IP) address, and what
services are running on the different network hosts [3]. The data collected during the scanning and
enumeration will allow the tiger team to narrow down the scope of the simulated attack which will follow
later.
Normally the tiger team will perform their initial scan with a scanning tool application. These
applications vary in their complexity but most will perform both scanning and enumeration. These tools
have many different options, ranging from an all-out scan of an entire range of IP addresses performed
as fast as possible, to scans that take hours or days in a more stealthy fashion. Scanning such as this is
normally done in two different phases. One scan will be performed during the day to get an idea of the
layout of the entire network. The next scanning attempt will be done in the middle of the night to find
equipment that is kept on for 24 hours at a time. This will give the tiger team the best chance to identify
the IP addresses of routers, switches, firewalls, and most importantly the servers [13].
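To make the scanning and enumeration step more concrete, the sketch below performs a simple TCP connect scan over a small set of hosts and ports and grabs a greeting banner for basic service identification, using only the Python standard library. The host addresses, port list, and timeout are assumptions for illustration; an actual tiger team would use dedicated scanning tools, a defined scope, and written authorization.

```python
# Rough sketch of scanning and enumeration: a TCP connect scan over a few
# hosts and ports, with a simple banner grab for service identification.
# Hosts, ports, and timeouts are illustrative placeholders only.
import socket

HOSTS = ["10.0.0.5", "10.0.0.7"]        # assumed in-scope addresses
PORTS = [22, 25, 80, 443]               # a few common service ports

def probe(host: str, port: int, timeout: float = 1.0):
    """Return a short banner string if the port accepts a connection, else None."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                banner = sock.recv(128).decode(errors="replace").strip()
            except socket.timeout:
                banner = ""              # open port, but no greeting banner
            return banner or "(no banner)"
    except OSError:
        return None                      # closed, filtered, or unreachable

for host in HOSTS:
    for port in PORTS:
        result = probe(host, port)
        if result is not None:
            print(f"{host}:{port} open - {result}")
```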
SOCIAL ENGINEERING
Social engineering is likely the most complicated and risky aspect of attempting an intrusion into an
organization. It requires the tiger team to gain the confidence of users of the organization's system so
that they provide the team with either access or the information it wants to know. It is essentially a
confidence game based on appearance and expectations to gain the trust of the system users of the
organization [15].
There are varying methods used to gain someone's trust. One method is for the tiger team to do
research on individuals who work at the target company. With the rise in popularity of social networking
web sites this has become a much easier task. It can be as easy as going to the social networking site
Facebook.com and doing a search on people employed at the organization. The attacker can then
create a fake profile, usually of an attractive woman, because men tend to be more receptive to and
disarmed by women. Once this has been done, the attacker only needs to pass himself off as a new
employee and start sending out friend requests to all members of the organization, claiming how excited
he is to be working with everyone. The attacker is then likely to be able to gather names and phone
numbers and start following people on Twitter. This will give the tiger team plenty of information on
which to build a back story and then use that information to gain physical access to the organization [7].
WHITE BOX TESTING
The white box penetration testing methodology is a less time-consuming method of penetration testing.
When a penetration testing team performs a white box test on their target, the target is usually
somewhat, if not fully aware of what is taking place [8]. They are usually given full access to all of the
network IP ranges, topologies, operating systems, and anything else that might be requested by the
tester performing the audit. It might even involve the tester being brought into a building and shown a
network port on which to begin their scan [14].
This particular method of penetration testing provides the organization with a very thorough view of all
of its software and network vulnerabilities, but this method has some disadvantages. For instance,
unless specifically called for, there is very little need to perform social engineering in white box
penetration testing because the auditor is already aware of most of the information that he or she
needs. Therefore, the organization likely will not get the benefit of having its security policies and user
training tested. This is troublesome because of the fact that in most organizations the information
system is normally secure, but the weakest link is the people within that organization [1]. In an effort to
perform its due diligence, an organization should always perform both a white box and a black box
penetration test.
AUTOMATED VULNERABILITY SCANS
Automated vulnerability scans are becoming much more popular to use when performing penetration
testing. “A vulnerability scanner is a computer program designed to assess computers, computer
systems, networks or applications for weaknesses” [16]. When the appliance finishes its scan, it creates
a report detailing the vulnerabilities that have been located on the system. The report can either be
reviewed by an auditor or sent directly to the
information systems team at the organization.
Automated vulnerability scanners are popular because they do not require an actual penetration
tester to perform the assessment, and they therefore cost less for the organization paying for the service
as well as for the company providing the service. Using an automated vulnerability scan is a good idea;
however, there is a problem with this method because it has started to become the de facto standard by
which an organization judges whether its network infrastructure is secure. This is partly due to the
effective marketing techniques of the companies pushing their automated hardware and software
solutions. However, the fact remains that simply relying on a vulnerability scanner on the network is not
an effective enough security posture [5].
An automated vulnerability scanner's greatest weakness is that it is automated. If a vulnerability has not
been identified and added to the scanner's repository by the vendor, it is missed and the scan will return
a false negative [5]. Desautels also provides the following scenario:
A hacker decides to perform research against a common technology such as your firewall. The
hacker might spend minutes, months or even years doing research just for the purpose of
identifying exploitable security vulnerabilities. Once that vulnerability is identified the hacker
has an ethics based decision to make. Does he notify the vendor of his discovery and release
a formal advisory or does he use his discovery to hack networks, steal information and make a
profit. If the hacker decides to notify the vendor and release an advisory then there is usually a
wait period of 1-3 months before the vendor releases a patch. This lag time means that the
vendor's customers will remain vulnerable for at least that long, but probably longer. What's
even more alarming is that this vulnerability may have been discovered by someone else who
also didn't notify the vendor. If this is true then that may mean that the vulnerability has been
used to break into networks for quite a long time. Who knows, it could have been discovered
months or even years ago? That type of unpublished vulnerability is known as a 0day [exploit]
and is the favorite weapon of the malicious hacker. [5]
An additional weakness of relying solely on an automated vulnerability scanner is that it only tests the
network itself. It does not test the people within the organization. If the people within the organization
can be compromised, no amount of money spent on vulnerability scanning will protect the organization
from an intrusion by a hostile party.
The most effective technique for using an automated system to perform a vulnerability scan is in
conjunction with a manual scan performed by an authorized penetration tester. This will reduce the
frequency of false positives, and it also provides the organization with data about other security risks.
THE FUTURE OF PENETRATION TESTING
Penetration testing as a security strategy is roughly 35 years old. The original penetration test can be
traced back to the US Air Force's Multics Security Evaluation in 1974 [2]. Since then penetration and
security testing has evolved into what most consider a complicated but revealing and useful process.
One aspect that has not been covered so far in this paper is the concept of performing penetration testing
on business IT systems that reside completely or partially in the virtual world often called the
“Cloud”. Cloud computing represents a fundamentally different approach to building IT environments.
Lessons from common management tools and processes, which work with discrete processes across
static computing stacks, often are not incorporated into the new virtual environments. Predictably, this
causes gaps in security [9]. Penetration professionals need to address this dilemma and perfect ways
to test all aspects of the organization's security infrastructure.
Recently, the future of penetration testing as it is known today has become less clear than it was a few
years ago. In December of 2008, Brian Chess, an executive at Fortify Software, wrote in CSO Online:
People are now spending more money on getting code right in the first place than they are on
proving it's wrong. However, this doesn't signal the end of the road for penetration testing, nor
should it, but it does change things. Rather than being a standalone 'product', it's going to be
more like a product feature. Penetration testing is going to cease being an end unto itself and
re-emerge as part of a more comprehensive security solution. [4]
Mr. Chess also alluded to the fact that both Hewlett Packard and IBM have purchased companies
that specialize in developing penetration testing software for web applications. This indicates that
software publishers, programmers and developers are starting to take security of their programs much
more seriously and are going to attempt to write their code more securely following the secure
development life cycle (SDLC). While there will still be a need for the code to have penetration tests run
against it, it will be done as a part of the development cycle rather than after the fact.
Other professionals in the information security industry have taken umbrage at Mr. Chess' predictions
and have proceeded to write papers which dispute this claim. One of Mr. Chess' critics, Ivan Arce, has
written a twelve point rebuttal to Mr. Chess' column in CSO Online. Mr. Arce states that a practice that
is 35 years old does not simply disappear or go through drastic changes in a single year [2]. This point
is a valid one and aside from some groundbreaking idea in the world of programming, is likely to hold
true. He also argues that even if all developers were to begin adhering strictly to the SDLC when
programming new software packages, there is still existing and legacy software being used by
organizations which is not likely to be replaced within the next year [2]. Many of these legacy systems,
especially those still utilizing COBOL, may not change for the next 5 to 10 years.
Some of Mr. Arce's points include that penetration testing is operational in nature and that simply
testing an application in a lab is not enough [2]. This is a very valid point and a strong argument. This is
due to the fact that in a lab, products are set up to vendor specifications and the vendor is usually
already aware of their application's vulnerabilities. In an actual live operating environment, this is
usually not the case. People often take shortcuts when setting up software. The individual performing
the install might forget to enable a crucial setting during the setup. Executives within the organization
might decide that some security measures do not make the application's features available enough to
end users, and order the IT department to bypass security features. There is a whole range of issues
that might come up when an application is used in a live operating environment that might not be
duplicated in a lab environment [13].
Penetration testing is also a strategic methodology [2]. It allows an organization to see threats that
cannot be duplicated in a lab. A successful social networking attack by an intruder will bypass security
measures built into applications nearly every time. People will always be the weakest link within a
security strategy. The SDLC does not account for this weakness.
Any individual knows that information technology is a constantly evolving industry [2]. The idea that this
aspect will change in the slightest, if at all, is absurd. As new developments are made in the field, new
opportunities for malicious attackers will continue to grow. New software and technology is very seldom
released in a perfect state. This is due to a host of reasons, mostly because of limits on time due to
release dates and pressure on programmers to provide their employers with a finished product. Simply
because developers are starting to use the SDLC in their programming methodologies does not mean
that hackers will just give up and stop searching for vulnerabilities to exploit.
There are also those companies that must comply with government regulations [2]. For instance, any
company working in the banking industry must perform a security audit at least once a year. This is a
mandated aspect of doing business. As slow as the government is to get rid of any regulation, or laws
regarding any aspect of business, it is not likely that penetration testing will be removed from the
security landscape any time soon.
Finally, with the recent financial crisis, cybercrime is likely to be on the rise [2]. As programmers and
developers begin to lose their employment, the need to feed themselves and their families will grow,
and some might succumb to the pressure to break the law or, at the very least, do something
unethical for money. Experienced programmers who have direct knowledge of designing and
implementing secure software applications are a huge danger simply because they are even more
aware of where certain vulnerabilities might be, and how to easily exploit those vulnerabilities. This will
become an even bigger threat as more companies send more critical applications into the “Cloud”.
RECOMMENDATIONS
Organizations should:
 Perform black box penetration tests to assess their exposure to attacks involving social
engineering and reconnaissance, and to learn where the weaknesses lie within the public domain.
 Perform white box testing to discover where all of their known vulnerabilities are. This includes but
is not limited to, automated vulnerability assessments backed up with manual scans performed by
a professional penetration tester.
 Perform target testing or the “lights-turned on” method to find the system vulnerabilities without
alienating loyal employees.
 Perform external testing to target the company’s externally visible servers and devices including
domain name servers, e-mail servers, Web servers and firewalls.
 Perform internal testing to make sure that no employees who have access to data are practicing
acts with malicious intent.
 Consider blind testing because of the benefits, but remember that the cost can become
prohibitive.
 Perform double blind testing because it can be useful for ascertaining whether an organization’s
security monitoring and incident identification is sufficient.
 These tests should be performed at regularly scheduled intervals to make sure that, as new
technologies emerge and are placed within the business environment, the systems of the
organization remain secure. Once again, migrating to the “cloud” brings its own set of security
risks.
 Be sure that they are in regulatory compliance with the organization's respective government
regulators regarding the use and frequency of penetration testing and vulnerability assessments.
CONCLUSION
Penetration testing is a vital part of any information security strategy. It provides an organization with
information that will allow it to better understand how effective its security strategies are within a
real-world scenario. Advances in the technology are bringing more and better automation to penetration
testing. These advances should not be considered a complete solution, though, because as the
technology changes, so do the methods of attack. A professional penetration tester will always be
able to perform a specific attack and penetrate a system's defenses before an automated system will be
able to do so.
The world of information security is changing and evolving, and along with it the standards in
penetration testing. There are many businesses now that perform this valuable service. The
professional penetration testers that work for these organizations are a valuable resource for any
business that is concerned about its security vulnerabilities. An organization should not rely solely on
one method of testing the strategies that are in place to protect its confidential data. Black box, white
box, target testing, external testing, internal testing, blind testing, double blind testing and automated
strategies all have their place within the realm of penetration testing. They should be used in
conjunction with each other, and not as a standalone methodology of testing an organization's security
strategy.
The concept of the “Cloud” is transforming how business is done. Just as the Internet changed the
paradigm of business as usual, the “cloud” is doing this again. Businesses that truly want to protect
their infrastructure must take the security risks of virtualizing IT environments into account.
REFERENCE LIST
[1] Abarca, David (2005-2007), Personal Communication/Lectures.
[2] Arce, I. (2008). 12 Reasons Penetration Testing Won't Die, in CSO: The Resource for Security Executives. Retrieved February 1, 2012, from http://www.cso.com.au/article/270839/12_reasons_penetration_testing_won_t_die?rid=-302
[3] Bayles, A., Butler, K., Collins, A., Meer, H., et al. (2007). Penetration Tester's Open Source Toolkit. Burlington, MA: Syngress Publishing.
[4] Chess, B. (December, 2008). Penetration testing is dead, long live penetration testing. Retrieved February 1, 2012, from http://www.ncc.co.uk/article/?articleref=310511&hilight=brian+chess
[5] Desautels, A. (2009). Network Vulnerability Scanning Doesn't Protect You. Retrieved February 1, 2012, from http://snosoft.blogspot.com/2009/01/vulnerability-scanning-doesnt-work.html
[6] Gershater, J., & Mehta, P. (2003). What is pen test (penetration testing)? Software Quality. Retrieved February 3, 2012, from http://searchsoftwarequality.techtarget.com/definition/penetration-testing
[7] Goodchild, J. (2009). Social Engineering: Anatomy of a Hack. Retrieved February 1, 2012, from http://www.infoworld.com/d/security-central/social-engineering-anatomy-hack-693
[8] Herzog, P. (2006). Open-Source Security Testing Methodology Manual v2.2. Retrieved February 1, 2012, from http://isecom.securenetltd.com/osstmm.en.2.2.pdf
[9] Ottenheimer, D. (2011). VMworld Europe 2011: Penetration testing the cloud. Retrieved February 6, 2012, from http://www.flyingpenguin.com/?p=13996
[10] Pathway Solutions. (2009). Penetration Testing. Retrieved February 6, 2012, from http://www.itpws.com/penetrationtesting.php
[11] Penetration test. (2010, February 1). In Wikipedia, The Free Encyclopedia. Retrieved February 1, 2012, from http://en.wikipedia.org/w/index.php?title=Penetration_test&oldid=341322433
[12] Redspin. (2011). Penetration Testing Services. Penetration Testing and IT Security Audits. Retrieved February 5, 2012, from http://www.redspin.com/penetration-testing/?_kk=penetration%20testing&_kt=fa9faf0c-f06c-4102-b8c4-db5c8630701c&gclid=CMjHyvrUwqsCFcm77Qod
[13] Smith, J.K., & Shorter, J. (2010). Penetration testing: A vital component of an information security strategy. Issues in Information Systems, XI (1), 358-363.
[14] Srp, Shannon (2008), Personal Communication.
[15] Sullivan, D. (2009, February). Social Engineering in the Workplace. Retrieved February 1, 2012, from http://nexus.realtimepublishers.com/ESMWSv3.php
[16] Vulnerability Scanner. (2012). In Wikipedia, The Free Encyclopedia. Retrieved February 6, 2012, from http://en.wikipedia.org/wiki/vulnerability_scanner
JISTP - Volume 5, Issue 12 (2012), pp. 23-33
Full Article Available Online at: Intellectbase and EBSCOhost │ JISTP is indexed with Cabell’s, JournalSeek, etc.
JOURNAL OF INFORMATION SYSTEMS TECHNOLOGY & PLANNING
Journal Homepage: www.intellectbase.org/journals.php │ ©2012 Published by Intellectbase International Consortium, USA
IMPACT OF INFORMATION TECHNOLOGY ON ORGANIZATION
AND MANAGEMENT THEORY
Dennis E. Pires
Bethune-Cookman University, USA
ABSTRACT
Information technology plays a huge role in the evolution of organizational and managerial
theories. The study of the impact of IT is significant for understanding current organization and
management thought. The six core concepts utilized by organization theorists and the five
main managerial functions present the impact of IT on organization and management theory. Change in
theory has also resulted in a change in practices, which has seen an increase in the importance of IT and
of IT personnel management practices.
Keywords: Theory, Organization Theory, Management Theory, Information Technology, Evolution,
and Management Thought.
INTRODUCTION
Development and growth of Information Technology (IT) has seen a widespread impact on various
aspects of organizational and management practices. According to Ives and Learmonth (1984), if IT is
used properly it facilitates innovation, improves efficiency, and exhibits other characteristics that
demonstrate competitive advantage. Organization theory is the body of research that seeks to explain
the different aspects of organizational activities, providing a deeper look at the processes of the
organization to build a better understanding of the complex organizational system. Managerial theory
is a collection of ideas that help executives in performing their responsibilities of managing the
organization. Leavitt and Whisler (1958) speculate on the role of information technology in
organizations and its implications for organizational designs. This paper presents organization theory
and its modern, symbolic-interpretive, and postmodern perspectives. The six core concepts of
organizations used by organizational theorists to create theory are discussed with evidence of the
effect of IT on each of those concepts. The second part of the paper looks at management theory and
its evolution. The five essential functions of management described by Newman (1951) are used to
present the impact of IT on the evolution of management theory. Finally, the paper looks at the change
in the role of the executives as a result of the evolution of organizational and managerial theory.
ORGANIZATION THEORY
Theory is a supposition or system of ideas explaining something (Theory, 2011). Organization can be
defined as a system of division of labor in which each member performs certain specialized activities
that are coordinated with the activities of other specialists (Mott, 1965). According to Daft (1997),
organizations are social entities that are goal-directed, are designed as deliberately structured and
coordinated activity systems, and are linked to the external environment. According to Pugh (2007),
organization theory is the study of the structure, functionality, performance of organizations and the
behavior of groups and individuals within them. Organization theory has different perspectives such as
modern, symbolic-interpretive and postmodern that offer distinctive thinking tools with which to craft
ideas about organization and organizing (Hatch & Cunliffe, 2006).
Early Organization Theory
Early organization theory, also called the prehistory of organizational theory, is characterized by the
organizational thought conceived in the works of Adam Smith, Karl Marx, and Emile Durkheim. A great
influence of organization theory can also be seen in management theory, as organization and
management theory developed as two distinct yet connected fields.
Modern Perspective
Organization theory, according to the modernist perspective, is complete knowledge as recognized
by the five senses. The modernist perspective seeks to understand how and why organizations function
the way they do and how their functioning is influenced by different environmental conditions (Hatch & Cunliffe, 2006).
The modernist perspective takes into account the different issues within organizations and finds
solutions for those issues to increase the efficiency of the organizations.
Symbolic-Interpretive Perspective
The move from the modern to the symbolic-interpretive perspective was the result of the acceptance of
an alternative to the objective science of modernism. According to the symbolic-interpretive
perspective, organization theory is socially produced as members interact, negotiate and make sense
of their experience (Hatch & Cunliffe, 2006). The theory is based on the foundation of social
construction of reality that assumes that all individuals create their interpretive social realities within
which they spend their lives.
Postmodern Perspective
A more philosophical look at organization theory, the postmodern perspective looks at organizations as
a subject that is defined by the existence of text about it. Davis and Marquis (2005) conducted a study
evaluating the prospects of organization theory in the early twenty-first century. The study states that
organization theory has seen a shift from concept-driven to more problem-driven work. The
shift of organization and management theory toward problem-driven work is a direct result of the
effect of IT on various aspects of organization and management concepts, such as the increased use of
alliances and network forms, the expanding role of markets, and shifts in organizational boundaries (Davis
& Marquis, 2005).
MANAGEMENT THEORY
Management is the process of designing and maintaining an environment in which individuals, working
together in groups, accomplish efficiently selected aims (Weihrich & Koontz, 1993). According to Fayol
(1949), managerial theories are a collection of principles, rules, methods, and procedures tried and
checked by experience. Management theory attempts to present in a concerted manner facts about
human behavior in organizations (Nwachukwu, 1992). The evolution of management and its theories
can be divided into four parts based on their era of development. The four parts described by Wren
(1993) are the early management thought, the scientific management era, the social person era, and
the modern era.
The Early Management Theories
The early management thoughts date back to early humankind settlements that found the need for
management and authority in various aspects of human interactions in family, religion, and nations.
Chinese civilization dating as far back as 600 B.C. shows signs of management thought in the military
system of Sun Tzu. Management thoughts have also been recorded in the Egyptian, Hebrew, Greek,
Roman, and Indian civilizations, as well as in the Catholic Church, dating back to the human civilizations
that existed in the pre-industrialization era. Following these early management thoughts, the classical management
theories developed as a result of the industrial revolution. Technology showed its early impact on
management theories through the developments of the factory system that replaced the home
production systems. The steam engine, according to many theorists, was the initial breakthrough of science
and technology in management practices. The early management thoughts revolved around authority
and control and moved to production, wealth creation, and distribution. In Wealth of Nations, Adam
Smith provided the classical management theory that saw the benefits of specialized labor through the
famous example of pin makers (Wren, 1993). Robert Owen, Charles Babbage, Andrew Ure, David
McCallum, and Charles Dupin were the early pioneers of the management thoughts.
The Scientific Management Era
From the early management thought the management theory moved to the renaissance of scientific
management that showed a visible impact of IT on management as its advancement revitalized the
transportation, communication, machine making, and power sources in large-scale enterprises (Wren,
1993). The classical management theories of Frederick Taylor pioneered the scientific management era
by placing emphasis on time study, which brought about the search for science in management (Taylor,
1911) and focused on finding the most efficient way of production. Classical management theorist Max
Weber’s work shows bureaucracy as the answer to the problem of managing large organizations
efficiently. The developments in IT, led specifically by the emergence of computers, changed the
management theories as automation of work brought about efficiency in production and reduced the
dependence of management on humans performing specialized tasks. The scientific management era
saw the work of Mary Follett and Henri Fayol, who created the first theory of management through their
principles and elements of management (Wren, 1993), which focused on connecting the work with the
available human resources and framed the administrative management theories.
The Social Person Era
The social person era directs the management theory towards the human relations through the
examples of the Hawthorne studies and the philosophies of Elton Mayo. The management theories in
the social person era looked to explore the human aspect by focusing on the human relations, needs,
behavior, leadership, and motivation. Abraham Maslow’s hierarchy of needs shows the importance of
self-actualization for humans, as managers understand the needs of the employees to improve the
efficiency of their performance. An example of this change in management thought is evident in the
view of Henry Dennison that required jobs to be modified in such a way that they would provide greater
satisfaction. Wren (1993) states that the Hawthorne studies were an important step in advancing the
idea of improving human relations in all types of organizations.
The Modern Era
The modern management theories are a product of the past management thoughts. The modern era of
management is defined by the work of Harold Koontz, Michael Porter, and Douglas McGregor. Harold
Koontz initially presented the six schools of management thought that served management theory. The
six schools of management are the management process school, the empirical school, the human
behavior school, the social system school, the decision theory school, and the mathematical school. Koontz later expanded the list by adding
five more schools. Michael Porter provided the modern era with the competitive strategy model that
looks at leadership and decision-making in the competitive markets. Douglas McGregor provided the
modern era with the Theory X and Theory Y. The contrasting theories presented by McGregor provide
different assumptions on human beings that are prevailing in the modern industrial practice (Wren,
1993). Various other contemporary theories such as the total quality management and the strategic
management theory focus on finding ways to improve organizational efficiency by making efficient
utilization of the resources available.
INFORMATION TECHNOLOGY
Orlikowski and Gash (1994) define IT as any form of computer-based information system, including
mainframe as well as microcomputer application. For the purpose of this paper IT is defined as all
technological aspects of an organization that support and serve the business needs of an organization.
It is common for IT to be referred to as information systems, computer, Internet, technology, and web,
terms that are different in meaning but have contextually been used interchangeably with IT without conflict. The
impact of IT can be seen in the reduction of costs, improvement of operations, enhancement of
customer service, and improvements in communications (Peslak, 2005) for organizations.
INFORMATION TECHNOLOGY AND ORGANIZATION THEORY
The effect of IT on the evolution of organizational theory can be studied by looking at the changes in
organization theory in relation to the six core concepts such as environment, social structure,
technology, culture, physical structure, and power/control that organization theorists rely upon to
construct their theories (Hatch & Cunliffe, 2006). The following is a look at the six core concepts used by
organizational theorists to create theory.
Environment
According to the modernist organization theories, the environment in organization theory is
conceptualized as an entity that lies outside the boundary of the organizations, providing the
organization with raw materials and other resources and absorbing its products and services. The
symbolic-interpretive perspective views the environment as a social construction built from inter-subjectively
shared beliefs about its existence and from expectations that are set in motion by these beliefs, and
postmodern organization theory suggests that organizations and environments are without boundaries
and may be virtual organizations. Looking at the three distinct explanations of environment, one can see the
effect of information technology on the concept of environment. While the modernist and symbolic-interpretive
perspectives describe the environment as an entity existing outside the parameters of the
organization itself, postmodern organization theory looks at the environment as one without
boundaries.
The current organization theory does not consider the environment to be separate from the
organization. The threats faced by the organization by resource dependency have been reduced with
efficient information technology developments. The environment for most organizations has changed
from local or national to a more global competition with the virtual existence of organizations.
Social Structure
The theory of organizational social structure deals with assigning duties and responsibilities to its
members. The use of organizational charts best describes the organizational or social structure
function. Modernist views focus on identifying the organizational principles and structural elements that
lead to optimal organizational performance in the belief that, once basic laws governing these
relationships were discovered, the perfect organization could be designed. Modernist theories also
focus on the four dimensions of differentiation: the degree of formality, the emphasis given to task
versus relationship, the orientation to time, and the goal orientation. The symbolic-interpretive approach
views social structure as a human creation, a work-in-progress that emerges from social interaction
and collective meaning making, while the postmodern view of social structure is one of
de-differentiation, feminist organizations, and anti-administration theory (Hatch & Cunliffe, 2006).
IT has reduced the rigid social structures within organizations by creating an organizational structure
that is a collective approach of every person involved in the organization towards a common
organizational goal. The modernist views that require organizational hierarchy are still present but are
diminishing, with IT making the interactions between the top, middle, and lower levels of the
organization much more interactive than they previously were under rigid social
structures. The development of social interactions on the Internet as a result of the development of IT
has created the new social structure.
Technology
Hatch and Cunliffe (2006) describe the core concept of technology in terms of organizational theory as
the tools, equipment, machines and procedures through which work is accomplished. This description
of technology as a medium of accomplishing organizational work is different from the use of technology
that has been widely accepted in everyday use. IT has had an effect on technology within the
organization by making various IT components available for organizations and eliminating the
limitations of time and space. An example of the effect of IT on the technology of the organization is the
concept of outsourcing that makes it possible for organizations to have employees around the world to
perform organizational responsibilities to create products or provide services without having the barriers
of distance, language, and culture.
Culture
Culture, according to Jacques (1952), is the customary and traditional way of thinking and doing
things. All organizations build cultures based on the traditions, management styles, and resources
available to the actors within the organization. Once built, the organizational culture becomes a
phenomenon that embodies people’s response to the uncertainties and chaos that are inevitable in
human experience (Trice & Beyer, 1993). According to O’Reilly (1989), culture is a potential social
control system. The symbolic-interpretive perspective defines culture as a concept built from the experiences of the
actors involved in the organization.
IT has affected the culture of organizations in the most dramatic way amongst the other core concepts
of an organization. The development of new organizations that embrace the idea of individuals working
freely and enjoying the freedom to develop and explore their abilities has been seen in the last decade.
Examples of organizations such as Google and Microsoft that have been successful in managing and
creating IT have led the way for other organizations by changing the culture within the organizations.
Power
Power within organizations is assigned to individuals based on their position in the level of hierarchy. IT
has played its part in empowering individuals within the organization by making information available to
everyone and giving individuals an opportunity to take actions based on the information available to
them. This simple definition explains how the relationship between the actors of the organization
determines the power structure of the actors based on their position in the relationship. The modernist
perspective looks at power as the ability and knowledge to deal with organizational uncertainty, while
the symbolic-interpretive perspective assumes the concept of power arises through the acceptance of power by
those that are controlled, monitored, or directed by those that are considered to be in power.
Physical Structure
Physical structure is defined by the relationships between the physical elements of an organization. The
main physical elements of an organization are geography, layout, landscaping, design, and décor
(Hatch & Cunliffe, 2006). IT has greatly affected the physical structure of most organizations. The
organizational theories revolving around physical structure focused on organizations having a set
location. IT has changed the thinking of organizational theory with regard to elements of physical
structure such as geography, layout, landscaping, and design. Organizations have
moved from the original brick and mortar to a largely virtual presence on the web with the introduction of
and advancements in IT.
Looking at the six core concepts that theorists use to create organizational theory, one can notice the
effect of IT on the different aspects of organizations. IT has introduced changes in the various aspects
of an organization that have led to a need for change in organizational theory. To understand the
complex organizational system, it is necessary to adapt to the changes within organizations
that have been created by the effect of IT.
INFORMATION TECHNOLOGY AND MANAGEMENT THEORY
The impact of IT on the evolution of management theory can be studied by looking at the changes in
management theory in relation to the five functions of management described by Newman (1951). The
five functions of management include planning, organizing, assembling resources, directing, and
controlling, and these functions are closely associated with the five elements of management defined by
Fayol (1949). The following is a look at the five functions of management within different management theories:
Planning
One of the main functions of management is planning to achieve organizational success through
advance determination of the directions and procedures needed to reach an organizational goal. According to
Koontz and O’Donnell (1972), planning bridges the gap from where we are to where we want to go. The
process of planning in the modern management era evolved from a highly intuitive, command-oriented
concept to an activity aided by modern technology, sophisticated aids, and a broader understanding of
people-machine interactions in a broader system (Wren, 1993). The evolution of planning from a
process dependent on seasons and natural events to one that uses the knowledge and technology
available to assist managerial functions is a result of the impact of IT on the function of planning. IT has
revolutionized the planning process by utilizing resources such as computer-assisted design,
computer-assisted manufacturing, and computer-integrated manufacturing capabilities within the
organizations.
Organizing
The function of management that requires management to provide a structure that would facilitate
organizational functions is organizing. The components of organizing include defining authority,
structure, and responsibility. Organizing did not gain importance until the social person era that placed
emphasis on the structure and the hierarchy within the organization. In the modern management era
the need for a flatter organization led to management theories that recognized the limitations of
authority and hierarchy. Management theories focused on power equalization and teamwork as
the best means to achieve organizational goals. IT facilitates the organizing function of management
on a large scale as organizations have moved beyond a local existence to a more global presence.
Global teams working together on a common project or task are not uncommon for organizations in
the modern era. The virtual technologies to support these cross-cultural and cross-national global
teams can be seen as an impact of IT on organizing.
Assembling Resources
Globalization is an easily accepted phenomenon of the current management theories as organizations
focus on expanding markets by reaching customers from around the world. The function of assembling
resources is to gather resources such as raw materials and human resources that help organizations in
achieving their goals. The impact of IT in assembling resources can be seen in the ability of
management to obtain raw materials from any part of the world, and also to employ
personnel from various parts of the world who fulfill the organization's requirements, by utilizing
modern technologies such as video conferencing and digital data transfer.
Directing
Another important function of management is to direct the efforts of various individuals and resources
towards a common organizational goal. Leadership qualities of a manager play an important part in
being efficient at directing others towards the prescribed goal. Management theory evolved as a
result of the systems approach to management, which focused on modern communication and production
systems that helped managers direct work across organizations. The ability to create an
environment that supports employee efficiency through virtual teams, remote access to quality information,
and commonly and extensively utilized digital data is a result of the development of IT that impacts
the evolution of managerial theory. Management theory has seen a shift from the early management
thought that revolved around direct contact to the modern era that connects managers with
employees across the globe via advanced technologies.
Controlling
Controlling is the final component of the management cycle and looks at managing resources to meet organizational goals. According to Terry (1970), controlling is determining what is being accomplished – that is, evaluating performance and, if necessary, applying corrective measures so that performance takes place according to plans. Comparing this definition of controlling with the modern management era, one can see the impact of IT on control. Management utilizes IT to collect
and store data that is used to analyze the practices of resource allocation and production. The information derived from the data analysis is used to take necessary actions within the organization. The presence of IT in data collection, storage, analysis, and the application of corrective measures through computer-generated models has contributed to the evolution of management theory.
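To make Terry's definition concrete, the following minimal Python sketch (not from the paper; the function name, figures and tolerance are illustrative assumptions) shows how an IT-based control check might compare actual performance against the plan and flag corrective action:

    # Hypothetical control-cycle check: evaluate performance against the plan
    # and signal corrective measures when the variance exceeds a tolerance.
    def control_check(planned: float, actual: float, tolerance: float = 0.05) -> str:
        """Return a control decision based on the relative variance from plan."""
        variance = (actual - planned) / planned
        if abs(variance) <= tolerance:
            return "on plan: no corrective action required"
        direction = "below" if variance < 0 else "above"
        return f"{direction} plan by {abs(variance):.1%}: apply corrective measures"

    # Example: monthly output of 9,200 units against a plan of 10,000 units.
    print(control_check(planned=10_000, actual=9_200))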
INFORMATION TECHNOLOGY AND ROLE OF EXECUTIVES
IT within this paper is defined as all technological aspects of an organization that support and serve the business needs of the organization (Grant & Royle, 2011). IT today is the most important sector contributing to both the economy and government services (Perlman & Varma, 2005). Considering the growth of IT and its impact on organizational success, it is vital to understand that the role of executives has changed significantly with the increasing use of information technology within organizations. IT has evolved from a strictly supporting role in the back office to a competitive weapon in the marketplace (Porter & Millar, 1985). As a result of this evolution, executives within organizations are required to critically examine the management and use of IT. The impact of IT on the role of executives is presented using five important managerial roles from Mintzberg (1994), together with the role of executives with regard to managing IT professionals within organizations.
Managing Information Technology
According to Karlsen, Gottschalk, and Anderson (2002), the successful use of IT within a company depends to a large extent on the IT department executives and the IT managers. According to Mintzberg (1994), the five important roles of a manager are being a leader, resource allocator, spokesman, monitor, and entrepreneur. These five roles do not cover all executive roles and responsibilities, but they do cover a majority of the executive roles within an organization.
Leadership, according to Hogan and Kaiser (2005), solves the problem of how to organize collective efforts; consequently, it is the key to organizational effectiveness. The executives of today are expected to be effective leaders in a changing organizational environment that utilizes the newest forms of IT. On one hand, IT has made it easier for executives to access information that is vital in decision-making; on the other, executives today are required to know how to acquire meaningful information with the assistance of the IT available to them.
Organizations usually have limited resources with which to perform organizational activities. These limited resources require executives to be efficient in their role as resource allocators. The manager must decide how to allocate human, financial and information resources to the different tasks of the organization (Karlsen et al., 2002). IT has automated many organizational processes and has made the resource allocation responsibility easier for executives. The information available to managers as a result of IT has made it easier to plan, organize, coordinate, and control tasks (Karlsen et al., 2002) through systems that enable efficient resource allocation.
The manager's role as a spokesperson emphasizes promoting the IT department or project within the organization (Karlsen et al., 2002). The growth of IT and its increased use within organizations demand that executives have knowledge of the IT involved in the organization and promote IT changes to other parts of the organization by acting as a spokesperson for the change. Executives are expected not only to have working knowledge of the IT currently in use but also to keep up with advancements in IT that could benefit organizational success, recommending or supporting changes by being a spokesperson for those recommendations.
A major responsibility of the manager is to ensure that rapidly evolving technical opportunities are understood, planned, implemented, and strategically exploited in the organization (Karlsen et al., 2002). Efficient use of IT gives organizations a competitive advantage by improving managers' ability to identify the needs of the organization and to find solutions that serve those needs. The new role of executives, staying current with or a step ahead of others with regard to IT, is a direct influence of IT on organizations.
One of the most noticeable changes in the role of executives as a result of IT concerns monitoring. Monitoring, according to Karlsen et al. (2002), is the scanning of the external environment to keep up with relevant technical changes and competition. IT has given organizations a competitive advantage and has also increased the competition they face. The efficient use of IT has expanded the external environment of the organization to include vendors, contacts, professional relationships, and personal contacts. The new role demands being connected to this vast environment to gain access to new changes and ideas that can be utilized within the organization.
Managing Information Technology Personnel
The role of executives in managing IT within an organization has evolved with the increased emphasis on and incorporation of IT in organizations. A look at the five important roles of executives and the changes in those roles shows the impact of IT on executives. Managing IT personnel is another important role of executives that requires a discussion of its own. Most organizations have an IT department that takes care of all aspects of IT. For the purpose of this paper, the individuals working in the IT department are termed knowledge workers. Kelly (1998) defined knowledge workers as those workers who are equipped to maintain and expand our technological leadership role in the next century. Munk (1998) defines knowledge workers as people who are highly educated, creative, computer literate, and in possession of portable skills that make it possible for them to move their intelligence, talent, and services anywhere needed. These two definitions indicate that knowledge workers are not the ordinary employees that organizations and executives have managed using prescribed organizational and managerial theories. This set of employees requires executives to understand how they differ from other employees within the organization, and the skills required to manage knowledge workers efficiently are different from those required for other employees.
Managing knowledge workers is essentially a specialized task that is usually assigned to executives who have knowledge of management as well as the technical aspects of business processes. To successfully manage knowledge workers, executives have to understand the characteristics of knowledge workers, the work they perform, and the motivation they require (Glen, 2002). Managing knowledge workers as valued intellectual assets is critical for capitalizing on and distributing knowledge in the organization by engaging the knowledge workers' own initiative (Papacharalambous & McCalman, 2004). The acceptance of knowledge workers as assets that generate knowledge and support organizational success is vital for managers. The role of executives in managing knowledge workers is different in nature from the management of other workers within the organization.
The work performed by knowledge workers involves a degree of creativity and ambiguity that requires executives to avoid micromanaging knowledge workers and the tasks they perform. Glen (2002) correctly identifies the problem that knowledge workers are at times more informed than their managers. The failure to accept this reality leads to an uncomfortable situation between executives
and knowledge workers. The evolution of the role of executives has seen a shift away from traditional organizational and managerial theories, which assumed that managers always know more than their subordinates.
The role of executives in managing knowledge workers is to understand the motivation knowledge workers require to perform their work efficiently. Motivating knowledge workers is a different task for executives, as knowledge workers look for motivation from executives in their efforts to develop knowledge, create intricate and beautiful systems, prove their potential, help others, and advance their careers. As stated earlier, knowledge workers, according to Munk (1998), are highly educated, creative, computer literate, and in possession of portable skills that make it possible for them to move their intelligence, talent, and services anywhere needed. Knowledge workers do not look for motivation in performing the work itself but in other aspects of work that affect their performance.
CONCLUSION
The evolution of organization theory and management theory is an indication of the effect of IT on both the theories and the practices. A look at the six core concepts of the organization shows how each has changed as a result of IT. This change has driven the evolution of organizational theories. Change in organizational and managerial theory as a result of IT has also triggered change in management practices. A look at the evolution of management theory and the five functions of management shows the impact of IT on managerial theories. The role of executives has evolved to accommodate improving and advancing IT. The new role requires executives to be knowledgeable about IT, to utilize the best available IT within the organization, and to manage IT personnel efficiently to achieve the best results for the organization. Overall, IT has greatly affected the creation of organizational and managerial theory as well as practices within organizations.
REFERENCES
Daft, R. L. (1997). Essentials of organization theory and design. (10th ed.). Mason, OH: Thomson
South-Western.
Davis, G. F., & Marquis, C. (2005). Prospects for organization theory in the early twenty-first century: Institutional fields and mechanisms. Organization Science, 16(4), 332-343.
Fayol, H. (1949). General and Industrial Management, translated by Constance Storrs. London, UK:
Pitman & Sons.
Glen, P. (2002). Leading geeks. (1st ed.). San Francisco, CA: Jossey-Bass.
Grant, G. L., & Royle, M. T. (2011). Information technology and its role in creating sustainable
competitive advantage. Journal of International Management Studies, 6(1).
Hatch, M. J., & Cunliffe, A. L. (2006). Organization theory. (2nd ed.). New York, NY: Oxford University
Press.
Hogan, R., & Kaiser, R. (2005). What we know about leadership. Review of General Psychology, 9(2),
169-180.
Ives, B., & Learmonth, G. (1984). The information systems as a competitive weapon. Communications
of the ACM, 27, 1193-1201.
Jaques, E. (1952). The changing culture of a factory. New York, NY: Dryden Press.
Karlsen, J. T., Gottschalk, P., & Anderson, E. (2002). Information technology management roles: A
comparison of IT executives and IT project managers. Proceedings of the 35th Hawaii International
Conference on System Sciences.
Kelly, J. (1998). Those who can…that who cannot: Winners and losers in the digital age. Vital
Speeches of the Day, 65(3), 89-92.
Koontz, H., & O’Donnell, C. (1972). Principles of management: an analysis of managerial functions. (5th
ed.). New York, NY: McGraw-Hill.
Leavitt, H. J., & Whisler, T. L. (1958). Management in the 1980’s. Harvard Business Review, 36, 41-48.
Mintzberg, H. (1994). Rounding out the manager’s job. Sloan Management Review, 36(1), 11-26.
Mott, P. E. (1965). A definition of social organization, in the organization of society. Englewood Cliffs,
NJ: Prentice Hall.
Munk, N. (1998). The new organization man. Fortune, 137(5), 62-74.
Newman, W. H. (1951). Administrative action: the techniques of organization and management.
Englewood Cliffs, NJ: Prentice-Hall.
Nwachukwu, C. C. (1992). Management theory and practice. Onitsha, Nigeria: African FEP Publishers.
O’Reilly, C. (1989). Corporations, culture, and commitment: Motivation and social control in organizations. California Management Review, 31, 9.
Orlikowski, W. J., & Gash, D. C. (1994). Technological frames: making sense of information technology
in organizations. ACM Transactions on Information Systems, 12(2), 174-207.
Papacharalambous, L., & McCalman, J. (2004). Teams investing their knowledge shares in the stock
market of virtuality: a gain or a loss? New Technology, Work and Employment, 19(2), 154-164.
Perlman, B., & Varma, R. (2005). Barely managing: Attitudes of information technology professionals
on management technique. The Social Science Journal, 42, 583-594.
Peslak, A. R. (2005). The importance of information technology: an empirical and longitudinal study of
the annual reports of the 50 largest companies in the United States. The Journal of Computer
Information Systems, 45(3), 32-42.
Porter, M., & Millar, V. (1985). How information gives you competitive advantage. Harvard Business
Review, 149-160.
Pugh, D. S. (2007). Organization theory: selected classic readings. (5th ed.). New York, NY: Penguin
Books.
Taylor, F. W. (1911). The principles of scientific management. New York, NY: Harper & Row.
Terry, G. R. (1970). Office management and control: the administrative managing of information. (6th
ed.). Homewood, IL: Irwin.
Theory. (2011). Oxford Dictionaries. Retrieved from http://oxforddictionaries.com.
Trice, H. M., & Beyer, J. M. (1993). The cultures of work organizations. Englewood Cliffs, NJ: Prentice-Hall.
Weihrich, H., & Koontz, H. (1993). Management: a global perspective. (10th ed.). New York, NY:
McGraw-Hill.
Wren, D. A. (1993). The evolution of management thought. (4th ed.). New York, NY: John Wiley & Sons.
Full Article Available Online at: Intellectbase and EBSCOhost │ JISTP is indexed with Cabell’s, JournalSeek, etc.
JOURNAL OF INFORMATION SYSTEMS TECHNOLOGY & PLANNING
Journal Homepage: www.intellectbase.org/journals.php │ ©2012 Published by Intellectbase International Consortium, USA
IMPLEMENTING BUSINESS INTELLIGENCE IN SMALL ORGANIZATIONS
USING THE MOTIVATIONS-ATTRIBUTES-SKILLS-KNOWLEDGE INVERTED
FUNNEL VALIDATION (MIFV)
Jeff Stevens1, J. Thomas Prunier2 and Kurt Takamine3
1Workforce Solutions Inc., USA, 2Southwestern College, USA and 3Azusa Pacific Online University, USA
ABSTRACT
Few scholars and business leaders would dispute that there are very few business intelligence (BI) models in use today that focus on meeting the needs of small businesses. It is no secret that small businesses have BI system needs similar to those of large organizations, yet they lack both the infrastructure and the financial capabilities. To this end, the Motivations-Attributes-Skills-Knowledge Inverted Funnel Validation (MIFV) Model will provide small businesses with the opportunity to explore and deploy a BI system that is both effective and affordable. Fundamentally, the MIFV is an upstream, sequentially driven competency validation model whose goal is to achieve a defined competency cluster, which in this case is implementing a BI system for small organizations. Far too often, small organizations have a number of fragmented data marts that work against each other rather than a coherent BI system that provides the organization with a single version of the truth. In this case, the competency cluster would investigate several variables facing leadership development. Once it is determined which BI variables will be addressed, a set of competencies will be developed to enable those variables. Once this set of variables is mastered, other BI system variables can be deployed as the BI system matures. The individual competencies are grouped to form a competency cluster. Building on this foundation, the MIFV will serve as the critical key to developing an effective BI system that puts a small organization in a position to compete better today and into the future. Further, the MIFV can become the standard for developing the competency cluster validation process that links traditional information service processes in small organizations with the critical motivations, attributes, skills and knowledge aspects of BI systems.
Keywords: Business Intelligence, Competency Clusters, Validation Modeling, Motivations-Attributes-Skills-Knowledge, and Environmental Scanning.
Figure 1: The MASK Model. (MASK C.C. Validation, WSC Copyright 2003.) The inverted funnel rests on an environmental scan (current IT situation, time value-cost, company culture, business environment, future positioning) and a needs and task analysis (direction, assessment of needs/desires, task/objective, resource evaluation, action plan, communication/training), then narrows through four sequential levels: Motivations ("compelling to action"), Attributes ("ability to deliver and add value"), Skills ("components of the competency cluster tool box") and Knowledge ("composite of education and experience"), culminating in mastery of the desired competency cluster (total HR package, cost-effective human capital management, risk reduction).
LITERATURE REVIEW INTRODUCTION
Based on the complexity and empirical nature of this study, which focuses on the MIFV Model and the struggles of small organizations to implement BI systems, this literature review is split into two distinct
sections. The first section illustrates the various works related to competency cluster and modeling efforts. The second section focuses on the general struggles faced by small organizations in their effort to implement, manage and use BI to survive and, hopefully, thrive in the competitive global marketplace.
The Background of Competency Cluster Modeling
Models and theories pertaining to competency clusters are as numerous and diverse as the concepts themselves (Sanghi, 2007). Current competency cluster modeling efforts focus heavily on processes and systems as they pertain to rudimentary BI products for small organizations. While basic, low-level BI products generally work at a very basic level, most of these organizations do not understand what they have and rarely have their most critical BI needs met. The next generation of small organizations will struggle to provide and manage the information services they so desperately need to survive. Unfortunately, one of the main reasons for the expedited failure of many small organizations will be their failure to deploy BI systems effectively and efficiently. Further, the emerging workforce will be more complex and will introduce many new dynamics related to the need to secure and exploit information services, and this will challenge leaders as at no time in history. Gagne (1962) argues that procedural material should be organized into a series of sequential steps that should be analyzed and divided into subunits. Within this series of sequential steps, trainees must master each subunit before the entire procedure is undertaken and thus validated, as in the MIFV process (DeSimone & Harris, 1998). This alone prompts leaders of small organizations to undertake a more in-depth exploration of the types of BI systems that could address the needs of their organizations.
The work undertaken by Gagne is critical in that it pioneered the requirement for a sequentially driven measurement of skills, which can be expanded to address the MIFV Model. Further, Gagne (1962) proposed that human performance could be divided into five distinguishable categories, each of which requires a different set of conditions for maximizing learning retention as well as knowledge transfer. Again, leaders are provided the opportunity to better understand the type of BI system that can benefit their organization. This is another piece of seminal work that, while basic on the surface, can be successfully expanded to fit within the MIFV Model. Much of Gagne's impact pertains to his work on knowledge, skills and abilities (KSA). The KSA measurement process has been the foundation for measuring and validating numerous aspects of a small organization's system deployments leading to the overall BI system. However, while this model looks first at knowledge and skills, which are easy to understand through basic systems sales materials, it does not go far enough in developing an understanding for those involved in the exploration and deployment of a BI system to address the complexities of the modern industry in which they operate and struggle to survive.
The KSA model, unlike the MIFV, does not validate a competency cluster; this allows the MIFV to impact the competency cluster body of research through its validation process as no model before it has, particularly as it relates to leadership development. Kurt Lewin conducted a considerable amount of groundbreaking work in the competency clustering process, especially with the United States military, pertaining to competency cluster modeling (Stevens, 2003). Further, Lewin devoted a great deal of effort and resources to devising a theoretical schema for representing environmental variables as they impinge upon individuals and their efforts to achieve specific competency cluster mastery (Chaplin, 1993). Lewin's work laid excellent groundwork for identifying key competencies. The MIFV uses this foundational premise from which to build competency clusters, which in this case focus on the exploration, development and deployment of BI systems for small organizations that are both
efficient and effective. The most significant piece of work conducted by Lewin with regard to competency clustering appears to be his Force Field Analysis Model. Briefly, Lewin's Force Field Analysis Model entails unfreezing behaviors, making changes, and refreezing desired behaviors. This is pertinent to the MIFV in that the model focuses on scanning, assessment, development and reinforcement as it relates to developing an effective small organization BI system.
The last significant aspect of the competency cluster body of research pertains to Albert Bandura, whose work placed him in the role of pioneer of Social Learning Theory. Within Social Learning Theory, Bandura espouses that there are three major aspects that relate to competency cluster modeling (Bandura & Walters, 1959). These three aspects form a triad in which human behavior is a continuous cycle of cognitive, behavioral and environmental influences (Bandura, 1975). This triad served as a key input into the MIFV Model, as too often leaders who deploy BI systems merely put in place what they see as easy quantification systems; generally, such systems are doomed to under-usage and often outright failure. To this point, the MIFV utilizes the continuous cycle of cognitive, behavioral and environmental aspects in the first two sections of the measurement model, the motivations and attributes aspects. Experience suggests that if these first two steps do not produce a favorable rating, the project's success rate is less than 5%. This continuous and interactive cycle provides a solid foundation from which to build a competency cluster validation model, bolstering the MIFV hypothesis in that both models have similar characteristics and both are in perpetual movement. Bandura was also one of the first to devise the concept of the "chunk" learning process, which is in keeping with what the MIFV attempts to refine and advance. Chunk learning pertains to a designated group, and competency clusters are combined into group or chunk competency clusters (Bandura, 1962). Through his work on "chunking" learning, he delineated four major steps in the chunking process. The "chunking" aspect was one of the most helpful aspects of competency models, as each section of the MIFV Model can be compared to a "chunk" of leadership development. Bandura's chunk learning was the impetus for the MIFV being a sequentially driven model with measurement aspects built into each sequential step.
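As a rough illustration of this sequentially driven, gated character of the MIFV, the Python sketch below (the step names and the 0.7 rating threshold are assumptions for illustration, not part of the published model) shows each step having to receive a favorable rating before the next is attempted:

    # Hypothetical gate: steps are validated in order and the sequence halts
    # at the first step that does not receive a favorable rating.
    MIFV_STEPS = ["environmental scan", "needs/task analysis",
                  "motivations", "attributes", "skills", "knowledge"]

    def validate_sequence(ratings, threshold=0.7):
        """Return the steps cleared in order, stopping at the first unfavorable one."""
        cleared = []
        for step in MIFV_STEPS:
            if ratings.get(step, 0.0) < threshold:
                break  # subunit not mastered; do not proceed further
            cleared.append(step)
        return cleared

    ratings = {"environmental scan": 0.9, "needs/task analysis": 0.8, "motivations": 0.5}
    print(validate_sequence(ratings))  # ['environmental scan', 'needs/task analysis']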
Explanation of the Model
Figure 2: The discombobulated nature of departments (operations, customer service, sales, accounting, finance, human resources) due to the lack of a working BI system.
The world of the small organization in the 21st century has evolved into a rapidly changing and highly complex knowledge-worker environment that requires an advanced approach to managing and exploiting information services through an effective and efficient BI system, which can be found in the implementation of the MIFV Model. This environment is based on simple-to-complex and complex-to-simple processes that require varying degrees of competency cluster mastery (Boyatzis, 1999). The small organization sector spends billions of dollars every year on many forms of information services products with little return on investment. The researchers will display an adequate needs assessment as well as the task requirements within the MIFV Model. As researchers delve into the task analysis process, they must be able to link the critical aspects of the overall competency cluster validation process to this model (Stenning, 2000). Through the research pertaining to the development of the MIFV, findings support that attributes generally encompass the non-technical aspects of competency cluster mastery. Skills are measured in hard data, such as the ability to operate a specific type of apparatus or measurement process (Dulewicz & Higgs, 1998). This aspect of the MIFV is the most often utilized and measured with regard to an organization's competency cluster process as it relates to developing leaders.
Business in its many different forms can be measured by number of employees, financial income, global presence, or even market share. This literature review explores what is defined as a small business, key factors in small business failure, what business intelligence (BI) is and how it impacts small business, why data are important in the BI model, and finally the impact of BI on small business success. This information develops a foundation of understanding for providing guidance to the target business groups and for implementing a usable, financially responsible model for success. One of the most important yet overlooked aspects of the economic market is the small business. Small businesses make up 99 percent of the twenty-one million entities that file taxes in the US, with half of them having fewer than five employees (Perry, 2001). A small business can be identified by either size or financial standing, which varies based on country of origin. The Small Business Administration (SBA) identifies the maximum number of employees as five hundred (500) to be classified as a small business.
This number varies by industrial class in the North American Industry Classification System (NAICS). Financial-receipt determination of small business standing can also vary by NAICS classification. A combination of employee numbers and sales receipts can also be used as a defining factor for the status of the organization, and this varies between the United States (US) and the European Union (EU). Small business plays a vital role in the sustainability of the social and economic community environment (Samujh, 2011). Small business failure is currently estimated at between 30 and 50 percent; this number is approximate because of variations in data, improper reporting, and market flux. There are many reasons for business failure, but it is most appropriately defined as occurring when a fall in revenue and/or a rise in expenses is such that the business becomes incapable of obtaining new funding (Shepherd, 2009). Important areas of success that can measure the ability to succeed include management capabilities and accounting (Wichmann, 1983). Managerial inadequacies have been linked to a number of factors, including lack of guidance, poor financial abilities, lack of necessary information, and overall financial stability (Gaskill, 1993). A second determining factor for business failure was directly related to size, location, type of merchandise, and operator type (Star, 1981). While poor management may be an identified factor, the ability of a manager to make informed, educated decisions regarding important business factors may be limited by a lack of business planning. Information provides the building blocks for creating a viable business, as well as a vision to allow success as the small business begins to move forward. The formation of a good business plan will aid in the preparation and collection
of information and aid in success; however, there needs to be a continual approach that remains viable for small business success.
Numerous models pertaining to the development and creation of a small business are currently in practice, yet they lack the ability to successfully implement business intelligence (BI). Business intelligence can be defined as a data-driven decision support system (DSS) that combines data gathering, data storage, and knowledge management with analysis to provide input to the decision process (Burstein, 2008). BI has been shown in current and past studies to be an effective strategy in medium to large organizations, providing invaluable information across business areas. Evidence suggests that most organizations, independent of size or capabilities, are reluctant to implement BI due to the presumed difficulty of implementation (Seah, 2010). BI provides value in many areas, but the tools and implementations needed to produce data points that can develop value in the short or long term require extensive planning and preparation. As many businesses have accepted the use of BI and shown its value, there has of late been evidence in Fortune 500 organizations of the formation of business units dedicated specifically to BI (Fuld, 1991).
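The definition above (data gathering, storage and analysis feeding a decision) can be illustrated with a deliberately small Python/SQLite sketch; the table, columns and figures are hypothetical and are not drawn from the paper:

    # Minimal data-driven DSS sketch: gather, store, then analyze to inform a decision.
    import sqlite3

    conn = sqlite3.connect(":memory:")  # data storage (an in-memory warehouse stand-in)
    conn.execute("CREATE TABLE sales (region TEXT, month TEXT, revenue REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
        ("north", "2012-01", 12500.0),  # data gathering
        ("north", "2012-02", 9800.0),
        ("south", "2012-01", 7400.0),
    ])

    # Analysis step: ranked revenue by region provides input to the decision process.
    for region, total in conn.execute(
            "SELECT region, SUM(revenue) FROM sales GROUP BY region ORDER BY 2 DESC"):
        print(f"{region}: {total:,.0f}")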
Small businesses are reluctant to embrace a BI model to help shape their current and future business practice. With the change in the global economic market, there has been extreme pressure on the private sector to find innovative ways to reduce cost and increase efficiency. Business is experiencing environmental changes as a result of the new economics of information and the global nature of competition (Evans, 2000). Furthermore, organizational survival is dependent on the integration of knowledge, fostering the willingness to adapt to the environment (Dijksterhuis, 1999). One key factor in the decision by many small business owners to avoid BI systems is the potential return on investment (ROI). “If a product can’t survive an ROI analysis without a ‘measurable’ return, then organizations shouldn’t buy it” (Greenbaum, 2003). The cost of the BI systems available today, as well as their upkeep, is high; even as costs for IT and upkeep continue to decline, the cost does not outweigh the ROI (Negash, 2003), making the purchase an unappealing proposition for small business owners. What is required is a model whose ROI is more in line with the cost of implementation and continuous operation, so that it provides true value. The reluctance to adapt to the growing influx of information puts the small business at a disadvantage in identifying possible advantages in the marketplace and seizing the opportunity to expand into a niche of growth that would otherwise be missed. The underlying change in most informational opportunities is the change in the application of information technologies in the organization (Doherty, 2003).
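A simple ROI screen in the spirit of the Greenbaum (2003) quotation can be sketched as follows; the cost and gain figures are purely illustrative assumptions:

    # Hypothetical multi-year ROI check for a BI purchase decision.
    def roi(annual_gain, annual_cost, upfront_cost, years=3):
        """ROI = (total gain - total cost) / total cost over the horizon."""
        total_cost = upfront_cost + annual_cost * years
        total_gain = annual_gain * years
        return (total_gain - total_cost) / total_cost

    r = roi(annual_gain=15_000, annual_cost=8_000, upfront_cost=40_000)
    print(f"3-year ROI: {r:.0%}")  # about -30%: the purchase fails the ROI test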
Many organizations, large and small, collect vast amounts of information in the course of business but do not have the capabilities to process the data (Ranjan, 2008). The information is collected and stored in a data warehouse. BI tools are accepted and generally considered middleware in most IT infrastructures (Sahay, 2008), utilizing the existing data warehouse. With vast amounts of data, the task is to provide a viable solution to first parse the data and then identify value in it. There are currently many types of commercial off-the-shelf products available to aid in the parsing and mining of data, but they are not always properly designed to provide value. A key to business success is ultimately the correlation of the data to provide a clear, workable business process that makes sense. Current key methods include extraction, transformation and loading (ETL), data warehousing, database query and reporting, multidimensional/online analytical processing (OLAP) data analysis, data mining and visualization (Ranjan, 2008). The idea of using automated business systems to disseminate information to various areas of any industrial, scientific, or government organization (Luhn, 1958) is not new, but there is continuous hesitation to implement automated systems to aid in informed decision-making. With the continuously
changing business dynamic, driven both by market stability and changing technology, it has become even more important to be able to predict consumer/customer trends in time to adjust strategies to meet these demands (Toni, 2001). To accurately predict changes, data must be gathered and evaluated with efficiency and accuracy. Current BI models do not provide the scalability to meet the needs of the small business to the point where their use would be beneficial.
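Of the methods listed above, ETL is the one a small organization typically meets first. A minimal Python sketch, assuming a hypothetical orders.csv export with order_id and amount columns, shows the extract-transform-load pattern into a SQLite warehouse:

    # Extract rows from an operational export, clean them, and load the warehouse.
    import csv
    import sqlite3

    warehouse = sqlite3.connect("warehouse.db")  # the data warehouse
    warehouse.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount_usd REAL)")

    with open("orders.csv", newline="") as src:          # extract
        rows = [(r["order_id"], float(r["amount"]))      # transform: cast to numeric
                for r in csv.DictReader(src)
                if r["amount"].strip()]                  # drop rows with blank amounts
    warehouse.executemany("INSERT INTO orders VALUES (?, ?)", rows)  # load
    warehouse.commit()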
With the rate of small business failures and the ever-increasing change in the global market, small business owners must determine what will put them in a position to thrive. Many look at financial criteria to measure the success of a small business, yet there are many motivating factors for business owners beyond financial gain (Walker, 2004). Regardless of the motivation for success, information is key. Obtaining the right information in a manner that adds to success rather than driving the business to failure is the proverbial precipice that must be negotiated to avoid a fatal fall. Continuous improvement between business processes and IT integration is an important factor in success (Turkman, 2010). Furthermore, with the implementation of a BI model designed for a small business, the possibility of generating the data needed for success is present.
THE MIFV MODEL DEPLOYMENT AND METHODOLOGY
Step 1: The Environmental Scan
• Current IT situation & business environment
• Organizational culture & industry position
• Time-value-cost & future positioning
Figure 3: The environmental scan must move in all major directions
The initial action associated with the MIFV pertains to conducting an environmental scan. The environmental scan aspect of the MIFV Model is the critical first step in the effort to develop effective and efficient BI systems in today's highly complex and competitive world of work. An environmental scan encompasses a thorough review of the various environments in which the organization operates during BI development activities. Currently, there is a plethora of information services ranging from information technology systems and software to consultants, document management services and products. To this end, small organizations are constantly hounded by salespeople in the endless-supply sector of information services products. With either very small or non-existent information technology departments, small organizations are at a severe disadvantage when attempting to explore and deploy a BI system that puts them in a competitive position when it comes to information services. It is critical that the small organization understand a number of key aspects of its environment as it pertains to identifying and deploying an effective and efficient BI system. While there are a number of key aspects of a small organization's environment that it must understand, in order to deploy a successful BI
system it must master the environmental areas of employee capability base, politics, industry viability and customer behaviors. The capability of a small organization's employee base will determine the level of success, if any, of its BI system.
More often than not, the small organization is forced to utilize the talent in place as it deploys its BI system. Very often, small organizations purchase BI systems that are beyond the capabilities of their employee talent levels. As such, the small organization is left with a system that is used at only a fraction of its functional capabilities. Often the employee base is left frustrated and bewildered, which causes its own set of potential problems for the small organization. To cope, employees will often ignore functional aspects of the BI system or devise a fragmented data mart that makes sense to them, which defeats the purpose of the organization-wide BI system. Further, the BI system will become fractured, resulting in out-of-control data marts, spreadsheets and other types of ad hoc systems pulling away from the central effort.
There are a number of ways an organization can scan its employee base. The first is assessing an employee's output record related to efforts directed toward a BI system. This record can provide artifacts that can be assessed against the potential BI systems being considered by the small organization. Second, a survey or other type of quantifiable assessment can be deployed for those who would become key players in the deployment and maintenance of a BI system, as well as those who may be training other employees in the BI system. The last key action that can be used to scan the employee base is interviewing those who would play key roles in the BI system. The interview questions would be open-ended so as to allow employees to provide insight into their ability to play a key role in the small organization's BI system. While there are a number of tools that can be used to conduct an environmental scan of the employee base, those mentioned above are the most likely to provide the needed insight into the opportunity for the small organization to implement an effective and efficient BI system.
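The survey option mentioned above lends itself to a simple quantitative roll-up. The sketch below (skill areas, scores and the 3.5 readiness threshold are illustrative assumptions) averages responses per BI skill area and flags where training is needed:

    # Hypothetical aggregation of an employee-base BI readiness survey (1-5 scale).
    from statistics import mean

    responses = {
        "reporting tools": [4, 5, 3, 4],
        "data entry discipline": [2, 3, 2, 3],
        "spreadsheet analysis": [5, 4, 4, 5],
    }

    for area, scores in responses.items():
        avg = mean(scores)
        status = "ready" if avg >= 3.5 else "training needed"
        print(f"{area}: {avg:.1f} ({status})")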
Small organizations often ignore the political environment in which they operate. This can be a key success or failure point for a BI system, especially for small organizations that operate across state and international borders. In scanning the political environment, a small organization must first look at regulatory and compliance factors that must be included in any BI system. In scanning this environment, the small organization must gain keen insight into what it needs in order to influence this area as well as what pressures will emanate from its political environment. The collection and assessment of political artifacts is the most efficient way a small organization can scan this environment so as to ensure the BI system meets the political requirements facing it. Further, the small organization must build flexibility into any BI system it deploys so as to make critical adjustments as the political landscape changes.
Small organizations tend to operate in a tenuous environment, as many of these companies are purchased, absorbed or simply go out of business. Further, these types of organizations focus on a niche aspect within a fluid industry. To this point, the small organization must scan its industrial environment so as to ascertain the form of BI system best able to put it in a position to be successful as well as significant in its industry. Further, if the small organization successfully scans this aspect of its environment, it can exploit information services not only to put itself in a secure position, but possibly to position itself as a leader in its industry. The key aspect of scanning the environment with regard to the deployment of an effective and efficient BI system by a small organization relates to customer desires. The assumption is made that the small organization has a firm
grasp of the needs of its customer base. As such, this aspect illustrates the need to conduct an environmental scan related to customer desires. These are the aspects that will allow a small organization to gain a superior position in the area of BI. With a good understanding of customer desires, all facets of the organization can adjust to meet these needs or adjust to future needs. A small organization can scan the customer-desires environment by analyzing organizational artifacts, as is the case with other environmental scanning efforts. However, a small organization can also analyze various social media sources to trend out the direction of customer desires within its industry. It is this aspect of the environmental scan that can greatly aid a small organization in gaining an advantage through BI, as scanning the social network sector is often not undertaken by small organizations. With the proper collection of data points from social media sites, an organization can not only identify key avenues of potential growth but also start to identify political and social changes that may not be visible elsewhere. Furthermore, data mining of social sites can provide an insightful understanding of key competitors in the organization's niche industry.
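A very small sketch of this kind of social media trending (the posts, keywords and time buckets are invented for illustration) might count keyword mentions by month to surface shifting customer desires:

    # Hypothetical keyword-trend count over a set of collected social media posts.
    from collections import Counter

    posts = [  # (month, text) pairs gathered from social sources
        ("2012-01", "wish the widget came in a smaller size"),
        ("2012-02", "love the smaller widget idea, also want free shipping"),
        ("2012-02", "free shipping would make me switch brands"),
    ]
    keywords = ["smaller", "free shipping"]

    trend = Counter()
    for month, text in posts:
        for kw in keywords:
            if kw in text.lower():
                trend[(month, kw)] += 1

    for (month, kw), count in sorted(trend.items()):
        print(f"{month}  {kw}: {count}")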
As mentioned earlier, the environmental scan is the first critical step in the sequentially driven MIFV Model as it pertains to small organizations implementing BI systems. Upon initiating the environmental scan, the small organization must ascertain which critical environments will be scanned as part of the MIFV modeling process. This step cannot be stressed enough, as it initiates the small organization's BI MIFV Model. After all, if an organization does not understand the environment in which it is attempting to explore, develop and deploy an effective and efficient BI system, the effort will be doomed from the start.
Step 2: Needs assessment and task analysis
Figure 4: Needs assessment and task analysis
The second step within the assessment aspect of the MIFV involves conducting a needs assessment. This aspect of the MIFV allows an organization to create a plan of the needs that will be addressed during the small organization's BI deployment. To this point, the first critical aspect involves the ultimate goal of the process to be undertaken. In the case of the MIFV, the ultimate goal would be
validating the desired competency cluster to be mastered, which in this case is the deployment of an effective and efficient BI system within a small organization. However, the researcher must identify the challenges that will be addressed within the needs assessment as well as the task analysis process prior to undertaking a competency cluster process (Stenning, 2001). As researchers delve into the task analysis process, they must be able to link the critical aspects of the overall competency cluster validation process (Stenning, 2000). It is within this step of the MIFV modeling process that analysis and assessments begin to convert into actions and steps to achieve the desired competency cluster as it relates to the small organization's BI system implementation. It is critical, in the aforementioned steps of the MIFV Model as well as those that follow, for a small organization to assess the best approach to deploying a BI system that will benefit it to the greatest extent possible.
Step 3: Motivation
Figure 5: Motivational step within the funneling process
The motivation level of the MIFV is the first level of the action-steps phase of the MIFV Model. According to the data ascertained within this study, motivation has rarely been formally incorporated into a competency cluster model. However, the MIFV Model asserts that motivation is the pathway to developing a fully integrated and functional competency cluster validation that meets the demands of today's small organizations as they seek to deploy effective and efficient BI systems. The motivation level allows the researcher to encapsulate the various aspects that compel a small organization to act in implementing a BI system. This step in the model creates the initial formula for the successful implementation of the MIFV Model. There is a critical need for a small organization to conduct some form of motivation inventory or measurement. This will allow the small organization to assess whether it and its employee base are compelled to undergo the arduous task of implementing a BI system.
Step 4: Attributes
Figure 6: Attributes step within MASK funneling process
The second level of the MIFV action plan pertains to the measurement of attributes. The measurement of these attributes is focused on the various properties, qualities and characteristics needed to successfully negotiate the MIFV process. Further, this aspect of the MIFV should encompass aspects of past quasi-competency cluster models expressed in terms such as attitude, values, integrity, qualities, principles, maturity and accountability. Through the research pertaining to the development of the MIFV, findings support that attributes generally encompass the non-technical, value-added aspects of competency cluster mastery. The attribute level expands and adds to the motivation aspects of the MIFV in that it brings value-added aspects to the cultural and work environment.
Step 5: Skills
Figure 7: Skills level within the MASK Model process
The skills level of the MIFV involves the actual "tools" that a small organization possesses to deploy the mastery needed to successfully implement a BI system through the competency clustering process. Generally, skills are measured in hard data, such as the ability to operate a specific type of apparatus or measurement process (Phillips, 1996). This aspect of the MIFV is the most often utilized and measured with regard to an organization's competency cluster process as it relates to developing
leaders. While this aspect of the MIFV seems to be the quickest and easiest to implement and measure, on its own it does not provide an all-inclusive and successful competency cluster model. Further, while skills can often be obtained from the other aspects of the MIFV, this level must be included so as to drive a successful competency cluster process.
As illustrated in Figure 7, the key aspect in the implementation of a successful BI system is a complete tool box. It is understood that a small business does not always possess the largest talent pool from which to build the "tool box"; however, with a small investment in talent and process awareness, the ability to develop both working talent and ability is achievable.
Step 6: Knowledge
Figure 8: Knowledge level within the MASK Model process
The knowledge step of the MIFV explores what one knows in the context of the modeling process. Traditional competency cluster models have explored knowledge first within the scope of their studies. However, the MIFV narrows this scope and analyzes these components last in the process, as this allows a small organization to delve first into the cultural and environmental fit of the BI system within this validation model.
Figure 9: Continual Motion Federated Model (admin, customer service, accounting, sales and marketing, human resources, operations). This model represents the desired state of a small organization's BI system.
CONCLUSION
The MIFV has shown itself to be a genuine and credible competency cluster validation model that can be successfully applied within three critical business sectors. The sectors to which this model could be applied in the near future are as follows:
1. The business community in general, within the United States and the world;
2. The various school organizations throughout the United States and the world; and
3. The various levels of governmental entities within the United States and the world.
The research contained two clearly delineated points pertinent to the viability of the MIFV within the small organization population. Point one: while the organizations participating in this study have made varying degrees of progress with competency cluster processes, those processes do not rise to the level of the MIFV process. Point two: the MIFV will strongly enhance the current efforts of the organization. Finally, the MIFV will allow organizations to provide more quantitative performance feedback to their customers, which will add value to their BI implementation efforts. Competency cluster validation is the key to providing small organizations with the opportunity to explore, develop and implement an effective and efficient BI system that can meet the requirements of their industry while staying within their resource constraints. The MIFV can become the standard for developing the competency cluster validation process that will link traditional "skills" processes with critical motivations, attributes and knowledge aspects. The blending of the various components contained within the MIFV will take the competency cluster validation process beyond the general trades and into the knowledge-worker environment of today. With the appropriate research, application of the MIFV has unlimited potential for deploying BI systems in small organizations, creating opportunities for success in the current turbulent business environment with its ever-shrinking avenues of success.
REFERENCES
Bandura, Albert & Walters, Richard H. (1959/2002). Juvenile delinquency: Parent and Teenager.
Ronald Press Co. (New York)
Boyatzis, Richard E. (1982). The competent manager: A model for effective performance. New York:
John Wiley & Sons.
Boyatzis, Richard. E. (1999). Self-Directed Change and Learning as a Necessary Meta-Competency for
Success and Effectiveness in the 21st Century. In Sims, R., and Veres, J.G. (eds.), Keys to
Employee Success in the Coming Decades, Westport, CN: Greenwood Publishing. pp. 15-32.
Boyatzis, Richard.E. (1999b). The financial impact of competencies in Leadership and Management of
Consulting
Boyatzis, Richard.E., Leonard, D., Rhee, K., and Wheeler, J.V. 1996. Competencies can be developed,
but not the way we thought. Capability, 2(2). P.25-41.
DeSimone, Randy L., Werner, Jon L. and Harris, David M. (2002). Human Resource Development,
Third Edition. Fort Worth, TX: Harcourt.
Dollars Spent on Employee Training in the United States. Retrieved from http://maamodt.asp.radford.edu/HR%20Statistics/dollars_spent_on_training.htm. Accessed Oct. 5, 2011.
Dulewicz, Victor, and Higgs, Gilleard M. (1998). Emotional intelligence: Can it be measured reliably and validly using competency data? Competency, 6(1), pp. 28-37.
Dulewicz, Victor, & Herbert, Pardes (1999). Predicting Advancement to Senior Management from
competences and personality data: A 7-year follow up Study. British Journal of Management, 10,
13-22. (Awarded the ‘Highest Quality Rating’ by ANBAR)
Gagné, Robert M. (1962). Military training and principles of learning. American Psychologist, 17, 83-91.
Gagnè, Robert. (1985). The Conditions of Learning and the Theory of Instruction, (4th ed.), New York:
Holt, Rinehart, and Winston.
McClelland, David C. and Boyatzis, Richard E. (1982). Leadership motive pattern and long term
success in management. Journal of Applied Psychology. 67, pp. 737-743.
Maor, David. and Phillips, Robert. A. (1996). In McBeath, C. and. Atkinson, R. (Eds), Third International
Interactive Multimedia Symposium, Vol. 1. Perth, Western Australia, pp. 242-248.
Phillips, Robert. A. and Maor, David (1996). In Christie, A. (Ed), Australian Society for Computers in
Learning in Tertiary Education Conference, Vol. 1. Adelaide, Australia.
Phillips, Robert. (1997). The Developer's Handbook to Interactive Multimedia - A Practical Guide for
Educational Applications. Kogan Page, London.
Sanghi, Seema (2007). The Handbook of Competency Mapping: Understanding, Designing and Implementing Competency Models in Organizations. Response Books, New Delhi, India.
Stenning, Walt (2000). Personal correspondence.
Stenning, Walt (2001). Personal correspondence.
Stevens, Jeffery Allen (2003). The motivations-attributes-skills-knowledge competency cluster validation model: An empirical study. Doctoral dissertation, Texas A&M University, College Station.
AUTHORS' BIOS
Jeffery Stevens
Dr. Stevens holds a Ph.D. from Texas A&M University in leadership and process engineering. He also holds two master's degrees, one in Human Resources and the other in General Management. Dr. Stevens has more than 16 years of experience in the areas of HR, management and process engineering within the private sector. Currently, he is the President and CEO of an international consulting company that aids small and mid-size companies in growth and process refinement. Within the academic realm, Dr. Stevens has conducted research and published several articles in the areas of HR, statistics, research methodology and homeland security, as well as groundbreaking research in the area of virtual education. He has taught a wide array of courses in both campus and online settings. Dr. Stevens is a former Oregon State baseball player who has coached football, baseball and basketball at the college and high school levels. He has been married to Gloria for 20 years. They have two teenagers, Amanda and John. He enjoys working with youth, the outdoors and, most especially, saltwater fishing. As a disabled veteran of the United States Army, Dr. Stevens has undertaken an extensive effort to work with the United States military on educational initiatives. He has worked with universities and corporations to better engage this population. Much of the work he undertakes in this area is with Wounded Warriors at Brooke Army Medical Center. Dr. Stevens has more than 15 years of experience in higher education. He has published widely on process engineering, human capital, the use of IT in nurturing learning styles, and military learners. His current research interests are adult learners, the use of statistics in improving business processes, homeland security and disaster management. Dr. Stevens has been the recipient of many honors, including recognition by the American Council on Education. He is listed in "Who's Who in American Executives" for this work. He is currently working with various universities to create web portals with the United States military that provide more standardized accreditation processes and allow the two systems to communicate better.
J. Thomas Prunier
Mr. Prunier is a doctoral student at Colorado Technical University and an Adjunct Professor of Computer Science at Southwestern College. Mr. Prunier was a member of the 44th President of the United States' National Security Telecommunications Advisory Committee and is currently the Chief Forensic Scientist/Cyber Intel Analyst Principal for a Fortune 50 company. Mr. Prunier has been a staff instructor at the Federal Bureau of Investigation's training academy and has provided business process assessments for multiple government organizations.
Kurt Takamine
Dr. Kurt Takamine is Chief Academic Officer/Vice President and Academic Dean at Azusa Pacific
Online University (APOU). He is Professor of Organizational Leadership at APOU, and was previously
the Academic Dean and Associate Professor at Brandman University in the School of Business and
Professional Studies. Dr. Takamine was the Vice-Chair of the Greenleaf Center for Servant-Leadership from 2008-2011, has served as a referee and editor for the International Journal of Servant-Leadership and the Journal of Leadership Educators, and is published in peer-reviewed journals and books. Kurt received the
Distinguished Educator of the Year award from the Leadership Institute for Education in 2006 (part of
the Greenleaf Center for Servant-Leadership) and the Outstanding Teacher of the Year from Brandman
University (2011). Dr. Takamine has also consulted or trained at various Fortune 500 corporations,
including IBM, Northrop-Grumman, Raytheon, Shell Oil, Microsoft, The Boeing Company, etc.
D. Sams and P. Sams
JISTP - Volume 5, Issue 12 (2012), pp. 49-59
Full Article Available Online at: Intellectbase and EBSCOhost │ JISTP is indexed with Cabell’s, JournalSeek, etc.
JOURNAL OF INFORMATION SYSTEMS TECHNOLOGY & PLANNING
Journal Homepage: www.intellectbase.org/journals.php │ ©2012 Published by Intellectbase International Consortium, USA
ROA / ROI-BASED LOAD AND PERFORMANCE TESTING BEST
PRACTICES: INCREASING CUSTOMER SATISFACTION AND
POSITIVE WORD-OF-MOUTH ADVERTISING
Doreen Sams1 and Phil Sams2
1Georgia College & State University, USA and 2Sr. Load & Performance Engineering Consultant, USA
ABSTRACT
Effective and efficient load and performance testing is a critical success factor (CSF) for a
company’s e-commerce systems with regard to meeting performance and load objectives, and
thus revenue goals. It is now necessary to enable remediation of both performance and load issues, not simply to report test results. With the rapid growth of load and performance testing, best practices should be identified. Any company's assets (e.g., time and money) are finite; thus, this paper describes tool-independent load and performance testing best practices and how a trusted consultant can help increase the return on assets (ROA) and return on investment (ROI) for clients. In
this article load and performance testing goals, methodologies, processes, procedures, and outcomes
(results) will be discussed individually. The processes provided in this paper present best practices for
both load and performance testing and can be used as a template for testing different systems.
Consequently, less effort would be required for development and maintenance, and more resources would be freed to be dedicated to the required testing project, thus increasing customer satisfaction through increased return on assets and return on investment. Satisfied customers are expected to be return
customers and share their satisfaction through positive word-of-mouth advertising.
Keywords: Computer Information Systems, Load & Performance Testing, Return on Assets, Return
on Investment, Customer Satisfaction.
INTRODUCTION
The typical company’s critical business processes rely on a growing number of software applications
and the supporting infrastructure of those applications. For example, a typical retail company’s critical
business processes are heavily reliant on complex, distributed applications and the supporting
infrastructure. Thus, one misstep from even a single application (e.g., a slow response) has a negative impact on the system that can lead to system failure (a domino effect). Therefore, an information systems problem is also a business problem. Reliance on complex information technology creates a risk of dissatisfied customers when applications do not meet customers' operating performance expectations, which in turn increases the risk to the company's business processes. Addressing this risk is
complex as there are several factors that may cause poor application performance. One prominent
factor is “wishful thinking”, where little or no attempt is made during the software development life cycle
(SDLC) to ensure acceptable performance of an application. A related factor is a tendency of
information technology (IT) organizations to significantly change the IT infrastructure without forecasting
the impact of changes on application performance in the short run or over time (Shunra, 2011a). The
risk to the business deepens with the servers' ability to function under remote end-user loads. When end-users are remote, servers maintain each session for a longer period of time than with onsite users; thus, the operating system resources (e.g., sockets, threads, processes) are used for prolonged periods of time. Under these complex conditions, customer satisfaction is performance driven. Preventing slowdowns or catastrophic system failures is no longer simple to test for and resolve. If the application is certified for deployment but cannot handle the corporate network, there is little anyone on the network team can do to fix it. Thus, quality assurance (QA) (i.e., prevention of defects) alone is not effective, as it does not take cascading problems within a system into account, and quality tests therefore typically generate skewed results. Consequently, the responsibility falls squarely on the development team
to verify that an application functions properly under real world conditions of the corporate network
(Shunra, 2011b).
According to the Aberdeen Group (June, 2008 Benchmark report), customers are won or lost in one
second based solely on the performance of web and e-commerce applications. Just a single second of
delay for an online retail site equates to a “seven percent reduction in customer conversion” and thus
lost revenue. Today, the issue is even more pressing as IT infrastructure moves into new innovations such as cloud computing, mobile, and virtualization that introduce additional challenges. Therefore, effective and efficient load and performance testing is a critical success factor (CSF) for a company's e-commerce system to meet the necessary system under test (SUT) performance and load objectives,
and thus meet the company's financial goals. This concept, although relatively new, was addressed as early as 2006, when it was recognized that functional testing of Web or e-commerce applications or load-performance testing alone was not sufficient (Gallagher, Jeffries, and Landauer, 2006). A study by Gallagher et al. (2006) revealed that application functional, regression, and load-performance testing
had become more generally accepted as a necessity in the SDLC. Front-end quality assurance [QA] is generally thought to provide significant value (i.e., cost versus benefit) with regard to reducing performance and load defects, thus reducing the costs of performance testing. Study after study,
such as the study by Pressman (2005) ”Software Engineering: A Practitioner’s Approach,” has shown
that as a defect progresses from requirements to the next phases of the software development life cycle
(SDLC), the approximate cost of fixing a defect increases by a factor of ten at each phase of the SDLC.
According to the National Institute of Standards and Technology (NIST), 80% of costs in the development
stage are spent on finding and fixing defects. The NIST suggests that an effective and efficient
methodology necessitates a preemptive approach of building performance and compliance into the
front-end product, which reduces issues and costs in the long run (Pressman, 2011). In other words, by
the time a defect makes it through requirements, design, development, testing, and into production the
costs of fixing the defect increase exponentially if a preemptive strategy is not in place to reduce
defects. Performance optimization (i.e., preemptive remediation of defects) necessitates proactively
identifying and addressing issues early in the SDLC. Consequently, in order to achieve an optimal level
of success with finite company assets, best practices should be identified and implemented.
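The factor-of-ten escalation can be made concrete with a small back-of-the-envelope calculation. The sketch below is illustrative only: the $100 base cost and the five-phase list are assumptions for the example, not figures from the studies cited above.

    # Illustrative sketch only: compounding the "x10 per SDLC phase" rule of thumb
    # for the cost of fixing a defect, starting from an assumed $100 at requirements.
    PHASES = ["requirements", "design", "development", "testing", "production"]

    def defect_fix_cost(base_cost, phase, factor=10):
        """Approximate cost of fixing a defect found in `phase`, assuming the
        cost grows by `factor` at each phase after requirements."""
        return base_cost * factor ** PHASES.index(phase)

    if __name__ == "__main__":
        for phase in PHASES:
            print(f"{phase:<12} ~${defect_fix_cost(100, phase):,.0f}")
        # requirements ~$100, design ~$1,000, development ~$10,000,
        # testing ~$100,000, production ~$1,000,000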
Load and performance testing must now facilitate defect or issue remediation, not simply report test results. Although research exists in automated software load testing (i.e., number of concurrent users, transaction rate, etc.) and performance testing (i.e., transaction response rate, maximum number of concurrent users, etc.), the benefits of a systems approach to testing (i.e., holistically examining the system rather than individual components in isolation), relative to early production development and life cycle testing, have received little attention from academics or practitioners. The cost
of ignoring a systems approach to life cycle load and performance testing can be ruinous to the company and/or the customer. It is further recognized that application performance (i.e., the performance of the system components that make up the application tier) is one of the greatest concerns of many software organizations and yet one of the least understood and implemented testing tasks. Software performance testing is very different from functional testing (i.e., a software verification process that determines correct or proper application behavior for only one user) or load testing, and as such requires a domain of expertise far beyond conventional testing methods and practices (Gallagher et al., 2006). However, including software load and performance testing in the front-end of the mainstream quality assurance and/or quality control (i.e., detection 'testing' and removal of software defects) phase of the SDLC is of foremost importance in averting risks, and realizing cost savings, across many types of software applications. Nevertheless, building performance into the front-end does not mean that load or performance issues can be forgotten throughout the product life cycle. However, testing early in the SDLC does have the potential to reduce the cost associated with software application performance, as well as traditional software vulnerabilities.
This article fills a gap in the literature by identifying tool-independent load and performance testing methodologies, processes and procedures, goals, and best practices which enable the trusted consultant to increase the return on assets [ROA] (i.e., a measure of how much profit is generated from a company's assets, usually expressed as a percentage; it indicates how efficiently an asset generates net income and measures a company's earnings in relation to all of the resources at its disposal) and return on investment [ROI] (i.e., a financial metric used to evaluate the efficiency of an investment, usually expressed as a percentage) for their clients.
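As a point of reference for the two metrics just defined, the following minimal sketch expresses them as formulas; the figures in the usage comment are hypothetical examples, not client data.

    # Minimal sketch of the two financial metrics as defined above.
    def return_on_assets(net_income, total_assets):
        """ROA: profit generated per dollar of assets, expressed as a percentage."""
        return 100.0 * net_income / total_assets

    def return_on_investment(gain_from_investment, cost_of_investment):
        """ROI: efficiency of an investment, expressed as a percentage."""
        return 100.0 * (gain_from_investment - cost_of_investment) / cost_of_investment

    # Example (hypothetical figures):
    # return_on_assets(250_000, 2_000_000)      -> 12.5 (%)
    # return_on_investment(180_000, 120_000)    -> 50.0 (%)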
LITERATURE REVIEW
A financially sustainable company wisely plans the use of limited resources (i.e., materials and human
capital); thus, to be sustainable means planning for software performance must begin in the research
and development (R&D) stage of a product’s life cycle (i.e., product development). This is a time when
expenditures may be high and sales are zero. From this stage, the product enters the introductory
stage, in which sales growth is slow, marketing costs are high, and expenses heavy. From there the
product enters its growth stage and then there is a maturity period that leads into the final stage known
as the decline stage (Kotler and Armstrong, 2011). However, not all products follow this typical life
cycle and many go into decline rapidly for various reasons. These products are financial failures for the
company. On this premise, benefits gained through early defect prevention enabled by early automated
testing in the R&D stage of the product life cycle are expected to significantly outweigh the financial
costs involved in fixing the problems later, loss of business, and/or negative word of mouth and possible
lawsuits (Sams and Sams, 2010). Consequently, less effort would be required for development and
maintenance, and more resources will be free to be dedicated to the required testing project; thus
increasing customer satisfaction through increased ROA and ROI.
Through the performance vulnerabilities of the software product comes the financial vulnerability of the company, and both directly affect customer satisfaction. A research study by Mayer and Scammon (1992) proposed that companies benefit financially when consumers hold a positive attitude toward the brand and the company. Therefore, customer satisfaction must be measured by the company through formal approaches that measure performance on specific characteristics and against predefined standards. However, satisfaction through the customers' eyes is just as important, if not more important, because customer satisfaction is measured through the quality of the product's performance (i.e., ability to perform its functions effectively and efficiently in a timely
fashion) and conformance (i.e., freedom from defects), while high quality also involves consistency in
the product’s delivery of benefits that confirm (i.e., meets or exceeds) the customer’s expectations.
Satisfied customers are expected to be return customers and share their satisfaction through positive
word-of-mouth advertising. However, if the product meets or exceeds conformance, but does not
function at the level of the customer’s performance expectations consistently, the customer is
negatively disconfirmed and thus dissatisfied. Dissatisfied customers may abandon the product and the
company, which results in a financial loss to the company. Moreover, even greater damage comes from
negative word of mouth advertising from customers that are dissatisfied. In today's electronic age,
negative word of mouth spreads at Internet speed and the outcome to the company can be
catastrophic. Thus, engaging in holistic (i.e., systems) software performance and load testing in the developmental stage of the product life cycle, and identifying risks to the company from even what may be perceived as the smallest threat, gives the company the means by which immediate and long-term financial risks may be avoided.
A risk analysis for the company, by its nature, must assess risk costs based on the actual risk, the size
of the risk (as to the extent of cascading effects), its immediate and long-term impact on the company's sustainability, and prevention costs (i.e., personnel, software packages, etc.) versus benefits to the
company in the short and long run. In the short run, upfront costs come from the purchase of automated
load and performance software testing tools ranging in cost from $5,000 - $250,000+ for tools, plus
typically 20% for annual maintenance. Additionally, other expenses typically include a set amount of tool-specific training factored into initial costs. Automated software load and performance testing is a highly
specialized area within the computer science field and requires extensive software systems knowledge,
and tools training as well as a minimum of a four-year computer science degree. Therefore, companies
often need to hire automated software consultants. Consultants are used for short-run initiatives and a
company may pay an automated testing software engineer anywhere from $60,000 to $150,000
annually plus travel and expenses. These consultants’ contracts typically run from three months to a
year depending on the project and the company’s perceived needs. The consultants are often
contracted for companies that have short-term needs such as load and performance testing. The
variation in salary is based on the software engineer’s expertise with automated testing tools,
experience in the field, educational degrees, and the level of risk associated with the company’s
product (e.g., medical supply companies such as Baxter Health Care must, by the nature of their
product and the level of risk to the client and the company, hire extremely talented, competent, and experienced automated test engineers).
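A simple way to frame these upfront costs for a client is a first-year cost model. The sketch below only combines the ranges quoted above; every input is an assumption to be replaced with actual quotes from vendors and consultants.

    # Illustrative first-year cost model for an automated load/performance testing
    # effort: tool license + typical 20% annual maintenance + prorated consulting.
    def first_year_testing_cost(tool_license, consultant_annual_rate,
                                engagement_months, maintenance_rate=0.20):
        """Rough estimate of year-one spend; all inputs are assumed figures."""
        maintenance = tool_license * maintenance_rate
        consulting = consultant_annual_rate * (engagement_months / 12.0)
        return tool_license + maintenance + consulting

    # e.g. a $50,000 tool and a $120,000/yr consultant on a 6-month engagement:
    # first_year_testing_cost(50_000, 120_000, 6) -> 120000.0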
The processes provided in this paper present best practices for both load and performance testing and
can be used as a template for testing different systems. These best practices lead to a reduction in
effort required for development and maintenance freeing up resources to be dedicated to the required
testing project; thus, increasing customer satisfaction through increased ROA and ROI.
Requirements are the cornerstone of the testing phase of the software development life cycle (SDLC),
as they should be for the entire SDLC. Thus, best practices dictate the necessity of determining what is needed: 1) automated functional regression testing (i.e., verifying that what was working in a previous application release still works in subsequent releases), 2) load testing, 3) performance testing, or 4) a hybrid. Why? Because oftentimes a client expresses a need or want for "load and performance testing" when in fact what they need or want is a load test or a performance test. One of the
primary tasks of the consultant is to interview the client for clarification.
Load and performance testing is exponentially more challenging than automated functional or regression testing. There is much to learn about the system under test (SUT): the system/server
hardware and software, the end users, the critical success factors (CSF) of the SUT, the network over
which end-users will access the SUT, the production datacenter environment (e.g. architecture,
topology, hardware, and software) where the SUT will operate. Additionally, identifying datacenter site
capacity limitations and network parameters (e.g. bandwidth, latency, packet loss, etc.) are also
important. Best practices further dictate accurately modeling real-world users' actions and transactions. In this vein, there is also a need to accurately model the networks and devices end
users employ to access the systems. For example, Shunra Performance Suite is an integrated software
and hardware solution which accurately simulates networks in the lab environment. At the center is a
high performance and scalable network appliance that acts as a router or bridge. The user is able to
configure it within a local area network (LAN) to change network impairments, such as the speed traffic
travels across the LAN, to those of the target wide area network (WAN). This accurately simulates the
production or expected network in the test lab.
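The reason such network modeling matters can be seen with a first-order estimate of fetch time from bandwidth and round-trip latency. The sketch below is a simplification (single resource, no protocol overhead) and is not a feature of any particular product; all figures are illustrative.

    # Rough sketch of why WAN parameters matter: latency cost of the required
    # round trips plus raw transfer time at the link's bandwidth.
    def estimated_fetch_seconds(payload_bytes, bandwidth_mbps, rtt_ms, round_trips=1):
        """First-order page fetch time under assumed network conditions."""
        transfer = (payload_bytes * 8) / (bandwidth_mbps * 1_000_000)
        return round_trips * (rtt_ms / 1000.0) + transfer

    # A 500 KB page over a 2 Mbps WAN link with 150 ms RTT and 3 round trips:
    # estimated_fetch_seconds(500_000, 2, 150, 3)   -> ~2.45 s
    # The same page on a 100 Mbps / 5 ms LAN:
    # estimated_fetch_seconds(500_000, 100, 5, 3)   -> ~0.06 s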
Despite the challenges mentioned above, load and performance (L/P) test results should answer key questions regarding the quality of the system under test (SUT), such as: capacity – how many users can the SUT support; performance – how fast or slow page response times are and how long key transactions take; and reliability – does the SUT stay up 99% of the time or meet the required service level agreement (SLA). Therefore, L/P tests to be executed include baseline, smoke, end-to-end, critical-path, stress, stability, isolation, diagnostic, and module tests. Workloads that can be applied to the SUT include queuing, steady state, increasing, all-day, and dynamic workloads. Load and performance testing [L/P] (i.e., the process of testing an application to determine if it can perform properly with multiple, possibly thousands of, concurrent users) is not a matter of simply reporting test results; it should facilitate tuning of the system such that errors are remediated, performance issues
are resolved, and overall performance, load, and end-user experience are improved. In order to meet requirements or the SLA, other outcomes should include identifying the user load points at which hardware errors occur, hardware fails to scale, software systems start to error or fail to scale, and/or performance fails. Additionally, load and performance testing must facilitate identification of the point
at which hardware components (e.g. computer servers, CPU, memory, network bandwidth, etc.) need
to be added to increase system performance or load ability. A very important component that must be
identified and marketed to the client is the extent to which the increased asset increases capabilities of
the system (e.g. transaction per second improvement, user load, concurrent sessions, etc.).
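In practice, answering the capacity, performance and reliability questions above comes down to summarizing the raw measurements against the agreed thresholds. A minimal sketch follows; the field names and SLA values are illustrative assumptions, not part of any particular tool.

    # Minimal sketch of turning raw test results into the answers listed above
    # (capacity, performance, reliability), using assumed thresholds.
    import statistics

    def summarize_run(response_times_s, errors, total_requests, sla_p95_s=3.0):
        times = sorted(response_times_s)
        p95 = times[int(0.95 * (len(times) - 1))]
        error_rate = errors / total_requests
        return {
            "p95_response_s": p95,
            "mean_response_s": statistics.mean(times),
            "error_rate": error_rate,
            "meets_sla": p95 <= sla_p95_s and error_rate < 0.01,
        }

    # summarize_run([0.8, 1.2, 2.9, 3.4], errors=1, total_requests=400)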
In the beginning of the load and performance testing era, testing was conducted in a ‘lab’ test
environment, later it expanded to include a ‘staging’ environment, and recently approximately 30% of
testing is performed in a production environment (SOASTA, 2011a). Testing while in production (i.e.,
applying load against the production deployment) currently is not a best practice. Nevertheless, testing
in production may be conducted during a maintenance period, by performance testing while the site is
live, during the development of new infrastructure prior to a release (GA), or during a ‘slice’ of
production time. However, there are risks from production testing that must be considered prior to engaging in this testing choice, such as unauthorized access to confidential information, the risk of test data commingling with production data, and the possible impact on live customers during production testing, such as unacceptable slowing of the system or the production system failing while under test
(SOASTA, 2011b).
BEST PRACTICES MODEL
Figure: ROA/ROI-Based Load Performance Testing Best Practices Model. The flow is: Load Performance Test Request -> Interview Client (test goals, test requirements, test schedule) -> Clarify (test goals, test requirements, test schedule, other) -> Create Test Plan: Common Processes & Procedures -> decision: Load, Performance, or Hybrid -> Load Test Plan / Performance Test Plan / Hybrid Test Plan (processes, procedures) -> Execute Test Plan -> Create Load / Performance / Hybrid Test Results Report.
The above model is a tool-independent, holistic best practices model for load and performance testing,
which reduces tasks and effort to only those that are necessary to provide exceptional client ROA and
ROI. Process boxes (second, third, and fourth from the top) are common for load, performance, and a
hybrid test effort. Process boxes on the far left side are Load only, process boxes on the far right are for
Performance only tests, and shaded process boxes are Hybrid (both load and performance tests).
When a request for "load and performance" testing is received, the load performance test request process is initialized. At this point a senior consultant should drive the following processes: Interview Client; Clarify Test Goals; and Create Test Plan: Common Processes & Procedures. The second process is to interview the client to ascertain the critical success factors (CSFs) of the proposed engagement. These CSFs should include, but not be limited to: testing goals, test requirements, schedule, and system under test architecture, topology, and implementation technology.
The test goals and test schedule will drive the test requirements. The test goals should be reduced to one, and no more than three, goals, which are usually expressed as questions. While determining test goals, the desired test schedule should also be discovered. Sometimes the schedule will be the constraining factor; other times the goals will drive the project.
After the Interview Client process, the senior consultant reviews all data gathered thus far, then seeks
clarification on ambiguous areas during the Clarify process. It is important in the Clarify process to gain
understanding with the client regarding test goals, requirements, and schedule. These will determine
many aspects of the project. This is where the test requirements are finalized, preferably in structured
English.
In the Create Test Plan: Common process, the common tasks from load, performance, and hybrid tests
are addressed. These include such tasks as:
- Documentation of test goals, requirements, and schedule
- Identify contacts for the following:
o Network administrators
o Server administrators
o Database Management Systems (DBMS) administrators
o Test environment operations lead
The test plan should be based on test requirements and tell all stakeholders “what” is going to be
tested, not how, and what information the test results report will contain, not specific metrics, unless
specifically required by the client. The common test plan should contain the following:
- Test goals
- Test requirements
- Test schedule
- Test team members
- Define test scenario(s)
- Define final test report contents
Next is the Decision process for Load, or Hybrid, or Performance specific test plan processes. The
Load or Performance or Hybrid decision process is where information gained thus far will be used to
55
ROA / ROI-Based Load and Performance Testing Best Practices: Increasing Customer Satisfaction and Positive Word-of-Mouth Advertising
determine what type of tests are going to be executed and analyzed. For example, if one of the goal questions is "will the system support 1,000 users, 800 browsing and 200 purchasing from 1-3 products?", this would indicate a "Load" test, with 1,000 users and an increasing workload. If, however, the goal question is "will the system perform a search transaction in under three (3) seconds?", then we have a performance test. If the goal question is something like "will the system perform a search transaction in less than three (3) seconds with 1,000 browsing users?", then the test is a hybrid (load and performance) test, as illustrated in the sketch below.
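A minimal sketch of this decision logic, assuming the goal question has been reduced to the presence or absence of a user-load target and a response-time target (the dictionary keys are hypothetical names):

    # Tiny illustration of the load / performance / hybrid decision described above.
    def classify_test(goal):
        has_load = goal.get("target_users") is not None
        has_perf = goal.get("max_response_s") is not None
        if has_load and has_perf:
            return "hybrid"
        if has_load:
            return "load"
        if has_perf:
            return "performance"
        return "clarify with client"

    # classify_test({"target_users": 1000})                        -> "load"
    # classify_test({"max_response_s": 3})                         -> "performance"
    # classify_test({"target_users": 1000, "max_response_s": 3})   -> "hybrid"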
The Load or Hybrid or Performance Test Plan Process Procedures process should include the following
tasks:
- Define thresholds and boundaries
- Define user types
- Define user profiles
- Define user groups (user groups are a combination of user type and user profile (e.g. connection and network properties))
- Define transactions
  o For each transaction, create a site map of the system under test, documenting the navigation paths the users will take
  o Define actions to be taken by each user type
- Define profile settings
- Define the workload(s)
  o Workload duration, warm-up and/or close-down times (where required)
- Define the data to be randomized
- Document actions for each transaction and user type (e.g. browser, buyer, etc.) (see the workload sketch after this list)
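The plan elements above can be captured in a simple declarative structure. The sketch below uses hypothetical user groups, profiles and transaction names, chosen to mirror the 1,000-user example discussed earlier; none of them come from a real test plan.

    # Illustrative encoding of user groups, transaction mix and workload shape.
    WORKLOAD = {
        "duration_min": 60,
        "warm_up_min": 10,
        "user_groups": [
            # user type, user profile (connection/network), share of total load
            {"type": "browser", "profile": "dsl_3mbps_80ms", "share": 0.8},
            {"type": "buyer",   "profile": "lan_100mbps_5ms", "share": 0.2},
        ],
        "transactions": {
            "browser": ["home", "search", "view_product"],
            "buyer":   ["home", "search", "add_to_cart", "checkout"],
        },
    }

    def users_per_group(total_users, workload=WORKLOAD):
        """Split a target concurrent-user count across the defined user groups."""
        return {g["type"]: round(total_users * g["share"]) for g in workload["user_groups"]}

    # users_per_group(1000) -> {'browser': 800, 'buyer': 200}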
The next process is Execute Test Plan, which will contain the following tasks:
- Create an initial recording of each test case according to the test plan
- Per the test plan, modify and debug the initial recordings to work in a variety of situations
- Create and debug unique user types and their specific action sets
- Run a minimum load test (1 user/baseline) as a benchmark
- Run the tests based on test plan criteria
- Verify test results are valid (e.g. no errors caused by the test itself) (a baseline-versus-load sketch follows this list)
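For illustration only, the baseline-then-load idea can be sketched with nothing more than the Python standard library; the URL and user counts are placeholders, and a commercial or open source load tool would replace this in a real engagement.

    # Minimal, tool-independent sketch of a 1-user baseline versus an N-user run.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    def timed_get(url):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
            ok = resp.status == 200
        return time.perf_counter() - start, ok

    def run_load(url, concurrent_users):
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            results = list(pool.map(timed_get, [url] * concurrent_users))
        times = [t for t, _ in results]
        errors = sum(1 for _, ok in results if not ok)
        return max(times), sum(times) / len(times), errors

    # baseline = run_load("http://test-env.example.com/", 1)    # benchmark
    # loaded   = run_load("http://test-env.example.com/", 100)  # compare to baseline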
The final process is Create Test Results Report for: Load, Hybrid, or Performance Test. Within these processes one should:
- Form hypotheses
- Analyze the results of the test
- Draw conclusions based on test results
- Create a test results report which clearly identifies the test report contents specified in the test plan. The errors and issues should be reported in a fashion that non-IT stakeholders can understand. Additionally, for each error or issue, remediation steps should be identified.
- Lastly, have a trusted colleague review the test results report before presentation to the client and/or stakeholders.
MARKETING BEST PRACTICES OPERATIONALIZATION
A well-structured, well-trained sales force is expected to create a competitive advantage. In the
complex information technology (IT) industry, a technical sales force (i.e., experienced in the complexity
of the industry, products, and customer type) is essential. Every type of sales position requires the
proper procedural knowledge; however, high-tech sales require the ability to engage in complex mental
processes (i.e., sophisticated and complex information processing gained through experience,
customer interactions, and industry specific education) in order to deal with customer objections,
objectives, and constraints for high-tech products (Leonidou, Katsikeas, Palihawadana, Spyropoulou,
2007; Darmon, 1998). A technical sales force (i.e., mobile informers) working closely with other sales
force members offers the company a competitive advantage by having subject matter experts in the
field to overcome customer objections and to clarify how the company's product can overcome constraints and meet the customer's objectives. The technical sales force also provides valuable
feedback to other sales force members combining superior market intelligence and sophisticated
industry knowledge to gain a competitive advantage through the development of targeted sales
proposals and effective demonstrations (Darmon, 1998).
Perhaps one of the biggest challenges of marketing best practices in which an IT technical sales force
is extremely valuable is when making the case to potential clients for a test environment that accurately
reproduces the production (deployment) environment. However, some clients find it difficult to justify
such a significant investment for such a short period of use because they cannot envision a significant return on the investment. Best practices dictate demonstrating to the potential client return
on assets, not return on investments when marketing the test environment. The concept of return on
assets is easier to envision because it identifies how efficiently an asset generates net income.
Nevertheless, in the case where clients cannot afford or justify an adequate test environment, it is strongly recommended that the technical sales force demonstrate the value and constraints of testing in the production environment and a methodology for working, with a sufficient degree of success, within those constraints.
LIMITATIONS OF THE STUDY
This study focuses on tool-independent load and performance testing best practices; thus, by necessity, much valuable information, such as details of tool-specific implementation of these best practices, is not included. Additionally, it would be interesting to compare these best practices using commercial tools (e.g. SilkPerformer, Rational Performance Tester) against open source tools (of which there are many). Another limitation is that only the processes and tasks necessary to satisfy minimal best practices were presented. Again, it would be an interesting study to research the delta in customer satisfaction between these best practices and a more extensive list of best practices with a commercial tool as well as an open source tool.
FUTURE RESEARCH
To empirically test these concepts, interviews should be conducted across industries, companies, managers, and software engineers. Findings from these interviews could then be used to
create an appropriate survey instrument to capture a larger sample. An empirical investigation is
expected to add value to strategic management decision-making by revealing the extent of the benefits
of life cycle performance and load testing and the role of the technical sales force.
For many companies, revenue generation through e-commerce and other multi-user software systems was already beyond mission critical in 2008 (Krishnamurthy, Shams, and Far, 2009). Today, the revenue generated from e-commerce is a primary financial lifeline for many business entities, and multi-user software systems are the norm under mission-critical conditions compounded by cloud computing,
mobile, and virtualization. Increasingly, enterprises are relying on e-commerce systems to support CSF
business tasks, and poor performance of software systems can negatively impact enterprise
profitability. Business leaders also recognize the impact mobility will have on their bottom line when it
comes to customer-facing applications such as those that enable m-commerce and self-service. The m-commerce market is predicted to reach $119 billion by 2015 (Anonymous, 2010), and over one billion
people worldwide are expected to engage in mobile finance by that time (Anonymous, 2010; PRWeb,
2010). The model brought forth in this article is yet untested in the m-commerce arena and should be
tested across m-commerce before considering it a best practice for this industry segment. The role of
the technical sales force is important in overcoming objections and identifying system points of failure that could undermine a company's IT and financial objectives, thus providing a level of understanding of the importance of the holistic approach to software testing to assure an appropriate ROA/ROI.
WORK CITED
Aberdeen Group (June, 2008 Benchmark Report) “The Performance of Web Applications: Customers
are Won or Lost in One Second," URL: http://www.aberdeen.com/aberdeen-library/5136/RAperformance-web-application.aspx. Date Accessed: December 26, 2010.
Anonymous (2010) "Shopping by Mobile Will Grow to $119 Billion in 2015," (February 16). ABI Research. URL: http://www.abiresearch.com/press/337-Shopping+by+Mobile+Will+Grow+to+$119+Billion+in+2015. Date Accessed: November 16, 2011.
Darmon, R. (1998) “A Conceptual Scheme and Procedure for Classifying Sales Positions,” The Journal
of Personal Selling & Sales Management , 18(3): 31-46.
Gallagher, T., Jeffries, B., & Landauer, L. (2006). Hunting Security Bugs. Washington: Microsoft Press.
Kotler, P. & Armstrong, G. (2011). Marketing an Introduction. 10th Ed. Massachusetts: Prentice Hall.
Krishnamurthy, D., M. Shams, and B. Far (2009) “A Model-Based Performance Testing Toolset for Web
Applications,” International Association of Engineers (January 20).
Leonidou, L. C., Katsikeas, C. S., Palihawadana, D., and Spyropoulou, S. (2007) "An Analytical Review of the Factors Stimulating Smaller Firms to Export," International Marketing Review, 24(6): 735-770.
Mayer, R. N. and Scammon, D. L. (1992) Caution: Weak Product Warnings May Be Hazardous to
Corporate Health. Journal of Business Research (June)24: 347-359.
Pressman, R. (2005) Software Engineering: A Practitioner's Approach. 5th Ed., McGraw-Hill, New York, NY.
Pressman, R. (2011) Software Engineering: A Practitioner's Approach. 7th Ed., McGraw-Hill, New York, NY.
PRWeb (2010) “Global Mobile Banking Customer Base to Reach 1.1 Billion by 2015, According to New
Report by Global Industry Analysts, Inc.,” Global Industry Analysts (February 16): URL:
http://www.prweb.com/releases/2010/02/prweb3553494.htm. Date Accessed: November 16, 2011.
Sams, P. and D. Sams (2010) “Software Security Assurance A Matter of Global Significance Within the
Product Life Cycle”, Global Digital Business Association Conference: (Oct. 12), Washington DC.
Shunra (2011a) "The Mandate for Application Performance Engineering," Shunra Software Website, URL: http://www.shunra.com/resources/white-papers/mandate-application-performance-engineering-whitepaper. Date Accessed: November 15, 2011.
Shunra (2011b) “Understanding the Impact of Running WAN Emulation with Load Testing,” Shunra
Software, URL: http://ape.shunra.com/WP-understanding-impact-of-WAN-emulation-with-loadtesting.html. Date Accessed: November 17, 2011.
SOASTA (2011a) "Cloud Testing Production Applications," SOASTA Inc., URL: http://www.soasta.com/cloud-testing-production-applications-ppc/. Date accessed: December 12, 2011.
SOASTA (2011b) "Cloud Test Strategy and Approach," SOASTA Inc., URL: http://www.cloudconnectevent.com/downloads/SOASTA_TestingInProduction_WhitePaper__v1.0.pdf. Date Accessed: December 11, 2011.
I. Elmezni and J. Gharbi
JISTP - Volume 5, Issue 12 (2012), pp. 60-71
Full Article Available Online at: Intellectbase and EBSCOhost │ JISTP is indexed with Cabell’s, JournalSeek, etc.
JOURNAL OF INFORMATION SYSTEMS TECHNOLOGY & PLANNING
Journal Homepage: www.intellectbase.org/journals.php │ ©2012 Published by Intellectbase International Consortium, USA
MEDIATION OF EMOTION BETWEEN USERS’ COGNITIVE ABSORPTION
AND SATISFACTION WITH A COMMERCIAL WEBSITE
Imen Elmezni1 and Jameleddine Gharbi2
1Tunis University, Tunisia and 2University of Jendouba, Tunisia
ABSTRACT
The purpose of the current research is to investigate to what extent website continuance
intention is determined by website satisfaction. Moreover, we desire to test the mediating
impact of emotions between cognitive absorption and website satisfaction. Data collection is
carried out on a sample of 300 respondents and is conducted by an experiment in laboratory, followed
by a survey administration. Results suggest that positive emotions mediate the relation between cognitive absorption and satisfaction, and that website satisfaction positively influences users' continuance intention. Theoretical and practical implications are considered.
Keywords: Cognitive Absorption, Emotion, Website Satisfaction, Website Continuance Intention,
Human-Machine Interaction.
I. INTRODUCTION
In recent years, information and communication technology (ICT) has played a critical role in the digital
economy and changed the way we do business. Commercial websites have become a significant tool
for companies to increase profitability. Although initial website acceptance and use is a necessary step toward Information Systems (IS) success, the long-term viability of an IS depends highly on its continued use rather than its first-time use (Bhattacherjee, 2001), due to the fact that retaining actual clients is five to seven times less expensive than acquiring new ones (Parthasarathy and Bhattacherjee, 1998; Khalifa and Liu, 2005). Users' intention to continue using a website is hence considered a major determinant of its success because initial use of the website is only the first step toward realizing that success. Clearly,
understanding the factors influencing the customer's intention to continue using the website is a critical
issue for researchers and practitioners. As a consequence, many theories and models have been
developed in order to understand the factors underlying the formation of continuance intention. Hence, according to the information system continuance model (Bhattacherjee, 2001), website continuance intention is determined by website satisfaction. Moreover, according to Agarwal and Karahanna (2000), cognitive absorption and other variables (perceived usefulness, ease of use, subjective norms) impact intention. They suggest that the state of cognitive absorption is an important concept explaining users' behavior in computer-mediated environments. They explain the importance of this concept by the fact that it is an important antecedent of two motivational factors of technology usage: perceived usefulness and ease of use. In disconfirmation theory, satisfaction is also an antecedent of intention.
In this research, we are willing not only to contribute to the literature on technology acceptance by integrating in the same model variables drawn from TAM and expectation-confirmation theory, but also to be the first to test empirically the mediating impact of emotion that may exist between users' cognitive absorption and their website satisfaction.
The remainder of this paper is structured as follows: Section 2 describes briefly the constructs
of this study. Section 3 presents the proposed model and research hypotheses. Section 4 provides the
research methodology. Section 5 presents and discusses the results of our empirical analysis. The
paper ends with a conclusion and some directions for future research.
II. THEORETICAL BACKGROUND
II.1. Cognitive Absorption
Agarwal and Karahanna (2000) define cognitive absorption as "a state of deep involvement and enjoyment with software". It represents a subjective experience of interaction between the individual and the computer in which the former loses the notion of time and is so intensely involved in an activity that nothing else seems to matter; the experience itself is so enjoyable that people will do it even at great cost, for the sheer sake of doing it. This state is characterized by loss of self-consciousness, by responsiveness to clear goals and by a deep sense of enjoyment. It is derived from three theoretical streams of research (absorption as a personality trait, the flow state and the notion of cognitive involvement) and reflects the sensations resulting from immersion in the virtual environment.
Agarwal and Karahanna (2000) identify five dimensions of cognitive absorption: temporal dissociation,
focused immersion, heightened enjoyment, control and curiosity. According to Shang, Chen and Shen
(2005), the dimensions of cognitive absorption represent different forms of intrinsic motivation.
- Temporal dissociation: the individual's incapacity to perceive the passage of time during the interaction. It is qualified by Novak et al (2000) as "time distortion".
- Heightened enjoyment: enjoyment refers to the extent to which the activity of using a computer system is perceived to be personally enjoyable in its own right, aside from the instrumental value of the technology (Ryan and Deci, 2000). The intrinsic pleasure of the activity is the individual's own motivation.
- Control: this dimension refers to the user's perception of being in charge of the interaction with the commercial website.
- Curiosity: the extent to which the experience excites the individual's curiosity.
II.2. Emotion
According to Mehrabian (1970, 1977), emotion is a reaction of an individual toward an environment, which refers to the tangible and intangible stimuli that influence individual perception and reaction (Bitner, 1992). Gouteron (1995) defines this concept as a momentous, multiform and intense affective response toward a factor that is external to and disturbs the individual. Emotion is part of the affective system and, as demonstrated by Bagozzi et al. (1999), affect is a "generic term designating emotions, moods and feelings". It is a sudden reaction, of strong intensity, having a relatively short duration and related to a given object. It is also accompanied by expressive and physiological manifestations (e.g., acceleration of the heartbeat in the case of enjoyment or fear).
As for its dimensionality, the literature review demonstrates that scholars have not used the same dimensions of emotion. In fact, two approaches to emotion are used. The first is the dimensional approach, where emotion is operationalized by two types of dimensions: the PAD (pleasure-arousal-dominance), and positive affect and negative affect. The second is the discrete approach, where emotion is described by specific categories of emotions. In the present study, we adopted the conceptualization of emotion presented by Richins (1997), which posits the existence of 16 categories of emotions (8 positive emotions and 8 negative emotions).
II.3. Satisfaction
The literature review on the concept of satisfaction reveals a lack of consensus regarding the
conceptual definition of this concept. Giese and Cote (2000) argue that all the definitions share common elements: satisfaction is a response (emotional and/or cognitive in nature) toward a particular object (product, site) which is produced after a certain time (after purchase, after consumption, etc.). Based on Vanhamme (2002), we define satisfaction as the user's cognitive judgement
that occurs after the website visit. Satisfaction is an important area of research in the marketing
literature as well as in the information system field. The former focused on the satisfaction formation
process (Oliver, 1993; 1997), whereas the latter concentrated on the relation between user satisfaction
and system characteristics (Bailey and Pearson, 1983; DeLone and McLean, 2002, 2003; Khalifa and
Liu, 2003). Satisfaction is treated by many scholars as a uni-dimensional concept (Oliver, 1980) and by others as multidimensional (McHaney et al, 2002). In this paper, website satisfaction is a latent variable
with 5 dimensions: site content, accuracy, format, ease of use and timeliness (Abdinnour-Helm et al,
2005; Zviran et al, 2006).
II.4. Intention
Fishbein and Ajzen (1975) define intention as a conative component between attitude and behavior. It
represents the desire to conduct the behavior. Website users’ continuance intention is an extension of
their initial usage decision. According to innovation diffusion theory (Rogers, 1995), adopters reevaluate
their earlier acceptance decision and decide whether to continue or discontinue using an innovation.
In fact, many theories have attempted to explain behavioral intention: the TRA, theory of reasoned
action (Fishbein, 1980); the TPB, theory of planned behavior (Ajzen, 1991); the TAM, technology
acceptance model (Davis, 1989) and the UTAUT, unified theory of acceptance and usage of technology
(Venkatesh et al, 2003). The TRA argues that behavior is preceded by intentions which are determined
by the individual’s attitude toward the behavior and subjective norms (i.e. social influence). The TPB
extends the TRA by introducing perceived control as an additional determinant of both intentions and
behavior. It is defined as the individual perception of his/her ability to perform the behavior (Limayem,
Khalifa and Frini, 2000). The TAM developed by Davis (1989) predicts user acceptance of a technology
based on the influence of two factors: perceived usefulness and ease of use. TAM posits that user
perceptions of usefulness and ease of use determine attitudes toward using the technology. The
UTAUT posits that use behavior is determined by behavioral intention which is, in turn, determined by
performance expectancy, effort expectancy, social influence and facilitating conditions.
III. CONCEPTUAL MODEL
The proposed model (Fig. 1) suggests that emotions mediate the relationship between cognitive absorption and satisfaction, which in turn affects intentions within the context of website usage:
Figure 1: Conceptual model (Cognitive Absorption -> Emotions -> Website Satisfaction -> Continuance usage Intention)
In fact, satisfaction research has evolved from the exclusive consideration of cognitive processes, stemming from expectancy-disconfirmation theory (Oliver, 1980), to the acknowledgment of the impact of affective states (Yu and Dean, 2001; Liljander and Strandvik, 1997). This is due to the realization that cognitive models are not adequate in explaining all phenomena of satisfaction. Liljander and Strandvik (1997) demonstrate that a purely cognitive approach seems inadequate to evaluate satisfaction models, and that it is crucial to include emotional variables in order to better understand satisfaction determinants. The tendency has thus been to reconcile the cognitive approach of the disconfirmation paradigm with the emotional paradigm.
The inclusion of emotion in satisfaction models originated in the marketing literature in the 1980s (Hirschman and Holbrook, 1982; Holbrook and Hirschman, 1982; Westbrook, 1987; Westbrook and Oliver, 1991; Oliver, 1993; Mano and Oliver, 1993). These authors propose that affect influences satisfaction judgements independently from any cognitive evaluation. In general, the link between emotions and satisfaction is explained by the "affect as information" view (Schwarz, 1990), according to which an individual uses his affective reactions (emotions) as an information source when he evaluates his satisfaction toward an object. This link has also been supported by Hunt (1977), who posits that in any experience, emotion is an affect (whether the experience is enjoyable or not) but satisfaction is the evaluation of the experience (was it as good as expected?).
Agarwal and Karahanna (2000) indicate that cognitive absorption is a central construct in the explanation of human behavior in computer-mediated environments. They explain the importance of this construct by the fact that it is an important antecedent of two motivational factors of technology usage: perceived usefulness and ease of use (TAM model). In the present study, we propose to reconcile the cognitive paradigm of user satisfaction (TAM model) and the emotional paradigm by exploring the impact of cognitive absorption on emotion, which is a human reaction. The main purpose is to better understand human behavior in computer-mediated environments and especially the cognitive and emotional determinants of user satisfaction.
The proposed hypotheses are the following:
H1. Emotion mediates the relationship between cognitive absorption and website satisfaction.
H1.a: Positive emotions mediate the relationship between cognitive absorption and website satisfaction.
H1.b: Negative emotions mediate the relationship between cognitive absorption and website satisfaction.
The impact of user satisfaction on continuance intention is supported by the post-acceptance model of
IS continuance proposed by Bhattacherjee (2001) which posits that user satisfaction is a significant
predictor of continued usage of a technology. He theorizes that users’ website continuance decision is
similar to consumers’ repurchase decision because both (1) follow an initial (acceptance or purchase)
decision, (2) are influenced by the initial use (of technology or product), and (3) can potentially lead to
ex post reversal of the initial decision. Many IS researchers have provided empirical support for the
relationship between user satisfaction and continuance intention. Cheung and Limayem (2005) and
Chiu et al. (2005) found a positive impact of website satisfaction on continuance intention within the
context of e-learning. Therefore, we propose:
H2. Users' website satisfaction is positively associated with website continuance intention.
IV. METHODOLOGY
A survey was conducted in this study to test the hypotheses discussed in the previous sections. The data collection method used and the measures of the constructs are presented in the following sections.
IV.1. Data Collection Method
Data collection was conducted via an experiment followed by a survey administration. A pretest of the questionnaire (including all constructs) was carried out to ensure content validity and to improve the wording of several items. The sample consisted of 300 university students (66.7% male and 33.3% female). On average, the respondents were 22 years old and had 3 years of experience in using commercial websites. This study recruited student subjects because they are expected to become the primary customers in online shopping in the near future (Han and Ocker, 2002; Metzger et al, 2003).
IV.2. Measures
The four measures used in this study were mainly adapted from relevant prior studies. Absorption and intention items were measured using a seven-point Likert scale anchored from 'Strongly disagree' to 'Strongly agree'; those related to website satisfaction and emotion were measured using, respectively, five-point and four-point Likert scales. Table 1 summarizes the different measurement scales used in this study.
Table 1: Summary of measurement scales
Concept                     | Dimensions           | Number of items | Alpha | Source
Cognitive Absorption        | Cognitive Absorption | 20              | 0.94  | Agarwal and Karahanna (2000)
Emotion                     | Positive Emotion     | 8               | -     | Richins (1997)
Emotion                     | Negative Emotion     | 8               | -     | Richins (1997)
Website Satisfaction        | Satisfaction         | 12              | 0.94  | Abdinnour-Helm et al (2005)
Continuance usage Intention | Intention            | 3               | 0.94  | Chiu et al (2005)
V. DATA ANALYSIS AND DISCUSSION
V.1. Measurement Model
The reliability and validity of the measurement instrument were evaluated using reliability and convergent validity criteria. Reliability of the survey instrument was established by calculating Cronbach's alpha to measure internal consistency. As shown in Table 2, most values were above
the recommended level of 0.7. Each of the analyses indicates a KMO > 0.5 and a significant Bartlett's test (Table 2). We conducted a confirmatory factor analysis (CFA) to test the convergent validity of the cognitive absorption and website satisfaction constructs. As for the variables Absorption, Satisfaction and Intention, regression analysis demonstrates the unidimensionality of the constructs.
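For readers unfamiliar with the reliability measure reported here, the following sketch shows how Cronbach's alpha is computed from a respondents-by-items matrix; the data in the usage comment are synthetic, not the study's data.

    # Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/variance(total score)).
    import numpy as np

    def cronbach_alpha(items):
        """`items` is an (n_respondents x n_items) array of scale scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # e.g. cronbach_alpha(np.random.randint(1, 8, size=(300, 20)))  # near 0 for random data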
For the variable Emotion, we used the "Consumption Emotion Set" of Richins (1997). This scale comprises 16 dimensions and 43 items: 8 dimensions for negative emotions and 8 dimensions for positive emotions. First, factor analysis was used for each of the 16 dimensions of emotional state and then for all dimensions (the full scale). This second factor analysis demonstrates that the scale is bi-dimensional. The first factor includes emotion categories related to positive emotions (joy, optimism, contentment, excitation, surprise, love and romantic love) and the second factor includes those related to negative emotions (sadness, anger, anxiety, shame, fear, discontentment, loneliness and jealousy).
Table 2: Constructs factor analysis and reliability
Concept / Dimension         | Nb of items | KMO   | Bartlett's test          | Alpha
Absorption (Absorption)     | 20          | 0.805 | χ² = 518.312; p = 0.000  | 0.91
Emotion (full scale)        | 43          | 0.895 | χ² = 7320.832; p = 0.000 |
Negative Emotion (overall alpha = 0.898):
  Anger                     | 3           | 0.711 | χ² = 306.807; p = 0.000  | 0.813
  Discontentment            | 2           | 0.5   | χ² = 159.795; p = 0.000  | 0.781
  Anxiety                   | 3           | 0.660 | χ² = 152.750; p = 0.000  | 0.688
  Sadness                   | 3           | 0.655 | χ² = 231.972; p = 0.000  | 0.749
  Fear                      | 3           | 0.639 | χ² = 175.492; p = 0.000  | 0.704
  Shame                     | 3           | 0.703 | χ² = 277.230; p = 0.000  | 0.797
  Jealousy                  | 2           | 0.5   | χ² = 75.414; p = 0.000   | 0.642
  Loneliness                | 2           | 0.5   | χ² = 58.845; p = 0.000   | 0.594
Positive Emotion (overall alpha = 0.936):
  Joy                       | 3           | 0.754 | χ² = 739.739; p = 0.000  | 0.929
  Contentment               | 2           | 0.5   | χ² = 220.908; p = 0.000  | 0.839
  Tranquillity              | 2           | 0.5   | χ² = 145.741; p = 0.000  | 0.766
  Optimism                  | 3           | 0.724 | χ² = 480.815; p = 0.000  | 0.875
  Love                      | 3           | 0.717 | χ² = 528.426; p = 0.000  | 0.885
  Romantic love             | 3           | 0.615 | χ² = 111.225; p = 0.000  | 0.620
  Excitation                | 3           | 0.718 | χ² = 325.725; p = 0.000  | 0.822
  Surprise                  | 3           | 0.713 | χ² = 352.422; p = 0.000  | 0.833
Satisfaction (Satisfaction) | 12          | 0.843 | χ² = 765.985; p = 0.000  | 0.91
Intention (Intention)       | 3           | 0.769 | χ² = 814.370; p = 0.000  | 0.94
V.2. Hypotheses Testing Results
V.2.A. Mediating Impact of Emotion Between Cognitive Absorption and Website
Satisfaction
Based on Baron and Kenny (1986), a mediating variable M is a variable that explains the process by which a variable X influences a variable Y; X is the independent variable, Y the dependent variable and M the mediating variable. They also specify that:
 If the influence of X on Y disappears totally once M is introduced, it is a case of complete mediation. Baron and Kenny (1986) propose four conditions:
   If Y = a1 + b1 X + error1, b1 is significant;
   If M = a2 + b2 X + error2, b2 is significant;
   If Y = a3 + b3 X + b4 M + error3, b4 is significant;
   b3 in the third equation is not significant.
 If the influence of X on Y is merely reduced but does not disappear, it is a case of partial mediation: only part of the influence of X on Y is exerted via the mediating variable, the other part being exerted directly on Y.
 If all the conditions except the last are verified, h must be calculated as follows (a Sobel-type test): h = (b2 × b4) / √(b4² × s2² + b2² × s4² + s2² × s4²), where s2 and s4 are the standard errors of b2 and b4; partial mediation is supported if |h| > 1.96.
In the present study, regression analysis is used to test the mediating impact of emotion (M) between users' cognitive absorption (X) and website satisfaction (Y).
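As an illustration of this procedure, the sketch below runs the three Baron and Kenny regressions on standardized variables and computes the h statistic used later in the paper. It is a hedged example: statsmodels is assumed (the authors do not name their software) and the column names are hypothetical, not the authors' actual data set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def baron_kenny(data: pd.DataFrame, x: str, m: str, y: str):
    """Run the three Baron and Kenny (1986) regressions on standardized variables
    and return the fitted models plus the h statistic described above."""
    z = (data[[x, m, y]] - data[[x, m, y]].mean()) / data[[x, m, y]].std(ddof=1)
    step1 = sm.OLS(z[y], sm.add_constant(z[[x]])).fit()      # Y = a1 + b1*X
    step2 = sm.OLS(z[m], sm.add_constant(z[[x]])).fit()      # M = a2 + b2*X
    step3 = sm.OLS(z[y], sm.add_constant(z[[x, m]])).fit()   # Y = a3 + b3*X + b4*M
    b2, s2 = step2.params[x], step2.bse[x]
    b4, s4 = step3.params[m], step3.bse[m]
    h = (b2 * b4) / np.sqrt(b4**2 * s2**2 + b2**2 * s4**2 + s2**2 * s4**2)
    return step1, step2, step3, h

# Hypothetical usage (the original data set is not public):
# _, _, _, h = baron_kenny(survey, x="absorption", m="positive_emotion", y="satisfaction")
```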
V.2.A.1. Mediation of positive emotion between cognitive absorption and website satisfaction:
Condition 1:
This condition is supported: cognitive absorption has a positive impact on satisfaction. The regression model indicates that:
Satisfaction = 0.673 Absorption (t = 15.662; p = 0.000).
Condition 2:
Regression analysis reveals that cognitive absorption explains 28.7% of the variance of positive emotions (R² = 0.287; adjusted R² = 0.285), and the resulting model is significant (F = 119.136; p = 0.000):
Positive Emotions = 0.536 Absorption (t = 10.915; p = 0.000)
Condition 3:
As shown in Table 3, when positive emotions are introduced as a mediating variable, the impact of cognitive absorption on satisfaction is reduced (β = 0.589 instead of 0.673) but remains significant (p = 0.000). The results also indicate a positive influence of positive emotions on satisfaction. Conditions 1, 2 and 3 are therefore supported, but condition 4 is not (p = 0.002), so h must be calculated:
b2 = 0.536; b4 = 0.157; s2 = 0.049; s4 = 0.050
h = (0.536 × 0.157) / √(0.157² × 0.049² + 0.536² × 0.050² + 0.049² × 0.050²) = 3 > 1.96
Since h = 3 > 1.96, we conclude that positive emotions partially mediate the cognitive absorption–website satisfaction relationship.
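The value can be checked arithmetically from the coefficients and standard errors reported just above (a quick verification sketch, not part of the original analysis):

```python
import math

b2, b4, s2, s4 = 0.536, 0.157, 0.049, 0.050   # values reported above
h = (b2 * b4) / math.sqrt(b4**2 * s2**2 + b2**2 * s4**2 + s2**2 * s4**2)
print(round(h, 2))   # 3.01, above the 1.96 threshold for partial mediation
```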
Table 3: Impact of positive emotions and absorption on satisfaction

Y = a3 + b3X + b4M (N = 300, R² = 0.470; adjusted R² = 0.467)

Variable               B        Std. error   β (std)   t        p
Constant               -0.004   0.042        -         -0.106   0.916
Cognitive absorption   0.589    0.050        0.589     11.727   0.000
Positive emotions      0.157    0.050        0.157     3.129    0.002
V.2.A.2. Mediation of negative emotion between cognitive absorption and website satisfaction:
Condition 1:
Website Satisfaction = 0.673 Absorption (t = 15.662; p = 0.000).
Condition 2:
Regression analysis reveals that cognitive absorption explains 7.4% of the variance of negative emotions (R² = 0.074; adjusted R² = 0.070), and the resulting model is significant (F = 23.485; p = 0.000):
Negative Emotions = -0.271 Absorption (t = -4.846; p = 0.000)
Condition 3:
As presented in Table 4, conditions 1 and 2 are supported, but in the third regression (conditions 3 and 4) the effect of X is significant while the effect of M is not. Caceres and Vanhamme (2003) explain this type of result by the fact that Y (website satisfaction) and M (negative emotions) are two independent effects of the variable X (cognitive absorption). The variable M (negative emotions) is therefore neither a partial nor a complete mediator of the X-Y relationship (Absorption-Satisfaction). This is called a "spurious association", which is represented as follows:
[Diagram: Variable X exerts two independent effects, one on Variable M and one on Variable Y (spurious association).]

Table 4: Impact of negative emotions and absorption on satisfaction

Y = a3 + b3X + b4M (N = 300, R² = 0.454; adjusted R² = 0.451)

Variable               B        Std. error   β (std)   t        p
Constant               -0.005   0.043        -         -0.120   0.905
Cognitive absorption   0.661    0.045        0.660     14.782   0.000
Negative emotions      -0.046   0.045        -0.046    -1.019   0.309
V.2.B. Impact of Website Satisfaction on Intention
Regression analysis reveals that website satisfaction explains 37.8% of the variance of website continuance intention (R² = 0.378; adjusted R² = 0.376). The resulting model is significant (F = 181.316; p = 0.000) and is presented as follows:
Website continuance intention = 0.615 website satisfaction (t = 13.465; p = 0.000)
This model indicates that the impact of website satisfaction on website continuance intention is positive and significant. Hence, H2 is supported. The findings are consistent with prior studies in information systems (Chiu et al., 2005; Cheung and Limayem, 2005; Hsu et al., 2006; Thong et al., 2006) and provide strong support for the post-acceptance model of IS continuance (Bhattacherjee, 2001), which indicates that website satisfaction is the main determinant of continuance intention. Accordingly, the more satisfied an individual is with the site, the stronger his or her continuance intention will be.
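Because this is a simple regression on standardized variables, the standardized coefficient equals the correlation between satisfaction and intention, so the reported R² follows directly from it (a one-line check using only the figure reported above):

```python
beta = 0.615                # standardized coefficient reported above
print(round(beta ** 2, 3))  # 0.378, matching the reported R-squared
```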
VI. CONCLUSION
The objective of this study is to gain a better understanding of the factors influencing website satisfaction and continued usage. The study not only enhances our understanding of consumer behavior on the Web but also enriches the satisfaction literature by supporting the existence of cognitive and affective processes shaping users' behavior in human-computer environments. Our findings show that website continuance usage intention is determined by user satisfaction and that, during an episode of interaction with a website, positive emotions partially mediate the Absorption-Satisfaction relationship. This paper is one of the few, if not the first, empirical studies to show that cognitive absorption is an important antecedent of the emotions users express while interacting with a commercial website.
The findings of the present study have various implications for research as well as practice.
Theoretically, we have integrated into the same model constructs drawn from several theories (the TAM model, disconfirmation theory, affect theory) and disciplines (psychology, marketing and information systems) in order to better understand online human behavior. In addition, we extend the TAM model by supporting the impact of cognitive absorption on an affective variable, emotion. This result is consistent with previous research on the impact of cognitive variables on affective reactions (Liljander and Strandvik, 1997; Yu and Dean, 2001).
The study also provides several practical implications. First, website satisfaction, which is a challenge for enterprises because users face low switching costs from one site to another, is an important determinant of website continuance intention. Website designers should therefore develop attractive and stimulating websites and use multimedia in order to increase users' cognitive absorption and, as a result, their positive emotions. As stated by Rowley (1996), website navigation must be enjoyable.
Although the findings are encouraging and useful, the present study has certain limitations that call for future research. First, whether our findings can be generalized to other systems, e-services, populations or cultural contexts is unclear; further research is necessary to verify their generalizability. Another limitation is that we tested only the two broad dimensions of emotion (positive and negative emotions). Future research should test the mediating impact of each of the 16 specific emotions on satisfaction.
REFERENCES
Abdinnour-Helm S.F, Chapparo B.S et Farmer S.M (2005), “Using the end-user computing satisfaction
(EUCS) instrument to measure satisfaction with a Web site”, Decision Sciences, 36, 2, 341-364.
Agarwal R. et Karahanna E (2000), “Time flies when you’re having fun: Cognitive absorption and beliefs
about information technology usage”, MIS Quarterly, 24, 4, 665-694.
Ajzen I. (1991), “The theory of planned behavior”, Organisational Behavior and Human Decision
Processes, 50, 179-211.
Bagozzi R., Gopinath M. et Nyer P.U (1999), “The role of emotions in marketing”, Journal of the
Academy of Marketing Science, 27, 2, 184-206.
Bailey J.E et Pearson S.W (1983), “Development of a tool for measuring and analysing computer user
satisfaction”, Management Science, 29, 5, 530-545.
Baron R.M. et Kenny D.A. (1986), ‘‘The moderator-mediator variable distinction in social psychological
research: conceptual, strategic and statistical considerations’’, Journal of Personality and Social
Psychology, 51, 6, 1173-82.
Bhattacherjee A. (2001), “Understanding information system continuance: An expectation-confirmation
model”, MIS Quarterly, 25, 3, 351-370.
Bitner M.R (1992), “Servicescapes: The impact of physical surroundings on customers and employees”,
Journal of Marketing, 56, 57-71.
Caceres R.C et Vanhamme J (2003), “Les processus modérateurs et médiateurs : distinction
conceptuelle, aspects analytiques et illustrations”, Recherche et Applications en Marketing, 18, 2,
67-99.
Cheung C.M.K, Chan G.W.W et Limayem M (2005), “A critical review of online consumer behaviour:
Empirical research”, Journal of Electronic Commerce in Organisation, 3, 4, 1-19.
Cheung C.M.K et Limayem M (2005), “Drivers of university students’ continued use of advanced Internet-based learning technologies”, 18th Bled eConference eIntegration in Action, Slovenia.
Chiu C.M, Hsu M.H, Sun S.Y, Lin T.C et Sun P.C (2005), “Usability, quality, value and e-learning
continuance decisions”, Computers&Education, 45, 399-416.
Clarke K et Belk R.W (1979), “The effects of product involvement and task definition on anticipated
consumer effort”, Advances in Consumer Research, William L Wilkie, Association for Consumer
Research, 313-318.
Davis F.D (1989), “Perceived usefulness, perceived ease of use, and user acceptance of information
technology”, MIS Quarterly, 13, 3, 319-340.
DeLone W.H et McLean E.R (1992), “Information systems success: The quest for the dependent variable”, Information Systems Research, 3, 1, 60-95.
DeLone W.H et McLean E.R (2003), “The DeLone and McLean model of information system success: A
ten-year update”, Journal of Management Information Systems, 19, 4, 9-30.
Fishbein M (1980), “A theory of reasoned action: Some applications and implications”, Nebraska
Symposium on Motivation, 27, 65-116.
Fishbein M et Ajzen I (1975), “Belief, attitude, intention and behaviour: An introduction to theory and
research”, Reading, MA, Addison-Wesley.
Gharbi J-E, Ettis S et Ben Mimoun M.S (2002), “Impact de l’atmosphère perçue des sites commerciaux
sur leurs performances”, Actes de la 1ère journée Nantaise de Recherche sur le e-marketing.
Giese J.L et Cote J.A (2000), “Defining consumer satisfaction”, Academy of Marketing Science Review,
1, 1-29; http//www.amsreview.org/articles/giese01-2000.pdf.
Gouteron J (1995), “Vers une connaissance des émotions en situation d’achat, application au marché
du disque”, Revue Française de Marketing, 152, 2, 35-48.
Han H. et Ocker R.M. (2002), “Is it worthwhile to target university students?”, Decision Line,
September/October, 18-20.
Hirschman E et Holbrook M.B (1982), “Hedonic consumption: Emerging concepts, methods and
propositions”, Journal of Marketing, Vol 31, 397-408.
Holbrook M.B et Hirschman E (1982), “The experiential aspects of consumption: Consumers fantasies,
feelings, and fun”, Journal of Consumer Research, 9, 132-140.
Hsu M.H, Yen C.H, Chiu C.M et Chang C.M (2006), “A longitudinal investigation of continued online
shopping behavior: An extension of the theory of planned behaviour”, International Journal of
Human-Computer Studies, 64, 889-904.
Hunt H.K (1977), “CS/D-Overview and future research directions”, In H.K.Hunt (Ed), Conceptualization
and Measurement of Customer Satisfaction and Dissatisfaction, Marketing Science Institute,
Cambridge, M.A.
Khalifa M et Liu V (2003), “Determinants of satisfaction at different adoption stages of Internet-Based
Services”, Journal of the Association for Information Systems, 4, 5, 206-232.
Khalifa M et Liu V (2005), “Online consumer retention: Development of new habits”, Proceedings of the
38th Hawaii International Conference on System Sciences, 1-8.
Liljander V et Strandvik T (1997), “Emotions in service satisfaction”, International Journal of Service
Industry Management, 8, 2, 148-169
Limayem M, Khalifa M et Frini A (2000), “What makes consumers buy from Internet? A longitudinal
study of online shopping”, IEEE Transactions on Systems, Man, and Cybernetics_Part A: Systems
and Humans, 30, 4, 421-432.
Mano H et Oliver R.L (1993), “Assessing the dimensionality of consumption experience: evaluation,
feeling, and satisfaction”, Journal of Consumer Research, 20, 3, 451-466.
McHaney R, Hightower R et Pearson J (2002), “A validation of the end-user computing satisfaction
instrument in Taiwan”, Information&Management, 39, 503-511.
Mehrabien A (1970), “A semantic space of nonverbal behaviour”, Journal of Consulting and Clinical
Psychology, 35, 248-257.
Mehrabien A (1977), “Individual differences in stimulus screening and arousability”, Journal of
Personality, 45, 237-250.
Metzger M.J, Flanagin A.J et Zwarun L (2003), “College student Web use, perceptions of information
credibility and verification behaviour”, Computers and Education, 41, 270-290.
Novak T.P, Hoffman D.L et Yung Y.F (2000), “Measuring the customer experience in online
environments: A structural modeling approach”, Marketing Science, 19, 1, 22-42.
Oliver R.L (1980), “A cognitive model of the antecedents and consequences of satisfaction decisions”,
Journal of Marketing Research, 17, 460-469.
Oliver R.L (1993), “Cognitive, affective, and attribute bases of the satisfaction response”, Journal of
Consumer Research, 20, 418-430.
Oliver R.L (1997), Satisfaction: A behavioural perspective of the consumer, McGraw-Hill, New York,
NY.
Parthasarathy M et Bhattacherjee A (1998), “Understanding post-adoption behaviour in the context of
online services”, Information Systems Research, 9, 4, 362-379.
Reichheil F.F et Schefter P (2000), “E-loyalty: your secret weapon on the Web”, Harvard Business
Review, 78, 4, 105-113.
Richins M.L (1997), “Measuring emotions in the consumption experience”, Journal of Consumer
Research, 24, 2, 127-146.
Rogers E.M (1995), Diffusion of Innovations, 4th edition, The Free Press, New York.
Rowley J. (1996), “Retailing and shopping on the Internet”, Internet Research: Electronic Network
Applications and Policy, 6, 1, 81-91.
Ryan R.M et Deci E.L (2000), “Self-determination theory and the facilitation of intrinsic motivation,
Social development, and well-being”, American Psychologist, 55, 1, 68-78.
Schwarz N (1990), Feelings as information: Informational and motivational functions of affective states,
In Higgins E.T. et Sorentino R. (Eds), Handbook of motivation and cognition: Foundations of social
behaviour, 2, New-York, Guilford, 527-561.
Thong J.Y.L, Hong S-J et Tam K.Y (2006), “The effects of post-adoption beliefs on the expectation-confirmation model for information technology continuance”, International Journal of Human-Computer Studies, 64, 799-810.
Vanhamme J (2002), “La satisfaction des consommateurs spécifique à une transaction: Définition,
antécédents, mesures et modes”, Recherche et Applications en Marketing, 17, 2, 55-85.
Westbrook R.A (1987), “Product/Consumption-based affective responses and post purchase
processes”, Journal of Marketing Research, 24, 258-270.
Westbrook R.A et Oliver R.L (1991), “The dimensionality of consumption emotion patterns and
consumer satisfaction”, Journal of Consumer Research, 18, 1, 84-91.
Yoo Y-T et Dean A (2001), “The contribution of emotional satisfaction to consumer loyalty”,
International Journal of Service Industry Management, 12, 3, 234-250.
Zviran M, Glezer C et Avni I (2006), “User satisfaction from commercial websites: The effect of design
and use”, Information&Management, 43, 157-178.
Full Article Available Online at: Intellectbase and EBSCOhost │ JISTP is indexed with Cabell’s, JournalSeek, etc.
JOURNAL OF INFORMATION SYSTEMS TECHNOLOGY & PLANNING
Journal Homepage: www.intellectbase.org/journals.php │ ©2012 Published by Intellectbase International Consortium, USA
INVESTIGATING PERSONALITY TRAITS, INTERNET USE AND USER
GENERATED CONTENT ON THE INTERNET
Jeffrey S. Siekpe
Tennessee State University, USA
ABSTRACT
There is an increasing demand on citizens to participate in social network websites and to create and share their own user-generated content (UGC), such as photographs, videos, and blogs. While many organizations are turning to such technologies to aid their innovation processes, little is known about what motivates individuals to engage in UGC. We take the position that investigating UGC is essential to ensure that both users and organizations gain true value from active participation in content creation. This study examines the impact of personality types on UGC. Given that the level of Internet usage is often discretionary rather than mandated, and thus more likely to reflect personal motives, needs, values, preferences and other personality attributes, we investigate the mediating role of Internet use between personality traits and UGC. The study proposes a research model and questions that postulate links between three personality types (extraversion, neuroticism, and psychoticism), Internet use, and four types of user-generated content (UGC). A methodology and a design layout for data collection and analysis are presented.
Keywords: User-Generated Content, Consumer-Generated Media, Personality Traits, Internet Use.
INTRODUCTION
Advances in Internet technologies are not only transforming businesses but also changing the ways users communicate, learn, and do research. Lately, the Internet has expanded beyond its origins as a reading tool and has become an ever more interactive and recursive medium. The recent rise of Web 2.0 has led to an interesting new phenomenon: user-generated content (UGC), also known as consumer-generated media (CGM) or user-created content (UCC). Examples of UGC platforms include Wikipedia, YouTube, Facebook, and Flickr, to name a few. Though a universally accepted definition of UGC is lacking, many have pointed out that its elements include: i) content made publicly available over the Internet, ii) which reflects a certain amount of creative effort, and iii) which is created outside of professional routines and practices (Vickery & Wunsch-Vincent, 2006; OECD, 2007). In fact, Internet users are increasingly visiting social-networking sites - sites that promote networks of relationships around users and information (Hales & Arteconi, 2005) - for entertainment and news, business relationships, consumer product reviews, connecting with friends, and more. But users are doing more than just visiting; they contribute content in the form of journal entries, photos, videos, and weblogs, becoming producers and authors. Ultimately, the value of these sites is derived not from the content provided by the site's owners, but from the emergent relationships among users and the content they create and consume.
PROBLEM STATEMENT
While the benefit derived from user generated content for the content host is clear, the benefit to the
contributor is less direct. If these UGC users are not driven by monetary motives, then why do they
engage in this type of peer production (i.e., mass collaboration)?
For decades, information systems (IS) researchers have recognized how important users’ personal
factors are for predicting technology adoption and use (e.g., Amiel and Sargent, 2004; Lucas, 1981;
Rosengren, 1974). Personal factors in previous IS research can be classified into two broad categories:
relatively mutable factors and dispositional factors (McElroy et al., 2007). More mutable factors include
individual attitudes and personal perceptions, and less mutable dispositional factors include general
personality factors, cognitive style, and self-efficacy. Dispositional factors, such as personality, have
been largely ignored in the MIS context, and there has also been a lack of targeted research on the role
of dispositional factors in IS adoption and use (McElroy et al., 2007). Although the role of user
perceptions, such as perceived ease of use and usefulness, continues to dominate models of
technology acceptance, personal factors affecting IT use also need to be acknowledged as important
variables (Agarwal et al., 2000). This study, therefore, identifies the role of individual differences in the
acceptance of UGC.
RESEARCH OBJECTIVES
This study seeks to investigate the fundamental motivation for users' participation based on their personal factors - that is, on their individual differences. Specifically, this manuscript explores the effect of personality on Internet use and, in turn, the impact of use on behavior (i.e., UGC usage). As Internet usage is regularly engaged in by many individuals in all walks of life (NTIA Release, 2000), it is a logical area to investigate from a personality perspective, particularly since the level of usage is often discretionary rather than mandated, and thus more likely to reflect personal motives, needs, values, preferences and other personality attributes.
There are several important reasons why this area of research merits attention. Personality traits represent relatively enduring characteristics of individuals that show consistencies over their lifespans and across a wide range of situations (Pervin & John, 1997; Shaffer, 2000). For example, Landers and Lounsbury (2006) cited several studies that found personality traits to be related to a broad spectrum of human activities and types of behavior, including school attendance, gambling behavior, parent-infant bed sharing, confessing to crimes in police interrogations, blood donations, housing behavior, music listening preferences, leadership behavior, behavioral aggression, television viewing, drug use, sexual behavior, job performance, and participation in sports.
We believe that understanding the impact of personality differences together with Internet use can be a valuable addition for researchers in their efforts to comprehend the determinants of UGC. As Csikszentmihalyi (1990) asserted, the differences among individuals lie not merely in their skills, but may also lie in their personality. More research is needed in the area of individual differences and the autotelic personality to help HCI researchers clarify which individual measures influence the extent of Internet usage and UGC.
Many studies in various disciplines including IS have examined the motivations for internet use. With
regard to theoretical advancement, for researchers interested in extending this line of work, the first
critical issue relates to the exploration of UGC usage types. Most of the traditional models of technology
acceptance focus on the utilitarian aspect (e.g., Goodhue & Thompson, 1995), but recent research on
mobile computing (e.g., Nysveen et al., 2005) has found that usage of some types of computing is
driven more by hedonic purposes.
From the perspective of practice, as organizations are relying more heavily on UGC to aid their
innovation process, organizations may leverage the findings from this study to motivate employee
Internet use and refine their websites to encourage UGC usage.
LITERATURE REVIEW
Research has just begun to explore the connection between personality, flow, and behavior (i.e., UGC
usage) in computer interactions. In a recent study, Amiel and Sargent (2004) examined the relationship
between personality types, Internet use, and usage motives for undergraduate students. They found
that people with different personality types showed distinctive patterns of Internet use and usage
motives. Hamburger and Ben-Artzi (2000) examined the relationship between personality traits and
Internet services. They demonstrated that extraversion and neuroticism showed different patterns of
relationships with the factors of the Internet-Services Scale, which was classified into social services,
information services, and leisure services. In their study, Hamburger and Ben-Artzi illustrated that
extraversion had a significantly positive influence on the use of leisure services, while neuroticism had a
negative influence on information services. Landers and Lounsbury (2006) also found that a person
who is normally conscientious and neurotic is less likely to use the Internet for what he or she sees as
unproductive activities, such as watching YouTube clips. Another study’s findings indicated that even
though extraverts spend less time on the Internet, they use the Internet as a tool to acquire things to
share with others, such as surfing wikis (Amiel and Sargent, 2004). Thus, personality traits may influence individual attitudes and behaviors. While researchers are beginning to explore the influence of
personality, to date studies incorporating all of the personality types have been absent.
It is important to consider first the issue of what personality traits to investigate in relation to Internet
usage, since there are so many different traits to choose from in the broader psychological literature.
Fortunately, there is a general consensus regarding the Big Five model as a unified, parsimonious
conceptual framework for personality (Digman, 1990, 1997; Wiggins & Trapnell, 1997). Empirical
studies have verified the overall factor structure and integrity of the five constructs (often referred to in the
literature as the Big Five) of Openness, Conscientiousness, Extraversion, Agreeableness, and
Neuroticism in many different settings and areas of inquiry (Costa & McCrae, 1994; Landers &
Lounsbury, 2006; De Raad, 2000). There is, however, a growing debate about whether validity
relationships can be enhanced by considering narrow personality traits in addition to the broad, Big Five
constructs (e.g. Ashton, 1998; Ones & Viswesvaran, 2001; Paunonen, Rothstein, & Jackson, 1999;
Schneider, Hough, & Dunnette, 1996). For the present study, we used two criteria to select narrow traits likely to add variance beyond the Big Five. Based on prior research by the second author (Lounsbury et al., 2003; Lounsbury, Loveland, & Gibson, 2002), we selected three narrow traits, similar to Tosun and Lajunen (2009), for inclusion in the present study: extraversion (as opposed to introversion),
neuroticism (as opposed to stability), and psychoticism (as opposed to impulse control).
Extraversion relates to an individual's "ability to engage the environment" (Clark and Watson, 1999,
p.403). Extraverts seek out new opportunities and excitement. Neuroticism, however, represents a lack
of psychological adjustment and emotional stability, and it illustrates the degree to which an individual
perceives the world as threatening, problematic, and distressing (Clark and Watson, 1999, p.403).
Highly neurotic people tend to be fearful, sad, embarrassed, and distrustful, and they have difficulty
managing stress (McElroy et al., 2007). Finally, particular to the Eysenck model is the psychoticism
trait. Highly psychotic people show a disregard for authority and society’s rules and regulations,
exhibiting a need to be on the edge (Eysenck et al., 1985; Eysenck, 1998).
Personality Traits and Internet Use
Personality is a stable set of characteristics and tendencies that determine people’s commonalities and
differences in thoughts, feelings, and actions (Maddi, 1989). Eysenck et al. (1985) insisted that “traits
are essentially dispositional factors that regularly and persistently determine our conduct in many
different types and situations” (p. 17). The Eysenck Personality Questionnaire (EPQ) has been
prominently used in personality research (Amiel and Sargent, 2004; Hamburger and Ben-Artzi, 2000;
Shim and Bryant, 2007; Mikicin, 2007; Ebeling-Witte et al., 2007; Swickert et al., 2002) and has shown
consistent results across a variety of samples. For example, Scealy et al. (2002) found that shyness
was related to specific types of Internet usage. Leung (2002) found that loneliness was not significantly
correlated with usage of the online instant messaging program, ICQ (‘‘I seek you,’’), but was related to
amount of self-disclosure. Armstrong, Phillips, and Saling (2000) found that low self-esteem was related
to heavy Internet usage. Hamburger and Ben-Artzi (2000) found that extraversion and neuroticism were
related to different types of Internet usage. Therefore, considering that research has shown the ability of
personality traits to predict important behavioral variables (Zmud, 1979; Barrick and Mount, 1991), we
adopt personality as a forerunner to behavior formation, internet use and UGC.
User Generated Content (UGC)
As the Web has evolved from Web 1.0, a read-only form, to Web 2.0, a read/write form, social network and community websites have changed the way people use the Internet: creating personal profiles and content and sharing photographs, videos, blogs, wikis, and UGC in general. The most popular social networking and UGC sites - Facebook, MySpace, and YouTube - are among the 10 most-visited websites worldwide, according to Alexa Internet (Alexa, 2007). The increasing popularity of social
networking and UGC demonstrates that the importance of the Internet in work, education, and daily life
is incontrovertible. The evolution of the Internet and the increasing significance of UGC therefore pose
certain social challenges as well. Some studies have been conducted on the characteristics, social, and
personal aspects of social software and social network sites. For an overview, see e.g. (Boyd & Ellison,
2007). Yet, these studies rarely focus on personality types of users or on UGC types.
Although additional functionalities such as social connections exist, they are centered around a specific type of content, such as forum postings, photos, videos, or articles. Based on how individual content items are created within the community, Schwagereit, Scherp and Staab (2011) suggested three subcategories: (1) collaborative - content has no single creator but is created, modified, and improved by different users (e.g., Wikipedia); (2) non-collaborative - content has one single creator who is responsible for it (e.g., Flickr, YouTube); (3) interactive - content documents the communication of different users (e.g., question/answering platforms). Nonetheless, research has consistently shown that the broad categories of Internet use are information acquisition, entertainment, and communication. These variables are presumed to be the underlying categories of the content types that are generated even within the Web 2.0 environment.
RESEARCH MODEL
This paper proposes a model of the relationships around individual personality, internet use, and UGC.
This model combines Eysenck et al.'s (1985) personality model with Internet use to predict UGC usage, as shown in Figure 1. We postulate that the three personality trait types are related differently to Internet use and UGC usage behavior. In addition, we propose that the level of Internet use has a positive relationship with UGC usage.
Figure 1: Research Model [diagram: the personality traits extraversion, neuroticism, and psychoticism are linked to Internet use, which in turn is linked to the UGC usage types entertainment, information acquisition, and communication]
Research Questions
We addressed three research questions:
(1) Are the personality traits of extraversion, neuroticism, and psychoticism related to Internet usage?
(2) Are the personality traits of extraversion, neuroticism, and psychoticism related to UGC?
(3) Does Internet usage account for UGC?
METHODOLOGY
This study employs a survey research design. Survey research is one of the most important areas of
measurement in applied social research. The broad area of survey research encompasses any
measurement procedures that involve asking questions of respondents. Survey research provides for
efficient collection of data over broad populations, amenable to various ways of administration such as
in person, by telephone, and over the Internet.
Measures
Eysenck et al.'s personality questionnaire (EPQ) is used to measure personality in this study. Following the suggestion of Amiel and Sargent (2004), a short version of the EPQ was slightly adapted in language to our target sample. The EPQ consists of 36 self-report items, with 12 items for each personality type, on a seven-point Likert-type scale ranging from "strongly disagree" (1) to "strongly agree" (7). The survey instrument is also designed to measure Internet consumption (Internet use) in terms of how many hours per day one uses the Internet. As the dependent variable, we define UGC usage as "the degree to which users utilize UGC" (Davis, 1989), and we classify UGC usage into three different usage types based on purpose: entertainment, communication, and information acquisition.
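A small sketch of how such responses might be scored is shown below. The column names and the simple mean-scoring rule are assumptions for illustration (the EPQ's published scoring keys may differ), not the author's actual procedure.

```python
import pandas as pd

# Hypothetical column layout: "ext_1".."ext_12", "neu_1".."neu_12", "psy_1".."psy_12",
# plus self-reported daily hours online and three UGC usage scores.
def score_epq(responses: pd.DataFrame) -> pd.DataFrame:
    scored = pd.DataFrame(index=responses.index)
    for trait in ("ext", "neu", "psy"):                     # extraversion, neuroticism, psychoticism
        items = responses.filter(regex=rf"^{trait}_\d+$")   # 12 seven-point Likert items per trait
        scored[trait] = items.mean(axis=1)                  # simple mean score per respondent
    scored["internet_use"] = responses["hours_per_day"]
    for ugc in ("entertainment", "communication", "info_acquisition"):
        scored[ugc] = responses[ugc]
    return scored
```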
Sample and Data Analysis
A convenience sample of business students at a university in Tennessee was used for data collection. The research model will be tested by subjecting the data to structural equation modeling (SEM).
SEM is a statistical technique for testing and estimating causal relationships using a combination of statistical data and qualitative causal assumptions. SEM allows both confirmatory and exploratory modeling, meaning it is suited to both theory testing and theory development. Results of the research can be discussed in three different areas: construct validity, reliability, and correlation. Straub et al. (2004) suggested multiple validation guidelines for information systems research. For the current study, factor analysis will be used to assess convergent and discriminant construct validity, and Cronbach's alpha will be employed to assess internal consistency reliability.
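For illustration, the path model in Figure 1 could be specified in SEM software roughly as follows. This is a hedged sketch using the semopy package and hypothetical variable names; the author does not specify which SEM tool will be used.

```python
import pandas as pd
import semopy  # one possible Python SEM package; not necessarily the author's tool

# Hypothetical variable names mirroring Figure 1: the three traits predict Internet use,
# which in turn predicts the three UGC usage types.
MODEL_DESC = """
internet_use ~ extraversion + neuroticism + psychoticism
entertainment ~ internet_use
communication ~ internet_use
info_acquisition ~ internet_use
"""

def fit_research_model(scored: pd.DataFrame):
    model = semopy.Model(MODEL_DESC)
    model.fit(scored)        # estimate path coefficients
    return model.inspect()   # parameter estimates, standard errors, p-values
```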
CONCLUSION
In this paper we intend to examine the impacts of personality traits and Internet use on user-generated content on the Internet. The literature review shows that there could be significant linkages between personality traits and the purposes of content generation on the Internet. The paper selects three salient personality traits and four categories of UGC forms with respect to their usage (their specific real function). A model is proposed linking the personality traits with UGC, introducing Internet use as a mediating factor between them. A survey research design is outlined to collect data from a convenience sample of college students. Confirmation of these expectations would have implications for both users and organizations.
REFERENCES
Agarwal, R., Sambamurthy, V., & Stair, R.M. (2000) Research Report: The evolving relationship
between general and specific computer self-efficacy: An empirical assessment. Information
Systems Research, 11, 418-430.
Alexa. (2007). The Web Information Company. Retrieved 2007, 01.09.07, available from
www.alexa.com.
Amiel, T., & Sargent, S.L. (2004) Individual differences in internet usage motives. Computers in Human
Behavior, 20, 711-726.
Armstrong, L., Phillips, J. G., & Saling, L. L. (2000). Potential determinants of heavier Internet usage.
International Journal of Human-Computer Studies, 53(4), 537–550.
Ashton, M. C. (1998). Personality and job performance: The importance of narrow traits. Journal of
Organizational Behavior, 19, 289–303.
Barrick, M. R., & Mount, M. K. (1991) The big five personality dimensions and job performance: a meta
analysis. Personnel Psychology, 44, 1-26.
Boyd, D.M. and Ellison, N.B. (2007). Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1), article 11, available from http://jcmc.indiana.edu/vol13/issue1/boyd.ellison.html.
Clark, L. A., & Watson, D. (1999) Temperament: A new paradigm for trait psychology. In: Handbook of
Personality (2nd ed), Pervin, L.A., & John, O.P. (eds.), pp. 399-423. Guilford Press, New York.
Costa, P., & McCrae, R. (1994). Stability and change in personality from adolescence through
adulthood. In C. F. Halverson, G. A. Kohnstamm, & R. P. Martin (Eds.), The developing structure of
temperament and personality from infancy to adulthood (pp. 139–155). Hillsdale, NJ: Erlbaum.
Csikszentmihalyi, M. (1990) Flow: The Psychology of Optimal Experience, Harper and Row, New York.
Davis, F.D. (1989) Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS Quarterly, 13, 319-339.
De Raad, B. (2000). The Big Five personality factors (The psycholexical approach to personality).
Seattle: Hogrefe & Huber.
Digman, J. (1990). Personality structure: Emergence of the five-factor model. Annual Review of Psychology, 41, 417-440.
Ebeling-Witte, S., Frank, M.L., & Lester, D. (2007) Shyness, internet use, and personality.
CyberPsychology Behavior, 10, 713-716.
Eysenck, H.J. (1998) Dimensions of Personality, Transaction Publishers, New Brunswick, NJ.
Eysenck, S.B.G., Eysenck, H.J., & Barrett, P.T. (1985) A revised version of the psychoticism scale.
Personality and Individual Differences, 6, 21-29.
Goodhue, D.L. and Thompson, R.L. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213-236.
Hales, David & Arteconi, Stefano (2005) Friends for Free: Self-Organizing Artificial Social Networks for
Trust and Cooperation.. In Algorithmic Aspects of Large and Complex Networks.
Hamburger, Y.A., & Ben-Artzi, E. (2000) The relationship between extraversion and neuroticism and the
different uses of the internet. Computers in Human Behavior, 16, 441-449.
Landers, R., & Lounsbury, J.W. (2006) An investigation of big five and narrow personality traits in
relation to internet usage. Computers and Human Behavior, 22, 283-293.
Leung, L. (2002). Loneliness , self-disclosure, and ICQ (‘‘I seek you’’) use. CyberPsychology &
Behavior, 5(3), 241–251.
Lounsbury, J. W., Loveland, J. M., & Gibson, L. W. (2002). Job performance validity of optimism. In
Paper presented at the Society for Industrial and Organizational Psychology, Toronto, Canada.
Lounsbury, J. W., Loveland, J. M., Sundstrom, E. D., Gibson, L. W., Drost, A. W., & Hamrick, F. L.
(2003). An investigation of personality traits in relation to career satisfaction. Journal of Career
Assessment, 11, 287–307.
Lucas, H.C., Jr. (1981) Implementation, the Key to Successful Information Systems, Columbia
University Press: New York.
Maddi, S.R. (1989) Personality Theories: A Comparative Analysis (5th ed.), Dorsey, Homewood: IL.
McElroy, J.C., Hendrickson, A.R., Townsend, A.M., & DeMarie, S.M. (2007) Dispositional factors in
internet use: personality versus cognitive style. MIS Quarterly, 31, 809-820.
Mikicin, M. (2007) Relationships between experiencing flow state and personality traits, locus and
control and achievement motivation in swimmers. Physical Education and Sport, 51, 61-67.
National Telecommunications and Information Administration (NTIA) and Economics and Statistics
Administration. (2000). Executive Summary FFTN_00. Retrieved September 15, 2003, from
http://www.ntia.doc.gov/ntiahome/digitaldivide/execsumfttn00.htm.
Nysveen, H., Pedersen, P., & Thorbjornsen, H. (2005) Explaining intention to use mobile chat services:
moderating effects of gender. Journal of Consumer Marketing, 22, 247-256.
Ones, D. S., & Viswesvaran, C. (2001). Personality at work: Criterion-focused occupational personality
scales used in personnel selection. In B. W. Roberts & R. Hogan (Eds.), Personality psychology in
the workplace (pp. 63–92). Washington, DC: American Psychological Association.
Organisation for Economic Cooperation and Development (2007) Participative web: User-Created
Content, Accessed Online: http://www.oecd.org/dataoecd/57/14/38393115.pdf
Paunonen, S. V., Rothstein, M. G., & Jackson, D. N. (1999). Narrow meaning about the use of broad
personality measures for personnel selection. Journal of Organizational Behavior, 20(3), 389–405.
Pervin, L. A., & John, O. P. (1997). Personality: Theory and research (7th ed.). Oxford: John Wiley and
Sons.
Rosengren, K.E. (1974) Uses and gratifications: a paradigm outlined. In: The Uses of Mass Communications: Current Perspectives on Gratifications Research, Blumler, J.G., & Katz, E. (eds.).
Scealy, M., Phillips, J. G., & Stevenson, R. (2002). Shyness and anxiety as predictors of patterns of
Internet usage. CyberPsychology & Behavior, 5(6), 507–515.
Schneider, R. J., Hough, L. M., & Dunnette,M. D. (1996). Broadsided by broad traits: how to sink
science in five dimensions or less. Journal of Organizational Behavior, 17, 639–655.
Schwagereit, F, Scherp, A, Staab, S. (2011) Survey on Governance of User-generated Content in Web
Communities, 3rd International Conference on Web Science, ACM WebSc11, Koblenze, Germany,
June 14-17.
Shaffer, D. R. (2000). Social and personality development (4th ed.). Belmont, CA: Wadsworth/Thomson
Learning.
Shim, J.W., & Bryant, P. (2007) Effects of personality types on the use of television genre. Journal of
Broadcasting & Electronic Media, 51, 287-304.
Straub, D. W., Boudreau, M.-C. and Gefen, D. (2004) “Validation Guidelines for IS Positivist Research,”
Communications of the Association for Information Systems, 14, 380-426.
Swickert, R.J., Hittner, J.B., Harris, J.L., & Herring, J.A. (2002) Relationships among internet use,
personality, and social support. Computers in Human Behavior, 18, 437-451.
Tosun, Leman Pinar & Lajunen, Timo (2009). Why do young adults develop a passion for Internet activities? The associations among personality, revealing "true self" on the Internet, and passion for the Internet. CyberPsychology & Behavior, 12(4), 401-406.
Vickery, G., & Wunsch-Vincent, S. (2006) Participative Web and User-Created Content: Web 2.0, Wikis
and Social Networking, Organization for Economic Co-operation and Development: Paris.
Wiggins, J. S., & Trapnell, P. D. (1997). Personality structure: The return of the Big Five. In R. Hogan,
J. Johnson, & S. Briggs (Eds.), Handbook of personality psychology (pp. 737–765). San Diego,
CA: Academic.
Zmud, R.Q. (1979) Individual differences and MIS success: a review of the empirical literature.
Management Science, 25, 966-997.
Full Article Available Online at: Intellectbase and EBSCOhost │ JISTP is indexed with Cabell’s, JournalSeek, etc.
JOURNAL OF INFORMATION SYSTEMS TECHNOLOGY & PLANNING
Journal Homepage: www.intellectbase.org/journals.php │ ©2012 Published by Intellectbase International Consortium, USA
WHY DO THEY CONSIDER THEMSELVES TO BE ‘GAMERS’?:
THE 7Es OF BEING GAMERS
M. O. Thirunarayanan and Manuel Vilchez
Florida International University, USA
ABSTRACT
Video games are not just a favorite pastime among youth and adults; they are also a multi-billion dollar industry (NPD Group, 2012). In spite of the fact that playing electronic games is becoming an important fact of modern life, practically no research has been done to find out why some who play video games consider themselves to be "gamers". Using qualitative data, this study explores the various facets or characteristics of being a gamer, as described by those who consider themselves to be gamers. The results of this study indicate that there are at least 7Es related to being a gamer: engagement, enjoyment, equipment, existential, experience, expertise, and extent. This study also found that there could be what can be labeled a "gamer divide" or "gamer gap" between male and female game players.
Keywords: Electronic Games, Gamers, Gamer Characteristics, Gamer Divide / Gap.
BACKGROUND INFORMATION
The term “gamer” is used commonly to refer to those who play computer or video games for long
periods of time. The Merriam-Webster dictionary defines a gamer as "a person who plays games; especially: a person who regularly plays computer or video games." Another definition provided by
SearchMobileComputing.com states that “A gamer is a devoted player of electronic games, especially
on machines especially designed for such games and, in a more recent trend, over the Internet.”
Shepard (2008) has defined the term as follows: “Gamer: One who has taken a form of gaming (e.g.
Board Games, Pen & Paper, Video Games) and made it a part of their life-style”. He (Shepard, 2008)
also differentiates between three categories of gamers that he labels “Casual,” “Gamer,” and
“Hardcore.” Beck and Wade (2004) define a gamer as someone who “grew up playing” games. While
there are these and a few other definitions of the term “gamers,” practically no research exists that tries
to determine why those who play computer or video games consider themselves to be gamers. This
study will attempt to fill this gap in the literature.
Before mentioning the purpose of this study, it might be helpful to state what this study is not about.
This study does not attempt to classify gamers into different categories. This study does not attempt to
determine what motivates people to play games and why.
The purpose of this study is to determine the meaning or meanings of the word “gamers” based on why
people who play games consider themselves to be gamers. The study will identify the different
characteristics associated with being a gamer.
BRIEF DESCRIPTION OF THE SAMPLE
Two hundred and three students enrolled in a large university located in the southeastern part of the United States participated in a survey of video game players. Both the study and the survey instrument were approved by the University's Institutional Review Board (IRB). Some of the results from this study, including a description of the sample, have been reported elsewhere (Thirunarayanan et al., 2010), but the characteristics of the sample that are relevant to this study will be reported in this paper.
One hundred and ninety-nine of the two hundred and three study participants responded to this survey item. Data for four participants were missing. Of the one hundred and ninety-nine respondents, sixty-six males (55%) indicated that they were gamers, while fifty-four (45%) did not consider themselves to be gamers. Only thirteen females (16.5%) considered themselves to be gamers, while sixty-six (83.5%) did not think of themselves as gamers. Of the seventy-nine males and females who responded that they were gamers, only seventy-five provided written statements about why they thought they were gamers.
QUANTITATIVE RESULTS FROM A SURVEY OF VIDEO GAME PLAYERS
Among the many questions that were included in the survey instrument, one asked the respondents if
they considered themselves to be gamers. Respondents were asked to select “yes” or “no” in response
to the question. Those who selected “yes” were asked to explain in writing in the space provided why
they considered themselves to be gamers. Those who chose “no” were asked to explain in writing why
they did not consider themselves to be gamers. Only thirteen (16.5%) of the females considered themselves to be gamers, while sixty-six (83.5%) indicated that they did not think of themselves as gamers. The proportion of males who considered themselves gamers is much larger than the proportion of females who did so, and this difference was statistically significant, as shown in Table 1. This result is not surprising because more males than females play video games in general, and more males also play games competitively. Could this be labeled a "gamer divide" or a "gamer gap" between males and females?
Table 1: Significant Differences Between Males and Females Who Consider Themselves to be Gamers

Do you consider yourself to be a "Gamer"?        Female    Male     Total
Yes      Count                                   13        66       79
         Expected count                          31.4      47.6     79.0
         % within row                            16.5%     83.5%    100.0%
No       Count                                   66        54       120
         Expected count                          47.6      72.4     120.0
         % within row                            55.0%     45.0%    100.0%
Total    Count                                   79        120      199
         Expected count                          79.0      120.0    199.0
         % within row                            39.7%     60.3%    100.0%

Chi-square value = 29.565, 2-sided p = 0.000
Fisher's exact test, 2-sided p = 0.000
Number of missing cases = 4
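The reported statistics can be checked from the cell counts in Table 1. The short sketch below uses scipy as a stand-in for whatever statistical package the authors actually used:

```python
from scipy.stats import chi2_contingency, fisher_exact

# Observed counts from Table 1: rows = (Yes, No), columns = (Female, Male)
observed = [[13, 66],
            [66, 54]]

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(round(chi2, 3), p)       # 29.565, p < 0.001, matching the table
print(fisher_exact(observed))  # odds ratio and two-sided p for the 2x2 table
```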
QUALITATIVE RESULTS FROM A SURVEY OF VIDEO GAME PLAYERS
The qualitative statements provided by the study participants about why they did or did not consider themselves to be gamers were coded, categorized and analyzed using techniques and procedures recommended by Bogdan and Biklen (2006).
Samples of written statements from the study participants include "I enjoy it, and I make time for it," "I enjoy video gaming by myself or with friends," "I like to play games, I enjoy video games," and "I really like them." Because these statements relate to the joy that people feel when they play games, the category of "enjoyment" was derived from statements that made references to such feelings.
Based on such analyses of the qualitative data, the following seven categories emerged regarding why participants considered themselves to be gamers. They are reported in alphabetical order:
 Engagement
 Enjoyment
 Equipment
 Existential
 Experience
 Expertise
 Extent
Because the derived labels for all the seven categories start with the letter “E,” we decided to name
them the “7Es.” The above categories are also characteristics associated with being a gamer. Each of
the 7Es will be discussed in alphabetical order.
Engagement
Being actively engaged with various aspects of gaming is a characteristic of being a gamer. Some study participants stated "I follow industry news, know major companies and play a lot," "Because I am familiar with the gaming culture," "I read video game journalism," "I travel for tournaments," and "I constantly keep myself in the know with games."
Enjoyment
Gamers play games because they enjoy playing games. This characteristic refers to the love for games
and the enjoyment that comes with playing games. Examples of comments that were used to form this
category have been provided earlier. It stands to reason that if one does not enjoy playing games, he or
she will not voluntarily continue to play them.
Equipment
One cannot play games unless one has access to the equipment needed to play them. If one owns gaming equipment, one has more access and can play anytime he or she wants to. Some of the statements made by study participants that were used to derive this category included "I own several systems," "I own all of them," and "'cause [because] I own more games than friends." A definition of gamers should include ownership of equipment, because even if people start playing games on equipment owned by others, they will eventually purchase their own if they continue to play for extended periods of time.
Existential
One of the most recognized statements in philosophy is Rene Descartes’ “I think, therefore I am.” The
spirit of this statement is not lost on our participants. The responses show that at times they don’t need
possessions, hours played or someone to tell them what they are – “gamers”. Some of the study
participants felt that they were gamers just because they play games. There is no need for any reason
to play games other than the fact that they play games: "Because I game," "cuz [because] I play games," "Because I play games," "Because I play video games," "Because I am," "Because I do," "I play video games," and "I play games." Since these statements were similar to the statement "I exist, therefore I am," they were grouped under a category labeled "Existential."
Experience
Experience with gaming comes from being involved with gaming over a long period of time. Statements
made by study participants that provide support for the existence of this category include “I grew up
with them,” “Been doing it since a baby,” “I have played video games my entire life,” and “I used to do it
professionally.”
Expertise
Study participants who considered themselves to be gamers thought that they had more expertise than
non-gamers. Some of the statements that were used to form this category included “I know the ins and
outs of my favorite game,” “I usually follow the news and people sometimes consult me,” “Because I
know more about Halo than most people should,” “I constantly keep myself in the know with games,”
“Because I know enough about games to give accurate info. to others,” and “I'm in depth into the game
I play.”
Extent
The amount of time spent daily or weekly or over a short duration is another characteristic associated
with being a gamer. Participants stated that they “Spend several hours a day playing online,” “Time
spent gaming is not "average",” “Play often, play online very often,” and “I spend most of my free time
gaming.”
Sanger, Wilson, Davies, and Whittaker (1997) report that gamers often lose track of time and can become addicted to video games. They go on to cite the fact that games such as Enderfun are riddled with subliminal messages that influence the behavior of the player. This occurs because parents are not interested in the games that their children play and, furthermore, do not monitor what games their children are playing. Game consoles and computer games now have the capability to connect to the Internet. The immediate result of this capability is that gamers have the ability to play video games with other gamers anywhere and at any given time (Cox and Frean, 1995).
The 7Es of being gamers are represented diagrammatically in Figure 1.
Figure 1: The 7Es of being gamers.
Non-Gamers
The students who do not consider themselves to be gamers unequivocally cite the amount of time spent playing video games as the reason they do not consider themselves gamers. They consistently describe themselves as players. The fact that they play socially or play just one game is not enough for them to consider themselves gamers. Some of the responses provided by the study participants who did not consider themselves gamers included written statements such as "Don't play enough video games," "I hardly play. Just Rock Band," "I don't play all the time," "It's more casual," "I don't play games that often and it's not a priority," "I play casually, given my priorities," and "I do not play often, only when I find time and not to an extreme."
DISCUSSION OF THE FINDINGS
The study found a statistically significant difference between males and females: a significantly larger proportion of males than females considered themselves to be gamers. This suggests that a smaller proportion of females than males play games competitively, and it can be interpreted as a "gamer divide" or "gamer gap."
The participants of this study considered themselves to be gamers for several reasons. These reasons are also aspects, facets, dimensions, or characteristics associated with gaming. Based on an analysis of the qualitative data, the 7Es associated with gaming have been identified. The results of this study show that the term 'gamers' means much more than merely playing video games.
The findings of this study suggest that there are many kinds of gamers, not just one. There are those
gamers who play for the enjoyment of playing games. Others play games because they are successful
at playing them. The results of this study provide a 7Es framework for studying gamers and analyzing
game playing behavior.
According to Carstens and Beck (2004):
This "game generation" will soon outnumber their elders in the workplace. Their way of
thinking will soon pass the business tipping point and become standard operating procedure.
Sooner or later, those who grew up without video games will have to understand the gamers.
That means not only learning what they're all about, but finding ways to redesign educational
and training curricula around their needs.
(p. 22)
This study has found the different aspects or characteristics of gaming as described by gamers
themselves, thus furthering the understanding of gamers.
CONCLUSION
The 7Es explain the term ‘gamer’ in a multidimensional manner, because being a gamer means
different things to different people who play games. In this paper no attempt has been made to
determine if one or more of the 7Es are more important than some of the other 7Es from the point of
view of gamers. Other studies can determine the importance that gamers themselves attribute to the
different E’s. Findings of such a study will help identify the most important aspects or facets of being a
gamer.
Additional criteria regarding why people consider themselves to be gamers may also emerge from the
findings of future studies. Other studies conducted in the future could also offer insights into the
relationships between the 7Es. What are the relationships among the 7Es? How do the 7Es influence
each other when it comes to defining oneself as a gamer?
The sample of participants of this study was drawn from students who were enrolled in classes in a
Hispanic serving university. Future studies should include samples drawn from other racial and ethnic
groups. Similar studies should also be conducted with samples of participants drawn from diverse
socioeconomic backgrounds. Larger numbers of females could also be included in future studies.
Developers of electronic games can use the 7Es to guide the development of games for educational
and non-educational purposes.
REFERENCES
Bogdan, R. & Biklen, S. K. (2006). Qualitative research for education: An introduction to
theories and methods (5th ed.). Boston, MA: Allyn & Bacon.
Beck, J. C., & Wade, M. (2004). Got game: How the gamer generation is reshaping business
forever. Boston, MA: Harvard Business School Press.
Carstens, A., & Beck, J. (2004). Get ready for the gamer generation. TechTrends, 49(3), 22-25. Retrieved from the World Wide Web on August 15, 2011:
http://www.springerlink.com/content/8j296408p60u4n86/
Cox, J. & Frean, A. (1995, July 22). Internet generation lifts computer sales sky high. Times,
p. 7.
Merriam-Webster. Retrieved from the World Wide Web on August 2, 2011:
http://www.merriam-webster.com/dictionary/gamer
NPD Group. (2012). The NPD Group: U.S. consumer electronics holiday sales revenue drops
6 percent from 2010. Retrieved from the World Wide Web on January 10, 2012:
https://www.npd.com/wps/portal/npd/us/news/pressreleases/pr_120108
SearchMobileComputing.com. DEFINITION: gamer. Retrieved from the World Wide Web on
August 2, 2011: http://searchmobilecomputing.techtarget.com/definition/gamer
Shepard, D. (2008, October 20). Defining a gamer. Rarityguide.com. Retrieved from the World
Wide Web on August 2, 2011: http://www.rarityguide.com/articles/articles/16/1/Defining-a-Gamer/Page1.html
Thirunarayanan, M.O., Vilchez, M., Abreu, L., Ledesma, C., and Lopez, S. (2010). A survey of
video game players in a public, urban, research university. Educational Media
International, 47(4), 311 – 327. Available at the following URL:
http://dx.doi.org/10.1080/09523987.2010.535338
J. Mankelwicz, R. Kitahara and F. Westfall
JISTP - Volume 5, Issue 12 (2012), pp. 87-101
Full Article Available Online at: Intellectbase and EBSCOhost │ JISTP is indexed with Cabell’s, JournalSeek, etc.
JOURNAL OF INFORMATION SYSTEMS TECHNOLOGY & PLANNING
Journal Homepage: www.intellectbase.org/journals.php │ ©2012 Published by Intellectbase International Consortium, USA
ACADEMIC INTEGRITY, ETHICAL PRINCIPLES, AND NEW TECHNOLOGIES
John Mankelwicz, Robert Kitahara and Frederick Westfall
Troy University, USA
ABSTRACT
To prevent and police academic dishonesty, schools have increasingly turned to modern
technologies. The outcomes have been at best mixed, as current social, technological, and
legal trends may have sheltered and favored the cheaters. This paper examines academic
dishonesty and the tools, practices and strategies to mitigate the problem from a formal ethical
perspective, with special attention to more currently prominent technologies. However, technologies do
not address the many underlying pressures, skill factors, and value traits driving students to cheat.
Hybrid approaches, integrating technology into the development of personal virtues and ethical culture
at schools may prove more potent (Kitahara, et. al., 2011).
Keywords: Academic Integrity, Technology, Biometrics, Electronic Monitoring, Ethics.
INTRODUCTION
Both the academic literature and the popular press are replete with reports concerning the growing
problem of academic dishonesty across the globe. This parallels the attention to dramatic cases of
cheating in nearly every facet of today’s society: family, business, sports, entertainment, politics, etc.
Students believe that cheating is more prevalent and accepted today, and is present in every facet of
life. For example, results from the 29th Who's Who Among American High School Students Poll taken
in 1998 indicate that 80% of the country's best students cheated to get to the top of their class; more
than half of the students surveyed said that they do not think cheating is a big deal; 40% cheated on a
quiz or a test; 67% copied someone else's homework; and 95% of cheaters say they were not caught.
Ercegovac and Richardson (2004) found that 58.3 percent of high school students let someone else
copy their work in 1969 and 97.5 percent did so in 1989. Over the same time period the percentage of
students who report ever using a cheat sheet doubled from 34 to 68 percent. They reported that at
Virginia Polytechnic Institute reported cases of various forms of cheating more than tripled, from 80 in 1995 to 280 in
1997. These results imply that educators must assume a much more proactive and attentive role in
order to preserve the academic integrity of their courses and programs. The literature also suggests
that academic dishonesty may be growing at disturbing rates worldwide (McCabe, et. al., 2001a;
Eckstein, 2003).
In response to assaults on academic integrity, institutions are searching for the best policies,
procedures and tools to minimize the problem (Academy of Management Panel, 2009). Olt (2002)
classifies the approaches to combating academic dishonesty for online courses into three categories of
policing, prevention, and virtue. Policing seeks to catch and punish cheaters. Prevention seeks to
reduce both the pressure to cheat and the opportunities to do so. A virtues approach, the slowest but
probably the most effective, builds a culture for students so that they do not want to cheat. This
classification structure provides a convenient and concise way to discuss potential approaches to
ensuring academic integrity.
Increasingly, schools have turned to technology. The methods commonly employed include electronic
and procedural mechanisms for controlling the classroom/exam-room environment, software aids for
detecting plagiarism, biometric systems for student identification, and statistical methods for analyzing
unusual patterns in student performance compared to class or historical norms. However, even when
schools have employed advanced technology, most solutions to date have involved straightforward
policing methods for detection and punishment, as well as preventive security measures.
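As a sketch of the last of these methods, the short Python fragment below flags scores that deviate sharply from a historical norm. The data, the threshold, and the function name are illustrative assumptions; a flag is only a prompt for human review, not evidence of cheating, and no institution's actual screening system is being described.

def flag_against_norm(scores, hist_mean, hist_stdev, threshold=2.0):
    # Return student ids whose score deviates from the historical norm
    # by more than `threshold` standard deviations.
    return [sid for sid, s in scores.items()
            if abs(s - hist_mean) / hist_stdev > threshold]

# Hypothetical class results compared against a hypothetical historical norm.
class_scores = {"s01": 62, "s02": 58, "s03": 65, "s04": 61, "s05": 97}
print(flag_against_norm(class_scores, hist_mean=64.0, hist_stdev=10.0))  # -> ['s05']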
It seems clear that the nature of cheating is changing. As data storage, access, distribution and
communication technologies have advanced, so too has the sophistication of the methods by which
offending students practice their deceptions (Conradson & Hernandez-Ramos, 2004; Argetsinger,
2003). Collaborative environments like team projects and the Internet are making the distinction
between honest and dishonest behavior much fuzzier.
Issues of academic dishonesty and of technology do not exist in a vacuum, but are influenced by
a broad cultural context. This paper will first discuss this context. It will then describe the issue of
academic dishonesty, including factors that drive students to cheat. A formal ethical analysis will treat
the actions of the academic offender, the educational institution, and society. It will then consider in
detail the ethical issues surrounding widely used policing/prevention technologies. Focus will be first
upon the consequences of these actions for specific types of stakeholders and then upon the patterns
of Moral Intensity (Jones, 1991) surrounding the actions and consequences.
SOCIETAL CONTEXT
The societal context provides both concrete elements and events and the intellectual assumptions and
outlooks within which issues like academic dishonesty and preventive technologies are considered. In
the developed societies, this context is one of rapid change and increasing demand for skills and
education. As a society, the US presents a particularly complex case. A nation of immigrants with still
increasing cultural diversity, it affords an unusual variety of outlooks on life, ethics, education, and even
technology. However, the US has a long history of valuing individual effort and allowing great social
mobility. Educational attainment is not only a driver of wealth; the increased social status
derived from wealth also increases the desire for both cultural amenities and formal educational credentials.
There are new personal aspirations and social demands. Patterns of explanation and attribution have
become immensely complicated. Causal textures, if they have any objective meanings, may be
bidirectional and across levels of social organization.
Family expectations, consumerism, a general belief in both individual enterprise and now globalized
capitalism are among many factors placing tremendous pressure on the contemporary student to
succeed, perhaps at any cost. Parents, schools, and society in general send mixed messages. It
becomes unclear whether the offense is dishonesty or just getting caught. At the same time, there
appears to be more tolerance of cheating. Constant pressure and delayed gratification have their
impacts. Yes, they can indeed strengthen character, especially patience and hope. At the same time,
they can produce a certain sense of unreality, giving life a certain dreamy or even game-like quality as
an unending series of real time moves - perhaps like a video game. Stressed students often complain
that coursework is a guessing game, or mind reading. They may not be so far off base.
Why is education like a strategic game for many students? It is because the outcome for a player is
dependent not only on that player’s actions but on those of other players. It may also be partly
dependent on unforeseen, even random factors, or on luck. In the context of academic dishonesty, the
analogy of poker seems especially suitable. First, there are real stakes: present or future monies or
other amenities. Second, a very good single player beats most single cheaters, just as diligent and
bright students exceed those who depend on cheating. However, collusive cheating is much more
effective. Collusive cheating involving the dealer is deadly, and may be almost impossible to detect; this
would be analogous to involvement of an academic employee as an accomplice. Also, in poker bluffing
is not considered cheating; it is part of the game. Viewing education as poker is a problem, with social
consequences. Not quite so obvious, however, is that the destructiveness is made worse because most
student offenders (and many school officials) are bad players. There is an appropriate old poker adage:
“a fool bluffs on anything, a sucker calls on anything.” Some cheaters try to bluff an exam, bluff when
confronted, and actually “call” by commencing a risky legal proceeding. Some of course survive
because institutional authorities back down.
Attitudes toward technology may be as diverse as those toward ethics. However, for the most part, the
current generation of students is savvy in using information technology and the new consumer
technologies. They have used them as learning aids. They have utilized and helped drive three of the
most important trends in personal tech: wireless, miniaturization, and convergence. Indeed, these tech
“toys” in general (not just video games) have also been a source of comfort – momentary relief from the
intense pressures for performance while growing up in a confusing world. It is not hard to understand
that students would use them to alleviate perceived academic dangers. Consider a smart phone, with
integrated zoom digital camera, voice recorder, and Bluetooth – easily concealed and paired with micro
ear buds. It would be a powerful tool, opening opportunities for academic cheating; it would greatly
facilitate communication – especially asynchronous communication - with confederates. Instructors and
institutions may employ equally robust counter technologies. Educational technology as well as
consumer technology is in rapid change, and many tools for the mitigation of cheating are still
relatively new technologies, for which implementation is the technology (Weick, 1995). Hence, analysis
must focus on the impacts upon the new technology as well as possible early impacts from it; causal
attributions may validly be bidirectional. The term “new” is best understood as relative to the user.
Offending students may be more savvy in some ways than the instructors or officials monitoring them;
they may in fact be shaping and creating the real new technology, the one forming in practice.
ACADEMIC DISHONESTY
Research on identifying causal factors (personal, social, demographic, and institutional) continues but
thus far has produced mixed and sometimes conflicting results. Dowd (1992) concluded from his review
of the literature and from surveys taken at the Lincoln Land Community College (LLC) in Springfield
Illinois that students feel stress in the academic environment and that stress may cause them to act
improperly. Dominant influences were peer pressure and improper management of the
classroom/examination environment, e.g. close proximity to other test takers and large class sizes.
Students reporting poor study conditions, such as those that limited their study time, were more likely to
cheat. McCabe and Trevino (1997) found that "peer-related contextual factors" had the most influence
on whether a student would commit an act of academic dishonesty. The research on gender as a
discriminator for cheating has yielded mixed results and may necessitate investigation of secondary
gender-related factors (Crown & Spiller, 1998; McCabe, et. al., 2006; Ruegger & King, 1992). Being
male and/or younger than 24 years of age were characteristics associated with greater involvement in
academic misconduct (Ercegovac, 2004). On the other hand, biological age, social class, and work
status had no effect in the study by Pino and Smith (2003). Interestingly, those authors found that
students who watched television and engaged in student clubs or groups were more likely to cheat,
dramatically illustrating societal and technological influences on student behavior.
Whatever the influencing variables, most research suggests that cheaters are generally less mature,
less reactive to observed cheating, less deterred by social stigma and guilt, less personally invested in
their education, and more likely to be receiving scholarships yet performing poorly (Diekhoff et al.,
1996). Not surprisingly cheaters tend to shun accountability for their actions and blame their parents
and teachers for widespread cheating, citing increased pressure on them to perform well (Greene &
Saxe, 1992). Students are likewise more apt to attribute their dishonest or unethical behavior to external
influences and rationalizations for which they cannot be held accountable; in this they follow the
“fundamental attribution error” (Kelley, 1967), attributing their own failings to adversity, while attributing
the failings of others to character flaws. Worse yet, society as a whole has become increasingly more
tolerant and even accepting of the practice of cheating, often citing the need to survive in today’s
competitive environment as justification for that shift in attitude (Slobogin, 2002; Vos Savant, 2006).
Such students are also more apt to threaten lawsuits in the belief that the university will ultimately back down.
Clearly these factors imply that it will require more thought, time and energy to maintain academic
integrity in today’s academic environment.
The literature is largely consistent on one aspect, as reiterated in investigations by Hardy-Cox (2003):
cheating is not simply a student issue but one shared by the institution and community/society. Dowd
(1992) concluded that to encourage academic integrity the academic institution must establish itself as
a role model for proper behavior, and faculty and institutions must educate students on why not to cheat
and demand no less. Additionally, policies empower both instructors and students and consequently
crafting and enforcing them must be a collaborative effort including administration and institutional
leadership. Likewise, environmental influences on dishonest behavior must be minimized, integrity must
be stressed, and administration’s continuous support is essential. Several studies indicate universities
that have implemented a student honor code have experienced lower rates of cheating among their
students (McCabe, 2005; McCabe, et. al., 1993). Some institutions adopt hybrid approaches and
strategies with significant technology-based tools as key policing and detection elements (Kitahara and
Westfall, 2007), while starting the long-term tasks of building ethical academic culture. In a more
proactive and integrity-building manner, many have adopted honor code based systems with
participation and commitment by students, instructors and administration in the development and
implementation of strong, formally-derived academic standards of conduct and honor codes with the full
realization that these efforts to build a culture of honesty will likely require a good deal of time (Kitahara
and Westfall, 2008).
ETHICAL ANALYSIS
Ethics apparently evolved in response to two shortages: that of physical goods and amenities and also
of human sympathy for others. Lacking such sympathies, people began to harm each other,
intentionally or not. In response, society developed norms, mores, and formal ethical systems to create
clear expectations and make behaviors more predictable. Ethics deals with values, obligations, and the
relationships between them. It is productive to begin ethical analysis – and policy considerations – with
a thoughtful rather than merely moralistic tone. First, issues of academic dishonesty are very complex,
demanding sober rumination. Also, these are not pure ethical issues for either the offender or the
educational institution; rather, they are actually management issues, combining both practical matters
of fact with premises of value (Simon, 1945). An individual student is managing her or his efforts, time,
and attention within a personal goal structure and set of constraints, just as an educational institution is
managing its resources and processes. The very nature of the issue suggests Utilitarian ethics as the
immediate, dominant, relevant framework. Other schools of thought - Virtue, Justice, Rights, or
Deontological approaches - do indeed have significant input, primarily to provide limits and constraints in
specific areas of activity. The immediate concern is with achieving desirable consequences and
avoiding undesirable ones. Moral rationality for institutions here lies mainly in maintaining consistent
outcome preferences and incorporating them into goals.
However, there is no universal consensus on desirability or on exact standards for evaluating
consequences, and many students seem to have a fuzzy understanding of many integrity issues,
especially in regard to plagiarism. Without some clear consensus, it will be hard to develop a strong
sense of Moral Intensity (Jones, 1991) – an important concept to be discussed shortly. Also, it is usually
beyond human rationality to correctly predict all consequences or even their probabilities. Unless the
situation becomes clouded with factors such as personal animosity, academic dishonesty basically
involves deliberately false representations and potential denials, rather than deliberate harm to another
or oneself. This gives the issues a special character. The negative consequences of academic
dishonesty might include inappropriately appropriated gains to the cheater, undeserved ill treatment to
the honest student, the social consequences of later job related incompetence, the distrust among
peers, parents, or others, the reputation damage to the academic institution, etc. All of these are
immediately recognizable as consequences, or they can be easily rephrased as such. But if he
perceives these stakeholders as competitors or as hostile, the student may ask how much of the truth
he actually owes them, or how much concern he should have for them. In this sense the ethics of
modern academe are very similar to that of business; concerns arise and are articulated as ethical
when one party senses actual loss, potential loss, or lost opportunities. Further, much if not most
academic dishonesty involves sins of omission rather than commission. Omissions of proper citation
(plagiarism) are particularly common. There is also the equally significant but less visible negligent
omission of appropriate mindfulness by the student, teacher, or administrator. Such negligent omissions
are generally unintentional; this lowers the apparent moral culpability of the actor, raises the burden of
proof, and discourages immediate imposition of a legalistic framework.
Why is Cheating Wrong?
From this starting point, consider why society (or at least part of society) believes that cheating is wrong.
Intuitively, the failure to convey good reasons to the students would seem to be a strong causal factor in
the problem. Surely there is much diversity of opinion, but are the critics of academic dishonesty
themselves unclear? Consider five explanations. These are essentially Utilitarian, but they need not
depend on any specific school of ethical thought, and most people are somewhat familiar with them.
They all appear to have validity for this discussion.
It is often asserted that common sense is the final practical test of an ethical position or action. Certainly
the almost universal presence of Double Effect in our actions suggests this. Virtually every school of
thought will somehow attempt to draw on it. The criticism here is that common sense is not only
hard to define formally, but it is not really very common. It is so often used as a purr word or growl word
in ordinary speaking.
Academic dishonesty is an offense against truth. We may be taught to honor truth for many reasons,
with or without belief in a Supreme Being. Some would argue that a preference for truth is hard wired
within us, and that our well being, self esteem, and even our health, suffer as we depart from it. If this is
so, then dishonesty is an offense against ourselves. However, this may be a little too philosophical for
some.
Dishonesty would not pass the Disclosure Principle of ethics. This admonishes the actor to consider the
reaction if his decisions and actions were known publicly. This idea is related to the argument from
truth, but adds the emotional and concrete reactions of others to the area of concern. Here, actions
cannot be kept secret, and there can be no secret consequences. It is a powerful preventive principle.
Cheating breaks an implied – and sometimes explicit - social contract. This is true even when there is
no possibility of legal enforcement. There may be psychological hurt to others. There may also be the
loss of others’ respect. Clearly this is a more specific explanation, but even here there will be
disagreements. Defensive offenders will say no contract existed.
Dishonesty violates the ethics of fair competition; this is often cited, e.g., Knight (1923). It is a major
belief of US and many other cultures. But even in the US attitudes toward competition are diverse, and
there is some disagreement about what those rules actually are, or should be.
Finally, academic dishonesty is actually a violation of professional ethics. The argument is that most
students attend school for vocational gain, usually career growth as a professional. Further, this applies
not just to doctors, lawyers, accountants, etc., but also to professional occupations that are unlicensed, even to self-employment. A student who cheats in professional preparation will later be an incompetent
and probably dishonest professional.
Moral Intensity
Three Utilitarian models are central to contemporary discussion of business ethics: the stakeholder
approach, with many advocates (e.g., Mitchell, et. al., 1997), the Jones (1991) Issue Contingent Model,
and Integrative Social Contracts Theory (Donaldson & Dunfee, 1994). Each contributes to our
understanding of academic integrity, and the three work well together. Applying these models together
yields some interesting insights. First, the superiority of building a culture of virtue rather than policing
becomes very clear, as it would galvanize stakeholders and perhaps add new ones. By raising the
consensus regarding academic integrity, ethical culture would also increase the probability of negative
consequences to the perpetrator. There would be greater social disapproval as a consequence, and
likely more immediate consequences, and possibly more upper level leader support.
The school or college must cope with these complexities, while retaining its effectiveness and
adherence to the values and norms of its own organization culture. Thus more issues demanding
ethical decisions arise on the side of the institution than on the side of the academic offender. This very
difficulty is part of the explanation why faculty and administrators are so sluggish in acting. The
discussion will treat first the Jones (1991) Issue Contingent Model and its importance for academic
integrity. Next the issues will be discussed first with regard to the offender(s), and then from the
vantage of the educational institution and society. The impacts upon and probable reactions of
stakeholders are cardinal elements.
Ethical issues are not truly quantitative, but they are not without degrees of comparison. A very
important tool for making such comparisons is Jones’ (1991) Issue Contingent Model, which introduced
the concept of Moral Intensity. It is useful in analysis of Double Effect and general stakeholder
considerations. This model, shown as Figure 1, provides crude surrogates for both the degree to which
values are generally violated in a situation as well as the sense of obligation of a focal actor to
deliberate or to act. However, this model deals with issues and outcomes, and does not prescribe
specific values or ethical principles. Having recognized an ethical problem, managers then pass
through the stages of moral judgment, intentions to act, and finally actions (Rest, 1986). The greater the
Moral Intensity (Jones, 1991) of an issue or problem, the more likely an individual will continue through
the steps of this process, proceeding toward intentional moral behaviors. The moral intensity concept
encompasses six measurable dimensions: magnitude of effect, probability of effect, concentration of
effect on a specific group(s), social consensus on the issues, immediacy in time, and proximity in
space. Many aspects of organizational context may act to facilitate behaviors or impede intentions from
becoming behaviors (Jones, 1991). High magnitude of effect and strong social consensus regarding an
issue greatly facilitate initial moral awareness in competitive business environments, providing that the
discussion contains explicit moral language (Butterfield, et. al., 2000).
Figure 1: Issue Contingent Model (Jones, 1991)
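By way of illustration only, the sketch below shows one way the six dimensions could be recorded and combined into a single rating. Jones (1991) defines moral intensity conceptually and prescribes no numeric formula, so the 0-1 ratings, the equal weighting, and the MoralIntensity class introduced here are assumptions.

from dataclasses import dataclass, astuple

@dataclass
class MoralIntensity:
    magnitude_of_effect: float      # size of the harm or benefit
    probability_of_effect: float    # likelihood the effect actually occurs
    concentration_of_effect: float  # how focused the effect is on a particular group
    social_consensus: float         # degree of agreement that the act is wrong
    temporal_immediacy: float       # how soon the consequences follow
    proximity: float                # social/physical closeness to those affected

    def composite(self) -> float:
        # An assumed aggregation: the unweighted mean of the six ratings.
        values = astuple(self)
        return sum(values) / len(values)

# A hypothetical plagiarism case as a committee might rate it:
case = MoralIntensity(0.6, 0.8, 0.5, 0.4, 0.7, 0.3)
print(f"composite moral intensity = {case.composite():.2f}")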
Individuals' philosophical differences also interact with the characteristics of the issue, as defined by
moral intensity. "Utilitarians" become aware of fewer moral issues than "Formalists" (Reynolds, 2006),
although everyone seems to attach moral significance to unjustified and clear-cut instances of physical
or financial harm, i.e., to serious consequences. Reynold's utilitarians seem to have less moral
awareness generally, and especially in issues like cheating, that may appear victimless. Other research
appears to indicate that persons who rely heavily on numbers oriented paradigms may also slowly
become less morally aware. For instance, partners in CPA firms generally showed lower moral
reasoning than lower level employees (Ponemon, 1992); accountants and accounting students similarly
scored lower than comparison groups from other fields (Lampe & Finn, 1992).
Integrative Social Contracts Theory naturally links ethics to law and social pressure. The theory would
hold that cheating violates an implied or explicit “contract” at the local or micro level of students and
schools, the level of contract administration and adjudication. Academic contracts derive their
legitimacy and ultimate enforceability from adherence to “hypernorms” prescribed by some larger social
entity(s) at the macro level, which might include society and the state and federal levels of government.
Note that public bodies are important third party payers for education services. Industry also values the
benefits from education. Stakeholders at the macro level have not, however, provided the constraint
and detailed guidance needed to inhibit academic misconduct.
The Offender
In his classic Lies and Truth, the psychiatrist Marcel Eck (1971) emphasizes that a system of personal
loyalties – to family, clan, faith, school, etc. - is developed in childhood well before the child can achieve
any sophistication at discerning verisimilitude to reality. This appears to be true in all cultures. Almost all
individuals develop these loyalties early, while many people may never develop keen faculties for
practical or moral reasoning. Loyalty demands greater affection, greater truthfulness, and a greater
sense of protectiveness toward the targets of devotion. It may also involve baser attitudes towards
people or things outside the circle. There is of course a paradoxical friction here: the bondings from
early loyalties and dependencies often eventually lead to more “white lies” told to family members,
close friends, etc. than to outsiders. At a higher level, information screens are often tighter within
organizations than between them.
There are many ramifications to this pattern of human development, which affects not only action and
speech, but also perception itself. The fundamental attribution error (Kelley, 1967) is very common.
Similarly, negative or positive impacts to those in the circle of loyalty are more easily perceived than
consequences to strangers; there is a psychological homologue to Jones’ (1991) dimension of physical
proximity. In ethical consideration of their own activity, individuals simply may not see many of the
perfectly valid stakeholders impacted by their actions. On the other hand, the actions of those very
stakeholders impacted may have a high Moral Intensity for them.
One immediate ramification concerns application of the Principle of Double Effect. Since almost all
serious or complex actions will have both good (desirable) and bad (undesirable) consequences, an
accurate assessment and comparison of effects requires that they be anticipated before the action.
When people, because of the pattern of their personal loyalties, omit stakeholders, they also omit many
consequences from the comparison. This applies to good as well as bad consequences. Double Effect
analysis becomes distorted, probably misleading, and possibly useless. In particular, the search for
better alternative actions is severely truncated.
Not many educational or learning entities can command truly close personal loyalty of this kind: military
academies, quality circles, and religious communities, perhaps a few others. Peer pressure on the other
hand is present in many situations and is often very powerful. Indeed, in the academies, circles, and
communities just mentioned, peer pressure is likely to be particularly high. Students situated there are
also more likely to perceive their fellows as valid stakeholders in their actions. Perhaps even in these
venues camaraderie (and peer pressure) can supersede moral awareness as the governing dynamic.
However, there is also self-selection. The idea is ancient. Observing that people are social as well as
rational creatures, Aristotle emphasized that they universally tend toward association with those at a
similar level of virtue (Ross, 1984) – moral and intellectual – whether in business or social life. The
dishonest deal with the dishonest in perpetually shifting alliances, and honorable people congregate
together in long lasting bonds of mutual profit and friendship. Each group is uncomfortable with the
other; personal misrepresentation of one’s character is unsustainable over the long term. Since
Aristotle’s virtues are to be developed and increased through practice, the honorable group has an
apparent self-sustaining pattern.
Most students, and especially those who are dishonest, do not recognize the complex impacts that
academic dishonesty has on a variety of stakeholders. Table 1 provides a preliminary summary listing
of the major impacts. While some students may indeed be morally corrupt, most do not see the impacts
because their mindset is fixed in what Diesing (1962) would have called “economic rationality.”
Educational institutions, by contrast, are fixed on “technical rationality,” in which cheating is a very
disruptive element in the process of assessing and controlling learning. Later these
organizations may be required to exercise a difficult “legal rationality,” in the wake of student offenses.
Table 1: Stakeholder Impacts from Academic Dishonesty
The Educational Institution and Society
Institutions of learning are cultural guardians who nurture and perpetuate both existing and new
knowledge. Their clients may be considered to be students, their parents, the government, and society
at large. Future employers of the students are also often said to be stakeholders. Schools and colleges
are functioning, productive organizations as well. Although their products are hard to define, these
institutions must operate on budgets, abide by the laws, and fulfill reporting requirements. In addition to
these “crisp” requirements, they must fulfill innumerable “softer” tasks in maintaining their societal
legitimacy. The tasks are many, multifaceted, hard, and usually ambiguous. Academic dishonesty
disrupts these tasks. Institutions must be vocal, fair, and proactive in confronting it. Yet the actions a
school will take will themselves have significant impacts. Table 2 provides a preliminary summary listing
of the major impacts upon the most important stakeholders.
Table 2: Stakeholder impacts from academic discipline.
Academic governing committees may employ measures beyond those dictated by purely academic
concerns and policy violation as they resolve accusations of academic dishonesty. They must inquire if
the University’s Academic Policies and Procedures - to which all incoming students must agree - are
truly effective. They must determine the role for technology in resolving such cases. In particular, they
must determine what level of evidence, i.e., burden of proof, is necessary. Applying courtroom
standards to such cases marks a retreat from the historic independence of universities. It requires that
colleges publish policies and procedures for dealing with cases of cheating to protect students’
Fourteenth Amendment rights to due process (SMU Office of Student Conduct and Community
Standards, 2008). Currently, the consequences for a student caught cheating are often grossly
disproportionate to the costs of policing, preventing and adjudicating cases of academic dishonesty. In
enforcement, all too often the criterion becomes what can be proven beyond a shadow of a doubt in
a court of law. Logic and common sense are important ethical considerations, but they don’t always
carry full weight in such proceedings.
Sometimes academic dishonesty involves collusion, which raises the complexity immensely for the
school. Here the analogy to collusive cheating in poker is particularly cogent. Consider this continuum.
Collusion may be entirely unconscious by one inept party, as they inadvertently tip their hands to an
unscrupulous second party, hurting themselves and third parties. The inept player deserves scolding,
warning, and instruction – not immediate ostracism from the game. Slightly more culpable is the player
who simply ignores an individual cheater. Allowing a losing cheater to stay in the game may be fitting
punishment in poker, but the consequences to other stakeholders forbid it in academe. Ignoring
consciously collusive cheating is more serious, since collusive cheaters are usually not poor players.
Most culpable are the collusive cheaters themselves. Their tactics and sophistication vary. There may
be more than two of them at any moment. When done well, collusive cheating may be undetectable, or
at least unprovable. The only solution is expulsion from the game. However, these cheaters could
become dangerous, later if not sooner. The honest, adept player will normally act accordingly, avoiding
situations of suspected collusive cheating altogether. (By analogy, would an honest student leave for
another school?) At present there seems little alternative but to expel, publicize, and refuse readmission
to those involved in cases of conspiratorial cheating.
TECHNOLOGICAL APPROACHES
Present systems range from robust course management systems to more advanced techniques that
provide scoring metrics predictive of cheating behavior. Many incorporate technologies to control the
classroom environment and hardware and/or capture physiological biometric characteristics to identify
students and monitor examination performance. They employ straightforward strategies using
somewhat conventional technical mechanisms. Again, these “first generation” policing and prevention
approaches involve new technologies, and may not directly increase actual virtues of academic culture
beyond their impact as symbols of change. For the most part, these technologies have not been disruptive or
caused organization restructuring or power shifts, except perhaps among the information technology
groups. The flow of causal influence is thus from the organization to the new technology, as it is
implemented (Weick, 1995). Interesting research and development efforts are now focused on the next
generation of tools and techniques.
Consider now, in decreasing order of intrusiveness, five technologies commonly in use to monitor
examinations and submitted work. All of these technologies are supported by
intelligent software. Each is installed by executive fiat or legislative mandate, but each is also event
driven, activated by input from the student. These monitor the physical inputs – the students
themselves. For examining these technologies, the most useful of the classical frameworks would
appear to be that of Perrow (1971). Only the oldest, the fingerprint scanner, seems to have been in use
long enough that it is not defined by the manner of its implementation (Weick, 1995). The
fingerprint scanner and the lesser-used facial recognition technology can identify the individual actually
taking an examination with great accuracy. Impersonating another student is clearly fraudulent,
sometimes criminal. However, faces seem to be more diverse and difficult to study than fingerprints, so
that task variability (Perrow, 1971) is higher for facial recognition while task analyzability is lower.
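The contrast can be sketched in code. The fragment below is purely illustrative: the feature vectors, thresholds, and function names are assumptions introduced here, and real biometric systems use far more sophisticated matching than a single similarity test.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(enrolled_template, live_sample, threshold):
    # Accept the identity claim only if the live sample is close enough
    # to the enrolled template.
    return cosine_similarity(enrolled_template, live_sample) >= threshold

# Hypothetical feature vectors. A face system (more within-person variation,
# i.e. higher task variability) would typically need a looser threshold than a
# fingerprint system. With these made-up numbers the strict check fails and
# the looser one passes.
enrolled = [0.12, 0.80, 0.35, 0.44]
live     = [0.10, 0.70, 0.45, 0.38]
print("strict, fingerprint-style threshold:", verify(enrolled, live, threshold=0.99))
print("looser, face-style threshold:       ", verify(enrolled, live, threshold=0.95))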
Omni-directional antennae can identify unusual activity around the student during an examination –
that is, during the process. The task variability and task analyzability for this technology would both
seem to be high. One commonly used tool, the Remote Proctor™, combines technologies, employing a
360-degree camera, omni-directional microphone, fingerprint scanning and intelligent software.
ProctorU provides a live-proctoring alternative using web-camera based services. These systems
operate during, not just prior to, testing, and their output requires inspection and judgment. This makes
them the only ones of the five technologies that create the possibility for corruption of the human monitor,
analogous to the dealer in poker.
The Respondus LockDown Browser also works during the process, denying students access to unapproved
websites and materials during the examination. This relatively straightforward preventive technology
would have a low task variability and presumably high task analyzability.
Plagiarism typically tops the list of the most common ways in which students cheat (Konnath, 2010);
Turnitin and similar software systems have proven to be effective tools against plagiarism. These
monitor the output: the finished paper or essay. Anti-plagiarism systems have been among the most effective means of
combating academic dishonesty and have survived privacy and legal challenges. They are less intrusive to
the individual, often working seamlessly and invisibly; they are the most generally accepted technology.
However, the very nature of the inputs implies high task variability. Task analyzability is tricky, since
ideas go beyond words, and the system can only compare words and character strings.
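This limitation is easy to see in a toy string-matching sketch like the one below. It is not a description of how Turnitin or any commercial system works internally; the n-gram size, the overlap score, and the sample strings are assumptions chosen only to show that such comparisons operate on word sequences rather than on ideas.

def word_ngrams(text, n=5):
    # Break the text into overlapping n-word sequences.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    # Share of the submission's n-grams that also appear in the source.
    sub, src = word_ngrams(submission, n), word_ngrams(source, n)
    return len(sub & src) / len(sub) if sub else 0.0

paper  = "ethics deals with values obligations and the relationships between them in daily life"
source = "ethics deals with values obligations and the relationships between them"
print(f"{overlap_score(paper, source):.0%} of the submission's 5-grams match the source")
# A close paraphrase of the same idea would score near 0% with this approach.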
The above technologies do not directly influence stakeholders. While they may be formidable
prevention/policing tools with some deterrent effect, they have little impact on the Jones (1991) Issue
Contingent Model dimensions. For the most part they increase the probability and immediacy of
detection, not necessarily the probability of consequences - except of course for the very raising of the
issue at all as a form of consequence. Only web-cam based human proctoring provides a clear, real
time source of testimonial evidence, so important in both academic and legal proceedings. Providing
more immediate alarm systems from the other methods to a live proctor could, of course, strengthen this
facet. Also, the technologies could possibly be used to add a new category of consequences as a
deterrent. By more judiciously providing signals to suspected perpetrators during the suspected
offenses, the technologies might cause some undesired behaviors to cease. Technical information
provided by these hardware/software solutions finds its way to academic administrators slowly. This
information typically reaches societal-level decision makers and stakeholders as aggregated report data
after passing through many hands. Thus support from this macro level will be slow.
DISCUSSION
It appears that standard policing and prevention strategies are largely ineffective in curbing the upward
trend of cheating in academia (Academy of Management panel, 2009). Technological solutions are
inherently limited and are likely to serve only as stopgap measures. Students inclined to cheat will
always find a way to do so once the mitigation strategy is known and they gain experience with the
measures implemented. Present reactionary approaches to mitigation of academic dishonesty seem to
lack penalties/consequences with sufficient deterrent capability. The “cost exchange ratio”, i.e. relative
costs to the student compared to the relative costs to the institution, is currently in students’ favor.
Some institutions have turned to much more significant penalties such as permanent notations on
“official” and publicly releasable transcripts, but the effectiveness of this strategy in deterring would-be
cheaters has yet to be determined. Many institutions place a large emphasis on policing, detection and
punishment approaches complemented by education of students on what constitutes cheating and
emphasizing honesty and personal integrity. In the long term the prevailing wisdom is that the problem
must be addressed and solved at the societal level, a responsibility shared by students, instructors,
institutions and all other stakeholders in the education process. Students are most strongly influenced by their peers
and by the values of the local and general societies within which they function. Implementation of
a virtues approach will require time to turn the tide on the present trend towards a “culture of cheating”
– one that seems to be more tolerant of dishonest practices in almost every aspect of daily life. These
latter issues require further investigation. It is appropriate to introduce and discuss the subject of ethics
and ethical behavior.
McNamara (1997) concluded that the problem of ethics is extremely complex and most approaches to
managing business (and personal) ethics have been far too simplistic and largely distracted by
fundamental myths. Business ethics is now in fact a core discipline in 90% of academic business
curricula. The myth that the administration of ethics is straightforward must be replaced with the
realization that ethics is extremely complex, driven by many conflicting value interests and is prone to
large “areas of gray” when applying ethical principles. The myth that ethics is a superfluous concern
since people want to “do good” must be replaced with a well-established formal code of ethics and
corresponding codes of conduct - living documents that change with the needs of society and the
organization. The myth that ethics cannot be managed must be replaced with adherence to established
laws, regulations and rules that account for the interests of all stakeholders, i.e. “the common good.”
Indeed establishing a culture of honesty within an organization requires commitment to established
norms. Freeman, et. al. (2009) likewise dispute common myths about human behavior in organizations:
that human beings are always driven by self-interest, that economic models (in business) can explain most
behavior, and that ethics is focused on altruistic concerns. The more appropriate perspective is that in our
capitalistic society all stakeholders cooperate to create value for each other, business is about purpose,
money and profits will follow, and fundamentally people tell the truth, keep their promises, and act
responsibly most of the time.
If students must be educated to even see the stakeholders harmed in academic dishonesty, schools
must be particularly careful in systematic stakeholder analysis. Like any other kind of organization
facing any strategic issue, the commitment must be shown conspicuously from the top. This begins by
carefully identifying and then prioritizing the stakeholders by salience. Dealings toward all must
meticulously meet the “moral minimum.” That is, actions must follow respected principles, conduct must
consistently follow value preferences articulated in clear goals, and procedures and judgments must be
visibly and scrupulously impartial. As discussed above, these principles should come first from Utilitarian
ethics, conditioned by Justice and Rights doctrine. Academic institutions must be careful to consider the
Principle of Double Effect in their actions. Disciplines should match the severity of the “crime.” However,
the requirements of fairness, reasonableness, and visible impartiality may require that the system
sometimes be lenient, sadly allowing some offenders to slip through the cracks.
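As a sketch of the stakeholder-prioritization step described above, the fragment below scores and ranks stakeholders loosely in the spirit of Mitchell, Agle and Wood (1997), for whom salience rises with power, legitimacy and urgency. The stakeholder list, the 0-1 ratings, and the simple additive scoring rule are assumptions for illustration; the cited authors describe salience categories rather than arithmetic.

# name: (power, legitimacy, urgency), each rated 0-1 by the institution (hypothetical values)
stakeholders = {
    "honest students":    (0.4, 1.0, 0.8),
    "accused student":    (0.3, 0.9, 1.0),
    "faculty":            (0.7, 0.9, 0.6),
    "accrediting bodies": (0.9, 1.0, 0.3),
    "future employers":   (0.5, 0.8, 0.2),
}

def salience(attributes):
    # Assumed additive score; higher means the stakeholder warrants earlier attention.
    return sum(attributes)

ranked = sorted(stakeholders.items(), key=lambda item: salience(item[1]), reverse=True)
for name, attributes in ranked:
    print(f"{name:<20s} salience = {salience(attributes):.1f}")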
REFERENCES
Academy of Management Panel (2009). Ethics in Higher Education: The Cheating and Plagiarism
Challenge, Joint SBE/SIM Workshop, Annual Meeting of the Academy of Management, Chicago,
Illinois, August 8.
Argetsinger, A. (2003). U-Maryland Says Students Use Phone to Cheat – Text Messaging Delivers Test
Answers. Retrieved on June 15, 2006 from the Washington Post website.
Butterfield, K. D., Trevino, L. K. & Weaver, G. R. (2000) Moral Awareness in Business Organizations:
Influences of Issue-related and Social Context Factors. Human Relations, 53: 981-1018.
Conradson, S., Hernandez-Ramos, P. (2004). Computers, the Internet, and Cheating Among
Secondary School Students; Some Implications for Educators. Retrieved on June 15, 2006 from
Santa Clara University website.
Diekhoff, G., LaBeff, E. Clark, R., Williams, L., Francis, B., Haines V. (1996). College Cheating: Ten
Years Later. Research in Higher Education, Vol. 37, No. 4, pp 487-502.
Diesing, P. (1962) Reason in society: Five types of decisions and their social conditions. Chicago:
University of Illinois Press.
Donaldson, T, & Dunfee, T. W. 1994. Toward a unified conception of business ethics: Integrative social
contracts theory. Academy of Management Review. 19(2):252-284.
Dowd, S.B. (1992). Academic Integrity – A Review and Case Study, University of Alabama –
Birmingham report EDS349060, August, 27pp.
Eck, M. (1971) Lies and truth. New York: McMillan.
Eckstein, M. (2003). Combating academic fraud – Towards a culture of integrity. International Institute
for Education Planning, UNESCO booklet 2003, ISN92-803-124103. Retrieved from
http://unesdoc.unesco.org/images/0013/001330/133038e.pdf
Educational Testing Center (n.d.) Academic Cheating Fact Sheet, ETS Ad Council Campaign to
Discourage Academic Cheating. Retrieved December 1, 2009 from http://www.glasscastle.com/clients/www-nocheating-org/adcouncil/research/cheatingfactsheet.html
Ercegovac, Z. and Richardson, J.V. (2004) Academic Dishonesty, Plagiarism Included, in the Digital
Age: A Literature Review. College and Research Libraries v. 65 no 4 (July 2004): 301-318.
Retrieved December 1, 2009 from http://www.baruch.cuny.edu/facultyhandbook/documents/PlagiarismLiteratureReview.pdf
Freeman, R.E., Stewart, L., Moriarty B. (2009). Teaching Business Ethics in the Age of Madoff,
Change: The Magazine of Higher Learning, 41(6): 37-42.
Greene, A.S., Saxe, L. (1992). Everybody (Else) Does It: Academic Cheating Education. Retrieved on
June 15, 2006 from Resources Information Center website.
Hardy-Cox, D. (2003). Academic integrity: Cheating 101 - a literature review. The News About Teaching
and Learning at Memorial, Newsletter of the Instructional Development Office, Volume 6 Number 2,
Winter. Retrieved December 1, 2009 from http://www.distance.mun.ca/faculty/news_winter_2003.pdf
Jones, T. M. (1991) Ethical Decision making by Individuals in Organizations: An issue-contingent
model. Academy of Management Review, 16:366-395
Kelley, H. H. (1967) Attribution Theory in Social Psychology, in D. Levine (ed.), Nebraska Symposium
on Motivation, 15: 192-240, Nebraska: University of Nebraska Press.
Kitahara, R.T., Westfall, F. (2007). A Multi-Faceted, Technology-Rich Approach to Academic Honesty
for Distance Learning Courses, International Academy of Business Disciplines, Proceedings of the
19th IABD Annual Meeting - Leadership in a Turbulent Global Environment, Orlando, Florida,
March 29 – April 1.
Kitahara, R.T., Westfall, F. (2008). The Identification and Mitigation of Academic Dishonesty – A
Problem on a Global Scale, Review of Higher Education and Self-Learning, Intellectbase
International Consortium, July.
Kitahara, R.T., Mankelwicz, J. M. & Westfall, F. (2011). Addressing Academic Integrity with New
Technology: Can It be Done? Journal of Information Systems Technology and Planning. 4(9): 31-48.
Knight, F. (1923) The ethics of competition. The Quarterly Journal of Economics. 37: 579-624.
Konnath, H., (2010), Academic dishonesty cases increase at UNL, better detection is to credit,
DailyNebraskan.com, Online, retrieved March 2011 from http://www.dailynebraskan.com/news/academic-dishonesty-cases-increase-at-unl-better-detection-is-to-credit-1.2421099
Lampe, J. & Finn, D. (1992) A Model of Auditors' ethical decision process. Auditing: A Journal of
Practice and Theory. Supplement, 1-21.
McCabe, D., Klebe Trevino, L. (1997). Individual and Contextual Influences on Academic Dishonesty: A
Multicampus Investigation. Research in Higher Education, Vol. 38, no. 3, pp. 379-396.
McCabe, D. K. Trevino, L. & Butterfield, K. (2001a), Cheating in Academic Institutions: A Decade of
Research, Ethics and Behavior 11(3), 219-232.
McCabe, D. L., Trevino, L. K., & Butterfield, K. D. (2001b). Dishonesty in academic environments: The
influence of peer reporting requirements. The Journal of Higher Education, 72(1), 29-45.
McCabe, C., Ingram, R., Conway Dato-on, M. (2006). The Business of Ethics and Gender. Journal of
Business Ethics; Vol. 64, No. 2, pp 101-116.
McCabe, D. (2005); Levels of Cheating and Plagiarism Remain High, Honor Codes and Modified Codes
are Shown To Be Effective in Reducing Academic Misconduct. Retrieved on June15, 2006 from
Center for Academic Integrity, Duke University website.
McCabe, D. & Klebe Trevino, L. (1993). Academic Dishonesty: Honor Codes and Other Contextual
Influences. Journal of Higher Education, Vol. 64.
McNamara, C. (1997). Complete Guide to Ethics Management: An Ethics Toolkit for Managers, Free
Management Library. Retrieved December 2, 2009 from http://managementhelp.org/ethics/ethxgde.htm
Mitchell, R. K., Agle, B. A. & Wood, D. J. 1997 Toward a theory of stakeholder identification and
salience: defining the principle of who and what really counts. Academy of Management Review,
22:853-886.
Olt, M. (2002), Ethics and Distance Education: Strategies for Minimizing Academic Dishonesty in Online
Assessment, Online Journal of Distance Learning Administration, 5(3).
Perrow, C. (1970) Organizational analysis: A sociological view. Belmont, CA: Wadsworth.
Pino, N.W. & Smith, W.L. (2003). College Students and Academic Dishonesty, College Student Journal,
December.
Ponemon, L. (1992) Ethical Reasoning and selection-Association in Accounting. Accounting,
Organizations, and Society, 17: 239-258.
Rest, J. R. (1986). Moral Development: Advances in Research and Theory. New York, NY: Praeger
Publisher.
Reynolds, S. J. (2006) Moral Awareness and Ethical Predispositions: Investigating the Role of
Individual Differences in the Recognition of Moral Issues. Journal of Applied Psychology, 91(1): 233-243.
Ross, D. (translator) (2000) Aristotle: The Nichomachean Ethics. New York: Oxford U. Press.
Ruegger, D., King, E. (1992). A Study on the Effect of Age and Gender Upon Student Business Ethics.
Journal of Business Ethics, Vol. 11, No. 3, pp 179-186.
Simon, H. A. (1945) Administrative Behavior. 3rd Ed. New York: Free Press.
Slobogin, K (2002). Many Students Say Cheating’s OK – Confessed Cheater: “What’s important is
getting ahead”. Retrieved on June 15, 2006 from CNN website.
SMU Office of Student Conduct and Community Standards (2008), The Role of Faculty in Confronting
Academic Dishonesty at Southern Methodist University, Honor Council Brochure. Retrieved on
October 12, 2008 from http://smu.edu/honorcouncil/PDF/Brochure_Acad_Honesty_Faculty_Role.pdf
Vos Savant, M. (2006). Ask Marilyn. Parade Magazine, April 9, 2006.
Weick, K. (1995) Sensemaking in Organizations. Thousand Oaks, CA: Sage.
THE AUSTRALIAN CONSUMER LAW AND E-COMMERCE
Arthur Hoyle1
University of Canberra, Australia
OVERVIEW
Recent and significant changes to Australian consumer protection legislation provide an opportunity to dovetail the burgeoning world of e-commerce with the existing bricks-and-mortar based consumer legislation Australians had become rather familiar and indeed comfortable with. Consumer protection has, since the Trade Practices Act of 1974 (TPA) was enacted, become part and parcel of everyday transactions, but it has only been loosely adapted to the emerging electronic environment, and this has raised some significant issues. This has implications both for Australians undertaking online transactions and for those seeking to do similar business with them. The operative components of the TPA have been largely mirrored in Schedule 2 of the Competition and Consumer Act 2010, which constitutes the Australian Consumer Law (ACL) enacted towards the
end of last year and which became law on 1st January 2011. There are however some subtle
differences worthy of study and exploration, and this paper sets out to achieve just that. By way of an
example, the definitions of ‘goods’, ‘services’ and ‘consumer’ in the ACL have regard to implied
“consumer warranties” which are something more than the previously implied contract conditions. The
ACL achieves this change by implying these consumer guarantees into every contract for the sale of
goods regardless of where in the world the contract originates from as long as there is an ‘Australian
connection’. This eliminates the traditional common law distinction between conditions and warranties,
and this can be expected to result in easier access to the law by consumers without the need to resort
to specialist legal advice, and indeed no longer require resort to the law of Contract in any detailed way
to access appropriate remedies for breaches of such guarantees. Following the Gutnick decision
Australian courts can now seize jurisdiction with regard to an international electronic transaction as long
as the effect on the consumer (and this of course applies to businesses as consumers) is felt in
Australia through an appropriate connection. In addition, the ACL has provided a broader catch-all
section dealing with misleading and deceptive conduct by replacing the previous restrictive applicability
to “a corporation” with the broader “a person” in the critical Section 18. And although this does not
completely eliminate dealings under caveat emptor, it has made the provision universal in its
application. The changes in the Consumer Protection regime made under this legislation are being
viewed as an opportunity to bring the rapidly expanding area of e-commerce within its reach. Recent
research indicates that as a society Australians engage in online transactions more readily than ever
before, with major implications for those doing business in the traditional ‘bricks and mortar’ business
world, or making the changeover to the world of e-commerce. The paper therefore explores the relevant
1 Arthur Hoyle BA, Grad Dip Leg St, LLB(Hons), LLM, Grad Dip Leg Prac, Barrister and Solicitor, Senior Lecturer in Law and Technology, Faculty of Law, University of Canberra, Australia, with the research assistance of Ms Alexandra Otevrel, Lecturer in Law, University of Canberra College.
provisions of the ACL as they relate to consumer protection under both traditional and electronic contracts, through an analysis of the role of e-commerce in Australia under the new law, together with its implications for Australians and for those seeking to do online business with them.
Keywords: E-Commerce, Australian Consumer Law (ACL), Consumer Protection Legislation, Trade
Practices Act, Consumer and Competition Act.
THE ADOPTION OF E-COMMERCE
It is often said that legal practitioners, and the law industry generally, are slow to adapt to change. Many
here will of course know that is far from the truth of the matter. Change, as evidenced by the
widespread adoption of electronic communications and means of doing business, is here and now, and
continues at a breakneck pace.
A number of factors have driven the strong and ongoing growth in online shopping in Australia over the
past five years. Internet access and speeds have increased, making it possible for a larger proportion of
the population to shop online2. The increased demand has coincided with a proliferation of online stores
offering a range of products at competitive prices. Improvements to the security and reliability of online
retailers have helped make the internet a credible medium for shopping to the extent that many of
Australia's largest retailers are belatedly entering the fray online, reinforcing the role of the internet in
today's retail landscape. Added to this, the widespread use of the iPhone (and m-business, an extension of e-business), a further development of the trend started with desktop and laptop online shopping and evidenced by the development of new "apps" (applications), has stimulated the growth of online shopping, with the result that in mature markets such as the USA such sales account for approximately 12% of sales nationwide; in Australia online sales are expected to increase by 8.6% per annum over the next five years to be worth $A10 billion ($US10.2 billion)3.
It is these new types of application of IT and the speed and ease with which purchasing can be
accessed by consumers which is driving e-business and which makes it essential that the new laws be
as applicable in that arena as in the conventional one.
Business-to-Consumer (B2C) e-commerce is becoming an increasingly common part of daily life,
conferring on Australian consumers substantial economic and social benefits such as greater choice
and convenience, increased competition and more information on the products and services they
purchase. E-commerce also provides Australian businesses with the opportunity to develop new
markets and to create broader and deeper relationships with their customers than was previously
possible.
At the same time as it delivers these benefits, e-commerce also presents consumers with a number of
new challenges and concerns due to the differences between shopping online and in the traditional
retail environment. Left unaddressed, these issues have the potential to impair confidence in e-commerce and to inhibit the growth of online markets, denying consumers and businesses the full
advantages that these markets have to offer.
2 For statistical purposes this does not include sales to online retailers outside of Australia, the sale of goods or services by agents who do not take ownership of the goods and services, and the sale of goods by individuals.
3 IBIS World research http://www.ibisworld.com.au/industry/default.aspx?indid=1837
The online industry does not solely rely on the sales of goods, but there is also an important and
growing services industry sector4. The rapid improvement in information technology networks
throughout the past five years created many growth opportunities for this industry sector with its ease of
access to news, corporate information and directories creating an entirely new industry based on rapid
communication of current data 5.
This paper will focus on B2C, but will also consider, where appropriate, the volumetrically larger Business-to-Business (B2B) and the important Government-to-Consumer (G2C) sectors.
THE MOVE FROM THE TPA TO THE ACL
On 1 January 2011, the Australian Consumer Law (ACL) commenced and the Trade Practices Act
1974 (TPA) was renamed the Competition and Consumer Act 2010 (CCA)6 but through a legislative
quirk in the naming of legislation is commonly referred to as the ACL.
Although frequently updated, the TPA with its 1970s Californian consumer rights roots had become
outdated and in dire need of replacement. The ACL is intended to accomplish that, and is enforced and
administered by a quasi Government authority, the Australian Competition and Consumer Commission
(ACCC)7, each State and Territory’s consumer agency, and, in respect of financial services, the
Australian Securities and Investments Commission (ASIC) which looks after corporate law matters8.
That said, most of the existing laws appear to have been incorporated directly into the new and
expanded legislation9.
Australia is a Federation in much the same way the United States is, with a multiplicity of often
overlapping laws, and the new law replaced twenty then existing State, Territory and Commonwealth
laws with one comprehensive law, making it easier for consumers to understand and enforce their
rights because they will be the same across Australia, as:
• it is simpler and clearer than the equivalent provisions of the Trade Practices Act and the State and Territory Fair Trading Acts; and
• consistent with the 'plain English' movement in drafting (deleting all old fashioned legalese), previously complex legal terms have been replaced with terms that consumers can understand.
Broadly, the ACL consists of:
• national consumer protection and fair trading laws;
• enhanced enforcement powers and redress mechanisms;
4 Firms in this industry provide information storage and retrieval services (other than library or bibliographic services). This includes, but is not exclusive to, individual contact information and general news services. Recently burgeoning online information sites, most prominently Wikipedia and YouTube, are not included in this industry as the public provides the information, and the services conduct minimal, if any, data gathering services.
5 http://www.ibisworld.com.au/industry/default.aspx?indid=556
6 This can be found at http://www.austlii.edu.au/au/legis/cth/consol_act/caca2010265/
7 The ACCC promotes competition and fair trade in the market place to benefit consumers, businesses and the community. It also regulates national infrastructure services. Its primary responsibility is to ensure that individuals and businesses comply with the Commonwealth competition, fair trading and consumer protection laws. Its website is http://www.accc.gov.au/content/index.phtml/itemId/142
8 ASIC enforces company and financial services laws to protect consumers, investors and creditors; regulates and informs the public about Australian companies. Its website is http://www.asic.gov.au/asic/asic.nsf
9 For example, the existing and powerful Section 52 prohibition of misleading and deceptive conduct in business transactions has been incorporated into the new legislation as Section 18 with only minor (but none the less significant) amendment, meaning that the extensive body of existing case law can be carried over.
• a national unfair contract terms law;
• a new national product safety regime; and
• a new national consumer guarantees law.
It is supported by provisions in the CCA to give it full effect as a law of the Commonwealth, which is
significant in terms of dispensing with the confusion caused by so many similar, but often different,
State and Territory laws in this area.
This means that, after 36 years of faithful service, with the massive changes which have occurred in
that time (due to globalisation, internationalisation, changes in the marketplace and consumer
expectations) and the amendments necessitated by these, it had become like grandfather’s axe, and it
was indeed time it was pensioned off.
Since its enactment in 1974, the TPA had as intended become part of everyday transactions, and in
that fact lay a problem in its continued applicability. As a society we have changed in many ways, not
the least of which being the widespread adoption of electronic commerce by individual consumers,
ranging from the use of the internet to shop for everyday goods and for the provision of services
(represented by B2C), and even to communicate with Government (represented by B2G). We have
both become more computer savvy and able to use high speed internet connections for vastly more
commercial activities, and with this confidence, our reliance on electronic transactions is rapidly
increasing.
It seems that this latest manifestation of Australian Consumer Law, while a significant step forward, still
lags significantly behind the pace of change both domestically and globally. This is neither unusual in
itself, nor ought it to be unexpected.
Replacement of such a familiar and well entrenched piece of legislation and its accompanying body of
case law with legislation that not only fits the needs of the connected economy, but enhances its use, is
not something that should be treated lightly, and this paper does not suggest that it has.
A significant factor in the development of both the TPA and the ACL (and indeed a significant
impediment to its reach) has been the Constitutional setting. There being no specific power conferred
on the Commonwealth by the Constitution to regulate trade practices, such powers are sourced indirectly under Section 51(xx) – foreign corporations, and trading or financial corporations formed within the limits of the Commonwealth; Section 51(i) – trade and commerce with other countries, and among the States; and Section 122 – the power to make laws for the government of any territory surrendered by any State to and accepted by the Commonwealth, or of any territory accepted by the Commonwealth.
Although this uniform piece of legislation is much easier to use, in that it provides consumers with an
easier to understand, uniformly applicable consumer protection legislation generally congruent with the
matching legislation of the States and Territories, it appears not to cover e-Commerce transactions in
many of the areas of acknowledged concern.
Is e-business growing vis-a-vis conventional 'bricks and mortar' business? Some dated but relevant figures showing the rate of growth are provided by the TELSTRA Yellow Pages Business Index: E-business Report of July 2003, which indicated that the proportion of SMEs that used the Internet to sell products and services has increased: 33 per cent of all Small to Medium Enterprises (SMEs) took
orders online in 2003, as compared with 30 per cent in 2002. The proportion of SMEs that received
payment for sales over the Internet grew more quickly, increasing from 26 per cent (in 2002) to 32
per cent (in 2003) for small businesses and from 50 per cent to 63 per cent for medium sized
businesses. There is no reason to think that this rate of growth has slowed and much anecdotal and
other evidence to show that it is in fact increasing exponentially.
The report found that the strongest area of growth in Internet use by SMEs has been accessing and
using online catalogues (undertaken by 53 per cent of SMEs in 2003, as compared with 46 per cent in
2002) and receiving payments for products and services (a rise from 27 per cent to 34 per cent).
Interestingly, and perhaps against expectations, the largest group of customers that SMEs sold to
online were those based in the same city or town as the SME.
WHAT IS THE EFFECT OF THE CHANGE FROM THE TPA TO THE ACL ON
E-BUSINESS?
The ACL is silent on the issue of electronic contracts (other than through the Australian Guidelines for
Electronic Commerce), however it is safe to say that there appears to be no reason why it should not be
applied to consumer contracts for the purchase of goods and or services, whether these contracts are
person to person or whether these are made on-line.
Every person in Australia is entitled to know that under the Australian Consumer Law they have rights
to things like consumer guarantees that guarantee the quality of goods and services, no matter whether
the product was purchased at home, interstate or online.
This includes a national system for product safety enforcement and new laws to protect consumers
from unfair terms in standard consumer contracts.
IS E-COMMERCE SIGNIFICANT IN AUSTRALIAN COMMERCE?
It helps to understand the extent to which e-commerce has been adopted by consumers, as illustrated by this survey undertaken in 2003.
Australian Household Internet Access 1998 – 2002 (figure not reproduced; source: footnote 10)
The rate of growth evident in 2002 has continued to increase exponentially, with the latest figures from
the ABS showing that at the end of December 2010, there were 10.4 million active internet subscribers
in Australia (excluding internet connections through mobile handsets). This represents annual growth of
16.7% and an increase of 9.9% since the end of June 201011. This is perhaps more evident in the following (but earlier) table, with the growth in broadband being of particular significance for e-commerce (which is dependent on medium to high speed connections)12.
The phasing out of dial-up internet connections continued in 2010 with 93% of internet connections
being non dial-up. Australians also continued to access increasingly faster download speeds, with 81%
of access connections offering a download speed of 1.5Mbps or greater.
10 Use of the Internet by Households, 8147.0, ABS (February 1998 – November 2000), also cited in http://www.ecommerce.treasury.gov.au/bpmreview/content/DiscussionPaper/03_Chapter2.asp
11 Internet Activity Australia 2010, ABS, http://www.abs.gov.au/ausstats/[email protected]/mf/8153.0/
12 Household use of IT Australia 2008-2009, http://www.abs.gov.au/ausstats/[email protected]/mf/8146.0
IS THE LEVEL OF COMPLAINTS ABOUT E-BUSINESS ACTIVITIES
SIGNIFICANT?
This then raises the issue of just what problem areas are experienced by these consumers, with a 2003 analysis of ACCC complaints at about the same time (leaving aside domain name complaints, which have been dealt with through other processes) revealing a focus on misleading and deceptive conduct.
TYPES OF E-COMMERCE COMPLAINTS RECEIVED BY THE ACCC
Issues or conduct (percentage of complaints):
Misleading advertising or prices: 23
Domain name renewals: 20
Pyramid selling and other scams: 7
Unsolicited goods or services: 4
Warranty matters: 4
Anti-competitive arrangements: 2
Unconscionable conduct: 1
The ACCC has revealed that sixteen online traders accounted for nearly half of all online-related
complaints received by the Commission during this period. These traders generated consumer
complaints concerning: domain name renewal; dissatisfaction with Internet broadband/ADSL services;
modem jacking; warranty issues in relation to goods sold over the Internet; Internet service refunds;
misleading advertising on the Internet; slow Internet downloads; STD charges for dialling ISPs; and
changes to terms and conditions which resulted in additional charges for Internet usage13.
E-CONSUMER COMPLAINTS TO THE ACCC IN THE YEAR ENDING 31 DECEMBER 200214 (figure not reproduced)
13 http://www.ecommerce.treasury.gov.au/bpmreview/content/DiscussionPaper/03_Chapter2.asp
14 http://www.ecommerce.treasury.gov.au/bpmreview/content/DiscussionPaper/03_Chapter2.asp
It is therefore reasonably clear that consumer complaints are focussed on the very areas that the ACL seeks to address, and we need to examine the changes that have occurred to see whether these concerns have been adequately met; but first, let us consider the legislative setting.
JURISDICTION IS AN EVER PRESENT E-COMMERCE ISSUE
Establishing jurisdiction is clearly a critical precursor to any consideration of the application of the ACL
to an agreement concluded electronically.
Never was it more important to insert a jurisdiction clause in a contract than in E-Business, as the
issues raised by establishing jurisdiction take an already complex area of law to new levels.
By its very nature e-business flows easily across State and even national law areas (or jurisdictions), as the general principle is that the laws of a State do not have application outside the political territorial boundaries of that State15. The application of this fraught area to e-business in Australia has not been straightforward, being rejected in Macquarie Bank v Berg16 in 1999 (holding that applying NSW law on the internet would be inappropriate), but allowed in Gutnick v Dow Jones17, which allowed for defamation to be assessed at both the place of uploading (in that case the State of Delaware in the USA) and at the place of downloading (in Melbourne, Australia), effectively allowing a choice of jurisdiction based on traditional rules18.
COMPLIANCE
Competition and consumer laws are enforced by three national regulators.
• The Australian Competition and Consumer Commission (ACCC) is responsible for enforcing the Competition and Consumer Act (CCA) and the ACL.
• The Australian Securities and Investments Commission (ASIC) is responsible for enforcing the consumer protection provisions of the Australian Securities and Investments Commission Act 2001, the Corporations Act 2001 and the National Credit Code.
• The National Competition Council (NCC) is responsible for making recommendations on the regulation of third party access to services provided by monopoly infrastructure under Part IIIA of the CCA.
The extent to which e-commerce falls within the purview of these regulators will of course depend on a
number of factors, the most important of these being the establishment of appropriate connecting
factors.
WHAT STANDARDS APPLY TO E-COMMERCE?
The ACL includes generally:
• a new national unfair contract terms law covering standard form contracts;
15 Cyberlaw in Australia, Clark, Cho, Hoyle & Hynes, Wolters Kluwer 2010, page 219
16 (1999) NSW SC 526
17 (2002) HCA 56 (10 December 2002)
18 For more on this topic see Cyberlaw in Australia, Clark, Cho, Hoyle & Hynes, Wolters Kluwer 2010, The Netherlands, pages 219 to 234 generally.
• a new national law guaranteeing consumer rights when buying goods and services, which replaces existing laws on conditions and warranties;
• a new national product safety law and enforcement system;
• a new national law for unsolicited consumer agreements, which replaces existing State and Territory laws on door-to-door sales and other direct marketing;
• simple national rules for lay-by agreements; and
• new penalties, enforcement powers and consumer redress.
Australian E-commerce falls within these through the application of the E-Commerce Best Practice
Model, which sets standards for consumer protection in e-commerce, and is encompassed in the ACL.
It provides industry groups and individual businesses with a voluntary model code of conduct for
dealing with consumers online, which is underpinned in several areas by legislative requirements, most
notably the CCA and the ACL.
WHAT HAS CHANGED UNDER THE LEGISLATION
It is important to view the ACL as the latest development in a discernable line of changing authority in
business law generally19 and consumer protection in particular, which began with the TPA in 1974, and
was followed by the Competition Policy Reform Act in 1995 under which the Commonwealth and the
States and territories signed20:
• A Conduct Code agreement which placed the competition elements within the Constitutional reach of the Commonwealth;
• An Agreement to implement a National Competition Policy and Related Reforms; and
• A Competition Principles Agreement to establish a National Competition Council.
The consumer protection provisions formerly found in Parts IVA, V, VA, VB and VC of the TPA are now
by and large located in a Schedule (Schedule 2) to the CCA, known as the Australian Consumer Law
(ACL). The relocation of these provisions into a Schedule to the CCA was necessary to address the
complex constitutional requirements of Australia's federal system. Because the ACL has been enacted
by each of the States and Territories it provides a set of uniform and harmonious consumer laws
throughout Australia.
The ACL contains a number of significant changes; one such change is the introduction of consumer
guarantees, which replace the implied warranties and conditions in consumer contracts found in
sections 66 to 74 of the TPA.
WHAT IS THE EFFECT OF THESE CONSUMER GUARANTEES?
These consumer guarantees can now be found in Schedule 2, Part 3-2, Division 1 of the ACL, and
apply to the supply of goods and services to consumers by whatever means, including of course E-Commerce. By reason of the definition of consumer, these guarantees apply where the goods or
services acquired do not exceed $40,000 or where they are of a kind ordinarily acquired for personal,
domestic or household use. It is not possible to contract out of the guarantees. In fact, ACL regulation
90 provides that specific wording is to be included in consumer terms and conditions which refer to the
19 See Annex A to this paper for a chronicle of Business law development in Australia generally
20 Gibson (electronic version) page 764
existence of the guarantees and the fact that they cannot be excluded. Such guarantees are not limited
to the period of the manufacturer's warranty and consumers can claim against the supplier or the
manufacturer.
The second notable change is the repeal and replacement of the unfair practices provisions – goodbye
section 52, welcome section 18. The wording of section 18 has changed from a "corporation" to a
“person”. This means that the provision now applies to all persons, whether they are individuals or
companies, corporations or bodies corporate. The section applies to conduct “in trade or commerce”.
Schedule 1 defines “trade or commerce” as meaning trade or commerce within Australia or between
Australia and places outside Australia, providing that at least some aspect of that trading has taken
place in Australia, and includes any business or professional activity whether for profit or not.
In addition to the obvious changes, there are more subtle changes having an influence on the
application of the law scattered through the ACL, including in section 3 the definition of “consumer”.
UNCONSCIONABILITY
Given that e-commerce (along with almost all business activity) relies extensively on the common law of
contract to regulate its dealings with its customers, regulation of the dealings between the parties will
need to be closely scrutinised to ensure no sharp practice occurs through taking advantage of the
remote connections involved. Traditionally, the primary remedy that the consumer had in regard to
defects in goods was a remedy for breach of the contract under which those goods were supplied to
him or her. Judicial decisions and more importantly the development of unconscionability legislation
through the ACL and similar State legislation has done much to provide a remedy to consumers who
find themselves the victims of unconscionable conduct.
While Australian courts have evolved a general common law doctrine of unconscionability, first the TPA and now the ACL have given statutory support to that concept. In addition, there is a wider range of
remedies available under the ACL than would generally be available at common law. There is (it seems
intentionally) no statutory definition of the word ‘unconscionable’ for the purposes of this section
contained within either the TPA or the ACL. Rather, the legislation has provided a list of factors that a
court exercising jurisdiction may take into account in determining the dispute before it. These are:
(a) the relative strengths of the bargaining positions of the corporation and the consumer;
(b) whether, as a result of conduct engaged in by the corporation, the consumer was required to
comply with conditions that were not reasonably necessary for the protection of the legitimate
interests of the corporation;
(c) whether the consumer was able to understand any documents relating to the supply or possible
supply of the goods and services;
(d) whether any undue influence or pressure was exerted on, or any unfair tactics were used against,
the consumer or a person acting on behalf of the consumer by the corporation or a person acting on behalf of the corporation in relation to the supply or possible supply of the goods and services;
and
(e) the amount for which, and the circumstances under which, the consumer could have acquired
identical or equivalent goods or services from a person other than the corporation
THE ADAPTATION OF ‘STANDARD’ METHODS OF CONTRACTING TO AN
ELECTRONIC ENVIRONMENT
Contracts remain the favoured bargaining method of all types of business, and as these are by and
large creatures of the common law (having been developed more or less constantly over several
hundred years), they have been adapted to e-commerce through a continual process. This has been
achieved through firstly "shrink wrap", whereby terms of agreements concluded online but only
disclosed when the goods or services were delivered were regarded as in the same class as those
included in pre-packaged goods (hence the analogy with shrink wrapped goods)21.
The development of fully online contracts saw the recognition of 'clickwrap' agreements, which raise the question of whether the contract is formed in the country where the customer clicks the 'I accept' icon, where the server for the website is located, or where the acceptance is communicated to the supplier in its country of location.
Common law principles suggest that a contract will be formed when acceptance is actually
communicated to the offeror – the receipt rule22.
Browsewrap agreements are the most recent manifestation of online contracts, and are used by many
websites and deem that the act of browsing the website constitutes acceptance of their terms. While
broadly the law of contract holds that terms cannot be incorporated into a contract after acceptance of
an offer, the question of incorporation of terms is particularly relevant to shrink-wrap and browsewrap
agreements, where purported terms are not notified until after a product is purchased or until after
access to a website has been granted23.
Nothing in the ACL would appear to erode the legal effect of these contracting methods.
EXCLUSION CLAUSES
Inequality of bargaining power is also reflected in the extensive use of exclusion clauses in consumer
contracts whereby suppliers seek to contract out of terms implied by statute or the common law, and
the common law has developed some control over the use of such clauses. Documents purporting to
contain exemption clauses may be characterised by the courts as non-contractual if such documents,
for instance, are provided to the customer in the form of a downloadable, or even emailed receipt after
the contract is concluded. In addition, terms of exclusion clauses are ordinarily construed strictly against
the supplier of goods and services. The ACL provides that any provision in contracts for goods and
services to which the Act applies (whether express or incorporated by reference) ‘that purports to
exclude, restrict or modify, or has the effect of excluding, restricting or modifying the application of any
of the provisions of the Act' is excluded from the contract as a matter of law.
B2B ISSUES
As mentioned above, although B2C is seen by the public as the face of e-commerce, it is in B2B that most electronic contracting takes place, as, through this innovation, great advances have been made in economic efficiency.
21 See Hill v Gateway 2000 Inc, 105 F3d 1147 (7th Cir 1997), cert denied, 522 US 808 (1997) and ProCD Inc v Zeidenberg 86 F3d 1447 (7th Cir 1996)
22 Cyberlaw page 158
23 Pollstar v Gigmania Ltd, No CIV-F-00-5671, 2000 WL 33266437 (ED Cal, Oct 17, 2000)
The advent of just-in-time ordering and delivery (with the attendant savings in warehousing and
financing costs), automated re-ordering and billing, and account ordering/delivery matching to name
just a few developments have made significant differences to the e-commerce economic process.
The Competition laws incorporated into the ACL will continue to have effect as they have under the
TPA.
HOW DOES THE ACL THEN APPLY TO E-BUSINESS?
Currently there is a large body of legislation governing e-commerce. Which piece of legislation we refer
to very much depends on jurisdiction. There is some controversy as to whether jurisdiction is decided
according to where the electronic contracts are made. But for the purposes of everyday contracts, the
provisions of the ACL should not be excluded. The ACL provides statutory consumer guarantees that
apply to consumer goods and services and certain purchases made by businesses up to the value of
$40,000.
Under these guarantees, goods purchased by whatever means must be of acceptable quality and
perform the function for which they were purchased, while services must be undertaken with due care
and skill.
There has been some updating of the legally mandated Australian E-Commerce Best Practice Model (the BPM), which set standards for consumer protection in e-commerce, but it is not at present directly in line with the ACL24. This standard was adopted to provide industry groups and individual businesses with a voluntary model code of conduct for dealing with consumers online, which was underpinned in several areas by legislative requirements. The Australian Government's Expert Group on Electronic Commerce was commissioned to undertake a review of the BPM, and in 2006 it was updated and replaced by the Australian Guidelines for Electronic Commerce25, which are still commonly referred to as the BPM.
DISPUTE SETTLEMENT
Courts in Australia have for some time been active in their use of technology. A recent development is
Australia’s first permanent privately hosted electronic arbitration room located in Sydney, but able to be
connected to everywhere – a joint venture between Auscript and Counsel’s Chambers comprised of a
fully networked room with facilities for electronic evidence, document imaging, video conferencing26,
webcasting, chat facilities, real-time transcription, and point-to-point data lines to access external
services. Most courts now have video links, meaning the 'tyranny of distance' so applicable to agreements concluded through e-commerce is minimised through the very technology by which they are accomplished27.
Mediation or third-party assisted facilitation of disputes is very popular in Australia and used by both
courts as well as independently, and is clearly compatible with the operation of the ACL. Mediation as a
24 For an extensive analysis of the BPM see the Department of the Treasury discussion paper http://www.ecommerce.treasury.gov.au/bpmreview/content/DiscussionPaper/04_Chapter3.asp
25 http://www.treasury.gov.au/contentitem.asp?NavId=014&ContentID=1083
26 Cyberlaw page 235
27 The Michigan Cybercourt constitutes an interesting official foray into this area, joining similar ventures in Singapore and Perth, Australia; see further http://www.ncjolt.org/abstracts/volume-4/ncjltech/p51
process within Alternate Dispute Resolution (ADR) has become a viable alternative dispute resolution
process. This legitimisation of alternative processes has evolved from an acceptance of the legal
profession and the wider community that disputes can be resolved in a constructive rather than a
confrontational way. Combined with this is the advancement of technology which has introduced yet
another level to ADR – online mediation. As this concept, as both a tool and a process, is still in its
infancy, it remains to be seen whether communication between parties involved in intense conflict can
be helped or hindered by a mediation process conducted electronically. It is recognized, however, as
potentially offering many advantages unavailable in traditional dispute resolution. Perhaps more
importantly, it has the scope to evolve as an important venue for the future resolution of certain types of
conflicts. In order to reduce the disputes, the Government has legislated for codes of conduct both
voluntary and mandatory (including under the ACL)28. Online ADR seems especially suited for global
consumer disputes where the amount in dispute is small and jurisdiction and conflict of laws questions
are prevalent. Such systems play a vital role in giving greater confidence in e-business. An important
component in all of this is the need for consumer and business education about programmes, codes,
and the benefits as well as the limits of ADR, and responsibility for this would appear to lie jointly with government through regulation and industry through adoption of best practice models and a flexible approach to a dynamic environment, with the ACCC and ASIC taking on this important role under
Government mandate through the ACL.
28 Cyberlaw page 237
ANNEX A
History of Australian Business Law29
This chronology includes significant events, mainly dealing with company law, and will be added to in the future to cater for the electronic environment.
1800s: Prior to Federation, all the colonies had company legislation based on the English Companies Act of 1862. Despite the common origins in the English statute, however, variations in the legislation developed around the country and it was not until the late 1950s that a momentum towards a uniform company law began to build.
1886-1889: Federal Council of Australasia introduces, but fails to pass, 2 uniform companies bills. The Australasian Joint Stock Company (Arrangement) Bill 1897 allowed joint stock companies to make arrangements with creditors in other colonies, while the Australasian Corporations Bill (introduced 3 times in 1886, 1888 and 1889) would have provided for the registration in other colonies of corporations whose activities extended beyond one colony. (Source: FCA. 'Journals and printed papers', & 'Official records of debates')
1887-88: Dr WE Hearn drafts a Code of Commercial Law for Victoria but it was never adopted. (Source: G. Davidson, The Rise and Fall of Marvellous Melbourne)
1901: Section 51(xx) of the Commonwealth Constitution 1901 provides for the federal Parliament to legislate in the area of "foreign corporations, and trading or financial corporations formed within the limits of the Commonwealth". States continued to legislate for the incorporation (establishment) of companies.
1906: Australian anti-trust laws begin with the Australian Industries Preservation Act 1906 designed to protect a local manufacturer against the International Harvester Company. Part of the legislation was declared invalid by the High Court in its very first challenge in 1909 in Huddart Parker & Co Pty Ltd v Moorehead and was effectively rendered unworkable by a further successful challenge in 1913 in Adelaide Steamship Company Limited and Others v The King and the Attorney-General of the Commonwealth.
1961-62: A uniform Companies Act based upon the Victorian legislation by the States and the Commonwealth (for the ACT, NT and PNG) was passed. However, in subsequent years the various jurisdictions did not co-ordinate amendments.
1965: Trade Practices Act creates a Commissioner of Trade Practices and a Trade Practices Tribunal to examine business agreements and practices to determine whether they were contrary to the public interest.
1971: Because of constitutional difficulties highlighted by the High Court in its decision in Strickland v. Rocla Concrete Pipes Ltd, the Restrictive Trade Practices Act is passed which repeals the Trade Practices Act 1965 and confines itself to trading corporations.
1974: The Senate Select Committee on Securities and Exchange (the Rae Committee, Parliamentary Paper no. 98/1974) recommends the establishment of a Commonwealth regulatory body with responsibility for the securities industry. Signing of the Interstate Corporate Affairs Agreement by NSW, Victoria and Queensland; the participating states amended their companies legislation to ensure a large degree of uniformity. New Trade Practices Act repeals Restrictive Trade Practices Act 1971. Using the corporations power the act covers monopolies, mergers and consumer protection issues within trading corporations. The States and Territories still need to pass fair trading legislation to cover activities outside trading corporations eg in small businesses. Section 7 of the act replaces the Office of the Commissioner of Trade Practices with the Trade Practices Commission.
1976: Trade Practices Review Committee (Swanson Committee) report (P.P. no. 228/1976) confirms the Trade Practices Act and makes recommendations re boycotts etc which were implemented by the Trade Practices Amendment Act 1978 and the Trade Practices Amendment Act (No 2) 1978.
1978: Establishment of a national companies co-operative scheme. Under this scheme the Commonwealth Parliament enacted the Companies Act 1981 applying in the ACT and the States passed legislation giving effect to the Commonwealth law in their jurisdictions. The uniform law was generally known as the Companies Code. The Commonwealth also established the National Companies and Securities Commission (NCSC) to oversee and coordinate the scheme. While the scheme delivered uniformity of text, in practice the enforcement and administration of the scheme was not uniform, as this was the function of the 8 state and territory corporate affairs commissions.
1987: Senate Standing Committee on Constitutional and Legal Affairs in its report The Role of Parliament in relation to the National Companies Scheme in April 1987 (P.P. no. 113/1987) concluded that the cooperative scheme had outlived its usefulness. It unanimously recommended that the Commonwealth introduce comprehensive legislation to assume responsibility for all areas covered by the existing scheme. The Committee's recommendation was founded on an opinion of Sir Maurice Byers, QC, which asserted that the Commonwealth had the power to enact comprehensive legislation covering company law, takeovers and the securities and futures industries.
1989: The Commonwealth passes the Corporations Act 1989 to establish a national scheme of companies and securities regulation based upon the corporations power. The Australian Securities Commission Act replaces the NCSC with the Australian Securities Commission.
1990: High Court declares part of the Corporations Act 1989 invalid. In New South Wales v. The Commonwealth (the incorporations case) the Court held that section 51(xx) relates only to 'formed corporations' and that as a consequence it was constitutionally invalid for the Commonwealth to rely on the section to legislate in respect of the incorporation of companies. In response, the Alice Springs Heads of Agreement was concluded in Alice Springs on 29 June 1990, by representatives of the Commonwealth, the States and the Northern Territory. Pursuant to this agreement, the Commonwealth passed the Corporations Legislation Amendment Act 1990 to apply to the Australian Capital Territory pursuant to s 52(i) of the Commonwealth Constitution, and the States and the Northern Territory passed acts applying the Commonwealth Law in their jurisdictions, via State legislation entitled Corporations ([name of particular State]) Act 1990 and the Corporations (Northern Territory) Act 1990, respectively. The uniform law, now known as the Corporations Law, was to be found in section 82 of the Corporations Act. The Commonwealth undertook to compensate the States for loss of income from State regulatory bodies with the ASC taking over sole administrative and regulatory responsibilities for corporate law.
1993: Hilmer report (National Competition Policy) recommends extending competition policies to more business and government sectors on a nationwide uniform basis.
1995: Competition Policy Reform Act abolishes the Trade Practices Commission and the Prices Surveillance Authority, and establishes the Australian Competition and Consumer Commission (regulatory body) and the National Competition Council (advisory and research body). The Act also amends the Trade Practices Act 1974 to extend the scope of the competition provisions to include Commonwealth, State, and Territory government businesses. The States are also to pass uniform mirror competition legislation.
2001: To remedy deficiencies in the framework of corporate regulation revealed by the High Court decisions in the cases of Re Wakim; ex parte McNally and The Queen v Hughes, a national Corporations Act is passed. The act substantially re-enacts the existing Corporations Law of the ACT as a Commonwealth Act applying throughout Australia. The Commonwealth was referred the constitutional power to enact this legislation by the Parliaments of each State.
2003: Review of the competition provisions of the Trade Practices Act (Dawson report) recommends that competition provisions should protect the competitive process, rather than particular competitors, and that competition laws should be distinguished from industry policy.
2009: The Commonwealth passes the National Consumer Credit Protection Act 2009 to bring consumer credit within Commonwealth control.
2010: The Competition and Consumer Act 2010 renames the Trade Practices Act 1974.
29 http://www.aph.gov.au/library/intguide/law/buslaw.htm#electronic
LIVE USB THUMB DRIVES FOR TEACHING LINUX SHELL SCRIPTING
AND JAVA PROGRAMMING
Penn Wu1 and Phillip Chang2
1Cypress College, USA and 2Aramis Technology LLC, USA
ABSTRACT
The price of Universal Serial Bus (USB) thumb drives is affordable to most college students compared to other portable and bootable storage devices. There are many technologies available for installing and running a Linux operating system (O.S.) on a bootable USB thumb drive from a computer that can boot with the USB device. In addition, computers with BIOS support for USB boot have been sold for several years. Using a bootable thumb drive to teach Linux shell scripting and Java programming therefore becomes practical and cost-effective. In this paper, the authors describe how they use bootable USB thumb drives to teach Linux shell scripting and Java programming.
Keywords: USB Drive, USB Bootable Drive, Portable Lab, Portable Programming Lab, USB
Programming Lab.
INTRODUCTION
Teaching Linux shell scripting and Java programming in Linux requires both students and teachers to
have a Linux machine handy inside and outside the classroom. In the past, the authors attempted to convince students to make their computers Windows-Linux dual-bootable. However, students were reluctant to comply with the request.
Asking students to do assigned work in the computer laboratory was another option. These computers are protected by Faronics' Deep Freeze, which is a tool to protect the core operating system and configuration files on a computer by restoring the computer back to its original configuration each time it restarts (Faronics Corporation, 2011). Such settings do not allow students to save their work on the laboratory computers. Many students purchased a USB thumb drive to save their work. Students also expressed a preference for using their laptop computers for a higher degree of portability.
The authors evaluated the option of using a virtual laboratory or VMware, as discussed in Collins' (2006) paper. These options require funding, which is simply not viable in the short run when the budget is tight.
In an accelerated course with a duration of seven to nine weeks, spending time installing, configuring, and troubleshooting the Linux operating system will squeeze the time for learning scripting and programming. The authors felt they needed to provide a simple, low-cost, portable, and easy-to-maintain
solution for students to build their toolkits. A USB thumb drive can be a practical solution (Chen et al.,
2011).
BUILDING THE TOOLKIT
Many Linux distributions, such as Fedora and Ubuntu, provide "live CDs", which are bootable CD-ROMs containing pre-configured software to allow users to run an operating system without accessing any hard drives (Pillay, 2005). Linux running on a live CD is compact in size. It can be installed on a USB thumb drive with a recommended minimum capacity of 2GB instead of on CD-ROM media.
Similar to "live CDs", bootable USB thumb drives can be configured as "live USB" thumb drives. A live USB thumb drive lets users boot any USB-bootable computer into a Linux operating system environment without writing to that computer's hard disk (Fedora Project, 2011). Unlike live CDs, which do not allow writing changes, live USB thumb drives can be set to be "persistent". A persistent thumb drive reserves an area, known as a "persistent overlay", to store changes to the Linux file system. It can also have a separate area to store user account information and data such as documents and downloaded files (Fedora Project, 2011).
Live USB thumb drives support console terminals and just enough tools to learn shell scripting. They are also functionally sufficient for learning Java programming once Java support is installed. A later section will describe how to install this Java support.
Creating a bootable USB thumb drive with the ability to run a complete operating system such as Linux
is a well-discussed topic in the Information Technology (IT) community. Tutorials are plentifully
available on the Web, for example:
• Fedora: http://docs.fedoraproject.org/en-US/Fedora/15/html/Installation_Guide/Making_USB_Media.html
• Ubuntu: http://www.ubuntu.com/download/ubuntu/download
In practice, the authors recommend that students create a persistent O.S., which allows automatic and/or manual saving of changes made by students. The instructions to create persistent live USB thumb drives are available at the following sites. Notice that Fedora 15 Live USB requires a temporary workaround to allow persistent storage due to a known defect. Instructions for this workaround are available in Appendix A.
• Fedora: http://fedoraproject.org/wiki/How_to_create_and_use_Live_USB
• Ubuntu: http://www.howtogeek.com/howto/14912/create-a-persistent-bootable-ubuntu-usb-flash-drive/
One popular tool for creating a persistent bootable USB thumb drive is Universal USB Installer. It allows
users to choose from a selection of Linux Distributions, including Fedora and Ubuntu, to install Linux
O.S. on a thumb drive (Pen Drive Linux, 2011). The Universal USB Installer is easy to use and
instructions are posted in many web sites and blogs.
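As a concrete illustration of the command-line route (a sketch only, not a procedure taken from the tutorials above), Fedora's livecd-tools package provides the livecd-iso-to-disk script, which can write a live ISO image to a thumb drive and reserve persistent storage. The ISO file name, the device name /dev/sdb1 and the overlay sizes below are assumptions that must be adjusted to the local setup.

# Sketch: create a persistent Fedora live USB from a terminal.
# Assumes the livecd-tools package is installed, the live ISO has been downloaded,
# and the thumb drive is detected as /dev/sdb1 (check with fdisk -l).
# --overlay-size-mb reserves the persistent overlay for file-system changes;
# --home-size-mb reserves a separate persistent /home area for student files.
sudo livecd-iso-to-disk --overlay-size-mb 1024 --home-size-mb 512 \
     Fedora-15-i686-Live-Desktop.iso /dev/sdb1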
JDK AND JRE
According to Oracle, the company that acquired Sun Microsystems and became owner of the Java
language, Java Development Kit (JDK) needs to be installed and configured in order to develop Java
applications. The JDK is a software development kit (SDK) for producing Java programs. It includes the entire set of API classes, a Java compiler, and the Java Virtual Machine interpreter.
It is necessary to distinguish JDK from Java Runtime Environment (JRE). Although a computer must
have a copy of the JRE to run Java applications and applets, JRE is not a software development tool.
The JRE simply provides the libraries, Java virtual machine, and other components to run Java applets
and applications, but it does not provide any support for creating and compiling Java applets and
applications.
In addition to Oracle’s JDK and JRE, a popular alternative is OpenJDK (short for Open Java
Development Kit). It is a free and open source implementation of the Java programming language
(Oracle, 2011a). OpenJDK also supports both Linux and Windows platforms. When teaching Java programming, instructors typically explain how to download, install, and configure the JDK or OpenJDK. Notably, there exist minor compatibility issues because Oracle's JDK contains some precompiled
binaries that are copyrighted and cannot be released under an open-source license to share with
OpenJDK.
LECTURES
The authors’ lectures start with demonstrating how to convert a Live CD ISO image file (either Ubuntu
or Fedora) to a persistent live USB thumb drive. The authors then teach basic shell scripting or Java
programming with these thumb drives. Remarkably the authors also teach other Java programming
courses in Windows O.S., not Linux. Seeing that running the Windows operating system in a USB
thumb drive is achievable but unfeasible, the authors will limit the discussion to the installation of JDK in
a thumb drive and how it can be accessed from Windows’ Command Prompt.
Linux Shell Scripting
The authors use the bootable USB thumb drive to run Linux operating system in the GUI mode. The
GUI is typically GNOME which is a user-friendly desktop for Linux. However, KDE (another GUI
desktop for Linux) is also used occasionally. By launching a terminal console, similar to Windows’
Command Prompt, the authors demonstrate the shell scripting in a command-line environment. Topics
typically include basic Linux commands, shell variables and arrays, operators, conditional structures,
repetition structure, string processing, storing and accessing data, regular expression, modular
programming, and user interactions.
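For illustration only, a minimal script touching several of these topics (a variable, an array, a conditional structure, a repetition structure and simple user interaction) might look like the following sketch; the file name, prompt and messages are arbitrary and are not drawn from the authors' course materials.

#!/bin/bash
# greet.sh - illustrative sketch: variables, an array, a conditional,
# a loop and basic user interaction.
courses=("Linux shell scripting" "Java programming")   # array variable
read -p "Enter your name: " name                        # user interaction
if [ -z "$name" ]; then                                 # conditional structure
    name="student"
fi
for c in "${courses[@]}"; do                            # repetition structure
    echo "Hello $name, welcome to $c."
done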
At the end of the semester, based on the students’ demand, the authors may also demonstrate how to
format a USB thumb drive for students to use the drive in other courses. The procedure of formatting a
USB drive is no different than formatting any other drive in Windows operating system. The following is
a step-by-step procedure to format USB thumb drives with Linux file systems in Linux.
1. Use "su" or "sudo" to obtain the root privilege (which is the administrative privilege in Linux).
2. To find out the USB device name use "fdisk -l", assuming it is "/dev/sdb1" in this example.
3. Be sure to remove the thumb drive from the file system using "umount /dev/sdb1".
4. To format with the ext3 filesystem use "mkfs.ext3 /dev/sdb1".
5. To label the thumb drive use "e2label /dev/sdb1".
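Collected into a single terminal session, the procedure above might look like the sketch below; the device name /dev/sdb1 follows the example in step 2, and the volume label STUDENT is a hypothetical placeholder (running e2label without a label argument, as in step 5, simply displays the current label).

# Sketch of the formatting sequence; adjust the device name to match the fdisk -l output.
sudo fdisk -l                     # identify the thumb drive (here assumed to be /dev/sdb1)
sudo umount /dev/sdb1             # unmount it before formatting
sudo mkfs.ext3 /dev/sdb1          # create an ext3 file system
sudo e2label /dev/sdb1 STUDENT    # assign a hypothetical label; omit it to display the label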
Linux-based Java Programming
JDK 7 has been released as of the time this paper is written. Instructions to download and install JDK 7 are available at http://download.oracle.com/javase/7/docs/webnotes/install/linux/linux-jdk.html. Instructions to download and install prebuilt OpenJDK 7 packages are available at http://openjdk.java.net/install/.
The OpenJDK 6 runtime has been the default Java environment since Fedora 9 (Red Hat, Inc., 2011), as well as in Ubuntu 9.04 and later versions. The OpenJDK 7 runtime is not yet the default as of the time this paper is written. Students may have to manually add the software development kit of OpenJDK 6 unless it is installed by default. Use the following commands to check whether the OpenJDK runtime and software development kit are properly installed.
java -version
javac -version
To develop Java programs with OpenJDK 6, students need to install the java-1.6.0-openjdk-devel
package. A sample statement to install such packages in Fedora is:
su -c "yum install java-1.6.0-openjdk*"
To install OpenJDK in Ubuntu, use:
sudo apt-get update
sudo apt-get install openjdk-6-jdk
Those who wish to upgrade to OpenJDK 7 from OpenJDK 6 can visit http://openjdk.java.net/install/ for
installation instructions. Be sure to upgrade both the runtime (JRE) and the software development kit (JDK). The
procedures are similar to those for installing OpenJDK 6.
Oracle’s Java JDK may be required to develop applications that are not compatible with OpenJDK. For
those who prefer using Oracle’s Java 7 instead of OpenJDK, the instructions for Fedora users are
available at http://fedoraunity.org/Members/zcat/using-sun-java-instead-of-openjdk. Instructions for
Ubuntu users can be found at http://askubuntu.com/questions/55848/how-do-i-install-oracle-jdk-7.
With a GUI desktop, students can use the vi, gedit (GNOME), or kedit (KDE) editor to create Java source
files. The vi editor is a console-based screen editor, and many students who are new to Linux find it
hard to learn. In an accelerated course that lasts seven to nine weeks, instructors may not have spare
time to teach vi. The authors believe it is more practical to use gedit (or kedit) to write Java code similar
to the example below. Students are encouraged to learn hand-coding without using Integrated
Development Environments (IDEs) such as NetBeans and Eclipse.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello!");
    }
}
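Once saved as Hello.java, the program can be compiled and run from the terminal with the standard JDK
tools (the same two commands appear again in Table 1 below for the Windows environment):

    javac Hello.java
    java Hello
    Hello!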
The authors believe that JDK and OpenJDK are ideal candidates for teaching Java programming.
Topics to be covered can include the Java programming language core, object-oriented programming
using Java, exception handling, file input/output, threads, collection classes, GUI, applet, and
networking.
Windows-based Java Programming
Creating a bootable USB thumb drive that runs the Windows operating system is technically possible but
not practical. The implementation described in this section is simply an expedient for students or
instructors who do not have the administrator privilege needed to modify Windows operating system
settings; installing the JDK typically requires that privilege.
A freshly installed Windows operating system does not include the JDK or the JRE; users must
download and install them manually. Instructors teaching Java programming typically show students
how to install and configure the JDK before the Java lectures begin, with or without using the
thumb drives.
The file jdk-7-windows-i586.exe is the JDK installer for 32-bit systems. The file jdk-7-windows-x64.exe
is the JDK installer for 64-bit systems. Detailed information is provided by the “JDK 7 and JRE 7
Installation Guide” which can be downloaded from Oracle’s web site (Oracle, 2011b).
Unlike the traditional Windows-based installation, which installs the JDK on a hard drive (such as C:), the
authors instruct students to download and extract (unzip) the JDK installer file to a USB thumb drive.
Assuming the thumb drive is recognized as the E: drive and the JDK version is 7.0, the authors ensure
that students manually set the JDK directory to "E:\Java\jdk1.7.0" during the installation. The JDK
Custom Setup dialog enables students to choose a custom directory for the JDK files, such as
"E:\Java\jdk1.7.0".
The Java compiler (javac.exe) resides in the "bin" sub-directory of the JDK directory, so the full
path to the Java compiler is "E:\Java\jdk1.7.0\bin\javac.exe" if "E:" is the drive letter of the thumb
drive. As specified in the "JDK 7 and JRE 7 Installation Guide," students can add the full path of the
"jdk1.7.0\bin" directory (such as "C:\Program Files\Java\jdk1.7.0\bin\") to the PATH environment
variable (Oracle, 2011b).
The Command Prompt of the Windows operating system (cmd.exe) has many environment variables
(Laurie, 2011). These variables determine the behavior of the command prompt and the operating
system. Among them, the PATH variable specifies the search path for executable files. By adding the
full path of the "jdk1.7.0\bin" directory to the PATH variable, Windows' Command Prompt will recognize the
JDK executable files, including javac.exe and java.exe. This arrangement allows developers to
run the Java compiler (javac.exe) and the application launcher (java.exe) conveniently, without typing
their full paths, as illustrated by Table 1.
Table 1: Running JDK files with and without setting the PATH environment variable

  With PATH set:
    E:\>javac.exe Hello.java
    E:\>java.exe Hello
    Hello!

  Without PATH set:
    E:\>E:\Java\jdk1.7.0\bin\javac.exe Hello.java
    E:\>E:\Java\jdk1.7.0\bin\java.exe Hello
    Hello!
There are two methods to set the PATH variable in Windows. One is to permanently add the full path to
the PATH variable through the Environment Variables dialog box (see Figure 1). The other is to create a
batch file and execute it each time a Command Prompt is opened for developing and testing
Java applications (see Figure 2).
Figure 1 (the Environment Variables dialog box)
Figure 2 (a batch file that sets the PATH variable)
Figure 3 (output of the dynamic PATH setting)
A known technical problem is that the Windows operating system may assign the thumb drive "E:", "F:",
or another letter as its drive name. Students therefore cannot permanently set the PATH variable on the
assumption that the thumb drive will always be the "E:" drive. Table 2 is the sample the authors use to
explain how to dynamically obtain the current drive letter and use the retrieved value to set the path.
Notice that the syntax to retrieve the value of an environment variable is %VariableName%.
Table 2: Permanent path vs. dynamic path settings

  Permanent: PATH=%PATH%;E:\Java\jdk1.7.0\bin\;
  Dynamic:   PATH=%PATH%;%CD:~0,2%\Java\jdk1.7.0\bin\;
The CD environment variable holds the path of the current working directory (Microsoft TechNet, 2011). For
example, the following prints the drive letter followed by ":\" when run from the root of the thumb drive:
E:\>echo %CD%
and the output looks like:
E:\
Another notable technical issue is that CD holds the path of the current working directory. If the current
working directory is not the root of the thumb drive, the output will include the directory name. For
example, the output of the following is "E:\test", not "E:\".
E:\test>echo %CD%
Windows' batch scripting supports substring expansion on environment variables. For example,
%CD:~0,2% expands to only the first two characters of the CD value, counting from the beginning.
Accordingly, it yields the drive letter followed by a colon (:), such as "E:", and "%CD:~0,2%\Java\jdk1.7.0\bin\"
produces "E:\Java\jdk1.7.0\bin\" if "E" is the correct drive letter (see Figure 3). In the authors'
experience, many students are not familiar with the command-line interface, so instructors need to explain
how to create a .bat file that sets the PATH variable as discussed above.
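A minimal sketch of such a batch file is shown below. The file name setpath.bat is a hypothetical choice,
and the JDK path follows the examples in Table 2; adjust it if a different JDK version or directory is used.

    @echo off
    REM setpath.bat (hypothetical example) - makes the JDK tools on the thumb drive
    REM available in the current Command Prompt session.
    REM %CD:~0,2% expands to the current drive letter and colon (for example, E:),
    REM so the drive letter assigned by Windows does not matter.
    set PATH=%PATH%;%CD:~0,2%\Java\jdk1.7.0\bin\
    echo %PATH%

Running this file at the start of each Command Prompt session allows javac.exe and java.exe to be
invoked without their full paths, regardless of which drive letter Windows assigns to the thumb drive.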
BARRIERS AND ISSUES
USB boot may fail when the BIOS and the USB storage device are not compatible. To verify
compatibility, the authors demonstrate how to configure the BIOS settings to boot only from the
USB storage device. This step prevents the BIOS from proceeding to another boot device if
USB boot fails. The authors recommend a free open-source tool named Memtest86+, which can be used
to test a computer system for USB boot compatibility. This tool is available at http://www.memtest.org/.
Most USB thumb drives are pre-loaded with a hidden "autorun" U3 partition. The U3 technology was
developed by SanDisk to automatically load the contents of the thumb drive onto the computer. This U3
partition can be another source of incompatibility. The good news is that SanDisk began phasing
out support for U3 technology in late 2009 (SanDisk, 2011). The authors believe it is necessary to
remove the U3 partition. Some thumb drives, such as SanDisk's Cruzer, come with a U3 Launchpad
application that allows users to delete the U3 partition.
Linux started supporting USB 3.0 with the September 2009 release of the 2.6.31 kernel. Although Ubuntu
9.10, Fedora 14, and their later versions all include support for USB 3.0, the technology to create
a bootable USB 3.0 thumb drive is still in its infancy.
Most Java programming textbooks use Oracle's JDK for their sample code. Instructors should
test the sample code provided by textbooks to avoid unnecessary compatibility issues between
Oracle's JDK and OpenJDK, as well as between JDK version 7 and version 6.
CONCLUSION
Thumb drives are compact, lightweight, and very affordable. Currently available technologies and
open-source tools have made the process of building a live USB easy. Although minor compatibility
issues with the BIOS and other hardware may exist, USB thumb drives are ideal for building a portable
laboratory for teaching Java programming and Linux shell scripting. The cost is low both to students
learning these subjects and to schools teaching them.
FUTURE DEVELOPMENT
At the time this paper is being written, the authors are evaluating the feasibility of teaching students to
write Visual C# code with a thumb-drive-based toolkit. Solutions such as Mono can be installed on a
Linux-based bootable thumb drive to develop and test C# applications. Mono is an open-source
implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the
Common Language Runtime.
REFERENCES
Chen, F., Chen, R., & Chen, S. (2011). Advances in Computer Science, Environment, Ecoinformatics,
and Education. Communications in Computer and Information Science, 218, 245-250
Collins, D. (2006). Using VMWare and live CD's to configure a secure, flexible, easy to manage
computer lab environment. Journal of Computing Sciences in Colleges, 21(4), p.273-277
Faronics Corporation. (2011). Deep Freeze Standard. Retrieved on August 17, 2011 from
http://www.faronics.com/standard/deep-freeze/
Fedora Project. (2011). How to create and use Live USB. Retrieved on August 17, 2011 from
http://fedoraproject.org/wiki/How_to_create_and_use_Live_USB
Laurie, V. (2011). Environment Variables in Windows XP. Retrieved on August 15, 2011 from
http://vlaurie.com/computers2/Articles/environment.htm.
Microsoft TechNet. (2011). Command shell overview. Retrieved on August 15, 2011 from
http://technet.microsoft.com/en-us/library/bb490954.aspx
Oracle. (2011a). OpenJDK FAQ. Retrieved on August 15, 2011 from http://openjdk.java.net
Oracle. (2011b). JDK 7 and JRE 7 Installation Guide. Retrieved on August 15, 2011 from
http://download.oracle.com/javase/7/docs/webnotes/install/
Pen Drive Linux. (2011). Universal USB Installer – Easy as 1 2 3. Retrieved on August 17, 2011 from
http://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/
Pillay, H. (2005). What are live CDs, and how do they work? Free Software Magazine.
http://www.freesoftwaremagazine.com/files/www.freesoftwaremagazine.com/nodes/1103/1103.pdf
Red Hat, Inc. (2011). Java/FAQ. Retrieved on August 17, 2011 from http://fedoraproject.org/wiki/Java/FAQ
SanDisk Corporation. (2011). U3 Launchpad End of Life Notice. Retrieved on August 17, 2011 from
http://kb.sandisk.com/app/answers/detail/a_id/5358/kw/u3%202009/r_id/101834
APPENDIX A: FEDORA 15 LIVE MEDIA TEMPORARY WORKAROUND
1. When booting live media, press [Tab] at the following bootloader screen.
Welcome to Fedora-15-i686-Live-Desktop.iso!
Boot
Boot (Basic Video)
Verify and Boot
Memory Test
Boot from local drive
Press [Tab] to edit options
2. Add rd.break=pre-trigger to the end of the boot arguments on screen, as shown in the
following figure, and then press [Enter].
Welcome to Fedora-15-i686-Live-Desktop.iso!
Boot
Boot (Basic Video)
Verify and Boot
Memory Test
Boot from local drive
> vmlinuz0 initrd=initrd0.img root=live:UUID=AA2F-CE85 rootfstype=vfat rw
liveimg overlay=UUID=AA2F-CE85 quiet rhgb rd.luks=0 rd.md=0 rd.dm=0
rd.break=pre-trigger
3. When the boot process stops and presents a shell prompt, type mkdir /overlayfs ;
exit and press [Enter].
Dropping to debug shell.
Sh: can’t access tty; job control turned off
Pre-trigger:/# mkdir /overlayfs ; exit
4. When the boot process stops again and presents another shell prompt, simply type exit and
press [Enter].
Dropping to debug shell.
Sh: can’t access tty; job control turned off
Pre-trigger:/# exit
QUEUING THEORY AND LINEAR PROGRAMMING APPLICATIONS
TO EMERGENCY ROOM WAITING LINES
Melvin Ramos, Mario J. Córdova, Miguel Seguí and Rosario Ortiz-Rodríguez
University of Puerto Rico, Puerto Rico
ABSTRACT
This paper applies a queuing model for multiple servers and a single line (M/M/s) to study the
process of admission of the Adult Emergency Room at Good Samaritan Hospital in
Aguadilla, Puerto Rico. Based on analysis of historical data on the arrival rates of patients to
the emergency room, as well as their service times, we determined the probability distributions that
these rates followed, respectively. The data utilized here was collected from hospital registrations
corresponding to the years 2007-2009. Queuing analysis was used to determine the minimum number
of health care providers needed to maintain a desired waiting time in the emergency room, in this case
set at an average of five minutes. Furthermore, linear programming was applied to determine the
optimal distribution of personnel needed to maintain the expected waiting times with the minimum
number of caregivers on the payroll, thus reducing cost. We conclude that it is possible to reduce the
average waiting time in the emergency room environment while keeping the corresponding costs under
control. This paper is meant to illustrate the potential benefits of applying Operations Research
techniques to health care management, and it should serve as a valuable decision-making tool for
health professionals.
Keywords: Queuing Theory, Linear Programming, Emergency Room Waiting Times, Operations
Research, Health Care Personnel Assignment.
INTRODUCTION
The need to provide medical services efficiently and effectively is a major concern for both health
services professionals and patients, and its importance only increases in the particular case of hospital
emergency rooms. Many of the strategies dedicated to the effective management of emergency room
resources are developed without the aid of model-based quantitative analysis (Laffel & Blumenthal, 1989).
One of the most serious concerns in managing the quality of services in such an environment is the
waiting time to enter service during the visit. Emergency room waiting times increase each year in the
United States, as suggested by a study published in the journal "Health Affairs" in 2008. The study,
conducted by researchers at Harvard Medical School belonging to the Cambridge Health Alliance in
Massachusetts, is the first detailed analysis of trends in emergency room waiting times across the
United States. Using data from the National Center for Health Statistics (NCHS) on more than 90,000
patient visits to emergency rooms, in which they analyzed the time between a patient's arrival and the
moment the patient was seen by a doctor, the researchers found that the increase in waiting time affects
all patients equally, including those with or without insurance and people of different races and ethnic
groups.
It is for the reasons mentioned above (the fact that the average waiting time in emergency rooms is
increasing, that the expected waiting time is an opportunity cost to patients, and that long delays carry a
health risk) that we conducted this research in the Adult Emergency Department at Good Samaritan
Hospital in Aguadilla, Puerto Rico: to study, within the admission process, the average waiting time of
patients in line, and thus identify the best arrangement of personnel required to maintain a reasonable
waiting time.
The Good Samaritan Hospital, Inc. is a nonprofit community corporation that seeks to offer the highest
quality health services in the northwest area of Puerto Rico. The corporation was registered in the State
Department on June 27, 1999. It began managing the hospital on July 1, 1999, completing the sale
process in March 2000. It belongs to the San Carlos Health System, which includes the Good Samaritan
Community Hospital in Aguadilla and the San Carlos Borromeo Hospital in Moca. Since its inception, the
hospital has demonstrated its commitment to the community and its employees. Its purpose is to provide
hospital medical services of excellence to the population of the northwest area. With over 450 employees,
the Good Samaritan Hospital contributes to "improving the quality of life of patients in the region, meeting
their health needs," as set forth in its organizational mission, by keeping skilled, service-oriented
employees in the health field. The Hospital has a Pediatric Emergency Room and an Adult Emergency
Room operating 24 hours a day throughout the year (Good Samaritan Hospital, Inc., 2010).
THEORETICAL BACKGROUND AND LITERATURE REVIEW
Rapid access to service is a critical factor in the good performance of an emergency room. In an
environment where emergency rooms regularly lack the necessary personnel, analysis of arrival
patterns and the use of queuing models can be extremely useful in identifying effective ways of
allocating staff (Green, Soares, Giglio, & Green, 2006). Research conducted by L. V. Green and
colleagues in 2006, entitled "Using Queuing Theory to Increase the Effectiveness of Emergency
Department Provider Staffing," used the M/M/s queuing model to estimate the appropriate number of
personnel in different periods in order to reduce the number of patients leaving the emergency room
before being seen. Among their observations, they found that making staff adjustments as recommended
by the model could reduce the number of patients who left the system in spite of the increased demand
for the service. They conclude that the queuing model, by its ability to provide a rigorous, scientific basis
for predicting patient waiting times, is a good tool for adjusting staff schedules and improving the
effectiveness of the emergency room.
Using queuing models can positively affect the waiting time faced before being treated at an emergency
room. Professor Gerard F. Santori, a member of the faculty of the Universidad Nacional de La Plata in
Argentina, conducted research on the trauma room of a public hospital. In his work he applied the M/M/s
queuing model to study the performance of an emergency room. Within the methodology, he modeled
the arrival and service rates using a negative exponential probability distribution and used the number of
available beds in the room as the number of servers. The system studied was a single line with multiple
servers. Results showed that it is possible to reduce the waiting time of a patient by increasing the
number of servers. At the end of the research, he concluded that a reduction in the arrival rate causes a
decrease in waiting times in line, but in the case of an emergency room it is not desirable to limit the
arrival of patients, so he indicates that an acceptable strategy is to try to increase the service rate
(Santori, 2005).
Another study related to this research was performed by Cheng-Hua Wang, Yuan-Duen Lee, Wei-Lin,
and doctoral candidate Pang-Mau Lin of Taiwan's Chang Jung Christian University. In their article
"Application of Queuing Model in Healthcare Administration with Incorporation of Human Factors," they
indicate that changes in social structures force the need to improve health services. They state that
more resources are being directed to the health care industry due to the increasing demand for this
service and related issues. They note that, with limited resources, many countries are beginning to
realize that the costs of health care become more difficult to bear as time goes on. As a result, several
studies are emerging globally that use methods such as simulation, scheduling, queuing theory models,
and other tools designed to help increase productivity. The authors conducted a study using queuing
theory and simulation to construct a useful model for health care organizations in Taiwan. They used the
M/M/s/FCFS/∞/∞ model to provide quantitative information to the hospital and objective suggestions on
its operations and number of servers (Wang, Lee, Wei-Lin, & Lin, 2006).
A statistical description of the arrival and service rates is necessary for queuing analysis. The aim
of the theory is to describe several performance measures, including the following: the time a customer
waits in line to receive service, the total waiting time of a customer in the system, the length of the line
or number of customers that make up the line, the number of customers in the system (both in line and
at the servers), and the interval of time that begins when all servers are busy and ends when at least one
server is released, known as the busy period. Other performance measures include the number of busy
servers, the proportion of customers who are lost, and the proportion of customers who decide to join the
line (Heyman, 2001).
One of the most important contributions of mathematical models based on queuing theory is
determining the appropriate number of servers in a multi-server system. Examples of the use of
queuing models include determining the number of toll booths on a toll road (Edie, 1954), the number of
tellers in a bank (Foote 1976; Deutsch and Mabert 1980; Kolesar 1984), the number of voting machines
in elections (Grant, 1980), the number of operators in a telephone system (Linder 1969; Globerson 1979;
McKeown 1979), the number of available beds in a hospital (Esogbue and Singh, 1976), and the number
of police patrols in a city (Green and Kolesar, 1984). In addition, a substantial percentage of queuing
research deals with the problem of finding the appropriate number of servers. A large number of
mathematical queuing models have been suggested in the literature and applied to verify how well they
fit particular situations (Grassmann, 1988).
OBJECTIVES
The aim of this paper is to show how operations management tools, such as queuing theory and linear
programming, can help make more informed decisions, particularly in the allocation of resources such
as the nurses and administrative assistants who are part of the admission process. In this way, it
provides a valuable resource for hospital management, helping them devise strategies that result in a
more efficient service, able to allocate the necessary resources when they are needed most and thereby
influence the average time a patient waits in line at different times of day.
METHODOLOGY
For the analysis of waiting time in the queue, we used a tool widely known to operations managers and
researchers: queuing theory, part of a series of mathematical tools used for the analysis of probabilistic
systems of clients and servers. Queuing theory was developed to provide a model to predict the
behavior of a line in a system that offers services under random demand. The first investigation using
this method was performed by Agner Krarup Erlang, a Danish-born mathematician who studied
telephone traffic congestion and, in 1908, published "Use of Waiting-Line Theory in the Danish
Telephone System." He noted that the phone system was characterized by a Poisson distribution of
arrivals, exponential holding times, and a single server. From this, work on applying the theory to
telephone problems continued. Molina, in 1927, published "Application of the Theory of Probability to
Telephone Trunking Problems," and Thornton Fry, in 1928, presented essentially the previous work of
Erlang in a book entitled "Probability and Its Engineering Uses." In 1930, Felix Pollaczek studied several
systems with Poisson arrivals, arbitrary holding times, and multiple servers. Further works in this field
were made by Kolmogorov in 1931 in his research entitled "Sur le Problème d'Attente"; by Khintchine,
who in 1932 studied problems with Poisson arrivals and arbitrary holding times but a single server; and
by Crommelin, who in 1932 published "Delay Probability When the Holding Times Are Constant"
(Saaty, 1957). Today, applied queuing research continues to develop in various areas of human
endeavor, from computer systems, airlines, and fast food restaurants to emergency rooms.
Definitions
For a better understanding of the concepts and factors that are studied in queuing theory, it is
necessary to define the terms that are generally used:
Table 1: Definition of general terms for queuing theory

Arrival rate (λ): The average number of clients that require service in a specific period of time.
Queue capacity: The maximum number of clients allowed to wait in line.
Clients: The clients may be people, inventory, raw material, incoming digital messages, or any other
entity that can wait in line for a process to take place or to receive some service.
Queue: A series of clients waiting for a process or service.
Queue discipline: The priority rule of the system by which the next client to receive service is selected
from among the waiting clients. A common line discipline is first-come-first-served, better known as FCFS.
Server (s): A human worker, a machine, or any other entity that can be modeled as executing some
process or service for the clients waiting in line.
Service rate (µ): The average number of clients that the system can handle within a determined period
of time.
Stochastic process: A system of events in which the times between events are random and independent
of one another. In queuing models, arrival and service times are modeled as stochastic processes.
Utilization (ρ): The proportion of time that a server is busy attending to clients.

Modified table (Juran, 2010)
Queuing Models Notation
In queuing theory, Kendall notation is the standard used to describe and classify a queuing model. The
system was originally proposed by David George Kendall in 1953 as a three-factor notation (A/B/C) to
identify queues; it has since been expanded to include six factors. A queue can be described as
A/B/C/K/N/D or, more concisely, as A/B/C. In the short version it is assumed that K = ∞, N = ∞, and
D = FCFS (Gross & Harris, 1998; Wikipedia contributors, 2010). For example, M/M/2 denotes Poisson
arrivals, exponential service times, and two parallel servers, with infinite capacity, an infinite calling
population, and FCFS discipline.
Table 2: Definition of terms for arrivals into a queue

Symbol   Name                      Description
M        Markovian                 Poisson process of arrivals
D        Degenerate distribution   Fixed time between arrivals
Ek       Erlang distribution       An Erlang distribution with shape parameter k
G/GI     General distribution      Independent arrivals

Modified table (Wikipedia contributors, 2010)
Table 3: Definition of terms for service in a queue

Symbol   Name                      Description
M        Markovian                 Exponential service time
D        Degenerate distribution   Constant service time
Ek       Erlang distribution       An Erlang distribution with shape parameter k
G/GI     General distribution      Independent service time

Modified table (Wikipedia contributors, 2010)
C: the number of parallel servers.
K: the restriction on system capacity, that is, the maximum number of clients allowed in the system,
including those being served. When this number is omitted, the capacity of the system is assumed to be
infinite (∞).
N: the population from which the customers come. If this number is omitted, the population is assumed
to be unlimited or infinite (∞).
Table 4: Definition of terms for the priority discipline for service in the queue

Symbol       Name                                           Description
FIFO/FCFS    First In First Out / First Come First Served   The clients are served in arrival order.
LIFO/LCFS    Last In First Out / Last Come First Served     The first client served is the last to arrive.
RSS          Random Selection for Service                   The arrival order is not taken into account.
PR           Priority                                       Priority levels are assigned.

Modified table (Wikipedia contributors, 2010)
Some authors choose to exchange the notation, placing the queue discipline before the capacity and the
population; this usually does not cause major problems because the factors remain distinguishable.
Model
For the analysis of the line, the M/M/s queuing model was used, where:
λ = average arrival rate (number of arrivals per unit time)
μ = average service rate
s = number of servers
ρ = average system utilization
Formula 1 (average system utilization):

    \rho = \frac{\lambda}{s\mu}

Formula 2 (P0 = probability that the system is empty):

    P_0 = \left[ \sum_{n=0}^{s-1} \frac{(\lambda/\mu)^n}{n!} + \frac{(\lambda/\mu)^s}{s!\,(1-\rho)} \right]^{-1}

Formula 3 (Pn = probability that n clients are in the system):

    P_n = \frac{(\lambda/\mu)^n}{n!}\,P_0 \quad (n \le s), \qquad
    P_n = \frac{(\lambda/\mu)^n}{s!\,s^{\,n-s}}\,P_0 \quad (n > s)

Table 5: Formulas for mean metrics for the state of a queue

    1. Lq = average number of clients in line:       L_q = \frac{(\lambda/\mu)^s \rho}{s!\,(1-\rho)^2}\,P_0
    2. Wq = average time waiting in line:            W_q = L_q / \lambda
    3. W = average time in the system:               W = W_q + 1/\mu
    4. L = average number of clients in the system:  L = \lambda W
The M/M/s queuing model assumes that the servers are identical in terms of service capability. In a
multiple-server system, clients or patients wait in a single line until a server becomes free. These
formulas are applicable if the following conditions exist (Anderson, Sweeney, & Williams, 2006):
1. Patients' arrivals follow a Poisson probability distribution.
2. The service time of each server follows an exponential probability distribution.
3. The service rate (μ) is the same for each server.
4. Patients wait in a single line and then move to the first server available for treatment.
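As a worked illustration of these formulas (the numbers below are hypothetical and are not drawn from
the hospital data), consider an arrival rate of λ = 5 patients per hour and a service rate of μ = 9 patients
per hour. With a single server (s = 1),

    W_q = \frac{\lambda}{\mu(\mu-\lambda)} = \frac{5}{9 \times 4} \approx 0.139 \text{ hours} \approx 8.3 \text{ minutes},

whereas with two servers (s = 2), ρ = 5/18 ≈ 0.278, P_0 ≈ 0.565, L_q ≈ 0.046, and

    W_q = L_q / \lambda \approx 0.0093 \text{ hours} \approx 0.56 \text{ minutes}.

This is the same qualitative pattern reported in the results below: adding a second server reduces the
average wait in line from several minutes to a fraction of a minute.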
About the Sample
Description
We performed an analysis of the arrival and service times of patients who used the Adult Emergency
Room of the Good Samaritan Hospital during the calendar years 2007, 2008, and 2009. The data include
information such as patient arrival time, time of entry to and exit from the triage station, and time of
entry to and exit from the registration station for each patient who used the emergency room during that
period. The volume is approximately 35,000 patients per year.
Data Collection
The Good Samaritan Hospital provided a database of all admissions that occurred during the years 2007,
2008, and 2009 in the adult emergency room; the number of admissions is approximately 35,000 patients
per year. The database only included the arrival time and the other times at which each patient moved
to a different stage of the admissions process. The data do not include any personally identifiable or
health information that would compromise the anonymity of patients; we worked only with numeric data.
These data were used to determine the patients' arrival rates and the service rates.
ANALYSIS
Kolmogorov-Smirnov Test for Poisson
Once the outliers were removed, we examined which statistical distributions fit the data on time
between arrivals and on service time, expecting them to be Poisson and exponential, respectively, and
thereby to fulfill part of the M/M/s queuing model assumptions. The Kolmogorov-Smirnov goodness-of-fit
test for the Poisson distribution was applied to each hour of each year using the SPSS statistical
package. For those periods where the P-value was greater than an alpha of 0.05, the null hypothesis
that the arrivals of patients fit a Poisson distribution was not rejected. For both lines it can be assumed
that most of the periods fit a Poisson distribution over the different years, with only one case of rejection,
in 2007, for each line. Given that for most periods there is not enough statistical evidence to reject the fit,
a Poisson distribution is assumed even in those periods where there is statistical evidence against it,
and this was noted as a limitation.
F Test for Exponential
We examined the service times for the triage and registration areas to study whether they are
exponential and fulfill the M/M/s queuing model assumption that service time is exponentially distributed.
For this purpose, we used the F test for exponential service times recommended by Donald Gross and
Carl M. Harris in their book "Fundamentals of Queuing Theory," in which they assert that this test is more
accurate in identifying the goodness of fit of data to an exponential distribution than other tests.
The test works as follows: the n inter-occurrence times t1, t2, ..., tn are grouped randomly into two
groups of sizes r (= n/2, rounded) and n − r. It follows that the quantity

Formula 4:

    F = \frac{\left( \sum_{i=1}^{r} t_i \right) / r}{\left( \sum_{i=r+1}^{n} t_i \right) / (n-r)}

is the ratio of two Erlang random variables and is distributed as an F distribution with 2r and 2(n − r)
degrees of freedom when the hypothesis of exponentiality is true. Therefore, a two-tailed F test was
conducted on the F value calculated from the data set to determine whether the sequence is really
exponential (Gross & Harris, 1998). For those periods where the F value fell between the critical values,
it was accepted that the service times fit an exponential distribution. The test results showed that for
most periods exponentiality is not rejected, with only one case of rejection in 2008 and five in 2009. For
purposes of this research it is assumed that all periods are exponential.
RESEARCH RESULTS
Application of the Model M/M/s
As we have seen, patient arrivals to the triage and registration services can be modeled as Poisson and
the corresponding service times as exponential. Therefore, we conclude that the M/M/s queuing model
is appropriate for this case. To perform the queuing analysis it was necessary to define the arrival rate
per hour and the service rate per hour for each line in each of the three years, understood as the
parameters (λ) and (μ) respectively. Both serve as a starting point for the rest of the analysis. The
following graphs summarize the average number of patients who joined each line per hour during the
different periods.
Figure 1: Average Arrivals of Patients Per Hour for Triage Line Chart
The graph above shows the average arrival rate of patients according to the time of day for the triage
line in each year. It can be seen that the largest average number of patient arrivals occurs at about
10:00 a.m. The same appears to be true for the registration line (below).
Figure 2: Average Arrivals of Patients Per Hour in the Line for Registration Chart
Similarly, we analyzed the average service rate, that is, the number of patients served by a server per
hour. The service rate is independent of the number of patients in the line (Srivastava, Shenoy, &
Sharma, 1989). The service rates for the triage station are presented in the graph below:
Figure 3: Chart of Patients Treated Per Hour in Triage
It can be seen that for the three years the service rate is approximately 9 to 11 patients per hour, with a
significant variation in the period from 6:00 AM to 7:00 AM. For the triage station the service rate
remains at similar levels, in contrast to the service rates of the registration area (below), which show
more variation with respect to the time of day; this holds for 2007, 2008, and 2009.
Figure 4: Chart of Patients Treated Per Hour in Registration
After identifying the arrival and service rates, we performed a queuing analysis for both lines for each
year using both parameters. For the analysis we applied the M/M/s queuing model formulas in a
spreadsheet. The analysis was performed increasing the number of servers progressively from 1 up to 5
in both lines to observe how each line behaved as the number of servers grew.
Of all the performance measures obtained through the queuing analysis, attention was fixed primarily on
the average time a patient waits in the queue (Wq), because the interest of this research is to reduce the
waiting time in line rather than the time spent in the entire system, which includes the examination the
patient undergoes and can vary depending on each patient's health situation. Put another way, our
interest is to reduce the line, not the service each patient receives individually. Since the queuing
analysis covered all 24 hourly periods for each of the years, the full analysis is very long; for that reason,
it is practical to discuss the results of a particular period. To this end, we take as an example the period
starting at 8:00 AM. For the average time that each patient waits in the queue (Wq), it is very noticeable
that a reduction is achieved by adding an additional server. For example, for the triage line in 2007, the
time a patient waits in line with only one server is 6.12 minutes (= 0.1021 × 60 minutes), but adding an
additional server brings the wait in line close to zero, at 0.41 minutes, which means that the waiting time
would be almost nonexistent for the triage line in that period. The corresponding results are a reduction
from 8.64 minutes to 0.58 minutes for 2008 and from 9.01 minutes to 0.60 minutes for 2009 when adding
an additional server, which would be an additional nurse for triage.
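These minute figures follow directly from the Wq column of Table 6 below, converting hours to minutes:

    2007: \; 0.1021 \times 60 \approx 6.12 \text{ min} \;\rightarrow\; 0.0068 \times 60 \approx 0.41 \text{ min}
    2008: \; 0.1440 \times 60 \approx 8.64 \text{ min} \;\rightarrow\; 0.0096 \times 60 \approx 0.58 \text{ min}
    2009: \; 0.1501 \times 60 \approx 9.01 \text{ min} \;\rightarrow\; 0.0100 \times 60 = 0.60 \text{ min}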
Table 6: Queuing Analysis for Triage Line in the Period from 8:00 AM

Number of servers                  1        2        3        4        5

Triage 2007
  L  (patients in the system)   0.9480   0.5173   0.4894   0.4869   0.4867
  Lq (patients in line)         0.4613   0.0306   0.0027   0.0002   0.0000
  W  (hours)                    0.2097   0.1144   0.1083   0.1077   0.1077
  Wq (hours)                    0.1021   0.0068   0.0006   0.0001   0.0000

Triage 2008
  L  (patients in the system)   1.3189   0.6188   0.5738   0.5692   0.5688
  Lq (patients in line)         0.7501   0.0500   0.0050   0.0005   0.0000
  W  (hours)                    0.2531   0.1188   0.1101   0.1093   0.1092
  Wq (hours)                    0.1440   0.0096   0.0010   0.0001   0.0000

Triage 2009
  L  (patients in the system)   1.3137   0.6176   0.5728   0.5683   0.5678
  Lq (patients in line)         0.7459   0.0498   0.0050   0.0005   0.0000
  W  (hours)                    0.2643   0.1243   0.1152   0.1143   0.1143
  Wq (hours)                    0.1501   0.0100   0.0010   0.0001   0.0000
For the registration line the scenario is similar: for 2007 we obtained a reduction from 2.27 minutes to
0.12 minutes, from 4.07 minutes to 0.25 minutes in 2008, and from 4.81 minutes to 0.31 minutes for
2009. Although it is desirable to obtain the shortest possible waiting time for each of the periods, it is
necessary to consider that the inclusion of an additional server may represent additional costs to the
hospital. It is for this reason that the purpose of the analysis is to identify the critical periods where it is
absolutely necessary to add an additional server. To this end, we established a maximum average
waiting time of five minutes or less for each line. The value of five minutes arises from our own judgment
of what would be an acceptable time to wait. Then, under this new criterion, we proceeded to increase
the number of servers from one to two only in those periods that had an average waiting time in line
greater than five minutes.
Table 7: Queuing Analysis for Registration Line for the Period from 8:00 AM

Number of servers                  1        2        3        4        5

Registration 2007
  L  (patients in the system)   0.4277   0.3064   0.3000   0.2996   0.2996
  Lq (patients in line)         0.1281   0.0069   0.0004   0.0000   0.0000
  W  (hours)                    0.1265   0.0907   0.0888   0.0886   0.0886
  Wq (hours)                    0.0379   0.0020   0.0001   0.0000   0.0000

Registration 2008
  L  (patients in the system)   0.6695   0.4178   0.4023   0.4011   0.4010
  Lq (patients in line)         0.2685   0.0168   0.0013   0.0001   0.0000
  W  (hours)                    0.1691   0.1055   0.1016   0.1013   0.1013
  Wq (hours)                    0.0678   0.0042   0.0003   0.0000   0.0000

Registration 2009
  L  (patients in the system)   0.7389   0.4450   0.4265   0.4250   0.4249
  Lq (patients in line)         0.3140   0.0201   0.0016   0.0001   0.0000
  W  (hours)                    0.1885   0.1135   0.1088   0.1084   0.1084
  Wq (hours)                    0.0801   0.0051   0.0004   0.0000   0.0000
For the three years, the following numbers of servers (employees) are recommended for each hourly
period in the two lines:
Table 8: Amount of Staff Recommended in Triage Per Hour

Period     Employees (Year 2007)   Employees (Year 2008)   Employees (Year 2009)
Hour 0              1                       1                       1
Hour 1              1                       1                       1
Hour 2              1                       1                       1
Hour 3              1                       1                       1
Hour 4              1                       1                       1
Hour 5              1                       1                       1
Hour 6              1                       1                       1
Hour 7              1                       1                       1
Hour 8              2                       2                       2
Hour 9              2                       2                       2
Hour 10             2                       2                       2
Hour 11             2                       2                       2
Hour 12             2                       2                       2
Hour 13             2                       2                       2
Hour 14             2                       2                       2
Hour 15             2                       2                       2
Hour 16             2                       2                       2
Hour 17             2                       2                       2
Hour 18             2                       2                       2
Hour 19             2                       2                       2
Hour 20             2                       2                       2
Hour 21             2                       2                       2
Hour 22             1                       1                       1
Hour 23             1                       1                       1
Table 9: Amount of Staff Recommended in Registration Per Hour

Period     Employees (Year 2007)   Employees (Year 2008)   Employees (Year 2009)
Hour 0              1                       1                       1
Hour 1              1                       1                       1
Hour 2              1                       1                       1
Hour 3              1                       1                       1
Hour 4              1                       1                       1
Hour 5              1                       1                       1
Hour 6              1                       1                       1
Hour 7              1                       1                       1
Hour 8              1                       1                       1
Hour 9              1                       2                       1
Hour 10             1                       2                       2
Hour 11             1                       2                       2
Hour 12             1                       2                       2
Hour 13             1                       2                       2
Hour 14             1                       2                       2
Hour 15             1                       2                       2
Hour 16             1                       2                       2
Hour 17             1                       2                       2
Hour 18             1                       2                       2
Hour 19             1                       2                       2
Hour 20             1                       2                       2
Hour 21             1                       1                       2
Hour 22             1                       1                       1
Hour 23             1                       1                       1
Through observation it may be noted that the periods of greatest congestion in the emergency room
occur approximately between 8:00 AM and 9:00 PM for triage and between 9:00 AM and 10:00 PM for
registration; these are the periods that require an additional server to maintain the average waiting time
in line at 5 minutes or less, with the exception of 2007, in which the registration area did not require an
additional server.
Use of Linear Programming
After obtaining the recommended number of servers through the queuing analysis, we considered the
creation of a linear program that minimizes the number of employees by arranging the staff schedules of
the triage and registration areas so that they meet the staffing levels recommended by the queuing
model. For this purpose, we formulated a "work planning" (shift scheduling) linear program using
eight-hour work shifts, in which the constraints require compliance with the recommended number of
servers for each period. The program is defined by its decision variables, an objective function, and a
set of coverage constraints.
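The original formulation appears only as figures that are not reproduced in this transcription. The
following is a minimal sketch of a shift-scheduling formulation consistent with the description above; the
symbols x_j, r_h, and S(h) are our own notation, not the authors'.

    \min \; \sum_{j=0}^{23} x_j

    \text{subject to} \quad \sum_{j \in S(h)} x_j \;\ge\; r_h, \qquad h = 0, 1, \ldots, 23,

    x_j \ge 0 \text{ and integer}, \qquad j = 0, 1, \ldots, 23,

where x_j is the number of employees who begin an eight-hour shift at hour j, r_h is the number of
servers recommended by the queuing analysis for hour h (Tables 8 and 9), and S(h) is the set of starting
hours whose eight-hour shift covers hour h (with wrap-around at midnight). The objective minimizes the
total number of employees on the schedule, while the constraints guarantee the recommended coverage
in every hourly period.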
The linear program provided staff schedules for the triage and registration areas that comply with the
number of servers obtained from the queuing model. The resulting schedule for the triage area was one
employee starting work at 12:00 AM, two employees starting at 8:00 AM, and two employees starting at
4:00 PM; this fulfills the requirement of two nurses during the period from 8:00 AM to 9:00 PM, which is
the same for all three years. For the registration area there were different scenarios for each year. For
2007 the recommendation was one administrative assistant at 12:00 AM, one at 8:00 AM, and another at
4:00 PM, because for that year no periods were identified that need two servers. For 2008 the
recommendation was one employee starting at 12:00 AM, another at 8:00 AM, another entering
immediately afterward at 9:00 AM, and finally two employees beginning their day at 4:00 PM, in order to
cover the period from 9:00 AM to 8:00 PM that required an increase in the number of servers to two. For
2009 the recommendation was one employee at 12:00 AM, one at 8:00 AM, one at 10:00 AM, and two at
4:00 PM, in order to cover the period from 10:00 AM to 9:00 PM, which requires two employees for every
hour.
CONCLUSION
As a result of this study, we conclude that queuing theory and linear programming are useful tools for
creating models that assist in decisions regarding the allocation of staff in the emergency room,
particularly in the triage and registration functions. The prudent application of these techniques has the
potential to represent large savings in emergency room operating costs by optimizing the number and
schedule of health professionals and of those responsible for the administrative functions preliminary to
consultation with the doctor on duty. It also plays a critical role in improving social efficiency, since the
emergency room is an essential service to the community served by health institutions. This is achieved
by promoting greater control of the length of time the patient waits during the visit to the emergency room.
At the end of this research, we were able to offer a personnel arrangement that keeps the average
waiting time in line at a maximum of 5 minutes for each phase of the admissions process throughout the
day, by maintaining one server during those hours that did not have an average waiting time in line
greater than 5 minutes and doubling the number of servers in those periods with a waiting time longer
than 5 minutes. It was observed that the lines behaved similarly across the three years studied, with
similar arrival and service rates in each hour; consequently, the linear program recommended similar
staffing arrangements for the two phases. Taking the similarity of the lines in 2007, 2008, and 2009 as a
decisional basis motivates the use of the suggested staff schedules in subsequent years. However, it is
recommended to record regularly when patients enter the system and when they enter and exit each
server, so that a proper analysis can be conducted in future years. In summary, the analysis serves as a
more reliable decision-making tool for managing the average waiting time in the lines that patients face
before receiving each of the services that are part of the admissions process in the Emergency Room at
Good Samaritan Hospital in Aguadilla, Puerto Rico.
RECOMMENDATIONS
For future research on this subject, it is suggested that the analysis include the third line, in which
patients wait for treatment by the physician, to obtain a more complete view of the admissions process.
It would also be interesting to use other analysis tools, such as simulation or other queuing models, to
see what new findings are obtained. In addition, the inclusion of other emergency rooms in the western
and northwestern areas of the island is suggested, in order to make a comparative analysis and consider
whether the results of this research can be applied equally to emergency rooms in other hospitals.
Finally, it is recommended to extend the research to other divisions of a hospital, such as the pediatric
emergency room, the maternity ward, and the ambulatory surgery area, among others.
BIBLIOGRAPHY
Anderson, D. R., Sweeney, D. J., & Williams, T. A. (2006). Quantitative Methods for Business. Ohio:
Thomson Higher Education.
Brochmeyer, E., Halstrom, H. L., & Jensen, A. (1948). The Life and Works of A. K. Erlang. Transaction
of the Danish Academy Technical Sciences.
Bruin, A. M., Rossum, A. C., Visser, M. C., & Koole, G. M. (2006). Modeling the emergency cardiac inpatient flow: An application of queuing theory. Springer Science, 1-23.
Crosby, P. B. (1979). Quality is Free. New York: McGraw-Hill.
Davis, M. M., & Vollman, T. (1990). A Framework for Relating Waiting Time and Customer Satisfaction
in Service Operation. Journal of Service Marketing, 55-48.
Deutsch, H., & Mabert, V. A. (1980). Queueing theory and teller staffing: A successful application.
Interfaces, 63-67.
Edie, L. C. (1954). Traffic delays at toll booths. Journal of the Operations Research Society of America,
107-138.
Esogbue, A. O., & Singh, A. J. (1976). A stochastic model for an optimal priority bed distribution
problem in a hospital ward. Operations Research, 884-887.
Evans, J. R., & Lindsay, W. H. (1996). The Management and Control of Quality. St. Paul, MN: West.
Feigenbaum, A. V. (1983). Total Quality Control. New York: McGraw-Hill.
Foote, B. (1976). A queuing case study in drive-in banking. Interfaces, 31-37.
Garvin, D. A. (1988). Managing Quality. New York: Free Press/Mcmillan.
Globerson, S. (1979). Manpower planning for a telephone service department. Interfaces, 105-111.
Grant, R. H. (1980). Reducing voter waiting times. Interfaces, 19-25.
Grassmann, W. K. (1988). Finding the Right Number of Servers in Real-World Queuing Systems.
Interfaces, 94-104.
Green, L. V., & Kolesar, P. J. (2004). Improving Emergency Responsiveness with Management
Science. Informs, 1001-1014.
Green, L. V., Soares, J., Giglio, J. F., & Green, R. A. (2006). Using Queuing Theory to Increase the
Effectiveness of Emergency Department Provider Staffing. Society for Academic Emergency
Medicine, 1-8.
Gross, D., & Harris, C. M. (1998). Fundamentals of Queuing Theory. New York: John Wiley & Sons.
Heyman, D. P. (2001). Queueing Theory. In S. I. Gass & C. M. Harris (Eds.), Encyclopedia of Operations
Research and Management Science (pp. 679-686). New Jersey: Springer US.
Hospital Buen Samaritano, Inc. (2010). Retrieved November 10, 2010 from the Hospital Buen
Samaritano website: http://www.hbspr.org/
Ishikawa, K. (1986). Guide to Quality Control. White Plains, NY: Kraus International Publications.
Juran, D. (2010). Introduction to Queuing Theory. NYU: Leonard N. Stern School of Business. Retrieved
November 10, 2010 from http://pages.stern.nyu.edu/~djuran/queues.doc
Kolesar, P. (1984). Stalking the endangered CAT: A queueing analysis of congestion at automatic teller
machines. Interfaces, 16-26.
Krueger, A. B. (2009, February 9). The New York Times. Retrieved November 7, 2010 from The New
York Times Online: http://economix.blogs.nytimes.com/2009/02/09/a-hidden-cost-of-health-care-patient-time/
Green, L., & Kolesar, P. (1984). The feasibility of one officer patrol in New York City. Management
Science, 964-981.
Laffel, G., & Blumenthal, D. (1989). The case for using industrial quality management science in health
care organizations. JAMA, 2869-2873.
Linder, R. W. (1969). The development of manpower and facilities planning methods for airline
telephone reservations offices. Operational Research Quarterly, 3-21.
Mango, P. D., & Shapiro, L. A. (2001). Hospitals get serious about operations. The McKinsey Quarterly,
74-85.
McKeown, P. G. (1979). An application of queueing analysis to the New York State child abuse and
maltreatment telephone reporting system. Interfaces, 20-25.
Preater, J. (2002). Queues in Health. Health Care Management Science, 283.
Purnell, L. (1995). Reducing Waiting Time in Emergency Department Triage. Nursing Management, 64.
Saaty, T. L. (1957). Résumé of Useful Formulas in Queuing Theory. Informs, 161-200.
Santori, G. F. (2005). ¿Modelos de filas de espera en la gestión hospitalaria? Congreso Internacional
de la Mejora Continua y la Innovación en las Organizaciones. Córdoba, Argentina.
Srivastava, K., Shenoy, G. V., & Sharma, S. C. (1989). Quantitative Techniques for Managerial
Decisions. New Delhi: New Age International.
Sureshchandar, G. S. (2002). The relationship between management perception of total quality service
and costumer perceptions of service quality. Total Quality Management, 69-88.
The Associated Press. (2008 7-july). MSNBC.COM. Retrieved 2010 15-november from Health Care On
MSNBC: http://www.msnbc.msn.com/id/25475759/ns/health-health_care
Wang, C.-H., Lee, Y.-D., Wei-Lin, & Lin, P.-M. (2006). Application of Queuing Model in Healthcare
Administration with Incorporation of Human Factors. The Journal of American Academy of
Business, 304-310.
Weiss, E. N., & McClain, J. O. (1986). Administrative Days in Acute Care Facilities: A Queuing Analytic
Approach. Operations Research Society of America, 35-44.
Whipple, T. W., & Edick, V. L. (1993). Continuous Quality Improvement of Emergency Services. 26-30.
Wikipedia contributors. (2010, October 4). Kendall's notation. Retrieved November 16, 2010 from
Wikipedia, The Free Encyclopedia: http://en.wikipedia.org/w/index.php?title=Kendall%27s_notation&oldid=388704262
CALL FOR ACADEMIC PAPERS AND PARTICIPATION
Intellectbase International Consortium Academic Conferences
TEXAS – USA: April
Nashville, TN – USA: May
Atlanta, GA – USA: October
Las Vegas, NV – USA: December
International Locations: Spring and Summer
Abstracts, Research-in-Progress, Full Papers, Workshops, Case Studies and Posters are invited!!
All Conceptual and Empirical Papers are very welcome.
Email all papers to: [email protected]*
Intellectbase International Consortium provides an open discussion forum for Academics, Researchers, Engineers and
Practitioners from a wide range of disciplines including, but not limited to the following: Business, Education, Science,
Technology, Music, Arts, Political, Sociology - BESTMAPS.
* By submitting a paper, authors implicitly assign Intellectbase the copyright license to publish and agree that at least
one (if more authors) will register, attend and participate at the conference to present the paper.
All submitted papers are peer reviewed by the Reviewers Task Panel (RTP) and accepted papers are published in a
refereed conference proceeding. Articles that are recommended to the Executive Editorial Board (EEB) have a high
chance of being published in one of the Intellectbase double-blind reviewed Journals. For Intellectbase Journals and
publications, please visit: www.intellectbase.org/Journals.php
All submitted papers must include a cover page stating the following: location of the conference, date, each author(s)
name, phone, e-mail, full affiliation, a 200 - 500 word Abstract and Keywords. Please send your submission in
Microsoft Word format.
For more information concerning conferences and Journal publications, please visit the Intellectbase website at
www.intellectbase.org. For any questions, please do not hesitate to contact the Conference Chair at
[email protected]
REGISTRATION GUIDELINES
Registration Type                                             Fee^
Early Registration                                            $375.00^
Normal Registration                                           $450.00^
Student Registration                                          $195.00^
Additional Papers (No More than 3 Articles per Conference)    $150.00^ ea.
Second & Subsequent Author Attendance                         $75.00^ ea.
^ Prices subject to change
INTELLECTBASE DOUBLE-BLIND REVIEWED JOURNALS
Intellectbase International Consortium promotes broader intellectual resources and publishes reviewed
papers from all disciplines. To achieve this, Intellectbase hosts approximately 4-6 academic conferences
per year and publishes the following Double-Blind Reviewed Journals
(http://www.intellectbase.org/journals.php).
JAGR     Journal of Applied Global Research – ISSN: 1940-1833
IJAISL   International Journal of Accounting Information Science and Leadership – ISSN: 1940-9524
RHESL    Review of Higher Education and Self-Learning – ISSN: 1940-9494
IJSHIM   International Journal of Social Health Information Management – ISSN: 1942-9664
RMIC     Review of Management Innovation and Creativity – ISSN: 1934-6727
JGIP     Journal of Global Intelligence and Policy – ISSN: 1942-8189
JISTP    Journal of Information Systems Technology and Planning – ISSN: 1945-5240
JKHRM    Journal of Knowledge and Human Resource Management – ISSN: 1945-5275
JIBMR    Journal of International Business Management & Research – ISSN: 1940-185X
The US Library of Congress has assigned ISSN numbers for all formats of Intellectbase Journals - Print,
Online and CD-ROM. Intellectbase Blind-Review Journals are listed in major recognized directories: e.g.
Cabell’s, Ulrich’s, JournalSeek and Ebsco Library Services and other publishing directories. Intellectbase
International Consortium publications are in the process to be listed in the following renowned Journal
databases e.g. ABI/INFORM, ABDC, Thomson SCIENCE and SOCIAL SCIENCE Citation Indexes, etc.
Note: Intellectbase International Consortium prioritizes papers that are selected from Intellectbase conference proceedings for
Journal publication. Papers that have been published in the conference proceedings do not incur a fee for Journal
publication. However, papers that are submitted directly for Journal consideration will incur a US$195 fee, if accepted, to
cover the cost of processing, formatting, compiling, printing, postage and handling. Papers submitted directly to
a Journal may be emailed to [email protected]† (e.g. [email protected]†,
[email protected]†, etc.). †By submitting a paper, authors implicitly assign Intellectbase the copyright license to
publish and agree that at least one author (if there are multiple authors) will order a copy of the journal.
Call for Article Submissions
Journal of Information Systems Technology & Planning
The Journal of Information Systems Technology & Planning (JISTP) is seeking submissions of original
articles on current topics of special interest to practitioners and academics, including but not limited to:
communications, decision-support systems, technological planning, problem-solving, strategies that deal
with organizational development, IST planning, technological innovation, business models and services.
Research- or application-oriented articles that describe theoretical explanations, original developments and
practical elements of systems infrastructure will be considered. Moreover, complex compositions on
information systems management that integrate the authors' philosophies both qualitatively and quantitatively
are encouraged.
All articles are peer reviewed and refereed through a rigorous evaluation process involving at least three blind
reviews by qualified academic, industrial or governmental professionals. Submissions are judged not only on the
suitability of the content, but also on the intellectual framework and significance to society in general.
The Executive Editorial Board (EEB) of the Journal of Information Systems Technology & Planning (JISTP)
strongly encourages authors to submit their article(s) to an IIC conference prior to Journal consideration.
Authors who submit their articles to an IIC conference receive the benefit of feedback from the IIC
Reviewers' Task Panel (RTP) and conference attendees. This feedback generally improves the quality of
submissions to the Journal of Information Systems Technology & Planning (JISTP). Articles that are
accepted for presentation at an IIC conference have a higher likelihood of being published in JISTP.
JISTP solicits only original contributions that have not been previously published or submitted elsewhere,
with the exception of submissions to IIC refereed conference proceedings. Note that publication in IIC refereed
proceedings is only a partial fulfillment of publication in Intellectbase International Journals. Papers awaiting
presentation or already presented at IIC conferences must be revised (ideally, taking advantage of feedback
obtained at the conference) and given a slightly modified title to be considered for journal inclusion. All
manuscripts selected for publication must maintain a high standard of content, style and value to the readership.
The acceptance criterion for manuscript publication is research innovation and creative excellence.
JISTP REVIEW PROCESS
The author submits his/her paper electronically and the paper is sent to the Managing Editor. A confirmation
of receipt will be e-mailed to the author(s), usually within 2 days of submission. The Managing Editor
assigns the paper an ID number, removes the author(s)' names and affiliations, and sends the paper to
reviewers. Reviewers usually have 2 weeks to perform the review; however, on occasion a reviewer may
take up to 4 weeks to complete a review. Once the review comments are returned, the Managing Editor
compiles them and provides the original paper and the reviews to the Editor-in-Chief. The Editor-in-Chief,
based on the comments of the reviewers and his reading of the manuscript, forms an overall
recommendation regarding publication. On occasion, the Editor-in-Chief will consult the Senior Advisory
Board and, if necessary, request that an additional review be performed. Once the Editor-in-Chief has formed
an opinion on the acceptability of the paper, an email notifying the corresponding author of the outcome of
the review process will be sent. The full review process currently takes anywhere from 1-4 weeks from
receipt of the manuscript.
SUBMISSION INSTRUCTIONS
JISTP only accepts electronic submissions of manuscripts. To submit a manuscript for the review process,
you should send an email with the paper as an attachment to [email protected]†. In the body of your
email message include the author(s) name(s), contact information for the corresponding author, and the title
of your submission. Your submission will be acknowledged via return email. All submissions must be in
English and in Word format.
Intellectbase International Consortium prioritizes papers that are selected from Intellectbase conference
proceedings for Journal publication. Papers that have been published in the conference proceedings do not
incur a fee for Journal publication. However, papers that are submitted directly for Journal consideration will
incur a $195 fee, if accepted, to cover the cost of processing, reviewing, compiling and printing.
Page 1 of your submission should contain the title of the paper and should identify all authors, including
authors' names, mailing addresses and email addresses. Authors' names should not appear anywhere else
in the manuscript, except possibly as part of the reference list. Author details should be followed by an
Abstract of 200-500 words. Following the Abstract, Key Words should be identified; the Key Words are
followed by the text of the paper.
The manuscript must be single-spaced, contain a single column, utilize 11 point Arial Narrow justified font,
and contain 1” margins on all sides.
TITLE
Centered across the top of the first page, 16 point Arial Narrow bold font, all letters capitalized.
MAJOR HEADINGS
14 point Arial Narrow bold font, left aligned, all letters capitalized.
First Level Sub-Headings
13 point Arial Narrow bold font, left aligned, capitalize each word.
Second level sub-headings
12 point Arial Narrow bold italic font, left aligned, capitalize each word.
Third level sub-headings
12 point Arial Narrow italic font, left aligned, first word capitalized.
No blank line is to appear between a sub-heading and the text. Tables and figures should be included in the
text, approximately where the author thinks that they should appear. Manuscripts must be edited for spelling
and grammar.
Reference citation ordering and format must follow Harvard (or APA) Style referencing. Reference entries
should be ordered alphabetically (in text and Reference section) according to authors’ or editors’ last names,
or the title of the work for items with no author or editor listed. Any reference contained in the text must be
included in the Reference section and vice versa.
References in the text should be of the format: (Harris et al., 1995; Johnson, 1996). Quotes from a source
should include the page number, e.g. (Johnson, 1996, p. 223). References must be complete.
The paper should not normally exceed 10 single-spaced pages, including all sections, figures, tables, etc.
However, the Editor-in-Chief reserves the right to consider longer articles of major significance.
Electronic submissions should be sent to [email protected]†. Please visit the Intellectbase
International Consortium website: www.intellectbase.org for further information.
† By submitting a paper, authors implicitly assign Intellectbase the copyright license to publish and agree that at least
one author (if there are multiple authors) will order a copy of the journal.
Journal of Information Systems Technology & Planning
Individual Subscription Request
Please enter my subscription for the Journal of Information Systems Technology and Planning
Name ________________________________________________________________________________
Title _____________________________________ Telephone ( ______ ) __________________________
Mailing Address _________________________________________________________________________
______________________________________________________________________________________
City________________________________ State _________________ Zip Code ____________________
Country ____________________________________ Fax ( _____ ) ___________________
E-mail ________________________________________________________________________________
Please check the appropriate categories:
Within the United States:
□ Annual Individual Subscription - US$125
□ Single Issue - US$75
Outside the United States:
□ Annual Individual Subscription - US$150
□ Single Issue - US$85
Begin the annual subscription with the: □ Current Issue  □ Next Issue
If you are requesting a single issue, which issue (Volume and Issue) are you requesting? ______________
Payment by check in U.S. Dollars must be included.
Make check payable to: Intellectbase International Consortium
Send this Subscription Request and a check to:
JISTP Subscription
Intellectbase International Consortium
1615 Seventh Avenue North
Nashville, TN, 37208, USA
All e-mail enquiries should be sent to: [email protected]
Journal of Information Systems Technology & Planning
Library Recommendation
(Please complete this form and forward it to your Librarian)
Dear ___________________________________ (Librarian’s name)
I recommend that ________________________________________ (Library’s name) subscribe to the following
publication.
□ Journal of Information Systems Technology & Planning (JISTP) ISSN: 1945-5240 (US$225/Year)
I have indicated the benefits of the above journal to our library:
(1=highest benefit; 2=moderate benefit; 3=little benefit)
1 2 3 REFERENCE: For research articles in the field of Information Systems Technology and Planning.
1 2 3 STUDENT READING: I plan to recommend articles from the above to my students.
1 2 3 PUBLICATION SOURCES: This journal is suitable to my current research agenda.
1 2 3 PEER EVALUATION: This journal is highly regarded by my peers around the world.
Name ________________________________________________________________________________
Title _______________________________________ Telephone ( ______ ) _________________________
Mailing Address ________________________________________________________________________
______________________________________________________________________________________
City _________________________________ State ____________________Zip Code _______________
Library Subscriptions:
Within the US - US$225
Outside the US (includes air mail postage) - US$255
Payment by check in U.S. Dollars must be included.
Make check payable to: Intellectbase International Consortium
Send this Subscription Request and a check to:
JISTP Subscription
Intellectbase International Consortium
1615 Seventh Avenue North
Nashville, TN, 37208, USA
Tel: +1 (615) 944-3931; Fax: +1 (615) 739-5124
www.intellectbase.org
All e-mail enquiries should be sent to: [email protected]
For information about the following Journals, please visit www.intellectbase.org
JIBMR - Journal of International Business Management & Research
IJSHIM - International Journal of Social Health Information Management
JAGR - Journal of Applied Global Research
JGIP - Journal of Global Intelligence and Policy
RHESL - Review of Higher Education and Self-Learning
JWBSTE - Journal of Web-Based Socio-Technical Engineering
IJEDAS - International Journal of Electronic Data Administration and Security
JKHRM - Journal of Knowledge and Human Resource Management
JOIM - Journal of Organizational Information Management
IJAISL - International Journal of Accounting Information Science and Leadership
RMIC - Review of Management Innovation and Creativity
IHPPL - Intellectbase Handbook of Professional Practice and Learning