ISSN: 1978 - 8282
P R O C E E D I N G S
Tangerang - Indonesia
Published by
CCIT Journal, Indonesia
The Association of Computer and Informatics Higher-Learning Institutions in Indonesia (APTIKOM) and
STMIK Raharja Tangerang Section
Personal use of this material is permitted. However, permission to reprint/republish this material for
advertising or promotional purposes, or for creating new collective works for resale or redistribution to servers or lists,
or to reuse any copyrighted component of this work in other works, must be obtained from the Publisher.
CD – Conference Proceedings
CCIT Catalog Number:
ISSN: 1978 – 8282
Technically Co-Sponsored by
National Council of Information Technology of Indonesia
Raharja Enrichment Centre (REC)
The Institute of Information and Communication Technology
Digital Synthesizer Laboratory of Computer System Processing
Design and typeset by:
Sugeng Widada & Lusyani Sunarya
Contents
Steering Committee
- Message from Steering Committee ...... 1
Programme Committee ...... 4
- Message from Programme Committee ...... 5
Organizing Committee ...... 9
- Message from Organizing Committee ...... 10
Paper Participants ...... 13
Reviewers ...... 17
- Panel of Reviewers ...... 18
Keynote Speeches ...... 19
- Richardus Eko Indrajit, Prof. ...... 20
- Rahmat Budiarto, Prof. ...... 31
Paper ...... 32
Author Index ...... 284
Schedule ...... 286
Location ...... 288
Steering Committee
Message from Steering Committee
Chairman:
R. Eko Indrajit, Prof.
(ABFI Institute, Perbanas)
Welcome Speech from the Chairman
The honorable ladies and gentlemen, on behalf of the Programme Committee and the Steering Committee, I
would like to welcome you all to the International Conference on Creative Communication and Innovative
Technology.
It is indeed a great privilege for us to have all of you here joining this international gathering that is professionally
organized by Perguruan Tinggi Raharja. ICCIT-09 has the ultimate objective of gathering as many creative
ideas as possible in the field of information and communication technology, which we believe might help the
country in boosting its economic development.
Since we strongly believe in the notion that great innovations come from great people, we have decided to
invite a good number of young scholars from various campuses all over the country to join the gathering. It is
our wish that the blend between new and old generations, between the wisdom of legacy and emerging modern
knowledge, and between proven technology and the proposed enabling solutions, can lead to the invention of
new products and services that will bring benefits to society at large.
Let me use this opportunity to thank all participants who have decided to share their knowledge on this
occasion. I would also like to express great appreciation to the sponsors and other stakeholders who have
joined the committee to prepare and launch this initiative successfully. Without your help, it would have been
impossible to commence ICCIT-09 internationally. Last but not least, this fantastic gathering would not have
come true without the great efforts of Perguruan Tinggi Raharja. From the deepest part of my heart, please allow
me to extend our gratitude to all management and staff involved in the organizing committee.
I wish all of you a great sharing moment. And I hope your stay in Tangerang, one of the industrial satellite
cities of Jakarta, brings you an unforgettable experience to remember.
Thank you.
Prof. Richardus Eko Indrajit
Chairman of ICCIT 2009
August 8, 2009
Steering Committee
Chairman:
R. Eko Indrajit, Prof. (ABFI Institute, Perbanas)
Co-Chairman:
Zainal A. Hasibuan, Ph.D (University of Indonesia)
Members:
Tri Kuntoro Priyambodo, M.Sc. (Gajah Mada University)
Rangga Firdaus, M.Kom. (Lampung University)
Arief Andi Soebroto, ST. (Brawijaya University)
Tommy Bustomi, M.Kom. (STMIK Widya Ciptadarma)
Yusuf Arifin, MT. (Pasundan University)
Once Kurniawan, MM. (Bunda Mulia University)
Achmad Batinggi, MPA. (STIMED Nusa Palapa)
Philipus Budy Harianto (University Sains Technology Jayapura)
Programme Committee
Message from Programme Committee
Untung Rahardja, M.T.I.
(STMIK Raharja, Indonesia)
Dear Friends and Colleagues
On behalf of the Organizing Committee, we are pleased to welcome you to the International Conference on
Creative Communication and Innovative Technology 2009 (ICCIT'09).
The annual ICCIT event has proven to be an excellent forum for scientists, researchers, engineers and
industrial practitioners throughout the world to present and discuss the latest technological advances as well
as future directions and trends in industrial electronics, and to set up useful links for their work. ICCIT'09 is
organized by APTIKOM (The Association of Computer and Informatics Higher-Learning Institutions in
Indonesia).
ICCIT'09 received an overwhelming response, with a total of 324 full-paper submissions from 40 countries/
regions. All the submitted papers were processed by the Technical Program Committee, which consists of
one chair, 3 co-chairs and 18 track chairs who are well-known experts worldwide with vast professional
experience in the various areas of the conference. All the members worked professionally, responsibly and diligently in soliciting expert international reviewers. Their hard work has enabled us to put together a very
solid technical program for our delegates. The technical program includes 36 papers for presentation in 36
oral sessions and 2 interactive sessions. Besides the parallel technical sessions, there are also 2 keynote speeches
and 3 distinguished invited lectures to be delivered by eminent professors and researchers. These talks will
address state-of-the-art developments and leading-edge research activities in various areas of industrial
electronics.
We are indeed honored to have Professor Richardus Eko Indrajit of APTIKOM (The Association of Computer and Informatics Higher-Learning Institutions in Indonesia), Professor Rahmat Budiarto of Universiti
Sains Malaysia, and Professor Suryo Guritno of Gadjah Mada University, Indonesia, as the keynote
speakers for ICCIT'09. Their presence undoubtedly adds prestige to the conference, as they are
giants in their respective fields. We would like to express our sincere appreciation to all 3 keynote speakers
and the 7 distinguished invited lecture speakers for their contributions and support of ICCIT'09.
A CD-ROM containing preprints of all papers scheduled in the program and an Abstract Book will be provided at
the conference to each registered participant as part of the registration materials. The official conference proceedings will be published by ICCIT'09 and included in the ICCIT Xplore Database.
We understand that many delegates are in Tangerang, Banten for the first time. We would like to encourage you to explore the historical and beautiful sights of Tangerang, Banten during your stay, to make this
conference more enjoyable and memorable. During the conference, a travel agent will provide on-site post-conference tour services to our delegates to visit historical sites. The conference will also organize technical
tours to the famous higher education and research institution STMIK Raharja, one of the organizers.
On behalf of the Organizing Committee, we would like to thank all the organizers of the special sessions and
invited sessions and the numerous researchers worldwide who helped to review the submitted papers. We are
also grateful to the distinguished International Advisory Committee members for their invaluable support and
assistance. We would like to gladly acknowledge the technical sponsorship provided by APTIKOM
(The Association of Computer and Informatics Higher-Learning Institutions in Indonesia) and Perguruan
Tinggi Raharja, Tangerang, Banten, Indonesia.
We hope that you will find your participation in ICCIT'09 in Tangerang, Banten stimulating, rewarding, enjoyable and memorable.
Ir. Untung Rahardja M.T.I
Programme Committee of ICCIT 2009
August 8, 2009
Programme Committee
Abdul Hanan Abdullah, Prof. (Universiti Teknologi Malaysia)
Arif Djunaidy, Prof. (Sepuluh November Institute of Technology, Indonesia)
Djoko Soetarno, Ph.D (STMIK Raharja, Indonesia)
Edi Winarko, Ph.D (Gajah Mada University, Indonesia)
E.S. Margianti, Prof. (Gunadarma University, Indonesia)
Iping Supriyana, Dr. (Bandung Institute of Technology, Indonesia)
Jazi Eko Istiyanto, Ph.D (Gajah Mada University, Indonesia)
K.C. Chan, Prof. (University of Glasgow, United Kingdom)
Marsudi W. Kisworo, Prof. (Swiss-German University, Indonesia)
Rahmat Budiarto, Prof. (Universiti Sains Malaysia)
Stephane Bressan, Prof. (National University of Singapore)
Suryo Guritno, Prof. (Gajah Mada University, Indonesia)
Susanto Rahardja, Prof. (Nanyang Technological University, Singapore)
T. Basaruddin, Prof. (University of Indonesia)
Thomas Hardjono, Prof. (MIT, USA)
Untung Rahardja, M.T.I. (STMIK Raharja, Indonesia)
Wisnu Prasetya, Prof. (Utrecht University, Netherlands)
Y. Sutomo, Prof. (STIKUBANK University, Indonesia)
Organizing Committee
Message from Organizing Committee
General Chair:
Po. Abas Sunarya, M.Si.
(STMIK Raharja, Indonesia)
It is a great pleasure to welcome everyone to The International Conference on Creative Communication and
Innovative Technology 2009 (ICCIT-09). Holding it on the campus of Raharja Institution is a credit to
Banten, and emphasizes the global nature of both ICCIT and our networking research community.
ICCIT is organized by Raharja Institution together with APTIKOM (The Association of Computer and
Informatics Higher-Learning Institutions in Indonesia). We hope that this conference facilitates a stimulating
exchange of ideas among many of the members of our international research community.
ICCIT has been made possible only through the hard work of many people. It offers an exceptional forum
for worldwide researchers and practitioners from academia, industry, business, and government to share their
expertise and research findings in all areas of performance evaluation of computer, telecommunications
and wireless systems, including modeling, simulation and measurement/testing of such systems.
Many individuals have contributed to the success of this high caliber international conference. My sincere
appreciation goes to all authors including those whose papers were not included in the program. Many thanks
to our distinguished keynote speakers for their valuable contribution to the conference. Thanks to the program
committee members and their reviewers for providing timely reviews. Many thanks to the session chairs for
their efforts. Thanks are also due to FTII, APJI, ASPILUKI, APKOMINDO, MASTEL, IPKIN and AINAKI
for their fine support.
Finally, on behalf of the Executive and Steering Committees of the International Conference on Creative
Communication and Innovative Technology, ICCIT-09, the Society for Modeling, and The Association of
Computer and Informatics Higher-Learning Institutions in Indonesia (APTIKOM), I invite all of you to join us
at Raharja Institution for ICCIT-09.
Drs. Po. Abas Sunarya, M. Si.
General Chair, ICCIT-09
Organizing Committee
Chairman:
Po. Abas Sunarya, M.Si.
Co-Chairman:
Sunar Abdul Wahid, Dr.
Co-Chairman:
Henderi, M.Kom.
Members:
Augury El Rayeb, M.MSi.
Maria Kartika, SE.
Eko Prasetyo Windari Karso, Ph.D
Muhamad Yusup, S.Kom.
Euis Sitinur Aisyah, S.Kom.
Mukti Budiarto, Ir.
Junaidi, S.Kom.
Lusyani Sunarya, S.Sn.
Padeli, S.Kom.
Sugeng Santoso, S.Kom.
Paper Participants
Gede Rasben Dantes
- Doctoral Student in Computer Science Department, University of Indonesia
Widodo Budiharto, Djoko Purwanto, Mauridhi Hery Purnomo
- Electrical Engineering Department, Institute of Technology Surabaya
Untung Rahardja, Valent
- STMIK RAHARJA Raharja Enrichment Centre (REC)
Tangerang - Banten, Republic of Indonesia
Diyah Puspitaningrum, Henderi
- Information System, Faculty of Computer Science
Wiwik Anggraeni, Danang Febrian
- Information System Department, Institut Teknologi Sepuluh Nopember
Aan Kurniawan, Zainal A. Hasibuan
- Faculty of Computer Science, University of Indonesia
Untung Rahardja, Edi Dwinarko, Muhamad Yusup
- STMIK RAHARJA Raharja Enrichment Centre (REC) Tangerang - Banten, Republic of Indonesia
- GADJAH MADA UNIVERSITY, Faculty of Mathematics and Natural Sciences, Yogyakarta
Sarwosri, Djiwandou Agung Sudiyono Putro
- Department of Informatics, Faculty of Information Technology
- Institute of Technology Sepuluh Nopember
Chastine Fatichah, Nurina Indah Kemalasari
- Department, Faculty of Information Technology
- Institut Teknologi Sepuluh Nopember, Kampus ITS Surabaya
Untung Rahardja, Jazi Eko Istiyanto
- STMIK RAHARJA Raharja Enrichment Centre (REC)
Tangerang - Banten, Republic of Indonesia
- GADJAH MADA UNIVERSITY Yogyakarta, Republic of Indonesia
Bilqis Amaliah, Chastine Fatichah, Diah Arianti
- Informatics Department – Faculty of Technology Information
- Institut Teknologi Sepuluh Nopember (ITS), Surabaya, Indonesia
Tri Pujadi
- Information System Department – Faculty of Computer Study Universitas Bina Nusantara
Jl. Kebon Jeruk Raya No. 27, Jakarta Barat 11530 Indonesia
Untung Rahardja, Retantyo Wardoyo, Shakinah Badar
- Faculty of Information System, Raharja University Tangerang, Indonesia
- Faculty of Mathematics and Natural Science, Gadjah Mada University Yogyakarta, Indonesia
- Faculty of Information System, Raharja University Tangerang, Indonesia
Henderi, Maimunah, Asep Saefullah
- Information Technology Department – Faculty of Computer Study STMIK Raharja
Jl. Jenderal Sudirman No. 40, Tangerang 15117 Indonesia
Yeni Nuaraeni
- Information Technology Study Program, Paramadina University
Sfenrianto
- Doctoral Program Student in Computer Science University of Indonesia
Asep Saefullah, Sugeng Santoso
- STMIK RAHARJA Raharja Enrichment Centre (REC) Tangerang - Banten, Republic of Indonesia
Henderi, Maimunah, Aris Martono
- STMIK RAHARJA Raharja Enrichment Centre (REC) Tangerang - Banten, Republic of Indonesia
M. Tajuddin, Zainal Hasibuan, Abdul Manan, Nenet Natasudian, Jaya
- STMIK Bumigora Mataram West Nusa Tenggara
- Indonesia University
- PDE Office of Mataram City
- ABA Bumigora Mataram
Ermatita, Edi Dwinarko, Retantyo Wardoyo
- Information Systems, Faculty of Computer Science, Sriwijaya University
(student of the doctoral program, Gadjah Mada University)
- Computer Science, Faculty of Mathematics and Natural Sciences, Gadjah Mada University
Junaidi, Sugeng Santoso, Euis Sitinur Aisyah
- STMIK RAHARJA Raharja Enrichment Centre (REC) Tangerang - Banten, Republic of Indonesia
Ermatita, Huda Ubaya, Dwiroso Indah
- Information Systems, Faculty of Computer Science, Sriwijaya University
(student of the doctoral program, Gadjah Mada University)
- Faculty of Computer Science, Sriwijaya University, Palembang, Indonesia
Mauritsius Tuga
- Department of Informatics, Universitas Katolik Widya Mandira, Kupang
Padeli, Sugeng Santoso
- STMIK RAHARJA Raharja Enrichment Centre (REC) Tangerang - Banten, Republic of Indonesia
M. Givi Efgivia, Safarudin, Al-Bahra L.B.
- Lecturer, STMIK Muhammadiyah Jakarta
- Lecturer in Physics, FMIPA, UNHAS, Makassar
- Lecturer, STMIK Raharja, Tangerang
Primantara, Armanda C.C, Rahmat Budiarto, Tri Kuntoro P.
- School of Computer Sciences, Universiti Sains Malaysia, Penang, Malaysia
- School of Computer Science, Gajah Mada University, Yogyakarta, Indonesia
Hany Ferdinando, Handy Wicaksono, Darmawan Wangsadiharja
- Dept. of Electrical Engineering, Petra Christian University, Surabaya - Indonesia
Untung Rahardja, Hidayati
- STMIK RAHARJA Raharja Enrichment Centre (REC) Tangerang - Banten, Republic of Indonesia
Dina Fitria Murad, Mohammad Irsan
- STMIK RAHARJA Raharja Enrichment Centre (REC)
Tangerang - Banten, Republic of Indonesia
Asep Saefullaf, Augury El Rayeb
- STMIK RAHARJA Raharja Enrichment Centre (REC)
Tangerang - Banten, Republic of Indonesia
Richardus Eko Indrajit
- ABFI Institute, Perbanas
Azzemi Arifin, Young Chul Lee, Mohd. Fadzil Amiruddin, Suhandi Bujang, Salizul Jaafar, Noor Aisyah, Mohd. Akib
- System Technology Program, Telekom Research & Development Sdn. Bhd., TMR&D Innovation Centre,
Lingkaran Teknokrat Timur, 63000 Cyberjaya, Selangor Darul Ehsan, MALAYSIA
- Division of Marine Electronics and Communication Engineering, Mokpo National Maritime
University (MMU), 571 Chukkyo-dong, Mokpo, Jeonnam, KOREA 530-729
Sutrisno
- Department of Mechanical and Industrial Engineering, Gadjah Mada University,
Jl. Grafika 2, Yogyakarta 52281
- Faculty of Mathematics and Natural Sciences, Gadjah Mada University
- Department of Geodetic Engineering, Gadjah Mada University
Saifuddin Azwar, Untung Raharja, Siti Julaeha
- Faculty of Psychology, Gadjah Mada University, Yogyakarta, Indonesia
- Faculty of Information System Raharja University Tangerang, Indonesia
Henderi, Sugeng Widada, Euis Siti Nuraisyah
- Technology Department – Faculty of Computer Study STMIK Raharja
Jl. Jenderal Sudirman No. 40, Tangerang 15117 Indonesia
Reviewers
Panel of Reviewers
Abdul Hanan Abdullah, Prof. – Universiti Teknologi Malaysia
Arif Djunaidy, Prof. – Sepuluh November Institute of Technology, Indonesia
Djoko Soetarno, Ph.D – STMIK Raharja, Indonesia
Edi Winarko, Ph.D – Gajah Mada University, Indonesia
E.S. Margianti, Prof. – Gunadarma University, Indonesia
Iping Supriyana, Dr. – Bandung University of Technology, Indonesia
Jazi Eko Istiyanto, Ph.D – Gajah Mada University, Indonesia
K.C. Chan, Prof. – University of Glasgow, United Kingdom
Marsudi W. Kisworo, Prof. – Swiss-German University, Indonesia
Rahmat Budiarto, Prof. – Universiti Sains Malaysia
Stephane Bressan, Prof. – National University of Singapore
Suryo Guritno, Prof. – Gajah Mada University, Indonesia
Susanto Rahardja, Prof. – Nanyang Technological University, Singapore
T. Basaruddin, Prof. – University of Indonesia
Thomas Hardjono, Prof. – MIT, USA
Untung Rahardja, M.T.I. – STMIK Raharja, Indonesia
Wisnu Prasetya, Prof. – Utrecht University, Netherlands
Y. Sutomo, Prof. – STIKUBANK University, Indonesia
Keynote Speech
Saturday, August 8, 2009
13:30 - 13:50
Room M-AULA
DIGITAL SCHOOL: Expediting Knowledge Transfer and Learning through
Effective Use of Information and Communication Technology within the Education
System of the Republic of Indonesia
Richardus Eko Indrajit, Prof.
(APTIKOM)
Abstract
The involvement of Information and Communication Technology (ICT) within the educational system has been widely
discussed and implemented by various scholars and practitioners. A good number of cases have shown that the effective
use of such technology can bring positive and significant improvement to the quality of learning delivery. For a country
which believes that serious development of ICT for the education system could gain it some sort of national competitive
advantage, a series of strategic steps has been undertaken. Such an effort starts from finding the strategic role and context
of ICT within the country's educational system, followed by defining the architectural blueprint of the various ICT
implementation spectrums and developing an implementation plan framework guideline. This article proposes one perspective and approach on how ICT for education should be developed within the context of Indonesia's educational
system.
Schools in Indonesia
As the biggest archipelago country in the world, Indonesia consists of more than 18,000 islands nationwide. In 2005 there were more than 230 million people living in this 5-million-square-kilometre area, of which almost two thirds is water. The existence of 583 languages and dialects spoken in the country is the result of hundreds of ethnic divisions split across diverse, separated islands. According to statistics, 99 million of Indonesia's population are in the labour force, with 45% of them working in the agriculture sector. Other data sources also show that 65% of the total population are within the productive age range of 15-64 years old. The unbalanced regional development since the national independence day of August 17th, 1945 has made Java the island with the highest population density (an average of 850 people per square kilometre, compared to the national average of 100 people per square kilometre). It means that almost 60% of the total Indonesian population live on this island alone (1).
In the year 2004, the number of formal education institutions (schools) in the country – ranging from primary schools to universities – had exceeded 225,000 institutions. There are approximately 4 million teachers who are responsible for more than 40 million students nationwide (2). Note that almost 20% of the schools still have problems with electricity, as they are located in very remote areas. For the purpose of leveraging limited resources and ensuring equal yet balanced growth of learning quality across society, the government adopts a centralized approach to managing the education system, as all policies and standards are set up by the Department of National Education led by a Minister of Education (3).
ICT in Education Institution
The involvement of ICT (Information and Communication Technology) within education institutions in Indonesia started from higher-learning organizations such as universities and colleges. With the rapid development of such technology in the market, several state universities and prominent colleges that had electrical-engineering-related fields introduced what is called a computer science program of study (4). At that time, most of the computers were used for two major purposes: supporting the organizations in taking care of their academic administration, and supporting students conducting their research, especially for the purpose of finishing their final project as a partial requirement for being awarded a bachelor degree. Currently, with the existence of 7 million fixed telephone numbers and 14 million mobile phone users (5), there are at least 12.5 million internet users in Indonesia (6). Data from May 2005 show that there are more than 21,762 local domain names (.id), with a total accumulated IPv6 address count of 131,073 (7). Of all these domains, there are approximately 710 domains representing education institutions (8) (e.g. with the ".ac.id" sub-domain). It means that less than 0.5% of Indonesian schools are "ICT literate" – a ratio that is considered very low in the Asia-Pacific region.
History has shown that the significant growth of ICT in education started from the commencement of the first ICT-related ministry, namely the Ministry of Communication and Information, in 2001 (9). Through a good number of efforts and socialization programs supported by the private sector, academics, and other ICT practitioners, a strategic plan and blueprint of ICT for the National Education System was produced and announced in 2004 through the collaboration of three ministries: the Ministry of Communication and Information, the Department of National Education, and the Department of Religion (10).
The National Education System
Indonesia's national education system is defined and regulated by UU-Sisdiknas RI No. 20/2003 (Undang-Undang Sistem Pendidikan Nasional Republik Indonesia) (11). This latest standard has been developed under the new paradigm of a modern education system, triggered by the new requirements of globalization. All formal education institutions – from primary schools to universities – have to develop their educational systems based on the philosophy, principles, and paradigms stated in this regulation.
Based on the national education system, which has been powered by many discourses from Indonesia's education experts, the conceptual architecture of an education institution can be illustrated through the following anatomy.
Vision, Mission, and Value
Every school has its own vision and mission(s) in society. Most of them are related to the process of knowledge acquisition (learning) for the purpose of increasing the quality of people's lives. As illustrated above, the vision and mission(s) of an education institution depend very much upon the needs of its stakeholders, who can be divided into 7 (seven) groups (Picture 1):

Picture 1: The Conceptual Architecture of the Educational System

1. Owners and Commissioners – who come from various parts of society, such as religious communities, political organizations, education foundations, government, the private sector, etc.;
2. Parents and Sponsors – who take an active role as the parties that decide to which schools their children or employees should be sent;
3. Students and Alumni – who from one perspective are considered the main customers or subjects of education, but from another perspective represent the output/outcome quality of the institution;
4. Management and Staff – who are the parties that run the education organization and manage its resources to achieve targeted goals;
5. Lecturers and Researchers – who are the main source of the institution's most valuable assets, namely its intellectual property assets;
6. Partners and Industry – who align with the institution to increase the practical knowledge capabilities of the institution's graduates; and
7. Government and Society – who set regulations and shape expectations to ensure that quality education is delivered.

Objectives and Performance Indicators
In order to measure the effectiveness of the series of actions taken by an institution to achieve its vision and missions, various objectives and performance indicators are defined. Previously, for all government-owned schools, the measurements were set up by the state. But nowadays, every education institution is given the full right to determine its own control measurements as long as they do not violate any government regulation or education principles (and ethics) (12). A good selection of the indicator portfolio can represent not only the quality level of education delivery, but also a picture of the sustainability profile of the institution (13).
Four Pillars of the Education System
Through an in-depth analysis of the various performance indicators chosen by diverse education management practitioners – backed up also by a good number of research studies by academics in the related fields – there are at least 4 (four) aspects or components that play important roles in delivering quality education. Those four pillars are:
1. Content and Curriculum – the heart of education lies in the knowledge contained (=content) within the institution's communities and network, structured (=curriculum) so that it can be easily and effectively transferred to and acquired by students;
2. Teaching and Research Delivery – the art of acquiring knowledge through various learning activities that promote the transfer of cognitive, affective, and psychomotor competencies;
3. Human Resources and Culture – at the end of the day, human resources are the people who have, and are willing to share, all the knowledge they have with other people within a conducive academic environment and culture, through appropriate arrangements; and
4. Facilities and Network – effective, quality education deliverables nowadays can only be achieved through the adequate existence of facilities and an institutional network (i.e. with all stakeholders).
Some institutions consider these four pillars as critical success factors, while others realize that such components are the minimum resources (or even a business model) that they have to manage carefully (14) as educators or managers of education institutions. Note that there are some local regulations that require an education institution to have minimum physical assets or other entities within specific ratios in order to operate in Indonesia. Such requirements are checked by the government during the process of building a new school and in the ongoing operation of the school, as quality control.
Institution Infrastructure and Superstructure
Finally, all of these visions, missions, objectives, KPIs, and pillars are built upon a strong, holistic institution infrastructure and superstructure foundation. It consists of three components that build the system, which are:
1. Physical Infrastructure – consists of all assets such as buildings, land, laboratories, classes, technology, sports centers, parking space, etc. that are required to establish a school (15);
2. Integrated Services – consists of a series of processes integrating the various functions (e.g. strategic to operational aspects) that exist in the school, to guarantee the effectiveness of education-related services; and
3. Quality Management System – consists of all the policies, standards, procedures, and governance systems that are used to manage and run the institution to guarantee its quality (16).
ICT Context on Education
While trying to implement these education principles, all stakeholders believe that information is everything, in the following contexts:
• Information is considered as the raw material of education → hence efforts to collect and manage data and/or information are mandatory;
• Information is very crucial for managing and governing purposes → since the sustainability of a school can be seen from all its data, relevant and reliable information is very important; and
• Information is a production factor in education services → since it is involved in every day's transactions and interactions, it should be well managed.
ICT Context and Roles in National Education System
Based on the defined National Education System, there are 7 (seven) contexts and roles of ICT within the domain, which are (Picture 2):
1. ICT as Source of Knowledge;
2. ICT as Teaching Tools and Devices;
3. ICT as Skills and Competencies;
4. ICT as Transformation Enablers;
5. ICT as Decision Support System;
6. ICT as Integrated Administration System; and
7. ICT as Infrastructure.

Picture 2: The Role and Context of ICT in Education

It can be easily seen that these seven contexts and roles are derived from the four pillars and the three institution infrastructure/superstructure components within the national education system architectural framework. Each context and/or role supports one domain of the system (17). The following sections give the justification for what these contexts and roles are and why they exist.
ICT as Source of Knowledge
The invention of the internet – the giant network of networks – has shifted how education and learning are done nowadays. As more and more scholars, researchers, and practitioners become connected to the internet, cyberspace has been inaugurated as a source of knowledge. In other words, ICT has enabled the creation of a new world where knowledge is collected and stored. Several principles that are aligned with the new education system paradigm are as follows (18):
• New knowledge is found at the speed of thought today, which means any scholar has to be able to recognize its existence → through ICT (e.g. the internet), such knowledge can be easily found and accessed in no time;
• Most academics, researchers, scholars, students, and practitioners disclose what they have (e.g. data, information, and knowledge) through the internet so that many people in other parts of the world can benefit from it → through ICT (e.g. websites, databases), all those multimedia formats (e.g. text, pictures, sound, and video) can be easily distributed to other parties; and
• The new paradigm of learning states that the source of knowledge is not just the assigned lecturer or the textbooks of a course in a class, but rather all experts in the field and every reference found in the world → through ICT (email, mailing lists, chat, forums) every student can interact with any lecturer and can have access to thousands of libraries for references.

With respect to this context, there are at least 7 (seven) aspects of application that any education institution stakeholder should be aware of, which are:
1. Cyber Net Exploration – how knowledge can be found, accessed, organized, disseminated, and distributed through the internet (19);
2. Knowledge Management – how knowledge in many forms (e.g. tacit and explicit) can be shared through various approaches;
3. Community of Interest Groupware – how communities of lecturers, professors, students, researchers, management, and practitioners can collaborate, cooperate, and communicate by meeting in the cyber world;
4. Institution Network – how a school can be part of, and access, a network whose members are education institutions, for various learning-based activities;
5. Dynamic Content Management – how data or content are dynamically managed, maintained, and preserved;
6. Standard Benchmarking and Best Practices – how schools can analyze themselves by comparing their knowledge acquisition with other education institutions worldwide and learn from their success; and
7. Intelligence System – how various scholars can obtain information regarding the latest knowledge they need without having to search for it in advance.

ICT as Teaching Tools and Devices
Learning should become an activity that is considered enjoyable by the people involved. It means that the delivery processes of education should be interesting, so that both teachers and students are triggered to acquire and develop knowledge at their convenience. As suggested by UNESCO, Indonesia has adopted the "Competence-Based Education System", which requires education institutions to create curricula and conduct delivery approaches that promote not just the cognitive aspect of competence, but also the affective and psychomotor ones. There are several paradigm shifts related to teachers' learning styles that should be adopted to promote this principle (Picture 3). The following are some transformations that should be undergone by all teachers in education institutions (20).

Picture 3: The Paradigm Change in Teaching Delivery

From the above paradigm, it is clearly defined how ICT can help teachers empower their delivery styles to the students and how students can increase their learning performance. There are at least 17 (seventeen) applications related to this matter, as follows:
1. Event Imitation – using technology to create animations of events or other learning subjects representing real-life situations;
2. Case Simulation – enabling teachers and students to study and perform "what if" conditions in many case simulations;
3. Multimedia Presentation – mixing various formats of text, graphics, audio, and video to represent many learning objects;
4. Computer-Based Training (CBT) – technology modules that can help students conduct independent study;
5. Student Learning Tools – a set of programs to help students prepare and store their notes, presentations, research works, and other learning-related materials;
6. Course Management – an application that integrates all course-related activities such as attendee management, material delivery, discussion forums, mailing lists, assignments, etc.;
7. Workgroup Learning System – a program that can facilitate group-based collaboration, communication, and cooperation between teachers and students;
8. Three-Party Intranet – a network that links teachers, students, and parents as the main stakeholders of education;
9. Examination Module – a special unit that can be used to form various types of test models for learning evaluation purposes;
10. Performance Management System – software that can help the teacher manage individual student learning records and tracks for analyzing his/her specific study performance;
11. Interactive Smart Book – a tablet PC or PDA-based device that is used as an intelligent book;
12. Electronic Board – a state-of-the-art board that acts as a user interface to replace the traditional blackboard and whiteboard; and
13. Blogger – a software module that can help the teacher keep track of student progress through their daily experiences and notes written in digital format.
ICT as Skills and Competencies
Since teachers and students will be highly involved in using many ICT-based applications, the next context and role of ICT that should be promoted is its nature as something every teacher and student should have (i.e. skills and competencies). This digital literacy (or e-literacy) should become a prerequisite for all teachers and students who want to get the maximum benefit from ICT implementation in the education system. In other words, a series of training programs should be arranged for teachers, and a range of preliminary courses should be taken by students, so that at least they are familiar with operating computer-based devices and applications (21). To be able to deliver education and to learn in an effective and efficient way – by using ICT to add value – several tools and applications that should be well understood by both teachers and students are listed below:
1. Word Processing – writing software that allows the computer to resemble a typewriter, for the purpose of creating reports, making assignments, etc.;
2. Spreadsheet – a type of program used to perform various calculations, especially popular for mathematics, physics, statistics, and other related fields;
3. Presentation Tool – software used for creating graphical and multimedia-based illustrations for presenting knowledge to an audience;
4. Database – a collection of information that has been systematically organized in digital format for easy access and analysis;
5. Electronic Mail – text messages sent through a computer network to a specified individual or group, which can also carry attached files;
6. Mailing List – a group of e-mail addresses used for easy and fast distribution of information to multiple e-mail addresses simultaneously;
7. Browser – software used to view and interact with resources available on the internet;
8. Publisher – an application to help people create brochures, banners, invitation cards, etc.;
9. Private Organiser – a software module that can serve as a diary, a personal database, a telephone directory, an alarm clock, etc.;
10. Navigation System – an interface that acts as the basic operating system used to control all computer files and resources;
11. Multimedia Animation Software – a system that supports the interactive use of text, audio, still images, video, and graphics;
12. Website Development – a tool that can be used to develop web-based content management systems;
13. Programming Language – a simple yet effective programming language to help people develop small application modules;
14. Document Management – software that can be used for creating, categorizing, managing, and storing electronic documents;
15. Chatting Tool – an application that can be utilized by two or more individuals connected to the Internet to have real-time text-based conversations by typing messages into their computers; and
16. Project Management – application software to help people plan, execute, and control event-based activities.
ICT as Transformation Enablers
As in other industrial sectors, ICT in the education field has also shown its capability to transform the way learning is delivered nowadays. It starts from the fact that some physical resources can be represented in digital or electronic forms of resources (22). Because most education assets and activities can be represented in digital form (23), a new world of learning arenas can be established to empower (or replace) the conventional ones. Some entities or applications of these transformation impacts are:
1. Virtual Library – a library which has no physical existence, being constructed solely in electronic form rather than on paper;
2. E-learning Class – any learning that utilizes a network (LAN, WAN or Internet) for delivery, interaction, or facilitation without the existence of a physical class;
3. Expert System – a computer with 'built-in' expertise which, when used by a non-expert in an education area, acts as a replacement for a teacher or other professional in a particular field (expert);
4. Mobile School – a device that can be used to process all transactions or activities related to student-school relationships (e.g. course schedules, assignment submission, grade announcements, etc.);
5. War Room Lab – a laboratory consisting of computers and other digital devices directly linked to many networks (e.g. intranet, internet, and extranet) that can be freely used by teachers or students for their various important activities; and
6. Digital-Based Laboratory – a room or building occupied by a good number of computers to be used for scientific testing, experiments or research through diverse digital simulation systems.
ICT as Decision Support System
The management of a school consists of the people who are responsible for running and managing the organization. Accompanied by other stakeholders such as teachers, researchers, practitioners, and owners, management has to solve many issues daily related to education delivery – especially matters such as student complaints, resource conflicts, budget requirements, government inquiries, and owner investigations. They also need to dig through tons of data and information to back them up in making quality decisions (24). With regard to this matter, several ICT applications should be ready and well implemented for them, such as:
1. Executive Information System – a computer-based system intended to facilitate and support the information and decision-making needs of senior executives by providing easy access to both internal and external information relevant to meeting the strategic goals of the school;
2. Decision Support System – an application primarily used to consolidate, summarize, or transform transaction data to support analytical reporting and trend analysis;
3. Management Information System – an information collection and analysis system, usually computerized, that facilitates access to program and participant information to answer the daily needs of management, teachers, lecturers, or even parents; and
4. Transactional Information System – a reporting and querying system to support managers and supervisors by providing valuable information regarding daily operational activities such as office needs inventory, student attendance, payments received, etc.
ICT as Integrated Administration System
The Decision Support System mentioned above can only be developed effectively if there are fully integrated transaction systems at the administration and operational levels. It means that the school should have an integrated computer-based system in place. Besides providing "vertical" integration (for the decision-making process), this system also unites the four pillars of the ICT context in some ways, so that a holistic arrangement can be made. The system should be built upon a modular concept so that the school can develop it easily (e.g. to fit its financial capability), and any change in the future can be easily adopted without disturbing the whole system. The modules that should be developed, at a minimum, are:
1. Student Management System – a program that records and integrates all student learning activities, ranging from detailed grades to specific daily progress;
2. Lecturer Management System – a module that helps the school in managing all lecturer records and affairs;
3. Facilities Management System – a unit that manages the various facilities and physical assets used for education purposes (e.g. classes, laboratories, libraries, and rooms), such as their schedules, allocations, status, etc.;
4. Courses Management System – a system that handles curriculum management and the courses portfolio where all of the teachers, students, and facilities interact;
5. Back-Office System – a system that takes care of all documents and procedures related to the school's records;
6. Human Resource System – a system that deals with individual-related functions and processes such as recruitment, placement, performance appraisal, training and development, mutation, and separation;
7. Finance and Accounting System – a system that takes charge of financial management records; and
8. Procurement System – a system that tackles the daily purchasing processes of the school.
ICT as Core Infrastructure
All of the six ICT contexts explained above cannot and will not be effectively implemented without the existence of the most important assets, which are the technologies themselves. There are several requirements for the school to have a physical ICT infrastructure so that all initiatives can be executed. At a glance, these layers of infrastructure look like the seven OSI layers, stacking up from the most physical one to the intangible-asset type of infrastructure. There are 9 (nine) components that are considered important as part of such infrastructure, which are:
1. Transmission Media – the physical infrastructure that enables digital data to be transferred from one place to another, such as fiber optic, VSAT, sea cable, etc.;
2. Network and Data Communication – the collection of devices that manage data traffic in one or more network topology system(s);
3. Operating System – the core software to run computers or other microprocessor-based devices;
4. Computers – the digital processing devices that can execute many tasks as programmed;
5. Digital Devices – computer-like gadgets that have a portion of the capability of computers;
6. Programming Language – a type of instruction set that can be structured to perform special tasks run by computers;
7. Database Management – a collection of digital files that store various data and information;
8. Applications Portfolio – a set of diverse software that has various functions and roles; and
9. Distributed Access Channels – special devices that can be used by users to access any of the eight components mentioned.
Measurements of Completeness
Every school in the country has been trying to implement these applications; to measure the progress that has been made, a performance indicator should be defined. The basic indicator that can be used as a measurement is portfolio completeness. The idea behind such a measurement is to calculate what percentage of the applications in each portfolio has been implemented; this ratio reflects the completeness measurement (Picture 4). A "0% completeness" means that a school has not yet implemented any system, while "100% completeness" means that a school has implemented the entire applications portfolio (25).

Picture 4: The Calculation Formula for Portfolio Completeness

In the calculation above, a weighting system is used, based on the principle that the existence of human resources and physical technologies are the most important things (people and tools) before any process can be done (Picture 5) (26). People means that they have the appropriate competencies and willingness to involve ICT in the education processes, while technology represents the minimum existence of devices and infrastructure (e.g. computers and internet).
Picture 5: Recommended Step-By-Step Implementation
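The exact formula of Picture 4 is not reproduced in this transcription, so the following is only a minimal sketch of how such a weighted completeness score could be computed, assuming a simple weighted average over per-portfolio implementation ratios with extra weight on people-readiness and core infrastructure, as the paragraph above suggests. The portfolio names, counts, weights, and function name are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only: the actual formula of Picture 4 is not given in the text.
# Assumption: completeness is a weighted average of per-portfolio implementation ratios,
# with higher weights on the "people" and "tools" portfolios.

PORTFOLIOS = {
    # portfolio name: (applications implemented, applications defined for that portfolio)
    "source_of_knowledge": (3, 7),
    "teaching_tools": (5, 13),
    "skills_competencies": (10, 16),
    "transformation_enablers": (2, 6),
    "decision_support": (1, 4),
    "integrated_administration": (4, 8),
    "core_infrastructure": (6, 9),
}

# Hypothetical weights: skills (people) and infrastructure (tools) count double,
# reflecting the "people and tools before process" principle described above.
WEIGHTS = {
    "skills_competencies": 2.0,
    "core_infrastructure": 2.0,
}

def portfolio_completeness(portfolios: dict, weights: dict) -> float:
    """Return completeness as a percentage (0% = nothing implemented, 100% = everything)."""
    weighted_sum = 0.0
    weight_total = 0.0
    for name, (implemented, total) in portfolios.items():
        w = weights.get(name, 1.0)
        weighted_sum += w * (implemented / total)
        weight_total += w
    return 100.0 * weighted_sum / weight_total

if __name__ == "__main__":
    print(f"Completeness: {portfolio_completeness(PORTFOLIOS, WEIGHTS):.1f}%")
```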
Stakeholder-System Relationship Framework
The next important thing that should be addressed is the Stakeholder-System Relationship Framework. It consists of one-to-one relations between a system pillar and a stakeholder type, showing that there is at least one major stakeholder concerned with the existence of each application type. The seven one-to-one relationships are (Picture 6) (27):
1. Parents or sponsors of students will only select or favor a school that has embraced ICT as one of its education tools;
2. Students will expect the school to use ICT intensively in learning processes;
3. The owner of the school should think about how to transform the old conventional school into a new modern institution;
4. Teachers or lecturers must be equipped with appropriate skills and competencies to operate and use various ICT applications;
5. Employees of the school have no choice but to use the integrated ICT system to help them with every day's administration activities;
6. Management of the institution should use ICT to empower their performance, especially in the process of decision making; and
7. The Government of Indonesia has the main responsibility to provide the education communities with affordable ICT infrastructure to be used for learning purposes.
Picture 6: Stakeholder-System Relationship Framework

Stakeholder Maturity Level
It is extremely important – for a developing country like Indonesia with relatively low e-literacy – to measure the maturity level of each stakeholder in education, especially after realizing the relationships between the stakeholders and the system, and among the stakeholders themselves. By adapting the 0-5 maturity levels first used by the Software Engineering Institute (28), each stakeholder of the school can be evaluated for maturity (Picture 7). In principle, there are 6 (six) levels of maturity, as follows:
0. Ignore – a condition where a stakeholder does not really care about any issue related to ICT;
1. Aware – a condition where a stakeholder pays some attention to the emerging role of ICT in education, but it only rests in the mind;
2. Plan – a condition where a stakeholder has decided to conduct some future actions in favor of ICT;
3. Execute – a condition where a stakeholder is actively using ICT for daily activities;
4. Measure – a condition where a stakeholder applies quantitative indicators as quality assurance of ICT use; and
5. Excel – a condition where a stakeholder has successfully optimized the use of ICT for its purposes.

Picture 7: Education Stakeholders Maturity Level Table

By crossing the six levels of maturity with all seven stakeholders, more contextual conditional statements can be generated (29) based on each stakeholder's nature.
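As a rough illustration of how the 0-5 scale and its crossing with the seven stakeholder groups could be represented programmatically, here is a small sketch; the stakeholder names and scores are hypothetical examples, not survey data from the paper.

```python
from enum import IntEnum
from statistics import mean

class Maturity(IntEnum):
    """The 0-5 stakeholder maturity scale described above."""
    IGNORE = 0
    AWARE = 1
    PLAN = 2
    EXECUTE = 3
    MEASURE = 4
    EXCEL = 5

# Hypothetical assessment of one school's seven stakeholder groups.
stakeholder_maturity = {
    "parents_sponsors": Maturity.AWARE,
    "students_alumni": Maturity.PLAN,
    "owners": Maturity.AWARE,
    "teachers_lecturers": Maturity.EXECUTE,
    "employees": Maturity.PLAN,
    "management": Maturity.MEASURE,
    "government": Maturity.PLAN,
}

# The ICT-Education Matrix below uses the average stakeholder maturity (threshold 2.5).
average_maturity = mean(int(level) for level in stakeholder_maturity.values())
print(f"Average stakeholder maturity: {average_maturity:.2f}")
```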
Mapping into ICT-Education Matrix
So far, there are two parameters or indicators that can show the status of ICT-for-education development in Indonesia: portfolio completeness and maturity level. Based on research involving approximately 7,500 schools in Indonesia – from primary school to college level – the existing status of ICT development can be described as follows (Picture 8):
• Rookie – the status where the majority of schools (73%) implement less than 50% of the complete applications and have an average stakeholder maturity level of less than 2.5;
• Specialist – the status where 17% of schools have a high maturity level (more than 2.5) but implement less than 50% of the total application types;
• Generalist – the status where more than 50% of the applications have been implemented (or at least bought by the schools) but with a maturity level of less than 2.5 (approximately 9% of schools are of this type); and
• Master – the status where more than 50% of the application types have been implemented with a maturity level above 2.5 (only 1% of schools fit this ideal condition).

Picture 8: The ICT-Education Matrix
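A compact way to read this two-by-two matrix is as a simple threshold classification on the two indicators. The sketch below assumes the 50% completeness and 2.5 maturity cut-offs quoted above; the function name and return labels are my own choices for illustration.

```python
def classify_school(completeness_pct: float, avg_maturity: float) -> str:
    """Map a school onto the ICT-Education Matrix (Picture 8).

    completeness_pct: applications-portfolio completeness, 0-100.
    avg_maturity: average stakeholder maturity on the 0-5 scale.
    Thresholds (50% and 2.5) follow the quadrant definitions above.
    """
    high_completeness = completeness_pct > 50.0
    high_maturity = avg_maturity > 2.5
    if high_completeness and high_maturity:
        return "Master"
    if high_completeness:
        return "Generalist"
    if high_maturity:
        return "Specialist"
    return "Rookie"

# Example: a school with 40% completeness and average maturity 3.1 -> "Specialist"
print(classify_school(40.0, 3.1))
```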
Also coming from the research, several findings show that:
• most schools of the "Master" type are located on Java Island and are considered "rich institutions" (30);
• most schools of the "Rookie" type are considered "self-learning entrepreneurs", since their knowledge of how to explore the possibilities of using ICT in education comes from reading books, attending seminars, listening to experts, and other sources;
• most schools of the "Specialist" type are profiled schools (31) that pioneered the use of ICT some time ago; and
• most schools of the "Generalist" type are the ones that have received one or more forms of funding or help from other parties (32).

The Plan Ahead
After understanding all the issues related to the strategic roles of ICT within the Indonesian education system setting – and through an in-depth understanding of the existing conditions – a strategic action plan can be drawn up as follows (33):
• 2005-2007 – there should be 200 selected pilot schools that have successfully implemented the whole applications portfolio with a high stakeholder maturity level (master class), spread across the 33 provinces of Indonesia;
• 2007-2009 – each of these 200 schools has the responsibility to develop 10 other schools, so that there are 2,000 master-level schools in 2009;
• 2009-2010 – the same task applies to the new 2,000 schools, so that by 2010 approximately 20,000 schools can set the national standard of ICT in education (since this already covers almost 10% of the total population).
Richardus Eko Indrajit, professor of computer science at ABFI Institute Perbanas, was born in Jakarta on 24 January 1969. He completed his undergraduate program in Computer Engineering at Institut Teknologi Sepuluh Nopember (ITS) Surabaya with cum laude honors, before receiving a scholarship from the Pertamina Production Sharing Consortium to continue his studies in the United States, where he obtained a Master of Science in Applied Computer Science from Harvard University (Massachusetts, USA) with a study focus on artificial intelligence. He obtained his Doctor of Business Administration from the University of the City of Manila (Intramuros, Philippines) with a dissertation on hospital information system management. His other academic degrees are a Master of Business Administration from Leicester University (Leicester City, UK), a Master of Arts from the London School of Public Relations (Jakarta, Indonesia), and a Master of Philosophy from the Maastricht School of Management (Maastricht, the Netherlands). He has also actively participated in various academic and certification programs at a number of leading universities worldwide, such as the Massachusetts Institute of Technology (MIT), Stanford University, Boston University, George Washington University, Carnegie-Mellon University, Curtin University of Technology, Monash University, Edith Cowan University, and Cambridge University. He currently serves as Chairman of the Association of Informatics and Computer Higher-Learning Institutions (APTIKOM) for all of Indonesia and as Chairman of the Indonesian Chapter of the International Association of Software Architects (IASA). Outside academia, his professional career as an information systems and technology consultant began at Price Waterhouse Indonesia and continued with active roles as a senior and management consultant at a number of leading companies in the country, among others Renaissance Indonesia, Prosys Bangun Nusantara, Plasmedia, the Prime Consulting, the Jakarta Consulting Group, Soedarpo Informatika Group, and IndoConsult Utama. During roughly 15 years in the private sector he was directly involved in projects across diverse industries, such as banking and finance, healthcare, manufacturing, retail and distribution, transportation, media, infrastructure, education, telecommunications, tourism, and other services. He has also actively assisted the government in a number of assignments, beginning with his appointment as Widya Iswara (lecturer) of the National Resilience Institute (Lemhannas), followed by roles as Special Staff for Information Technology to the Secretary General of the Audit Board of Indonesia (BPK), Special Staff to the Research and Development Agency of the Department of Communication and Informatics, Special Staff for Information Technology to the National Narcotics Board, and Expert Consultant to the Information Technology Directorate and the Special Information Management Unit of Bank Indonesia. He has currently been appointed by the government of the Republic of Indonesia to lead the Indonesian internet oversight institution ID-SIRTII (Indonesia Security Incident Response Team on Internet Infrastructure). All of the experience he has gained as an active academic, in the private sector, and in government assignments has been written up in a number of publications. By late 2008, more than 25 of his books had been published nationally and had become references for various educational institutions, private-sector organizations, and government bodies in Indonesia, in addition to numerous articles and scientific journal papers written for national, regional, and international communities. All of these works can easily be obtained via his personal sites http://www.eko-indrajit.com or http://www.ekoindrajit.info. He can be reached daily at telephone number 0818-925-926 or email [email protected].
(Footnotes)
1. Many people believe that more than 80% of economic and business activities are conducted in or happen on this island.
2. Taken from the annual report of the Department of National Education of the Republic of Indonesia at the end of 2004.
3. Previously, the department was also responsible for national cultural affairs (i.e. the Department of Education and Culture).
4. It started with the first batch of informatics students at Bandung Institute of Technology in 1984, followed by Sepuluh November Institute of Technology Surabaya in 1985, the University of Indonesia in 1986, and Gadjah Mada University in 1987.
5. According to the PT Telkom Tbk. report on the telecommunication profile in early 2005.
6. Data from APJII, the Association of National Internet Providers, on the number of internet users in mid 2005.
7. Also taken from the APJII website at http://www.apjii.or.id.
8. Statistics from http://www.cctld.or.id.
9. It was formed under President Megawati's administration as the embryo of today's Department of Communication and Information Technology, which was formalized by President Susilo Bambang Yudhoyono.
10. The involvement of the Department of Religion is very important, as thousands of schools are owned by specific religion-based communities.
11. UU (Undang-Undang) is the highest form of regulation (act) in the national constitutional system.
12. As the education industry grows so fast, there are phenomena of commoditizing education's products and services that may go against government regulation and common ethics in promoting quality education.
13. Unlike a commercial company, no educator would ever consider closing a school under any circumstances.
14. Robert Kaplan's concept of the Balanced Scorecard can easily be adapted to an education institution by using these four pillars as the scorecard domains.
15. There are minimum standard requirements set by the government that must be obeyed by anybody who would like to establish a school.
16. Although the Indonesian government has issued several standards and guidelines that schools can use for these superstructure aspects, some schools follow other international standards such as ISO 9001:2000, the Malcolm Baldrige Quality Award, etc.
17. This concept has been agreed to become the paradigm used to develop the National ICT Blueprint for Education in Indonesia.
18. The "Competence Based Curriculum" (CBC) – a new learning/education paradigm that should be adopted by all schools in Indonesia – strongly supports the existence of ICT within the education system.
19. At least a skill in advanced search techniques is required for this purpose.
20. It is understood that the transformation can only be done if the school itself undergoes a series of fundamental changes in the paradigm, principles, and philosophy of managing today's modern educational organization.
21. Note that the government of Indonesia is in the middle of a discussion on making e-literacy a "must have" set of skills and competencies for civil servants in the country, including teachers.
22. The same holds for the blending of the physical value chain (a series of processes that require physical resources) and the virtual value chain (a series of processes that involve the flow of digital goods).
23. Don Tapscott refers to this phenomenon as "digitalization" in his principles of the "Digital Economy".
24. Making good decisions becomes important as the government urges all organizations to implement a "good governance" system.
25. Note that the stakeholders are aware of the impacts of including new technology in the portfolio. For the time being, however, it has been agreed that no new application should be added to the portfolio until a further decision has been made based on the evaluation.
26. The weights have been defined through a national consensus in implementing the blueprint on ICT for education.
27. All assumptions made are based on trends and phenomena that appear in the Indonesian market setting.
28. It is widely used in many areas such as project management, ICT human resource development, IT governance, etc.
29. A collection of statements that pictures the "mental condition" of a stakeholder, i.e. how they perceive the importance of ICT in education.
30. Meaning that such an institution has strong financial resources.
31. For example, schools that are owned by religious communities or industry-related business groups.
32. Including from the Microsoft PIL Program or other sources such as USAID, ADB, JICA, etc.
33. It has been approved and endorsed by the Ministry of Education and the Department of Communication and Information Technology.
Keynote Speech
Rahmat Budiarto, Prof.
(Universiti Sains Malaysia)
Name: RAHMAT BUDIARTO
Institution: NAv6 CENTRE USM
Position: Associate Professor, Dr.
Area(s) of Expertise: COMPUTER NETWORK, AI
Brief Biodata:
Rahmat Budiarto received his B.Sc. degree from Bandung Institute of Technology in 1986, and his M.Eng. and Dr.Eng. in Computer Science from Nagoya Institute of Technology in 1995 and 1998 respectively. Currently, he is an associate professor at the School of Computer Sciences, USM. His research interests include IPv6, network security, mobile networks, and intelligent systems. He was chairman of the APAN Security Working Group and the APAN Fellowship Committee (2005-2007). He has been a JSPS Fellow at the School of Computer and Cognitive Sciences of Chukyo University, Japan (2002). He has published 26 international journal papers and more than 100 international and local conference papers.
Paper
Saturday, August 8, 2009
13:30 - 13:50
Room L-210
MULTI-FACTOR ENTERPRISE METHODOLOGY: AN APPROACH
TO ERP IMPLEMENTATION
Gede Rasben Dantes
Doctoral Student, Computer Science Department, University of Indonesia
Email: [email protected]
Abstract
Further investigation of Information and Communication Technology (ICT) investment, especially in Indonesia, shows that a larger capital investment does not automatically bring more benefit to the company; Enterprise Resource Planning (ERP) system implementation is one example. The present research aims to develop a methodology for ERP implementation, which is a fundamental problem in achieving a successful implementation. The methodology incorporates the factors that influence ERP implementation success (technical and non-technical) as activities in each phase, because the methodologies commonly used by consultants concentrate on technical factors without considering non-technical ones. Non-technical factors included in the newly proposed ERP implementation methodology are, among others: top management commitment, support, and capability; project team composition, leadership, and skill; organizational culture; internal/external communication; and organization maturity level. The conclusions of the study are expected to be useful for the private and public sectors when implementing ERP, in order to gain optimal return value from their investment.
Keywords: Enterprise Resource Planning (ERP), Methodology, Return value.
1. Introduction
Enterprise Resource Planning (ERP) is an integrated information system that supports business processes and manages the resources in an organization. The system integrates one business unit with other business units in the same organization or across organizations. Organizations need ERP to support day-to-day activity or even to create competitive advantage.
In ERP implementation, a business transformation is always made to align the ERP business processes with the company's business strategy. This transformation consists of improving the company's business processes, reducing cost, improving service, and minimizing the effect on the company's operation (Summer, 2004). Consequently, the business processes embedded in the ERP system and the business processes that exist in the company need to be adjusted to each other in order to give added value to the company.
Several ERP systems are currently available. A study conducted by O'Leary (2000) shows that SAP (System, Application, Product in Data Processing) has the largest market share in the world, between 36% and 60%.
Unlike information systems in general, ERP is an integration of hardware and software technology with a very high investment value. However, a larger capital investment in ERP does not always give a more optimal return value to the company. Dantes (2006) found that in Indonesia almost 60% of the companies implementing ERP systems did not succeed in their implementations, while Trunick (1999) and Escalle et al. (1999) found that more than 50% of the companies implementing ERP worldwide failed to gain optimal return value.
Various studies have been conducted to find the keys to ERP implementation success, while other studies try to evaluate it. Several factors influence an organization's choice of an ERP system, such as industrial standards, government policies, creditor-bank policies, socio-political conditions, organization maturity level, implementation approach, or strategic reasons. We found that the choice of ERP adoption is often not based on organizational requirements, especially in Indonesia. On the other hand, Xue et al. (2005) found that organizational culture and environment and technical aspects influence ERP implementation success. Other research shows that 50% of the companies implementing ERP failed to gain success (IT Cortex, 2003), while in China only 10% of the companies succeeded (Zhang et al., 2003). These continuing studies on the success of ERP implementation show how critical ERP implementation still is in IT investment.
Related to this, Niv Ahituv (2002) argues that the ERP implementation methodology is the fundamental problem in implementation success. In line with this, the present research aims to develop an ERP implementation methodology that takes into account the key success factors (technical and non-technical) to be included in it.
2. Theoretical Background
One of the major issues in ERP implementation is the ERP
software itself. What should come first, the company’s
business needs or the business processes available in the
ERP software? The fundamental invariant in system design and implementation is that the final systems belong
to the users.
A study by Deloitte Consulting (1999) indicated that going live isn’t the end of ERP implementation, but “merely
the end of the beginning”. The radical changes in business practices driven by e-commerce and associated
Internet technologies are accelerating change, ensuring
that enterprise systems will never remain static.
Because of the uniqueness of ERP implementation, methodologies to support ERP systems implementation are vital (Siau, 2001, Siau and Tian 2001). A number of ERP implementation methodologies are available in the marketplace.
These are typically methodologies proposed by ERP vendors and consultants. We classify ERP methodologies into
three generations – first, second, and third generations
(Siau, 2001). Each successive generation has a wider scope
and is more complex to implement.
Most existing ERP implementation methodologies belong
to the first generation ERP methodologies. These methodologies are designed to support the implementation of an
ERP system in an enterprise, and the implementation is
typically confined to a single site. Methodologies such as
Accelerated SAP (from SAP), SMART, and Accelerated
Configurable Enterprise Solution (ACES) are examples of
first generation ERP implementation methodologies.
Second generation ERP methodologies are starting to
emerge. They are designed to support an enterprise-wide
and multiple-site implementation of ERP. Different busi-
ness units can optimize operations for specific markets,
yet all information can be consolidated for enterprise-wide
views. A good example is the Global ASAP by SAP, introduced in 1999. This category of methodologies supports
an enterprise-wide, global implementation strategy that
takes geographic, cultural, and time zone differences into
account.
Third generation ERP methodologies will be the next wave in ERP implementation methodologies. The proposed methodologies need to include the capability to support multi-enterprise and multiple-site implementation of ERP software, so that companies can rapidly adapt to changing global business conditions, giving them the required agility to take advantage of market or value chain opportunities. Since more than one company will typically be involved, the methodologies need to be able to support the integration of multiple ERP systems from different vendors, each having different databases. The multi-enterprise architecture will need to facilitate the exchange of information among business units and trading partners worldwide. The ability to support web access and wireless access is also important.
When we look more specifically at the methodologies reviewed from the literature, all of them are concerned mainly with technical factors and give little consideration to non-technical factors in the ERP implementation methodology.
As explained above, Niv Ahituv et al. (2002) proposed an ERP implementation methodology that combines the Software Development Life Cycle (SDLC), prototyping, and software packages. The methodology contains four phases, namely selection, definition, implementation, and operation.
In line with Niv Ahituv, Jose M. Esteves (1999) divided the ERP life cycle into five phases: adoption, acquisition, implementation, use & maintenance, and evolution & retirement. SAP, one of the best-known ERP products, proposed the well-known Accelerated SAP (ASAP) methodology, which contains five phases: project preparation (change charter, project plan, scope, project team organization), business blueprint (requirement review for each SAP reference structure item, defined using ASAP templates), realization (master lists, business process procedures, planning, development programs, training material), final preparation (plan for configuring the production hardware's capabilities, cutover plan, conduct end-user training), and go live & support (ensuring system performance through SAP monitoring and feedback).
However, Shin & Lee (1996) show an ERP life cycle containing three phases: project formulation (initiative, analysis of need); application software package selection & acquisition (preparation, selection, acquisition); and installation, implementation & operation.
In general, all the ERP implementation methodologies above share a similar concept, but they are more concerned with technical than non-technical factors.
3. Research Design
Methodology is a fundamental problem in ERP implementation (Juhani et al., 2001). When an organization succeeds in implementing an ERP system, the system can improve the organization's productivity and efficiency. The conceptual framework used in this research is described in Figure 1.
The variables in this research consist of independent variables, namely the ERP implementation success factors (X1 … Xn) (i.e. organization maturity level, implementation approach, top management commitment, organization culture, investment value, etc.), and the dependent variable, ERP implementation success.
With respect to the final product, this study used a literature review methodology. Developing an ERP implementation methodology is an academic activity that needs both theoretical exploration and real action. Furthermore, in planning and developing this methodology we need to identify the problems and deeply analyze the factors that influence ERP implementation success. These factors can then be used to develop a preliminary ERP implementation methodology. The phases carried out in this research are: justification of the ERP implementation success factors (technical and non-technical) from the literature review, and development of the preliminary model.
4. Result and Discussion
In this study we found that the factors that influence ERP implementation success can be shown in Table 1. These factors (technical and non-technical) will be used to develop a new ERP implementation methodology. Non-technical aspects are important things that are often forgotten by organizations when they adopt an ERP system to support their organization; many companies have failed to implement ERP systems because of this.
• ERP Implementation Success Factors
Based on the literature review, which focused on discussion and needs assessment for ERP implementation in the private and public sectors, we conclude that the factors influencing ERP implementation success can be classified into three aspects, namely Organizational, Technology, and Country (External Organizational).
- Organizational Aspects
The organizational aspect plays an important role in ERP implementation. Related to it, there are some activities that should be carried out in an ERP implementation methodology, such as: (1) identification of top management support, commitment and capability; (2) identification of project team composition and leadership; (3) identification of the business vision; (4) preparation of project scope, schedule and roles; (5) identification of the organization maturity level; (6) change management; (7) Business Process Reengineering (BPR); (8) building the functional requirements; (9) preparation of a training program; (10) building good internal/external communication; and (11) identification of the investment budget.

Figure 1. The conceptual framework for the ERP implementation methodology
Table 1. Factors that influence ERP implementation success
- Technology Aspects
This aspect covers software, hardware and ICT infrastructure. The technology aspect needs to be identified before we implement an ERP system. We can divide this aspect into several activities that are important for the ERP implementation methodology, such as: (1) identification of legacy systems; (2) software configuration; (3) choice of implementation strategy; (4) motivating user involvement; (5) identification of hardware and ICT infrastructure; (6) identification of consultant skill; (7) data conversion; and (8) systems integration.
- Country/External Organizational Aspects
For ERP implementation as an enterprise system, it is very important to consider country or external organizational aspects. Based on the literature review, we can describe some activities that support the ERP implementation methodology, such as: (1) identification of the current economy and economic growth; (2) alignment with government policy; and (3) minimizing political issues that can drive ERP implementation.
Some of the activities above need extra emphasis, namely organization maturity level and business process. Organization maturity level is an important aspect to consider before choosing the ERP product that the organization will adopt to support its operations (Hasibuan and Dantes, 2009). It can be divided into three levels, namely the operational, managerial and strategic level. Each level is defined by considering the role of IS/IT in the organization. For a company at the operational level, the ERP system only supports the company's operations, whereas a company at the strategic level can use it to create a competitive advantage.
The other important activity is the business process, which is embedded in the ERP product as best practice. Many organizations change the ERP business processes to match their own business processes, which contributes to the failure of ERP implementations. Changes in process have a more significant impact than changes in technology: a process change in an organization has to be followed by the implementation of change management, while technology changes are usually followed by training to improve the employees' skills. From this aspect we can describe two activities that significantly influence the development of the ERP implementation methodology, namely implementing change management, and identifying the alignment of the organization's business processes with the ERP business processes.
• Comparison of ERP Implementation Methodologies
Many ERP methodologies are used by consultants and vendors to implement these systems. In this study we compare some of the methodologies in common use, namely Accelerated SAP (ASAP) and the ERP life cycle models of Shin & Lee, Niv Ahituv et al. and Jose M. Esteves et al. In general, all of the methodologies have similar components, namely: a selection phase (in which we compare ERP products and choose the one most suitable to the organization's requirements and budget), project preparation (in which we prepare all requirements for the project, such as the internal project team, consultants, project scope, functional requirement building, etc.), implementation & development (in which we configure the software/ERP product to suit the organization's requirements), and finally operation & maintenance (in which the system is deployed to production and supported/maintained).
Normally, the methodologies used by consultants are concerned with technical aspects without considering non-technical aspects. Through this study we try to identify the non-technical aspects that influence ERP implementation success, such as: top management support, commitment and capability; project team composition, leadership and skill; business vision; organization maturity level; organization culture; internal/external project team communication, etc. All the non-technical factors above are used to build an ERP implementation methodology, as activities in each phase.

Figure 2. Comparison of ERP implementation methodologies

• Preliminary ERP Implementation Methodology
Based on the activities above, we can develop the ERP implementation methodology as a preliminary design. We divide the ERP implementation methodology into five phases, as follows (a simple sketch of this phase/activity structure is given after the list):
(1) ERP Selection Phase: in this phase we compare ERP products and select the one most suitable for the organization. It contains activities such as: aligning the chosen ERP product with the organization's IS/IT strategy; aligning with government/company policy; matching with industrial standards; identifying the business vision; checking suitability with the organizational culture; identifying the investment budget; identifying internal IS/IT (hardware and software); identifying the ICT infrastructure; identifying the organization maturity level; and identifying the alignment between the organization's business processes and the ERP business processes.
(2) Project Preparation Phase: contains activities such as: identification of top management support, commitment and capability; identification of project team composition, leadership and skill; identification of project scope, schedule, investment and roles; functional requirement building; identification of internal/external project team communication; identification of the legacy systems that will be integrated with the ERP product; choice of implementation strategy; definition of the required consultant skills; definition of the job descriptions of project team members; and motivation of user involvement.
(3) Implementation & Development Phase: contains activities such as: developing the implementation plan; ERP or software configuration; business process reengineering (BPR); data conversion; change management; system integration; application penetration; and training.
(4) Operational and Maintenance Phase: contains the activities of operating and maintaining the software package and evaluating and auditing the system periodically.
(5) Support and Monitoring Phase: ensuring system performance through ERP monitoring and support.
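The phase/activity structure above lends itself to a simple checklist representation. The following is a minimal sketch, not part of the paper itself, of how the proposed phases and a subset of their activities could be encoded for tracking purposes; the phase and activity names are taken from the lists above, while the data structure and the reporting function are purely illustrative assumptions.

# Hypothetical checklist representation of the proposed five-phase methodology.
ERP_METHODOLOGY = {
    "1. ERP Selection": [
        "align ERP product with IS/IT strategy",
        "identify organization maturity level",
        "align organization and ERP business processes",
    ],
    "2. Project Preparation": [
        "identify top management support, commitment and capability",
        "identify project team composition, leadership and skill",
        "identify project scope, schedule, investment and roles",
    ],
    "3. Implementation & Development": [
        "software configuration", "BPR", "data conversion",
        "change management", "system integration", "training",
    ],
    "4. Operational & Maintenance": [
        "operate and maintain the software package",
        "evaluate and audit the system periodically",
    ],
    "5. Support & Monitoring": [
        "monitor ERP performance and provide support",
    ],
}

def report_progress(completed):
    """Print how many activities of each phase have been completed so far."""
    for phase, activities in ERP_METHODOLOGY.items():
        done = sum(1 for activity in activities if activity in completed)
        print(f"{phase}: {done}/{len(activities)} activities completed")

if __name__ == "__main__":
    report_progress({"software configuration", "data conversion"})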
The aim of this study is to propose a new ERP implementation methodology that can minimize implementation failures. With this methodology, an ERP implementation should give an optimal return value for the organization itself. The methodology already incorporates the factors that influence ERP implementation success. It gives us guidance in exercising the components that matter most when implementing an ERP system, such as: how to assess top management support, commitment and capability; how to build a project team with good composition, leadership and skill; how to identify the organization's business vision so that the chosen ERP product suits it; how to work out the project scope, schedule, investment and roles; how to identify the organization maturity level, so that we can select the right ERP product and decide which modules to implement to support the organization's operations; how to build the functional requirements; and how to build good communication within and outside the project team.
5. CONCLUSION
In the light of the findings of this study, it can be concluded that the preliminary ERP implementation methodology is divided into five phases, namely: (1) ERP Selection Phase, (2) Project Preparation Phase, (3) Implementation & Development Phase, (4) Operational & Maintenance Phase, and (5) Support & Monitoring Phase. This methodology should give the organization an optimal return value, because each phase contains the factors that influence ERP implementation success.
6. FURTHER RESEARCH
This study shows that several aspects influence ERP implementation success, which we classify into organizational, technology, and country/external organizational factors. Each aspect contains activities that should be included in the proposed preliminary ERP implementation methodology. For further research, we need to explore more deeply an ERP implementation methodology that suits organizational cultures, especially in Indonesia, and that fits specific industrial sectors.
REFERENCES
Ahituv N., Neumann S. and Zviran M. (2002), A System Development Methodology for ERP Systems, The Journal of Computer Information Systems.
Allen D., Kern T., Havenhand M. (2002), ERP Critical Success Factors: An Exploration of the Contextual Factors in Public Sector Institutions, Proceedings of the 35th Hawaii International Conference on System Sciences.
Al-Mashari M., Al-Mudimigh A., Zairi M. (2003), Enterprise Resource Planning: A Taxonomy of Critical Factors, European Journal of Operational Research.
Brown C., Vessey I. (1999), Managing the Next Wave of Enterprise Systems: Leveraging Lessons from ERP, MIS Quarterly Executive.
Dantes G. R. (2006), ERP Implementation and Impact for Human & Organizational Cost, Magister Thesis of Information Technology, University of Indonesia.
Davison R. (2002), Cultural Complications of ERP, Communications of the ACM.
Deloitte Consulting (1999), ERP's Second Wave: Maximizing the Value of Enterprise Applications and Processes, http://www.dc.com/Insights/research/cross_ind/erp_second_wave_global.asp
Esteves J., Pastor J. (2000), Towards the Unification of Critical Success Factors for ERP Implementations, Proceedings of the 10th Annual Business Information Technology (BIT) Conference, Manchester, UK.
Gargeya V. B., Brady C. (2005), Success and Failure Factors of Adopting SAP in ERP System Implementation, Business Process Management Journal.
Gunson J. and de Blasis J.-P. (2002), Implementing ERP in Multinational Companies: Their Effects on the Organization and Individuals at Work, Journal ICT.
Hasibuan Z. A. and Dantes G. R. (2009), The Relationship of Organization Maturity Level and Enterprise Resource Planning (ERP) …, … Management Journal.
Holland C. P., Light B. (1999), A Critical Success Factors Model for ERP Implementation, IEEE Software.
Iivari J. et al. (2001), A Dynamic Framework for Classifying Information Systems Development Methodologies and Approaches, Journal of Management Information Systems.
Liang H., Xue Y., Boulton W. R., Byrd T. A. (2004), Why Western Vendors Don't Dominate China's ERP Market?, Communications of the ACM.
Martinsons M. G. (2004), ERP in China: One Package, Two Profiles, Communications of the ACM.
Motwani J., Akbul A. Y., Nidumolu V. (2005), Successful Implementation of ERP Systems: A Case Study of an International Automotive Manufacturer, International Journal of Automotive Technology and Management.
Murray M. G., Coffin G. W. A. (2001), A Case Study Analysis of Factors for Success in ERP System Implementation, Proceedings of the Americas Conference on Information Systems, Boston, Massachusetts.
O'Kane J. F., Roeber M. (2004), ERP Implementation and Culture Influences: A Case Study, 2nd World Conference on POM, Cancun, Mexico.
O'Leary D. E. (2000), Enterprise Resource Planning Systems: Systems, Life Cycle, Electronic Commerce, and Risk, Cambridge University Press, Cambridge.
Parr A., Shanks G. (2000), A Model of ERP Project Implementation, Journal of Information Technology.
Rajapakse J. and Seddon P. (2005), Utilizing Hofstede's Dimensions of Culture to Investigate the Impact of National and Organizational Culture on the Adoption of Western-based ERP Software in Developing Countries in Asia.
Reimers K. (2003), International Examples of Large-Scale Systems – Theory and Practice I: Implementing ERP Systems in China, Communications of the AIS.
Rosemann M., Sedera W., Gable G. (2001), Critical Success Factors of Process Modeling for Enterprise Systems, Proceedings of the Americas Conference on Information Systems, Boston, Massachusetts.
Siau K. (2001), ERP Implementation Methodologies – Past, Present, and Future, Proceedings of the 2001 Information Resources Management Association International Conference (IRMA 2001), Toronto, Canada.
Soh C., Kien S. S., Tay-Yap J. (2000), Enterprise Resource Planning: Cultural Fits and Misfits: Is ERP a Universal Solution?, Communications of the ACM.
Somers T. M., Nelson K. G. (2004), A Taxonomy of Players and Activities Across the ERP Project Life Cycle, Information and Management.
Summer M. (2004), Enterprise Resource Planning, Upper Saddle River, New Jersey.
Tsai W., Chien S., Hsu P., Leu J. (2005), Identification of Critical Failure Factors in the Implementation of Enterprise Resource Planning (ERP) Systems in Taiwan's Industries, International Journal of Management and Enterprise Development.
Umble E., Haft R., Umble M. (2003), Enterprise Resource Planning: Implementation Procedures and Critical Success Factors, European Journal of Operational Research.
Wassenaar A., Gregor S. and Swagerman D. (2002), ERP Implementation Management in Different Organizational and Cultural Settings, European Accounting Information Systems Conference, http://accountingeducation.com/ecais
Xue Y., et al. (2005), ERP Implementation Failures in China: Case Studies with Implications for ERP Vendors, International Journal of Production Economics.
Zhang Z., Lee M. K. O., Huang P., Zhang L., Huang X. (2002), A Framework of ERP Systems Implementation Success in China: An Empirical Study, International Journal of Production Economics.
Paper
Saturday, August 8, 2009
13:30 - 13:50
Room L-211
EDGE DETECTION USING CELLULAR NEURAL NETWORK
AND TEMPLATE OPTIMIZATION
Widodo Budiharto, Djoko Purwanto, Mauridhi Hery Purnomo
Electrical Engineering Department, Sepuluh Nopember Institute of Technology (ITS) Surabaya
Jl. Raya ITS, Sukolilo, Surabaya 60111, Indonesia
[email protected]
Abstract
The result of edge detection using a CNN may not be optimal, because the quality of the result depends on the template applied to the images. During the first years after the introduction of the CNN, many templates were designed by cut-and-try techniques; today, several methods are available for generating CNN templates or algorithms. In this paper we present a method to obtain an optimal edge detection result by using TEMPO (Template Optimization). The results show that template optimization improves the image quality of the edges and reduces noise. The edge detection is simulated with the CANDY simulator, and the program and the optimized template are then implemented in MATLAB. Compared to the Canny and Sobel operators, the image shapes produced by the CNN edge detector also appear more realistic and effective to the user.
Keywords: CNN, edge detection, TEMPO, template optimization.
I. Introduction
A cellular neural network (CNN) is a 2 dimensional
rectangular structure, composed by identical analogical
non-linear processors, named cells [1]. CNN can be used
in many scientific applications, such as in signal
processing, image processing and analyzing 3D complex
surfaces [9]. In this paper, we implement an edge detection program based on CNN and optimize it using TEMPO, which is provided by a CNN simulator called the CANDY Simulator [7].
The basic circuit unit of CNNs contains linear and nonlinear
circuit elements, which typically are linear capacitors, linear
resistors, linear and nonlinear controlled sources, and
independent sources.
Figure 1. Typical circuit of a single cell
The structure of cellular neural networks is similar to that
found in cellular automata; namely, any cell in a cellular
neural network is connected only to its neighbor cells. All
the cells of a CNN have the same circuit structure and
element values. Theoretically, we can define a cellular neural
network of any dimension, but in this paper we will focus
our attention on the two dimensional for image processing.
A typical circuit of a single cell is shown in the figure 1
below.
Each cell contains one independent voltage source E
uij (Input), one independent current source I (Bias),
several voltage controlled current sources Inuij, Inyij,
and one voltage controlled voltage source Eyij (Output).
The controlled current sources Inuij are coupled to
neighbor cells via the control input voltage of each
neighbor cell. Similarly, the controlled current sources
Inyij are coupled to their neighbor cells via the feedback
from the output voltage of each neighbor cell [2].
The CNN allows fully parallel image processing, a given operation being executed simultaneously over the entire image. An example of a two-dimensional cellular neural network is shown in Fig. 2.

Figure 2. A two-dimensional cellular neural network. This circuit is of size 4x4; the squares are the circuit units called cells, and the links between the cells indicate that there are interactions between the linked cells [2].

The state equation of a cell is [2]:

\dot{x}_{ij} = -x_{ij} + \sum_{C(k,l) \in S_r(i,j)} A(i,j;k,l)\, y_{kl} + \sum_{C(k,l) \in S_r(i,j)} B(i,j;k,l)\, u_{kl} + z \qquad (1)

where x_{ij} represents the state, u_{kl} is the input, y_{kl} is the output, and z is the threshold of the cell C(i,j). A(i,j;k,l) is the feedback operator and B(i,j;k,l) is the input synaptic operator; the ensemble (A, B, z) is named the template. C(k,l) are the cells from an r-order neighborhood S_r of the cell C(i,j).

The system structure of a center cell is represented in Figures 3 and 4.

Figure 3. Signal flow structure of a CNN with a 3x3 neighborhood.

Figure 4. Structure of cell Cij. Arrows printed in bold mark the parallel data paths from the inputs and outputs of the surrounding cells, ukl and ykl; arrows with thinner lines denote the threshold, input, state and output, z, uij, xij and yij respectively.

II. LITERATURE

Edge Detection
In general, edge detection is defined as finding the boundary between two regions (two adjacent pixels) whose intensities differ strongly [3]. Other edge operators include Sobel, Prewitt, Roberts and Canny [3]. In this research we compare our result with the Sobel and Canny edge detectors as two other important methods [6].

EDGEGRAY CNN
We use the EDGEGRAY CNN template for edge detection on gray-scale input images; it accepts gray-scale input images and always converges to a binary output image. One application of this CNN template is to convert gray-scale images into binary images, which can then be used as inputs to many image-processing tasks that require a binary input image. The gray-scale edge detection template, with a bias z of -0.5, is given as template (2) in Table 1.

Table 1. Template for gray-scale edge detection.

III. SYSTEM ARCHITECTURE
We use MATLAB and a webcam for capturing images, and CANDY (a CNN simulator) [7] for testing edge detection on the images. For optimizing the template we use TEMPO, which is provided by CANDY. A block diagram showing the order of the processes in the system is given in Figure 5.
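As an illustration of how such a CNN performs edge detection, the following is a minimal sketch (in Python/NumPy, not the authors' MATLAB implementation) that integrates the state equation (1) with explicit Euler steps, using the optimized edgegray template values, time step and iteration count reported in Section IV; the ZEROFLUX boundary used in the CANDY script is approximated here by zero padding, which is an assumption of the sketch.

import numpy as np
from scipy.signal import convolve2d

# Template values taken from the optimized edgegray template in Section IV.
A = np.array([[0, 0, 0],
              [0, 1, 0],
              [0, 0, 0]], dtype=float)            # feedback operator A
B = np.array([[-1, -1, -1],
              [-1,  8, -1],
              [-1, -1, -1]], dtype=float)         # control (input) operator B
z = -1.0                                          # bias / threshold ("current")

def cnn_edge_detect(u, steps=80, dt=0.1):
    """Integrate the CNN state equation (1) with explicit Euler steps.

    u is the input image scaled to the CNN range [-1, +1].
    Returns the (near-binary) output image after `steps` iterations.
    """
    x = np.zeros_like(u)                          # initial state x(0) = 0, as in the script
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))   # standard piecewise-linear CNN output
        dx = (-x
              + convolve2d(y, A, mode="same", boundary="fill")
              + convolve2d(u, B, mode="same", boundary="fill")
              + z)
        x = x + dt * dx
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))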
Figure 5. Block diagram of edge detection using CNN with template optimization.

First, the original edgegray template is given to the TEMPO program. Using the program's features, we can optimize the template; the result below is the optimized template for edgegray edge detection:

neighborhood: 1
feedback:
0.0000 0.0000 0.0000
0.0000 1.0000 0.0000
0.0000 0.0000 0.0000
control:
-1.0000 -1.0000 -1.0000
-1.0000 8.0000 -1.0000
-1.0000 -1.0000 -1.0000
current: -1.0000

Then, the script below shows the program generated by TEMPO to run the optimized edgegray detection on the CNN, with the number of iterations modified:

{START: temrun}
Initialize SIMCNN CSD
TimeStep 0.100000
IterNum 80
OutputSampling 1
Boundary ZEROFLUX
SendTo INPUT
PicFill STATE 0.0
TemplatePath C:\Candy\temlib\
CommunicationPath C:\CANDY\
TemLoad o_edgegray.tem
RunTem
Terminate
{STOP: temrun}

The template values and script above are used by CANDY for the simulation.

IV. RESULTS
In this section the experimental results obtained with CANDY and MATLAB are presented. Let us consider the image in Figure 6.

Figure 6. Edge detection using the CANDY Simulator.

The figure above shows the result of edge detection without template optimization; it can be seen that a lot of noise appears (see the image detail below).
Figure 7. Result of edge detection without template optimization.

Using the template and script above, we implemented edge detection based on CNN using CANDY; the result is shown below.

Figure 8. Result of edge detection with template optimization.

The figure above shows that with template optimization some of the noise is reduced, which indicates that the template optimization was implemented successfully. The figure below shows a program developed in MATLAB to implement edge detection using CNN and template optimization.

Figure 9. Implementation using MATLAB for template optimization.

To evaluate the effectiveness of CNN against other operators, we compared it with the Sobel and Canny operators; the figure below indicates that the image processed with the CNN edge detector appears more realistic and is easier to understand.

Figure 10. Original image (a); compared to the Sobel (c) and Canny (d) edge detectors, the CNN edge detector with z = 0.8, refined with a closing operation (b), appears more realistic to the user.

V. CONCLUSION
In this paper, we have investigated the implementation of CNN and template optimization for edge detection. Based on the experiments, template optimization proved able to improve the quality of the images for edge detection. Template optimization also reduces noise, but it leaves some important lines disconnected; to solve this problem, we can use a closing operation.

VI. FUTURE WORK
CNN is a very important method for image processing. We propose that this system be used in systems that need high-speed image processing, such as robotic systems for object tracking and image processing in medical applications. We will continue working on CNN for the development of high-speed image tracking in a servant robot.

VII. REFERENCES
[1] Chua LO, Yang L, “Cellular Neural Networks: Theory”, IEEE Transactions on Circuits and Systems, vol. 35, 1988, pp. 1257-1272.
[2] Chua LO, Roska T, Cellular Neural Networks and
Visual Computing, Cambridge University Press,
2002.
[3] Gonzalez, Rafael C. and Richard E. Woods. Digital
Image Processing. 3rd ed. Englewood Cliffs, NJ:
Prentice-Hall, 2004.
[4] Alper Basturk and Enis Gunay, “Efficient edge
detection in digital images using a cellular neural
network optimized by differential evolution
algorithm”, Expert Systems with Applications 35,
2009, pp 2645-2650.
[5] Koki Nishizono and Yoshifumi Nishio, “Image
Processing of Gray Scale Images by Fuzzy Cellular
Neural Network”, International workshop on
Nonlinear Circuits and Signal Processing, Hawaii,
USA, 2006.
[6] Febriani, Lusiana, “Analisis Penelusuran Tepi Citra
menggunakan detector tepi Sobel dan Canny”,
Proceeding National Seminar on Computer and
Intelligent System, University of Gunadarma, 2008.
[7] CANDY Simulator,
www.cnntechnology.itk.ppke.hu\
[8] CadetWin CNN application development environment and toolkit under Windows, Version
3.0, Analogical and Neural Computing Laboratory,
Hungarian Academy of Sciences, Budapest, 1999.
[9] Yoshida, T. Kawata, J. Tada, T. Ushida, A.
Morimoto, Edge Detection Method with CNN, SICE
2004 Annual Conference, 2004, pp.1721-1724.
Paper
Saturday, August 8, 2009
14:45 - 15:05
Room L-212
Global Password
for Ease of Use, Control and Security in E-Learning
Untung Rahardja, M.T.I.
Jazi Eko Istiyanto, Ph.D
Valent Setiatmi, S.Kom
STMIK RAHARJA
Raharja Enrichment Centre (REC)
Tangerang - Banten, Republic of Indonesia
[email protected]
Gadjah Mada University
Indonesia
[email protected]
STMIK RAHARJA
Raharja Enrichment Centre (REC)
Tangerang - Banten, Republic of Indonesia
[email protected]
Abstract
Authentication is applied in a system to maintain the confidentiality of information and the security of data. The most common way is through the use of a password. However, such an authentication process can cause inconvenience to both users and administrators when it takes place in an environment that has many different systems, each of which implements its own authentication process. Through the global password method, a user does not need to enter passwords repeatedly to enter multiple systems at once. In addition, the administrator does not need to adjust the data in each system's database when changes occur in user data. This article identifies the problems faced by an e-learning institution regarding password authentication in web-based information systems, defines seven characteristics of the authentication concept using the global password method as a problem-solving step, and sets out the benefits of implementing the new concept. It also shows snippets of a program written in ASP and its implementation in e-learning in the Student Information Services (SIS) OJRS (Online JRS) at Perguruan Tinggi Raharja. With the global password as a method in e-learning, not only the level of security receives attention, but also convenience and ease of use, both during processing and at control time.
Index Terms—Global Password, Authentication, Security, Database, Information System
I. Introduction
In a system, the external environment affects system operation and can be harmful or beneficial to the system. External security, internal security, and security of the user interface are the three types of system security that can be applied [3]. Security in a system is very important because the information system provides the information needed by the organization [2]. The aspect of security that is most often considered is the user interface, which is related to identifying the user before the user is permitted to access the programs and stored data. One of its main components is authentication. The type of authentication most widely used is knowledge-based authentication, that is, the use of a password or PIN [1].
II. Problems
In an information system that implements password authentication, each user logs in to the system by typing in a username and password, which ideally are known only to the system and the user.

Figure 1. A user logs in to a system.

The process above does not seem to have any problems, because the user accesses only one system. The situation is different, however, when the user is in an environment where there is more than one system.
When each system has its own authentication process, it can cause inconvenience for a user who has many accounts, because every time the user accesses a different system he must type the password again for each system. The situation becomes even more difficult if the user has a different username and password for each system.
Figure 2. A user logs in to more than one system.

The log-in process is the moment when the system verifies that the user attempting to gain access is actually entitled to it. Web-based information systems usually store the username and password in a table in a database; the system therefore checks the database to see whether the username and password entered match or not.

Figure 3. A system checks the username and password in its user database.

In a management information system there are usually administrators who are responsible for authentication. The administrator must be able to ensure that the authentication data of each user on the system is always up to date; when a username or password changes, the administrator must be ready to update the related data in the system's database.

This kind of condition becomes complicated in an environment with multiple systems, where each system has its own password database. The difficulty lies in synchronizing the user authentication data between one system and the others, especially when a user has an account on more than one system.

Figure 4. Each system checks the username and password in its own user database.
In this case, if a user intends to change the password for all of his accounts, the administrator must be ready to update the user's password data in every system's database. The administrator must also know in which systems the user holds an account. Circumstances such as these make it harder for administrators to ensure that the user authentication data is always up to date.
III. Literature Review
1. Research conducted by Markus Volkmer of the Hamburg University of Technology, Institute for Computer Technology, Hamburg, Germany, entitled Entity Authentication and Authenticated Key Exchange with Tree Parity Machines (TPMs). This research proposes a concept as an alternative way to secure a symmetric key, adding a direct method for accessing many systems using TPMs. TPMs can help avoid a man-in-the-middle attack.
2. Research conducted by Shirley Gaw and Edward W. Felten, titled Password Management Strategies for Online Accounts. This study discusses password security and develops a commentary on implementing password management strategies focused on online accounts. There is a gap between what technology can do to help and the methods currently provided; the methods discussed aim to avoid identity theft and to discourage users from writing down passwords for web sites.
3. Research conducted by Whitfield Diffie, titled Authentication and Authenticated Key Exchanges, discusses adding a password as the exchange point by using a simple asymmetric technique. The goal is a protocol for communicating over a system with the assurance of a high level of security. Password security is fundamental, since no point of the exchange stands alone; authentication and key exchange must be related, because otherwise someone could take over a party in the key exchange.
4. Research conducted by Pierangela Samarati (Computer Science Technology) and Sushil Jajodia (Center for Secure Information Systems), entitled Data Security. Computer technology develops ever faster, is more sophisticated, and has high capabilities, including larger memory capacity, faster data processing, very complex (multi-function) operation, and easier operation through packaged computer programs; this development also affects data security. From the references from several pieces of research and the literature, there are some steps that can be taken as a form of data security, namely: identification and authentication, access control, audit, and encryption. These steps are taken to maintain the confidentiality/privacy, integrity, and availability of data. The implementation of the research can be seen in several instances of securing data for an application. The research also introduces various instruments for data security, for example cryptographic techniques for password security. In line with the progress of information technology, research should also address how to manage the security of data on the Internet.
5. Research conducted by Chang-Tsun Li, entitled Digital Watermarking Schemes for Multimedia Authentication, from the Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK. This study discusses digital watermarking schemes for multimedia. The power of digital processing devices for duplication and manipulation makes near-perfect forgery and disguise possible, which is a major concern in the current era of globalization, so the importance of validating and verifying content becomes more evident and acute. Traditional digital signature information is less appropriate because it may still allow counterfeiting. Data validation techniques for digital media are divided into two categories, namely labelling-based techniques and watermark-based techniques; the main difference between the two categories lies in how the authentication data or signature is provided.
6. Research conducted by T. Mark A. Lomas and Roger Needham, entitled Strengthening Passwords. This research discusses a method to strengthen passwords. The method does not require the user to memorize or record a new password, and does not require extra hardware. Traditional passwords are still the most common basis for validation, but users often have weak passwords, because strong passwords are difficult to remember. Password strengthening is an extension of traditional password mechanisms; the technique is easy to implement in software and is conceptually simple.
7. Research conducted by Benny Pinkas and Tomas Sander, entitled Securing Passwords Against Dictionary Attacks. This study discusses the use of passwords as a main point of vulnerability in security-sensitive settings. The proposed perspective helps to solve this problem within the restrictions of the real world, such as the infrastructure, hardware, and software available. It is easy to implement and overcomes many of the difficulties of previously proposed methods, improving the security of this authentication scheme. The proposed scheme also provides better protection against attacks on a service's user accounts.
8. Research conducted by Kwon Taekyoung, titled Authentication and Key Agreement via Memorable Password, discusses a new protocol called AMP (Authentication and key agreement via Memorable Password). AMP is designed to strengthen passwords and to resist dictionary attacks, and can be used to improve security in a distributed environment. The most widely used methods offer several benefits such as simplicity, comfort, adaptability, mobility, and fewer hardware requirements, and require the user to remember only one password; therefore, this method allows the user to move around comfortably without a hardware token.
IV. Problem Solving
To overcome the problems described above, the Global Password method can be applied. Here are the seven characteristics of the Global Password as applied to the authentication process in an information system:
1. Each user has only one username and password
2. The username and password of each user are the same in every system
3. Users log in only once to be able to enter more than one system
4. User authentication data for the entire system is stored in the same database
5. There are levels of authorization
6. User sessions are adjusted across the systems
7. The users' password data stored in the database is encrypted
The problem of user inconvenience in typing the username and password repeatedly is solved by simplifying the authentication process. Based on point 3 of the Global Password characteristics, a user only needs to do this once, namely when the user first logs in to the system.
After the user logs in and is declared eligible, the user can go directly to the desired systems without having to type the username and password again, provided that the user has an account on the systems that will be accessed.
Figure 5. The user logs in only once, at the beginning.

This is related to two other characteristics of the Global Password, namely points 1 and 2: each user is given only a single username and password, which can be used for all systems at once. Having the same username and password is what allows the systems to communicate with each other for data verification. To let the user move from one system to another without logging in again, each system must also be able to read the others' sessions and adjust to them; this condition corresponds to characteristic point 6 of the Global Password.

To make it easier for the administrator to control the user authentication data, the username and password data are stored in the same place. In accordance with Global Password characteristic point 4, the data is stored in a table in a single database that is used collectively by all of the related systems.

Figure 6. The username and password of each user for every system are stored in the same database.

This condition makes it easier for administrators to control the user authentication data, because they do not need to update several system databases when a change happens to a single user.

In addition, point 5 of the Global Password characteristics mentions that this method is also applied to support user-level authorization. In the table storing the usernames and passwords, the authorization level of each user can be distinguished based on the classification of the stored data, in accordance with the wishes and needs of the organization.
In terms of security, the Global Password is also equipped with an encryption process. According to Global Password characteristic point 7, the password of each user is stored in encrypted form, so that it is not easy for someone else to find out a person's real password.
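The paper leaves the exact form of the encryption open (it notes later that the stored value can be numbers only or a combination of characters, tailored to the organization). Purely as a minimal sketch of one possible choice, assuming a salted one-way hash is acceptable for the stored password value, it could be produced and checked as follows; the function names are illustrative, not part of the described system.

import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); store both instead of the plain-text password."""
    if salt is None:
        salt = os.urandom(16)                       # per-user random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return hmac.compare_digest(candidate, digest)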
V. Implementation
The Global Password authentication method is implemented at Perguruan Tinggi Raharja (Raharja University), namely in the SIS OJRS (Online JRS) information system. Student Information Services, commonly abbreviated SIS, is a system developed by Raharja University to provide optimal information services to students [4]. The development of SIS is also a channel for publications by Raharja University in the field of computer science and the IT world in particular [4].
SIS has been developed in several versions, each of which is a continuation of the previous version. SIS OJRS (Online Schedule Study Plan) is version 4 of SIS. As the name suggests, SIS OJRS was made for the needs of student lectures, namely to prepare the JRS (Study Plan Schedule) and KRS (Study Plan Card) of students.
Within SIS OJRS there are several subsystems: RPU ADM, ADM Lecturer, Academic, GO, Pool Registration, Assignment, and Data Mining. Each subsystem is associated with one or more units of Raharja University. Therefore, to make it easier for users to switch access between subsystems, the Global Password concept is applied, since it is not uncommon for a user to have an account in more than one subsystem and to have to move from one subsystem to another.
Figure 7. Log-in page for the initial authentication at SIS OJRS.

The picture above shows the screen displayed when the user first enters and accesses SIS OJRS. On that page, the user must type a username and password for authentication. The system then checks the authentication data; when it is declared valid, the user can directly access the subsystems within SIS OJRS without needing to type the username and password again, of course with the appropriate level of authorization given to the user concerned.
A. Database
SIS OJRS is implemented on the Raharja University database server using SQL Server. On this server, in addition to the databases used by the systems, a special master database is provided to store all usernames and passwords. This database is integrated with all other systems, including previous versions of SIS. In this database, the tables required for the authentication process are created. There are two types of tables that must be prepared, namely: a table that contains the authentication data, and tables containing authorization-level information.
Figure 9. Table structure of Tbl_Password
The table above is the main place where the data needed for authentication is stored. Its fields are defined in compliance with the existing systems. The fields Name, Username, Password, Occupation, and IP_Address describe the user data itself, while the remaining fields record the user's authorization level for each subsystem he may enter.
The contents of the password field should be stored in encrypted form. This is so that the password is not easy for another person to guess, considering that with this method one password can be used to enter many systems at once. The form of encryption can vary and is tailored to the needs of the organization: it can consist of numbers only, or of a combination of numbers, letters, and other characters.
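The paper does not prescribe a particular encryption scheme for the Password field. As a minimal illustration only (written in Python rather than the ASP/SQL Server stack used by SIS OJRS, with hypothetical function names), a salted hash could be stored instead of the plain password and re-checked at login:

import hashlib
import os

def hash_password(password: str, salt: bytes = None) -> str:
    """Return a salted SHA-256 digest suitable for storing in the Password field."""
    if salt is None:
        salt = os.urandom(16)                 # random per-user salt
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt.hex() + "$" + digest          # keep the salt next to the digest

def verify_password(password: str, stored: str) -> bool:
    """Re-hash the supplied password with the stored salt and compare."""
    salt_hex, digest = stored.split("$")
    recomputed = hashlib.sha256(bytes.fromhex(salt_hex) + password.encode("utf-8")).hexdigest()
    return recomputed == digest

Any comparable scheme would satisfy characteristic point 7; the essential property is that the stored value does not reveal the real password.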
The fields such as OJRS_All, OJRS_RPU, OJRS_ADM_Dosen, and so forth, are created with the smallint data type. This is because the contents of these fields are only numbers; the value of each number represents the authorization level granted to the user concerned.
Figure 10. Tbl_Password table of contents
To explain the meaning of these numbers, additional tables are needed, each of which functions as a description for one field in the main table.
Figure 11. Table structure Tbl_OJRS_RPU
The contents of this table explain the meaning of each number entered in the main table, that is, the user's authorization level: whether the user can only read the system (read), make changes to stored data (update), or has no rights at all (null).
Figure 12. Tbl_OJRS_RPU table of contents
B. Program Listing
By using the Global Password, the verification process through the input of a username and password happens only once. To further maintain the security of the system, the password entered is first encrypted. The encryption method is not restricted; it is tailored to the needs. In addition, checking the IP address can also be added at the time of verification. Below is a snippet of the ASP script used at the time the user logs in.
Figure 13. Snippet of the script when the user logs in
In this script the user's authorization level is checked. If the user does not have rights to a system, the user automatically cannot enter it. Placing this check correctly is the key to the security of the system: identity proofing and the user-level authorization check must be performed at the top of each page, that is, every time a user wants to enter a system.
If, after the user logs in, he is declared eligible to enter the system, a session is formed. This session determines whether the user can enter the other systems or not. Below is a snippet of the ASP script used when a user who has successfully logged in to SIS OJRS wants to access another system within it, in this case RPU ADM.
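The ASP snippets themselves appear only as figures and do not survive in this transcription. The sketch below, in Python and with illustrative names (the connection object, the encrypt() stand-in, and the helper functions are assumptions, not the authors' code), only mirrors the flow just described: one initial verification against Tbl_Password, optionally including the IP address, a shared session, and a per-subsystem authorization check on columns such as OJRS_RPU.

import hashlib

def encrypt(password: str) -> str:
    """Stand-in for whatever encryption the organization chooses (point 7)."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def login(conn, session, username, password, ip_address):
    """Verify the user once; afterwards the shared session is enough."""
    row = conn.execute(
        "SELECT Password, IP_Address FROM Tbl_Password WHERE Username = ?",
        (username,)).fetchone()
    if row is None or row[0] != encrypt(password):
        return False                              # wrong username or password
    if row[1] and row[1] != ip_address:
        return False                              # optional IP address check
    session["username"] = username                # session read by every subsystem
    return True

def can_enter(conn, session, subsystem_field):
    """Authorization check for one subsystem column, e.g. 'OJRS_RPU'.
    subsystem_field must be a trusted constant, never user input."""
    if "username" not in session:                 # no session yet: show login page
        return False
    row = conn.execute(
        f"SELECT {subsystem_field} FROM Tbl_Password WHERE Username = ?",
        (session["username"],)).fetchone()
    return row is not None and row[0] not in (None, 0)

With this flow, each subsystem page only calls can_enter(); the username and password are typed once, as described above.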
VI. Conclusion
Authentication is an important part of a security system. It is optimal when it also considers the environment and the needs of both users and administrators. The Global Password is a concept that accommodates the user's need for convenience in accessing information systems, especially in an environment composed of multiple systems. From the administrator's side, managing the authentication data for each system also becomes easier. In addition, the Global Password still maintains the confidentiality of the data in the systems, serving the original goal: the security of the information system.
References
[1] Chandra Adhi W. (2009). Identification and Authentication: Technology and Implementation Issues. Ringkasan Makalah. Accessed 4 May 2009 from: http://bebas.vlsm.org/v06/Kuliah/Seminar-MIS/2008/254/254-08-Identification_and_Authentication.pdf
[2]Jogiyanto Hartono (2000). Pengenalan Komputer: Dasar
Ilmu Komputer, Pemrograman, Sistem Informasi dan
Intelegensi Buatan. Edisi ketiga. Yogyakarta: Andi.
[3] Missa Lamsani (2009). Sistem Operasi Komputer: Keamanan Sistem. Accessed 5 May 2009 from: http://missa.staff.gunadarma.ac.id/Downloads/files/6758/BAB8.pdf
[4]Untung Rahardja (2007). Pengembangan Students Information Services di Lingkungan Perguruan Tinggi
Raharja. Laporan Pertanggung Jawaban. Tangerang:
Perguruan Tinggi Raharja.
[5]Untung Rahardja, Henderi, dan Djoko Soetarno (2007).
SIS: Otomatisasi Pelayanan Akademik Kepada
Mahasiswa Studi Kasus di Perguruan Tinggi Raharja.
Jurnal Cyber Raharja. Edisi 7 Th IV/April. Tangerang:
Perguruan Tinggi Raharja.
Paper
Saturday, August 8, 2009
13:55 - 14:15
Room L-210
MINING QUERIES FASTER USING MINIMUM
DESCRIPTION LENGTH PRINCIPLE
Diyah Puspitaningrum, Henderi
Department of Computing Science Universiteit Utrecht
[email protected]
Information Technology Department – Faculty of Computer Study STMIK Raharja
Jl. Jenderal Sudirman No. 40, Tangerang 15117 Indonesia
email: [email protected]
Abstract
Ever since the seminal paper by Imielinski and Mannila [8], inductive databases have been a constant theme in the data mining literature. Operationally, an inductive database is a database in which models and patterns are first class citizens. Having models and patterns in the database raises many interesting problems. One, which has received little attention so far, is the following: do the models and patterns that are stored help in computing new models and patterns? For example, suppose we have induced a classifier C from the database and we compute a query Q. Does knowing C speed up the induction of a new classifier on the result of Q? In this paper we answer this problem positively for one specific class of models, viz., the code tables induced by our Krimp algorithm. The Krimp algorithm was built using the minimum description length (MDL) principle. If we have the code tables for all tables in the database, then we can approximate the code table induced by Krimp on the result of a query, using only these global code tables as candidates; that is, we do not have to mine for frequent item sets on the query result. Since Krimp is linear in the number of candidates and Krimp reduces the set of frequent item sets by many orders of magnitude, this means that we can speed up the induction of code tables on query results by many orders of magnitude.
Keywords: Inductive Database, Frequent Item Sets, MDL
1. Introduction
Ever since the start of research in data mining, it has been clear that data mining, and more generally the KDD process, should be merged into DBMSs. Since the seminal paper by Imielinski and Mannila [8], so-called inductive databases have been a constant theme in data mining research. Perhaps surprisingly, there is no formal definition of what an inductive database actually is. In fact, de Raedt in [12] states that it might be too early for such a definition. There is, however, consensus on some aspects of inductive databases. An important one is that models and patterns should be first class citizens in such a database. That is, e.g., one should be able to query for patterns.
Having models and patterns in the database raises interesting new problems. One, which has received little attention so far, is the following: do the models and patterns that are stored help in computing new models and patterns? For example, suppose we have induced a classifier C from the database and we compute a query Q. Does knowing C speed up the induction of a new classifier on the result of Q? In fact, this general question is not only interesting in the context of inductive databases, it is of prime interest in everyday data mining practice.
In the data mining literature, the usual assumption is that
we are given some database that has to be mined.
In practice, however, this assumption is usually not met.
Rather, the construction of the mining database is often
one of the hardest parts of the KDD process. The data
often resides in a data warehouse or in multiple databases,
and the mining database is constructed from these underlying databases.
From most perspectives, how the mining database was constructed is not very interesting: for the analysis itself, the underlying databases are of no importance whatsoever. It makes a difference, however, if the underlying databases have already been modelled. In that case one would hope that knowing such models would help in modelling the specially constructed mining database. For example, if we have constructed a classifier on a database of customers, one would hope that this would help in developing a classifier for the female customers only.
In this paper we study this problem for one specific class of models, viz., the code tables induced by our Krimp algorithm [13]. Given all frequent item sets on a table, Krimp selects a small subset of these frequent item sets. The reason why we focus on Krimp is that together the selected item sets describe the underlying data distribution of the complete database very well; see, e.g., [14, 16]. More in particular, we show that if we know the code tables for all tables in the database, then we can approximate the code table induced by Krimp on the result of a query, using only the item sets in these global code tables as candidates.
Since Krimp is linear in the number of candidates and Krimp
reduces the set of frequent item sets by many orders of
magnitude, this means that we can now speed up the induction of code tables on query results by many orders of
magnitude.
This speed-up results in a slightly less optimal code table, but it approximates the optimal solution within a few percent. We will formalise "approximation" in terms of MDL [7]. Hence, the data miner has a choice: either a quick, good approximation, or the optimal result taking longer to compute.
The road map of this paper is as follows. In the next Section we formally state the general problem. Next, in Section
3 we give a brief introduction to our Krimp algorithm. In
Section 4, we restate the general problem in terms of Krimp.
Then in Section 5 the experimental set-up is discussed.
Section 6 gives the experimental results, while in Section 7
these results are discussed.
Section 8 gives an overview of related research. The conclusions and directions for further research are given in
Section 9.
2. Problem Statement
This section starts with some preliminaries and assumptions. Then we introduce the problem informally. To formalise it we use MDL, which is briefly discussed.
2.1 Preliminaries and Assumptions. We assume that our data resides in relational databases. In fact, note that the union of two relational databases is, again, a relational database. Hence, we assume, without loss of generality, that our data resides in one relational database DB. So, the mining database is constructed from DB using queries. Given the compositionality of relational query languages, we may assume, again without loss of generality, that the analysis database is constructed using one query Q. That is, the analysis database is Q(DB), for some relational algebra expression Q. Since DB is fixed, we will often simply write Q for Q(DB); that is, we will use Q to denote both the query and its result.
2.2 The Problem Informally. In the introduction we stated that knowing a model on DB should help in inducing a model on Q. To make this more precise, let A be our data mining algorithm. A can be any algorithm; it may, e.g., compute a decision tree, all frequent item sets, or a neural network. Let M_DB denote the model induced by A from DB, i.e., M_DB = A(DB). Similarly, let M_Q = A(Q). We want to transform A into an algorithm A* that takes at least two inputs, i.e., both Q and M_DB, such that:
1. A* gives a reasonable approximation of A when applied to Q, i.e., A*(Q, M_DB) ≈ M_Q;
2. A*(Q, M_DB) is simpler to compute than M_Q.
The second criterion is easy to formalise: the runtime of A* should be shorter than that of A. The first one is harder. What do we mean that one model is an approximation of another? Moreover, what does it mean that it is a reasonable approximation? There are many ways to formalise this. For example, for predictive models, one could use the difference between predictions as a way to measure how well one model approximates the other, while for clustering one could use the number of pairs of points that end up in the same cluster.
We use the minimum description length (MDL)
principle [7] to formalise the notion of approximation.
MDL is quickly becoming a popular formalism in data
mining research, see, e.g., [5] for an overview of other
applications of MDL.
2.3 Minimum Description Length. MDL, like its close cousin MML (minimum message length) [17], is a practical version of Kolmogorov complexity [11]. All three embrace the slogan Induction by Compression. For MDL, this principle can be roughly described as follows.
Given a set of models¹ H, the best model H ∈ H is the one that minimizes

L(H) + L(D|H)

in which L(H) is the length, in bits, of the description of H, and L(D|H) is the length, in bits, of the description of the data when encoded with H. One can paraphrase this by: the smaller L(H) + L(D|H), the better H models D.
What we are interested in is comparing two algorithms on the same data set, viz., on Q(DB). Slightly abusing notation, we will write L(A(Q)) for L(A(Q)) + L(Q(DB)|A(Q)); similarly, we will write L(A*(Q, M_DB)). Then, we are interested in comparing L(A*(Q, M_DB)) to L(A(Q)). The closer the former is to the latter, the better the approximation is.
¹ MDL-theorists tend to talk about hypotheses in this context, hence the H; see [7] for the details.
Just taking the difference of the two, however, can be quite misleading. Take, e.g., two databases db1 and db2 sampled from the same underlying distribution, such that db1 is far bigger than db2. Moreover, fix a model H. Then necessarily L(db1|H) is bigger than L(db2|H). In other words, big absolute numbers do not necessarily mean very much. We have to normalise the difference to get a feeling for how good the approximation is. Therefore we define the asymmetric dissimilarity measure (ADM) as follows.
Definition 2.1. Let H1 and H2 be two models for a dataset D. The asymmetric dissimilarity measure ADM(H1, H2) is defined by:

ADM(H1, H2) = |L(H1) − L(H2)| / L(H2)

Note that this dissimilarity measure is related to the Normalised Compression Distance. The reason why we use this asymmetric version is that we have a "gold standard": we want to know how far our approximate result A*(Q, M_DB) deviates from the optimal result A(Q).
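As a small illustration (a hypothetical helper, not part of the paper), the ADM of Definition 2.1 is a one-line computation on the two total compressed sizes:

def adm(l_h1: float, l_h2: float) -> float:
    """Asymmetric dissimilarity measure of Definition 2.1.
    l_h1: total compressed size L(H1) of the approximate model,
    l_h2: total compressed size L(H2) of the reference ('gold standard') model."""
    return abs(l_h1 - l_h2) / l_h2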
2.4 The Problem. Before we can formalise our problem using the notation introduced above, we have one more question to answer: what is a reasonable approximation? For a large part the answer to this question is, of course, dependent on the application in mind. An ADM in the order of 10% might be perfectly alright in one application, while it is unacceptable in another. Hence, rather than giving an absolute number, we make it into a parameter ε.
Problem: For a given data mining algorithm A, devise an algorithm A*, such that for all relational algebra expressions Q on a database DB:
1. ADM(A*(Q, M_DB), A(Q)) ≤ ε
2. Computing A*(Q, M_DB) is faster than computing A(Q).
2.5 A Concrete Instance: Krimp. The ultimate solution to the problem as stated above would be an algorithm that transforms any data mining algorithm A into an algorithm A* with the requested properties. This is a rather ambitious, ill-defined (what is the class of all data mining algorithms?), and, probably, not attainable goal. Hence, in this paper we take a more modest approach: we transform one algorithm only, our Krimp algorithm.
The reason for using Krimp as our problem instance is threefold. Firstly, from earlier research we know that Krimp characterises the underlying data distribution rather well; see, e.g., [14, 16]. Secondly, from earlier research on Krimp in a multi-relational setting, we already know that Krimp is easily transformed for joins [10]. Finally, Krimp is MDL based, so notions such as L(A(Q)) are already defined for Krimp.
3. Introducing Krimp. For the convenience of the reader we provide a brief introduction to Krimp in this section; it was originally introduced in [13] (although not by that name) and the reader is referred to that paper for more details. Since Krimp selects a small set of representative item sets from the set of all frequent item sets, we first recall the basic notions of frequent item set mining [1].
3.1 Preliminaries. Let I = {I1, ..., In} be a set of binary (0/1 valued) attributes. That is, the domain Di of item Ii is {0, 1}. A transaction (or tuple) over I is an element of ∏_{i ∈ {1,...,n}} Di. A database DB over I is a bag of tuples over I. This bag is indexed in the sense that we can talk about the i-th transaction.
An item set J is, as usual, a subset of I, i.e., J ⊆ I. The item set J occurs in a transaction t ∈ DB if every item of J takes the value 1 in t. The support of item set J in database DB is the number of transactions in DB in which J occurs. That is, suppDB(J) = |{t ∈ DB | J occurs in t}|. An item set is called frequent if its support is larger than some user-defined threshold called the minimal support or min-sup. Given the A Priori property,

for all I, J ∈ P(I): I ⊆ J → suppDB(J) ≤ suppDB(I),

frequent item sets can be mined efficiently level-wise; see [1] for more details.
Note that while we restrict ourselves to binary databases in the description of our problem and algorithms, there is a trivial generalisation to categorical databases. In the experiments, we use such categorical databases.
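To make these definitions concrete, here is a deliberately naive Python sketch of support counting and level-wise enumeration. It does not use the A Priori pruning that real miners such as [1] rely on; the function names and the toy database are illustrative only.

from itertools import combinations

def support(db, item_set):
    """Number of transactions (sets of items) in which item_set occurs."""
    target = set(item_set)
    return sum(1 for t in db if target <= t)

def frequent_item_sets(db, min_sup, max_len=3):
    """Naive level-wise enumeration of frequent item sets (illustration only)."""
    items = sorted(set().union(*db))
    frequent = []
    for k in range(1, max_len + 1):
        for cand in combinations(items, k):
            if support(db, cand) >= min_sup:
                frequent.append(frozenset(cand))
    return frequent

# Tiny example database: each transaction is the set of items it contains.
db = [{"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a", "c"}]
print(frequent_item_sets(db, min_sup=2))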
3.2 Krimp. The key idea of the Krimp algorithm is the code table. A code table is a two-column table that has item sets on the left-hand side and a code for each item set on its right-hand side. The item sets in the code table are ordered descending on 1) item set length, 2) support size, and 3) lexicographically. The actual codes on the right-hand side are of no importance, but their lengths are. To explain how these lengths are computed, the coding algorithm needs to be introduced.
A transaction t is encoded by Krimp by searching for the first item set c in the code table for which c ⊆ t. The code for c becomes part of the encoding of t. If t \ c ≠ ∅, the algorithm continues to encode t \ c. Since it is insisted that each code table contains at least all singleton item sets, this algorithm gives a unique encoding to each (possible) transaction over I.
The set of item sets used to encode a transaction is
called its cover. Note that the coding algorithm
implies that a cover consists of non-overlapping item
sets.
The length of the code of an item set in a code table CT depends on the database we want to compress: the more often a code is used, the shorter it should be. To compute this code length, we encode each transaction in the database DB. The frequency of an item set c ∈ CT, denoted by freq(c), is the number of transactions t ∈ DB which have c in their cover. That is, freq(c) = |{t ∈ DB | c ∈ cover(t)}|. The relative frequency of c ∈ CT is the probability that c is used to encode an arbitrary t ∈ DB, i.e.,

P(c) = freq(c) / Σ_{d ∈ CT} freq(d)

For optimal compression of DB, the higher P(c), the shorter its code should be. Given that we also need a prefix code for unambiguous decoding, we use the well-known optimal Shannon code [4]:

l_CT(c) = −log(P(c | DB)) = −log( freq(c) / Σ_{d ∈ CT} freq(d) )

The length of the encoding of a transaction is now simply the sum of the code lengths of the item sets in its cover. Therefore the encoded size of a transaction t ∈ DB compressed using a specified code table CT is calculated as follows:

L_CT(t) = Σ_{c ∈ cover(t, CT)} l_CT(c)

The size of the encoded database is the sum of the sizes of the encoded transactions, but can also be computed from the frequencies of each of the elements in the code table:

L_CT(DB) = Σ_{t ∈ DB} L_CT(t) = − Σ_{c ∈ CT} freq(c) · log( freq(c) / Σ_{d ∈ CT} freq(d) )

To find the optimal code table using MDL, we need to take into account both the compressed database size, as described above, as well as the size of the code table.
Figure 1: Krimp in action
For the size of the code table, we only count those item sets that have a non-zero frequency. The size of the right-hand side column is obvious; it is simply the sum of all the different code lengths. For the size of the left-hand side column, note that the simplest valid code table consists only of the singleton item sets. This is the standard encoding (st), of which we use the codes to compute the size of the item sets in the left-hand side column. Hence, the size of code table CT is given by:

L(CT) = Σ_{c ∈ CT: freq(c) ≠ 0} ( l_st(c) + l_CT(c) )

In [13] we defined the optimal set of (frequent) item sets as that one whose associated code table minimises the total compressed size:

L(CT) + L_CT(DB)
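As a small sketch of these formulas (assuming the covers of all transactions have already been computed; the function names are illustrative and this is not the authors' implementation), the code lengths and the encoded database size follow directly from the cover frequencies:

import math
from collections import Counter

def code_lengths(covers):
    """Shannon code length l_CT(c) for every code table element, computed from
    the covers of all transactions (covers: list of lists of item sets)."""
    freq = Counter(c for cover in covers for c in cover)
    total = sum(freq.values())
    return {c: -math.log2(f / total) for c, f in freq.items()}

def encoded_db_size(covers):
    """L_CT(DB): sum over transactions of the code lengths in their cover."""
    lengths = code_lengths(covers)
    return sum(lengths[c] for cover in covers for c in cover)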
Krimp starts with a valid code table (only the collection of singletons) and a sorted list of candidates (frequent item sets). These candidates are assumed to be sorted descending on 1) support size, 2) item set length, and 3) lexicographically. Each candidate item set is considered by inserting it at the right position in CT and calculating the new total compressed size. A candidate is only kept in the code table iff the resulting total size is smaller than it was before adding the candidate. If it is kept, all other elements of CT are reconsidered to see if they still positively contribute to compression. The whole process is illustrated in Figure 1. For more details see [13].
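A high-level sketch of this candidate loop is given below. It is a simplification under stated assumptions: the insertion order inside the code table and the post-acceptance pruning of [13] are hidden inside the total_size callable, which is assumed to compute L(CT) + L_CT(DB) for a given code table.

def krimp_select(candidates, code_table, total_size):
    """Simplified sketch of Krimp's candidate loop.

    candidates:  frequent item sets, sorted descending on support, length, lex.
    code_table:  initial code table (the singleton item sets), as a list.
    total_size:  callable returning L(CT) + L_CT(DB) for a candidate code table.
    """
    best = total_size(code_table)
    for cand in candidates:
        trial = code_table + [cand]      # tentatively add the candidate
        size = total_size(trial)         # recompute the total compressed size
        if size < best:                  # keep the candidate only if it helps
            code_table, best = trial, size
    return code_table, best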
4. The Hypothesis for Krimp. If we assume a fixed minimum support threshold for a database, Krimp has only one essential parameter: the database. For, given the database and the (fixed) minimum support threshold, the candidate list is also specified. Hence, we will simply write CT_DB and Krimp(DB) to denote the code table induced by Krimp from DB. Similarly, CT_Q and Krimp(Q) denote the code table induced by Krimp from the result of applying query Q to DB.
Given that Krimp results in a code table, there is only one sensible way in which Krimp(DB) can be re-used to compute Krimp(Q): provide Krimp only with the item sets in CT_DB as candidates. While we change nothing in the code, we will use the notation Krimp* to indicate that Krimp got only code table elements as candidates. So, e.g., Krimp*(Q) is the code table that Krimp induces from Q(DB) using the item sets in CT_DB only.
Given our general problem statement, we now have to prove that Krimp* satisfies our two requirements for a transformed algorithm. That is, firstly, we have to show that Krimp*(Q) is a good approximation of Krimp(Q); that is, we have to show that

ADM(Krimp*(Q), Krimp(Q)) = |L(Krimp*(Q)) − L(Krimp(Q))| / L(Krimp(Q)) ≤ ε

for some (small) ε. Secondly, we have to show that it is faster to compute Krimp*(Q) than it is to compute Krimp(Q). Given that Krimp is a heuristic algorithm, a formal proof of these two requirements is not possible. Rather, we will report on extensive tests of these two requirements.
5. The Experiments. In this section we describe our experimental set-up. First we briefly describe the data sets we used. Next we discuss the queries used for testing. Finally we describe how the tests were performed.
5.1 The Data Sets. To test our hypothesis that Krimp* is a good and fast approximation of Krimp, we have performed extensive tests on well-known UCI [3] data sets, listed in Table 1, together with their respective numbers of tuples and attributes. These data sets were chosen because they are well suited for Krimp. Some of the other data sets in the UCI repository are simply too small for Krimp to perform well; MDL needs a reasonable amount of data to be able to function. Some other data sets are very dense. While Krimp performs well on these data sets, choosing them would have made our extensive testing prohibitively time-consuming.

Dataset       #rows   #attributes
Heart           303            52
Iris            150            19
Led7           3200            24
PageBlocks     5473            46
Pima            786            38
TicTacToe       958            29
Wine            178            68
Table 1: UCI data sets used in the experiments.

Note that all the chosen data sets are single table data sets. This means, of course, that queries involving joins cannot be tested in the experiments. The reason for this is simple: we have already tested the quality of Krimp* for joins in earlier work [10]. The algorithm introduced in that paper, called R-Krimp, is essentially Krimp*; we will return to this topic in the discussion section.
5.2 The Queries. To test our hypothesis, we need to consider randomly generated queries. At first sight this appears a daunting task. Firstly, because the set of all possible queries is very large: how do we determine a representative set of queries? Secondly, many of the generated queries will have no or very few results. If a query has no results, the hypothesis is vacuously true; if the result is very small, MDL (and Krimp) does not perform very well. Generating a representative set of queries with a non-trivial result set seems an almost impossible task.
Fortunately, relational query languages have a useful property: they are compositional. That is, one can combine queries to form more complex queries. In fact, all queries use small, simple queries as building blocks. For the relational algebra, the way to define and combine queries is through the well-known operators: projection (π), selection (σ), join (⋈), union (∪), intersection (∩), and setminus (\). As an aside, note that in principle the Cartesian product (×) should be in the list of operators rather than the join. Cartesian products are, however, rare in practical queries since their results are often humongous and their interpretation is at best difficult. The join, in contrast, suffers less from the first disadvantage and not from the second. Hence our omission of the Cartesian product and addition of the join.
So, rather than attempting to generate queries of arbitrary complexity, we generate simple queries only. That is, queries involving only one of the operators π, σ, ∪, ∩, and \. How the insight offered by these experiments, coupled with the compositionality of relational algebra queries, offers insight into our hypothesis for more general queries is discussed in the discussion section.
5.3 The Experiments. The experiments performed for each of the operators on each of the data sets were generated as follows.
Projection: The projection queries were generated by randomly choosing a set X of n attributes, for n ∈ {3, 5}. The generated query is then π_X. For this case, the code table elements generated on the complete data set were also projected on X. The rationale for using small sets of attributes rather than larger ones is that these projections are the most disruptive. That is, the larger the set of attributes projected on, the more the structure of the table remains intact. Given that Krimp induces this structure, projections on small sets of attributes are the best test of our hypothesis.
Selections: The random selection queries were again generated by randomly choosing a set X of n attributes, with n ∈ {1, 2, 3, 4}. Next, for each random attribute Ai a random value vi in its domain Di was chosen. Finally, for each Ai in X a random operator θi ∈ {=, ≠} was chosen. The generated query is thus σ(∧_{Ai ∈ X} Ai θi vi). The rationale for choosing small sets of attributes in this case is that the bigger the number of attributes selected on, the smaller the result of the query becomes. Too small result sets will make Krimp perform badly.
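For illustration, a random selection query of the kind just described can be generated as a simple conjunctive row filter. This is a hypothetical helper, not the authors' experimental code; the attribute domains and row representation are assumptions.

import random

def random_selection_query(domains, n):
    """Build a random conjunctive selection over n attributes.

    domains: dict mapping attribute name -> list of possible values.
    n:       number of attributes to select on (1-4 in the experiments).
    Returns a row predicate and the list of (attribute, operator, value) terms.
    """
    attrs = random.sample(list(domains), n)
    terms = [(a, random.choice(["==", "!="]), random.choice(domains[a])) for a in attrs]

    def predicate(row):
        return all((row[a] == v) if op == "==" else (row[a] != v) for a, op, v in terms)

    return predicate, terms

# Example: rows as dicts; keep only the rows satisfying the random conjunction.
domains = {"color": ["red", "green"], "size": [1, 2, 3], "label": [0, 1]}
pred, terms = random_selection_query(domains, n=2)
rows = [{"color": "red", "size": 2, "label": 1}, {"color": "green", "size": 3, "label": 0}]
print(terms, [r for r in rows if pred(r)])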
Union: For the union queries, we randomly split the data set D into two parts D1 and D2, such that D = D1 ∪ D2; note that in all experiments D1 and D2 have roughly the same size. The random query generated is, of course, D1 ∪ D2. Krimp yields a code table on each of them, say CT1 and CT2. To test the hypothesis, we give Krimp* the union of the item sets in CT1 and CT2. In practice, tables that are combined using a union may or may not be disjoint. To test what happens with various levels of overlap between D1 and D2, we tested at overlap levels from {0%, 33.3%, 50%}.
Intersection: For the intersection queries, we again randomly split the data set D into two overlapping parts D1 and D2, again such that D = D1 ∪ D2 and again such that in all experiments D1 and D2 have roughly the same size. The random query generated is, of course, D1 ∩ D2. Again Krimp yields a code table on each of them, say CT1 and CT2. To test the hypothesis, we give Krimp* the union of the item sets in CT1 and CT2. The union of the two is given because either one might have good codes for the intersection; the small rise in the number of candidates is offset by this potential gain. In this case the overlap levels tested were from {33.3%, 50%, 66.6%}.
Setminus: Selection queries can, of course, be seen as a kind of setminus queries. They are special, though, in the sense that they remove a well-described part of the database. To test less well structured setminus operations, we simply generated random subsets of the data set. The sizes of these subsets are chosen from {33.3%, 50%, 66.6%}.
Each of these experiments is performed ten times on each of the data sets.
6. The Results. In this section we give an overview of the results of the experiments described in the previous section. Each relational algebra operator is briefly discussed in its own subsection.
6.1 Projection. The projection results are given in Table 2. The ADM scores listed are the average ADM score over the 10 projection experiments performed on that data set ± the standard deviation of those 10 ADM scores; similarly for the Size scores. Note that Size stands for the reduction in the number of candidates. That is, a Size score of 0.2 means that Krimp* got only 20% of the number of candidates that Krimp got for the same query result.
First of all, note that most of the ADM scores are in the order of a few percent, whereas the Size scores are generally below 0.2. The notable exceptions are the scores for the Iris data set and, in one case, for Led7. Note, however, that for these three averages, the standard deviation is also very high. As one would expect, this is caused by a few outliers. That is, if one looks at the individual scores, most are fine, but one or two are very high. Random experiments do have their disadvantages. Moreover, one should note that these figures are based on randomly(!) selected sets of attributes. Such random projections are about as disruptive of the data distribution as possible. In other words, it is impressive that Krimp* still manages to do so well.
The trend is that the bigger the result set is, the smaller both numbers are. One can see that, e.g., by comparing the results on the same data set for projections on 3 and 5 attributes respectively. Clearly, these differences are in general not significant and the trend does not always hold. However, it is a picture we will also see in the other experiments.
6.2 Selection. The selection results are given in Table 3. Again the scores are the averages ± the standard deviations over 10 runs. The resulting ADM scores are now almost all in the order of just a few percent. This is all the more impressive if one considers the Size scores: these approximations were reached while Krimp* got mostly less than 2% of the number of candidates that Krimp got; in fact, quite often it got less than 1% of the number of candidates.
The fact that Krimp* performs so well for selections means that while Krimp models the global underlying data distribution, it still manages to capture the "local" structure very well. That is, if there is a pattern that is important for a part of the database, it will be present in the code table. The fact that the results improve with the number of attributes in the selection, though mostly not significantly, is slightly puzzling. If one looks at all the experiments in detail, the general picture is that bigger query results give better results. In this table, that global picture seems reversed. We do not have a good explanation for this observation.
6.3 Union. The union results are given in Table 4. The general picture is very much as with the previous experiments: the ADM score is a few percent, while the reduction in the number of candidates is often impressive. The notable exception is the Iris database. The explanation is that this data set has some very local structure that (because of minsup settings) does not get picked up in the two components; it only becomes apparent in the union. Note that this problem is exaggerated by the fact that we split the data sets at random. The same explanation very much holds for the first Led7 experiment.
We already alluded a few times to the general trend that the bigger the query results, the better the results. This trend seems very apparent in this table, for the higher the overlap between the two data sets, the bigger the two sets are, since their union is the full data set. However, one should note that this is a bit misleading, for the bigger the overlap, the more the two code tables "know" about the "other" data distribution.
6.4 Intersection. The intersection results are given in Table 5. Like for the union, the reduction in the number of candidates is again huge in general. The ADM scores are less good than for the union, but still mostly below 0.1. This time it is the Heart and Led7 databases that are the outliers. Heart shows the biggest reduction in the number of candidates, but at the detriment of the ADM score. The explanation for these relatively bad scores lies again in local structures that have enough support in one or both of the components, but not in the intersection. That is, Krimp does not see the good candidates for the tuples that adhere to such local structures. This is witnessed by the fact that some tuples are compressed better by the original code tables than by the code table Krimp generates for the intersection. Again, this problem is, in part, caused by the fact that we split our data sets at random. The ADM scores for the other data sets are more in line with the numbers we have seen before; for these, the ADM score is below 0.2 or (much) lower.
6.5 Setminus. The setminus results are given in Table 6. Both the ADM scores and the Size scores are very good for all of these experiments. This does make sense: each of these experiments is computed on a random subset of the data. If Krimp is any good, the code tables generated from the complete data set should compress a random subset well.
It may seem counterintuitive that the ADM score grows when the size of the random subset grows. In fact, it is not. The bigger the random subset, the closer its underlying distribution gets to the "true" underlying distribution, that is, to the distribution that underlies the complete data set. Since Krimp has seen the whole data set, it will pick up this distribution better than Krimp*.
7. Discussion
First we briefly discuss the results of the experiments. Next we discuss the join. Finally we discuss what these experiments mean for more general queries.
7.1 Interpreting the Results. The Size scores reported in the previous section are easy to interpret: they simply indicate how much smaller the candidate set becomes. As explained before, the runtime complexity of Krimp is linear in the number of candidates. So, since the Size score is never above 0.4 and often considerably lower, we have established our first goal for Krimp*: it is faster, and often far faster, than Krimp. In fact, one should also note that for Krimp* we do not have to run a frequent item set miner. In other words, in practice, using Krimp* is even faster than suggested by the Size scores.
But how about the other goal: how good is the approximation? That is, how should one interpret ADM scores? Except for some outliers, ADM scores are below 0.2. That is, a full-fledged Krimp run compresses the data set at most 20% better than Krimp*. Is that good?
In a previous paper [15], we took two random samples from data sets, say D1 and D2. Code tables CT1 and CT2 were induced from D1 and D2 respectively. Next we tested how well CTi compressed Dj. For the four data sets also used in this paper, Iris, Led7, Pima and PageBlocks, the "other" code table compressed 16% to 18% worse than the "own" code table; the figures for other data sets are in the same ballpark. In other words, an ADM score on these data sets below 0.2 is on the level of "natural variations" of the data distribution. Hence, given that the average ADM scores are often much lower, we conclude that the approximation by Krimp* is good.
In other words, the experiments verify our hypothesis: Krimp* gives a fast and good approximation of Krimp, at least for simple queries.
7.2 The Join. In the experiments, we did not test the join operator. We did, however, already test the join in a previous paper [10]. The R-Krimp algorithm introduced in that paper is Krimp* for joins only. Given two tables, T1 and T2, the code table is induced on both, resulting in CT1 and CT2. To compute the code table on T1 ⋈ T2, R-Krimp only uses the item sets in CT1 and CT2. Rather than using the union of these two sets, for the join one uses pairs (p1, p2), with p1 ∈ CT1 and p2 ∈ CT2.
While the ADM scores are not reported in that paper, they can be estimated from the numbers reported there. For various joins on, e.g., the well-known financial data set, the ADM can be estimated to be between 0.01 and 0.05. The Size ranges from 0.3 to 0.001; see [10] for details. In other words, Krimp* also achieves its goals for the join operator.
7.3 Complex Queries. For simple queries we know that Krimp* delivers a fast and good approximation. How about more complex queries? As noted before, these complex queries are built from simpler ones using the relational algebra operators. Hence, we can use error propagation to estimate the error of such complex queries.
The basic problem is thus: how do the approximation errors propagate through the operators? While we have no definite theory, at worst the errors will have to be summed. That is, the error of the join of two selections will be the sum of the error of the join plus the errors of the selections. Given that complex queries will usually only be posed on large databases, on which Krimp performs well, the initial errors will be small. Hence, we expect that the error on complex queries will still be reasonable; this is, however, subject to further research.
8. Related Work. While there are, as far as the authors know, no other papers that study the same problem, the topic of this paper falls in the broad class of data mining with background knowledge. For, the model on the database, M_DB, is used as background knowledge in computing M_Q. While a survey of this area is beyond the scope of this paper, we point out some papers that are related to one of the two aspects we are interested in, viz., speed-up and approximation.
A popular area of research in using background knowledge is that of constraints. Rather than trying to speed up the mining, the goal is often to produce models that adhere to the background knowledge. Examples are the use of constraints in frequent pattern mining, e.g. [2], and monotonicity constraints [6]. Note, however, that for frequent pattern mining the computation can be sped up considerably if the constraints can be pushed into the mining algorithm [2]. So, speed-up is certainly a concern in this area. However, as far as we know, approximation plays no role: the goal is still to find all patterns that satisfy the constraints.
Another use of background knowledge is to find unexpected patterns. In [9], e.g., Bayesian networks of the data are used to estimate how surprising a frequent pattern is. In other words, the (automatically induced) background knowledge is used to filter the output; speed-up is of no concern in this approach. Approximation clearly is, albeit in the opposite direction of ours: the more a pattern deviates from the global model, the more interesting it becomes, whereas we would like that all patterns in the query result are covered by our approximate answer.

9. Conclusions
In this paper we introduce a new problem: given that we have a model induced from a database DB, does that help us in inducing a model on the result of a query Q on DB? For a given mining algorithm A, we formalise this problem as the construction of an algorithm A* such that:
1. A* gives a reasonable approximation of A when applied to Q, i.e., A*(Q, M_DB) ≈ M_Q;
2. A*(Q, M_DB) is faster to compute than M_Q.
We formalise the approximation in the first point using MDL.
We give a solution for this problem for a particular algorithm, viz., Krimp. The reason for using Krimp as our problem instance is threefold. Firstly, from earlier research we know that Krimp characterises the underlying data distribution rather well; see, e.g., [14, 16]. Secondly, from earlier research on Krimp in a multi-relational setting, we already know that Krimp is easily transformed for joins [10]. Finally, Krimp is MDL based, which makes it an easy fit for the problem as formalised.
The resulting algorithm is Krimp*, which is actually the same as Krimp, but gets a restricted input. Experiments on 7 different data sets and many different simple queries show that Krimp* yields fast and good approximations of Krimp. Experiments on more complex queries are currently underway.

References
[1] Rakesh Agrawal, Heikki Mannila, Ramakrishnan Srikant, Hannu Toivonen, and A. Inkeri Verkamo. Fast discovery of association rules. In Advances in Knowledge Discovery and Data Mining, pages 307-328. AAAI, 1996.
[2] Jean-François Boulicaut and Artur Bykowski. Frequent closures as a concise representation for binary data mining. In Knowledge Discovery and Data Mining, Current Issues and New Applications, 4th Pacific-Asia Conference, PAKDD 2000, pages 62-73, 2000.
[3] Frans Coenen. The LUCS-KDD discretised/normalised ARM and CARM data library: http://www.csc.liv.ac.uk/~frans/KDD/Software/LUCS KDD DN/. 2003.
[4] T.M. Cover and J.A. Thomas. Elements of Information Theory, 2nd ed. John Wiley and Sons, 2006.
[5] C. Faloutsos and V. Megalooikonomou. On data mining, compression and Kolmogorov complexity. In Data Mining and Knowledge Discovery, volume 15, pages 3-20. Springer Verlag, 2007.
[6] A.J. Feelders and Linda C. van der Gaag. Learning Bayesian network parameters under order constraints. Int. J. Approx. Reasoning, 42(1-2):37-53, 2006.
[7] Peter D. Grünwald. Minimum description length tutorial. In P.D. Grünwald and I.J. Myung, editors, Advances in Minimum Description Length. MIT Press, 2005.
[8] Tomasz Imielinski and Heikki Mannila. A database perspective on knowledge discovery. Communications of the ACM, 39(11):58-64, 1996.
[9] Szymon Jaroszewicz and Dan A. Simovici. Interestingness of frequent itemsets using Bayesian networks as background knowledge. In Proceedings KDD, pages 178-186, 2004.
[10] Arne Koopman and Arno Siebes. Discovering relational item sets efficiently. In Proceedings SDM 2008, pages 585-592, 2008.
[11] M. Li and P. Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, 1993.
[12] Luc De Raedt. A perspective on inductive databases. SIGKDD Explorations, 4(2):69-77, 2000.
[13] A. Siebes, J. Vreeken, and M. van Leeuwen. Item sets that compress. In Proceedings of the SIAM Conference on Data Mining, pages 393-404, 2006.
[14] Matthijs van Leeuwen, Jilles Vreeken, and Arno Siebes. Compression picks item sets that matter. In Proceedings PKDD 2006, pages 585-592, 2006.
[15] J. Vreeken, M. van Leeuwen, and A. Siebes. Preserving privacy through data generation. In Proceedings of the IEEE International Conference on Data Mining, pages 685-690, 2007.
[16] Jilles Vreeken, Matthijs van Leeuwen, and Arno Siebes. Preserving privacy through data generation. In Proceedings ICDM, 2007.
[17] C.S. Wallace. Statistical and Inductive Inference by Minimum Message Length. Springer, 2005.
Dataset       3 attr                         5 attr
              ADM           Size             ADM           Size
Heart         0.06 ± 0.09   0.2 ± 0.13       0.03 ± 0.03   0.2 ± 0.13
Iris          0.24 ± 0.28   0.17 ± 0.12      0.21 ± 0.18   0.14 ± 0.12
Led7          0.05 ± 0.1    0.31 ± 0.23      0.38 ± 0.34   0.25 ± 0.19
PageBlocks    0.04 ± 0.06   0.23 ± 0.21      0.08 ± 0.06   0.2 ± 0.17
Pima          0.04 ± 0.05   0.14 ± 0.13      0.08 ± 0.07   0.23 ± 0.17
TicTacToe     0.12 ± 0.09   0.11 ± 0.17      0.09 ± 0.1    0.17 ± 0.11
Wine          0.16 ± 0.2    0.10 ± 0.09      0.1 ± 0.11    0.1 ± 0.09
Table 2: The results of the projection experiments. The ADM and Size scores are averages ± standard deviation.
Dataset       1 attr                          2 attr                           3 attr                           4 attr
              ADM           Size              ADM           Size               ADM           Size               ADM           Size
Heart         0.04 ± 0.03   0.04 ± 0.11       0.04 ± 0.03   0.02 ± 0.002       0.04 ± 0.03   0.003 ± 0.003      0.02 ± 0.02   0.001 ± 0.0004
Iris          0.04 ± 0.04   0.09 ± 0.01       0.05 ± 0.05   0.1 ± 0.02         0.04 ± 0.01   0.1 ± 0.01         0.01 ± 0.03   0.1 ± 0.01
Led7          0.04 ± 0.06   0.02 ± 0.001      0.04 ± 0.01   0.02 ± 0.001       0.03 ± 0.02   0.02 ± 0.001       0.03 ± 0.03   0.02 ± 0.01
PageBlocks    0.09 ± 0.07   0.007 ± 0.008     0.05 ± 0.04   0.002 ± 0.0002     0.03 ± 0.02   0.002 ± 0.0002     0.02 ± 0.02   0.002 ± 0.0002
Pima          0.1 ± 0.14    0.01 ± 0.003      0.03 ± 0.02   0.01 ± 0.003       0.03 ± 0.02   0.01 ± 0.002       0.03 ± 0.02   0.01 ± 0.001
TicTacToe     0.16 ± 0.09   0.01 ± 0.002      0.1 ± 0.028   0.01 ± 0.002       0.12 ± 0.04   0.02 ± 0.02        0.08 ± 0.03   0.01 ± 0.005
Wine          0.03 ± 0.03   0.02 ± 0.02       0.02 ± 0.02   0.02 ± 0.02        0.02 ± 0.01   0.01 ± 0.006       0.02 ± 0.01   0.01 ± 0.005
Table 3: The results of the selection experiments. The ADM and Size scores are averages ± standard deviation.
Dataset       0% overlap                       33.3% overlap                    50% overlap
              ADM           Size               ADM           Size               ADM            Size
Heart         0.07 ± 0.02   0.0001 ± 0.0001    0.04 ± 0.02   0.001 ± 0.00004    0.03 ± 0.05    0.001 ± 0.0002
Iris          0.36 ± 0.11   0.07 ± 0.01        0.37 ± 0.1    0.07 ± 0.007       0.34 ± 0.12    0.07 ± 0.006
Led7          0.38 ± 0.31   0.02 ± 0.005       0.05 ± 0.02   0.03 ± 0.002       0.03 ± 0.02    0.03 ± 0.002
PageBlocks    0.06 ± 0.01   0.002 ± 0.0001     0.04 ± 0.01   0.003 ± 0.0001     0.02 ± 0.01    0.003 ± 0.0001
Pima          0.04 ± 0.03   0.01 ± 0.0006      0.03 ± 0.02   0.02 ± 0.002       0.03 ± 0.02    0.02 ± 0.002
TicTacToe     0.07 ± 0.01   0.009 ± 0.0005     0.03 ± 0.02   0.01 ± 0.0003      0.01 ± 0.002   0.01 ± 0.0002
Wine          0.03 ± 0.01   0.006 ± 0.0003     0.03 ± 0.01   0.008 ± 0.0006     0.02 ± 0.01    0.008 ± 0.0003
Table 4: The results of the union experiments. The percentages denote the amount of overlap between the two data sets. The ADM and Size scores are averages ± standard deviation.
Dataset       33.3% overlap                    50% overlap                       66.6% overlap
              ADM           Size               ADM           Size                ADM           Size
Heart         0.39 ± 0.14   0.0002 ± 0.0001    0.36 ± 0.05   0.0002 ± 0.0001     0.42 ± 0.17   0.0001 ± 0.0001
Iris          0.09 ± 0.08   0.1 ± 0.02         0.08 ± 0.07   0.09 ± 0.02         0.03 ± 0.02   0.09 ± 0.01
Led7          0.5 ± 0.14    0.005 ± 0.002      0.42 ± 0.1    0.007 ± 0.001       0.3 ± 0.12    0.01 ± 0.001
PageBlocks    0.13 ± 0.07   0.001 ± 0.0002     0.09 ± 0.06   0.002 ± 0.0001      0.07 ± 0.05   0.002 ± 0.0001
Pima          0.09 ± 0.06   0.01 ± 0.002       0.09 ± 0.09   0.01 ± 0.003        0.05 ± 0.06   0.01 ± 0.002
TicTacToe     0.2 ± 0.05    0.007 ± 0.002      0.22 ± 0.04   0.005 ± 0.0007      0.24 ± 0.04   0.004 ± 0.0007
Wine          0.1 ± 0.02    0.01 ± 0.005       0.12 ± 0.03   0.005 ± 0.001       0.15 ± 0.04   0.002 ± 0.0006
Table 5: The results of the intersection experiments. The percentages denote the amount of overlap between the two data sets. The ADM and Size scores are averages ± standard deviation.
Dataset       33.3%                            50%                               66.6%
              ADM            Size              ADM            Size               ADM           Size
Heart         0.01 ± 0.01    0.001 ± 0.00007   0.01 ± 0.01    0.001 ± 0.0001     0.03 ± 0.02   0.002 ± 0.0004
Iris          0.003 ± 0.006  0.11 ± 0.007      0.005 ± 0.008  0.12 ± 0.01        0.02 ± 0.02   0.14 ± 0.01
Led7          0.02 ± 0.02    0.02 ± 0.0002     0.02 ± 0.02    0.02 ± 0.0006      0.06 ± 0.03   0.02 ± 0.001
PageBlocks    0.01 ± 0.004   0.002 ± 0.00004   0.02 ± 0.01    0.002 ± 0.00003    0.03 ± 0.01   0.003 ± 0.00007
Pima          0.02 ± 0.01    0.02 ± 0.001      0.01 ± 0.01    0.02 ± 0.001       0.01 ± 0.02   0.02 ± 0.001
TicTacToe     0.06 ± 0.02    0.01 ± 0.0003     0.07 ± 0.02    0.01 ± 0.0005      0.08 ± 0.02   0.02 ± 0.002
Wine          0.01 ± 0.007   0.01 ± 0.002      0.02 ± 0.01    0.02 ± 0.005       0.03 ± 0.02   0.04 ± 0.01
Table 6: The results of the setminus experiments. The percentages denote the size of the remaining data set. The ADM and Size scores are averages ± standard deviation.
Paper
Saturday, August 8, 2009
13:55 - 14:15
Room L-211
FORECASTING USING REGRESSION DYNAMIC LINEAR MODEL
Wiwik Anggraeni
Danang Febrian
(University of Indonesia)
(University of Indonesia)
Information System Department, Institut Teknologi Sepuluh Nopember,
Kampus Keputih, Sukolilo, Surabaya 60111, Indonesia
Abstract
Nowadays, forecasting is developing more rapidly because of the more systematic decision making processes in companies. One characteristic of good forecasting is accuracy, that is, obtaining an error as small as possible. Many current forecasting methods use large amounts of historical data to obtain minimal error and, besides, do not pay attention to influencing factors. In this paper, a forecasting method called Regression Dynamic Linear Model (RDLM) is proposed. This method is an expansion of the Dynamic Linear Model (DLM) method, which models data based on the variables that influence it. In RDLM, the variables that influence the data are called regression variables. If the data has more than one regression variable, then there are many candidate RDLM models, which makes it difficult to determine the most optimal model. Because of that, one of the Bayesian Model Averaging (BMA) methods, the Akaike Information Criteria (AIC), is applied in order to determine the most optimal model from the set of candidate RDLM models. Using this AIC method, the model choosing process becomes easier, and the optimal RDLM model can be used to forecast the data. The BMA-Akaike Information Criteria (AIC) method is able to determine RDLM models optimally, and the optimal RDLM model has high accuracy for forecasting. This can be concluded from the error estimation results: the MAPE value is 0.62897% and the U value is 0.20262.
Keywords: Forecasting, Regression variables, RDLM, BMA, AIC
1. Introduction
Nowadays, forecasting has developed more rapidly because of the more systematic decision making in organizations and companies. One characteristic of good forecasting is accuracy: the error should be as small as possible. Usually, forecasting only estimates based on historical data, without considering external factors that might influence the data. Because of that, this paper proposes a method that takes such external factors into consideration, called Regression Dynamic Linear Models (RDLM), with Bayesian Model Averaging (BMA) applied in order to choose the most optimal model. By using this method, the forecast results have high accuracy (Mubwandarikwa et al., 2005).
2. The Method
There are four steps to forecast data using the RDLM method, i.e.: forming candidate models, choosing the optimal model, forecasting using the optimal model, and measuring the accuracy of the optimal model.
2.1 Dynamic Linear Models (DLM)
The Dynamic Linear Model is an extension of state-space modeling for prediction and dynamic system control (Aplevich, 1999). A state-space model of a time series describes the data generating process with a state (usually represented by a vector of parameters) that can change over time. This state is only observed indirectly, insofar as the values of the time series are obtained as a function of the state in the corresponding period. The DLM base model at each time t is described by an evolution/system equation and an observation equation. The equations are as follows:
o Observation equation:
Yt = Ft θt + vt,  where vt ~ N[0, Vt]  (1)
o System equation:
θt = Gt θt−1 + ωt,  where ωt ~ N[0, Wt]  (2)
o Initial information:
θ0 ~ N[m0, C0]  (3)
A DLM can alternatively be described by 4 sets as follows:
Mt(jt) = {Ft, Gt, Vt, Wt}j,  j = 1, 2, ...  (4)
where, at time t:
o θt is the state vector at time t.
o Ft is the known regression variable vector.
o vt is the observation noise, which has a Gaussian distribution with zero mean and known variance Vt; it represents the estimation error of the observation Yt.
o Gt is the state evolution matrix; it describes the deterministic mapping of the state vector between time t − 1 and t.
o ωt is the evolution noise, which has a Gaussian distribution with zero mean and variance matrix Wt; it represents the change in the state vector.
2.2 Regression Dynamic Linear Models (RDLM)
The Regression Dynamic Linear Model (RDLM) is an extension of the DLM in which regression variables (regressors) are considered in the modeling process. For example, a time series that has regressors X1 and X2 will have several possible models, namely M1(·, X1), M2(·, X2), and M3(·, X1, X2). For time t, t = 1, 2, ..., the Regression Dynamic Linear Model Mj, (j = 1, 2, ..., k), represents a base time series model that can be identified by 4 sets, where:
o Fj = (X1, ..., Xp)j is the regression vector (1 x p), Xij is the ith regression variable (i = 1, 2, ..., p), with X1 having the value 1.
o Gj is the system evolution matrix (p x p), with the value Gj = I(n), the identity matrix.
o Vj is the observation variance.
o Wj is the system evolution variance matrix (p x p), which is estimated using discount factors (5). The discount factors are determined by checking the model to find their optimal values for the trend, seasonality, variance and regression components (Mubwandarikwa et al., 2005).
2.2.1 RDLM Sequential Updating
Estimation of the state variables cannot be done directly at all times; instead, information from the data is used, and the update from time t−1 to t is performed using the Kalman Filter. For further information, see West and Harrison (1997). Suppose the information available from past times up to time t and the data at time t are given, and assume (6). Equations (2) and (6) have Gaussian distributions, so a linear combination of both can be formed, producing the prior distribution (7). Then, from equations (1) and (7), the forecast distribution (8) can be obtained. From the forecast distribution in equation (8), the forecast result can be obtained using (9). By using the Kalman Filter, the posterior distribution (10) can be obtained.
All the steps above solve the recursive update of the RDLM and can be summarised as follows:
1. determine the model by choosing its components.
2. set the initial values.
3. forecast using equation (9).
4. observe and update using equation (10).
5. go back to step 3, substituting t+1 for t.
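The explicit forms of equations (6)-(10) did not survive this transcription. As a sketch under that caveat, the standard West and Harrison (1997) forecast/update recursions for the model of equations (1)-(2) can be written as follows; this is illustrative code, not the authors' implementation.

import numpy as np

def dlm_step(m, C, F, G, V, W, y):
    """One forecast/update cycle of the DLM Kalman recursions.

    m, C : posterior mean (p,) and covariance (p, p) of the state at time t-1
    F    : regression vector (p,) of equation (1)
    G    : state evolution matrix (p, p) of equation (2)
    V, W : observation variance (scalar) and evolution covariance (p, p)
    y    : the observation at time t
    Returns the one-step forecast f, its variance Q, and the updated (m, C).
    """
    a = G @ m                        # prior mean of the state
    R = G @ C @ G.T + W              # prior covariance of the state
    f = F @ a                        # one-step forecast (cf. equation (9))
    Q = F @ R @ F + V                # forecast variance
    A = R @ F / Q                    # adaptive (Kalman) gain
    m_new = a + A * (y - f)          # posterior mean after observing y
    C_new = R - np.outer(A, A) * Q   # posterior covariance (cf. equation (10))
    return f, Q, m_new, C_new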
2.3 Bayesian Model Averaging of RDLM
In the RDLM method there are many candidate models. For determining the most optimal model, one of the BMA methods is used, namely the Akaike Information Criteria (AIC).
2.3.1 Akaike Information Criteria (AIC)
The Akaike Information Criteria (AIC) by Akaike (1974) originates from the maximum (log-)likelihood estimate (MLE) of the error variance of a Gaussian linear regression model. The maximum (log-)likelihood can be used to estimate the parameter values in a classic linear regression model. AIC suggests that, from a class of candidate models, one chooses the model that minimises (11), where for the jth model the likelihood and p, the number of parameters in the model, are used. This method chooses the model that gives the asymptotically best estimates (Akaike, 1974) in the sense of the Kullback-Leibler discrepancy. The Akaike weight can be estimated by first defining (12), where minAIC is the smallest AIC value in the set of models and the likelihood of every model is conditional on the data and the set of models. The Akaike weight can then be estimated using equation (13), where k is the number of possible models under consideration (Turkheimer et al., 2003).
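Equations (11)-(13) are likewise missing from this transcription. For reference, the standard AIC and Akaike-weight definitions that the text appears to describe are given below; this is a reconstruction under that assumption, not a verbatim quotation from the paper.

\mathrm{AIC}_j = -2\ln L_j + 2\,p_j                                        % (11)
\Delta_j = \mathrm{AIC}_j - \min_k \mathrm{AIC}_k                          % (12)
w_j = \frac{\exp(-\Delta_j/2)}{\sum_{k=1}^{K}\exp(-\Delta_k/2)}            % (13)

Here L_j is the likelihood of the jth model, p_j its number of parameters, and w_j its Akaike weight among the K candidate models.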
2.4 Error Estimation
The accuracy of a forecasting model can be judged from the error estimation results. According to Makridakis et al. (1997), several methods of forecast error estimation that can be used are the following:
o Mean Absolute Percentage Error (MAPE)
MAPE is the difference between the real data and the forecast result, divided by the real data, taken in absolute value and expressed as a percentage (14). A model has excellent performance if the MAPE value lies under 10%, and good performance if the MAPE value lies between 10% and 20% (Zainun and Majid, 2003).
o Theil's U statistic
The U statistic compares the performance of a forecasting model with naive forecasting, which predicts that the future value is equal to the real value one time step before. The comparison uses the corresponding ratio of RMSE (root mean squared error), that is, the square root of the average squared difference between prediction and observation (15); in both formulas the comparison is between the real data and the forecasting result. As a rule of thumb, a forecasting method with a Theil's U value larger than 1 is not effective.
3. Implementation and Analysis
Several trial test that have been done are choosing
optimal model, forecasting optimal model, testing AIC
performance and comparing DLM with RDLM.
To do the trial tests, world commodity price index data is
used. This data contains many kinds world commodity
including food, gas, agriculture, and many kinds of metal.
Several variables used are :
1. Rice price index (D).
2. Fertilizer price index (X1).
3. Agriculture tools price index (X2).
4. Refined fuel oil price index (X3).
This data is from 1980 until 2001. The target forecast
data is the first variable that is rice price index, with
Figure 1 Data Plot
From the table above, it can be seen that M5 has the
largest weight, so M5 is the most optimal model.
3.2 Optimal Model Forecasting
From the previous section, M5 has been chosen as the most optimal model and will be used for forecasting. The forecasting result of M5 is shown in Figure 2.
Figure 2 RDLM Forecasting
From the forecasting result, the accuracy is then calculated and shown in the following table.
Table 2 Error Estimation Result
From these calculations, it can be seen that the RDLM model has excellent forecasting performance, because its MAPE value of 0.62897% lies under 10%. From the Theil's U point of view, this model is effective since its U value is under 1.
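For reference, the two error measures above can be computed with the textbook definitions (Makridakis et al., 1997); the sketch below is illustrative only and the data values are made up.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def theils_u(actual, forecast):
    """Theil's U: RMSE of the model divided by the RMSE of the naive forecast
    (next value = previous actual value); a value above 1 means not effective."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    model_rmse = np.sqrt(np.mean((actual[1:] - forecast[1:]) ** 2))
    naive_rmse = np.sqrt(np.mean((actual[1:] - actual[:-1]) ** 2))
    return model_rmse / naive_rmse

# Illustrative values only: a forecast with MAPE under 10% and U under 1.
y    = [100.0, 102.0, 101.0, 105.0, 107.0]
yhat = [100.0, 101.5, 101.8, 104.2, 106.5]
print(mape(y, yhat), theils_u(y, yhat))
```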
3.3 Testing AIC Performance
In order to analyze AIC performance, the accuracy of every model is compared, so that it can be seen whether the model chosen by AIC is the model with the smallest error. The accuracy of every model is shown in the following table:
Table 3 Errors of Every Model
From the table above, it can be seen that M5 has the smallest error, so it can be concluded that the AIC method works well in choosing the model.
3.4 Comparison Between RDLM and DLM Performance
In this section, RDLM and DLM are compared to show that the RDLM method works better than the DLM method. This is done by comparing the DLM model with the most optimal RDLM model, M5. The forecasting results of both models are plotted in Figure 3.
Figure 3 RDLM and DLM Forecasting
From the forecasting results, the error estimation is computed and shown in the following table.
Table 4 RDLM and DLM Errors
From the above table, it can be seen that the RDLM method has a smaller error than the DLM model, in terms of both the MAPE and the Theil's U value.
4. Conclusion
Several conclusions can be drawn about the application of BMA-Akaike Information Criterion (AIC) in the RDLM (Regression Dynamic Linear Model) forecasting method, as follows:
1. Forecasting using RDLM (Regression Dynamic Linear Model) has high accuracy as long as the chosen model is the most optimal model.
2. The BMA-Akaike Information Criterion (AIC) method is proven to determine the optimal RDLM model.
3. Forecasting using the RDLM method gives better results than the normal DLM method as long as the RDLM model is the most optimal model.
4. Using the rice price index data for 1997-2001, the RDLM method works 48% better than the DLM method judging from the MAPE value, and 46% better judging from the Theil's U value.
5. References
Akaike, H. (1974). A new look at the statistical model identification. IEEE Trans. Auto. Control, 19, 716-723.
Aplevich, J., 1999. The Essentials of Linear State Space
Systems. J. Wiley and Sons.
Grewal, M. S., Andrews, A. P., 2001. Kalman Filtering:
Theory and Practice Using MATLAB (2nd ed.). J. Wiley
and Sons.
Harvey, A., 1994. Forecasting, Structural Time-series
Models and the Kalman Filter. Cambridge University
Press.
Mubwandarikwa, E., Faria, A.E. (2006). The Geometric Combination of Forecasting Models. Department of Statistics, Faculty of Mathematics and Computing, The Open University.
Mubwandarikwa, E., Garthwaite, P.H., and Faria, A.E. (2005). Bayesian Model Averaging of Dynamic Linear Models. Department of Statistics, Faculty of Mathematics and Computing, The Open University.
Turkheimer, E., Hinz, R., and Cunningham, V. (2003). On the undecidability among kinetic models: from model selection to model averaging. Journal of Cerebral Blood Flow & Metabolism, 23, 490-498.
Verrall R. J. 1983. Forecasting The Bayesian The City
University, London
West, M. (1997). Bayesian Forecasting. Institute of Statistics & Decision Sciences, Duke University.
World Primary Commodity Prices. (2002). Retrieved 23 May 2008 from http://www.economicswebinstitute.org
Yelland, P.M., and Lee, E. (2003). Forecasting Product Sales with Dynamic Linear Mixture Models. Sun Microsystems.
Zainun, N.Y., and Majid, M.Z.A. (2003). Low Cost House Demand Predictor. Universiti Teknologi Malaysia.
Paper
Saturday, August 8, 2009
14:20 - 14:40
Room L-210
EDUCATIONAL RESOURCE SHARING IN THE HETEROGENEOUS
ENVIRONMENTS USING DATA GRID
Aan Kurniawan, Zainal A. Hasibuan
Faculty of Computer Science, University of Indonesia
email: [email protected] 1, [email protected] 2
Abstract
Educational resources usually reside in the digital library, e-learning and e-laboratory systems. Many of the systems
have been developed using different technologies, platforms, protocols and architectures. These systems maintain a
large number of digital objects that are stored in many different storage systems and data formats with differences in:
schema, access rights, metadata attributes, and ontologies. This study proposes a generic architecture for sharing
educational resources in the heterogeneous environments using data grid. The architecture is designed based on the
two common types of data: structured and unstructured data. This architecture will improve the accessibility, integration
and management of those educational resources.
Keywords: resource sharing, data grid, digital library
1. Introduction
Currently, the increasing social demands on high quality
educational resources of higher education cannot be
fulfilled only by the available educators and conventional
libraries. With the advances of information technology,
many learning materials and academic journals created by
universities have been converted into digital objects. The
rapid growth of Internet infrastructure accelerates the
transformation of conventional libraries and learning to
the digital libraries and e-learning. This transformation greatly affects the way people obtain information and learn; accessing information and learning can now be done from anywhere at any time.
Since many digital library, e-learning and e-laboratory
systems have been developed using different
technologies, platforms, protocols and architectures, they
will potentially introduce the problem of information
islands. In order to address this problem, some previous
works [1][2][3][4] proposed the use of grid technology
that has the capability of integrating the heterogeneous
platforms. However, most of them considered that the
shared resources are only files or unstructured data.
Educational resources consist of not only unstructured
data, but also structured data. Much information such as
the metadata describing the shared digital objects and
the XML formatted documents is stored in a database.
This information also needs to be shared with other
systems.
In this study, we propose a generic architecture for sharing
educational resources in heterogeneous environment
using data grid. We also show how this architecture
applies in the digital libraries using Indonesian Higher
Education Network (INHERENT) [5].
2. Inherent
INHERENT (Indonesian Higher Education Network) [5] is
a network backbone that is developed by Indonesian
government to facilitate the interconnection among the
higher education institutions (HEIs) in Indonesia. The
project was proposed by the Directorate of Higher Education. Started in July 2006, it currently connects 82 state HEIs, 12 regional offices for the coordination of private HEIs, and 150 private HEIs (see Figure 1).
All state HEIs in Java are connected by STM-1 National
Backbone with the bandwidth of 155 Mbps. Other cities
in the other islands use 8-Mbps leased line and 2 Mbps
VSAT connections.
Figure 1. Indonesian Higher Education Network in 2009
[5]
This network has been used for various educational
activities including video-conferencing and distance
learning. Every university can build their own digital
libraries and learning management systems (LMSs) and
then publish their educational resources through the
network. Although the resources can be shared with each other via FTP or web servers, the systems (digital libraries and LMSs) cannot provide an integrated view to users. Users still have to access every digital library system in order to find the resources they require.
This network has the potential for sharing various educational resources using a data grid.
3. Data Grid
The data grid is one type of grid technology; the other types are the computational grid and the access grid. Originally, the emphasis of grid technology lay in the sharing of computational resources [6]. Technological and scientific advances have led to an ongoing data explosion in many fields. Data are stored in many different storage systems and data formats, with different schemas, access rights, metadata attributes, and ontologies. These data also need to be shared and managed. This need has introduced a new grid technology, namely the data grid. Several data grids already exist; in the following, we overview two of them (iRODS and OGSA-DAI) and highlight the features with which they address this need.
iRODS
iRODS (Integrated Rule-Oriented Data System) [7] is a second generation data grid system providing a unified view of and seamless access to distributed digital objects across a wide area network. It is an extension of the Storage Resource Broker (SRB), which is considered the first generation data grid system. Both SRB and iRODS are developed by the San Diego Supercomputing Center (SDSC).
Classified as the first generation of the data grid, SRB is mainly focused on providing a unified view over distributed storage based on logical naming concepts, using a client-server architecture. These concepts facilitate naming and location transparency: users, resources, data objects and virtual directories are abstracted by logical names and mapped onto physical entities. The mapping is done at run time by the virtualization sub-system. The information mapping logical names to physical names is maintained persistently in a database system called the Metadata Catalog. The database also maintains the metadata of the data objects, using an attribute-value pair schema, together with the states of data and operations. Built upon this logical abstraction, iRODS goes one level higher by abstracting the data management process itself, which is called policy abstraction.
Whilst the policies used for managing data at the server level in SRB are hard-coded, iRODS uses another approach, Rule-oriented Programming (ROP), to make the customization of data management functionality much easier. Rules are explicitly declared to control the operations performed when a rule is invoked by a particular task. In iRODS, these operations are called micro-services and are implemented as functions in the C programming language.
Figure 2. iRODS Architecture [7]
Figure 2 displays the iRODS architecture with its main
modules. The architecture differentiates between the
administrative commands needed to manage the rules, and
the rules that invoke data management modules. When a
user invokes a service, it fires a rule that uses the
information from the rule base, status, and metadata
catalog to invoke micro-services [8]. The micro-services
either change the metadata catalog or change the resource
(read/write/create/etc).
Figure 3 illustrates a scenario when a client sends a query
asking for a file from an iRODS zone. Firstly, he connects
to one of iRODS servers (for example server A) using a
client application and sends the criteria of the file needed
(e.g. based on the metadata, filename, size, etc). The request
is directed to server A that will find the file using
information available in Metadata catalog. The query result
is sent back to the client. If he/she wants to get the file,
server A asks the catalog server which iRODS server that
stores the file (for example in the server B). Server A then
communicates with server B to request the file. Server B
applies the rules related with the request. The rules can
be the process of authorization (whether the client has a
privilege to read the file) and sending the file to the client
using iRODS native protocol. The client is not aware of
the location of the file. This location transparency is
handled by the grid.
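To make the location transparency handled by the grid concrete, the following is a minimal conceptual sketch in Python (not the iRODS API) of how a metadata catalog resolves a logical file name to the physical server that actually stores it; the catalog layout, names and values are all illustrative assumptions.

```python
# Conceptual sketch of location transparency via a metadata catalog.
# This is NOT the iRODS API; it only illustrates the lookup flow described above.

METADATA_CATALOG = {
    # logical name -> (physical server, physical path, descriptive metadata)
    "idl:/siteA/papers/thesis.pdf": ("server-B", "/vault/0451/thesis.pdf",
                                     {"owner": "site-A", "size": 1048576}),
}

def query(criteria):
    """Server A: return logical names whose metadata match the query criteria."""
    return [name for name, (_, _, meta) in METADATA_CATALOG.items()
            if all(meta.get(k) == v for k, v in criteria.items())]

def fetch(logical_name):
    """Server A: resolve the logical name and request the file from the server
    that stores it; the client never needs to know that server's identity."""
    server, path, _ = METADATA_CATALOG[logical_name]
    # A real zone would now apply its rules (authorization, auditing) before
    # streaming the file from `server` back to the requesting client.
    return f"requesting {path} from {server}"

print(query({"owner": "site-A"}))
print(fetch("idl:/siteA/papers/thesis.pdf"))
```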
OGSA-DAI
OGSA-DAI (Open Grid Services Architecture – Data Access and Integration) [9] is middleware that allows structured data resources, such as relational or XML databases, from multiple, distributed, heterogeneous and autonomously managed data sources to be easily accessed via web services. It focuses on cases where assembling all the data into a single data warehouse is inappropriate [9].
Figure 4. An overview of OGSA-DAI components [10]
Figure 3. A client asks for a file from an iRODS
data grid [7]
Based on the explanation above, we conclude that iRODS
focuses on managing unstructured data objects such as
files. Although it can also access structured data
resources, its orientation is mainly on distributed file
management. However, it also uses structured data (a relational database) to manage the metadata of the data objects, the states of data and the states of operations.
The metadata can potentially be integrated using the
OGSA-DAI data grid.
OGSA-DAI is designed to enable the sharing of data resources to support collaboration, providing:
a. Data access service, which allows structured data in distributed heterogeneous data resources to be accessed.
b. Data transformation service, which allows data in schema X to be exposed to users as data in schema Y.
c. Data integration service, which allows multiple databases to be exposed to users as a single virtual database.
Figure 5. The proposed generic architecture for resource sharing
d. Data delivery service, which allows data to be delivered to where it is needed by the most appropriate means, such as Web services, email, HTTP, FTP and GridFTP.
OGSA-DAI has adopted a service-oriented architecture (SOA) solution for integrating data and grids through the use of web services. The role of OGSA-DAI in a service-based Grid, illustrated in Figure 4, involves interactions between the following components [10]:
a. OGSA-DAI data service: a web service that implements various port types allowing the submission of requests and data transport operations.
b. Client: an entity that submits a request to the OGSA-DAI data service. A request takes the form of a perform document that describes one or more activities to be carried out by the service.
c. Consumer: a process, other than the client, to which an OGSA-DAI service delivers data.
d. Producer: a process, other than the client, that sends data to an OGSA-DAI data service.
When a client wants to make a request to an OGSA-DAI data service, it invokes a web service operation on the data service using a perform document. A perform document is an XML document describing the request that the client wants to be executed, defined by linking together a sequence of activities. An activity is an OGSA-DAI construct corresponding to a specific task that should be performed. The output of one activity can be linked to the input of another to perform a number of tasks in sequence. A range of activities is supported by OGSA-DAI, falling into the broad categories of relational activities, XML activities, delivery activities, transformation activities and file activities. Furthermore, the activity is an OGSA-DAI extensibility point, allowing third parties to define new activities and add them to the ones supported by an OGSA-DAI data service.
OGSA-DAI focuses on managing heterogeneous structured data resources. Although it can also access unstructured data using file transfer, its orientation is mainly on structured data (database) management.
Given these two existing data grid middlewares, in the following section we propose a system architecture that accommodates both kinds of data.
4. The Proposed Architecture
In this study, a generic architecture for sharing educational resources in heterogeneous environments using data grid middleware is proposed, based on the two common types of data: structured and unstructured data.
From the perspective of computer processing, the digital objects are merely data. Generally, data can be classified into two categories: unstructured and structured data. Unstructured data consists of any data stored in an unstructured format at an atomic level; there is no conceptual definition and no data type definition in the unstructured content. Furthermore, unstructured data can be divided into two basic categories: bitmap objects (such as video, image, and audio files) and textual objects (such as spreadsheets, presentations, documents, and email). Both of them can be treated as a string of bits. Unstructured data is usually managed by the operating system. Structured data has schema information that describes its structure; the schema can be separated from the data (as in a relational database) or mixed with the data (e.g. the XML format). Structured data is usually managed by a database management system, which facilitates the processes of defining, constructing, manipulating, and sharing the data among various users and applications [11]. This difference in the nature of the data leads to different treatment when the various data formats and storage systems are handled in the middleware layer.
Figure 5 shows the proposed architecture, which utilizes a hierarchy similar to the one used in [12] but applies it to managing heterogeneous data resources. The architecture consists of three layers: the data layer, the data grid middleware layer and the application layer.
At the data layer, the various data resources in the
heterogeneous file systems and storage systems can be
joined into one large data collection. We distinguish
between structured and unstructured data because of their
different inherent characteristics. At the data grid
middleware layer, the data virtualizations for each data
type are separated. The unstructured data are virtualized
by file-oriented data grid middleware, such as SRB and
iRODS, while the structured data virtualization is handled
by database-oriented data grid middleware, such as OGSA-DAI.
Based on the analysis of the file-oriented data grids (such as SRB and iRODS), the unstructured data virtualization provides the following basic services [13]:
a. Data storage and replication service, which allows any type of digital object content to be stored and replicated to several other resources. The service is independent of the content type, because only the clients need to be aware of the content's internal format and structure.
b. Composition and relation service, which allows various relations between digital objects and multiple groups of related objects to be defined. Those relations may be used to create complex digital objects, to show parent/child relationships between objects, or to create collections of digital objects.
c. Search service, which allows searching in previously defined sets of digital objects. The search can be based on query matching with the metadata catalog.
d. Metadata storage service, which allows metadata describing digital objects to be stored. One object can be described by many metadata records. The metadata also records information associated with replication. These records can be utilized by the search service. Usually, database systems are used to store and manage the metadata. Furthermore, some database systems containing metadata can be integrated using the structured data virtualization components of the data grid middleware.
The structured data virtualization provides the four basic services described in the section on OGSA-DAI.
At the application layer, data-intensive applications, such as e-learning management systems and digital libraries, can utilize the two data virtualizations in order to publish and share their digital content objects.
Figure 6 shows a typical implementation of the proposed architecture for digital libraries. Every digital library site registered in the Integrated Higher Education Digital Library Portal manages its own data resources, consisting of a collection of digital objects and structured data (relational databases and XML). The digital objects are stored in various storage systems managed by iRODS storage servers. Since all of the iRODS servers are registered in one zone, namely Zone IDL (Indonesian Digital Libraries), the digital objects can be replicated among the servers. Some files of site A can be replicated to the servers of site B, and vice versa. The metadata catalog servers at both sites will then contain the same information about all of the collected educational resources.
Some digital library systems store index files for searching in relational databases. The systems can also manage some kinds of educational resources formatted in XML (e.g. semi-structured documents) using native XML databases. All of this information can be accessed and integrated by the Integrated Higher Education Digital Library Portal using OGSA-DAI. Therefore, a user can perform distributed searching over the files stored in the resources of both sites.
The integrated system also enables a user to obtain all resources from a location closer to him if the resources have already been replicated to several locations.
Since all sites are connected in INHERENT with high-speed bandwidth, there is no need for the Integrated Higher Education Digital Library Portal to harvest the metadata from all member sites, as proposed in [4]. No central metadata repository is required. This ensures that the query results of distributed searching will always be up to date, because they come from the local query processing of each member site.
Figure 6. A typical implementation of the proposed architecture for digital library
5. Conclusion
In this study, we propose a generic architecture for sharing educational resources in heterogeneous environments. The architecture distinguishes the managed data into two categories, namely structured and unstructured data, and the data grid middleware used for virtualization is separated according to these two categories. In our design, the combination of the two data grids handles all kinds of data types. Hence, this architecture can improve the accessibility, integration and management of those educational resources.
6. References
[1] Yang, C.T., Hsin-Chuan Ho. Using Data Grid Technologies to Construct a Digital Library Environment. Proceedings of the 3rd International Conference on Information Technology: Research and Education (ITRE 05), pp. 388-392, NTHU, Hsinchu, Taiwan, June 27-30, 2005.
[2] Candela, L., Donatella Castelli, Pasquale Pagano, Manuele Simi. Moving Digital Library Service Systems to the Grid. Springer-Verlag. 2005.
[3] Sebestyen-Pal, G., Doina Banciu, Tunde Balint, Bogdan Moscaiuc, Agnes Sebetyen-Pal. Towards a GRID-based Digital Library Management System. In Distributed and Parallel Systems, pp. 77-90. Springer-Verlag. 2008.
[4] Pan, H. Research on the Interoperability Architecture of the Digital Library Grid. In IFIP International Federation for Information Processing, Volume 251, Integration and Innovation Orient to E-Society, Volume 1, Wang, W. (Eds.), Boston: Springer, pp. 147-154. 2007.
[5] Indonesian Higher Education Network (INHERENT). http://www.inherent-dikti.net
[6] Foster, I., Carl Kesselman. The Grid: Blueprint for a New Computing Infrastructure. 2nd Edition. Morgan Kaufmann. 2006.
[7] iRODS (Integrated Rule-Oriented Data System). https://www.irods.org
[8] Weise, A., Mike Wan, Wayne Schroeder, Adil Hasan. Managing Groups of Files in a Rule Oriented Data Management System (iRODS). Proceedings of the 8th International Conference on Computational Science, Workshop on Software Engineering for Large-Scale Computing, Krakow, Poland. 2008.
[9] OGSA-DAI (Open Grid Services Architecture - Data Access and Integration). http://www.ogsadai.org.uk/index.php
[10] Chue Hong, N.P., Antonioletti, M., Karasavvas, K.A., Atkinson, M. Accessing Data in Grids Using OGSA-DAI. In Knowledge and Data Management in GRIDs, pp. 3-18, D. Talia, A. Bilas, M.D. Dikaiakos (Eds.), 2007. ISBN: 978-0-387-37830-5.
[11] Elmasri, R., Shamkant B. Navathe. Fundamentals of Database Systems. 5th Edition. Addison Wesley. 2006.
[12] Coulouris, G., Jean Dollimore, Tim Kindberg. Distributed Systems: Concepts and Design. 4th Edition. Addison Wesley. 2005.
[13] Kosiedowski, M., Mazurek, C., Stroinski, M., Werla, M., Wolski, M. Federating Digital Library Services for Advanced Applications in Science and Education. Computational Methods in Science and Technology 13(2), pp. 101-112. December 2007.
Paper
Saturday, August 8, 2009
14:20 - 14:40
Room L-212
Behavior Detection Using The Data Mining
Untung Rahardja
STMIK RAHARJA, Raharja Enrichment Centre (REC), Tangerang - Banten, Indonesia
[email protected]
Edi Winarko
GADJAH MADA UNIVERSITY, Faculty of Mathematics and Natural Sciences, Yogyakarta, Indonesia
[email protected]
Muhamad Yusup
STMIK RAHARJA, Raharja Enrichment Centre (REC), Tangerang - Banten, Indonesia
[email protected]
Abstract
The goal of implementing a web-based information system is that users can access information wherever and whenever desired. Absensi Online (AO) is a web-based information system that serves students and lecturers and has been applied at Raharja University; lecturers use it to record the attendance of lecturers and students in the classroom during a lecture. The system can be read and accessed by all users connected to the network; however, this may lead to breaches or deception in the recording of attendance. Restricting access is not the right solution when the information must remain readable and accessible to all users connected to the network: the security mechanism must be transparent to the user and must not be disruptive. With behavior detection using the concept of data mining, Absensi Online (AO) can be controlled wisely: the system can detect and report indications of negative behavior, while the information can still be enjoyed by all users connected to the network without access restrictions. In this article, the problems in Absensi Online (AO) at Raharja University are identified, a critical review related to behavior detection is given, behavior detection using the concept of data mining is defined as a problem-solving step, and the benefits of the concept are described. With behavior detection using the concept of data mining in a web-based information system, data integrity and accuracy can be guaranteed while system performance is optimized, so that the life of the system can continue to progress well.
Index Terms— behavior detection, data mining, Absensi Online
I. Introduction
Raharja University, which works in the field of computer science and is located in Banten Province, is only 10 (ten) minutes from Soekarno-Hatta International Airport. It has achieved many awards, one of which is winning WSA 2009 - Indonesia in the E-Learning and Education category with the intranet product Raharja Multimedia Edutainment (RME). Raharja University has also improved its quality through accreditation by the National Accreditation Board of Higher Education (BAN-PT), which states that the Diploma 3 program in Komputerisasi Akuntansi at AMIK Raharja Informatika is accredited A. In addition, Raharja University has entered the ranks of the top 100 universities and colleges in the Republic of Indonesia.
Raharja University has 4 (four) IT e-learning platforms, consisting of SIS (Student Information Services), RME (Raharja Multimedia Edutainment), INTEGRAM (Integrated Marketing Raharja), and GO (Green Orchestra). These instruments support Raharja University as an excellent campus, in line with its vision of being a superior university that produces graduates who are competent in the fields of information systems, informatics engineering and computer systems, and who are highly competitive in the era of globalization.
Figure 1. The 4 IT Pillars of Perguruan Tinggi Raharja
SIS (Student Information Services) is software specially designed to improve the quality of service to students. It provides information on the student's lecture schedule selected by semester, the Kartu Hasil Studi (KHS), the cumulative achievement index (GPA) table, and lists of grades, and it provides form-creation services that can be used for student activities in lectures, quickly and in real time [1].
Green Orchestra (GO) is the IT financial accounting instrument at Raharja University, providing online service excellence to Personal Raharja and giving comfort to cashier staff and students in terms of the speed and accuracy of data services [2].
INTEGRAM (Raharja Integrated Marketing) is a web-based information system designed specifically to serve the admission process of new students at Raharja University. With INTEGRAM, the acceptance of new students is faster (service excellence) and controlling can be done well [3].
RME (Raharja Multimedia Edutainment) reflects Raharja University's development of a multimedia-entertainment-based learning process, packaged in the concept of Interactive Digital Multimedia Learning (IDML), which engages the five senses through text, images and voice in the learning process for the whole academic community, with continuous improvement towards perfection of the teaching materials, which always evolve with the progress and development of technology [4].
RME makes it easy for the academic community to obtain information about the SAP and the syllabus of teaching materials; lecturers can easily upload their presentation materials into RME to be presented to students, and the controlling system in the academic field makes decision making easier [5].
Absensi Online (AO) is part of the development of Raharja Multimedia Edutainment (RME), designed and implemented to improve services and training so that students and lecturers can make better use of their time in lectures [6].
But has the system been developed so that it can conveniently report lecturers and students who are considered to violate lecture order and discipline in the learning process? Can the system itself detect and report an indication of deception or cheating of the system?
II. Problem
A system is subject to mismanagement, errors, deception, cheating, abuse and other misconduct. A control system applied to information is very useful for maintaining the system or preventing undesirable things from happening [7]. Similarly, Absensi Online (AO), which is part of RME (Raharja Multimedia Edutainment), requires a control that is useful to prevent or guard against negative things, so that the system will be able to continue to perpetuate its life.
Good control is also very important for a web-based information system to protect itself from harmful things, considering that the system can be accessed by many users, including users who are not responsible [8]. One way is a web-based information system whose security is transparent to the user and not disruptive. In this case, can behavior detection using the concept of data mining be a choice?
As described previously, Absensi Online (AO) in RME serves the lecture process: lecturers record the attendance of lecturers and students in Absensi Online (AO) for a particular lecture room. However, there is a problem in this process, namely cheating in the recording of student attendance. This is because the system can be read and accessed by all users connected to the network, so that attendance can be recorded by anyone and from anywhere.
Besides, cheating can also be done by lecturers who record attendance, for example a lecturer stating that a student is not present when the student is present, or stating that a student is present when the student is absent. Indications of deception can also be obtained from auditing results, lecturer meetings, findings, student complaints, and so on. When auditing was done, many Absensi Online records were found stating that students were present in class while the students were outside the class. In addition, there were Absensi Online records in which, on average, a whole class was present at the appropriate time; after checking, it turned out that the lecturers recorded all students as present at the beginning of the lecture, and if there were students who did not attend, the attendance record was changed at the end of the lecture.
Figure 2. IMM Percentage per Date
From Figure 2 above, the average percentage of attendance and the number of students each day can be seen. If the data change very drastically at some point, this can also be an indication of deception or cheating in Absensi Online (AO).
Figure 3. Absensi Online (AO) in the lecture
Another thing that can be used as an indication of deception is shown by the circle in Figure 3. The student did not follow the lectures from the beginning, but at the 4th meeting Absensi Online (AO) records the student as present, and at the following meetings the student is not present again. This can happen because the student may have asked a friend to record his attendance without the lecturer noticing, or the attendance may have been recorded by the lecturer concerned.
Other findings are that a number of students obtain a high GPA (Cumulative Performance Index) while their IMM (Student Quality Index) attendance level is very low, or vice versa. This can also be taken as an indication of cheating.
To keep the Absensi Online (AO) process running properly and to guarantee the accuracy of the generated information, a password is usually used. However, this measure prevents users other than the authorized user from finding out the information. In addition, a password becomes less effective for systems that are accessed by many users whose membership and activities always change over time, and it prevents the security system from being transparent to other users. Conversely, if there are no access restrictions on entering Absensi Online (AO), it is possible for users to perform other undesired deceptions.
From the above description, several problems can be formulated as follows:
1. Can we use IT to detect cheating?
2. What data mining can be used for it?
3. Is there behavior detection in data mining?
III. Critical Review
A number of critical reviews related to behavior detection will be sought. After that, the results will be considered, their similarities and differences identified, and their weaknesses and strengths examined. Some of the critical reviews are as follows:
1. Data mining is the exploration and analysis of possibly valuable information in a large database, used for the purposes of knowledge discovery and better decision making that benefits the company overall [9].
2. In a different direction, data mining is also used for data or computer security. Data mining techniques such as association rule discovery, clustering, deviation detection, time series analysis, classification, inductive databases and dependency modeling have in fact been used for fraud or misuse detection. A framework that we call an anomaly detection system performs this knowledge discovery, especially to make the system more secure to use [10].
3. Research carried out by the data mining expert Anil K. Jain in 2000 at Michigan State University on statistical pattern recognition provides a statistical pattern recognition view of the results obtained from data mining. The research summarizes and compares the stages of a pattern recognition system to determine which data mining methods are the most appropriate in various fields, including classification, clustering, feature extraction, feature selection, error estimation and classifier combination [11].
4. Research conducted by Wenke Lee in 1998 at Columbia University addresses misuse and anomaly detection using data mining. Based on the available audit data, the system can be trained to learn behavior patterns, so that a classifier can be mapped onto a two-dimensional axis and mined to isolate low-frequency patterns. The data mining described there can also be applied in the present research: patterns of previous behavior can be learned and mapped onto current behavior patterns, so that low-frequency patterns can be detected. Once the desired pattern has been obtained, it can be categorized into an anomaly list to be processed further [12].
5. Research was also conducted by Stefanos Manganaris in 1999 at the International Business Machines Corporation, examining context-sensitive anomaly alerts using real-time intrusion detection (RTID). The purpose is to characterize and filter all alerts and compare them with historical behavior, which is useful for identifying the profiles of different clients. The current research can be related to that work, because the lecture attendance system treats a detected anomaly as an intrusion that has occurred (for example, a student who may not be present but is recorded as present). The existing anomalous data are also characterized, so the system likewise has the ability to identify the profiles of different clients [13].
6. Research conducted by Mahmood Hossain in 2001 at Mississippi State University is almost the same as the research conducted by Stefanos Manganaris: a data mining framework whose input is an adaptive system profile, even using fuzzy association rule mining to detect anomalous and normal behavior. The present research also adopts this approach, so that the adaptive profiling can detect anomalies [14].
IV. Problem Solving
Basically, control of the Absensi Online (AO) system is done to prevent deceptions and other negative things from happening in the recording of attendance. One way that control can be done is by using behavior detection.
However, from the existing critical reviews it can be noted that research focusing on behavior detection has not yet been done; most research has examined anomaly detection. Nevertheless, in outline anomaly detection can be useful for behavior detection, since behavior detection can be done using the data mining concept.
Some of the things that can be used as an indication of cheating in Absensi Online (AO) were described previously, among them a lecturer stating that a student does not attend when the student does attend, and a lecturer stating that a student is present when the student does not attend.
The first indication is not discussed in this article, because if it happens the student will automatically complain to the lecturer concerned. Meanwhile, the second indication is the focus of this article: how, with the data mining concept, behavior detection can be done well so that the system itself can know whether there is an indication of cheating or an anomaly.
Next, prediction is conducted based on data mining and analysis of the database, to build models that predict trends and properties of information that are not yet known [15].
Before further discussion, the unusual events must first be identified; they can be represented by the following 7 (seven) criteria:
1. The average student attendance. The system can detect cheating by looking at the attendance level of a student. This is the same as the previous explanation shown by the red circle in Figure 3.
2. Students with a low GPA. The Cumulative Performance Index (GPA) is the average grade of a student during the lectures followed. If a student with a low average GPA suddenly obtains a drastically higher GPA in a semester, this can also serve as an indication of deception.
3. Data on previous breaches. Breaches that occurred previously are recorded in a database and can be used as a reference if they happen again.
4. Social background. Deception can also be related to the student's social background, represented by the size of the income earned.
5. Friends or gang factor. Generally, students have specific friends or groups (gangs). A group can make a student diligent in following the lectures, or the opposite.
6. Comparing the entrance aptitude test. Behavior detection can also refer to the student's Ujian Saringan Masuk (USM), since a student's GPA and attendance level can also be gauged from the USM.
7. Comparing the IMM. The Student Quality Index (IMM) is a measure prepared to gauge the attendance discipline of a student using Absensi Online (AO). The student's attendance times during Teaching and Learning Activities (KBM) are recorded in the database, so they can also serve as a reference for behavior detection.
When the system detects an unusual event based on behavior detection, there are 2 (two) possible actions that the controller of the system can take:
1. Go down to the field and find the truth. The Absensi Online (AO) system controller or admin performs auditing directly in the class that was flagged, to determine whether there is truly an indication of cheating or not.
2. Save the behavior as a reference for the next detection. A detected breach can be recorded and stored in the database to serve as a reference for detection if the behavior occurs again.
Therefore, the 7 (seven) criteria above can be formed into 5 (five) itemized factors, described in the following table:
Table 1. Itemize Factor
V. Implementation
Figure 4. Data Mining Architecture Universities Raharja
Before the data mining engine is started, an ideal central repository or data storage is needed. Figure 4 above shows the data mining architecture of Raharja University. Note that the operational data acting as the data source are divided into two parts, namely external data and internal data. External data are survey data usually obtained from field studies of students and from the Internet. Producing data mining reports from external data is a continuous and long process.
The GPA factor (see Table 1) can be handled using K-means clustering. In this article only two clusters are discussed, namely high GPA and low GPA. To select the random initial centroids, proximity is calculated using the Euclidean distance, so that after some iterations the desired centroid points are found.
Figure 5. Step-1 in determining the high & low GPA
using the K-means
Figure 6. Last Step in determining high & low GPA using
the K-means
Figures 5 and 6 above show the initial step and the final step in determining the clusters of high and low GPA. K-means clustering for high and low GPA proceeds as follows:
1. Select K points as the initial GPA centroids
2. Repeat
3. Form K clusters by assigning all points to the closest
GPA centroid
4. Recompute the GPA centroid of each cluster
5. Until the GPA centroids do not change
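As a minimal illustration of these steps, the following sketch clusters a list of GPA values into two groups (high and low) with K = 2, using the absolute difference as the one-dimensional Euclidean distance; the GPA values and names are made up for the example.

```python
import random

def kmeans_gpa(gpas, k=2, max_iter=100):
    """Cluster 1-D GPA values into k groups (e.g. high GPA / low GPA)."""
    centroids = random.sample(gpas, k)           # 1. select K points as initial centroids
    for _ in range(max_iter):                    # 2. repeat
        clusters = [[] for _ in range(k)]
        for g in gpas:                           # 3. assign each point to the closest centroid
            nearest = min(range(k), key=lambda i: abs(g - centroids[i]))
            clusters[nearest].append(g)
        new_centroids = [sum(c) / len(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]   # 4. recompute centroids
        if new_centroids == centroids:           # 5. stop when centroids do not change
            break
        centroids = new_centroids
    return centroids, clusters

# Example with made-up GPA values; the higher centroid marks the "high GPA" cluster.
centroids, clusters = kmeans_gpa([3.8, 3.6, 3.9, 2.1, 2.4, 2.0, 3.5, 2.2])
print(centroids, clusters)
```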
The Absensi Online (AO) system at Raharja University can be described using the Unified Modeling Language (UML) as follows:
Figure 7. Use Case Diagram Absensi Online (AO)
Figure 9. Display Screen Check In Lecturers
From the image above, it can be seen that there is one system covering all Absensi Online (AO) activities at Raharja University, two actors performing the activities (lecturer and lecturer admin), and 6 use cases used by the actors.
After lecturers check in by pressing their thumb on the screen provided (shown above), they can record their presence in the class. Next is the display screen for lecturers to record their presence in the class:
Figure 10. Display screen Lecturers To Present
Absent in Class
Figure 8. Code Generation Syntax Check AO
When the lecturer clicks on the highlighted section at the top, the second step of recording the lecturer's presence in the class is finished. The next step is for the lecturer to record the students' attendance on the same page or screen as above.
Figure 8 above shows the code generation syntax check of the Absensi Online (AO) use case diagram. After validation testing, there is no error in the process; this is proven by the absence of any error messages in the Message pane.
Absensi Online at Raharja University serves lecturers and students, and the attendance sequence begins with the lecturer, i.e., when the lecturer uses the Check In touchscreen.
Figure 11. Display screen Absent Students
After the lecturer records his or her presence, the link for recording the lecturer's attendance automatically disappears or changes into information giving the hour the lecturer attended the class. In addition, a link is shown for the lecturer to record the students' attendance. When a student has not been recorded, the link reads "Absent", but once the student has been recorded the link changes to show the hour the student attended the class, as appears in Figure 11 above. Students can only be recorded from the computer on which the lecturer concerned checked in. It is at this point that indications of deception (breach) often appear, as described previously.
Figure 11 above is one example of an indication of cheating in Absensi Online (AO) attendance. It can be seen that at meeting 1 the lecturer was recorded "present" in the classroom at 14:02 and some students attended class at the same time. At meeting 2 the lecturer was recorded "present" in the classroom at 14:08 and a few students were recorded as attending at the same time, 14:12, but this time information does not match the record of the lecturer's presence in the classroom. At meeting 3 the lecturer was present in the classroom at 14:09 and the students' presence information differs between students, unlike at the previous meetings.
Meeting 3 in Figure 11 is normal and shows no indication of cheating. However, meetings 1 and 2 can be taken as indications of cheating in these lectures. When a lecture is in progress and things such as those at meetings 1 and 2 in Figure 11 occur, the lecturer admin computer for Absensi Online (AO) in that classroom will look like the figure below.
VI. Program Listing
To apply behavior detection with the data mining concept in Absensi Online (AO), an ASP file can be used. Active Server Pages (ASP) is a server-side scripting technology, which means that the entire application process is carried out on the server. An ASP file is actually a set of ASP scripts combined with HTML; it consists of several interconnected structures that together form a function returning a result. The structure of an ASP file consists of text, HTML tags and ASP scripts [16].
Figure 13. ASP Scripts behavior detection
in Absensi Online
The ASP script snippet above is the script used for behavior detection in Absensi Online (AO). The snippet reads the attendance data recorded in Absensi Online (AO). If an anomaly is detected, the system immediately provides a report or warning as shown in Figure 12.
Figure 12. Views detection in attendance Behavior Online
If the system detects an anomaly, the data are marked with a red circle as shown in Figure 12 above. After the lecturer admin sees it, the admin can either audit the class directly or have the data recorded by the system as a reference in case such things happen again. With behavior detection, the system can detect and report anomalies so that system performance becomes more optimal.
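Since the actual ASP listing appears only as a figure, the following is a minimal sketch in Python (not the paper's ASP code) of the kind of rule such a script can apply: flag a meeting as suspect when every student shares one identical check-in time, as in meetings 1 and 2 above. The record layout, names and values are assumptions made for illustration.

```python
from collections import defaultdict

# Hypothetical attendance rows (assumed layout; values are illustrative only).
rows = [
    # (meeting, person, check_in_time)
    (1, "lecturer", "14:02"), (1, "student_A", "14:02"), (1, "student_B", "14:02"),
    (2, "lecturer", "14:08"), (2, "student_A", "14:12"), (2, "student_B", "14:12"),
    (3, "lecturer", "14:09"), (3, "student_A", "14:21"), (3, "student_B", "14:35"),
]

def flag_suspect_meetings(rows):
    """Assumed rule: a meeting is suspect when all students share one identical
    check-in time (students normally arrive at slightly different times)."""
    students = defaultdict(list)
    for meeting, person, t in rows:
        if person != "lecturer":
            students[meeting].append(t)
    return [m for m, times in students.items()
            if len(times) > 1 and len(set(times)) == 1]

print(flag_suspect_meetings(rows))   # -> [1, 2]: report these meetings for auditing
```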
VII. Conclusion
A behavior detection system in Absensi Online (AO) can minimize or even eliminate cheating (breaches) in the recording of student attendance. From the existing critical reviews, it is known that research focusing on behavior detection has not yet been done; most research has investigated anomaly detection. Nevertheless, in outline anomaly detection can be useful for behavior detection, since behavior detection can be done using the data mining concept. K-means clustering can be used to segment high GPA and low GPA groups, which can then be used as a reference for detecting anomalies.
Behavior detection with the data mining concept, as a form of control for the Absensi Online (AO) information system, can thus be carried out well: the accuracy and integrity of the data can be guaranteed and the system can continue to perpetuate its life. In addition, this new concept still supports the main goal of a web-based information system, namely displaying information to all users anytime and anywhere without access restrictions.
References
[1] Rahardja, Untung dkk. 2007. SIS: Otomatisasi Pelayanan Akademik Kepada Mahasiswa Studi Kasus di Perguruan Tinggi Raharja. Jurnal Cyber Raharja. Edisi 7 Th IV/April. Tangerang: Perguruan Tinggi Raharja.
[2] Rahardja, Untung dkk. 2008. Presentasi Peluncuran GO (Green Orchestra). Tangerang: Perguruan Tinggi Raharja.
[3] Agustin, Lilik. 2008. Design dan Implementasi Integram Pada Perguruan Tinggi Raharja. Skripsi. Jurusan Sistem Informasi. Tangerang: STMIK Raharja.
[4] Rahardja, Untung dkk. 2007. Raharja Multimedia Edutainment Menunjang Proses Belajar Mengajar di Perguruan Tinggi Raharja. Jurnal Cyber Raharja. Edisi 7 Th IV/April. Tangerang: Perguruan Tinggi Raharja.
[5] Rahardja, Untung dkk. 2008. Meng-Capture EQ Melalui Daftar Nilai Indeks Mutu Komulatif (IMK) Berbasis ICT. Jurnal CCIT Vol. 1 No. 3, Mei. Tangerang: Perguruan Tinggi Raharja.
[6] Rahardja, Untung dkk. 2007. Absensi Online (AO). Jurnal Cyber Raharja. Edisi 7 Th IV/April. Tangerang: Perguruan Tinggi Raharja.
[7] Hartono, Jogiyanto. 2000. Pengenalan Komputer: Dasar Ilmu Komputer, Pemrograman, Sistem Informasi dan Intellegensia Buatan. Edisi Ke Tiga. Yogyakarta: Andi.
[8] Guritno, Suryo dkk. 2008. Access Restriction sebagai Bentuk Pengamanan dengan Metode IP Token. Jurnal CCIT Vol. 1 No. 3, Mei. Tangerang: Perguruan Tinggi Raharja.
[9] Han, Jiawei dkk. 2000. DBMiner: A system for data mining in relational databases and data warehouses. Data Mining Research Group, Intelligent Database Systems Research Laboratory, School of Computing Science, Simon Fraser University: British Columbia.
[10] Chung, Christina. 1998. Applying Data Mining to Data Security. University of California: Davis.
[11] Jain, Anil K dkk. 1999. Statistical Pattern Recognition: A Review. IEEE Trans. Department of Computer Science and Engineering, Michigan State University: USA.
[12] Lee, Wenke, Stolfo, Salvatore J., Mok, Kui W. 1998. Mining Audit Data to Build Intrusion Detection Models. Computer Science Department, Columbia University: New York.
[13] Manganaris, Stefanos, Christensen, Marvin, Zerkle, Dan, Hermiz, Keith. 1999. A Data Mining Analysis of RTID Alarms. International Business Machines Corporation (IBM): USA.
[14] Hossain, Mahmood, Bridges, Susan M. 2001. A Framework for an Adaptive Intrusion Detection System With Data Mining. Department of Computer Science, Mississippi State University: USA.
[15] Han, Jiawei dkk. 2000. DBMiner: A system for data mining in relational databases and data warehouses. Data Mining Research Group, Intelligent Database Systems Research Laboratory, School of Computing Science, Simon Fraser University: British Columbia.
[16] Bowo, Eko Widodo. 2005. Membuat Web dengan ASP dan Microsoft Access. Yogyakarta: Andi.
Paper
Saturday, August 8, 2009
14:20 - 14:40
Room L-211
PERFORMANCE INDICATOR MEASUREMENT AND MATURITY
LEVEL ASSESSMENT IN AN IT AUDIT PROCESS
USING COBIT FRAMEWORK 4.1
Sarwosri, Djiwandou Agung Sudiyono Putro
Department of Informatics, Faculty of Information Technology
Institute of Technology Sepuluh Nopember
Gedung Teknik Informatika Lt. 2, Jl. Raya ITS Surabaya
email : [email protected], [email protected]
ABSTRACT
In order to provide assurance about the value of Information Technology (IT), enterprises need to recognize and manage the benefits and associated risks related to business processes and IT. Failures in managing IT can lead to problems in achieving enterprise objectives, as IT is now understood as a key element of enterprise assets.
Control Objectives for Information and Related Technologies (COBIT®) provides good practices across a domain and
process framework and presents activities in a manageable and logical structure. It is known as a framework for ensuring that
the enterprise’s IT supports the business objectives by providing controls to measure IT effectiveness and efficiency.
COBIT also provides tools for obtaining an objective view of an enterprise’s performance level namely Performance
Indicator Measurement and Maturity Level Assessment.
In this paper, we propose the implementation of an IT Audit process using COBIT Framework 4.1, the latest COBIT version. Due to the large scope of COBIT Framework 4.1, we limit ourselves to the use of COBIT's measurement tools to determine and monitor the appropriate IT controls and performance in the enterprise.
Keywords : COBIT Framework 4.1, IT Audit, Performance Indicator Measurement, Maturity Level Assessment
1. INTRODUCTION
In 1998, the monetary scandal involving Enron and Arthur Andersen LLP, the IT failure at AT&T, and funding problems in internet and e-commerce development in the USA caused great growth in the IT Audit field [3].
Given such cases, IT Audit really matters for ensuring the availability and success of IT projects and services inside the company. A framework is an international standard that ensures the IT Audit process can be implemented appropriately and meets the requirements; examples of IT Audit frameworks are COBIT, ITIL and ISO.
2. COBIT FRAMEWORK
The COBIT framework used in this paper can be modeled as shown below.
The process focus of COBIT, illustrated by Figure 1, subdivides IT into four domains and 34 processes in line with the responsibility areas of plan, build, run and monitor, providing an end-to-end view of IT [1]. COBIT has the characteristics of being business-focused, process-oriented, controls-based, and measurement-driven:
2.1 Business-Focused
COBIT's focus on business orientation provides comprehensive guidance for management and business process owners.
To satisfy business objectives, information needs to conform to certain control criteria. COBIT's information criteria are defined as follows [2]:
· Effectiveness
· Efficiency
· Confidentiality
· Integrity
· Availability
· Compliance
· Reliability
Figure 1. COBIT Framework Model [1]
Every enterprise uses IT to enable business initiatives,
and these can be represented as Business Goals for IT.
COBIT also identifies IT Resources as follows:
· Applications
· Information
· Infrastructure
· People
2.2 Process-Oriented
COBIT defines IT Activities in a generic process model
within four domains. These domains, as shown in fig.1,
are called:
· Plan and Organise (PO) – Provides direction to
solution (AI) and service delivery (DS).
· Acquire and Implement (AI) – Provides the
solutions and passes them to be turned into
services.
· Deliver and Support (DS) – Receives the
solutions and makes them usable for end users.
· Monitor and Evaluate (ME) – Monitors all
processes to ensure that direction provided is
followed.
2.3 Controls-Based
COBIT defines control objectives for all 34 processes
as well as overarching process and application controls.
Each of COBIT’s IT Process has a process description
and a number of control objectives. The control objectives
are identified by a two-character domain reference (PO,
AI, DS, and ME) plus a process number and a control
objective number.
2.4 Measurement-Driven
COBIT deals with the issue of obtaining an objective view of an enterprise's own performance level, to measure where the enterprise is and where improvement is required.
COBIT provides measurement tools as follows:
· Maturity Models to enable benchmarking and
identification of necessary capability
improvement.
81
· Performance goals and metrics, demonstrating
how processes meet business and IT Goals
are used for measuring internal process
performance based on Balanced Scorecard
principles.
· Activity Goals for enabling effective process
performance.
The measurement tools implemented in this paper are Performance Indicator Measurement and Maturity Level
Assessment; the implementation scenario of the COBIT measurement tools is identified by the model shown in Figure 2.
3. PERFORMANCE INDICATOR MEASUREMENT
Goals and metrics are defined in COBIT at three levels [1]:
· IT goals and metrics, which define what the business expects from IT and how to measure it.
· Process goals and metrics, which define what the IT process must deliver to support IT's objectives and how to measure it.
· Activity goals and metrics, which establish what needs to happen inside the process to achieve the required performance and how to measure it.
Goals are defined top-down, in that a business goal will determine a number of IT goals to support it. Figure 3 provides examples of goal relationships.
Figure 2. COBIT Measurement Scenario
Figure 3. Example of COBIT Goal Relationships
At first, we need to define the business goals of an enterprise. COBIT provides tables linking goals and processes,
starting with Business Goals to IT Goals (Figure 4), then IT Goals to IT Processes (Figure 5). The enterprise's
representative, called the Auditee, will be asked to define which business goals conform with the enterprise's business
goals under Balanced Scorecard principles. The Auditor, the person who conducts the IT Audit, gives an analysis of the
audit results and recommendations along the IT Audit process, and will then conduct performance measurement and
maturity model measurement based on the IT Goals and IT Processes obtained by following Figures 4 and 5.
Figure 4. COBIT Table Linking Business Goals to IT Goals [1]
Figure 5. COBIT Table Linking IT Goals to IT Processes
The terms KGI and KPI, used in previous versions of COBIT, have been replaced with two types of metrics:
· Outcome Measures, previously Key Goal Indicators (KGIs), indicate whether the goals have been met. These can be measured only after the fact and are therefore called 'lag indicators'.
· Performance Indicators, previously Key Performance Indicators (KPIs), indicate whether goals are likely to be met. They can be measured before the outcome is clear and are therefore called 'lead indicators'.
Figure 6 provides possible outcome measures for the example in Figure 3.
Figure 6. Possible Outcome Measures for the Example in Figure 3.
Figure 7. Possible Performance Drivers for the Example in Figure 3.
Outcome Measures and Performance Drivers can be assessed as a process in IT Audit as shown in Figure 8.
The Auditor needs to analyse the Outcome Measures and Performance Indicators of the Auditee's enterprise
and then give the Score and Importance appropriate to the enterprise's achievement for each COBIT statement.
The Measure field is filled with the Auditor's own opinion regarding Score and Importance, and also with facts
obtained in the field study.
4. MATURITY LEVEL ASSESSMENT
COBIT's Maturity Models respond to three needs of IT management for benchmarking and self-assessment tools, in
response to the need to know what to do in an efficient manner:
1. A relative measure of where the enterprise is.
2. A manner to efficiently decide where to go.
3. A tool for measuring progress against the goal.
Figure 8. COBIT Performance Measurement Table
Maturity modeling for management and control over IT
processes is based on a method of evaluating the
organization, so it can be rated from a maturity level of
non-existent (0) to optimized (5) as shown in Figure 9.
Figure 9. Graphic Representation of Maturity Models
Using the Maturity Models developed for each of
COBIT’s 34 IT processes, management can identify:
· The actual performance of the enterprise – Where
the enterprise is today.
· The current status of the industry – the
comparison.
· The enterprise’s target for improvement – Where
the enterprise wants to be.
· The required growth path between ‘as-is’ and
‘to-be’.
Figure 10. COBIT Maturity Level Assessment Table
Figure 10 provides the Maturity Level Assessment Table used to assess the enterprise's IT value.
To obtain the Value of each statement's answer, we multiply Weight and Answer:
V = W × A    (1)
where V is the value score of a maturity statement, W is the weight of each maturity statement, and A is the answer's value for each maturity statement.
The Answer is categorized into four values:
1. Not at all, with value 0.00
2. A little, with value 0.33
3. To some degree, with value 0.66
4. Completely, with value 1.00
Figure 11. COBIT Maturity Level Calculation
In order to obtain the Maturity Level score, the Auditor first needs to calculate the Compliance of each maturity level.
Compliance is obtained by totalling each maturity level statement's value and dividing it by the total weight of the statements:
C = ΣV / ΣW    (2)
where C is Compliance, V is the value of each maturity statement's answer, and W is the weight of each maturity statement.
Normalize is obtained by dividing each level's Compliance by the total Compliance over all levels of an IT process:
N = C / ΣC    (3)
where N is Normalize and C is Compliance for each maturity statement.
Contribution is the multiplication of Level by Normalize for each level:
Co = L × N    (4)
where Co is Contribution, L is Level, and N is Normalize for each maturity level statement.
With the total Contribution over all levels of an IT process, the Auditor obtains the enterprise's Maturity Level score:
ML = ΣCo    (5)
where ML is Maturity Level and Co is Contribution for each maturity level statement.
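To make the calculation concrete, the following Python sketch (not from the paper; the weights and answers are hypothetical auditor inputs) applies formulas (1) to (5) to one IT process:

# Illustrative sketch: computing a COBIT maturity level score from
# hypothetical weights and answers, following formulas (1)-(5).
# Answer values: Not at all = 0.00, A little = 0.33,
# To some degree = 0.66, Completely = 1.00.

def maturity_level(levels):
    """levels: dict mapping maturity level L (0..5) to a list of
    (weight, answer) pairs for that level's statements."""
    compliance = {}
    for level, statements in levels.items():
        values = [w * a for w, a in statements]                          # V = W x A        (1)
        compliance[level] = sum(values) / sum(w for w, _ in statements)  # C = sum(V)/sum(W) (2)

    total_compliance = sum(compliance.values())
    maturity = 0.0
    for level, c in compliance.items():
        normalized = c / total_compliance     # N = C / sum(C)   (3)
        contribution = level * normalized     # Co = L x N       (4)
        maturity += contribution              # ML = sum(Co)     (5)
    return maturity

# Hypothetical auditor answers for one IT process (weight, answer value).
example = {
    1: [(1.0, 1.00), (1.0, 0.66)],
    2: [(1.0, 0.66), (1.0, 0.66)],
    3: [(1.0, 0.33), (1.0, 0.66)],
    4: [(1.0, 0.33), (1.0, 0.00)],
    5: [(1.0, 0.00), (1.0, 0.00)],
}
print(round(maturity_level(example), 2))  # a score between 0 and 5

In this hypothetical run the score falls around level 2, which would then be compared against the enterprise's target maturity level.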
5. CONCLUSION
1. The enterprise becomes aware of its current IT performance as measured by the COBIT Performance Indicator Measurement.
2. The COBIT Framework can be used as the source of recommendations for increasing the enterprise's IT performance.
3. With a view of the Maturity Level Assessment result, the enterprise is able to specify its IT strategy in alignment with its Business Strategy.
4. Escalation of the enterprise's maturity level is enabled by following up the recommendations made using the COBIT Framework.
REFERENCES
[1] IT Governance Institute. 2007. COBIT 4.1.
[2] Indrajit, Richardus Eko. Kajian Strategis Analisa Cost-Benefit Investasi Teknologi Informasi.
[3] NationMaster.com. 2008. History of Information Technology Auditing. <URL: http://www.nationmaster.com/encyclopedia/History-of-information-technology-auditing.htm#Major_Events>
Paper
Saturday, August 8, 2009
14:45 - 15:05
Room L-210
AN EXPERIMENTAL STUDY ON BANK PERFORMANCE PREDICTION
BASED ON FINANCIAL REPORTS
Chastine Fatichah, Nurina Indah Kemalasari
Informatics Department, Faculty of Information Technology
Institut Teknologi Sepuluh Nopember, Kampus ITS Surabaya
Email: [email protected], [email protected]
ABSTRACT
This paper presents an experimental study on bank performance prediction based on financial reports. This research uses
the Support Vector Machine (SVM), Probabilistic Neural Network (PNN) and Radial Basis Function Neural Network
(RBFN) methods to experiment with bank performance prediction. To improve the prediction accuracy of both neural
network methods, this research uses Principal Component Analysis (PCA) to extract the best features. The research works
on the financial reports and predicted financial variables of several banks registered with Bank Indonesia.
The experimental results show that the accuracy rates of bank performance prediction of the PCA-PNN and PCA-RBFN
methods are higher than that of the SVM method for the Bank Persero, Bank Non Devisa and Bank Asing categories. But the accuracy
rate of the SVM method is higher than those of the PCA-PNN and PCA-RBFN methods for the Bank Pembangunan Daerah and Bank Devisa
categories. The accuracy rate of the PCA-PNN method for all bank categories is comparable to that of the PCA-RBFN method.
Keywords: bank performance prediction, support vector machine, principal component analysis, probabilistic neural
network, radial basis function neural network
1. INTRODUCTION
The prediction of bank financial performance has been an extensively researched area of late. Creditors, auditors,
stockholders and senior management are all interested in bankruptcy prediction because it affects all of them alike [7].
When shareholders intend to invest in a bank, they must first see whether the performance of the bank is good or not [2].
In some cases the performance of a bank can also be accurately predicted through economic and financial ratios:
current assets / total assets, (current assets - cash) / total assets, current assets / loans, reserve / loans,
net income / total assets, net income / total capital share, net income / loans, cost of sales / sales, and cash flow / loans.
Some research [3] uses a neural network approach for performance prediction; the neural network is considered an
alternative predictor whose total error values are more or less the same for Error Type 1 and Error Type 2.
The type of error is determined from the predicted performance of the bank. Error Type 1 is the number of
"actually poor performance banks" predicted as "adequate performance banks", expressed as a percentage of the
total poor performance banks, and Error Type 2 is the number of "actually adequate performance banks" predicted as
"poor performance banks", expressed as a percentage of the total adequate performance banks.
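For illustration only (the labels below are hypothetical and not data from this study), the two error types and the accuracy can be computed from actual and predicted labels as follows:

# Illustrative sketch: Error Type 1, Error Type 2 and accuracy from
# hypothetical actual vs. predicted bank performance labels.

def error_rates(actual, predicted):
    pairs = list(zip(actual, predicted))
    poor = [(a, p) for a, p in pairs if a == "poor"]
    adequate = [(a, p) for a, p in pairs if a == "adequate"]

    # Error Type 1: actually poor banks predicted as adequate,
    # as a percentage of all actually poor banks.
    type1 = sum(p == "adequate" for _, p in poor) / len(poor) * 100
    # Error Type 2: actually adequate banks predicted as poor,
    # as a percentage of all actually adequate banks.
    type2 = sum(p == "poor" for _, p in adequate) / len(adequate) * 100
    accuracy = sum(a == p for a, p in pairs) / len(pairs) * 100
    return type1, type2, accuracy

actual    = ["poor", "poor", "adequate", "adequate", "adequate", "poor"]
predicted = ["poor", "adequate", "adequate", "poor", "adequate", "poor"]
print(error_rates(actual, predicted))  # (33.3..., 33.3..., 66.6...)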
Ryu and Yue [8] introduced isotonic separation to predict the failure of financial firms and compared it with
MLFF-BP, logistic regression, and probit methods. Using data from a financial company, the failure of small
community banks and of regional or big banks was predicted using MLFF-BP, MDA, and professional assessment.
For community banks and regional banks, the researchers observed that the neural network model outperformed the
MDA model, especially on Type I error. Prediction for the small community banks was less accurate than for the
regional banks.
The objective of this study is primarily to experiment with several soft computing methods and to analyse the
Error Type 1 and Error Type 2 of bank performance prediction.
The rest of the paper is organized as follows: Section 2 describes the methods of bank performance prediction,
namely the SVM, PNN, and RBFN methods; Section 3 describes the experimental results and performance evaluation;
and Section 4 presents the conclusions of this research.
2. METHODS OF BANK PERFORMANCE PREDICTION
This section presents the methods of bank performance prediction used in this research: Support Vector Machine (SVM),
Probabilistic Neural Network (PNN) and Radial Basis Function Neural Network (RBFN). To improve the prediction
accuracy of both neural network methods, this research uses Principal Component Analysis (PCA) to reduce the
dimension of the input space. Each of the constituent methods is briefly discussed below.
2.1 Support Vector Machine
Support Vector Machine (SVM) [1] is a method for obtaining the optimal boundary between two sets in a vector space
independently of the probabilistic distributions of the training vectors. Its fundamental idea is locating the boundary
that is most distant from the vectors nearest to the boundary in both of the sets. Note that the optimal boundary should
classify not only the training vectors, but also unknown vectors in each set. Although the distribution of each set is
unknown, this boundary is expected to be the optimal classification of the sets, since it is the most isolated one from
both of the sets. The training vectors closest to the boundary are called support vectors. Figure 1 illustrates the optimal
boundary obtained by the SVM method.
Figure 1. Optimal Boundary by SVM Method
The optimal boundary is computed as a decision surface of the form:
f(x) = sgn(g(x))    (1)
where
g(x) = Σi αi yi K(x, xi*) + b    (2)
In Equation 2, K is one of many possible kernel functions, yi ∈ {−1, 1} is the class label of the data point xi*, and
{xi*}, i = 1, ..., l*, is a subset of the training data set. The xi* are called support vectors and are the points from the
data set that fall closest to the separating hyperplane. Finally, the coefficients αi and b are determined by solving a
large-scale quadratic programming problem. The kernel function K used in the component classifier is a quadratic
polynomial and has the form shown below:
K(x, xi*) = (x · xi* + 1)²    (3)
f(x) ∈ {−1, 1} in Equation (1) is referred to as the binary class of the data point x which is being classified by the SVM.
Values of 1 and −1 refer to the classes of the positive and the negative training examples respectively. As Equation (1)
shows, the binary class of a data point is the sign of the raw output g(x) of the SVM classifier. The raw output of an
SVM classifier is the distance of a data point from the decision hyperplane. In general, the greater the magnitude of the
raw output, the more likely a classified data point belongs to the binary class it is grouped into by the SVM classifier.
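The following Python sketch illustrates the decision rule of equations (1) to (3) with the quadratic kernel; the support vectors, labels and coefficients are hypothetical, since in practice they result from solving the quadratic programming problem:

# Illustrative sketch of the SVM decision rule in equations (1)-(3),
# with hypothetical support vectors, labels and coefficients (alpha, b).
import numpy as np

def quadratic_kernel(x, xi):
    # K(x, xi) = (x . xi + 1)^2, equation (3)
    return (np.dot(x, xi) + 1.0) ** 2

def svm_raw_output(x, support_vectors, labels, alphas, b):
    # g(x) = sum_i alpha_i * y_i * K(x, x_i) + b, equation (2)
    return sum(a * y * quadratic_kernel(x, sv)
               for sv, y, a in zip(support_vectors, labels, alphas)) + b

def svm_classify(x, support_vectors, labels, alphas, b):
    # f(x) = sgn(g(x)), equation (1): returns +1 or -1
    return 1 if svm_raw_output(x, support_vectors, labels, alphas, b) >= 0 else -1

# Hypothetical trained parameters (two support vectors in a 2-D feature space).
support_vectors = [np.array([0.2, 0.8]), np.array([0.9, 0.1])]
labels = [+1, -1]
alphas = [0.7, 0.7]
b = 0.0

print(svm_classify(np.array([0.3, 0.7]), support_vectors, labels, alphas, b))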
2.2 Principal Component Analysis
Principal component analysis (PCA) [5] has been
called one of the most valuable results from applied linear
algebra. PCA is used abundantly in all forms of analysis, from neuroscience to computer graphics, because it is a
simple, non-parametric method of extracting relevant information from confusing data sets. With minimal
additional effort PCA provides a roadmap for how to reduce a complex data set to a lower dimension to reveal the
sometimes hidden, simplified structures that often underlie it.
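As a rough illustration of how PCA is used here, the sketch below (with random data standing in for the 54 financial-report variables, an assumption for illustration only) projects the dataset onto its leading principal components via an eigendecomposition of the covariance matrix:

# Illustrative sketch: PCA via eigendecomposition of the covariance matrix,
# keeping the leading components as the reduced feature set.
import numpy as np

def pca_reduce(X, n_components):
    X_centered = X - X.mean(axis=0)                 # centre each variable
    cov = np.cov(X_centered, rowvar=False)          # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]               # sort descending by variance
    components = eigvecs[:, order[:n_components]]   # principal directions
    return X_centered @ components                  # projected data

X = np.random.rand(660, 54)       # hypothetical training set: 660 banks x 54 variables
X_reduced = pca_reduce(X, n_components=10)
print(X_reduced.shape)            # (660, 10)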
2.3 Probabilistic Neural Network
The PNN [4] employs Bayesian decision-making
theory based on an estimate of the probability density in
the data space and Parzen estimates to make predictions.
PNN requires one-pass training and hence learning is very fast. However, the PNN works for problems with integer
outputs and hence can be used for classification problems. PNN does not get stuck in local minima of the error surface.
The PNN as implemented here has 54 neurons in the input
layer corresponding to the 54 input variables in the
dataset. The pattern layer stores all the training patterns
one in each pattern neuron. The summation layer has two
neurons with one neuron catering to the numerator and
another to the denominator of the non-parametric
regression estimate of Parzen. Finally, the output layer
has one neuron indicating the class code of the pattern.
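A minimal sketch of this Parzen-based decision rule is shown below for a binary case; the training patterns, class labels and smoothing parameter sigma are hypothetical and not taken from the paper:

# Illustrative sketch of a Parzen-window PNN classifier (binary case).
import numpy as np

def pnn_predict(x, patterns, classes, sigma=0.5):
    # Pattern layer: one Gaussian kernel per stored training pattern.
    kernels = np.exp(-np.sum((patterns - x) ** 2, axis=1) / (2 * sigma ** 2))
    # Summation layer: accumulate kernel activations per class.
    scores = {c: kernels[classes == c].sum() for c in np.unique(classes)}
    # Output layer: the class with the largest estimated density wins.
    return max(scores, key=scores.get)

patterns = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
classes = np.array([0, 0, 1, 1])   # 0 = poor, 1 = adequate (hypothetical)
print(pnn_predict(np.array([0.85, 0.85]), patterns, classes))  # 1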
2.4 Radial Basis Function
RBFN [4], another member of the feed-forward
neural networks, has both unsupervised and supervised
training phases. In the unsupervised phase, the input data
are clustered and cluster details are sent to the hidden
neurons, where radial basis functions of the inputs are
computed by making use of the center and the standard
deviation of the clusters. The radial basis functions are
similar to kernel functions in kernel regression. The
activation function or the kernel function can assume a
variety of functions, though Gaussian radial basis
functions are the most commonly used. The learning
between hidden layer and output layer is of supervised
learning type where ordinary least squares technique is
used. As a consequence, the weights of the connections
between the kernel layer (also called hidden layer) and
the output layer are determined. Thus, it comprises a hybrid of unsupervised and supervised learning. RBFN can
solve many complex prediction problems with quite satisfying performance.
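The sketch below illustrates this two-phase training in Python: an unsupervised k-means-style clustering for the hidden layer, followed by ordinary least squares for the output weights. The data, number of centres and spread heuristic are assumptions for illustration only:

# Illustrative sketch of an RBFN: unsupervised phase (k-means-style centres
# and a spread heuristic) plus supervised phase (least squares output weights).
import numpy as np

def rbf_layer(X, centers, sigma):
    d2 = ((X[:, None] - centers) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))             # Gaussian radial basis

def train_rbfn(X, y, n_centers=4, iterations=20):
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    for _ in range(iterations):                        # simple k-means clustering
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        for k in range(n_centers):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    sigma = np.mean([np.linalg.norm(c1 - c2)
                     for c1 in centers for c2 in centers]) + 1e-9
    H = rbf_layer(X, centers, sigma)
    weights, *_ = np.linalg.lstsq(H, y, rcond=None)    # supervised phase
    return centers, sigma, weights

def predict_rbfn(X, centers, sigma, weights):
    return rbf_layer(X, centers, sigma) @ weights

X = np.random.rand(100, 5)                             # hypothetical features
y = (X.sum(axis=1) > 2.5).astype(float)                # hypothetical labels
centers, sigma, w = train_rbfn(X, y)
print(predict_rbfn(X[:3], centers, sigma, w))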
3. EXPERIMENTAL RESULTS AND PERFORMANCE ANALYSIS
The first process of this system is to collect the financial reports of the banks registered with Bank Indonesia,
about 110 banks. Data are taken over a period of 1 year, so the amount of data used is 1320 records, divided into
660 records for training and 660 records for testing. The bank categories used to evaluate bank performance
prediction are Bank Pembangunan Daerah, Bank Persero, Bank Devisa, Bank Non Devisa and Bank Asing.
Data are taken based on the variables from the
financial report below:
1. Earning asset
2. Total loans
3. Core deposit
4. Non-interest
5. Interest income
6. Gain(losses)
7. Non-interest expense-wages and salary
8. Total interest expense
9. Provision expense
10. Off balance sheet commitment
11. Obligation and letter of credit
The second process of this system identifies the bank financial variables that will be used to classify the data.
There are two kinds of bank financial variables: good and poor variables.
The third process of this system is bank performance prediction using the SVM, PCA-PNN, and PCA-RBFN methods to
produce Error Type 1, Error Type 2 and accuracy. To evaluate Error Type 1, Error Type 2 and accuracy, each method
uses the testing data of each bank category. The PCA method is used to get the best features of the dataset before it
is classified by PNN or RBFN; based on several references, PCA is one of the best methods for reducing the attributes
of a dataset without losing important information.
The experiment results for one bank sample of the Bank Pembangunan Daerah category over 1 year (Table 1) show
that Error Type 1 of the PCA-RBFN method is the lowest and the accuracy of the SVM method is the highest.
The experiment results for one bank sample of the Bank Persero category over 1 year (Table 2) show that Error
Type 1 of the SVM method is the highest and the accuracy of the SVM method is the lowest.
The experiment results for one bank sample of the Bank Devisa category over 1 year (Table 3) show that the
accuracy of the SVM method is the highest.
The experiment results for one bank sample of the Bank Non Devisa category over 1 year (Table 4) show that Error
Type 1 of the SVM method is the highest and the accuracy of the SVM method is the lowest.
The experiment results for one bank sample of the Bank Asing category over 1 year (Table 5) show that the
accuracy of the SVM method is the lowest.
The experimental results show that the accuracy rates of bank performance prediction of the PCA-PNN and PCA-RBFN
methods are higher than that of the SVM method for the Bank Persero, Bank Non Devisa and Bank Asing categories.
But the accuracy rate of the SVM method is higher than those of the PCA-PNN and PCA-RBFN methods for the Bank
Pembangunan Daerah and Bank Devisa categories. The accuracy rate of the PCA-PNN method for all bank categories
is comparable to that of the PCA-RBFN method.
Table 1. The result of the Bank Pembangunan Daerah category
Method | Error Type 1 | Error Type 2 | Accuracy
SVM | 16.67% | 8.33% | 75%
PCA-RBFN | 0% | 33.33% | 66.67%
PCA-PNN | 25% | 8.33% | 66.67%

Table 3. The result of the two-bank sample of the Bank Devisa category
Method | Error Type 1 | Error Type 2 | Accuracy
SVM | 16.67% | 0% | 83.33%
PCA-RBFN | 0% | 25% | 75%
PCA-PNN | 16.67% | 8.33% | 75%
Table 2. The result of the Bank Persero category
Method | Error Type 1 | Error Type 2 | Accuracy
SVM | 25% | 16.67% | 58.33%
PCA-RBFN | 0% | 16.67% | 83.33%
PCA-PNN | 0% | 16.67% | 83.33%

Table 4. The result of the two-bank sample of the Bank Non Devisa category
Method | Error Type 1 | Error Type 2 | Accuracy
SVM | 16.67% | 16.67% | 66.67%
PCA-RBFN | 0% | 16.67% | 83.33%
PCA-PNN | 0% | 16.67% | 83.33%
Table 5. The result of the two-bank sample of the Bank Asing category
Method | Error Type 1 | Error Type 2 | Accuracy
SVM | 8.33% | 16.66% | 75%
PCA-RBFN | 16.66% | 0% | 83.3333%
PCA-PNN | 0% | 16.66% | 83.3333%

4. CONCLUSION
This paper presents bank performance prediction that can be used to evaluate bank performance in real cases.

5. REFERENCES
[1]. A. Asano, "Pattern information processing", Session 12 (05.1.21), 2004.
[2]. "Bank". (http://en.wikipedia.org/wiki/Bank).
[3]. K.Y. Tam, M. Kiang, Predicting bank failures: a neural network approach, Decis. Sci. 23 (1992) 926-947.
[4]. Simon Haykin, Neural Network: A Comprehensive Foundation. (2005) 240-301.
[5]. Smith, Lindsay I. 26 February 2006. "A Tutorial on Principal Component Analysis".
[6]. Support Vector Machines, available at http://www.svms.org/introduction.html; the libSVM tool can be downloaded from http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[7]. V. Ravi, H. Kurniawan, Peter Nwee Kok Thai, and P. Ravi Kumar, "Soft computing system for bank performance prediction", IEEE Journal, February 2007.
[8]. Y.U. Ryu, W.T. Yue, Firm bankruptcy prediction: experimental comparison of isotonic separation and other classification approaches, IEEE Trans. Syst., Man, Cybern.—Part A: Syst. Hum. 35 (5) (2005) 727-737.
Paper
Saturday, August 8, 2009
13:30 - 13:50
Room L-212
Critical Success Factor of E-Learning Effectiveness
in a Developing Country
Jazi Eko Istiyanto
GADJAH MADA UNIVERSITY
Yogyakarta, Republic of Indonesia
[email protected]
Untung Rahardja
STMIK RAHARJA
Raharja Enrichment Centre (REC)
Tangerang - Banten, Republic of Indonesia
[email protected]
Sri Darmayanti
STMIK RAHARJA
Raharja Enrichment Centre (REC)
Tangerang - Banten, Republic of Indonesia
[email protected]
Abstract
Much research has been done on information technology planning effectiveness in developing countries; this paper takes
a step further by examining such factors in Indonesia, which is also a developing country. The results surprisingly
show that the empirical data produced in Indonesia are not consistent with studies conducted in other developing
countries. Hence we conclude that a study of E-learning effectiveness in one developing country cannot and
should not represent all developing nations in the world. One should carefully study the regional cultures and backgrounds that eventually help determine the different IT behaviours from one country to another. For research to become effective,
hypotheses should be tested on several different countries, followed by attention to similar behaviour in the
results before drawing conclusions. Based on previous research about IT planning effectiveness in a developing
country, we perform similar research in Indonesia on the six-hypothesis research model previously applied in Kuwait
[1].
Index Terms — Information Technology Planning Effectiveness, Developing countries, Indonesia
I. Introduction
The term Critical Success Factor (CSF) is defined simply as "the thing(s) an organization MUST do to be successful" [2]. This definition is translated into the conceptual
context of our subject, which is the critical success factor of IT planning effectiveness. What are the thing(s) a developing country MUST have in order to be effective in E-learning? As a whole, good E-learning must be able to
integrate the business perspectives of the other organizational functions into an enterprise IT perspective that addresses strategic and internal technology requirements.
Research on information technology planning effectiveness has been conducted in many parts of the world, such
as North America [3], Latin America [4], Western Europe
[5], Eastern Europe [6], South East Asia [7], and the Middle
East. However, no similar research has been done on information technology planning effectiveness in Indonesia.
Observing previous research shows little correlation between similar studies conducted in different regional
parts of the world. We strongly believe that regional cultures and behaviors affect the final results in the study of
relationships between factors relating to information technology planning effectiveness.
Past research concludes that informed IT management, management involvement, and government policies contribute most to IT planning effectiveness, whereas IT
penetration, user involvement and financial resources have little effect on IT planning effectiveness [8]. We take a step
forward by conducting similar research in Indonesia, to test whether those hypotheses hold. The article is
divided into five sections. Section 1 is the abstract and the introduction. Section 2 describes the research model
and prediction of theory. Section 3 describes the comparison of results. Section 4 discusses the global implications of
our findings and the conclusion of our paper. Section 5 presents the limitations and future directions.
II. Research Model and Prediction of Theory
This section restates the research model used by Aladwani in conducting his study in Kuwait [9]. Looking at the strategic alignment model perspective [10], given a concise picture of business strategy, which is the external domain, E-learning effectiveness should originate from the internal
domains of IT. In particular, Aladwani looks at the organizational point of view affecting E-learning effectiveness.
We would like to point out that the three organizational factors affecting E-learning effectiveness are not thorough,
let alone adequate in representing organizational behaviours. Factors such as administrative structure and present
critical business processes should also be considered. Furthermore, the environmental factors, which seem to
be further extended from the organizational internal model, should include additional factors besides government
policies. Factors such as a nation's economic strength and social and cultural values certainly contribute to determining the environmental factors of E-learning effectiveness.
For the purpose of this discussion, let us focus on the six factors affecting E-learning effectiveness, which are presented by Aladwani in the form of hypotheses. The research model is systematically represented in Figure 1.
Figure 1. The Research Model
A. Information Technology Factor
Information technology penetration may be conceptualized as the extent to which information technology is available throughout the premises of the organization [11]. Previous IS research in developed countries has shown that
increasing information technology penetration into an organization leads to favorable business consequences [12].
The benefits of higher IT penetration include support for the linkage of IT-business plans and evaluation and review of the IT strategy [13]. However, only one existing
study, by Aladwani, has specifically examined the relationship between information technology penetration and planning effectiveness in developing countries.
Therefore, it is not yet clear whether information technology penetration would affect information technology planning effectiveness in the context of a developing country such as Indonesia. The aim of the present
study is to continue the previous attempt to test the following prediction:
H1: Information technology penetration is one of the critical success factors of information technology planning
effectiveness in developing countries.
Prediction of theory on H1:
Based on an unofficial survey among friends, colleagues and family, and also on personal experience, we mostly agree
that information technology penetration will have a substantial positive effect on information technology planning effectiveness. As we humbly look at the regional data,
metropolitan cities such as Jakarta, which have the highest information technology penetration, have the best technology infrastructures and services. On the contrary, suburban and countryside areas, which have low information technology penetration, are also low on technology-based
facilities. Thus we assume that organizations in the big cities do better in technology planning effectiveness compared to the countryside areas. Hence information technology penetration will have a positive effect on information
technology planning effectiveness in a developing country such as Indonesia.
B. Organizational Factor
In this study, organizational factors were measured using
informed information technology management, management involvement, user involvement, and adequacy of financial resources for information technology planning.
Informed information technology management in this study
refers to the extent to which information technology management is informed about organizational goals and plans
[14]. There is a consensus among researchers that the effectiveness of information technology planning is dependent on its integration with business objectives [15]. Weak
IT-business relation was found to be among the top key
issues facing information technology managers in Asian
countries. Informed information technology management
helps to accomplish certain information technology planning objectives such as better utilization of information
technology to meet organizational objectives. Thus, we
hypothesize that:
H2: Well informed IT management is one of the critical success factors of information technology planning
effectiveness in developing countries.
Prediction of theory on H2:
Whether this hypothesis is tested on a developed country or a developing country, informed IT management obviously has a positive effect on information technology planning effectiveness. Conversely, uninformed IT management has an adverse effect on IT planning, because IT management is directly involved in E-learning. Hence, for E-learning to be effective, IT management has to be well informed on the technology skills and
updates necessary. Thus, we fully support hypothesis H2, that a critical success factor of E-learning
effectiveness in a developing country such as Indonesia is well informed IT management.
Management involvement is defined as the extent to which management is involved in the planning process [16],
whereas user involvement is defined as the extent to which there is adequate user involvement in the planning process [17]. Both management involvement and user involvement are important ingredients for successful information
technology planning [18];[19]. Premkumar and King reported higher management involvement in strategic IS planning [20]. Gottschalk reported a positive relationship between user involvement and the effective implementation
of information technology planning [21]. Gibson asserted that management involvement is critical for successful planning for information technology transfer to Latin American countries [22]. In a study of information technology
projects in Kuwait, Aladwani emphasized the importance of involving management and users in information technology implementation activities [23]. Thus, in order to
conduct a similar survey in another developing country such as Indonesia, we hypothesize that:
H3: Management involvement is one of the critical success factors of information technology planning effectiveness in developing countries.
Prediction of theory on H3:
Management is part of the stakeholders of information technology development. It also means that they are the source
of funding to conduct operations such as E-learning. Funding becomes available after management approval of E-learning. Hence management has
to be involved in information technology planning if we want proper resources and funding to support the IT building. Hence we support the hypothesis that management
involvement is definitely one of the critical success factors of information technology planning effectiveness in
developing countries such as Indonesia.
H4: User involvement is one of the critical success factors of information technology planning effectiveness in developing countries.
Prediction of theory on H4:
Adequate financial resources matter for organizations in developing countries too [24]. If organizations want to benefit from information technology planning, then they must allocate adequate financial resources
for information technology planning [25]. Thus, we hypothesize that:
H5: Adequacy of financial resources is one of the critical success factors of information technology planning effectiveness in developing countries.
Prediction of theory on H5:
As it appears true that the extent of adequate financial availability mirrors the success of information technology planning effectiveness in developed countries [26], it is even
more critical for the financial factor to be adequately available for information technology planning to be effective in developing countries. Since E-learning in developing countries normally corresponds to extensive deployment of new IT infrastructure, it is obvious that large funding is needed to purchase the infrastructure and to hire
human resources, starting at the earliest stages such as planning activities. Hence, we fully support that adequate
financial resources are one of the critical success factors of information technology planning effectiveness in developing countries such as Indonesia.
C. Environmental Factor
The environment of the IS organization is a critical determinant of its performance [27]. One of the major dimensions of the external environment is government policy,
which is defined as the extent to which top management views government policies to be restrictive or liberal [28].
Understanding information technology management issues, such as information technology planning, in a global
setting would require examining government policies in the local country [29]. In Saudi Arabia, Abdul-Gader and
Alangari [30] reported that government practices and policies were among the top barriers to information technology assimilation. Organizations viewing government policies to be restrictive are expected to have less computerization [31];[32] and are expected to devote less attention
to strategic IT-related activities such as information technology planning [33]. As the hypothesis that
environmental factors such as government policies matter appears to be supported in Kuwait [34], we advance the same hypothesis that:
H6: Liberal government policy is one of the critical success factors of information technology planning effectiveness in developing countries.
Prediction of theory on H6:
Liberal government policies generally act as a catalyst to encourage existing organizations to extensively perform E-learning; how effectively technology
planning is performed contributes little to this purpose. We conclude that a liberal government policy does not contribute whatsoever to the effectiveness of E-learning. Hence
a liberal government policy is not one of the critical success factors of information technology planning effectiveness in developing countries such as Indonesia.
III. Comparison of Results
After conducting similar research on E-learning effectiveness in a developing country, Indonesia, as
previously conducted in another developing country, Kuwait, we produce a table comparing
the two sets of results.
Table 1. Comparison of Results between Kuwait and Indonesia
We observe that the countries studied are both developing countries. They do not, however, produce the same results. Three of the hypotheses, namely
Informed IT Management, Management Involvement and User Involvement, are supported similarly in both developing countries, whereas the three other hypotheses, namely
IT penetration, financial resources, and government policies, are supported differently in the two developing countries.
IV. Global Implications and Conclusion
The goal of this study was to explicate the nature of contextual correlates of information technology planning effectiveness in Indonesia, in comparison to Kuwait. The
present investigation contributes to the literature by being one of the first studies to provide an empirical test of
information technology planning effectiveness in the context of developing countries. Our analysis for both countries reveals mixed and different support for the proposed
relationships. In accordance with the findings of information technology planning research in Indonesia, we found
a positive relationship between IT penetration, management involvement, informed information technology management, and financial resources. On the other hand, we
found no support for a positive relationship between user
involvement and liberal government policies on determining the critical success factor of information technology
planning effectiveness.
Management involvement is found to have a positive relationship with information technology planning effectiveness. This result is somewhat expected. It confirms once more the importance of management involvement in information technology initiatives in contemporary organizations. It is not surprising, as pointed out earlier in the paper, that management involvement is more substantial in developing countries than in developed
countries as the critical success factor of E-learning effectiveness. This finding indicates that management involvement is the most important facilitator of information technology planning in the research model. It coincides with
our finding that both Kuwait and Indonesia
support the Management Involvement hypothesis (H3).
Informed information technology management is the second most significant correlate of information technology
planning effectiveness in our study (H2). As we can see in
the comparison table, both Indonesia and Kuwait
support the hypothesis. The findings show that an informed information technology manager plays an important role in enhancing information technology planning
effectiveness through improving communication with the top
management of the organization. Additionally, the findings
also show that an informed information technology manager has a greater propensity to develop work plans that
support organizational goals and activities, leading to better integration of IT-business plans.
The findings of past information technology planning research highlight the importance of informed information
technology management for organizations operating in
developed countries [35] and our finding highlights the
similar findings of the importance for organizations operating in developing countries as well.
Furthermore, we found a contrasting relationship between liberal
government policies and information technology planning
effectiveness. Extensive government liberalization has been
carried out in Indonesia, where e-government plans fully supported by ICT, Nusantara 21, SISFONAS and
BAPPENAS have been seriously conducted [36]. Yet despite such efforts, E-learning effectiveness is still very minimal. This finding is not consistent with our theorizing and
with the findings of Dasgupta and his colleagues [37];
[38]. Even though liberal government policies were ranked
the second most significant determinant of information
technology planning effectiveness in the ITPE-5 model
(IT for gaining competitive advantage), the surveys conducted
in the two countries both failed to support the hypothesis. When organizations perceive government policies to
be less restricting, they become more inclined to engage in
operations aimed at exploiting information technology
opportunities for gaining competitive advantage.
V. Limitations and Future Directions
Hofstede suggests that there are differences between developed and developing countries along these cultural
aspects. Contrasting the culture of Kuwait and the United
States gives a good example of Hofstede's scheme [39].
However, leaving aside the developed countries, are there
any differences among the developing countries along
these cultural aspects? Are there any similarities in E-learning effectiveness results in developing countries
which have regional proximity and share the same religions,
climates, and cultures? On the other hand, are there any
similarities in E-learning effectiveness results in developing countries which are distant and do not share the same
religions, climates and cultures? We will not be surprised
to see that different provinces or states in the same developing country produce different empirical results. We
suggest that further research should start with better
grouping of the data rather than grouping by developed
or developing countries, in order to come to conclusions on the factors
affecting E-learning effectiveness.
References
[1]Aladwani, A.M. (2000). IS project characteristics and
performance: A Kuwaiti illustration. Journal of Global Information Management, Vol.8 No.2 , pp. 50-57.
[2]Luftman, J. (1996). Competing in the Information Age –
Strategic Alignment in Practice, ed. by J. Luftman.
Oxford University Press.
[3]Brancheau, J.C., Janz, B.D & Wetherbe J.C. (1996). Key
Issues in Information Systems Management: 1994 –
95 SIM Delphi Results. MIS Quarterly, Vol. 20 No. 2,
pp. 225-242.
[4]Mata, F.J. & Fuerst, W.L. (1997). Information Systems
Management Issues in Central America: A Multinational and Comparative Study. Journal of Strategic
Information Systems, Vol. 6. No. 3, pp. 173-202.
[5]Gottschalk, P (1999). Global comparisons in key issues
in IS Management: Extending Initial Selection Procedures and an Empirical Study in Norway. Journal of
Global Information Technology Management, Vol. 6
No. 2, pp. 35-42.
[6]Dekleva, S.M., & Zupanzic, J. (1996). Key Issues in Information Systems Management: A Delphi Study in
Slovenia. Information & Management, Vol. 31 No. 1,
pp. 1-11.
[7]Moores, T.T (1996). Key Issues in the Management of
Information Systems: A Hong Kong Perspective. Information & Management, Vol. 30 No. 6, pp. 301- 307.
[8]Aladwani, A.M. (2001). E-laerning Effectiveness in a
Development Country. Journal of Global Information
Technology Management, Vol.4 No.3, pp. 51-65.
[9]Aladwani, A.M. (2001). E-laerning Effectiveness in a
Development Country. Journal of Global Information
Technology Management, Vol.4 No.3, pp. 51-65.
[10]Henderson, J.C. & Venkatraman, N. (1999). Strategic
Alignment: Leveraging information technology for
transforming organizations. IBM Systems Journal,
Vol.32 No.1, pp.472-484.
[11]Benbasat, I., Dexter, A.S., & Mantha, R.W. (1980). Impact of organizational maturity of information system skill needs. MIS Quarterly, Vol. 4 No. 1, pp. 21-34.
[12]Winston, E.R., & Dologite, D.G (1999). Achieving IT
Infusion: A Conceptual Model for Small Businesses.
Information Resources Management Journal, Vol. 12
No. 1, pp. 26-38.
[13]Cerpa, N. & Verner, J.M (1998). Case Study: The effect
of IS Maturity on Information Systems Strategic Planning. Information & Management, Vol. 34 No. 4, pp.
199-208.
[14]Premkumar, G. & King, W.R (1992). An Empirical Assessment of Information Systems Planning and the
Role of Information Systems in Organizations. Journal of Management Information Systems, Vol. 9 No.
2, pp. 99-125.
[15]Cerpa, N. & Verner, J.M (1998). Case Study: The effect
of IS Maturity on Information Systems Strategic Planning. Information & Management, Vol. 34 No. 4, pp.
199-208.
[16]Premkumar, G. & King, W.R (1992). An Empirical Assessment of Information Systems Planning and the
Role of Information Systems in Organizations. Journal of Management Information Systems, Vol. 9 No.
2, pp. 99-125.
[17]Premkumar, G. & King, W.R (1992). An Empirical Assessment of Information Systems Planning and the
Role of Information Systems in Organizations. Journal of Management Information Systems, Vol. 9 No.
2, pp. 99-125.
[18]Lederer, A.L. & Mendelow, A.L. (1990). The Impact of
the Environment on the Management of Information
Systems. Information Systems Research, Vol.1 No. 2,
pp. 205-222.
[19]Lederer, A.L. & Sethi, V. (1988). The Implementation of
Strategic Information Systems Planning, Decision
Sciences, Vol. 22 No. 1, pp. 104-119.
[20]Premkumar, G. & King, W.R (1992). An Empirical Assessment of Information Systems Planning and the
Role of Information Systems in Organizations. Journal of Management Information Systems, Vol. 9 No.
2, pp. 99-125.
[21]Gottschalk, P. (1999). Strategic Information Systems
Planning: The IT Strategy Implementation Matrix.
European Journal of Information Systems, Vol. 8 No.
2, pp. 107-118.
[22]Gibson, R. (1998). Informatics Diffusion in South American Developing Economies. Journal of Global Information Management, Vol. 6 No. 1, pp. 35-42.
[23]Aladwani, A.M. (2000). IS project characteristics and
performance: A Kuwaiti illustration. Journal of Global Information Management, Vol.8 No.2 , pp. 50-57.
[24]Abdul-Gader, A.H., & Alangari, K.H. (1996). Enhancing IT assimilation in Saudi public organizations:
Human resource issues. In E. Szewczak & M.
Khosrowpour (Eds.), The human side of information
technology management, Idea Group Publishing, pp.
112-141.
[25]King, W.R (1978). Strategic Planning for Management
Information Systems. MIS Quarterly, Vol. 2 No.1, pp.
26-37.
[26]Lederer, A.L. & Sethi, V. (1988). The Implementation of
Strategic Information Systems Planning, Decision
Sciences, Vol. 22 No. 1, pp. 104-119.
[27]Lederer, A.L. & Mendelow, A.L. (1990). The Impact of
the Environment on the Management of Information
Systems. Information Systems Research, Vol.1 No. 2,
pp. 205-222.
[28]Dasgupta, S., Agarwal, D., Ioannidis, A. &
Gopalakrishnan, S. (1999). Determinants of Information Technology Adoption: An extension of existing
models to firms in a Developing Country. Journal of
Global Information Management, Vol. 7 No. 3, pp. 30-40.
[29]Watad, M.M (1999). The Context of Introducing IT/IS-based Innovation into Local Government in Colombia. Journal of Global Information Management, Vol.
7 No. 1, pp. 39-45.
[30]
Abdul-Gader, A.H., & Alangari, K.H. (1996). Enhancing IT assimilation in Saudi public organizations:
Human resource issues. In E. Szewczak & M.
Khosrowpour (Eds.), The human side of information
technology management, Idea Group Publishing, pp.
112-141.
[31]Dasgupta, S., Ionnidis, A., & Agarwal, D. (2000). Information Technology Adoption in the Greek Banking
Industry. Journal of Global Information Technology
Management, Vol. 3 No. 3, pp. 32-51.
[32]Dasgupta, S., Agarwal, D., Ioannidis, A. &
Gopalakrishnan, S. (1999). Determinants of Information Technology Adoption: An extension of existing
models to firms in a Developing Country. Journal of
Global Information Management, Vol. 7 No. 3, pp. 30-40.
[33]Palvia, S.C., & Hunter, M.G (1996). Information Systems Development: A Conceptual Model and a Comparison of Methods used in Singapore, USA and
Europe. Journal of Global Information Management,
Vol. 4 No. 3, pp. 5-16.
[34]Aladwani, A.M. (2001). E-laerning Effectiveness in a
Development Country. Journal of Global Information
Technology Management, Vol.4 No.3, pp. 51-65.
[35]Premkumar, G. & King, W.R (1992). An Empirical Assessment of Information Systems Planning and the
Role of Information Systems in Organizations. Journal of Management Information Systems, Vol. 9 No.
2, pp. 99-125.
[36]Rusli, A. & Salahuddin, R. (2003). E-Government Planning in Indonesia: A Reflection against Strategic Information Communication Technology Planning Approaches. Proceedings for the Kongres Ilmu
Pengetahuan Nasional (KIPNAS) VII, September 9 –
11, 2003.
[37]Dasgupta, S., Ionnidis, A., & Agarwal, D. (2000). Information Technology Adoption in the Greek Banking
Industry. Journal of Global Information Technology
Management, Vol. 3 No. 3, pp. 32-51.
[38]Dasgupta, S., Agarwal, D., Ioannidis, A. &
Gopalakrishnan, S. (1999). Determinants of Information Technology Adoption: An extension of existing
models to firms in a Developing Country. Journal of
Global Information Management, Vol. 7 No. 3, pp. 30-40.
[39]Hofstede, G. (1980). Culture's consequences: International differences in work-related values. Beverly Hills,
CA: Sage Publications.
Paper
Saturday, August 8, 2009
14:45 - 15:05
Room L-211
COLOR EDGE DETECTION USING THE MINIMAL SPANNING
TREE ALGORITHM AND VECTOR ORDER STATISTIC
Bilqis Amaliah, Chastine Fatichah, Diah Arianti
Informatics Department – Faculty of Technology Information
Institut Teknologi Sepuluh Nopember (ITS), Surabaya, Indonesia
[email protected], [email protected],[email protected]
Abstract
An edge detection approach based on the minimal spanning tree and vector order statistics is proposed. The minimal
spanning tree determines a ranking of the observations and identifies classes that have similarities. Vector order statistics view a
color image as a vector field and employ an aggregate distance as the distance metric. Edge detection experiments on several images show that
the result of the minimal spanning tree is smoother but requires more computational time compared to the vector order statistic.
Keywords: edge detection, Minimal Spanning Tree, Vector Order Statistics.
1. Introduction
Edge detection is a very important low-level vision operation. Despite the fact that a great number of edge detection methods have been proposed in the literature so far,
there is still a continuing research effort. Recently, the main
interest has been directed toward algorithms applied to
color [1] and multispectral images [6] , which also have the
ability to detect specific edge patterns like corners and
junctions [2]. Edges are defined, in digital image processing terms, as places where a strong intensity change occurs. Edge detection techniques are often required in different tasks in image processing and computer vision applied to areas such as remote sensing or medicine, to preserve important structural properties, image segmentation,
pattern recognition, etc [7]. Another method to edge detection is using YUV Space and Minimal Spanning Tree
[8].
Scalar order statistics have played an important role in the design of robust signal analysis techniques.
Statistical ordering can be easily applied to univariate data, but multivariate data must go through preprocessing before it can be ordered. For this Vector Order Statistic
method, R-ordering is used because, based on test results, it is the best method to use in color image processing.
In this work, a new approach for ordering and clustering
multivariate data is proposed. It is based on the minimal
spanning tree (MST) [5] and takes advantage of its unique
ability to rank multivariate data, preserve hierarchy and
facilitate clustering. The proposed method can detect all
the basic forms of edge structures and is suited for color
or multispectral images of higher dimensions.
2. Vector Order Statistic
If we employ as a distance metric the aggregate distance of xi to the set of vectors x1, x2, ..., xn, then
di = Σk d(xi, xk), k = 1, ..., n    (1)
By computing this for every vector in the set, we can order the aggregate distances d(1) ≤ d(2) ≤ ... ≤ d(n), which corresponds to the vector ordering:
X(1) ≤ X(2) ≤ ... ≤ X(n)
In this work a color image is viewed as a vector field [1], represented by a discrete vector-valued function f(x): Z² → Zm, where Z represents the set of integers. For a window W,
n is the size (number of pixels) of W, and f(xi) will be denoted as Xi. X(i) will denote the i-th ordered vector in the window according to the R-ordering method, where the
aggregate distance is used as the distance metric. Consequently, X(1) is the vector median [3] in the window W and X(n) is the outlier in the highest rank of the ordered vectors.
On this basis, the base method for edge detection is Vector Ranking (VR), whose detector output is given by equation (2).
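A small Python sketch of the R-ordering step is given below; it ranks the nine colour vectors of a hypothetical 3x3 window by their aggregate Euclidean distance, yielding the vector median X(1) and the outlier X(n). The detector output of equation (2) itself is not reproduced here:

# Illustrative sketch: R-ordering of a 3x3 window by aggregate distance.
import numpy as np

def r_ordering(window):
    vectors = window.reshape(-1, 3).astype(float)        # 9 RGB vectors
    # d_i = sum_k ||x_i - x_k||, equation (1)
    dists = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=2)
    aggregate = dists.sum(axis=1)
    order = np.argsort(aggregate)                         # ascending ranks
    return vectors[order], aggregate[order]

window = np.random.randint(0, 256, size=(3, 3, 3))        # hypothetical RGB window
ordered, agg = r_ordering(window)
vector_median, outlier = ordered[0], ordered[-1]          # X(1) and X(n)
print(vector_median, outlier)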
3. Minimal Spanning Tree
A different approach is adopted here for ranking a set of
observations from a vector valued image.
Using the MST, multivariate samples are ranked in such a
way that the structure of the group is also made clear.
Graph theory sketches the MST structure with the following definitions [5]. A graph is a structure for representing
pairwise relationships among data. It consists of a set of nodes V = {Vi}, i = 1..N, and a set of links E = {Eij}, i ≠ j, between
nodes, called edges.
Applied to the description of a vector-valued image, the image is represented by a graph G(V,E). Each node Vi corresponds
to a pixel, while the undirected edge Eij between two neighboring pixels (i, j) on the image grid has a scalar value equal to
the Euclidean distance of the corresponding vectors. A tree is a connected graph with no cycles. A spanning tree of a
(connected) weighted graph G(V,E) is a connected subgraph of G(V,E) such that (i) it contains every node of
G(V,E), and (ii) it does not contain any cycle. The MST is a spanning tree containing exactly (N − 1) edges, for which
the sum of edge weights is minimum.
In what follows, the method is restricted to color RGB images (i.e. p = 3) and to the case of a 3 × 3 rectangular sliding
window W. However, the method is more general, as it can be applied to higher dimensions and by using a window of
another size and/or shape. Given a set of N = 9 vectors corresponding to the pixels inside W, the Euclidean MST
(represented by T) is constructed in R3.
Considering the edge types we would like to detect, three
possible color distributions [3] can be usually found inside W. If no edge is present and the central pixel is located at a uniform color region of the image, the distribution is unimodal denoting a “plain” pixel type. If there is an
“edge” or “corner” point, a bimodal distribution is expected. Finally, in the case of a “junction”, pixels are expected to form three clusters. Thus, edges and corners are
straight and angular boundaries of two regions, whereas
junctions are boundaries of more than two regions.
4. System Design
In Minimal Spanning Tree method implementation, there
are 3 main process, which are the calculation of the distances between neighboring pixels, finding the Minimal
Spanning Tree route, and the deciding the output type
(plain, edge, corner, or junction). Those three processes
are done in a sliding window that the size is already defined, which is 3x3.
For that reason, the original input image matrix must be
100
added with one pixel width of pixel on each side, so that
the output of the pixels at the edge of the original image
can be calculated.
The distances between neighboring pixels are calculated using the Euclidean distance. Then the Minimal Spanning Tree
route can be determined using Kruskal's algorithm. The variable T1 is defined as a threshold parameter and T as the
total length used to determine the pixel type. The pixel type determination algorithm follows.
The process of the proposed method is summarized in the following steps:
• Construct the MST.
• Sort the derived MST edges E1, E2, ..., E8 in ascending order.
• Denote as T the total length of the MST.
• Define the threshold parameter T1 so that 0 ≤ T1 ≤ 1.
• If (E7/E8 ≥ T1), then unimodality exists (one distribution) → plain pattern.
  → R1 = mean(T) is the detector's output.
• Else if (E6/E7 ≥ T1), then bimodality exists (two distributions) → edge or corner pattern.
  → Cut the maximum edge E8 (two subtrees are generated, thus two separate clusters).
  → Find the mean value of the two clusters, C1 and C2.
  → R2 = d(C1, C2) is the detector's output.
• Else, multimodality exists (three distributions) → junction pattern.
  → Cut the two biggest edges, E7 and E8 (three subtrees are generated, thus three separate clusters).
  → Find the mean value of each of the three clusters Ci, i = 1:3.
  → Compute the distances between the three cluster centers: Rij = d(Ci, Cj), i, j = 1:3 for i ≠ j.
  → R3 = mean(Rij) is the detector's output.
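The following Python sketch illustrates the classification part of these steps for one window. It builds the Euclidean MST in R3 with Kruskal's algorithm over the nine colour vectors and applies the two threshold tests; the detector outputs R1, R2 and R3 and the treatment of image borders are omitted, so it is a sketch under stated assumptions rather than the authors' implementation:

# Illustrative sketch: classify a 3x3 RGB window as plain, edge/corner or
# junction from the sorted edges of its Euclidean MST (Kruskal's algorithm).
import numpy as np
from itertools import combinations

def mst_edges(vectors):
    # Kruskal: sort candidate edges by weight, add those that join two trees.
    parent = list(range(len(vectors)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    candidates = sorted(
        (np.linalg.norm(vectors[i] - vectors[j]), i, j)
        for i, j in combinations(range(len(vectors)), 2))
    edges = []
    for w, i, j in candidates:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append(w)
    return sorted(edges)            # E1 <= E2 <= ... <= E8

def classify_window(window, t1=1.0):
    vectors = window.reshape(-1, 3).astype(float)
    e = mst_edges(vectors)          # 8 MST edge weights, ascending
    if e[7] == 0 or e[6] / e[7] >= t1:
        return "plain"
    if e[6] == 0 or e[5] / e[6] >= t1:
        return "edge or corner"
    return "junction"

window = np.random.randint(0, 256, size=(3, 3, 3))   # hypothetical window
print(classify_window(window))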
5. Experimental Results
The images used are lena.bmp, peppers.bmp, house.bmp, and clown.bmp.
5.1 MST Method Test Using T1 Variation
The goal of this test is to prove whether changes in T1
value affect the edge detection process.
Based on the result in Figure 1, the T1 value that gave the best result is 1.0 on all sample images.
5.2 MST and VOS Comparison Test
5.2.1 Edge Quality Test
From the test result in Figure 2, it is seen that edge detection using the Minimal Spanning Tree method gave
edges that are more solid and not separated. Meanwhile, using Vector Order Statistic gave edges that are not solid
and sometimes not connected to each other.
Figure 1. Detection results using MST with threshold variation (T1 = 0.7, 0.8, 0.9, 1.0)
Figure 2. Result comparison between MST and VOS based on edge quality (input image, Minimal Spanning Tree, Vector Order Statistic)
5.2.2 Algorithm's Execution Time Test
Figure 3. Result comparison and percentage between MST and VOS based on the algorithm's execution time. The recorded execution times per sample image (input image, Minimal Spanning Tree, Vector Order Statistic) are:
• MST: 398.900481 seconds = 6.6483 minutes (100%); VOS: 14.334815 seconds = 0.2389 minutes (3.59%)
• MST: 416.205385 seconds = 6.9368 minutes (100%); VOS: 14.242192 seconds = 0.2374 minutes (3.422%)
• MST: 398.900481 seconds = 6.6483 minutes (100%); VOS: 14.334815 seconds = 0.2389 minutes (3.549%)
• MST: 326.049380 seconds = 5.434 minutes (100%); VOS: 14.242192 seconds = 0.2374 minutes (4.388%)
From the result in Figure 3, it is clearly seen that the Minimal Spanning Tree method takes a longer time to finish than the Vector Order Statistic method.
6. Result Evaluation
6.1 Best Threshold for the MST Method
The T1 value that gives the best result on all sample images is 1.0.
The correlation between T1 and the detector output is as follows:
If (E7/E8 ≥ T1) then Result = plain
else if (E6/E7 ≥ T1) then Result = edge or corner
else Result = junction
In this method, if E7 is equal to E8 then it can be concluded that the pixel under observation is plain. The pixel is considered an edge or corner if E6 is equal to E7 while E7 is not equal to E8. Finally, if E6, E7, and E8 are all unequal, the pixel can be considered a junction.
The data in Figure 3, when tested using T1 = 0.7 or T1 = 0.8, gave a plain result, whereas when tested using T1 = 0.9 or T1 = 1.0 it was detected as a junction. From the experiments it is known that increasing the T1 value increases the sensitivity of the detector, so changes in T1 can be used to obtain different detector sensitivities.
6.2 Edge Quality Comparison
Vector Order Statistic is a method that orders the sums of distances between pixels in a sliding window. Each pixel has a distance to every other pixel in the sliding window, including to itself, calculated with the Euclidean distance equation. In a uniform area the vectors are relatively close to each other and the distance values are small. The output value of this method is the average of the distances between the pixels neighboring the pixel in question.
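A simplified reading of this description, sketched in Python; this aggregate-distance stand-in is illustrative and is not the exact reduced-ordering operator of Trahanias and Venetsanopoulos:

import numpy as np

def vos_response(window):
    # For each pixel in the 3x3 window, sum its Euclidean distances to every
    # pixel in the window (including itself, which contributes zero), order
    # these aggregate distances and return their average as the output value.
    pixels = window.reshape(9, -1).astype(float)
    diff = pixels[:, None, :] - pixels[None, :, :]
    agg = np.sqrt((diff ** 2).sum(axis=2)).sum(axis=1)
    return float(np.sort(agg).mean())

flat = np.full((3, 3), 50.0)
edge = np.array([[10, 10, 200], [10, 10, 200], [10, 10, 200]], dtype=float)
print(vos_response(flat), vos_response(edge))   # small response on the flat patch, large on the edge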
The Minimal Spanning Tree is a method that can group the data in a set into distinct clusters. Because the method only considers the neighboring pixels in the sliding window, the correlation between pixels is preserved; in other words, the edge detection process puts emphasis on these relationships. The Minimal Spanning Tree method also groups pixels into clusters based on the color similarity between neighboring pixels, and by using the distances between the clusters as the detector output, the resulting edge is finer.
6.3 Algorithm Execution Time Comparison
Based on the experiments, the execution time needed by the MST method is longer than that of the VOS method, with an average ratio of MST : VOS = 27 : 1, which is not a small ratio. The VOS method does not perform many iterations: every pixel type is processed in the same way, using the defined equation. The MST method, on the other hand, performs many iterations, for example when defining the clusters. If the detector finds that the input image contains many plain pixels, the method finishes in relatively little time; if the input image contains many edge/corner pixels, it takes longer, because two clusters have to be created.
7. Conclusion
From the experiments, the following conclusions are drawn:
1. The Minimal Spanning Tree method gave more solid edge lines than the Vector Order Statistic method.
2. By using the threshold parameter, the detector sensitivity of the Minimal Spanning Tree method can be set according to the preferred result. The threshold value giving the best edge detection with the Minimal Spanning Tree method is 1.
3. The Minimal Spanning Tree needs a longer execution time than the Vector Order Statistic, with an average ratio over the sample images of MST : VOS = 100% : 3.733%.
4. For images with more detail, the edge detection result of the Minimal Spanning Tree method gives sharper edges than the Vector Order Statistic.
8. Suggestions
It is suggested that the edge detection algorithm using the minimal spanning tree be optimized to shorten its execution time.
9. References
[1] P.W. Trahanias, A.N. Venetsanopoulos, Color edge detection using vector order statistics, IEEE Trans. Image Process. 2 (2) (1993) 259–264.
[2] M.A. Ruzon, C. Tomasi, Edge, junction, and corner detection using color distributions, IEEE Trans. Pattern Anal. Mach. Intell. 23 (11) (2001) 1281–1295.
[3] J. Astola, P. Haavisto, Y. Neuvo, Vector median filters, Proc. IEEE 78 (1990) 678–689.
[4] C.T. Zahn, Graph-theoretical methods for detecting and describing Gestalt clusters, IEEE Trans. Comput. C-20 (1) (1971).
[5] Ch. Theoharatos, G. Economou, S. Fotopoulos, Color edge detection using the minimal spanning tree, Pattern Recognition 38 (2005) 603–606.
[6] P.J. Toivanen, J. Ansamaki, J.P.S. Parkkinen, J. Mielikainen, Edge detection in multispectral images using the self-organizing map, Pattern Recognition Lett. 24 (16) (2003) 2987–2993.
[7] T. Hermosilla, L. Bermejo, A. Balaguer, Nonlinear fourth-order image interpolation for subpixel edge detection and localization, Image and Vision Computing 26 (2008) 1240–1248.
[8] Runsheng Ji, Bin Kong, Fei Zheng, Color edge detection based on YUV space and minimal spanning tree, Proceedings of the 2006 IEEE International Conference on Information Acquisition, August 20–23, 2006, Weihai, Shandong, China.
Paper
Saturday, August 8, 2009
15:10 - 15:30
Room L-210
DESIGN COMPUTER-BASED APPLICATION FOR RECRUITMENT AND
SELECTION EMPLOYEE AT PT. Indonusa Telemedia
Tri Pujadi
Information System Department – Faculty of Computer Study
Universitas Bina Nusantara
Jl. Kebon Jeruk Raya No. 27, Jakarta Barat 11530 Indonesia
email :[email protected]
ABSTRACT
This report describes one of the applications used by PT. Indonusa Telemedia. The function of the application is to facilitate the recruitment and selection process for the company's employee candidates. The process becomes more efficient because the application can organize the candidate data, the interview status (proceed, hire, keep, or reject), and the interviewers' comments. The benefit for a company that uses this application is an increased level of efficiency, for example in time and labour, because the application can sort candidate data according to the request of the department asking for additional employees and centralizes the information in one application database.
Key words : Application, Employee Recruitment, Selection
1. INTRODUCTION
PT. Indonusa Telemedia, with the brand name TELKOMVision, is located at Tebet in South Jakarta. The research program was carried out over a three-month period, from 1 January 2009 to 8 April 2009. The scope of the observation covered the Network & IT division, with attention to the HRD division and the Business Production & Customer Quality division, in mapping the business processes and designing the application. The job descriptions of these activities included: (1) design of the Penerimaan dan Seleksi Calon Karyawan (employee recruitment and selection) application; (2) design of the Koperasi (cooperative) application; (3) capturing the data of Indonusa Telemedia's employee candidates; (4) design of the business process.
The results presented in this write-up concern the design of the Penerimaan dan Seleksi Calon Karyawan application, built with Visual Basic 6.0, a Microsoft Access database, and Microsoft Excel for the application's reports.
PT. Indonusa Telemedia was founded in May 1997 and has been operating since 1999, with shareholders including PT Telkom, PT Telkomindo Primabhakti (Megacell), PT RCTI, and PT Datakom Asia. By 2003, TELKOMVision had head ends in six cities, namely Medan, Jakarta, Bandung, Semarang, Surabaya, and Jimbaran (Bali), and several mini head ends across Indonesia. Supported by a Hybrid Fiber Coaxial network and nationwide satellite coverage, and together with Telkom as its holding company, it is the only pay-TV operator able to serve customers throughout Indonesia by either satellite or cable.
Figure 1. Organization Chart of PT Indonusa (2009)
2. Human Resource Management
According to Dessler (2008), human resource management consists of the various management activities carried out to increase labour effectiveness so that the organization can reach its goals. The human resource (SDM) management process covers the activities needed to fulfil staffing requirements and to achieve good employee performance. Meanwhile, according to Parwiyanto (2009), human resource planning is the process of analysing and identifying the organization's actual manpower needs so that the organization's resources can reach its goals. There are several procedures in human resource planning:
· Establishing clearly the qualifications and the number of personnel needed.
· Gathering data and information about the human resources.
· Grouping and analysing the data and information.
· Establishing several alternatives.
· Choosing the best of the alternatives to become the plan.
· Informing employees of the plan so that it can be realized.
Human resource planning (PSDM) is known in two forms: the scientific method and the non-scientific method. In the non-scientific method, planning is based only on experience, imagination, and estimates. A plan of this kind carries a high degree of risk, for example when the quality and quantity of labour do not match the firm's requirements; this can lead to mismanagement and wasteful losses for the company.
The scientific method of PSDM is carried out on the basis of data analysis, information, and forecasting by the planner. A plan of this kind carries relatively little risk, because everything has been taken into account beforehand. If human resource planning succeeds, the following benefits are obtained:
· Top management has a better view of the human resource dimension of its business decisions.
· Personnel expenses can be kept smaller, because management can anticipate imbalances before they inflate costs.
· More time is available to place potential employees, because requirements can be anticipated and the total workforce actually needed is known in advance.
· There is a better chance to involve women and minority groups in future strategic plans.
Employee recruitment
The recruitment function is to look for and attract employee candidates who are willing to apply for work according to the job description and job specification. For this purpose the firm can look for candidates from internal and external sources, each of which has advantages and disadvantages.
Advantages of recruitment from internal sources:
1. Stimulating preparation for transfer and promotion.
2. Increasing job morale.
3. More information about the candidate can be obtained from work records at the company.
4. Less expensive, and the candidate is already adjusted to the company.
The disadvantages are:
1. Limiting the pool of prospective employees.
2. Reducing the chance of new perspectives entering the company.
3. Encouraging complacency, especially if promotion to a responsible position is determined by seniority.
Definition of System Design
According to Whitten (2005) in the book System Analysis and Design for Enterprise, system design is a process that focuses on the details of an information-system-based solution; it can also be called physical design. System analysis emphasizes business concerns, while system design focuses on the technical problems and implementation related to the system. The main purpose of system design is to meet the needs of the system users and to give a clear picture and design to the programmer.
The phases in system design are:
1. Control design, whose aim is that the implemented system can prevent errors, damage, system failure, and threats to system security.
2. Output design; in this phase the resulting reports must correspond to the requirements of the application user.
3. Input design; in this phase the GUI (Graphical User Interface) is designed so that data entry is more efficient and data accuracy is improved.
4. Database design; the database is the part of the information system that integrates the large amount of interrelated data.
5. Design of the computer configuration needed to implement the system.
The advantage of recruitment from external sources is the ability to attract candidates through advertising; the sources include Depnaker (the Department of Manpower), educational institutions, consulting offices, walk-in applicants, and the wider society as a labour market. On the other hand, the disadvantage is that employees recruited from outside tend to adapt to the firm more slowly than those who come from within the firm.
Employee selection
Candidates are selected to match the available jobs by:
1. Checking the application documents and the documents required to be attached to the application letter.
2. Conducting an interview to verify the written documents.
3. Diagnostic tests of skill and health, which can be carried out by the company itself or by an external party.
4. Background research from other sources at the previous workplace.
Training and Development
The term training is used for increasing technical skills, while development is used for increasing conceptual and human-relations skills. There are three processes for determining training requirements:
1. Performance appraisal compared with the standard; if the standard has not been reached, training is required.
2. Analysis of job requirements; employees who are not yet qualified are given training.
3. Personnel survey, asking what problems are faced and what training is required.
There are seven forms of training:
1. On the job training
2. Job rotation
3. Internship, combining classroom instruction and field practice
4. Apprenticeship
5. Off the job training
6. Vestibule training, a job simulation that does not disturb others, for example seminars and classes
7. Behavioural training, training through business games and role playing
3. Research Results
Telkom's vision is "To become a leading InfoCom player in the region", while its mission is to provide "One-Stop InfoCom Services with Excellent Quality and Competitive Price" and "To Be the Role Model as the Best Managed Indonesian Corporation". Telkom's business units consist of divisions, centres, foundations, and subsidiaries. In subsidiaries such as PT Indonusa Telemedia (Indonusa), Telkom holds more than 50% of the shares.
The Products and Services
In July 2000, PT. Indonusa Telemedia began trialling an Internet service. That service can currently be enjoyed by customers reachable over the Hybrid Fiber Coaxial (HFC) network in Jakarta, Surabaya, and Bandung. The products and services of PT Indonusa Telemedia are:
a) Pay TV Cable TELKOMVision: the basic service includes the main channels HBO, CINEMAX, and STAR MOVIES; it utilizes Telkom's fibre-optic infrastructure; prime picture quality; a highly reliable, interactive (two-way) network that can also serve Internet access and, later, VOD and video streaming; no decoder/converter is required (cable TV); if there is more than one TV in a house, each TV can enjoy its own channel selection independently without adding a decoder or converter for each TV.
b) TELKOMVision Internet: flat fee without pulse counting; speeds from 64 up to 512 Kbps; serves both corporate and individual customers; bundled with Pay TV Cable.
c) Pay TV Satellite TELKOMVision: utilizes the TELKOM-1 satellite, extended C band; prime picture quality and digital sound; if the prospective customer already has a satellite dish at the installation site, the dish does not necessarily have to be replaced; coverage: national.
d) SMATV TELKOMVision: Satellite Master Antenna TV (SMATV) service for locations not reached by the HFC network; customers are hotels, apartments, or housing estates; coverage: national.
Business process of recruitment and selection
Table 1.1 Number of employees per job unit (total employees of PT Indonusa)
Figure 2. Business process of recruitment and selection
1. The related user or directorate requests additional employees by filling in the recruitment requisition form.
2. After the recruitment form is accepted by HRD, HRD staff check the budget and the organization chart of the company. HRD staff then publish the vacancy through the media or a notice to company employees.
3. Employee candidates send an application letter and curriculum vitae to HRD.
4. HRD staff sort the candidates according to the criteria needed by the requesting user.
5. Candidates are shortlisted by HRD, and the interview is carried out by the user accompanied by HRD staff.
6. If, after the interview, the user considers that the candidate fits the criteria, the candidate's data is kept for the next interview phase.
7. After the user's final approval, HRD carries out the salary negotiation with the prospective employee.
8. If the candidate accepts the salary offer, HRD publishes an offering letter for the candidate; if the candidate refuses the salary offer, HRD can look for another candidate.
After the candidate accepts the offering letter, the new employee fills in the candidate form and begins an internship period at the company of up to three months.
Application design:
a. Control Design
In this application there are two login types. Each user has a different access level. The levels are as follows:
1. ADMINISTRATOR: has the right to use all functions of the application.
2. HRD: can only use the functions for entering candidate data, interviews, comments, and reports.
Figure 4. Account Setting
Input Applicant Data is the menu function used to enter the data of employee candidates. Candidate data can be obtained by email (soft copy) or from application letters in hard-copy form.
b. Output Design
The output or reports of the application include statistics on the number of interviewed candidates in each directorate and details of the interview status of each candidate.
Figure 5. Input Applicant
Input Interview Data is the function for entering the appraisal given by each interviewer for each invited candidate and for each interview step. The menu is also used to decide whether a candidate who has already been interviewed will proceed to the next phase or not.
Figure 3. Report Statistik Interview
c. Input Design
The application for receiving and selecting prospective employees has four menus and eight sub-menus:
1. File
2. Application
3. Interview
4. Report
Figure 6. Input interview
The report function of the application produces reports only in statistical form, showing the increase in employees per division. This is done so that HRD can monitor the increase or decrease in requests for new employees in each division.
Figure 7. Report function
c. Database Design
Figure 8. Design of Database

4. Summary
Based on the working process of the candidate reception and selection application:
· Interview schedules and interviewers can be validated easily.
· Statistics on prospective employees can be obtained quickly, because the statistics are generated automatically and displayed in Microsoft Excel, so further data processing can be done more easily.
· The application output can be printed directly or saved in Microsoft Excel 2003 format (.xls).

REFERENCES
Kroenke, David M. (2005). Database Processing: Dasar-dasar, Desain, dan Implementasi. 2 vols. Trans. Nugraha, Dian. Jakarta: Erlangga. Trans. of Database Processing: Fundamentals, Design & Implementation, 2004.
Firdaus (2005). Pemrograman Database dengan Visual Basic 6.0 untuk Orang Awam. Palembang: Maxikom.
Parwiyanto, Herwan. Perencanaan SDM. 3 Maret 2009 <http://herwanparwiyanto.staff.uns.ac.id/page/2/>.
Dessler, Garry (2008). Human Resource Management. Singapore: Pearson Education Singapore.
Whitten (2005). System Analysis and Design for Enterprise. Prentice Hall.
Paper
Saturday, August 8, 2009
13:55 - 14:15
Room L-212
Application of Data Mart Query (DMQ)
in Distributed Database System
Untung Rahardja
Faculty of Information System
Raharja University
Tangerang, Indonesia
[email protected]
Retantyo Wardoyo
Faculty of Mathematics and Natural Science
Gadjah Mada University
Yogyakarta, Indonesia
[email protected]
Shakinah Badar
Faculty of Information System
Raharja University
Tangerang, Indonesia
[email protected]
Abstract
Along with the rapid development of network and communication technology, the proliferation of online information resources increases the importance of efficient and effective searching in a distributed database environment. Information within a distributed database system allows users to communicate with each other in the same environment. However, with the escalating number of users of information technology on the same network, the system often responds slowly. In addition, because of the large number of scattered databases in a distributed database system, the query result degrades significantly when large-scale demand occurs every time data is needed. This paper presents a solution for displaying data instantly by using the Data Mart Query. In other words, the Data Mart Query (DMQ) method simplifies the complex queries that manipulate the tables in the database and eventually creates a presentation table for the final output. This paper identifies problems in a distributed database system, especially display problems such as generating the user's view. The paper goes on to define DMQ, explain its architecture in detail, its advantages and weaknesses, its algorithm, and the benefits of the method. For the implementation, the program listings are shown, written in ASP script, together with an example view using DMQ. The DMQ method is shown to make a significant contribution to distributed database systems as a solution needed by network users to display instantly data that was previously very slow and inefficient to obtain.
Index Terms—Data Mart Query, Distributed Database
I. Introduction
The development of technology continues to increase rapidly, affecting the rate of information needed by people, especially in an organization or company. Information continues to flow, and its amount keeps increasing along with the number of requests and the amount of data. In addition, databases are used more and more by companies and organizations, especially within networked systems; a database can be distributed from one computer to another, and the number of users grows with the size of the organization or company.
Organizations and companies need information systems to collect, process, and store data and to channel information. The development of information systems over time has produced information that is ever more complex. The complexity is caused by the many requests, the amount of data, and the number of iterated SQL commands in a program.
Organizations and companies use information technology mainly to facilitate the implementation of business processes and to improve their competitive ability. Through information technology, business processes are expected to be implemented more easily, quickly, efficiently, and effectively. The use of network technology in an organization or company has become a regular thing, and many organizations and companies with networked systems have implemented a distributed database system.
A distributed database is a database whose storage is not centralized under one CPU, but may be stored on multiple computers in the same physical location or distributed through a network of computers that are connected to each other.
The data display should also be fast; for example, on google.com the data desired by the user must be located quickly in the display. Basically, a table of data is needed from the process according to the amount of data that will be shown. The conventional way is essentially practical, because nothing needs to be edited when the data in the database grows, but the display becomes slow when a large amount of data is shown.
II. PROBLEMS
The distributed database has many advantages, especially for the structure of today's organizations. However, along with its benefits, a distributed database also makes a system more complex, because the number of databases that are spread out and the amount of data keep increasing in an organization or company. If a database stores a large amount of data with many queries and tables, a request for data from such a source becomes slow; in addition, the large number of users accessing a web display of the information system also slows it down. Figures 1 and 2 show the distributed database architecture and a conventional data source with multilevel queries:
Figure 1. Distributed Database Architecture
Figure 2. Conventional Data Source
The architecture pictured above is a description of the distributed database. In a distributed database system several terminals are connected in one database system, and each terminal can access or obtain data both from databases on central and local computers and from other databases. A distributed database also has advantages: it can reflect the organizational structure and provide local autonomy, an error in one fragment does not affect the overall database, the load is balanced across database servers, and the system can be modified without affecting other modules.
Storing the data tables in SQL Server in a distributed database is therefore a practical step that many people take today, especially for institutions serving specific processes.
From Figure 2 we can see that, to produce output on the web display, a conventional data source needs stratified (multilevel) queries. Data is drawn from one table and another table, and from one query into another query. Imagine having hundreds or thousands of tables and queries in a database, with the database distributed so that the databases are related to each other: how long would just one view on the web take?
From the above description, several problems can be formulated as follows:
1. Is the view slow because the queries are stratified?
2. What is the impact of a slow process due to stratified queries?
3. What method can be used to speed up the display process in a distributed database system?
4. What are the benefits and disadvantages of the proposed new method?
III. LITERATURE REVIEW
Much previous research has been done on distributed databases. In developing this study of the distributed database, a literature study needs to be performed as one of the foundations for the research method to be conducted. Its purposes include identifying gaps, avoiding re-creating what already exists (reinventing the wheel), identifying methods that have already been developed, continuing previous research, and knowing the other people who specialize in the same area as this research. Some of the literature reviewed is as follows:
1. Research was conducted by Jun Lin Lin and Margaret H. Dunham of Southern Methodist University and Mario A. Nascimento, entitled "A Survey of Distributed Database Checkpointing". This study discusses checkpointing in distributed databases and the approaches used. It starts from the many surveys conducted on the database recovery process and the many techniques proposed to address it. With distributed database checkpointing, the recovery time after a failure in the distributed database can be reduced. Checkpointing can be described as an activity that writes information to stable storage during normal operation in order to reduce the amount of work at restart. The study argues against the view that limited resources are a problem in the distributed database approach, and against the view that checkpointing can be used only for multi-database distribution. Although this research has been done, it is quite complex in its implementation. With this research we can develop a distributed database with checkpointing to speed up the database recovery process [1].
2. Research was conducted by David J. DeWitt of the University of Wisconsin and Jim Gray in 1992, entitled "Parallel Database Systems: The Future of High Performance Database Processing". The research concerns the concept of a distributed database, that is, a database stored on several computers distributed among one another. It describes how parallel database systems began replacing mainframe computers for data and transaction processing tasks. Parallel database machine architectures have evolved from exotic hardware to parallel software. Like most application users, database users want hardware that is cheaper and faster, which concerns the processor, memory, and disk. As a result, the concept of exotic database hardware was no longer appropriate for the technology of the time; on the other hand, the availability of faster, cheaper, and smaller standard microprocessors quickly made them the ideal platform for parallel database systems. Stonebraker proposed a simple design spectrum: shared memory, shared disk, and shared nothing. The query language used is SQL, in accordance with the ANSI and ISO standards. With this research, we can develop a database system that can be used in different scopes [2].
Figure 3. Shared-Nothing Design, Shared-Memory and
Shared-Disk
3. Research was conducted by Carolyn Mitchell of Norfolk State University, entitled "Components of a Distributed Database", 2004. This study discusses the components within the DDBMS. One of the main components is the Database Manager, the software responsible for processing the data segments that are distributed. Another main component is the Query User Interface, a client program that acts as an interface to the Distributed Transaction Manager. The Distributed Transaction Manager is a program that translates requests from users and converts them into queries for the Database Managers, which are usually distributed. A distributed database system is thus built from both the Distributed Transaction Manager and the Database Manager [3].
Figure 4. Distributed Database Architecture and components
4. Research was conducted by Hamidah Ibrahim, "Deriving Global and Local Integrity Rules for a Distributed Database", Faculty of Computer Science and Information Technology, University Putra Malaysia, 43400 UPM Serdang. The author states that the most important goal of a database system is to guarantee data consistency, which means that the data contained in the database must be correct and accurate. In the implementation, ensuring data consistency when data changes is very difficult, especially for a distributed database. The paper describes an algorithm, based on a rule-enforcement mechanism for the distributed database, that aims to minimize the amount of data that must be transferred or accessed across the network to maintain the consistency of the database at one site, namely the site at which the update is performed. This technique, called integrity test generation, derives local and global integrity rules that have been effective in reducing the cost of checking constraints in data distributed across the environment. The research produced a large centralized system with a high level of reliability for data integrity [4].
5. Research was conducted by Steven P. Coy, "Security Implications of the Choice of Distributed Database Management System Model: Relational vs. Object Oriented", University of Maryland. Data security must be addressed when developing a database and choosing between the relational and the object-oriented model. Many factors must be considered, especially effectiveness and efficiency, as well as security and integrity, and not only the security features; both options affect the strengths and weaknesses of the resulting databases. For a centralized database both models can serve equally well, but for a distributed database the relational model is superior in security. This is because the object-oriented database model is still less mature, so that in a heterogeneous environment the integrity process still causes many problems. OODBMS technology still needs further development, but in a homogeneous environment an OODBMS can be a good choice [5].
6. Research conducted by Stephane Gançarski, Claudia León, Hubert Naacke, Martha Rukoz, and Pablo Santini, entitled "Integrity Constraint Checking in Distributed Nested Transactions over a Database Cluster", is a solution for checking global integrity constraints in multi-database systems. The study also presents experimental results obtained on a PC cluster with the Oracle9i DBMS. The goal of the experiment is to measure the time spent checking constraints in the distributed global system. The results show that the overhead is reduced by up to 50% compared with centralized integrity checking. The study shows that the system handles possible violations of referential and global conjunctive integrity constraints; with distributed nested transactions and parallel execution, integrity can be guaranteed [6].
7. Research was conducted by Allison L. Powell, James C. French, et al. of the Department of Computer Science, University of Virginia, entitled "The Impact of Database Selection on Distributed Searching". This study explains that distributed searching consists of three parts, namely database selection, query processing, and results merging. When only some of the databases are selected (not all), performance increases quite significantly; when the database selection is done well, distributed search performs better than centralized search. Adding a selection and ranking process to the database search therefore has the potential to increase the effectiveness of the data search [7].
8. Research was conducted by Yin-Fu Huang and Her-Jyh Chen (2001) from the National Yunlin University of Science and Technology, Taiwan, entitled "Fragment Allocation in Distributed Database Design". In a Wide Area Network (WAN), fragment allocation is a major issue in distributed database design, as it affects the overall performance of the distributed database system. The system proposed is a simple and comprehensive model that reflects the activity in a distributed database. Based on the model and transaction information, two algorithms are developed for the optimal allocation, so that the total communication cost is minimized as much as possible. The results show that the fragment allocation found using the appropriate algorithm is closer to optimal. Some experiments were also done to ensure that the cost formula truly reflects the cost of real-world communication [8].
9. Research was conducted by Nadezhda Filipova and Filcho Filipov (2008) from the University of Economics, Varna, Bul. Kniaz Boris I, entitled "Development of a Database for Distributed Information Measurement and Control System". This study describes the development of a database for a distributed information measurement and control system that applies optical spectroscopy methods to plasma physics and atomic collision research, and provides access to information and resources over Intranet/Internet network hardware, based on the Oracle9i database management system. The client software is implemented in the Java language and is developed using an architectural model that separates the application data from the graphical presentation components and the input processing logic. Graphical presentations were produced for the measurement of radiation spectra from plasma beams and objects, the excitation functions of non-elastic collisions of heavy particles, and the analysis of data obtained in previous experiments. The graphical client has functions for browsing the database information about a particular type of experiment, searching the data with various criteria, and entering information about previous experiments [9].
10. Research was conducted by Lubomir Stanchev from the University of Waterloo in 2001, entitled "Semantic Data Control in Distributed Database Environment". This research states that there are three main goals in semantic data control, namely view management, data security, and semantic integrity control. In a relational setting these functions can be achieved uniformly by enforcing rules that control data manipulation, and the solution can be centralized or distributed. The two main issues for making the control efficient are the definition and storage of the rules (site selection) and the design of enforcement algorithms that minimize the communication cost. The problem is difficult, because increased functionality (and generality) tends to increase the communication between sites. Solutions to distributed semantic data control are extensions of centralized solutions; the problem is simple if the control rules are fully replicated at all sites, and difficult if site autonomy must be preserved. In addition, specific optimizations can be done to minimize the cost of data control, but with additional overhead such as the management of data snapshots. Thus, the specification of distributed data control must be included in the database design, so that the cost of updating the control programs is also considered [10].
Figure 5. Data visualization with materialized views and auxiliary data
From the ten works reviewed, much research exists on checkpointing, parallel database systems, the components of distributed database systems, and security; there is also discussion of nested transactions, distributed searching, view management, and fragment allocation. However, it can be seen that there is no research that specifically discusses the problem of views that are slow because of stratified queries.
IV. TROUBLESHOOTING
To overcome the above issues, the process requires fast and efficient access to all of the data in a more organized form, especially for a distributed database system. Currently, programmers prefer to use Ms Access and query functions for the entire script command; consequently, a large-scale query process occurs every time data is needed. Using SQL Server is not new in this case; what is proposed is a system that does more processing at loading time, so that the presentation of data is linearly faster than the conventional way.
DMQ (Data Mart Query) is a method that applies the analogy "waste space for speed". DMQ also separates the "engine" from the "display": the display shows the prepared data directly, while the query processing is done on the engine. DMQ generally produces a data display far more quickly than common methods, because DMQ does not repeat the query process when displaying data. DMQ is thus a solution that can help users display data that was previously very slow and inefficient to obtain.
The data sources of the Data Mart Query come from the tables. In the DMQ process, all of the selected data is allocated into one table, so the user does not need to think about the structure of the table being built; the only thing to consider is where the data is located. DMQ is used to avoid the use of complex queries. DMQ trades an amount of data storage capacity (hard disk space) for increased speed. DMQ needs a trigger to update the data so that the data stays current.
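A minimal sketch of the idea using Python's built-in sqlite3 module in place of the SQL Server/ASP environment shown later; the table and column names here are illustrative assumptions:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE students(nim TEXT, name TEXT);
CREATE TABLE grades(nim TEXT, course TEXT, score REAL);
INSERT INTO students VALUES ('101','Ani'), ('102','Budi');
INSERT INTO grades VALUES ('101','DB',3.5), ('101','Net',3.0), ('102','DB',4.0);
""")

def refresh_presentation_table(con):
    # DMQ step: materialize the multi-table query into one flat presentation
    # table; this trades disk space for display speed and is re-run by a
    # trigger or schedule whenever the source data changes.
    con.executescript("""
    DROP TABLE IF EXISTS gpa_list;
    CREATE TABLE gpa_list AS
      SELECT s.nim, s.name, AVG(g.score) AS gpa
      FROM students s JOIN grades g ON g.nim = s.nim
      GROUP BY s.nim, s.name;
    """)

refresh_presentation_table(con)
print(con.execute("SELECT * FROM gpa_list").fetchall())   # the display page reads only this table

The display layer only ever reads the flat gpa_list table; the expensive join runs once per refresh instead of once per page view, which is the "waste space for speed" trade-off described above.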
The following is a description of a data request from the user. A user makes a request for a display, and the data query module searches db1, db2, ..., dbn. By using the Data Mart Query, the data produced by the query module is placed directly into the presentation table that the graphical display module shows to the user.
Figure 6. Data Visualization with DMQ
With the Data Mart Query (DMQ) the data process is faster, because the data no longer has to be searched from the conventional data source tables. The Data Mart Query (DMQ) cuts the time because the search is performed on only one table into which the data has already been merged.
Figure 7. Comparison of conventional data sources and the source data with the Data Mart Query
When compared, the difference looks like the picture above: with DMQ, viewing the web page is done more quickly because the process does not require a complex search. This can also be seen in the following graph.
Description:
= Not using the DMQ
= Using the DMQ
Figure 8. Comparison of time and number of data
In the chart above, the comparison between time and the amount of data to display a web page can be seen, where the amount of data for each graph point is the same. The graph shows that if a view does not use the DMQ, the curve rises: the greater the amount of data, the longer the process takes. However, if the DMQ is used, the time needed to process the view is relatively constant regardless of the amount of data.
A. Linear Regression
Besides the graphs, this can also be shown with the linear regression equation:
Y' = a + bX
where
a = the Y intercept (the value of Y' when X = 0)
b = the slope of the regression line (the increase or decrease in Y' for each one-unit change in X), or the regression coefficient, which measures the influence of X on Y when X increases by one unit
X = the value of the independent variable
Y' = the value measured/calculated for the dependent variable
The values of a and b in the regression equation can be calculated with the formulas below [11]:
b = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)²
a = Ȳ − bX̄
· Linear regression calculation for the view without using the DMQ
Here is the data obtained when not using the Data Mart Query:
Table 1. Data and calculations for the view without the DMQ

X      Y     X − X̄    Y − Ȳ    (X − X̄)²      (X − X̄)(Y − Ȳ)   (Y − Ȳ)²
5000   43    −22500   −80.5    506250000     1811250           6480.25
10000  79    −17500   −44.5    306250000     778750            1980.25
15000  78    −12500   −45.5    156250000     568750            2070.25
20000  103   −7500    −20.5    56250000      153750            420.25
25000  118   −2500    −5.5     6250000       13750             30.25
30000  124   2500     0.5      6250000       1250              0.25
35000  159   7500     35.5     56250000      266250            1260.25
40000  141   12500    17.5     156250000     218750            306.25
45000  189   17500    65.5     306250000     1146250           4290.25
50000  201   22500    77.5     506250000     1743750           6006.25
ΣX = 275000, ΣY = 1235, Σ(X − X̄)² = 2062500000, Σ(X − X̄)(Y − Ȳ) = 6702500, Σ(Y − Ȳ)² = 22844.5

From the above data, the values of a and b are calculated as follows:
b = 6702500 / 2062500000 = 0.0032
a = Ȳ − bX̄ = 123.5 − 0.0032 (27500) = 35.5
So the regression equation that shows the relationship between the amount of data and the time to display the view is:
y = 35.5 + 0.0032x
so each additional unit of data adds 0.0032 to the process time.

· Linear regression calculation for the view using the DMQ
Here is the data obtained when using the Data Mart Query:

Table 2. Data and calculations for the view with the DMQ

X      Y    X − X̄    Y − Ȳ   (X − X̄)²      (X − X̄)(Y − Ȳ)   (Y − Ȳ)²
5000   11   −22500   −7      506250000     157500            49
10000  16   −17500   −2      306250000     35000             4
15000  18   −12500   0       156250000     0                 0
20000  15   −7500    −3      56250000      22500             9
25000  19   −2500    1       6250000       −2500             1
30000  21   2500     3       6250000       7500              9
35000  18   7500     0       56250000      0                 0
40000  20   12500    2       156250000     25000             4
45000  23   17500    5       306250000     87500             25
50000  19   22500    1       506250000     22500             1
ΣX = 275000, ΣY = 180, Σ(X − X̄)² = 2062500000, Σ(X − X̄)(Y − Ȳ) = 355000, Σ(Y − Ȳ)² = 102

From the above calculation, carried out in the same way (with X̄ = 27500), the following equation is obtained:
Y = 15.25 + 0.0001X
so each additional unit of data adds only 0.0001 to the process time.
Based on the above regression calculations, it can be shown that bDMQ is not significant:
bDMQ : bn = 0.0001 : 0.0032
bDMQ : bn ≈ 1 : 32
This shows that bDMQ is not significant compared with bn. Thus the regression without the DMQ remains
y = 35.5 + 0.0032x
whereas the DMQ regression becomes
y = 15.25
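As a quick cross-check of these figures, a short Python sketch that recomputes the slope, intercept, and Pearson correlation from the data in Tables 1 and 2:

import numpy as np

x = np.arange(5000, 50001, 5000, dtype=float)
y_plain = np.array([43, 79, 78, 103, 118, 124, 159, 141, 189, 201], float)  # Table 1
y_dmq = np.array([11, 16, 18, 15, 19, 21, 18, 20, 23, 19], float)           # Table 2

def fit(x, y):
    # Least-squares intercept a, slope b and Pearson correlation r.
    sxy = ((x - x.mean()) * (y - y.mean())).sum()
    sxx = ((x - x.mean()) ** 2).sum()
    syy = ((y - y.mean()) ** 2).sum()
    return y.mean() - (sxy / sxx) * x.mean(), sxy / sxx, sxy / np.sqrt(sxx * syy)

print("without DMQ:", fit(x, y_plain))   # slope close to 0.0032 and r close to 0.97, as above
print("with DMQ   :", fit(x, y_dmq))     # a far smaller slope: display time is nearly constant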
B. Linear Correlation
The term correlation refers to the concept of a mutual relationship between several variables. A complex correlation can involve many variables at once; in this discussion only two variables are taken, namely the amount of data as X and the time as Y. One formula for the correlation coefficient between two interval-scaled variables has been formulated by statisticians and is called the Pearson product-moment correlation:
rxy = Σ(X − X̄)(Y − Ȳ) / √( Σ(X − X̄)² Σ(Y − Ȳ)² )
For the view without the DMQ, a large amount of data tends to be followed by a long process time, and conversely the smaller the amount of data, the smaller the process time required. The changes in variable X are not followed by changes in variable Y in an absolute way; there is some variation, showing that large changes in X are not always followed by proportional changes in Y. This shows that the relationship between the two variables is not perfect, which is characteristic of non-physical variables; a perfect relationship can only occur between variables of the exact sciences.
The correlation is expressed in a number called the correlation coefficient, given the symbol rxy. The correlation coefficient carries two meanings: the strength and the direction of the relationship between the variables. The strength or weakness of the relationship between two variables is shown by the absolute value of the correlation coefficient, which lies between 0 and 1: a number close to 0 means the relationship is weak, and a coefficient approaching 1 means the relationship is stronger.
From the data in Table 1, the linear correlation between the amount of data and the time to display the view without using the DMQ is:
rxy = 0.97
while the linear correlation between the amount of data and the time to display the view using the DMQ, based on the data in Table 2, is:
rxy = 0.55
The high coefficient indicates a strong relationship between the amount of data and the time. The positive sign of the correlation coefficient shows that the larger the amount of data, the longer the process time.

C. Design Through a Program Flowchart
Figure 9. Flowchart of the GPA value list

D. Program Listing
The GPA value list is a program built using the DMQ (Data Mart Query) method, so the program listings shown are the listing that updates the GPA value list and the listing that displays the GPA value list. The program listing follows:

<%
' open a connection to the raharja_integrated database on SQL Server
dim conn
set conn=server.CreateObject("ADODB.Connection")
conn.open "PROVIDER=MSDASQL;DRIVER={SQL SERVER};SERVER=rec;DATABASE=raharja_integrated;"
%>
<%
' drop the old presentation table
dim strsql1,rs1
strsql1="drop table Daftar_Nilai"
set rs1=conn.execute(strsql1)
%>
<%
' rebuild the presentation table from the Lap_KHS4 query in one statement
dim strsql2,rs2
strsql2="select * INTO Daftar_Nilai from Lap_KHS4"
set rs2=conn.execute(strsql2)
%>
<% response.redirect ("default.asp") %>

Figure 10. Listing of the program that updates the GPA value list

V. IMPLEMENTATION
The concept of the Data Mart Query (DMQ) was implemented at Raharja University in the view that creates the GPA (cumulative performance index) value list. The GPA is the average of the IPS (semester performance index). The GPA system is prepared to measure and know the students' ability in their lectures. By using the DMQ to create the GPA value list, the display can be accessed quickly.

Figure 11. Query of the Raharja_Integrated database structure

Algorithm Lap_khs4:
SELECT dbo.Lap_Khs3.NIM, dbo.Lap_Khs3.Kode_MK, dbo.Lap_Khs3.Mata_Kuliah, dbo.Lap_Khs3.Sks,
       dbo.Lap_Khs3.Grade, dbo.Lap_Khs3.AM, dbo.Lap_Khs3.K, dbo.Lap_Khs3.M,
       dbo.QKurikulum.Kelompok, dbo.QKurikulum.Kajur
FROM dbo.Lap_Khs3
INNER JOIN dbo.Mahasiswa ON dbo.Lap_Khs3.NIM = dbo.Mahasiswa.NIM
INNER JOIN dbo.QKurikulum ON dbo.Mahasiswa.Jenjang = dbo.QKurikulum.Jenjang
    AND dbo.Mahasiswa.Jurusan = dbo.QKurikulum.Jurusan
    AND dbo.Mahasiswa.Konsentrasi = dbo.QKurikulum.Konsentrasi
    AND dbo.Lap_Khs3.Kode_MK = dbo.QKurikulum.Kode
The Screen
The screen (interface) of the Chairman's Panel has been integrated with several information systems such as Raharja Multimedia Edutainment (RME), Online Attendance (AO), and the Student Information Services (SIS). The interfaces consist of:
a. Main Display of the Chairman's Panel
In this view we can see, for the entire Raharja University, the GPA (cumulative quality index) and the IMK (cumulative performance index). This view also shows the number of students, men and women, together with the student numbers, the students entitled to take the UTS and UAS exams, the Top 10 best and worst GPA and IMK, and the average GPA of both active students and graduates of Raharja University.
Figure 12. Main Display of the Chairman's Panel
In the column on the left, when we click on the links, a URL opens in the right-hand column. In the picture above there is an Asdir link; when it is clicked, a URL opens that contains the Top 100 students with Active status, sorted in descending order. That URL has the interface shown in the image below.
Figure 13. Showing the Top 100 students with active status
In the view above there are the NIM (student number), the student name, and the GPA and IMK. To see the detailed GPA list of a student, click on the student's GPA value.
b. Display of the GPA value list on the Chairman's Panel
Unlike the previous interface, this panel interface illustrates the Cumulative Performance Index (GPA) of each individual student. The GPA is the average of all the values obtained in all the semesters that the student has completed. The GPA is packed in a "GPA Value List" format that can be seen in the image below:
Figure 14. GPA value list view
To provide the GPA value list above, many tables and queries are used, so a conventional data source would require a long time. However, the value-list display above is created with the Data Mart Query, so that opening this page does not take a long time.
VI. CONCLUSION
Based on the description above, it is concluded that the Data Mart Query (DMQ) is an appropriate method to speed up the display process in a distributed database information system. DMQ is used to avoid the use of complex queries; it sacrifices data storage capacity (hard disk space) to increase speed at initialization time. This has been shown both logically, through the calculation of the graphs with linear regression and linear correlation, and through the implementation.
References
[1]Lin. J. L., Dunham M. H. and Nascimento M. A. A Survey of Distributed Database Checkpointing. Texas:
Department of Computer Science and Engineering, Southern Methodist University. 1997.
[2]DeWitt. D.J., Gray.J. Parallel Database Systems: The
Future of High Performance Database Processing.
San Francisco: Computer Sciences Department, University of Wisconsin. 1992.
[3]Mitchell Carolyn. Component of a distributed database.
Department of Computer science, Norfolk state University. 2004.
[4]Hamidah Ibrahim. Deriving Global And Local Integrity
Rules For A Distributed Database. Departement of
Computer Science Faculty of Computer Science and
Information Technology, University Putra Malaysia 43400 UPM Serdang. 2001.
[5]Steven P Coy. Security Implications of the Choice of
Distributed Database Management System Model:
Relational Vs Object Oriented. University of Maryland. 2008.
[6]Stephane Gangarski, Claudia Leon, Hurbert Naacke,
Marta Rukoz and Pablo Santini. Integrity Constraint
Checking In Distributed Nested Transactions Over
A Database Cluster. Laboratoire d'Informatique de
Paris 6. University Pierre et Marie Curie 8 rue du
Capitaine Scott, 75015, Paris. Centro de Computacion
Paralela Y Distribuida, Universidad Central de Venezuela. Apdo. 47002, Los Chaguaramos, 1041 A,
Caracas, Venezuela. 2006.
[7]Allison L. Powell, James C. French, Jamie Callan, Margaret Connell and Charles L. Viles. The Impact of
Database Selection on Distributed Searching. 23rd
ACM SIGIR Conference on Information Retrieval
(SIGIR’00), pages 232-239, 2000.
[8]Huang Yin-Fu and Her Jyh-Chen. Fragment Allocation in Distributed Database Design. National Yunlin University of Science and Technology, Yunlin, Taiwan 640, R.O.C. 2001.
[9]Filipova Nadezhda and Filipov Filcho. Development Of
Database For Distributed Information Measurement And Control System University of Economics.
Varna, Bul. Kniaz Boris I. 2008.
[10]Stanchev Lubomir. Semantic Data Control In Distributed Database Environment. University of Waterloo. 2001.
[11]Supranto, Statistik Teori dan Aplikasi, Erlangga, 2000.
Paper
Saturday, August 8, 2009
16:50 - 17:10
Room L-211
IT Governance: A Recommended Strategy and Implementation Framework
for e-government in Indonesia
Henderi, Maimunah, Asep Saefullah
Information Technology Department – Faculty of Computer Study STMIK Raharja
Jl. Jenderal Sudirman No. 40, Tangerang 15117 Indonesia
email: [email protected]
Abstract
Many government organizations have been using a great deal of IT in the implementation of their activities. This utilization of IT is known by the term e-government. Its implementation generally aims to improve the quality of activities and services to the community. The role of e-government is therefore increasingly important in supporting and creating good governance in government organizations, and through its implementation governments are expected to provide first-rate service to the community. In fact, however, the implementation of e-government has not all gone well or in accordance with expectations: inefficiencies occur in various governance practices, the planning and implementation of e-government development has not been sustainable, much of the IT infrastructure development overlaps, and service to the community has also not yet reached a prime level.
Keywords: IT governance, e-government, good governance
1. Introduction
In the present millennium era, information technology (IT) is no longer monopolized by business organizations. Governmental organizations, through their various ministries, have also been using IT to optimize the implementation of their activities. Governments use IT to support areas related to the public, organizational asset management, service and operational aspects, and even the measurement of performance and efficiency. Through the application of IT, the government is expected to improve its performance in various aspects in accordance with the principles and practice of good governance, including: (1) creating governance or management systems that follow good governance, (2) increasing community participation, (3) knowing the complaints and real needs of the community and following them up correctly and responsively, (4) making it easy to obtain information, (5) increasing the public's reciprocal confidence in the government, (6) government accountability concerning the interests of the region, and (7) effectiveness and efficiency of government services and operational activities (Henderi et al., 2008). Thus, the use of IT by the government generally aims to improve the quality of activities and services to the community. The utilization of IT by the government is better known as e-government. The implementation of e-government is important for realizing good governance, so that the government is able to provide prime service to the community; for that reason the government has undertaken many efforts to implement e-government, and various applications and infrastructure have been built and used to support its implementation. In fact, however, the implementation of e-government has not all gone well or in accordance with expectations: various inefficiencies occur in the practice of government activities, service to society cannot yet be called prime, many resources are wasted uselessly, the plans for implementing and developing e-government are not yet sustainable, and much of the development of supporting IT infrastructure overlaps. On the other hand, the implementation of e-government supported by IT governance is very important and is expected to give a significant thrust to development. Therefore, in the context of development, IT governance is a very important issue, because it is one of the dimensions that determine the effectiveness and efficiency of IT utilization and ultimately determine the success of development. For that reason, efforts are needed to improve the e-government system through the application of the principles and working methods of IT governance.
2. Issues
Based on the facts mentioned above and on several research articles, opinions, and reports on the strategy, implementation, and benefits of e-government, it is known that the benefits of implementing e-government are not in accordance with what was expected, and are not comparable with the value of the investment that has been spent. The implementation of e-government has not been able to significantly improve the ease of public access to the required information, has not been able to increase community empowerment, and does not yet play a role in providing maximum service to the community. To overcome these problems and challenges, a framework and strategy for IT governance in e-government are needed.
3. Discussion
3.1 Research Literature on the Definition of IT Governance
According to Weill and Ross (2004), IT governance is the proper assignment of decision authority and
responsibility to encourage the desired use of information technology in the company. Meanwhile, Henderi et al.
(2008: 3) define IT governance as a framework of correct decisions and accountability intended to
encourage desirable practices in the use of information technology. Elsewhere, Henderi et al. (2008: 3) also define IT
governance as the basis for measuring and deciding the use and utilization of information technology by
considering the purposes, goals, targets and business of the company. Hence, IT governance aligns business with
the role of IT in achieving the goals and objectives of the organization, and is the responsibility
of the Board of Directors and Executive Management. IT governance is also, in fact, a system for controlling the organization
through the application of IT in order to achieve the objectives and targets set, to create new
opportunities, and to face the challenges that arise. IT governance is therefore a structure and set of mechanisms
designed to give adequate control and management roles to the organization or its management. IT
governance and management both have a strategic function, while management also has an operational
function. IT governance is the mechanism to deliver value, performance and risk management, with a focus on
how decisions are taken, who takes the decisions, which decisions are taken, and why decisions are made. In the
context of governance, IT governance is an integral part of good governance. The following diagram briefly
presents the theory of IT governance architecture.
Figure 1. Diagram of IT governance architecture theory
Based on the IT governance architecture theory diagram in the figure above, it can be concluded that IT
governance generally consists of four elements, namely:
1) purpose/goal, 2) technology, 3) human/people, and
4) process. Each of these elements must then be
understood properly, its achievement strategy defined,
and its progress and sustainability monitored.
3.2 Research Literature on the Definition of e-Government
The term e-government is often used, discussed
and reviewed in various forums and in the literature. However,
understandings of e-government still differ. For
example, Hasibuan A. Zainal (2002), quoting Heeks, defines
e-government as the activities undertaken by the
government using information technology (IT) to provide
services to the community. This definition is in line with
Heeks's observation that almost all government
institutions in the world conduct their activities and services
inefficiently, especially in developing
countries. Many government policy directions are not made well known to the general public,
expenditure of funds is not reported well, and queues occur
at various service centers, contributing to public inefficiency because resources are wasted.
3.3 Relationship between IT Governance and e-Government
Application of the principles and ways of working of IT
governance in e-government is imperative and not
difficult to do, because both have relatively similar characteristics and goals
that support the creation and
implementation of the principles of good governance
in business, social and governmental organizations alike. In the context of development and the
implementation of services to the community, the principles and
ways of working of IT governance always have a relationship
with e-government, because those principles and ways of working of
IT governance are to be applied to e-government.
This relationship can accelerate the implementation of
development efforts in: (a) building the capacity of all
elements of society and government in achieving the
objectives (welfare, progress, independence, and
civilization), (b) community empowerment, and (c) the
implementation of good governance and prime public services.
The implementation and achievement of
development objectives can be optimized through the
application of IT governance in e-government in
accordance with the characteristics and goals of good governance.
The following presents the relationship between the purposes of IT governance
and the goals and characteristics of good governance to be
achieved through the implementation of e-government
(Henderi et al.: 2008):
Table 1. Comparison of IT Governance Goals and Objectives with the Characteristics of Good Governance
Based on the above table, it can be seen that the relation
between IT governance and e-government appears in the similarity
between the purposes of IT governance and the goals and
characteristics of good governance to be achieved by
implementing e-government. Through the implementation
of IT governance in e-government, the principles
of good governance can be implemented optimally.
Therefore, e-government that is built and developed with
attention to, and applying, the principles and ways of working of IT
governance will result in e-government applications that
support the creation and implementation of most of the
principles and main characteristics of good governance,
including:
1. Creating a governance or organizational system consistent with good governance
2. Increasing the involvement and role of the community (participatory)
3. Improving the sensitivity of government organizers to the aspirations of the community (responsive)
4. Ensuring the provision of information and the ease of obtaining accurate and adequate information (transparency), so that trust is created between the government and the community
5. Providing the same opportunities for every member of the community to increase welfare (equality)
6. Guaranteeing services to the community by using available resources optimally and with full responsibility (effective and efficient)
7. Increasing the accountability of decision makers in all areas related to the broad public interest (accountability)
In line with the relationships described above, each
step of development undertaken by the government at
this time, together with the universal challenges faced in the future,
can be briefly illustrated in Figure 2 below.
Figure 2. Development Scheme in the Knowledge Era
(Taufik: 2007)
Based on the two figures above, development carried out
in the knowledge era and the application of IT
governance in e-government should consider the basic
thoughts below:
1. The primary goal of development needs to be translated
through the increased competitiveness of the nation
and social arrangement. Development, as a process, must
concretely improve welfare to a higher level, be more equitable, strengthen self-reliance, and
promote civilization. Implementation of IT governance
in e-government can help accelerate the achievement
of this goal.
2. Each locality has its own potential and unique
characteristics, so the implementation of development
should take into account, among others, the local economic network and the characteristics of the locality.
3. Development should strengthen competitiveness and social arrangement.
4. ICT is also one of the very important factors in the
context of development undertaken by the government.
Based on the lines of thought above,
implementation of IT governance in the context of e-government is relevant to optimizing the implementation of
the principles and ways of working of good governance in government.
Thus, steps to develop and utilize
e-government supported by the principles and ways of working
of IT governance are important in
optimizing the implementation of development so that it becomes
more effective, more efficient, and more empowering for communities.
3.4 IT Governance Requirements in e-Government
The need for IT governance in e-government is in line with
the term and understanding of good governance, which has been
in frequent use for approximately the last 15 years since international
agencies made good governance a condition of their aid programs.
In the field of information technology, good
governance is known by the term IT governance. Thus the
need for IT governance in e-government is equivalent to the need
to optimize the role of IT in achieving good governance
in the organization of government. Through the
implementation of IT governance in e-government,
good governance principles for the organization of
government can be implemented in the form of
strengthening transparency and accountability,
strengthening regulation (rule setting, prudential supervision,
risk management, and enforcement), encouraging market
integrity, strengthening cooperation, and
institutional reform. Therefore, the need for IT governance in e-government is also analogous to the definition of IT
governance in the framework of governance given by Ross
et al. (2004), namely that IT governance is a set of management,
planning and performance reporting, and review processes
associated with making correct and appropriate decisions,
establishing control, and measuring performance over
key investments, services, operations, delivery and
the authorization of change, in line with new opportunities and with applicable
regulations, laws and policies. In connection with the
above, e-government should be built and implemented
in accordance with these principles and ways of working of IT
governance: involving elements of decision-makers in
government, well planned, decided through correct and appropriate decision-making processes (consensus),
with development and implementation that are controlled and monitored,
with evaluation of achievement in improving
the quality of development and first-rate service to the
community, creating new opportunities and facing the challenges
that arise, and complying with government regulations and policies
and even global rules. Meanwhile,
in the context of government, the legal foundation so far laid down
in the form of new IT governance regulations in line with
the principles of good governance places more emphasis on
anticipating corrupt practices and the misuse of state funds.
In its development, efforts to
reform the bureaucracy and to develop a good
governance system of government continue to be
improved. Although some government agencies already have
an IT master plan document, it is often
difficult to implement IT in a well-governed way to
develop e-government, for a variety of reasons. Improvement
at the regulatory level is therefore important, and should be
done by creating a framework and strategy for IT
governance in the development of e-government.
This framework and strategy must be accompanied
by seriousness, consistency and various capacity-building efforts,
particularly on the government side, to achieve good IT
governance in e-government. The framework and strategy are
expected to lead to an action plan and to the implementation
of IT governance in e-government in accordance with
needs, abilities and the specific level of the government organization,
and to support the implementation of the principles of
good governance and good IT governance in the context of
development.
3.5 Infrastructure and Implementation of IT Governance in e-Government
3.5.1 Infrastructure
The provision of information and communication
technology (ICT) infrastructure that is integrated effectively and efficiently
at different levels of government is very important and
closely related to IT governance, governance and
development. ICT infrastructure must be developed in a
planned way, because if this is ignored it will lead
to waste in ICT infrastructure development.
At present this waste occurs because development is often
not synchronized between parties and often overlaps.
Government agencies often work alone without
considering the efforts made by other parties. With an
IT governance framework in place, an integrated national
information and communication infrastructure is expected to be created.
3.5.2 Implementation of IT Governance in e-Government
The problems with ICT infrastructure described previously
are the result of not implementing the concept and ways of working of IT governance in the
development and implementation of the various e-government initiatives. Meanwhile, the development and empowerment of e-government is important to realize good governance
and to provide prime public services to the people.
However, implementation of e-government that is not
accompanied by improvement of IT governance in the
government sector and by the principles of good
governance in the system of government will not produce
optimal results, and may remain nothing more than
government discourse. Therefore, the concept and principles
of IT governance must be applied, at least implicitly, in e-government.
4. IT Governance Framework Strategy for e-Government
4.1 IT Governance Framework Strategy
Many IT governance framework strategies have already been made.
Some of them were prepared and issued by the
IT Governance Institute, including COBIT. From the various
frameworks and strategies, it can be concluded that
frameworks and strategies for IT governance generally
consist of two elements: IT governance domains
and objectives. A framework and IT governance strategy that
can be used as a reference is briefly illustrated in the form of the IT
governance life cycle as follows.
Figure 3. IT Governance Life Cycle, Static View (IT Governance Implementation Guide, 2003)
Based on the three figures above, note that the
IT governance domain, as part of strategy implementation,
consists of five main components (Erik Guldentops: 2004),
namely: (1) Alignment, (2) Value delivery, (3) Risk
management, (4) Resource management, and (5) Performance
management. The five main components are explained as follows:
1. Alignment: emphasizes the integration of the IT organization with the enterprise organization going forward, and the running of IT operations in order to create added value for the organization.
2. Value delivery: delivers the value of IT through on-time completion of development projects, accuracy of the budget used, and fulfilment of the needs that have been identified. IT development processes must be designed, improved and operated efficiently and effectively through timely completion and the achievement of the objectives set and agreed.
3. Risk management: includes the maintenance of value and the internal controls that demonstrate enterprise governance to stakeholders, customers and key shareholders, in order to improve the management of the organization's activities.
4. Resource management: concerns the establishment and development of IT capabilities that fit well with the needs of the organization.
5. Performance management: a cycle that includes the provision of clear feedback, keeping IT governance initiatives aligned on the right path, and creating new opportunities through correct and timely measurement.
The IT governance strategy described on the basis of the
five main elements above is in line with the objectives
to be achieved by organizational leaders in the field
of IT investment. Some of these goals are:
a. To cut operational costs, or to increase revenue while keeping costs the same, for example the marketing costs that must be incurred by the company (transactional goals).
b. To meet or provide information for various needs, including financial information, management, control, reporting, communication, collaboration, and analysis (informational goals).
c. To gain benefit or competitive positioning in the market (increase the organization's market share). For example, the various facilities provided by banks have changed savings products, which used to be a form of investment for bank customers, into bank products that customers must now pay for (strategic goals).
d. To deliver a base/foundation of IT services used by different applications, such as servers, laptops, networks, customer databases, and others (infrastructure goals).
4.2 e-Government Framework Strategy
The e-government framework strategy is basically
defined as a blueprint for the IT aspects of development
and of the implementation of services to the community, so that they become more
effective, more efficient, and more empowering for the community. In
this connection, the e-government framework
strategy is in essence a framework for a general
development strategy describing how e-government is to be used in
the cycle of planning, implementation, monitoring,
evaluation and continuous improvement, in order to
become an integral part of development and service to
the community. The e-government framework strategy also
means that the development and services that the
government carries out need to make better use of ICT in realizing their
goals.
This framework strategy can become a
comprehensive guide, ready to use and tailored to the
nature and characteristics of development and services
carried out from time to time. Thus, the follow-up plan can be
translated in accordance with the challenges, capabilities and
priorities of the government. Alignment of unison and action can be
expected to provide adequate support for change, and
may trigger a significant impact on implementation and the
achievement of goals. The e-government framework strategy
provides the philosophical foundation for IT implementation
in governance and development services in accordance
with the principles of good governance. The e-government
framework strategy consists of at least the following essential elements:
1. Leadership (e-leadership), policy and institutional arrangements;
2. Integrated information and communication (ICT) infrastructure;
3. Application of ICT in government (e-government);
4. Utilization of ICT in community development (e-society); and
5. Industrial development and utilization of ICT for business (e-business).
Of the five main elements of the e-government
strategy, the leadership element should understand and
apply the principles of e-leadership. This e-leadership element is in line with
the opinion delivered by Henderi
et al. (2008: 165) that today's organizational managers are
required to understand the concept and
ways of working of information technology implementation
in order to support the implementation of their
leadership functions. The element of e-leadership is applied in
preparing, implementing, and evaluating policies,
programs, and services. Therefore, this main
element affects the other elements in the
e-government framework and strategy. In line with
the five main framework elements of the e-government
strategy mentioned above, several other principal conditions
also play a role in encouraging the implementation of IT
governance in e-government, including:
a. IT governance should be an integral part of the governance system of government.
b. The main policy makers and stakeholders need to create harmony (alignment) between ICT development and the services performed.
c. Alignment of ICT with organizations or systems of government regulation requires the implementation of an appropriate framework strategy, especially in the implementation phase.
Therefore, an IT governance framework for the
development and implementation of e-government absolutely
must be prepared, approved, legitimized,
and given an adequate legal basis so that it can
contribute significantly to optimizing development and
the delivery of prime services to the community in
accordance with the principles of good governance.
5. Conclusion
Governmental organizations, through their various
ministries and agencies, utilize IT in order to optimize
the implementation of various development activities and
services to the community. The utilization of IT by government
is known by the term e-government, and its implementation
is very important to support and achieve good governance,
so that the government is able to provide prime services to the
community. For that reason, an IT governance framework strategy
for e-government is absolutely necessary, and is used to prevent
or reduce the various problems currently faced in the
implementation of e-government, such as:
inefficiencies occurring in different practices of
governance, resources spent uselessly, e-development plans of
government that have not been sustainable,
the development of IT infrastructure that frequently overlaps,
and services to society that cannot yet be prime. This article
recommends a framework strategy for the development of
IT governance and the development and implementation of e-government, with main attention to the five elements
of IT governance, namely: (1) Alignment, (2) Value delivery,
(3) Risk management, (4) Resource management, and (5)
Performance management, and further defines the five
essential elements of e-government strategy, namely:
(1) Leadership (e-leadership), policy and institutional arrangements, (2)
Integrated information and communication infrastructure,
(3) Application of ICT in government (e-government),
(4) Utilization of ICT in community development (e-society), and (5) Development of the ICT industry and its
utilization for business (e-business). This framework and
strategy can further become a reference in the development
and implementation of e-government that accelerates the
implementation of the principles of good governance in
the system of government.
REFERENCES
Darrell Jones. "Creating Stakeholder Value: The Case for Information Technology Governance". www.kpmg.ca/en/services/advisory/err/inforiskmgmt.html, July 11, 2008.
Hasibuan A. Zainal (2002). Electronic Government For Good Governance. Journal of Management Information Systems and Information Technology 1 (1), 3-4.
Henderi and Sunarya Abas (2008). IT Governance Role In Improving Organizational Performance: Issues, Development Plan and Implementation Strategy. CCIT Journal 2 (1), 1-12.
Henderi and Padeli (2008). IT Governance - Support for Good Governance. CCIT Journal 2 (2), 142-151.
Henderi, Maimunah, and Euis Siti Nur Aisyah (2008). E-Leadership: The Concept and Its Impact on Leadership Effectiveness. CCIT Journal 1 (2), 165-172.
Kordel Luc (2004). IT Governance Hands-on: Using COBIT to Implement IT Governance. Information Systems Control Journal, Volume 2, USA.
Ross, Jeanne, and Weill, Peter (2004). Recipe for Good Governance. CIO Magazine, 15 June 2004, 17 (17).
Taufik (2007). IT Governance: The Approach to Realize Integration in the Development of Regional ICT. Seminar Materials, Trisakti University, Jakarta.
Anonymous (2007). What is Good Governance? United Nations Economic and Social Commission for Asia and the Pacific (UN ESCAP) (4 pages). Accessed on 30 January 2009 from: http://www.unescap.org/huset/gg/governance.htm
Paper
Saturday, August 8, 2009
15:10 - 15:30
Room L-211
Information System Development
For Conducting Certification for Lecturers in Indonesia
Yeni Nuraeni
Information Technology Study Program
Paramadina University
Email : [email protected]
Abstract
Since 2008 the Indonesian Government has conducted certification for non-professor lecturers with the
purpose of recognizing professionalism, protecting the profession and improving lecturer welfare in Indonesia. All these efforts are
expected, in the end, to improve the quality of colleges and universities in Indonesia, as lecturers are the most
important and determining component in studying and teaching at university.
In fact it is not easy for lecturers to be certified; besides the limited quota, the process has to go through a
complex bureaucracy. Therefore every university should set up a mechanism and strategy to simplify the process of
gaining certificates for its lecturers. Among other things that may be done is to set up an integrated information system to
support the lecturer certification process. The required stages are process model design, application module
development and implementation, and system testing. The result of this investigation could become a reference for every
university in Indonesia in the accomplishment of lecturer certification.
Keywords: information system, lecturer certification
I. FOREWORD
1.1 Background
Lecturers are one of the essential components of the
education system in a university. The roles, assignments, and
responsibilities of lecturers are very important in realizing
national education. To perform these functions, roles, and
strategic positions, professional lecturers are required.
Lecturer human resources hold a vital position in
creating the quality image of graduates as well as the quality
of the institution in general. This position is also
strengthened by the fact that lecturers hold
considerable authority in the academic process, even
higher than similar professions in lower educational
institutions.
Public demand for the quantity and quality of the output
generated by universities / faculties / majors is growing
stronger. Even though the number of university graduates is
much larger than before, particular majors,
and especially quality, are still below expectation.
The more advanced the civilization, the greater the competition in
various fields, including science and technology, and even
more so under globalization. Universities have become the
people's foothold for generating high-quality human
resources. Lecturer empowerment has become
compulsory for universities, as it is the key to the success of a university
/ faculty / major, where 60% of the university's success is in the hands
of these lecturers. Whatever education enhancement
policies are designed and defined, at the end of the day
it is the lecturers who actually perform the teaching activities.
Learning and teaching activities depend very much
on lecturers' competence and commitment.
Assessment of lecturer professionalism is crucially
important as one of the efforts to enhance
education quality in the higher education system.
Recognition of professionalism may be realized in the form
of granting certificates to lecturers by an authorized
establishment. Awarding certificates is also an effort to
constitute professional protection and a warranty of lecturer
welfare. Since 2008, the Indonesian government has
conducted a certification process for non-professor lecturers
based on PERMENDIKNAS 42 of 2007 (Decree No. 42
of the Ministry of National Education), as an effort to give
patronage to lecturers in performing their professional
assignments.
Nowadays lecturer certification is
performed by certification establishments in universities,
or in cooperation with other universities acting as accredited
administrators of certification. Every university in Indonesia, privately
managed as well as state-owned, has been guided to
empower an existing unit or to build a new one that has the
potential to perform the certification program. Every university
requires a strategy to make sure that all lecturers in its
campus community can be certified, as one of the efforts to
enhance lecturer performance and welfare,
which is ultimately expected to have implications for education
quality enhancement.
Based on the above conditions, this research will
therefore build an application to facilitate the
accomplishment process, monitoring, evaluation and
guidance of lecturer certification in a university. With this
application, it is expected that every university in Indonesia
can provide a bigger and faster opportunity for its
lecturers to obtain certification.
1.2 Objective
This study has the specific objectives to:
1. Analyze the certification process as it has been conducted so far and identify problems (non-value-added processes) that have made the certification process ineffective and inefficient.
2. Develop a process model for lecturer certification according to the needs of lecturers, the certification committee, PTP-Serdos and other related parties.
3. Develop an application for accomplishing, monitoring, evaluating and guiding lecturer certification according to the suggested process model.
4. Provide reference material for conducting the certification process for each university in Indonesia.
1.3 Urgency of the Research
The lecturer certification process has involved
bureaucratic procedures with many institutions. The
parties involved in the process are the lecturer
concerned, the faculty / major / study program, students,
colleagues, the direct superior, the serdos committee at the
proposing university, the university conducting lecturer certification (PTP-Serdos), the Directorate General of Higher Education, and the
Private Universities Coordinator. The process involves
tracking bundles of required files and documents for
processing certification, which is generally still done manually.
This makes the certification process lengthy and
complicated.
Besides the problem in certification processing, there is
also the quota problem, which limits the number of lecturers
who can be recommended for certification. The government
has defined a quota for the number of lecturers who can be
proposed for certification for each university in
Indonesia. Because of this quota, every university has to
have a strategy and mechanism to define which lecturers
are to be given first priority in being recommended for certification every
year, and to ensure that those recommended lecturers have a big
chance to pass the test and get their certificates.
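As an illustration of such a prioritization mechanism (the paper describes the need for it but does not prescribe one), lecturers could be ranked by an internal readiness score and only the top of the ranking proposed, up to the government quota. The scoring values and quota below are placeholders, not values from the paper; this is a minimal sketch only.

```python
# Illustrative sketch: rank lecturers by an internal readiness score and
# propose the top candidates up to the government quota.
# The scores and the quota value are placeholders; the paper does not define them.

def propose_for_certification(lecturers, quota):
    """lecturers: list of (name, readiness_score); returns the names proposed."""
    ranked = sorted(lecturers, key=lambda item: item[1], reverse=True)
    return [name for name, _score in ranked[:quota]]

candidates = [("Lecturer A", 78.5), ("Lecturer B", 91.0), ("Lecturer C", 84.2)]
print(propose_for_certification(candidates, quota=2))  # ['Lecturer B', 'Lecturer C']
```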
Based on those problems, it is necessary to analyze the
accomplishment process for lecturers getting
certificates, in order to find the non-value-added processes that
make the process ineffective and inefficient for lecturers getting
certificates; then, based on the analysis, a process
model will be made that can hopefully eliminate all those
non-value-added processes and turn them into value-added processes;
this will then be followed up by constructing an implementation of the
accomplishment process for lecturers getting certificates.
With this new process model and its implementation, we
hope for the following:
1. To motivate lecturers to improve their professionalism and to obtain recognition and their rights according to government policy as stated in PERMENDIKNAS 42 of 2007.
2. To simplify the certification process for lecturers, which involves internal parties and parties external to the proposing university.
3. The proposing university has a strategy and mechanism to utilize the quota of lecturers who can be recommended for certification, so that every lecturer has impartiality and a big opportunity to be certified in accordance with his/her academic quality, performance, competency and contribution.
4. By obtaining certification, lecturer welfare will increase, and this is expected to have implications for performance improvement and for quality improvement of higher education institutions in Indonesia.
5. The university has a program plan for counseling those lecturers who do not yet have a certificate, and a program plan for the quality assurance of certified lecturers, so that they can always strive for self-improvement when encountering the challenges of new Science and Technology.
II. LITERATURE REVIEW
2.1 Concept of Lecturer Certification in Indonesia
As stated in Law Number 14 of 2005
concerning Teachers and Lecturers, lecturers are defined
as professional educators and scientists with the main
assignment of transforming, developing, and disseminating science,
technology, and art through
education, research and dedication to society
(Chapter 1 Article 1 Verse 2). Meanwhile, a professional
is defined as a person whose profession or activity, done as a
livelihood, requires expertise and skill and
meets the quality standards of particular norms acquired through
professional education.
Academic qualification for lecturers and various
aspects of performance, as defined in SK
Menkowasbangpan Nomor 38 Tahun 1999 (Decree
of the Coordinating Minister, Menkowasbangpan), are the
determining aspects of a lecturer's authority to
give lectures at a particular stage of education. Besides that,
the competency of the lecturer is also a
determining requirement for teaching authority.
Teaching competency, particularly for a lecturer,
is defined as the set of knowledge, skills and behaviour
that should be possessed, mastered and realized by the
lecturer in performing his/her professional
assignments. These competencies include pedagogic
competency, personality competency, social
competency and professional competency.
Lecturer competency defines the performance quality of
the Tridharma of higher education as shown in professional lecturer
activities. Lecturers who are competent to execute their
assignments professionally are those who have the
pedagogic, professional, personality and social competence
required for education, research and dedication to society.
Students, colleagues and superiors can assess the level of
competence of lecturers. Because this assessment is
based on perception during interaction between
lecturers and evaluators, it may be
called perceptional assessment.
Academic qualification and performance, the level of
competence as assessed by others and by the lecturers themselves,
and the statement of contribution made by the lecturers themselves, all
together define lecturer professionalism.
The professionalism of a lecturer and his/her teaching
authority are stated by the presentation of a teaching certificate. As
appreciation for being a professional lecturer, the
government provides various allowances and other matters
related to a lecturer's professionalism.
The certification concept is briefly presented in the scheme in
Figure 2.1.
The lecturer certification procedure can be illustrated as follows:
Figure 2.2: Lecturer certification procedure
2.2 Quality Assurance Unit for Lecturer Certification
The Directorate General of Higher Education carries out
monitoring and evaluation through a Quality Assurance
Unit in an ad hoc manner. Based on the result of monitoring
and evaluating the PTP-Serdos, the Quality Assurance
Unit gives recommendations to the Director General of Higher
Education concerning the status of the PTP-Serdos. The internal Quality
Assurance Unit of a university conducts
monitoring and evaluation of the certification
establishment in the related university. The performance of the internal
Quality Assurance Unit is in turn monitored and evaluated by the
Quality Assurance Unit for Higher Education.
Lecturer certification is meant to confer teaching
authority in colleges and universities in accordance with
Law No. 14 of 2005. The obvious challenge is the
development of science and technology (IPTEKS) in real life. Lecturers in
colleges and universities should always be able to improve
their own quality to meet this challenge.
The post-certification quality assurance program for facing
IPTEKS development consists of:
1. Continual counseling by the lecturer's own university as well as other institutions.
2. Self-study performed by lecturers individually as well as in groups.
3. Application of the concept of lifelong education, where study is part of their lives.
Figure 2.3: Quality Assurance Unit for Lecturer Certification
III. METHOD OF STUDY
The steps taken for the development of the information system for lecturer certification in Indonesia are as follows:
1. Requirement analysis. At this stage, a literature study and analysis are performed concerning the policies, the assessment system and criteria applied, and all parties involved in lecturer certification, together with the identification of problems occurring in the execution.
2. Lecturer certification process model design. At this stage, a process model for the execution of lecturer certification is designed, taking efficiency aspects into consideration and eliminating manual processes as far as possible. In this design, an internal certification process, performed by the proposing university, and an external certification process, coordinated with the authorized parties (PTP-Serdos, Kopertis and Ditjen Dikti), are defined.
3. Information system design.
Data architecture: defining all required data, where they reside and how to access them.
Software architecture: defining the software to be used, which applications will be built using this software, which functions will be used, and how to utilize and retrieve them.
Presentation architecture: defining the layout design and the look and feel.
Infrastructure architecture: defining the server that will host the website, where the software can run, and what computer platform will be used.
4. Implementation and testing, consisting of the following steps:
- Building and testing the application code and the functions to be used.
- Installing the required infrastructure components.
- Installing and running the system.
IV. RESULT AND DISCUSSION
4.1 Execution of the Lecturer Certification Process in Indonesia
The process model consists of three stages, i.e.:
1. Internal certification process model
The internal certification process is conducted by the proposing university with the target of selecting the lecturers who will be recommended to receive external certificates. At this stage, a simulation of the professionalism score calculation is performed based on the existing criteria; lecturers who pass will continue to be proposed for external certification, while those who cannot pass the test will receive a counseling program, and replacement lecturers may be recommended to fill up the government quota.
2. External certification process model
Lecturers who have passed the internal certification process continue with the external certification process in accordance with the procedures defined by the government.
3. Counseling process model and quality assurance of certified lecturers
Lecturers who could not pass either the internal or the external certification receive counseling to enhance their professionalism, so that at the next opportunity they can be proposed again to attend external certification. Lecturers who have passed certification undergo quality assurance, so that they may continuously improve themselves in line with developments in Science and Technology.
Figure 4.1: Execution of lecturer certification in Indonesia
4.2 Software Modules of the Information System for the Execution of Lecturer Certification in Indonesia
Based on the above-mentioned process model design, a number of application modules are required, as follows:
1. Lecturer academic data application: to store academic position data, education level and the sequence list of lecturer grades.
2. DSS & Expert System application: to define lecturer priorities for being proposed for certification and to compute the PAK score of each lecturer.
3. Perceptional assessment application: consists of questionnaire forms to be filled in by students, co-workers and the selected superior, used to compute the perceptional score.
4. Lecturer Tridharma contribution application: in the form of a portfolio to be filled in by the lecturer proposed to attend certification, used to compute the personal score.
5. Combined scores application: computes the combined PAK and personal scores.
6. Consistency application: computes the consistency between the perceptional score and the personal score.
7. Lecturer certification pass determination application: computes the final score and provides lists of lecturers who passed and failed the internal certification process.
8. Lecturer counseling program application: used for planning the counseling program for lecturers who do not pass either internal or external certification.
9. Certified lecturer quality assurance program application: for planning continual development programs, self-study and lifelong education for lecturers who have already obtained certification.
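To make the scoring chain of modules 5, 6 and 7 more concrete, the following is a minimal sketch. The paper does not give the actual formulas, weights, tolerances or pass marks, so all numeric values, field names and function names below are assumptions for illustration only.

```python
# Illustrative sketch of modules 5-7 (combined score, consistency, pass determination).
# The weights, thresholds and score ranges are NOT given in the paper;
# the values below are placeholders.

from dataclasses import dataclass

@dataclass
class LecturerScores:
    name: str
    pak_score: float           # academic credit (PAK) score, module 2
    perceptional_score: float  # averaged questionnaire score, module 3
    personal_score: float      # portfolio / self-description score, module 4

def combined_score(s: LecturerScores, w_pak: float = 0.4, w_personal: float = 0.6) -> float:
    """Module 5: combine PAK and personal scores (weights are assumed)."""
    return w_pak * s.pak_score + w_personal * s.personal_score

def is_consistent(s: LecturerScores, max_gap: float = 0.5) -> bool:
    """Module 6: check consistency between perceptional and personal scores
    (tolerance is assumed)."""
    return abs(s.perceptional_score - s.personal_score) <= max_gap

def passes_internal_certification(s: LecturerScores, pass_mark: float = 3.0) -> bool:
    """Module 7: pass if the combined score reaches the pass mark and the two
    assessments are consistent (criteria are assumed)."""
    return combined_score(s) >= pass_mark and is_consistent(s)

if __name__ == "__main__":
    candidates = [
        LecturerScores("Lecturer A", pak_score=3.5, perceptional_score=3.2, personal_score=3.4),
        LecturerScores("Lecturer B", pak_score=2.4, perceptional_score=3.8, personal_score=2.1),
    ]
    for c in candidates:
        status = "pass" if passes_internal_certification(c) else "counseling program"
        print(f"{c.name}: combined={combined_score(c):.2f} -> {status}")
```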
4.3 User Interface of the Information System for Conducting Lecturer Certification in Indonesia
Several user interfaces of the information system for the execution of lecturer certification may be seen in the following illustration:
Figure 4.2: User interface of the information system for the execution of lecturer certification
V. CONCLUSION
Lecturers are the most important component in
education. Lecturers, in their position as professional
educators and scientists, are assigned to transform,
develop and disseminate science, technology and art through
education, research and dedication to the people.
The Government of Indonesia,
through Permendiknas No. 42/2007, awards recognition
of professionalism, protects the profession and also
assures lecturer welfare in the form of the execution of
lecturer certification.
Each and every college and university in Indonesia
should have an effective mechanism and strategy in executing
lecturer certification, to ensure that lecturers under the shelter
of their organization can easily obtain certification; it is
expected that the performance and welfare of lecturers may
thereby be improved, which ultimately can have implications for the quality
enhancement of universities in Indonesia.
VI. REFERENCE LIST
1. Direktorat Jenderal Pendidikan Tinggi, Departemen Pendidikan Nasional, 2008, Buku I Naskah Akademik Sertifikasi Dosen, Ditjen Dikti.
2. Direktorat Jenderal Pendidikan Tinggi, Departemen Pendidikan Nasional, 2008, Buku II Penyusunan Portofolio Sertifikasi Dosen, Ditjen Dikti.
3. Direktorat Jenderal Pendidikan Tinggi, Departemen Pendidikan Nasional, 2008, Buku III Manajemen Pelaksanaan Sertifikasi Dosen dan Pengelolaan Data, Ditjen Dikti.
4. Ditjen Dikti, 2008, Kinerja Dosen Sebagai Penentu Mutu Pendidikan Tinggi, Ditjen Dikti.
5. Tim Sertifikasi Ditjen Dikti, 2008, Sertifikasi Dosen Tahun 2008, Ditjen Dikti.
6. Tim Serdos UPI, 2008, Sosialisasi Sertifikasi Dosen, Tim Serdos UPI.
7. Barizi, 2008, Pemberdayaan dan Pengembangan Karir Dosen, Institut Pertanian Bogor.
8. Pressman, Roger S., 2007, Software Engineering: A Practitioner's Approach, McGraw-Hill Companies, Inc.
9. Suryanto Herman Asep, 2005, Review Metodologi Pengembangan Perangkat Lunak, http://www.asephs.web.ugm.ac.id
10. Dadan Umar Daihani, 2001, Komputerisasi Pengambilan Keputusan, PT. Elex Media Komputindo, Jakarta.
11. Turban, Efraim, 2005, Decision Support Systems And Intelligent Systems, Edisi 7 Jilid 1 & 2, Andi, Yogyakarta.
Paper
Saturday, August 8, 2009
15:10 - 15:30
Room L-211
A MODEL OF ADAPTIVE E-LEARNING SYSTEM
BASED ON STUDENT’S MOTIVATION
Sfenrianto
Doctoral Program Student in Computer Science, University of Indonesia
email: [email protected]
ABSTRACT
This study describes a model of an adaptive e-learning system based on the characteristics of student motivation in the
learning process. The proposed model aims to deliver learning materials adaptively and intelligently according
to differences in the characteristics of student motivation, and to solve the problem of traditional learning systems,
which provide the same materials to all students regardless of motivation characteristics. Students should instead be
provided with materials suited to their learning ability, their characteristics and their motivation in
learning. The system development approach has the advantage of combining the characteristics of
motivation, intelligent systems and adaptive systems in an e-learning environment. The system becomes a model for
intelligent and adaptive e-learning over the diversity of students' motivation to learn. The study identifies student motivation in
learning with the support of an intelligent and adaptive system, which can be used as a standard for providing learning
material to students based on their motivation levels.
Keywords: adaptive system, e-learning, student's motivation, adaptive e-learning.
1. INTRODUCTION
The development of e-learning activities in universities
should take into account the condition of the students
concerned, due to the paradigm change in learning
from Teacher Centered Learning toward Student
Centered Learning. An e-learning system with a
Student Centered Learning approach can encourage students to
learn more actively, more independently, according to their style of
learning, and so on. Thus, using an e-learning system
with a Student Centered Learning approach will support
students in learning more optimally, because they
will get the learning materials and information they need.
E-learning systems should be developed to be student
centered. Such a system will have various features
that support the creation of an electronic learning
environment, among other features to ensure interaction
between faculty and students, and to manage
meeting schedules and assignments, including
tests, quizzes, and written papers [1].
E-learning in universities at this time is not fully Student
Centered Learning, and is often limited to enriching
conventional teaching [1].
Conventional learning provides the same learning materials to
each user, as it assumes that all users' characteristics
are homogeneous. In fact, each user has characteristics
that differ in terms of level of ability, motivation,
learning style, background and other characteristics. E-learning
systems should therefore be developed to overcome
conventional learning, in terms of providing learning
materials and handling user behavior, particularly for the
classification of student motivation.
Motivation is a paramount factor in student
success. In particular, many educational psychologists
emphasize that motivation is one of the most important
affective aspects of the learning process [2]. From the
cognitive viewpoint, motivation is related to how an
individual student's internal states, such as goals, beliefs, and
emotions, affect behaviors [3]. Motivation obviously
influences students' learning behaviors as they try to attain their
learning goals. Therefore, one of the main concerns in
education should be how to induce the cognitive and
emotional states and desirable learning behaviors that
make the learning experience more interesting [4]. Although
it is well known that students' motivation and emotional
state in educational contexts are very important, they have
not been fully used in e-learning systems based on the
motivational characteristics of students in learning.
An AES (Adaptive E-learning System) tries to develop a
student centered e-learning method, with a
representation of the learning content that can be customized
to different variations of user characteristics,
such as user motivation. An AES can gather information
from users about their objectives, options and knowledge, and
then adapt to the needs of specific users. This is because
the components of an AES can be developed from a
combination of two systems, namely ITS (Intelligent
Tutoring Systems) and AHS (Adaptive Hypermedia
Systems) [5].
ITS components can take a student's ability into account and present
the material in accordance with the learning ability and
motivation characteristics of the student.
In this way the learning process becomes more effective.
The ability to understand the student is part of the
"intelligence" of an ITS; in addition, an ITS can also recognize the
weaknesses of students, so that pedagogic decisions can be taken to
address them [6]. The AHS components, in turn,
allow e-learning material to be delivered to
students dynamically, based on their level of knowledge
and in a material format in accordance with their motivation
characteristics. In other words, the AHS provides links /
hyperlinks in the display to navigate to relevant information
and hides information that is not relevant. This gives students
the freedom, flexibility and comfort to
select learning material appropriate to their desires [5].
This, in turn, allows students' motivation in using
e-learning to increase.
2. ADAPTIVE E-LEARNING SYSTEM
Many e-learning systems have been developed to support students
in learning. E-learning can support them in getting
the learning materials and information they need. One method
that can be used to optimize the effectiveness of the
learning process through e-learning is an adaptive
system [3].
An adaptive system for e-learning is therefore called an
adaptive e-learning system (AES).
An adaptive e-learning system is described, according to
Stoyanov and Kirschner, as follows:
"An adaptive e-learning system is an interactive system
that personalizes and adapts e-learning content,
pedagogical models, and interactions between
participants in the environment to meet the individual
needs and preferences of users if and when they arise"
[7]. Thus, an adaptive e-learning system takes on all the properties
of adaptive systems. To fit the needs of applications in
the field of e-learning, adaptive e-learning systems adapt
the learning material by using user models.
An e-learning system is called adaptive when the system is
able to adjust automatically to the user based on
assumptions about the user [7]. In the context of e-learning, adaptive systems are specifically focused on the
adaptation of learning content and the presentation of learning content.
How the learning content is to be presented is the focus of an
adaptive system [8]. De Bra et al. propose a model
of an adaptive system consisting of three main components,
namely the Adaptation Model, the Domain Model and the User Model,
as in the following figure [9]:
Figure 1. A Model of an Adaptive System [9]
Brusilovsky and Maybury propose that an adaptive system consists of the learner and
the learner's model, as in the following figure [10]:
Figure 2. An Adaptive System [10]
From these descriptions of adaptive systems, a three-stage
adaptive system process can be identified, namely:
the process of collecting data about the user (Learner
Profile), the process of building a user model (User
Modeling) and the process of adaptation (Adaptation).
This study describes the details of each sub-model of the
adaptive system in the following three sections.
2.1 Learner Profile
In an adaptive system, the learner profile component is used to
obtain student information. This information is stored
as collected, but the possibility of the information
changing remains open. Changes occur
because learner profile information, such as level of
motivation, learning style, and so on, also changes. The
learner profile has four categories of information that can
be used as benchmarks, namely [11]:
· Student's behavior, consisting of information on level of motivation, learning style and learning materials.
· Student's knowledge, the information about the knowledge levels of students. Two approaches can be used: automatic testing (auto-evaluation) by the adaptive system and manual testing (manual evaluation) by a teacher. The knowledge levels of students can be categorized as: new, beginner, medium, advanced and expert.
· Student's achievement, the information relating to the student's achievement results.
· Student's preferences, which describe the student's information preferences, such as cognitive preferences (introduction, content, exercise, etc.) and physical support preferences (text, video, images, etc.).
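As a concrete reading of the four benchmark categories above, the following is a minimal sketch of a learner profile record. The field names and the enumerated levels are assumptions for illustration; the paper does not prescribe a schema.

```python
# Illustrative sketch of a learner profile holding the four benchmark categories.
# Field names and values are assumptions, not a schema defined by the paper.

from dataclasses import dataclass, field
from typing import List

KNOWLEDGE_LEVELS = ["new", "beginner", "medium", "advanced", "expert"]

@dataclass
class LearnerProfile:
    # Student's behavior
    motivation_level: str = "unknown"        # e.g. "motivated" / "not motivated"
    learning_style: str = "unknown"
    learning_materials: List[str] = field(default_factory=list)
    # Student's knowledge (auto-evaluation or manual evaluation by a teacher)
    knowledge_level: str = "new"
    # Student's achievement
    achievements: List[str] = field(default_factory=list)
    # Student's preferences
    cognitive_preferences: List[str] = field(default_factory=list)  # introduction, content, exercise, ...
    media_preferences: List[str] = field(default_factory=list)      # text, video, images, ...

profile = LearnerProfile(motivation_level="motivated",
                         learning_style="visual",
                         knowledge_level="beginner",
                         media_preferences=["video", "images"])
```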
2.2 User Modeling
The user modeling process requires techniques to gather
relevant information about the student; it therefore plays an important
role in an adaptive system.
According to Koch, the purposes of user modeling are to
assist a user during the learning of a given topic, to offer
information adjusted to the user, to adapt the interface to
the user, to help a user find information, to give the
user feedback about her knowledge, to support
collaborative work, and to give assistance in the use of
the system [12].
De Bra et al. similarly describe the goals of user
modeling as: to help a user during a lesson on a given topic, to
offer customized information to users, to adjust link
information to the user, to help a user find information, to
provide feedback to the user about her knowledge, to
support collaborative work, and to provide assistance in
the use of the system [9].
2.3 Adaptation Model
The process of adaptation is developed from the student's
achievement, the student's preferences, the student's motivation,
and the student's knowledge. These benchmarks of the
student are stored in a user model. The user model
is held by the system and provides information about the
user, for example knowledge, motivation, etc. A user
model gives the system the possibility to distinguish between
students and the ability to tailor
its reaction depending on the model of the user [10].
Thus, an adaptation model is required by the adaptive
system to obtain information for adaptation. It contains a set
of adaptation rules stating the terms and
conditions under which the adaptive system takes an action.
System components for adaptive e-learning can be developed as follows [13]:
· Adaptive information resources: give the user information in accordance with the project they are doing, and add notes to the resources and projects as needed based on the user's own knowledge.
· Adaptive navigational structure: records or adapts the structure to provide navigation information to users about the learning material for the next lesson.
· Adaptive trail generation: provides guidance by giving examples that fit the goals of learning.
· Adaptive project selection: provides a suitable project depending on the user's goals and previous knowledge.
· Adaptive goal selection: suggests the purpose of learning according to the knowledge of the user.
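To make the notion of adaptation rules above more tangible, the following is a minimal sketch (not taken from the paper) of condition-action rules evaluated against a user model. The specific conditions, action names and user-model keys are assumptions.

```python
# Minimal sketch of an adaptation model as a list of condition -> action rules
# evaluated against the user model. Rules and actions are illustrative only.

from typing import Callable, List, Tuple

# A rule pairs a condition on the user model with an adaptation action name.
Rule = Tuple[Callable[[dict], bool], str]

ADAPTATION_RULES: List[Rule] = [
    (lambda u: u.get("knowledge_level") == "new", "show_introductory_resources"),
    (lambda u: u.get("motivation") == "not motivated", "generate_guided_trail"),
    (lambda u: u.get("goal") == "exam_preparation", "select_exercise_project"),
]

def adapt(user_model: dict) -> List[str]:
    """Return the adaptation actions whose conditions hold for this user."""
    return [action for condition, action in ADAPTATION_RULES if condition(user_model)]

print(adapt({"knowledge_level": "new", "motivation": "not motivated"}))
# -> ['show_introductory_resources', 'generate_guided_trail']
```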
The components of the adaptive e-learning system (AES)
described previously can now be developed. In AES development,
components such as the learner profile and user modeling allow a
combination with the characteristics of student
motivation, with the information focused on student motivation
and student behavior.
The next topics explain the two main components that
support an AES. An AES is a combination of two components:
intelligent tutoring systems (ITS) and adaptive hypermedia
systems (AHS), as in the following figure:
Figure 3. Components of AES [10]
3. INTELLIGENT TUTORING SYSTEMS (ITS)
A teaching and learning approach using an ITS
can be used to design learning. This approach
represents a paradigm change in teaching and learning. According
to Merrill, P.F., et al. and Shute, V.J. and Psotka, J., a teaching
system based on the concept of artificial intelligence has
been able to provide effective learning for students [14] [15].
Intelligent Tutoring Systems (ITS) are adaptive
instructional systems applying artificial intelligence (AI)
techniques. The goal of an ITS is to provide the benefits of
one-on-one instruction automatically. As in other
instructional systems, an ITS consists of components
representing the learning content, teaching and
instructional strategies, as well as mechanisms to
understand what the student does or does not know.
The effectiveness of learning with an ITS lies in its ability
to understand the behavior of students. Thus, the
understanding embodied in an ITS has developed into a
system that is able to "understand" learning style and
motivation, and to provide flexibility in presenting material.
The ability raised by an ITS, according to Franek, can
be realized in its ability to deliver pedagogic material
with characteristics suited to the students, to give assignments,
and to assess capabilities [16].
An ITS requires a dynamic model of learning, with a set of
rules or related modules, where the system has the ability
to evaluate solutions against the correct solutions
in order to respond to a user's behavior. A model of an ITS
for e-learning is described next in Figure 4; the model comprises
the following modules [17]:
Figure 4. Modules of ITS [17]
· Domain model: provides the knowledge that the student will be taught, and consists of declarative knowledge (lessons, tests, exams, etc.) and procedural knowledge (sets of rules to execute a task).
· Student model: records information about the student (personal information, interaction and learning process parameters).
· Teacher model: records information about the teacher (such as their personal information).
· Teaching model: defines the students' learning cycle. For this, it adjusts the presentation of the material to each student's knowledge according to the information contained in the student model. The teaching model comprises seven modules: evaluation model, problem generation model, problem-solving model, model for analyzing students' answers, model for generating plans of revision units, model for predicting students' grades, and the syllabus generation model.
· Graphic interface: responsible for user interaction with the intelligent tutoring system.
4. ADAPTIVE HYPERMEDIA SYSTEM (AHS)
Brusilovsky proposes a model of AHS with two categories,
namely adaptive presentation and adaptive navigation
support, as follows [5] [18]:
Figure 5. AHS Model [5] [18]
· Direct Guidance: the system visually guides the user directly, indicating which node is the best one to access next and recommending that the user continue to that web page.
· Adaptive Link Sorting: the system adaptively sorts the nodes (links) on a particular web page so that they are ordered according to the user model.
· Adaptive Link Hiding: the system adaptively hides nodes of a web page to prevent the user from accessing following pages that are not relevant.
· Adaptive Link Annotation: the system adaptively annotates nodes with short comments about the web pages they lead to.
· Adaptive Link Generation: the system adaptively generates new links from the current node to particular web pages for the user.
· Map Adaptation: the system adapts a map of the node structure, visualizing the navigation structure graphically so that the user can reach the desired web page.
4.1 Adaptive Presentation
Adaptive presentation attempts to adapt the information presented to a particular user through the AHS, adjusting the content of a hypermedia page to the user. The page can thus select which information will be presented to the user. There are three methods of adaptive presentation [5][18]:
· Adaptive Multimedia Presentation: provides additional information, explanations, illustrations, examples, and so on for users who require them, using an adaptive multimedia system.
· Adaptive Text Presentation: presents the information of a web page with different kinds of text, reflecting the fact that the same information can be presented to different users in different ways.
· Adaptation of Modality: allows a user to change the modality of the material; for example, some users prefer only an example of a definition, while others prefer other forms of the information.
4.2 Adaptive Navigation Support
There are six methods of adaptive navigation support [5][18]: Direct Guidance, Adaptive Link Sorting, Adaptive Link Hiding, Adaptive Link Annotation, Adaptive Link Generation, and Map Adaptation, as described in Section 4 above.
5. STUDENT MOTIVATION CHARACTERISTICS
To classify students' motivation in learning, the Promoting Motivation Tutor (MPT) model can be used. The MPT has two components: motivation in the learning contents and motivation in the learning exercises [19].
Motivation in the learning contents is modeled with three variables: Time spent T(x) = {Fast, Medium, Slow}, Number of activities A(x) = {Many, Normal, Few}, and Help request H(x) = {Yes, No} [9]. For example, if a student spends a long time on the learning contents, performs many activities, and asks for help, this motivation is represented as rule CD5 (Contents Diagnosis) = f(Slow, Many, Yes). There are therefore 18 (3*3*2) rules formed from the combinations of the three variables (rule numbers CD1…CD18). Rules CD1…CD9 indicate that the student is motivated, and rules CD10…CD18 indicate that the student is not motivated, as shown in Table 1 below.
Table 1. Motivation rules in the learning contents
Rule number   Time spent   Number of activities   Help request   Motivation
CD1           Slow         Many                   No             Yes
CD2           Slow         Normal                 No             Yes
CD3           Medium       Many                   No             Yes
CD4           Medium       Normal                 No             Yes
CD5           Slow         Many                   Yes            Yes
CD6           Slow         Normal                 Yes            Yes
CD7           Medium       Many                   Yes            Yes
CD8           Medium       Normal                 Yes            Yes
CD9           Slow         Few                    No             Yes
CD10          Slow         Few                    Yes            No
CD11          Medium       Few                    No             No
CD12          Medium       Few                    Yes            No
CD13          Fast         Many                   No             No
CD14          Fast         Normal                 No             No
CD15          Fast         Many                   Yes            No
CD16          Fast         Normal                 Yes            No
CD17          Fast         Few                    No             No
CD18          Fast         Few                    Yes            No
Motivation in the learning exercises is modeled with four variables: Quality of problem solving Q(x) = {High, Medium, Low}, Time spent T(x) = {Fast, Medium, Slow}, Hint or solution request S(x) = {Yes, No}, and Relevant contents request R(x) = {Yes, No} [9]. There are therefore 36 (3*3*2*2) rules formed from the combinations of the four variables (rule numbers CD1…CD36), where rules CD1…CD18 indicate that the student is motivated and rules CD19…CD36 indicate that the student is not motivated, as shown in Table 2 below.
Table 2. Motivation rules in the learning exercises
Based on the results in Table 1 and Table 2, these rules can be used as the standard rules for determining a student's motivation.
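As an illustration of how the contents-diagnosis rules of Table 1 could be applied in software, the following minimal C sketch looks up the motivation decision for a given combination of the three variables. The rule data is copied directly from Table 1; the types and function names are illustrative assumptions, not the authors' implementation.

/* Sketch: looking up the contents-diagnosis (CD) motivation rules of Table 1. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *time;        /* Fast, Medium, Slow */
    const char *activities;  /* Many, Normal, Few  */
    const char *help;        /* Yes, No            */
    int         motivated;   /* 1 = motivated, 0 = not motivated */
} CdRule;

static const CdRule CD_RULES[18] = {
    {"Slow","Many","No",1},    {"Slow","Normal","No",1},   {"Medium","Many","No",1},
    {"Medium","Normal","No",1},{"Slow","Many","Yes",1},    {"Slow","Normal","Yes",1},
    {"Medium","Many","Yes",1}, {"Medium","Normal","Yes",1},{"Slow","Few","No",1},
    {"Slow","Few","Yes",0},    {"Medium","Few","No",0},    {"Medium","Few","Yes",0},
    {"Fast","Many","No",0},    {"Fast","Normal","No",0},   {"Fast","Many","Yes",0},
    {"Fast","Normal","Yes",0}, {"Fast","Few","No",0},      {"Fast","Few","Yes",0},
};

/* Returns 1 (motivated), 0 (not motivated), or -1 if the combination is unknown. */
static int contents_diagnosis(const char *t, const char *a, const char *h) {
    for (int i = 0; i < 18; i++)
        if (!strcmp(CD_RULES[i].time, t) && !strcmp(CD_RULES[i].activities, a) &&
            !strcmp(CD_RULES[i].help, h))
            return CD_RULES[i].motivated;
    return -1;
}

int main(void) {
    /* Rule CD5 from the text: f(Slow, Many, Yes) -> motivated. */
    printf("CD5 -> %d\n", contents_diagnosis("Slow", "Many", "Yes"));
    return 0;
}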
6. ARCHITECTURE DESIGN OF AES MOTIVATION
The model of an adaptive e-learning system based on students' motivation developed in this study combines components of AES, ITS, AHS, and MPT [9,10,17,18,19]. The AES Motivation model consists of several components, namely: a domain model, student model, teacher model, pedagogic model, adaptation model, graphic interface, motivation in learning contents, and motivation in learning exercises. The architecture of the AES Motivation model is shown in full in Figure 6.
Figure 6. Architecture Design of AES Motivation
The study by W. Fajardo Contreras et al., "An Intelligent Tutoring System for a Virtual E-learning Center" [17], is the basis of the architecture of the AES Motivation model above.
7. CONCLUSION
The development of an intelligent and adaptive e-learning system based on the characteristics of students' motivation level can combine several components of ITS, AHS, and MPT. The ITS components are the domain model, student model, teacher model, pedagogic model, and graphic interface. The AHS components, adaptive presentation and adaptive navigation support, can make the AES adaptive. The MPT variables, motivation in the learning contents and motivation in the learning exercises, allow the AES Motivation model to determine the characteristics of a student's motivation.
The architecture of the AES Motivation model was developed from several lines of research: AES, ITS, AHS, and MPT. The system can accommodate the adaptive delivery of learning materials and identify the characteristics of students' motivation during the learning process.
FUTURE RESEARCH
Research on the classification of students' motivation has already been conducted using the Naïve Bayes method; the results have been trialed on a small scale (Sfenrianto [20]).
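To illustrate the direction mentioned above, the following C sketch shows a categorical Naïve Bayes classifier over the three contents-diagnosis variables, trained on a handful of rows taken from Table 1. The counting-with-Laplace-smoothing approach, the tiny training set, and all names are illustrative assumptions only; this is not the method or data of [20].

/* Illustrative sketch: categorical Naive Bayes over the motivation variables
 * (Time spent, Number of activities, Help request). Training rows are a
 * subset of Table 1; everything else is an assumption for illustration. */
#include <math.h>
#include <stdio.h>

#define N_FEAT   3
#define N_VALS   3            /* each feature has at most 3 categories */
#define N_CLASS  2            /* 0 = not motivated, 1 = motivated      */
#define N_TRAIN  8

/* time: 0=Fast 1=Medium 2=Slow; activities: 0=Few 1=Normal 2=Many;
 * help: 0=No 1=Yes; last column = motivated? */
static const int train[N_TRAIN][N_FEAT + 1] = {
    {2, 2, 0, 1}, {2, 1, 0, 1}, {1, 2, 1, 1}, {2, 0, 0, 1},  /* rows from Table 1 */
    {0, 2, 0, 0}, {0, 1, 1, 0}, {1, 0, 0, 0}, {0, 0, 1, 0},
};

static int predict(const int x[N_FEAT]) {
    double best = -1e30;
    int best_c = -1;
    for (int c = 0; c < N_CLASS; c++) {
        int nc = 0;
        for (int i = 0; i < N_TRAIN; i++) nc += (train[i][N_FEAT] == c);
        double logp = log((nc + 1.0) / (N_TRAIN + N_CLASS));     /* smoothed prior */
        for (int f = 0; f < N_FEAT; f++) {
            int match = 0;
            for (int i = 0; i < N_TRAIN; i++)
                match += (train[i][N_FEAT] == c && train[i][f] == x[f]);
            logp += log((match + 1.0) / (nc + N_VALS));           /* smoothed likelihood */
        }
        if (logp > best) { best = logp; best_c = c; }
    }
    return best_c;
}

int main(void) {
    const int query[N_FEAT] = {2, 2, 1};   /* Slow, Many, Yes (rule CD5) */
    printf("Predicted motivation: %d\n", predict(query));
    return 0;
}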
REFERENCES
[1] Hasibuan, Zainal A and Harry B. Santoso, (2005) “The
Use of E-Learning towards New Learning
Paradigm: Case Study Student Centered ELearning Environment at Faculty of Computer
Science – University of Indonesia”, ICALT TEDC,
Kaohsiung, Taiwan.
[2] Vicente, A. D. (2003) “Towards Tutoring Systems that
Detect Students’ Motivation: an
Investigation”, Ph.D. thesis, Institute for Communicating
and Collaborative Systems, School of Informatics,
University of Edinburgh, U.K.
[3] Yong S. K., Hyun J. C., You R. C., Tae B. Y., Jee-H. L.
(2007) “A Perspective Projection Tutoring System
With Motivation Diagnosis And Planning,”,
Proceedings of the ASME International Design
Engineering Technical Conferences & Computers
and Information in Engineering Conference IDETC/
CIE September 4-7, 2007, Las Vegas, Nevada, USA.
[4] Soldato, T. D., (1994) “Motivation in Tutoring Systems”,
Ph.D. thesis, School of Cognitive and Computing
Sciences, The University of Sussex, U.K. Available
as Technical Report CSRP 303.
[5] Brusilovsky, (2001) "Adaptive Hypermedia", User Modeling and User-Adapted Interaction, vol. 11, no. 1–2, pp. 87–110, 2001. http://www2.sis.pitt.edu/~peterb/papers/brusilovsky-umuai-2001.pdf [Accessed May 18th, 2008].
[6] Nwana, H. and Coxhead, P. (1988), “Towards an
Intelligent Tutor for a Complex Mathematical
Domain”, Expert System, Vol. 5 No. 4.
[7] Stoyanov and Kirschner, (2004) “Expert Concept
Mapping Method for Defining the Characteristics
of Adaptive E-Learning”, ALFANET Project Case.
Educational Technology, Research & Development, vol. 52, no. 2, pp. 41–56.
[8] Modritscher et al. (2004) “The Past, the Present and
the future of adaptive ELearning”.
In Proceedings of the International Conference Interactive
Computer Aided Learning (ICL2004).
[9] De Bra et al. (1999) “AHAM: A Dexter-based Reference
Model for Adaptive Hypermedia”, In Proceedings
of the 10th ACM Conference on Hypertext and
Hypermedia (HT’99), P.p. 147–156.
[10] Brusilovsky and Maybury, (2002) “From adaptive
hypermedia to the adaptive web”, Communications
of the ACM, vol. 45, no. 5 p.p. 30–33.
[11] Hadj M’tir, et all, “E-Learning System Adapted To
Learner Propil”, RIADI-GDL Laboratory, National
School of Computer Sciences, ENSI, Manouba,
TUNISIA, http://medforist.ensias.ma/Contenus/
Conference%20Tunisia%20IEBC%202005/papers/
June24/08.pdf [Accessed: Jun 4th, 2009].
[12] Nora Koch, (2000) “Software Engineering for
Adaptive Hypermedia Systems”, PhD thesis,
Ludwig-Maximilians-University Munich/ Germany,
2000. http://www.pst.informatik.uni-muenchen.de/
personen/kochn/PhDThesis Nora Koch.pdf
[Accessed: Jan 10th, 2008]
[13] Henze and Nejdl, (2003) “Logically Characterizing
Adaptive Educational Hypermedia Systems”, In
Proceedings of InternationalWorkshop on
Adaptive Hypermedia and Adaptive Web-Based
Systems (AH’03), P.p. 15–29. AH2003.
[14] Merrill, P. F., et al., (1996) "Computer Assisted Instruction (CAI)", Macmillan Publishing.
[15] Shute, V.J. and Psotka, J., (1998) “Intelligent Tutoring
System: Past, Present and Future”, abstract for
chapter on ITS for handbook AECT.
[16] Franek, (2004) “Web-Based Architecture of an
Intelligent Tutoring System for Remote Students
Learning to Program Java”.
[17] W. Fajardo Contreras et al. (2006) "An Intelligent Tutoring System for a Virtual E-learning Center", Department of Computer Science and Artificial Intelligence, E.T.S. Faculty of Informática, University of Granada, 18071 Granada, Spain.
[18] Brusilovsky, (1996) "Methods and Techniques of Adaptive Hypermedia", User Modeling and User-Adapted Interaction, vol. 6, no. 2–3, pp. 87–129, http://www2.sis.pitt.edu/~peterb/papers/UMUAI96.pdf [Accessed May 18th, 2008].
[19] Yong S. K., Hyun J. C., You R. C., Tae B. Y., Jee-H. L. (2007) "An Intelligent Tutoring System with Motivation Diagnosis and Planning", Creative Design & Intelligent Tutoring Systems (CREDITS) Research Center, School of Information & Communication Engineering, Sungkyunkwan University, Korea, [email protected].
[20] Sfenrianto, (2009) “Klasifikasi Motivasi Konten
Pembelajaran Dalam ITS Dengan Metode Naïve
Bayes”, research reports, independent task:
Machine Learning.
Sfenrianto is a doctoral student in the Computer Science Department, University of Indonesia. He is also a lecturer in the Postgraduate Masters Program of the Computer Science Department, Faculty of Computer Science, STMIK Nusa Mandiri Jakarta. He received his master's degree in Information Technology from STTIBI Jakarta, Indonesia. He majored in Computer Science/Information Technology and specializes in Adaptive E-Learning Systems (AES).
Paper
Saturday, August 8, 2009
15:10 - 15:30
Room AULA
SMART WHEELED ROBOT (SWR) USING MICROCONTROLLER AT89S2051
Asep Saefullah
Computer System Department, STMIK Raharja – Tangerang
Mail: [email protected]
Sugeng Santoso
Information Engineering Department, STMIK Raharja – Tangerang
Mail: [email protected]
ABSTRACT
The rapid development of microcontroller technology has ultimately brought about the era of robotics. Various sophisticated robots, home security systems, telecommunications systems, and computer systems use a microcontroller as the main controller unit. Robotics technology has also reached entertainment and education. One of the most attractive types of robot is the wheeled robot, a robot that uses wheels, like a car, for its movement. A simple wheeled robot has limited movement: it can only move forward and has control only over its speed. By combining a microcontroller system with embedded technology, a Smart Wheeled Robot (SWR) can be obtained. To that end, we design a Smart Wheeled Robot capable of detecting obstacles and of moving forward, stopping, reversing, and turning left or right automatically. This is made possible by using the AT89S2051 microcontroller on embedded-system-based technology combined with artificial intelligence techniques. The working principle of the SWR is that the infrared sensor senses an object and sends the result to the microcontroller for processing. The output of the microcontroller controls the DC motors, so the SWR moves according to the sensed result and the microcontroller's instructions. The result is a Smart Wheeled Robot (SWR) prototype that is able to avoid obstacles. In the future, the SWR prototype can be developed for implementation in vehicles, increasing comfort and security while driving.
Keywords: Smart Wheeled Robot, Microcontroller, Automatic Control
INTRODUCTION
The rapid development of microcontroller technology has ultimately delivered an era of robotics and has raised the quality of human life. Various sophisticated robots, home security systems, telecommunications systems, and many computer systems use a microcontroller as the main controller unit. The development of robotics technology has improved the quality and quantity of production in various factories, and robotics has also reached entertainment and education. The car robot is one of the most attractive types of robot: it is a robot whose movement uses wheels like a car, although it may use only two or three wheels. A problem that often occurs in designing and building car robots is their limited ability: they can only move forward, and there is only speed control, without directional control of the robot.
By using a microcontroller and embedded technology, an automatically controlled car robot can be designed, that is, a robot car capable of moving forward, stopping, reversing, and turning automatically. An embedded system was chosen because, compared with a multi-purpose computer, it performs a dedicated function, can perform that function very well, and is an effective solution from the financial side. An embedded system generally consists of a single board with a microcomputer whose program is stored in ROM; it starts its specific application shortly after being activated and does not stop until it is disabled.
One important component, which acts as the senses of the Smart Wheeled Robot (SWR), is the sensor. A sensor is a device that detects something by converting mechanical, magnetic, thermal, chemical, or optical variations into electric current and voltage. The sensor used in the SWR is an infrared sensor, selected because the robot is designed to sense an object and send the result to the microcontroller for processing.
Before the microcontroller is used in the SWR system, it must first be programmed. The purpose of programming the IC is to make it work according to the design that has been set. The software used to write the assembly-language program listing is the M-Studio IDE for MCS-51. A new microcontroller IC starts out blank; if the IC already contains another program, that program is erased first and then automatically replaced with the new one. The programming is done using an ISP Flash Programmer from ATMEL, the manufacturer of the AT89S2051 microcontroller. To move the program from the computer into the microcontroller, two modules are used as intermediaries. The block diagram of the system as a whole is as follows:
[Block diagram components: IR circuits, AT89S2051 microcontroller, motor driver, DC motors 1 and 2, push button START, power supply]
Discussion of the block diagram
The automatically controlled car robot is designed to be able to move forward, stop, reverse, and turn right and left independently. The mobile robot design uses differential steering, the most commonly used type of movement. The kinematics are quite simple: the relative position can be determined from the difference in speed between the left and right wheels, and the design has two degrees of freedom. With the left and right motors turning in the same direction, the robot runs forward or backward; with the motors turning in opposite directions, the robot rotates clockwise or counter-clockwise. The Smart Wheeled Robot (SWR) is designed with two gearboxes as the motor drives for the left and right wheels, and an infrared sensor whose function is to avoid collisions with surrounding objects, so that the robot can run without crashing.
The SWR control software was built using assembly-language programming, which is then converted into hexadecimal (hex) form; this hex program is later loaded into the microcontroller. The data received by the microcontroller is digital data processed from the analog conversion owned by the microcontroller itself. This is one of the most important features of the SWR design, because all of the supporting peripherals move based on the robot input and the digital data issued by the microcontroller.
Figure 1. Control system block diagram SWR
Infra Red Sensor
An infrared sensor works with light at wavelengths longer than visible light (above about 700 nm). The infrared sensor uses a phototransistor as the receiver and an LED as the infrared transmitter. The transmitter emits a signal; when the beam hits an object, the signal bounces back and is received by the receiver. The signal received by the receiver is sent to the microcontroller circuit, which then gives commands to the system according to the program algorithm in the microcontroller, as shown in the picture below:
Figure 2. The working principle of the infrared sensor
(Widodo Budiharto S.SI, M.Kom, Sigit Firmansyah, (2005), Elektronika Digital Dan Mikroprosesor, Penerbit Andi, Yogyakarta.)
Infra Red Transmitter
The transmitter is a circuit of infrared LEDs that emit at a certain frequency, between 30 and 50 kHz. The phototransistor becomes active when it is exposed to light from the infrared LED. The LED and the phototransistor are separated by a distance, and this distance affects the intensity of light received by the phototransistor. When the LED and the phototransistor are not obstructed by an object, the phototransistor is active, the logic output is '1', and the indicator LED is off. When the LED and the phototransistor are obstructed by an object, the phototransistor switches off, the logic output is '0', and the indicator LED lights up.
Figure 4. The infrared receiver circuit
(Widodo Budiharto S.SI, M.Kom, Sigit Firmansyah, (2005),
Elektronika Digital Dan Mikroprosesor, Penerbit Andi,
Yogyakarta.)
In general, the block diagram of the sensor system is described as follows:
Figure 5. Block diagram of the sensor system
Figure 3. The infrared transmitter circuit
(Widodo Budiharto S.SI, M.Kom, Sigit Firmansyah, (2005),
Elektronika Digital Dan Mikroprosesor, Penerbit Andi,
Yogyakarta.)
Microcontroller
The microcontroller is the major component, and can be referred to as the brain: it controls the motor movement (through the motor driver) and processes the data generated by the comparator as the output from the sensor. The microcontroller circuit consists of several components, including an AT89S2051, a 10 kΩ resistor, a 220 Ω resistor, a 10 µF / 35 V electrolytic capacitor, 2 LEDs (Light Emitting Diodes), 2 capacitors of 30 pF, a crystal with a frequency of 11.0592 MHz, and a switch that serves to start the simulation run. The microcontroller circuit schematic is as follows:
Infra Red Receiver
The receiver circuit can capture an infrared signal with a frequency of 30–50 kHz; the received signal is then converted into a DC voltage. When the reflected infrared signal that is captured is strong enough, it makes the output go low.
Figure 6. The microcontroller circuit schematic
(Berkarya dengan mikrokontoller AT 89S2051 Nino Guevara
Ruwano.2006, Elek Media Komputindo.Jakarta)
The packaged MCU AT89S2051 is shown in Figure 7. The package has only 20 pins and provides several ports that can be used as input and output ports, in addition to other supporting pins: ports P1.0 to P1.7 and P3.0 to P3.7. Their use must follow the rules set by the microcontroller manufacturer. The difference between the AT89S2051 and the earlier AT89S51 series is that the AT89S2051 only has ports P1 and P3, so this MCU cannot access external program memory; the program must be stored in its internal 2-Kbyte PEROM. The pins found in both MCUs have the same functions.
Figure 7. AT89S2051 Pin layout
(Berkarya dengan mikrokontoller AT 89S2051 Nino Guevara
Ruwano.2006, Elek Media Komputindo.Jakarta)
Description of the function of each pin is as follows:
1. Pin 20, Vcc = microcontroller supply voltage.
2. Pin 10, Gnd = ground.
3. Port 1.0 – 1.7 = an 8-bit bidirectional input/output port. Port pins P1.2 to P1.7 provide internal pull-ups. P1.0 and P1.1 also function as the positive input (AIN0) and negative input (AIN1) of the on-chip analog comparator. The Port 1 output pins can sink a 20 mA load current and can drive LEDs directly. If '1's are written to the Port 1 pins, they can be used as inputs. When pins P1.2 to P1.7 are used as inputs and are externally pulled low, they source current (IIL) because of the internal pull-ups. Port 1 also receives code/data during flash memory programming and verification.
4. Port 3.0 – 3.5 = six bidirectional input/output pins with internal pull-ups. P3.6 is wired internally as an input/output of the on-chip comparator and cannot be accessed as a standard I/O pin. The Port 3 pins can sink a current of 20 mA. Port 3 also provides the special functions of the AT89S2051 listed in Table 1 (Port Pin / Alternate Functions), and receives some control signals for flash memory programming and data verification.
Table 1. Port 3 functions
(Berkarya dengan mikrokontoller AT 89S2051 Nino Guevara Ruwano.2006, Elek Media Komputindo.Jakarta)
5. RST = reset input. All input/output (I/O) pins return to the zero (reset) state as soon as RST is driven logically high. Holding the RST pin high for two machine cycles while the oscillator is running resets all devices in the system to zero.
6. XTAL1 = input to the inverting oscillator amplifier and input to the internal clock operating circuit.
7. XTAL2 = output of the inverting oscillator amplifier.
Motor Driver
A DC motor driver is required to rotate the motors used to turn the wheels. To drive the motors, an integrated circuit (IC) L293D is used. The L293D is a monolithic IC that withstands high voltage (up to 36 V), has four channels, and is designed to accept TTL and DTL level voltages for driving inductive loads such as solenoids, stepper motors, and DC motors. For ease of application, the L293D has two bridges, each consisting of two inputs and equipped with one enable line. For more details, see Figure 8, the L293D IC block diagram.
Figure 8. IC L293D block diagram
(Panduan Praktis teknik Antar Muka dan Pemrograman Microcontroller AT89S51, Paulus Andi Nalwan, 2003, PT Elek Media Komputindo, Jakarta)
Driver of the DC Motor
The DC motor driver circuit is controlled only through the microcontroller port. As described above, pins 1 and 9 of the driver are the enable inputs, which run the motor when given logic 0 (rotate) and stop it when given logic 1 (stop). P0.0 and P0.1 are used to turn motor M1 left and right, while P0.2 and P0.3 are used to turn motor M2 up and down: if P0.0 = 0 and P0.1 = 1, M1 rotates to the right, and if P0.0 = 1 and P0.1 = 0, M1 rotates to the left; likewise, if P0.2 = 0 and P0.3 = 1, M2 rotates upward, and if P0.2 = 1 and P0.3 = 0, M2 rotates downward. Motor 1 (M1) and Motor 2 (M2) are mounted separately, so that one motor can rotate while the other stays in a fixed position. The DC motor circuit used can be seen in Figure 9.
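To summarize the pin logic described above, the following small C fragment encodes the stated input combinations as a lookup; the pin names and direction labels simply restate the text and are not taken from the actual firmware.

/* Sketch of the motor-direction logic described above (P0.0/P0.1 for M1,
 * P0.2/P0.3 for M2). Purely illustrative; the real design drives these
 * lines from the AT89S2051 firmware. */
#include <stdio.h>

static const char *m1_direction(int p00, int p01) {
    if (p00 == 0 && p01 == 1) return "M1 rotates right";
    if (p00 == 1 && p01 == 0) return "M1 rotates left";
    return "M1 stopped";
}

static const char *m2_direction(int p02, int p03) {
    if (p02 == 0 && p03 == 1) return "M2 rotates up";
    if (p02 == 1 && p03 == 0) return "M2 rotates down";
    return "M2 stopped";
}

int main(void) {
    printf("%s\n", m1_direction(0, 1));   /* P0.0 = 0, P0.1 = 1 */
    printf("%s\n", m2_direction(1, 0));   /* P0.2 = 1, P0.3 = 0 */
    return 0;
}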
Figure 10. Flowchart of the SWR control program
The robot is prepared, then the button is pressed to turn it on and the robot starts moving forward. It goes straight until it faces an obstacle; the robot then pauses and turns to the right to avoid the obstacle. If there is still an obstacle, the robot turns to the left to avoid it, and then goes straight again.
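As a host-side illustration of the control logic in the flowchart above (move forward; on an obstacle, stop briefly and turn right; if still blocked, turn left; otherwise continue straight), the following C sketch expresses the same decision sequence. The sensor and motor functions are hypothetical stand-ins for the IR inputs and L293D outputs of the actual hardware; the paper's firmware itself is written in MCS-51 assembly.

/* Sketch of the SWR obstacle-avoidance loop from the flowchart above.
 * obstacle_detected() and the motor commands are hypothetical placeholders
 * for the IR sensor readings and L293D driver signals. */
#include <stdbool.h>
#include <stdio.h>

static bool obstacle_detected(void) { return false; }   /* would read the IR sensors */
static void move_forward(void)      { puts("forward");  }
static void stop_and_pause(void)    { puts("stop");     }
static void turn_right(void)        { puts("right");    }
static void turn_left(void)         { puts("left");     }

int main(void) {
    /* Robot prepared; START button pressed. */
    for (int step = 0; step < 10; step++) {      /* bounded loop for the demo */
        if (!obstacle_detected()) {
            move_forward();                       /* straight movement          */
        } else {
            stop_and_pause();                     /* stop a moment              */
            turn_right();                         /* try to avoid to the right  */
            if (obstacle_detected())
                turn_left();                      /* still blocked: go left     */
        }
    }
    return 0;
}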
Figure 9. DC motor driver circuit
Panduan Praktis teknik Antar Muka dan Pemrograman
Microcontroller AT89S51, Paulus Andi Nalwan, 2003, PT
Elek Media Komputindo, Jakarta
The SWR Prototype Design
The prototype SWR is designed as shown in the following figure.
Figure 11. The SWR prototype scheme
The infrared (IR) sensors detect obstacles, and the data is sent from the IR circuit to the microcontroller. The microcontroller reads the IR data and processes it according to the designed program flowchart. The output from the microcontroller goes to the L293D, which provides the drive signals to the DC motors. The DC motors react (rotate) either clockwise or counter-clockwise according to the signal output from the microcontroller, which is the response to the infrared sensor.
Writing the Assembly Program Listing
Before the microcontroller is used in the SWR system, it must first be programmed. The purpose of programming the IC is so that it works according to the design that has been set. The software used to write the assembly-language program listing is the M-Studio IDE for MCS-51.
Design Software
Program initialization
The program that controls the SWR is written in assembly language. Since the microcontroller can only handle digital input signals, logic high (logic "1") and logic low (logic "0"), initialization needs to be performed.
;======================================================
; SWR.ASM
;------------------------------------------------------
; NAME    : PROTOTYPE SWR
; FACULTY : SISTEM KOMPUTER ( SK )
; SEKOLAH TINGGI MANAJEMEN DAN ILMU KOMPUTER RAHARJA
;======================================================
; INITIALISATION
;------------------------------------------------------
SWITCH      BIT   P3.7      ; PIN 11
SENS_KANAN  BIT   P1.0      ; PIN 12 (right sensor)
SENS_KIRI   BIT   P1.1      ; PIN 13 (left sensor)
IN_1        BIT   P3.0      ; PIN 2
IN_2        BIT   P3.1      ; PIN 3
IN_3        BIT   P3.2      ; PIN 6
IN_4        BIT   P3.3      ; PIN 7
LED_1       BIT   P1.5      ; Right sensor indicator
LED_2       BIT   P1.6      ; Left sensor indicator
POW_IND     BIT   P1.7      ; Power indicator
;======================================================
; SUBRUTIN MAIN PROGRAM
;======================================================
            ORG   00H                     ; START ADDRESS
SWITCH_ON:
            SETB  SWITCH
            JB    SWITCH,SWITCH_OFF       ; if SWITCH is still 1 (not pressed), jump to SWITCH_OFF
            CLR   POW_IND                 ; drive the power indicator pin low
Figure 12. Display M-Studio IDE
After writing the program listing in the M-Studio IDE text editor, the text is saved in a file named MOBIL.ASM. This must be done because the software only works on files named *.ASM or *.A51. The next step is to compile the assembly file into a hex file, so MOBIL.ASM becomes MOBIL.HEX, by pressing the F9 key on the keyboard or via the menu. This *.HEX file will later be loaded into the microcontroller IC. After these steps, compilation produces several files, namely MOBIL.ASM, MOBIL.LST, MOBIL.DEV and MOBIL.HEX, and at this stage the process of writing and compiling the assembly program is complete.
Loading the Program into the Microcontroller
At this step, a new microcontroller IC starts out blank, while an IC that already contains another program has that program erased first before it is automatically replaced with the new program. To begin, open the ISP Flash Programmer software made by ATMEL, the manufacturer of the AT89S2051 microcontroller, and then select the device to be used, the AT89S2051.
Figure 13. Device Used AT89S2051
After selecting the device to be used, choose the "Hex file to load flash buffer" menu; the software then asks for the input HEX file to be loaded into the microcontroller IC, in this case MOBIL.HEX. Once the HEX file has been recognized by the software, it is entered into the microcontroller IC. Then select the Auto Program menu item, or press Ctrl + A on the keyboard.
After Auto Program is selected and the Enter key pressed, the software writes the contents of the HEX file to the microcontroller IC and a view of the loading process appears; in these stages the program listing produced with the M-Studio IDE is transferred into the microcontroller IC. Progress is shown by the increasing percentage displayed by the software for each transfer. The loading process begins with "Erase Flash & EEPROM Memory", which means the software erases the internal memory of the microcontroller IC before the program is put into the IC. In this erasing process, when the percentage reaches 100% the internal memory has been erased completely and is empty; if the percentage has not reached 100% but the software shows an error sign, the erasure has failed, which is usually caused by an error in the downloader hardware. After the erasure is finished, the software automatically performs "Verify Flash Memory" and then starts filling the microcontroller IC with the HEX file. As with the erasure, this process is shown by the increasing percentage; 100% indicates that the microcontroller IC has been fully programmed, while the appearance of an error mark indicates that the process failed, again usually caused by an error in the downloader hardware. After the steps above have been run and completed, the AT89S2051 microcontroller IC in the design can be used to perform the work of the designed system.
Test and Measure
Once all the circuit parts used to build the automatic car-robot control system are finished, the next step is a series of trials conducted on every block of the automatic car-robot control system.
Sensor Block Test
Tests on the sensor block circuit are done with the supply voltage provided. The goal of this test is for the sensor to detect an object located in front of the infrared sensor. The sensor block under trial consists of a transmitter circuit and a receiver circuit; the input voltage at each circuit is 9 VDC. The sensing distances achieved by the sensor are given in the table below:
Table 2. Sensor Sensitivity Trial Results
From the table above it can be seen that the farthest distance that can be detected by the sensor is 25 cm; beyond 25 cm, the sensor does not work.
Figure 14. Circuits of sensor block
Microcontroller Block Test and Measurement
The tests on this circuit use LEDs as indicators of the data output on each bit assigned to control the DC motor driver circuit. For the bits assigned to receive input from the infrared sensor circuit, this experiment uses a switch connected to GND, so that when the switch is pressed the assigned bit is pulled to "0". This condition is the same as the situation in which the infrared sensor circuit detects an object. The pins used to read and control the external devices can be seen in the tables below.
Table 3. Microcontroller connections with the sensor circuit
Table 4. Microcontroller connections with the DC motor circuit
Motor Driver Block Test and Measurement
The motor driver block test is performed to determine whether the motor driver circuit can control the motors well. In this test, logic inputs are given to the motor driver to see the direction of motor rotation. The logic voltage given is +5 V, and the motor supply voltage is also +5 V. The right motor is connected to outputs 1 and 2 of the driver, while the left motor is connected to outputs 3 and 4. The results of the testing can be seen in the table below.
Table 5. Motor Driver Test Results
CONCLUSION
From the overall results of testing and analysis of the Smart Wheeled Robot (SWR) system, it can be concluded that:
1. The car speed control system can work automatically, without the need for a remote control, under the control of the microcontroller; the SWR can avoid obstacles and move freely to the left and right.
2. The ideal distance for sensor sensitivity is 2 cm – 20 cm; between 21 cm and 25 cm the sensor sensitivity is less good, and beyond 26 cm the sensor is not able to work.
3. Differential steering can be used for the prototype so that the SWR can turn to the right and left.
The AT89S2051 microcontroller is the brain of the whole mobile robot system: it receives the logic levels from the infrared sensor circuit, analyzes the incoming data, and sends logic signals to the motor driver circuit, so that it can run or stop the DC motors. By installing the SWR microcontroller circuit in this car, the robot can work independently and take decisions according to the conditions (an autonomous robot).
REFERENCES
1. Atmel, 2007, AT89S52. http://www.atmel.com/dyn resources/doc.0313.pdf (Downloaded March 4, 2007)
2. Budiharto Widodo, Perancangan Sistem dan Aplikasi
Mikrokontroller. Penerbit Erlangga, Jakarta, 2005
3. Eko Putra Agfianto, Belajar Mikrokontroller AT89S52
Teori dan Aplikasi. Penerbit ANDI, Yogyakarta, 2004
4. Operation Manual FAP 188-100L AT Command Guide,
NOKIA GSM AT COMMAND SET. http://
www.activexperts.com/smsandpagertoolkit/
atcommandsets/nokiahtml
5. Fairchild. DM74LS47. http://www.datasheetcatalog.com
(Downloaded on March 4, 2007)
6.Fairchild.DM74LS125A.http://
www.datasheetcatalog.com (Downloaded on March
4, 2007)
7. Fairchild. DM74LS04. http://www.datasheetcatalog.com
(Downloaded on March 4, 2007) Motorola.
SN74LS147. http://www.datasheetcatalog.com
(Downloaded on March 4, 2007)
8. Untung Rahardja, Simulasi Kendali Kecepatan Mobil
Secara Otomatis, Journal CCIT, Vol.2 No. 2 – Januari
2009
Paper
Saturday, August 8, 2009
13:30 - 13:50
Room AULA
A Study of Users' Perceptions of Raharja Multimedia Edutainment (RME) Technology Using the Technology Acceptance Model
Henderi, Maimunah, Aris Martono
Information Technology Department – Faculty of Computer Study STMIK Raharja
Jl. Jenderal Sudirman No. 40, Tangerang 15117 Indonesia
email: [email protected]
Abstract
The exploitation of information technology (IT) by various organizations generally aims to facilitate and speed up the execution of business processes and to improve efficiency, quality, and competitive ability. The same holds for Raharja College as an organization active in education. Through the adoption of information technology, various activities can be carried out more easily, quickly, effectively, and efficiently, and the various types of information required by all levels of management in Raharja College, which represent critical success factors (CSF) for the organization, can be provided quickly, accurately, and economically. One of the information technology products that has been created and used by Raharja College is Raharja Multimedia Edutainment (RME). This technology is used to support and smooth the execution of academic activities and to fulfill the related information requirements. With that in mind, this research aims to identify the factors that influence whether or not RME is well accepted by its users, and to find the relationships among the factors influencing RME acceptance. The model used to study RME acceptance in this research is TAM (Technology Acceptance Model). TAM explains information technology (IT) acceptance in detail, with certain dimensions that can influence the acceptance of a technology by its users. The model positions the attitude and behavior of every user in terms of two main variables, namely usefulness and ease of use. RME acceptance is also anticipated to be influenced by other factors, such as Attitude Toward Using (ATU), Intention To Use (ITU), that is, the intention to use a product or service, and Actual System Usage (ASU), that is, the actual usage behavior.
Keywords: RME, TAM, usefulness, ease of use
1. Introduction
Besides being used to facilitate the execution of business processes and improve competitive ability, the exploitation and adoption of information technology (IT) can also influence the speed, efficiency, and effectiveness of an organization's business activities (including organizations working in education). In addition, IT has offered organizations many opportunities to improve and transform their services, markets, work processes, and business relationships. In the area of educational management, the application of IT has influenced the strategy and execution of the teaching and learning process. Learning strategies in this era have been influenced by IT and point toward active learning by students, colored by problem-based learning. The old lecturer-centered way of learning is progressively being left behind in favor of enrichment and the use of information technology facilities (high impact learning).
The high impact learning concept is applied by Raharja College through the creation of the Raharja Multimedia Edutainment (RME) learning tool, supported by information technology. By applying IT with the RME concept, the results of teaching and learning are expected to grow linearly or even exponentially, because RME integrates information technology and education. RME also contains the concepts of interactive digital multimedia learning (IDML), library by lecturer, continuous improvement, and entertainment, built and developed collectively by Raharja personnel. This technology demands mastery of information technology and multimedia for student learning activities, so that the student learning process can be carried out interactively and supports the mastery of information as well as new technology. Hence, the application of RME in the teaching and learning process at Raharja College is part of the high impact learning strategy, characterized by (Henderi, 2004): (a) interactive learning, (b) just-in-time learning, (c) learning by hyper-navigation, (d) learning by networking, (e) collaborative learning, and (f) engaged learning.
Besides being used as a learning tool, RME also has the ability to provide and fulfill the information requirements related to the teaching and learning process quickly, precisely, and economically, representing one of the Critical Success Factors (CSF) for a college. This technology therefore demands mastery of information technology and multimedia, which are relatively new technologies, in order to reach the targets that have been specified.
The existence of a new information and communication technology in the form of this learning tool will produce a reaction in its users, either acceptance or avoidance. Even so, this does not block the technology from entering a business process, and it is therefore important to know how the technology is accepted by its users.
2. Problems
The problems raised and discussed in this research are:
1. What factors interact and affect the level of technology acceptance, in particular acceptance of Raharja Multimedia Edutainment by the lecturers and students of Raharja College?
2. What form does the acceptance model of the information technology, namely Raharja Multimedia Edutainment, applied at Raharja College take?
Hypothesis Hypothesis of Public raised in this research
is Anticipated by a model raised at this research is
supported by fact [in] field. This matter [is] indication
that anticipation of matrix of varians-kovarians of popu
lation of is equal to matrix of varians-kovarians sampel
(observation data) or can be expressed “p = “s.
3. Special hypothesis at this research is
1.
Anticipated by Perception of Amenity use Raharja
Multimedia Edutainment ( Perceived Ease of Use/Peou)
having an effect on to Benefit Perception ( Percieved
Usefulness/Pu). Progressively easy to Raharja Multi
mount his/its benefit
4. Theoretical Basis
a. Critical Success Factors (CSF)
Simply put, Luftman J. (1996) defines critical success factors (CSF) as everything in an organization that must be done successfully, or done better. In this research, this definition is translated into a conceptual context in which critical success factors (CSF) represent the key factors for the effectiveness of an organization's information technology adoption planning.
b. Raharja Multimedia Edutainment (RME)
Raharja Multimedia Edutainment (RME) is a learning tool based on information technology containing the concepts of interactive digital multimedia learning (IDML), library by lecturer, continuous improvement, entertainment, and resource sharing, built and developed collectively by Raharja personnel (Untung Rahardja, Henderi, et al.: 2007). Raharja Multimedia Edutainment represents a relatively new technology implementation strategy for learning activities at Raharja College. This technology demands mastery of information technology and multimedia for student learning activities, so that the student learning process can be carried out interactively and supports the mastery of information as well as new technology.
c. Technology Acceptance Model (TAM)
Among the models developed to analyze and understand the factors influencing the acceptance of computer technology that are recorded in the literature and in the results of research in the information technology field is the Technology Acceptance Model (TAM).
TAM was in fact adapted from the TRA model (Theory of Reasoned Action), a theory of reasoned action built on the premise that a person's reactions and perceptions toward something determine that person's attitude and behavior (Ajzen, 1975, in Davis, 1989). The reactions and perceptions of IT users influence their attitude toward accepting IT; one influencing factor is the user's perception of the usefulness and ease of use of IT as a reasoned action in the context of IT use, so that a person's view of the usefulness and ease of use of IT shapes his or her action in accepting and using IT.
TAM was developed from psychological theory to explain the behavior of computer users, based on beliefs, attitude, intention, and the user behavior relationship. The purpose of the model is to explain the main factors of IT user behavior with respect to IT acceptance, and in more detail to explain IT acceptance through certain dimensions that can influence how easily IT is accepted by the user. The model positions the attitude factors of every dimension of user behavior in terms of two variables: (a) ease of use and (b) usefulness (Davis, 1989, in Igbaria et al., 1997). These two variables can explain aspects of user behavior. In conclusion, TAM explains that users' perceptions determine their attitude toward accepting and using IT; more clearly, the model depicts that the acceptance of IT use is influenced by usefulness and ease of use.
The level of user acceptance of information technology is determined by six constructs: external variables, perceived ease of use, perceived usefulness, attitude toward using, behavioral intention, and actual usage (Davis, 1989).
Figure 1. Technology Acceptance Model (TAM) (Davis, 1989)
d. Perceived Ease of Use (PEOU)
Perceived ease of use is defined as the degree to which a person believes that a computer can be easily understood. Some indicators of the ease of use of information technology (Davis, 1989) are: (a) the computer is very easy to learn; (b) the computer easily does what the user wants; (c) user skill can increase by using the computer; (d) the computer is very easy to operate.
e. Perceived Usefulness (PU)
Perceived usefulness is defined as the degree to which a person believes that using something will improve his or her work performance (Davis, 1989). There are several dimensions of IT usefulness, in which usefulness is divided into two categories: (1) usefulness estimated with one factor, and (2) usefulness estimated with two factors (usefulness and effectiveness) (Todd, 1995, in Nasution, 2004). Usefulness with one factor covers: (a) making work easier; (b) being useful; (c) adding productivity; (d) heightening effectiveness; (e) developing work performance. Usefulness with two factors covers the dimensions: (a) usefulness, covering making work easier, being useful, and adding productivity; (b) effectiveness, covering heightening effectiveness and developing work performance.
f. Attitude Toward Using (ATU)
Attitude toward using the system in TAM is defined as the level of assessment (negative or positive) experienced as an impact when someone uses a technology in his or her work (Davis, 1989). Other researchers state that the attitude factor is one of the aspects influencing individual behavior; a person's attitude consists of cognitive, affective, and behavioral components (Thompson, 1991, in Nasution, 2004).
g. Intention To Use (ITU)
Intention to use is the behavioral tendency that indicates how strong a user's intention is to use a technology.
The level of use of a computer technology by a person can be predicted accurately from his or her attentive attitude toward the technology, for example the desire to add supporting peripherals, the motivation to keep using it, and the desire to motivate other users (Davis, 1989). Later researchers state that the attitude of attention to use is a good predictor of Actual Usage (Malhotra, 1999).
h. Actual System Usage (ASU)
Actual system usage behavior was first conceptualized as the measurement of the frequency and duration of use of a technology (Davis, 1989). A person will be satisfied using a system if he or she believes that the system is easy to use and will improve his or her productivity, which is reflected in the real usage behavior (Igbaria, 1997).
5. Research Methodology
5.1. Research Type
This research is of the exploratory type, containing verification of hypotheses built from theory with the Technology Acceptance Model (TAM) approach, tested using the AMOS software.
5.2. Research Population and Sample
The method used to obtain empirical data is a questionnaire with a semantic differential scale. With this method it is expected that ratings of the acceptance of Raharja Multimedia Edutainment at Raharja College can be obtained and mistakes in the research minimized.
The population of Raharja Multimedia Edutainment users at Raharja College consists of the lecturers and students of Raharja College. The number of lecturers and students taken as respondents is 120, of which 60% are lecturers and 40% are students.
5.3. Data Collection Method
To obtain theoretical data and facts related to this research, a literature study was conducted, by studying the literature, research journals, lecture material, and other sources related to the problems studied by the writer. Besides the literature study, data collection was also done using a questionnaire. The questionnaire contains questions designed to find out the influence among the variables Perceived Ease of Use (PEOU), Perceived Usefulness (PU), Attitude Toward Using (ATU), Behavioral Intention To Use (ITU), and Actual System Usage (ASU) from the respondents toward Raharja Multimedia Edutainment at Raharja College.
5.4. Research Instrument
This research uses a questionnaire instrument made with closed questions. With closed questions, respondents can easily answer the questionnaire, the questionnaire data can be analyzed statistically quickly, and the same statements can easily be repeated. The questionnaire was made using the semantic differential scale.
5.4.1. Exogenous Constructs
These constructs are known as source variables or independent variables, which are not predicted by other variables in the model. In this research the exogenous construct is Perceived Ease of Use (PEOU), namely the degree to which a person believes that a technology can be used easily.
5.4.2. Endogenous Constructs
These are factors predicted by one or more other constructs. An endogenous construct can predict one or more other endogenous constructs, but an endogenous construct can only be causally related to other endogenous constructs. In this research the endogenous constructs are Perceived Usefulness (PU), Attitude Toward Using (ATU), Intention To Use (ITU), and Actual System Usage (ASU). Since only 120 copies of the questionnaire were distributed and a low response rate was anticipated, this research uses a significance level of 10%, on the assumption that the number of questionnaires processed approaches the minimum required sample size.
5.4.3. Converting the Path Diagram into Equations
After steps 1 and 2 are done, the researcher can start to convert the specification of the model into a network of equations, namely:
Structural Equations
These equations are formulated to express the causal relationships among the constructs, forming the measurement model of the exogenous and endogenous latent variables. The structural equations are:
PU  = γ11·PEOU + ζ1                  (1)
ATU = γ21·PEOU + β21·PU + ζ2         (2)
ITU = β31·PU + β32·ATU + ζ3          (3)
ASU = β43·ITU + ζ4                   (4)
Measurement Model Equations
The researcher determines which variables measure which constructs, referring to the matrix showing the hypothesized correlations among the constructs or variables. The measurement equations for the indicators of the exogenous and endogenous latent variables are:
Measurement equations for the indicators of the exogenous variable:
X1 = λ11·PEOU + δ1
X2 = λ21·PEOU + δ2
X3 = λ31·PEOU + δ3
X4 = λ41·PEOU + δ4
X5 = λ51·PEOU + δ5
Measurement equations for the indicators of the endogenous variables:
y1 = λ11·PU + ε1
y2 = λ21·PU + ε2
y3 = λ31·PU + ε3
y4 = λ41·PU + ε4
y5 = λ51·PU + ε5
y6 = λ62·ATU + ε6
y7 = λ72·ATU + ε7
y8 = λ82·ATU + ε8
y9 = λ93·ITU + ε9
y10 = λ103·ITU + ε10
y11 = λ113·ITU + ε11
y12 = λ124·ASU + ε12
y13 = λ134·ASU + ε13
y14 = λ144·ASU + ε14
The exogenous and endogenous variables and their descriptions are shown in Table 1, Observed Research Variables, below.
Table 1. Observed Research Variables
Figure 2. Initial Research Model Result
The hypotheses describing how well the empirical data fit the model/theory are:
H0: the empirical data are identical to the theory or model (the hypothesis is accepted if P ≥ 0.05).
H1: the empirical data differ from the theory or model (H0 is rejected if P < 0.05).
Based on Figure 2, the theoretical model proposed in this research does not agree with the observed population model, because the probability value (P) does not fulfill the requirement: the result is below the recommended value of > 0.05 (Ghozali, 2005).
It can be temporarily concluded that the model output has not yet fulfilled the conditions for accepting H0, so the subsequent hypothesis tests cannot yet be carried out. Even so, in order for the proposed model to be declared fit, the model can be modified in accordance with the modifications suggested by AMOS.
This research uses the Model Development Strategy; this strategy allows modification of the model if the proposed model has not yet fulfilled the recommended conditions. The modification is done to obtain a model that fits the testing conditions (Widodo, 2006). Based on the available theoretical justification, the model is then modified, with the assumption that changes to the structural model must be based on strong theory (Ghozali, 2005).
5.4.4. Testing the Theory-Based Model
Testing of the theory-based model was done using AMOS software version 17.0. The results of the model testing are as follows.
Based on the Estimate and Regression Weight results, the model is modified by removing indicator variables that are not valid constructors of a latent variable in the proposed structural model. If the estimated loading factor (λ) of an indicator variable is < 0.5, the indicator should be dropped (Ghozali, 2004). Next, for the significance (Sig), the qualifying value is < 0.05; if Sig > 0.05 it can be said that the indicator is not a valid constructor of the latent variable and should be dropped (Widodo, 2006). The modification is done in order to obtain a probability value > 0.05 so that the model can be declared fit. In this research the modification was done in three steps.
The first modification step of the developed model was the removal of X3 (ease of learning) and X5 (ease of understanding), which were not valid indicators for measuring PEOU (Perceived Ease of Use); they were removed because their loading factors were low, below 0.50.
The second modification step was the removal of Y5 (cost effectiveness), which was not a valid indicator for measuring PU (Perceived Usefulness); it was removed because its loading factor was low, below 0.50.
The third modification step was the removal of Y14 (customer satisfaction), which was not a valid indicator for measuring ASU (Actual System Usage); it was removed because its loading factor was low, below 0.50.
Table 2. Modification Steps
After the model modification, a fit model is obtained, as described in Figure 3.
5.4.5. Goodness-of-Fit Test of the Model
Whether or not the model fits is not judged only from its probability value but also from other criteria, covering the Absolute Fit Measures, Incremental Fit Measures, and Parsimonious Fit Measures. The comparison between the values obtained for this model and the critical value boundaries for each measurement criterion is shown in the following table.
Table 3. Goodness-of-Fit Comparison
(Source: data processed with AMOS 17.0, according to the critical value boundaries (Widodo, 2006))
Based on the table above, the model as a whole can be declared fit; the model proposed in this research is supported by the facts in the field. This indicates that the estimated population variance-covariance matrix is equal to the sample (observed data) variance-covariance matrix, or can be expressed as Σp = Σs. In this research a two-phase model analysis was done, namely CFA (Confirmatory Factor Analysis) followed by full model analysis; both analyses indicate that the model is declared fit, both for each latent variable and for the model as a whole.
6. Examination Results
6.1. Test of the Measurement Model Parameters of the Latent Variables
This examination covers both validity and reliability testing.
1. Validity Examination
The validity of a latent variable is examined by looking at the significance value (Sig) obtained for each indicator variable and comparing it with α (0.05). If Sig ≤ 0.05, H0 is rejected, meaning that the indicator variable is a valid constructor of the particular latent variable (Widodo, 2006).
Picture 3 Final Model Examination Results of the Research
A. Exogenous Latent Variables
1. PEOU (Perceived Ease of Use)
Table 3 Parameter Test of Variable PEOU
B. Endogenous Latent Variables
1. PU (Perceived Usefulness)
Table 4 Parameter Test of Variable PU
2. ATU (Attitude Toward Using)
Table 5 Parameter Test of Variable ATU
3. ITU (Intention to Use)
Table 6 Parameter Test of Variable ITU
4. ASU (Actual System Usage)
Table 7 Parameter Test of Variable ASU
2. Reliability Examination
1. Direct Examination
This examination can be read directly from the AMOS output by looking at R² (Squared Multiple Correlation). The reliability of an indicator is seen from its R² value, which explains how large a proportion of the indicator's variance is explained by the latent variable, while the remainder is explained by measurement error (Ghozali, 2005), (Wibowo, 2006). The AMOS output for the R² (Squared Multiple Correlation) values is as follows.
Table 8 Squared Multiple Correlation for the X (exogenous) variables
Table 9 Squared Multiple Correlation for the Y (endogenous) variables
Based on the above tables, indicator X12 has the highest R² value, namely 0.780, so it can be inferred that the latent variable PEOU contributes 78% of the variance of X12, while the remaining 22% is explained by measurement error.
Indicator Y16 is the least reliable indicator of the latent variable ITU, because its R² value is the smallest compared with the other indicator variables. The above output yields the individual reliability test.
2. Indirect Examination
For the combined reliability test, the suggested approach is to compute the Composite Reliability and Variance Extracted of each latent variable using the information in the loading factors and measurement errors.
Composite Reliability expresses the internal consistency of the indicators of a construct, showing the degree to which each indicator indicates a common latent construct, while Variance Extracted shows how well the indicators represent the developed latent construct (Ghozali, 2005) and (Ferdinand).
Composite Reliability is obtained with the following formula:
Construct Reliability = (Σ std. loading)² / [(Σ std. loading)² + Σ εj]
Variance Extracted is obtained through the following formula:
Variance Extracted = Σ (std. loading)² / [Σ (std. loading)² + Σ εj]
where εj is the measurement error, εj = 1 − (std. loading)².
Table 10 Combined Reliability Test
The above table shows that PEOU, PU, ATU, and ITU have Composite Reliability values above 0.70, while the Composite Reliability of ASU is still below 0.70 but can still be considered reliable because it lies within the permitted range. The recommended critical value for Composite Reliability is 0.70, but this number is not an absolute ("dead") measure; if the research is exploratory, values below the critical boundary (0.70) can still be accepted (Ferdinand, 2002). Nunnally and Bernstein (1994) in (Widodo, 2006) give the guidance that in exploratory research, reliability values between 0.5 and 0.6 are considered sufficient to justify a research result. The latent variables PEOU, PU, ATU, ITU, and ASU satisfy the Variance Extracted boundary of ≥ 0.50. It can therefore be said that each variable has good reliability.
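As a hedged illustration of how these two measures follow from the formulas above, a minimal Python sketch is given below; the standardized loadings are hypothetical, not values taken from the study's tables.

```python
import numpy as np

def composite_reliability(std_loadings):
    """(Σ std. loading)^2 / [(Σ std. loading)^2 + Σ εj], with εj = 1 - loading^2."""
    L = np.asarray(std_loadings, dtype=float)
    errors = 1.0 - L**2
    return L.sum()**2 / (L.sum()**2 + errors.sum())

def variance_extracted(std_loadings):
    """Σ (std. loading)^2 / [Σ (std. loading)^2 + Σ εj]."""
    L = np.asarray(std_loadings, dtype=float)
    errors = 1.0 - L**2
    return (L**2).sum() / ((L**2).sum() + errors.sum())

# Hypothetical standardized loadings for one latent variable.
loadings = [0.72, 0.81, 0.88]
print(round(composite_reliability(loadings), 3))  # compare with the 0.70 cut-off
print(round(variance_extracted(loadings), 3))     # compare with the 0.50 cut-off
```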
3. Hypothesis Examination
The hypothesis examination is carried out to determine the influence among the latent variables, as shown in Table 11 below.
Table 11 Result of the Hypothesis Examination
Based on the above table, it can be explained that:
1. Perceived Ease of Use (PEOU) affects Perceived Usefulness (PU).
2. Perceived Usefulness (PU) affects Attitude Toward Using (ATU).
3. Perceived Ease of Use (PEOU) affects Attitude Toward Using (ATU).
4. Attitude Toward Using (ATU) affects Intention to Use (ITU).
5. Perceived Usefulness (PU) affects Intention to Use (ITU).
6. Intention to Use (ITU) affects Actual System Usage (ASU).
Based on the above hypothesis tests, it can be explained that the use of the RME software is influenced by five latent variables, namely Perceived Ease of Use (PEOU), Perceived Usefulness (PU), Actual System Usage (ASU), Intention To Use (ITU), and Attitude Toward Using (ATU).
6.2. Model Interpretation
Based on the model modification and the results of the hypothesis examination, the model obtained in this research can be described as follows:
Picture 4 Research Model
Based on the model in Picture 4, the model obtained in this research is the TAM (Technology Acceptance Model) of Davis (1989). The variables influencing the use of the RME software in this research cover PU (Perceived Usefulness), PEOU (Perceived Ease of Use), Attitude Toward Using (ATU), Intention To Use (ITU), and ASU (Actual System Usage).
The perceived ease of use of the RME software (PEOU) affects its perceived usefulness (PU), in line with (Davis 1989, p. 320). This means that the easier the RME software is to use, the greater its perceived usefulness; it can be said that the primary factor in the RME software being well accepted by its users is that the software is easy to use.
The perceived ease of use variable (PEOU) of the RME software affects Attitude Toward Using (ATU): the ease of using the RME software generates a positive attitude toward using it.
The perceived usefulness variable (PU) affects Attitude Toward Using (ATU): once users know its usefulness, a positive attitude toward using it arises.
The perceived usefulness variable (PU) affects Intention to Use (ITU): once users know its usefulness, the intention to use it arises.
Attitude Toward Using (ATU) affects Intention to Use (ITU): a positive attitude toward using the RME software generates the intention to use it.
Intention to Use (ITU) affects ASU (Actual System Usage): the intention to use the RME software generates the users' behavior of actually using it.
From the model in Picture 4 it can be seen that the variables influencing the use of the RME software in this research cover PU (Perceived Usefulness), PEOU (Perceived Ease of Use), Attitude Toward Using (ATU), Intention To Use (ITU), and ASU (Actual System Usage).
According to Ajzen (1988), many of the behaviors performed by people in everyday life are carried out under the actor's volitional control. Performing a behavior under volitional control means carrying out the behavioral activity of one's own free will. Behaviors under this volitional control are referred to as volitional behaviors, defined as behaviors that individuals wish to perform, or refuse to perform, if they decide against them. Volitional behavior is also referred to as willful behavior.
The opposite of behavior performed of one's own free will (volitional behavior) is mandatory behavior, which is in fact an obligation or a demand of the job.
The final model is as follows:
Picture 5 Final Model of the Research
The final model of this research was re-tested with AMOS to determine the level of validity and reliability of the indicators of the three variables, and the hypotheses were tested to determine the level of influence of the exogenous variable on the two endogenous variables and the influence between the two endogenous variables, as shown in the tables below.
6.3. Final Model Validity Test
A. Exogenous Latent Variable
PEOU (Perceived Ease of Use)
Table 12 Parameter Test of Variable PEOU
B. Endogenous Latent Variables
1. PU (Perceived Usefulness)
Table 13 Parameter Test of Variable PU
2. ASU (Actual System Usage)
Table 14 Parameter Test of Variable ASU
6.3.1. Reliability Test
Direct Examination
The R² (Squared Multiple Correlation) values are shown in Tables 15 and 16 below.
Table 15 Squared Multiple Correlation for the X (exogenous) variables
Table 16 Squared Multiple Correlation for the Y (endogenous) variables
Indicator Y13 has the highest R² value, namely 0.851, so it can be inferred that the latent variable ASU contributes 85% of its variance, while the remaining 15% is explained by measurement error.
Indicator Y4 is the least reliable indicator of the latent variable PU, because its R² value is the smallest compared with the other indicator variables. The above output yields the individual reliability test.
Table 17 Result of the Hypothesis Examination
Based on the model in Picture 5, the final model of this research is a modification of the TAM (Technology Acceptance Model) of Davis (1989). The variables influencing the use of the RME software in this research cover PU (Perceived Usefulness), PEOU (Perceived Ease of Use), and ASU (Actual System Usage).
The perceived ease of use of the RME software (PEOU) affects its perceived usefulness (PU), in line with (Davis, 1989). This means that the easier the RME software is to use, the greater its perceived usefulness; it can be said that the primary factor in the RME software being well accepted by its users is that the software is easy to use.
The perceived ease of use variable (PEOU) of the RME software affects ASU (Actual System Usage): the ease of using the RME software generates the users' behavior of using it.
The perceived usefulness variable (PU) affects ASU (Actual System Usage): once RME users know its usefulness, the behavior of using it is generated.
7. Conclusion
Based on the hypothesis tests carried out, the following can be inferred:
1. The research model in this research is mandatory, meaning that the model is made to be used by the users as an obligation, so attitude and intention to use are not taken into account.
2. The final model obtained in this research is a modification of the TAM (Technology Acceptance Model) of Davis (1989).
3. The variables influencing the use of the RME software in this research cover PU (Perceived Usefulness), PEOU (Perceived Ease of Use), and Actual System Usage (ASU).
4. Perceived Ease of Use (PEOU) affects Perceived Usefulness (PU).
5. Perceived Usefulness (PU) affects Actual System Usage (ASU).
6. Perceived Ease of Use (PEOU) affects Actual System Usage (ASU).
8. Suggestions
Based on the research that has been carried out, the following suggestions are raised:
1. The use of the RME software has to be fully supported by management and complemented with supporting facilities for certain courses, for example Windows Media Player software for watching videos.
2. The RME software should be further developed on the system side to add benefits, for example student attendance, so that lecturers can monitor student attendance through the RME software.
3. The moderating factors in the basic structure of TAM consist of gender, age, experience, intellectual capacity, and type of technology. In this research the moderating factors were not given much attention; in future research the moderating factors should be considered more thoroughly, because taking them into account will give better results and a better model.
4. The user-interface indicators (dependent variables) in TAM consist of attitude (affect, cognition), behavioural intention, and actual usage. This research refers to five variables, namely PU (Perceived Usefulness), PEOU (Perceived Ease of Use), Attitude Toward Using (ATU), Intention To Use (ITU), and ASU (Actual System Usage). Future research is expected to refer to the three basic components.
5. The factors contributing to user acceptance (independent variables) in TAM consist of perceived usefulness, perceived ease of use, playfulness, subjectiveness, and facilitating conditions. In this research the factors contributing to user acceptance were not given much attention; in future research they should be considered more thoroughly, because taking them into account will give better results and a better model.
6. The basic structure of technology acceptance in TAM is formed by moderating factors that can be divided into two kinds of variables, namely independent and dependent variables. Future research is expected to pay better attention to both.
7. In a system that is mandatory in character, intention and attitude need not be considered, because the very nature of a mandatory system is that its use is forced or obliged. In future research, if a mandatory model is used, intention and attitude need not be considered.
Bibliography
1. Davis, F. D. (1989), Perceived Usefulness, Perceived Ease of Use of Information Technology, Management Information Systems Quarterly.
2. Fahmi Natigor Nasution (2004), "Teknologi Informasi Berdasarkan Aspek Perilaku (Behavior Aspect)", USU Digital Library.
3. Ghozali, Imam A. (2005), Model Persamaan Struktural – Konsep dan Aplikasi dengan Program AMOS Ver. 5.0, Badan Penerbit Universitas Diponegoro, Semarang.
4. Henderi (2004), Internet: Sarana Strategis Belajar Berdampak Tinggi, Jurnal Cyber Raharja, Edisi 1 Tahun I (Hal. 6-9), Perguruan Tinggi Raharja.
5. Iqbaria, Zinatelli (1997), Personal Computing Acceptance Factors in Small Firms: A Structural Equation Modelling, Management Information Systems Quarterly.
6. Jogiyanto (2007), "Sistem Informasi Keperilakuan", Andi, Yogyakarta.
7. Luftman, J. (1996), Competing in The Information Age – Strategic Alignment in Practice, ed. by J. Luftman, Oxford University Press.
8. Untung Rahardja, Henderi, Rosdiana (2007), Raharja Multimedia Edutainment Menunjang Proses Belajar di Perguruan Tinggi Raharja, Cyber Raharja, Edisi 7 Tahun IV (Hal. 95-104), Perguruan Tinggi Raharja.
9. Widodo, Prabowo P. (2006), Statistika: Analisis Multivariat, Seri Metode Kuantitatif, Universitas Budi Luhur, Jakarta.
10. Yogesh Malhotra & Dennis F. Galetta (1999), "Extending The Technology Acceptance Model to Account for Social Influence".
Paper
Saturday, August 8, 2009
16:00 - 16:20
Room L-211
WIRELESS-BASED EDUCATION INFORMATION SYSTEM IN
MATARAM: DESIGN AND IMPLEMENTATION
Muhammad Tajuddin
STMIK Bumigora Mataram West Nusa Tenggara, e-mail: [email protected]
Zainal Hasibuan
Indonesia University, e-mail: [email protected];
Abdul Manan
PDE Office of Mataram City, email: mananmti@gmail.com
Nenet Natasudian Jaya
ABA Bumigora Mataram, e-mail: [email protected]
ABSTRACT
Education requires Information Technology (IT) to facilitate data processing, speed up data collection, and provide solutions for publication, and reaching the National Education Standard requires research. The survey uses questionnaires as the data-collection instrument for an explanatory or confirmatory study, and the system is constructed with the System Development Life Cycle (SDLC) using structured and prototyping techniques, so that planning anticipates change. The system comprises 5 subsystems: Human Resources, Infrastructure & Equipment, Library, GIS-Based School Mapping, and Incoming Student Enrollment. The construction aims to ease access to information shared among schools, the Education Service, and society, integrating and complementing each other.
Key Words: System, Information, Education, and Wireless
1. INTRODUCTION
Background
National education functions to carry out capability and to build the people grade and character in the
frame work of developing the nation, aims to improve student potentials for being faithful and pious human to The
Great Unity God, having good moral, healthy, erudite, intelligent, creative, autonomous, and being democratic and
credible citizen (Regulation of National Education System,
2003).
Education is also a key to improve knowledge
and quality of capability to reach upcoming opportunity
to take part in the world transformation and future development. How significant the education role is, so often
stated as the supporting factor for economic and social
development of the people (Semiawan, 1999). This important role has placed the education as the people need, so
the participation in developing the education is very important (Tajuddin,M, 2005).
The improvement of education relevancy and
quality in accordance with the needs of sustainable devel-
opment is one of formulations in the National Work Meeting (Rakernas) of National Education Department
(Depdiknas) in 2000. Result from the meeting is efforts to
improve education quality in order that student possesses
expertise and skill required by the job market after graduated (Hadihardaja, J, 1999).
All the efforts could not be immediately implemented but should follow phases in education fields subject to the regulations e.g. learning process to improve
graduates quality, increment of foreign language proficiency especially English, and IT usage (Tajuddin M, 2004).
It is known that education and knowledge are important
capitals to develop the nation. Their impact on the achieved
development is unquestionable. Appearance of technology innovation results could change the way of life and
viewpoint in taking life needs requirement of a nation. So it
is concluded and proved that education provides strong
influences to create changes on a nation
(Wirakartakusumah, 1998).
The education system is in accordance with the
national education system standard. Every educational
institution must have its own condition, scope, and way
of managing the education process. However, there is still
similarity in management standard so quality provided by
each educational institution is according to the conducted
assessment standard (Slamet, 1999).
In the Government Rule (PP) No. 19, 2005, for
Education National Standard (SNP) in Chapter II Section
of Education National Standard, comprises:
a. Standard of content;
b. Standard of process;
c. Standard of graduate competency;
d. Standard of educator and educational staff;
e. Standard of infrastructure & equipment;
f. Standard of management;
g. Standard of funding; and
h. Standard of evaluation.
For the eight education standards, it seems important to provide an information technology system to
facilitate the information of infrastructure, education staff,
library, etc., either on education unit level or on the government office level (Education Service), (Tajuddin M,
2007).
The education system integrates many related parts to combine its services. The involved parts are equipment, education staff, curriculum, and so on; these parts are sectors taken in general from common education systems. An integrated education system always fuses many parts to manage the process (Tajuddin, M., 2006).
An educational institution needs an integrated system whose service system benefits the schools, the Diknas, the Local Government, and communities through transparency and accountability. Thus, the integrated education system has two main characteristics: the first is the internal system management process, and the second is the external service process. The internal management process aims at an effective system management process; what occurs here is inter-section communication using the data network structure existing in the educational environment (Miarso, 1999).
Each section is able to know the newest data of
other sections that facilitating and more fastening the process of the other sections. Meanwhile, the external management process has more direction to the services for
communities. This process aims to provide easy and rapid
services on things connected with equipment and infrastructures of education. Integrated information system of
education equipment and infrastructures offers many benefits such as:
1. Facilitating and modifying the system. An integrated
system commonly comprises modules that separated
each other. When system change occurred, adjustment
of the application will easily and quickly perform. As
the system change could just be handled by adding or
reducing the module on application system.
2. Facilitating to make on-line integrated system. It means
that the system could be accessed from any place,
though it is out of the system environment. This on
line ability is easy to make because it is supported by
the system integrated with the centered data.
3. Facilitating the system management on the executive
level. Integrated system enables executive board to
obtain an overall view of the system, so the control process could be easier and entirely executed (Leman, 1998).
Besides having those strengths, the integrated system has
also weaknesses. Since the saved data is managed centrally, when data damage occurred, it will disturb the entirely processes. Therefore, to manage the database needs
an administrator whose main function is to maintain and
duplicate data on the system (Tajuddin M, 2005). Also, an
administrator has another function to be a regulator of
rights/license and security of data in an education environment, and so is in Mataram.
Department that responsible in education management in Mataram is the Diknas (National Education),
supervises three Branch Head-offices of Education Service [Kepala Cabang Dinas (KCD)] those are KCD of
Mataram, KCD of Ampenan, and KCD of Cakranegara.
Mataram has 138 state elementary schools (SDN) with
40.604 students, 6 private elementary schools having 1.342
students. Meanwhile the state Junior school (SMPN) it
has 21 schools with 15.429 students, and private junior
school amounts to 8 schools having 1.099 students. It has
8 state high schools (SMAN) having 5.121 students, 16
schools of private having 9.623 students, state vocational
high school (SMKN) amounts to 7 schools having 4.123
students and the privates have 6 schools with 1.568 students (Profil Pendidikan, 2006).
In 2006, granted by the Decentralized Basic Education Project (DBEP), Mataram Education Service has
wireless network connected with three sub-district Branch
Offices of Service, those are Mataram, Ampenan, and
Cakranegara, which will be continued in 2007 by connecting sub-district for Junior and High schools, and cluster
for elementary schools. Three constructions of wireless
network will be built in 2007 for high school sub-rayon, 6
constructions for junior school sub-rayon, and 15 constructions for elementary school cluster; the total is 24
wireless network connections for schools and three for
the KCDs. (RPPK, 2007). To know the wireless network
constructed in supporting the education system of
Mataram, it needs development of an education information system that integrated each other in a system named
Education Information System of Mataram. So the data
and information access, either for schools, Diknas, and
community could be executed swiftly in occasion of improving education services in Mataram based on the information technology or wireless network.
B. PROBLEM FORMULATION
From the above background, the problem can be formulated as follows: "Is the construction of a school-based education information system using a wireless network in Mataram able to process education information rapidly, properly, and accurately in improving educational services?"
C. PURPOSE OF RESEARCH
1. Procuring an information system application integrated between one subsystem and the others, which will provide the following data on educational units:
· Data on the number of teachers and their functional positions at the respective educational unit, the number of elementary school class teachers, and the number of teachers per class subject for junior and senior high schools.
· Data on the students at each education unit, in detail.
· Data on the number of classrooms, furniture, laboratories, educational media, and sport equipment available at the education units.
· Data on the library at the respective educational unit.
· Data on each school location using GIS.
· Data on incoming student registration for the transparency and accountability of incoming student enrollment.
2. Unified data from the respective educational units across the Mataram local government level, especially the Education Service of Mataram.
3. Data processed at the Education Service of Mataram to become an education management information system of Mataram.
D. METHOD OF RESEARCH
1. Kind of Research
The kind of research conducted is survey research, that is, taking samples from a population using questionnaires as a suitable data-collection tool (Singarimbun, 1989). This survey research is explanatory or confirmatory, providing an explanation of the relations between variables through research and examination of hypotheses as formulated previously.
2. Location of Research
The research is located at schools in Mataram, covering state and private elementary schools whose data access goes directly to the KCDs, and the state and private junior and senior high schools whose data access goes directly to the Education Service of Mataram.
3. Procedure of Data Collecting and Processing
Data is collected in the following ways:
· Interview with a source who is a leader, such as a school master, the head of an education branch office, or the head of the Education Service of Mataram.
· Documentation: books containing the procedures and rules for education equipment and infrastructure, functional positions, etc.
· Questionnaires, disseminated at the Education Service of Mataram prior to wireless network preparation; the questionnaires contain questions on the data flow diagram, the hardware and software used, etc.
· Observation: studying the data flow diagram, implemented from data collection and processing through documenting and reporting.
4. Data Analysis
a. System Planning
It uses the System Development Life Cycle (SDLC) methodology with structured and prototyping techniques. As currently analyzed, the education service has several departments working independently, which makes the process take longer. The new system connects each department of the education subsystem with the others, so that each of them knows the available information. To keep the data secure, every department or person involved should have an ID number used as a password to access the related departments.
b. Validity Examination and Sense Analysis
Analysis of the information system comprises:
- Filling-out procedure of education data
- Processing procedure of education data
- Reporting procedure of education data
c. System Analysis, covering:
§ Need analysis, to produce the system need specification
§ Process analysis, to produce:
- Context Diagram
- Documentation Diagram
- Data Flow Diagram (DFD)
§ Data analysis, to produce:
- Entity Relationship Diagram (ERD)
- Structure of Data
The system analysis aims to analyze the running system to understand the existing condition. This analysis usually uses a document flow diagram: the flow of documents from one department to another can be seen clearly, as can the manual data storage. This process analysis is also applied to the equipment and infrastructure. The results of this analysis are then used to design the required information system, to make an "Informative System Construction".
E. RESULT AND DISCUSSION
1. Construction of Wireless Network
Chart 1. Construction of Wireless Network
2. Development of Hardware
· Addition of the Triangle facility at the Education Service office of Mataram as the center of management.
· Addition of antenna and receiver radio at the respective elementary school clusters, 9 units for each sub-district and 3 units for the cluster leaders.
· Procurement of 6 units of antenna and receiver radio at the respective sub-districts of junior schools for the sub-district leaders.
· Procurement of 6 units of antenna and receiver radio, 2 units for each sub-district of high schools, in 2007.
Chart 2. Hardware network
3. Development of Software
· Mataram Area Network (MAN) based incoming student enrollment
· School-based education equipment and infrastructure
· School-based functional positions of teachers
· Geographic Information System (GIS) based school mapping
4. Development Model Design
a. System Development
Problem solving and fulfillment of user needs are the main purposes of this development. Therefore, the development should observe the following information system principles:
1. Involving the system users.
2. Conducting work phases, for easier management and improved effectiveness.
3. Following the standard, to maintain development consistency and documentation.
4. Treating system development as an investment.
5. Having an obvious scope.
6. Dividing the system into a number of subsystems, to facilitate system development.
7. Flexibility: easily changeable and improvable in the future.
Besides fulfilling these principles, system development should also apply an information system methodology. One very popular methodology is the System Development Life Cycle (SDLC), using structured and prototyping techniques.
b. Implementation
This phase begins with creating the database in SQL by converting the database design into tables, adding integrity constraints, and making the required functions and views to combine tables. The application software uses the PHP language to access the database. The credit-point calculation process is the important process in this module.
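A minimal sketch of the tables-plus-view idea described above is given below, using Python's built-in sqlite3 purely for illustration; the table and column names are hypothetical, not the system's actual schema (which the paper says is implemented in SQL and accessed from PHP).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE school  (school_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE teacher (teacher_id INTEGER PRIMARY KEY,
                      name       TEXT NOT NULL,
                      -- integrity constraint tying a teacher to a school
                      school_id  INTEGER NOT NULL REFERENCES school(school_id));
-- a view combining tables, as the implementation phase describes
CREATE VIEW teacher_per_school AS
  SELECT s.name AS school, COUNT(t.teacher_id) AS teachers
  FROM school s LEFT JOIN teacher t ON t.school_id = s.school_id
  GROUP BY s.school_id;
""")
conn.execute("INSERT INTO school VALUES (1, 'SDN 1 Mataram')")
conn.execute("INSERT INTO teacher VALUES (1, 'Teacher A', 1)")
print(conn.execute("SELECT * FROM teacher_per_school").fetchall())
```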
a. Means and Infrastructure Information System
Prime Display
Figure 3. Prime Display of SIMAP Mataram
Data Input Menu
The Data Input Menu is used to enter means and infrastructure data: for elementary schools this is done by the KCD of the respective sub-district, while for junior and high schools it is done by the respective educational unit, as seen in the following figures:
1. Means of education data input
Figure 4. Means of education data input
2. Data input of teacher profile on the education unit
Figure 5. Educational staff (teacher)
3. Land infrastructure data input
Figure 6. Land infrastructure data
4. Land infrastructure data input
Figure 7. Land infrastructure data
5. Student data input
Figure 8. Student data input
Input design is a data display designed to receive data input from the user as the data entry administrator. This input design should be clear for users, both in its kind and in the data to be entered. Meanwhile, the input design of the Credit Point Fulfillment System has 2 designs: Curriculum Vitae and obtained credit points.
Figure 9. Teacher/instructor identity data input
Figure 10. Teacher/instructor resume data input
B. Library Information System
1. Prime Menu of Library Information System
It displays various menus for procurement, processing, tracing, membership and circulation, rule catalogue, administration, and security. This menu display can be set up according to the user's access rights (privileges); for example, only the trace menu is activated for public users, as shown in the Figure below:
Figure 11. Prime menu of Library Information System
2. Administration, Security, and Access Limitation
This feature accommodates functions to handle the limitations and authority of users. It classifies users and provides them with identification and a password. It also provides self-access management, development, and processing as required.
Figure 12. Key-word facility
Figure 13. Administrator menu facility
3. Reference Procurement
This feature accommodates functions to record the request, ordering, and payment of references, including acceptance and reporting of the procurement process.
4. Reference Processing
This feature accommodates the input of books/magazines into the database, status tracing of the processed books, barcode input of book/cover numbers, and making the catalogue card, barcode label, and book call number.
Figure 14. Catalogue category data input facility
Figure 15. Catalogue data input facility
5. Reference Tracing
Tracing or re-searching the kept collections is an important matter in the library. This feature accommodates tracing by author, title, publisher, subject, published year, etc.
6. Management of Member and Circulation
This is the center point of the library automation system, because many manual activities are replaced here by the computer. It provides various features, i.e. input and search of library member data, recording of book borrowing and returning (using barcode technology), fine calculation for late book returns, and book ordering.
Figure 16. Borrowing data input facility
Figure 17. Borrower's return catalogue facility
7. Reporting
The reporting system eases the librarian's work, in which reports and recapitulations are made automatically according to managed parameters. It is very helpful in analyzing library activity processes; for example, the librarian does not need to open thousands of transactions manually to find the collection borrowing transactions of one category, or to check a member's activity for a year.
F. CONCLUSION
The Construction of School Base Education Information Using Wireless Network in Mataram is an integration of hardware and software supporting education, comprising modules such as:
1. Education equipment and infrastructure such as land, buildings, sport fields, etc.
2. Educators and educational supporting staff.
3. Student data at each educational unit.
4. Data on teachers' functional positions at the education unit and at the level of the Mataram Education Service.
5. On-line system of Incoming Student Acceptance.
6. E-learning based learning.
G. REFERENCES
Alexander, 2001. Personal Web Server. http://www.asp101.com/.
Anonim, 2003. Undang-Undang No.20 tahun 2003 tentang Sistem Pendidikan Nasional.
Anonim, Perencanaan, Mataram.
Anonim, 2005. Profil Kota Mataram, Pemerintah Kota
Mataram NTB.
Anonim, 2003. Pengumpulan dan Analisi Data Sistem
Informasi Sumber Daya Manusia, PT. Medal
Darma Buana, Pengembangan Intranet LIPI
Anonim, 2005. RencanaPengembangan Pendidikan Kota
Mataram, Dinas Pendidikan Kota Mataram Sub
Bagian Perencanaan.
Anonim, 2005. Peraturan Pemerintah Nomor 19 Tentang
Standar Nasional Pendidikan (SNP).
Curtis, G., 1995. Business Information Systems: Analysis, Design and Practice, 2nd Edition. Addison Wesley.
DEPARPOSTEL, 1996. Nusantara-2, Jalan Raya Lintasan Informasi: Konsep dan visi masyarakat informasi nasional. Deparpostel, Jakarta.
Hadihardaja, J., 1999. Pengembangan Perguruan Tinggi
Swasta. Raker Pimpinan PTN Bidang Akademik,
Jakarta 28-30 November 1999.
Hendra dan Susan Dewichan, 2000. Dasar - dasar HTML.
[email protected].
HendScript,http://indoprog.terrashare.com/.Http://
www.informatika.lipi.go.id/perkembangan-tanggal
8 Maret 2006
Kendall, J., 1998. Information System Analysis and Design, Prentice Hall, 3rd Edition.
Leman, 1998. Metodologi Pengembangan Sistem Informasi,
Elekmedia Komputindo Kelompok Gramedia,
Jakarta.
Miarso,Yusufhadi, 1999, Penerapan Teknologi Jakarta.
Tajuddin,M.,A Manan, 2005. Sistem Informasi Pemasaran
Pariwisata Kabupaten Lombok Barat
Menggunakan Internet, Jurnal Matrik STMIK
Bumigora Mataram NTB.
Tajuddin,M. Abdul Manan,2004. Rancangan dan Desain
Sistem Informasi Manajemen Perguruan Tinggi
Swasta Berbasi Web di STMIK Bumigora
Mataram, Valid Akademi Manajemen Mataram
NTB.
Tajuddin,M.,2003. Penggunaan Multi Media dan
Sibernetik Untuk Meningkatkan Kemampuan
Mahasiswa, Jurnal Matrik STMIK Bumigora
Mataram NTB.
Tirta K. Untario. 2000. ASP Bahasa Pemrograman
Web.htt://www.aspindonesia.net/.
Wijela, R. Michael, 2000. Internet dan Intranet. Penerbit
Dinastindo, Jakarta.
Wirakartakusumah,1998, Pengertian Mutu dalam
Pendidikan, Lokakarya MMT IPB, Kampus
Dermaga Bogor, 2-6 Maret.
Singarimbun, Masri dan Sofian Effendi, 1986, Metode Penelitian Survey, LP3ES, Jakarta.
Slamet,Margono,1999, Filosofi Mutu dan Penerapan
Perinsip-Perinsip Manajemen Mutu, Terpadu, IPB
Bogor.
Tajuddin, M., A. Manan, Agus P., Yoyok A., 2007. Rancang Bangun Sistem Informasi Sarana Prasarana, Nacsit Universitas Indonesia, Jakarta.
Tajuddin,M.,Abdul Manan, 2006. Rancangan Sistem
Informasi Sumber Daya Manusia (SISDM)
Berbasis Jaringan Informasi Sekolah, Jurnal
Matrik, STMIK Bumigora Mataram NTB.
Tajuddin,M., Abdul Manan, 2005. Desain dan
Implementasi Sistem Informasi Manajemen
Perguruan Tinggi Swasta Berbasi Web di STMIK
Bumigora Mataram, Jurnal Matrik STMIK
Bumigora Mataram NTB.
Paper
Saturday, August 8, 2009
15:10 - 15:30
Room L-212
A SURVEY OF CLASSIFICATION TECHNIQUES AND APPLICATION IN
BIOINFORMATICS
Ermatita
Information systems of Computer science Faculty Sriwijaya University
(Student of Doctoral Program Gadjah Mada university)
[email protected]
Edi Winarko
Computer Science of Mathematics and Natural Sciences Faculty Gadjah Mada University
Retantyo Wardoyo
Computer Science of Mathematics and Natural Sciences Faculty Gadjah Mada University
Abstract
Data mining is a process for extracting information from a set of data, and data in the field of bioinformatics is no exception. Classification, one of the information-extraction techniques in data mining, has been of much help in finding information to make accurate predictions. Much research on classification techniques in the field of bioinformatics has been done, developing methods for recognizing and classifying data patterns in bioinformatics. Classification methods such as Discriminant Analysis, K-Nearest-neighbor Classifiers, Bayesian classifiers, Support Vector Machines, Ensemble Methods, Kernel-based methods, and linear programming have undergone a great deal of development by researchers to obtain more accurate results on the input data.
Keywords: Bioinformatics, classification, data mining, K-Nearest-neighbor Classifiers, Bayesian classifiers, Support Vector Machine, Ensemble Methods, Kernel-based methods, and linear programming
I. INTRODUCTION
Data mining is defined as the process of automatically
extract information from a subset of the data was large
and find patterns of interest (Nugroho, U.S., 2008) (Abidin,
T, 2008). Classification techniques in data mining has been
applied in the field bioinformatics [24].
Bioinformatics developed from human needs to
analyze these data that the quantity increases. For the
data mining as one of the techniques have an important
role in bioinformatics.
In this paper will focus on introducing the
methodology in the field of classification in bioinformatics.
Activity in the classification and prediction is for a model
that is able to input data on a new bioinformatics that
have never been there. Classification of data, the data is
not been there.
The model that results from classification is called a classifier. Several models have been used for classifying bioinformatics data, for example Discriminant Analysis, Decision Trees, Neural Networks, Bayesian Networks, Support Vector Machines, k-Nearest Neighbor, etc. These are used for categorical (discrete) input data, while for numerical data regression analysis is usually used. These methods have been applied in many areas of bioinformatics, such as pattern recognition in genes. Many studies have been done on the development of pattern recognition and classification methods for gene expression and microarray data: Jain, A.K. (2002) conducted a review of statistical methods; Califano et al. studied Analysis of Gene Expression Microarrays for Phenotype Classification; Liu, Y. (2002) and Rogersy, S. did research on Class Prediction with Microarray Datasets; Lai, C. (2006) and Lee, G. et al. (2008) studied Nonlinear Dimensionality Reduction; and Nugroho, A.S., et al. (2008) applied SVM to microarray data. Each study has presented results developed to obtain optimal results in the classification process.
II. CLASSIFICATION IN BIOINFORMATICS
Classification models are needed for classification, including the classification of data in bioinformatics. Building a classification model from an input data set requires an approach called a classifier, or classification technique [32].
Data mining is applied in the areas of bioinformatics, for example, to perform feature selection and classification. An example of such an implementation was done by Nugroho, A.S., et al. (2008); their research used data divided into two groups: a training set (27 ALL and 11 AML) and a test set (20 ALL and 14 AML). Each sample consisted of a 7129-dimensional vector of gene expression values derived from the patient as a result of analysis with an Affymetrix high-density oligonucleotide microarray. The two genes considered are No. 7 (AFFX-BioDn-3_at) and No. 4847 (X95735_at). The distribution of the data in the plane formed by the two genes is shown in Figure 1: Figure 1a shows that the data of the two classes in the training set can be separated linearly and perfectly, while Figure 1b shows that the data in the test set cannot be separated linearly.
Figure 1 Distribution of data in the training set (a) and test set (b) in the plane formed by gene No. 7 and gene No. 4847
Golub, T.R., et al. (1999) conducted classification research in the field of bioinformatics in which a generic approach was used for the classification of cancer based on the monitoring of gene expression through a DNA microarray, testing the application on human acute leukemia [8].
Classification techniques are currently being developed mainly in the field of bioinformatics, and much research has contributed to this development, for example on Discriminant Analysis, Nearest-neighbor Classifiers, Bayesian classifiers, Artificial Neural Networks, Support Vector Machines, Ensemble methods, Kernel-based methods, and linear programming.
All of these classification techniques have been developed rapidly, driven by the need to handle bioinformatics input under particular conditions so that the results obtained are accurate.
III. CLASSIFICATION TECHNIQUES AND APPLICATION IN BIOINFORMATICS
3.1 Discriminant Analysis
Discriminant analysis, according to Tan et al. (2006), is the study of classification in statistics. Research on the discriminant analysis classification method has been done, and the method has been further developed in bioinformatics. Lee et al. (2008) experimented with Uncorrelated Linear Discriminant Analysis (ULDA) and Diagonal Linear Discriminant Analysis (DLDA), which are developments of LDA. The experimental results show that ULDA performs less well in the case of small feature sizes and is very good for a large number of genes, and conversely DLDA shows strong performance for a small number of features [32].
3.2 K-Nearest-neighbor Classifiers
K-Nearest-Neighbor Classifiers are a group of non-parametric classifiers that have been applied in information retrieval (Li, T., et al., 2004). Research on and development of this classification technique has been performed: Aha (1990), in (Tan, P.N., 2006), presents a theoretical and empirical evaluation of instance-based methods, and PEBLS, developed by Cost (1993), is a KNN variant that can handle data sets containing nominal attributes [32].
In the field of bioinformatics, Slonim, D.K., et al. have done research on diagnosing and treating cancer through gene expression with the neighborhood analysis classification method.
The picture below shows the neighborhood around a hypothetical difference in class c and a random difference of class c' constructed in the study.
Figure 2. Hypothetical class c and differences of classification of gene expression
In addition, Li, T., et al. (2004) conducted studies on feature selection; accuracy increased after feature selection, since it eliminates noise and reduces the number of insignificant dimensions, thus tackling the curse of dimensionality. KNN in this case works on the geometric distance between samples. The accuracy of KNN increased on all datasets except the HBC and Lymphoma datasets [18].
The number of errors made by KNN is bounded: asymptotically it is at most about twice the Bayes error rate.
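A minimal sketch of the k-nearest-neighbor idea discussed above, using scikit-learn on synthetic two-gene data; the data and parameters are illustrative assumptions, not the cited studies' datasets.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Synthetic "expression levels" of two genes for two classes (e.g. ALL vs AML).
X_all = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(30, 2))
X_aml = rng.normal(loc=[1.5, 1.5], scale=0.5, size=(30, 2))
X = np.vstack([X_all, X_aml])
y = np.array([0] * 30 + [1] * 30)

knn = KNeighborsClassifier(n_neighbors=3)   # classify by the 3 nearest samples
knn.fit(X, y)
print(knn.predict([[0.2, 0.1], [1.4, 1.6]]))  # expected: [0 1]
```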
3.3 Bayesian classifiers
Bayesian classifiers are a classification technique related to statistics (statistical classifiers), as discussed by Santosa, B. (2007), Han, J. (2001), and Tan, P.N. (2006); this approach can be used to predict class membership [11].
Krishnapuram, B., et al. (2003) revealed that the Bayesian approach may help when the classifier is very simple, effectively capturing the structure of the underlying relationship. Bayesian methods can also help by introducing some type of prior knowledge into the design phase.
Wang, H. (2008) revealed that since the G-based Bayes classifier is equivalent to the P-based Bayes classifier according to Corollary 3, the NCM-weighted kNN approximates a P-based Bayes classifier, and in the limit its performance will be close to the P-based Bayes classifier; consequently, good performance of NCM-weighted kNN can be expected in practice. A Bayesian belief network provides a graphical representation of the probabilistic relationships among a set of random variables. Bayesian belief networks can be used in the field of bioinformatics, for example to detect heart disease [33].
Baldi, P., et al. (2001) developed a Bayesian probabilistic framework for microarray data analysis. Simulation shows that this point estimate, combined with a t-test, provides a systematic inference approach that compares well with the simple t-test or fold-change methods [1].
Bayes classifiers are very well suited to text classification. Additional background on Bayesian belief networks is given by Heckerman (1997) in Tan, P.N. (2006).
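As a small illustration of a Bayesian (naive Bayes) classifier on the same kind of two-gene toy data; this is an assumption for illustration, not the cited studies' setup.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], 0.5, size=(30, 2)),
               rng.normal([1.5, 1.5], 0.5, size=(30, 2))])
y = np.array([0] * 30 + [1] * 30)

nb = GaussianNB().fit(X, y)
# Posterior class-membership probabilities, the quantity Bayesian classifiers predict.
print(nb.predict_proba([[0.2, 0.1]]).round(3))
```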
3.4 Support Vector Machine
The Support Vector Machine is a relatively new classification model. This technique has been used to solve problems in bioinformatics, in particular gene expression analysis. It seeks to find the separating classification function (classifier) that optimally separates two sets of data from two different classes (Vapnik, 1995).
Nugroho, A.S. (2003) reviews the SVM as follows:
Figure 3. SVM tries to find the best hyperplane separating the two classes -1 and +1
The figure above shows some patterns belonging to two classes, +1 and -1: patterns in class -1 are drawn with one symbol and patterns in class +1 with a box symbol. The classification problem can be translated into the effort to find a line (hyperplane) that separates the two groups.
The separating hyperplane between the two classes can be found by measuring the margin of the hyperplane and finding its maximum. The margin is the distance between the hyperplane and the nearest pattern from each class; these nearest patterns are the support vectors. The solid line in the figure shows the best hyperplane, located right in the middle between the two classes, while the dots inside the black circles are the support vectors. The core effort of the SVM process is finding the location of this hyperplane [24].
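A minimal sketch of finding the maximum-margin hyperplane with a linear SVM in scikit-learn; the toy data are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Two linearly separable toy classes labelled -1 and +1, as in Figure 3.
X = np.vstack([rng.normal([-2.0, -2.0], 0.5, size=(20, 2)),
               rng.normal([+2.0, +2.0], 0.5, size=(20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

svm = SVC(kernel="linear", C=1.0).fit(X, y)
print("hyperplane w:", svm.coef_[0], "b:", svm.intercept_[0])
print("number of support vectors (nearest patterns):", len(svm.support_vectors_))
```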
Many studies have been done on the development of the SVM technique, which is based on a linear classifier: from Boser, Guyon, and Vapnik up to developments that handle non-linear problems by incorporating the concept of the kernel trick in a high-dimensional space. Cristianini and Shawe-Taylor (2000) presented the concepts of SVM and kernels for solving classification problems. Fung and Mangasarian (2002) developed an SVM with the Newton method for feature selection, named the Newton Method for Linear Programming SVM (NLPSVM); this method requires only an algorithm for solving a linear problem quickly and easily, and it can be applied effectively in high-dimensional input spaces such as microarray data. It can be applied to gene expression data analysis, and the technique can also be used effectively for the classification of large data sets whose input space dimension is smaller (Fung, G., 2002). Several evaluations of this method in bioinformatics applications have been made. Krishnapuram et al. (2003) developed a Bayesian generalization of the SVM that is optimized to simultaneously identify a non-linear classifier and an optimal selection of features through the optimization of a single Bayesian likelihood function. Bredensteiner (1999) showed how the Linear Programming (LP) based approach and the quadratic programming based SVM can be combined into a new approach to the multiclass problem [14].
Dönnes (2002) used the SVM approach to predict the binding of peptides to Major Histocompatibility Complex (MHC) class I molecules; this approach is called SVMHC [8].
In its development, the SVM has been widely used for various classification tasks, particularly in bioinformatics for data analysis and feature selection.
Liu (2005) combines a genetic algorithm (GA) and All-Paired (AP) Support Vector Machines for multiclass cancer categorization. Predictive features can be determined automatically through an iterative GA/SVM, leading to very compact sets of non-redundant cancer-relevant genes with the best classification performance reported to date [21].
Cai describes a method for classifying normal and cancer tissue based on the gene expression patterns obtained from DNA microarray experiments; in this research he compared two supervised machine learning techniques, support vector machines and a decision tree algorithm [3].
3.5 Ensemble Methods
This technique improves classification accuracy by combining the predictions of multiple classifiers. The figure below gives a technical overview of the ensemble [5]. If there are k features and n classifiers, there are k × n feature-classifier combinations, and there are C(k × n, m) possible ensemble classifiers when m feature-classifier combinations are selected for the ensemble. The classifiers are then trained using the selected features, and finally majority voting is applied to combine the outputs of these classifiers. After the several feature-classifier pairs have been trained and their outputs obtained independently, the final answer is determined by a combination module, where the majority voting method is adopted.
Figure 4 Overview of the Ensemble classifier
In the field of bioinformatics, Kim (2006) adopted correlation analysis as the feature selection method in an ensemble classifier used for the classification of DNA microarray data [13].
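A small sketch of the majority-voting idea described above, combining a few of the classifiers surveyed in this section; it is illustrative only, and the feature-subset selection scheme of [5] is not reproduced.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = np.vstack([rng.normal([0.0, 0.0], 0.7, size=(40, 2)),
               rng.normal([1.5, 1.5], 0.7, size=(40, 2))])
y = np.array([0] * 40 + [1] * 40)

# Hard voting = majority vote over the member classifiers' predicted labels.
ensemble = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("svm", SVC(kernel="linear"))],
    voting="hard").fit(X, y)
print(ensemble.predict([[0.1, 0.2], [1.6, 1.4]]))  # expected: [0 1]
```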
3.6 Kernel-based Methods
When a case involves non-linear classification and is difficult to separate linearly, kernel methods can be used, as introduced by Schölkopf and Smola (2002). In a kernel method, the input data x are mapped to a higher-dimensional feature space F through a map φ as follows: φ: x → φ(x), so that data x in the input space becomes φ(x) in the feature space [28].
Much research has combined the kernel method with previously existing methods, such as kernel K-Means, kernel PCA, and kernel LDA. Kernelization can improve on the previous method.
Huilin (2006) modified KNN with kernel techniques to perform cancer classification in bioinformatics, developing a novel distance metric for the KNN scheme. In essence, it increases the class separability of the data in feature space and significantly improves the performance of KNN.
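One way such a kernel-based distance can be written is sketched below: the squared distance between φ(x) and φ(y) in feature space is K(x,x) - 2K(x,y) + K(y,y), so it can be computed from kernel evaluations alone and plugged into a KNN scheme. The RBF kernel, its width and the sample vectors are illustrative assumptions, not the metric developed in the cited work.

# Hedged sketch: a kernel-induced distance usable as a KNN metric.
# d(x, y)^2 = K(x, x) - 2 K(x, y) + K(y, y); illustrative values only.
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_distance(x, y, kernel=rbf_kernel):
    # Distance between phi(x) and phi(y) in feature space, computed
    # entirely through kernel evaluations (the "kernel trick").
    d2 = kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y)
    return np.sqrt(max(d2, 0.0))

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 0.8, 1.5])
print(kernel_distance(x, y))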
3.7 Linear Programming
Mangasarian et al. (1994) conducted research using linear programming to classify breast cancer data and to improve accuracy and objectivity in the diagnosis and prognosis of cancer. Using linear programming based classification, they built a system with high diagnostic accuracy for deciding on the surgical procedure.
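A generic illustration of linear-programming classification is sketched below: it separates two point sets by minimizing the average violation of the separating-plane constraints, solved with scipy.optimize.linprog. It is a common textbook formulation under made-up data, not Mangasarian's exact model.

# Hedged sketch: LP classification of two point sets A (class +1) and B (class -1).
# Minimize the average violations of A w >= gamma + 1 and B w <= gamma - 1.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = rng.normal(loc=+1.0, size=(30, 2))      # class +1 points (made up)
B = rng.normal(loc=-1.0, size=(25, 2))      # class -1 points (made up)
m, k, n = len(A), len(B), A.shape[1]

# Decision vector x = [w (n), gamma (1), u (m), v (k)]; u, v are slacks.
c = np.concatenate([np.zeros(n + 1), np.full(m, 1.0 / m), np.full(k, 1.0 / k)])

# -A w + gamma - u <= -1   and   B w - gamma - v <= -1
ub_A = np.hstack([-A, np.ones((m, 1)), -np.eye(m), np.zeros((m, k))])
ub_B = np.hstack([B, -np.ones((k, 1)), np.zeros((k, m)), -np.eye(k)])
A_ub = np.vstack([ub_A, ub_B])
b_ub = -np.ones(m + k)

bounds = [(None, None)] * (n + 1) + [(0, None)] * (m + k)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w, gamma = res.x[:n], res.x[n]
print("separating plane: w =", w, "gamma =", gamma)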
IV. CONCLUSION
Techniques developed in data mining, such as classification, have helped solve many problems in the field of bioinformatics.
In microarray data analysis, many studies have been carried out to obtain optimal classification results, so that accurate data are obtained and a valid analysis can be made. The most developed classification approach at this time is the support vector machine, whose algorithm is combined with other algorithms to produce optimal results.
References
[1] Baldi, P. and Long, A. D., "A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes", Bioinformatics, Vol. 17, No. 6, 2001, pages 509–519.
[2] Bredensteiner, E. J. and Bennett, K. P., "Multicategory Classification by Support Vector Machines", Computational Optimization and Applications, 12, 53–79, 1999, Copyright 1999 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.
[3] Cai, J., Dayanik, A., Yu, H., et al., "Classification of Cancer Tissue Types by Support Vector Machines Using Microarray Gene Expression Data", Department of Medical Informatics and Department of Computer Science, Columbia University, New York, NY 10027.
[4] Chapelle, O. and Vapnik, V. “ Model Selection for
Support Vector Machines”, AT&T Research Labs
100 Schulz drive Red Bank, NJ 07701, 1999
[5] Cho, S. and Won, H, “Machine Learning in DNA
Microarray Analysis for Cancer Classification”,
This paper appeared at First Asia-Pacific
Bioinformatics Conference, Adelaide, Australia.
Conferences in Research and Practice in
Information Technology, Vol. 19. Yi-Ping Phoebe
Chen, Ed.,2003
[6] Christmant, C., Fisher, P., Joachims, T., "Classification based on the support vector machine, regression depth, and discriminant analysis", http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.3435
[7] Califano, A., Stolovitzky, G., Tu, Y., "Analysis of Gene Expression Microarrays for Phenotype Classification", available on http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.5048
[8] Dönnes, P. and Elofsson, A., "Prediction of MHC class I binding peptides, using SVMHC", BMC Bioinformatics, 3:25, 2002, available from: http://www.biomedcentral.com/1471-2105/3/25
[9] Fung, G. and Mangasarian, O. L., "A Feature Selection Newton Method for Support Vector Machine Classification", Technical Report 02-03, September 2002.
[10] Golub, T. R., Slonim, D. K., Tamayo, P., et al., "Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression Monitoring", Science, Vol. 286, 15 October 1999, available on http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.115.7807
[11] Han, J. and Kamber, M., "Data Mining: Concepts and Techniques", Morgan Kaufmann, San Francisco, 2001.
[12] Hsu, H.H. et all, “Advanced Data Mining Technologies
in Bioinformatics”, Idea Group publishing, 2006
[13] Kim, K. and Cho, S., "Ensemble classifiers based on correlation analysis for DNA microarray classification", Neurocomputing, 70, pp. 187–199, 2006, available online at www.sciencedirect.com
[14] Krishnapuram, B. Carin, L. and Hartemink, A. J. “Joint
Classifier and Feature Optimization for Cancer
Diagnosis Using Gene Expression Data”,
RECOMB’03, April 10–13, 2003, Berlin, Germany,
2003, Copyright 2003 ACM 1581136358/03/0004
[15] Lai, C., Reinders, M. J. T., van 't Veer, L. J. and Wessels, L. F. A., "A comparison of univariate and multivariate gene selection techniques for classification of cancer datasets", BMC Bioinformatics, 7:235, 2 May 2006, http://www.biomedcentral.com/1471-2105/7/235
[16] Lee, G., Rodriguez, C. and Madabhushi, A., "Investigating the Efficacy of Nonlinear Dimensionality Reduction Schemes in Classifying Gene and Protein Expression Studies", IEEE/ACM Transactions on Computational Biology and Bioinformatics, Vol. 5, No. 3, pp. 368–384, July-September 2008.
[17] Ljubomir, J. Buturovic, “PCP: a program for supervised
classification of gene expression profiles”,
Bioinformatics, Vol. 22 , No. 2 , pp: 245–247, 2006,
doi:10.1093/bioinformatics/bti760
[18] Liu, H., Li, J. and Wong, L., "A Comparative Study on Feature Selection and Classification Methods Using Gene Expression Profiles and Proteomic Patterns", Genome Informatics, 13: 51–60, 2002, available on http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.99.6529
[19] Li, T. Zhang, C. and Ogihara, M. “A comparative
study of feature selection and multiclass
classification methods for tissue classification
based on gene expression”, Bioinformatics, Vol. 20,
No. 15, pages 2429–2437, 2004, doi:10.1093/
bioinformatics/bth267
[20] Liu, Y. “The Numerical Characterization and Similarity
Analysis of DNA Primary Sequences”, Internet
Electronic Journal of Molecular Design 2002, ,
Volume 1, Number 12, Pages 675–684, December
2002
[21] Liu, J. J., Cutler, G., Li, W., et al., "Multiclass cancer classification and biomarker discovery using GA-based algorithms", Bioinformatics, Vol. 21, No. 11, 2005, pages 2691–2697, doi:10.1093/bioinformatics/bti419
[22] Li, F. and Yang, Y. “Using Recursive Classification to
Discover Predictive Features”, ACM Symposium
on Applied Computing, 2005
[23] Mangasarian, O. L., Street, W. N., Wolberg, W. H., "Breast Cancer Diagnosis and Prognosis via Linear Programming", 1994, Computer Sciences Department, University of Wisconsin, USA, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.133.2263
[24] Nugroho, A. S., Witarto, A. B. and Handoko, D., "Support Vector Machine: Teori dan aplikasinya dalam Bioinformatika", http://ilmukomputer.com, December 2008 (in Indonesian).
[26] Robnik, M. Sikonja, Member, IEEE, and I. Kononenko,
“Explaining Classifications for Individual
Instances”, IEEE Transactions on Knowledge and
Data Engineering, VOL. 20, No. 5, pp:589-600, May
2008
[27] Rogersy, S, Williamsz R. D, and Campbell, C, “Class
Prediction with Microarray Datasets” available on
http://citeseerx.ist.psu.edu/viewdoc/
summary?doi=10.1.1.97.3753
[28] Santosa, B. “Data Mining: Technical data for the
purpose of the business theory and application”,
Graha Ilmu, Yogyakarta, 2007
[29] G.P. and Grinstein, G., "A Comprehensive Microarray Data Generator to Map the Space of Classification and Clustering Methods", June 2004, available on http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.99.6529
[30] Slonim, D. K., Tamayo, P., et al., "Class Prediction and Discovery Using Gene Expression Data", 2000, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.3435
[31] Statnikov, A. Aliferis, C. F. Tsamardinos, I. Hardin, D.
and Levy, S. “ A comprehensive evaluation of
multicategory classification methods for microarray
gene expression cancer diagnosis”, Bioinformatics,
Vol. 21 no. 5, pages 631–643 , 2005 doi:10.1093/
bioinformatics/bti033
[32] Tan, P.N. Steinbach, M.. and Kumar, V. “Intruduction
To Data Mining”, Pearson Education,Inc, Boston,
2006
[33] Wang, H. and Murtagh, F. “A Study of the
Neighborhood Counting Similarity”, IEEE
Transactions on Knowledge and Data Engineering,
VOL. 20, No. 4, pp:469-461, April 2008
[34] Xiong, H. and Chen, X., "Kernel-based distance metric learning for microarray data classification", BMC Bioinformatics, 7:299, 2006, doi:10.1186/1471-2105-7-299, available from: http://www.biomedcentral.com/1471-2105/7/299
[25]Nowicki, R. “On Combining Neuro-Fuzzy
Architectures with the Rough Set Theory to Solve
Classification Problems with Incomplete Data”,
IEEE Transactions on Knowledge and Data
Engineering, VOL. 20, No. 9, pp:1239-1253, Sep
2008
Paper
Saturday, August 8, 2009
16:25 - 16:45
Room AULA
Engineering MULTI DESKTOP LINUX ON WINDOWS XP USING THE API
Programming – VISUAL BASIC
Junaidi, Sugeng Santoso, Euis Sitinur Aisyah
Information Technology Department – Faculty of Computer Study STMIK Raharja
Jl. Jenderal Sudirman No. 40, Tangerang 15117 Indonesia
[email protected], [email protected]
ABSTRACT
The desktop is familiar to computer users: it is the display screen that serves as the medium for operating a GUI-based operating system. The Linux operating system and all of its derivatives inherently support a multi-desktop arrangement, in which a user can have several active desktops at the same time. This can make it easier for users to group the applications they have opened, so that the screen does not look untidy. For users of Windows-based operating systems, however, multi-desktop operation is not available. Visual Basic, with its ability to access the Windows API, can be used to create an application that provides multiple desktops on Windows just like the multi-desktop facility in Linux. This is needed because Windows users often feel confused when many applications are opened at the same time, since the desktop becomes cluttered with the number of running applications. This paper discusses the technical implementation of Linux-style multi-desktops on Windows XP using Visual Basic programming and commands accessed through the Windows API: the program runs as an inactive icon in the notification area and has its own task manager, in which the displayed applications correspond to the applications running on each desktop. The design provides 10 desktops per user, which exceeds the four desktops that Linux displays by default. In testing, the application provides 10 desktops that can be created and run on Windows XP, but at the design stage the application can create an unlimited number of desktops, depending entirely on the number allocated according to need.
Keywords: Multi Desktop, Windows, Linux, Notification Area, Inactive Icon
INTRODUCTION
Multi-desktop is not something new for some computer users, especially those who are already familiar with the Linux operating system and its derivatives. For the Windows operating system, however, multi-desktop support is still rare and hard to find, especially when it comes to building it yourself.
The idea started from the habit of always closing the running application before opening a new one. The reason was the clutter of many open application windows on the desktop, together with a taskbar filled with the names of the currently active applications, which looked disorganized and confusing, not to mention the limited memory and processor of the computer, which can cause it to run slowly. Having also installed Mandrake and Red Hat Linux in a dual-boot configuration, the author learned that the Linux desktop can provide four desktops for one user, with the ability to remember which open applications belong to which desktop.
Because of familiarity with the Windows operating system and, as it happens, with the capabilities of Visual Basic, the author began to think about how to apply the Linux multi-desktop technique on Windows, with the help of the Windows API library. This was the initial interest and motivation to create a simple application that runs quickly on Windows and helps overcome the confusion caused by the many active applications in Windows; the application is called JaMuDeWi (Junaidi Multi Desktop Windows).
In short, JaMuDeWi is a simple program written in the Visual Basic programming language that uses the Windows API to access several Windows functions, and it runs on top of the Windows operating system (in this case Visual Basic 6 and Windows XP). When the program is executed, it sits as an inactive icon in the notification area and has its own task manager, in which the displayed applications correspond to the applications running on each desktop. The design provides 10 desktops per user, which exceeds the four desktops that Linux displays by default. To move between desktops, hover the mouse over the JaMuDeWi icon in the notification area and right-click to display the desktop menu. The application can be closed in the same way by selecting Exit, after which the user chooses whether the active applications on the other desktops will be closed or moved to the main desktop.
Discussion
Engineering a multi-desktop on Windows XP by applying the Linux multi-desktop concept can be done, in its design, with the Visual Basic programming language and the Windows API. The Windows API is heavily involved in this application: besides switching between desktops, it is also needed to hide the active applications of a desktop that is not selected and to display the active applications of the selected desktop. It can likewise hide the applications of inactive desktops so that they do not appear in the task manager of the active desktop. Figure 1 shows the declarations required to access the Windows API; note that the figure is a snippet of the script that accesses the Windows API, which is then processed as needed.
Capability: Running with an Inactive Notification Area Icon
Figure 2. Coding design screen for accessing the Windows API
The ability to create and run an inactive icon in the notification area is intended so that the multi-desktop application remains accessible from every selected desktop. Beyond that, the program only has a menu interface as its main form, a dialog interface to determine the status of the open applications when the user wants to exit, and a screen interface and dialog to convey information. See Figure 2 for a picture of the notification area.
Application of the Multi-Desktop Concept
Basically, the multi-desktop concept on Windows applies the Linux multi-desktop concept, which provides 4 active desktops for a single user; this program, however, is designed to provide 10 active desktops for a single user and can be extended as desired. Several basic capabilities must be built into the design to support a multi-desktop Windows, as described below.
Creating Array Capability
To keep track of every application on each desktop, so that applications can be hidden when their desktop is not selected and displayed when it is, a one-dimensional array variable is needed to hold the desktops, and a two-dimensional array variable is needed to hold the information about the applications active on each desktop.
Capability of Accessing the Windows API
The Windows API declarations used for this purpose are given in the module (mdlJaMuDeWi) in the Implementation section below.
Capability of Hiding and Showing Applications
Figure 3. Script snippet for hiding applications
This capability plays an equally important role, because basically all applications remain open and active but are either displayed or hidden entirely. It can be applied because each application's information is stored in the provided array according to the desktop on which the application was first run. The capability gives the impression that each desktop has its own applications, whereas in reality the applications are simply hidden or shown, and which ones are shown or hidden is closely tied to the desktop location. See the script snippet for hiding applications in Figure 3.
Array Manipulation and Task Manager Capability
Figure 4. Script snippet declaring the arrays
The ability to manipulate the arrays is necessary because they are what store each desktop together with its application information, so that the implementation can display the applications belonging to the active desktop by reading the information stored at each array address. The task manager manipulation capability means that hidden applications are not visible in the task manager, while displayed applications are visible in the task manager of the desktop on which it is opened. Figure 4 shows the script snippet that creates the arrays.
The capabilities above absolutely must be fulfilled for the intended multi-desktop application to run perfectly. Each capability is interrelated with the others. For example, hiding and showing applications according to the desktop location is carried out by using the information stored in the arrays, and the information in the arrays is realized thanks to the array manipulation capability.
IMPLEMENTATION
The following exposition presents the full source code of the JaMuDeWi (JunAidi MUlti DEsktop WIndows) program that was designed (Figure 5).
Figure 5. JaMuDeWi Icon Menu
To create the multi-desktop Windows application, which we call JAMU DEWI, one project is used with two forms and one module. The two forms consist of a form for selecting the desktop and a form for the exit dialog. The first form, intended for selecting the desktop to run, consists of one main menu with 10 sub-menus offering desktops 1 through 10 and one sub-menu for opening the exit dialog of the JAMU DEWI program. The second form, intended for the exit dialog, consists of one label containing the question about the action to take after exiting and one combo box containing the choices Yes (Ya) and No (Tidak) as the answers to that question; there are also two command buttons to capture the final decision of the exit process. The first button states that the exit process is cancelled, ignoring the choice in the combo box, while the second button states that the user agrees to exit the JAMU DEWI application, taking the combo box choice into account. The Yes choice in the combo box executes the command to move all applications running on all desktops to the main desktop, while the second choice closes all active applications.
Design of JaMuDeWi (JunAidi Multi Desktop Windows)
The programming language used is Visual Basic, with its ability to access the Windows API. The application needs a main form for the menu, an exit form (Figure 6) as the dialog medium for determining the follow-up action to take after exiting, a form for displaying information, and a module containing code needed by the programmer.
JaMuDeWi Main Form (frmJaMuDeWi)
Figure 6. JaMuDeWi Information Screen
Consider the following code: several variable declarations and several procedures are designed in the code area of the main form.
'— frmJaMuDeWi
'— procedure executed when the program is started
Private Sub Form_Load()
    '— hide this form
    Me.Hide
    '— variables holding the active desktop information
    intDesktopAktif = 1
    intDesktopTerakhir = 1
    '— configure the program to run as a system tray icon on the toolbar
    With NotifyIcon
        .cbSize = Len(NotifyIcon)
        .hWnd = Me.hWnd
        .uId = vbNull
        .uFlags = NIF_ICON Or NIF_TIP Or NIF_MESSAGE
        .uCallBackMessage = WM_MOUSEMOVE
        .hIcon = Me.Icon
        .szTip = "Klik Kanan - JunAidi MUlti DEsktop WIndows" & vbNullChar
    End With
    Shell_NotifyIcon NIM_ADD, NotifyIcon
End Sub
The code above is the first procedure executed when the program is launched and resides in the main form. It makes the program run hidden and shows a tray icon in the bottom-right corner. The Me.Hide command hides the form, and the With NotifyIcon ... End With block makes the program run in the system tray. There are also integer variable declarations to hold the number of the selected desktop and to record which desktop is currently active among the selected desktops.
'— procedure executed when the mouse is moved over the program icon
Private Sub Form_MouseMove(Button As Integer, Shift As Integer, X As Single, Y As Single)
    '— keep the program running minimized
    '— activate the program with a right mouse click
    Dim Result As Long
    Dim Message As Long
    If Me.ScaleMode = vbPixels Then
        Message = X
    Else
        Message = X / Screen.TwipsPerPixelX
    End If
    If Message = WM_RBUTTONUP Then
        Result = SetForegroundWindow(Me.hWnd)
        Me.PopupMenu Me.mnu_1
    End If
End Sub
The code above is part of the main form and serves as the procedure that catches mouse movement when the mouse cursor is directly over the JaMuDeWi tray icon. The procedure shows a short message describing the program and handles the use of the right mouse button. WM_RBUTTONUP is used to display the menu when the right mouse button is released after being pressed.
'— procedure executed when the program displays the form
Private Sub Form_Resize()
    '— hide the form when it runs minimized
    If frmJaMuDeWi.WindowState = vbMinimized Then
        frmJaMuDeWi.Hide
    End If
End Sub
The code above is part of the main form and serves as the procedure that keeps the program running minimized and hidden so that the system tray works.
'— procedure executed when exiting the program
Private Sub Form_Unload(Cancel As Integer)
    '— remove the system tray icon from the toolbar
    Shell_NotifyIcon NIM_DELETE, NotifyIcon
End Sub
The code above is part of the main form and serves as the procedure that removes the system tray icon when exiting the program.
'— menu handling for accessing each desktop
'— menu procedure for selecting desktop J / 1
Private Sub mnu1_Click()
    funPilihDesktop intDesktopAktif, 1
End Sub
The code above is part of the main form and serves as the procedure that calls the desktop-selection function, passing the currently active desktop number held in the variable together with the number of desktop 1, which is activated according to menu choice number 1.
'— menu procedure for selecting desktop U / 2
Private Sub mnu2_Click()
    funPilihDesktop intDesktopAktif, 2
End Sub
'— menu procedure for selecting desktop N / 3
Private Sub mnu3_Click()
    funPilihDesktop intDesktopAktif, 3
End Sub
'— menu procedure for selecting desktop A / 4
Private Sub mnu4_Click()
    funPilihDesktop intDesktopAktif, 4
End Sub
'— menu procedure for selecting desktop I / 5
Private Sub mnu5_Click()
    funPilihDesktop intDesktopAktif, 5
End Sub
'— menu procedure for selecting desktop D / 6
Private Sub mnu6_Click()
    funPilihDesktop intDesktopAktif, 6
End Sub
'— menu procedure for selecting desktop I / 7
Private Sub mnu7_Click()
    funPilihDesktop intDesktopAktif, 7
End Sub
'— menu procedure for selecting desktop J / 8
Private Sub mnu8_Click()
    funPilihDesktop intDesktopAktif, 8
End Sub
'— menu procedure for selecting desktop U / 9
Private Sub mnu9_Click()
    funPilihDesktop intDesktopAktif, 9
End Sub
'— menu procedure for selecting desktop N / 10
Private Sub mnu10_Click()
    funPilihDesktop intDesktopAktif, 10
End Sub
'— exit menu procedure that displays the exit action dialog
Private Sub mnuExit_Click()
    Load frmKeluar
    frmKeluar.Show
End Sub
The code above is part of the main form and provides the procedures that activate the desired desktop, each under its own name. Every menu item that is pressed runs the command in its menu procedure, calling the desktop-selection function and passing the currently active desktop number held in the variable together with the number of the desktop to be activated according to the chosen menu item.
Exit Form (frmKeluar)
Figure 7. JaMuDeWi dialog screen for the exit action
Besides the main form, another form must be prepared for the exit dialog screen of the program (Figure 7). It contains one label with the question about the action to take after exiting the application, one combo box offering the alternative actions, and two command buttons.
'— procedure for the exit button that stops the program
Private Sub cmdKeluar_Click()
    '— variables for counting desktops and windows
    Dim intJumlahDesktop As Integer
    Dim intJumlahWindow As Integer
    '— action performed when exiting
    If cboAksiKeluar.Text = "Ya" Then
        '— all active applications are moved to the main desktop
        intJumlahDesktop = 1
        While intJumlahDesktop < 10
            intJumlahWindow = 0
            While intJumlahWindow < aryJumlahBukaWindows(intJumlahDesktop)
                RetVal = ShowWindow(aryBukaWindows(intJumlahDesktop, intJumlahWindow), SW_SHOW)
                intJumlahWindow = intJumlahWindow + 1
            Wend
            intJumlahDesktop = intJumlahDesktop + 1
        Wend
        Shell_NotifyIcon NIM_DELETE, NotifyIcon
        End
    ElseIf cboAksiKeluar.Text = "Tidak" Then
        '— all active applications are closed
        intJumlahDesktop = 2
        While intJumlahDesktop < 10
            intJumlahWindow = 0
            While intJumlahWindow < aryJumlahBukaWindows(intJumlahDesktop)
                RetVal = SendMessage(aryBukaWindows(intJumlahDesktop, intJumlahWindow), WM_CLOSE, 0, 0)
                intJumlahWindow = intJumlahWindow + 1
            Wend
            intJumlahDesktop = intJumlahDesktop + 1
        Wend
        Shell_NotifyIcon NIM_DELETE, NotifyIcon
        End
    End If
End Sub
The code above is part of the exit form and acts as the dialog handler that determines what action is taken when the program is exited. It declares several desktop and window counting variables and contains the loops that either move the applications open on every desktop back to the main desktop or close them.
'— procedure for pressing the cancel button
Private Sub cmdBatal_Click()
    '— cancel: close the exit dialog
    Unload Me
End Sub
The code above is part of the exit form and is the action performed when the provided cancel button is pressed.
'— procedure executed when the exit form is run
Private Sub Form_Load()
    '— initial window positioning (always on top)
    SetWindowPos Me.hWnd, HWND_TOPMOST, 0, 0, 0, 0, SWP_NOMOVE Or SWP_NOSIZE
    '— fill the combo box with the exit action choices
    cboAksiKeluar.AddItem "Ya"
    cboAksiKeluar.AddItem "Tidak"
    cboAksiKeluar.Text = "Ya"
End Sub
The code above is part of the exit form and is executed when the exit form is first loaded.
JaMuDeWi Module (mdlJaMuDeWi)
Figure 8. Code snippet of the API declarations
To maximize the program and run it as intended, a module is prepared as the means of communication with the Windows API. It calls several API functions through public declarations, such as ShowWindow, GetWindow, SetWindowPos, SetForegroundWindow, PostMessage, SendMessage and Shell_NotifyIcon, and others as needed. The important point is that every API that is called refers to a particular library in order to use its functions, for example the user32 library and so on. The module code that uses the Windows API follows; a snippet of the module script can be seen in Figure 8.
'— declarations of the Windows API functions used
Public Declare Function ShowWindow _
    Lib "user32" (ByVal hWnd As Long, ByVal nCmdShow As Long) As Long
Public Declare Function GetWindow _
    Lib "user32" (ByVal hWnd As Long, ByVal wCmd As Long) As Long
Public Declare Function GetWindowWord _
    Lib "user32" (ByVal hWnd As Long, ByVal wIndx As Long) As Long
Public Declare Function GetWindowLong _
    Lib "user32" Alias "GetWindowLongA" (ByVal hWnd As Long, ByVal wIndx As Long) As Long
Public Declare Function GetWindowText _
    Lib "user32" Alias "GetWindowTextA" (ByVal hWnd As Long, ByVal lpString As String, _
    ByVal nMaxCount As Long) As Long
Public Declare Function GetWindowTextLength _
    Lib "user32" Alias "GetWindowTextLengthA" (ByVal hWnd As Long) As Long
Public Declare Function SetWindowPos _
    Lib "user32" (ByVal hWnd As Long, ByVal hWndInsertAfter As Long, _
    ByVal X As Long, ByVal Y As Long, ByVal cx As Long, ByVal cy As Long, _
    ByVal wFlags As Long) As Long
Public Declare Function SetForegroundWindow _
    Lib "user32" (ByVal hWnd As Long) As Long
Public Declare Function PostMessage _
    Lib "user32" Alias "PostMessageA" (ByVal hWnd As Long, ByVal wMsg As Long, _
    ByVal wParam As Long, lParam As Any) As Long
Public Declare Function SendMessageByString _
    Lib "user32" Alias "SendMessageA" (ByVal hWnd As Long, ByVal wMsg As Long, _
    ByVal wParam As Long, ByVal lParam As String) As Long
Public Declare Function SendMessage _
    Lib "user32" Alias "SendMessageA" (ByVal hWnd As Long, ByVal wMsg As Long, _
    ByVal wParam As Integer, ByVal lParam As Long) As Long
Public Declare Function Shell_NotifyIcon _
    Lib "shell32" Alias "Shell_NotifyIconA" (ByVal dwMessage As Long, _
    pnid As NOTIFYICONDATA) As Boolean
The declarations above are part of the JaMuDeWi module and make the Windows API functions available to the program.
'— public data type declaration
Public Type NOTIFYICONDATA
    cbSize As Long
    hWnd As Long
    uId As Long
    uFlags As Long
    uCallBackMessage As Long
    hIcon As Long
    szTip As String * 64
End Type
The type declaration above is part of the module and describes the notification icon data passed to Shell_NotifyIcon.
'— public constant declarations
Public Const SWP_NOMOVE = 2
Public Const SWP_NOSIZE = 1
Public Const HWND_TOPMOST = -1
Public Const HWND_NOTOPMOST = -2
Public Const GW_HWNDFIRST = 0
Public Const GW_HWNDNEXT = 2
Public Const GWL_STYLE = (-16)
Public Const NIM_ADD = &H0
Public Const NIM_MODIFY = &H1
Public Const NIM_DELETE = &H2
Public Const NIF_MESSAGE = &H1
Public Const NIF_ICON = &H2
Public Const NIF_TIP = &H4
Public Const SW_HIDE = 0
Public Const SW_MAXIMIZE = 3
Public Const SW_SHOW = 5
Public Const SW_MINIMIZE = 6
Public Const WM_CLOSE = &H10
Public Const WM_MOUSEMOVE = &H200
Public Const WM_LBUTTONDOWN = &H201
Public Const WM_LBUTTONUP = &H202
Public Const WM_LBUTTONDBLCLK = &H203
Public Const WM_RBUTTONDOWN = &H204
Public Const WM_RBUTTONUP = &H205
Public Const WM_RBUTTONDBLCLK = &H206
Public Const WS_VISIBLE = &H10000000
Public Const WS_BORDER = &H800000
The constants above are part of the module and provide the Windows API flag and message values used throughout the program.
'— arrays holding the information for the 10 desktops
'— a 2-dimensional array holding the applications open on each desktop
Public aryBukaWindows(0 To 10, 0 To 1023) As Long
'— a 1-dimensional array holding the number of windows open on each desktop
Public aryJumlahBukaWindows(0 To 10) As Long
'— variables holding the desktop numbers
Public intDesktopAktif As Integer
Public intDesktopTerakhir As Integer
'— type variable for the notification icon
Public NotifyIcon As NOTIFYICONDATA
Public IsTask As Long
The declarations above are part of the module and define the arrays and variables that hold the desktop and application information.
'— function handling desktop selection
Public Function funPilihDesktop(intDesktopAsal As Integer, intDesktopTujuan As Integer)
    '— variables for handling windows and desktops
    Dim hwndPilihWindows As Long
    Dim intPanjang As Long
    Dim strJudulWindow As String
    Dim intJumlahWindow As Integer
    '— whenever switching desktops, check the task list for every active window
    '— show it if it belongs to the selected desktop, otherwise hide it
    IsTask = WS_VISIBLE Or WS_BORDER
    intJumlahWindow = 0
    hwndPilihWindows = GetWindow(frmJaMuDeWi.hWnd, GW_HWNDFIRST)
    Do While hwndPilihWindows
        If hwndPilihWindows <> frmJaMuDeWi.hWnd And TaskWindow(hwndPilihWindows) Then
            intPanjang = GetWindowTextLength(hwndPilihWindows) + 1
            strJudulWindow = Space$(intPanjang)
            intPanjang = GetWindowText(hwndPilihWindows, strJudulWindow, intPanjang)
            If intPanjang > 0 Then
                If hwndPilihWindows <> frmJaMuDeWi.hWnd Then
                    RetVal = ShowWindow(hwndPilihWindows, SW_HIDE)
                    aryBukaWindows(intDesktopAsal, intJumlahWindow) = hwndPilihWindows
                    intJumlahWindow = intJumlahWindow + 1
                End If
            End If
        End If
        hwndPilihWindows = GetWindow(hwndPilihWindows, GW_HWNDNEXT)
    Loop
    aryJumlahBukaWindows(intDesktopAsal) = intJumlahWindow
    '— bring the selected desktop to the front
    '— obtained from the array information recorded for that desktop
    '— by default the array contents are empty
    intJumlahWindow = 0
    While intJumlahWindow < aryJumlahBukaWindows(intDesktopTujuan)
        RetVal = ShowWindow(aryBukaWindows(intDesktopTujuan, intJumlahWindow), SW_SHOW)
        intJumlahWindow = intJumlahWindow + 1
    Wend
    '— switch from the currently active/selected desktop to the newly chosen desktop
    intDesktopTerakhir = intDesktopAsal
    intDesktopAktif = intDesktopTujuan
End Function
The code above is part of the module; it is the desktop-selection function, which stores and hides the windows of the desktop being left and then shows the windows recorded for the destination desktop.
Function TaskWindow(hwCurr As Long) As Long
    '— window handling for task manager purposes
    Dim lngStyle As Long
    lngStyle = GetWindowLong(hwCurr, GWL_STYLE)
    If (lngStyle And IsTask) = IsTask Then TaskWindow = True
End Function
The code above is needed to manage the task manager. Each desktop has its own task manager list; in other words, applications that are not active on the desktop in question are hidden, while applications opened on the desktop where the task manager is activated are displayed.
CONCLUSION
In testing, this application provides 10 desktops that can be created and run on Windows XP; at the design stage, however, the application can create an unlimited number of desktops, depending entirely on the number allocated according to need.
The working principle of this Windows multi-desktop is to prepare a one-dimensional array to hold the desktop information and a two-dimensional array to hold the information about the applications active on each desktop, and then to manipulate them by hiding or showing the active applications according to whether their desktop is selected or not.
Using Windows API calls in Visual Basic programming to access several Windows functions can maximize the capability of Visual Basic itself, so the Windows multi-desktop application, as an implementation of the Linux multi-desktop concept, has been able to prove that Windows can in fact be pushed further than it usually is.
REFERENCES
Junaidi (2006). Memburu Virus RontokBro Dan Variannya
Dalam Membasmi Dan Mencegah. Cyber Raharja, 5(3), 8299.
(2008). Rekayasa Teknik Pemrograman Pencegahan Dan
Perlindungan Dari Virus Lokal Menggunakan API Visual
Basic. CCIT, 1(2), 134-153.
(2008). Teknik Membongkar Pertahanan Virus Lokal
Menggunakan Visual Basic Script dan Text Editor Untuk
Pencegahan. CCIT, 1(2), 173-187.
Rahmat Putra (2006). Innovative Source Code Visual Basic, Jakarta: Dian Rakyat.
Slebold, Dianne (2001). Visual Basic Developer Guide to
SQL Server. Jakarta: Elex Media Komputindo.
Stallings, William (1999), Cryptography and Network Security. Second Edition. New Jersey: Prentice-Hall.Inc
Tri Amperiyanto (2002). Bermain-main dengan Virus
Macro. Jakarta: Elex Media Komputindo.
(2004). Bermain-main dengan Registry Windows. Jakarta:
Elex Media Komputindo.
Wardana (2007). Membuat 5 Program Dahsyat di Visual
Basic 2005. Jakarta : Elex Media Komputindo.
Wiryanto Dewobroto (2003). Aplikasi Sains dan Teknik
dengan Visual Basic 6.0. Jakarta: Elex Media Komputindo.
Paper
Saturday, August 8, 2009
16:00 - 16:20
Room AULA
Measure Delone and Mclean Model of Information System Effectiveness
Academic Performance
Padeli, Sugeng Santoso
Computer Accountancy Department, AMIK RAHARJA INFORMATIKA, [email protected]
Information Engineering Department, STMIK RAHARJA TANGERANG, [email protected]
ABSTRACT
The IT world is always up to date and keeps innovating the world of education, so the information in circulation becomes ever more complex. One of the most important aspects of education in a university is producing information that is fast, precise, accurate and economical. No organization or group is exempt from performance measurement, especially with the increasing demand for quality education in terms of the discipline of staff, lecturers and students. The academic unit of Raharja University needs a method that can answer all these needs in the form of information that is fast, accurate and appropriate for decision making, since such information is one of the Critical Success Factors (CSF) of an educational institution. This study, entitled "Measuring Delone and Mclean Model of Information System Effectiveness Academic Performance", discusses the effectiveness of an information system that can monitor the performance of each unit, such as the performance of the heads of study programs, of lecturers and of active students; these data can be detected from user perceptions and behavior in use at Raharja University. The study aims to determine the factors that affect whether the academic performance information system is accepted as effective by its users, and to examine the relationships between the factors that influence the acceptance and effectiveness of the academic information system. The DeLone and McLean model is used. An information system is expected to serve its function effectively; effectiveness indicates the success of information system development. The success of an information system is marked by user satisfaction, but this success will not mean much if applying the system cannot improve the performance of individuals and the organization. Statistical testing is performed with Structural Equation Modeling (SEM).
Keywords: Effectiveness, Critical Success Factor, user satisfaction
Introduction
E-commerce and the Internet are the two main components of the digital economy era. The Internet is the backbone and enabler of e-commerce, which is a collection of processes and business models based on the network. Both grow extraordinarily fast, are mutually related, and affect a variety of organizations in very diverse ways [BIDGOLI 2004].
Recently, IT has advanced dramatically in both capability and affordability, and its ability to capture, store, process, retrieve and communicate data and knowledge is well recognized. Thus, many organizations develop systems designed specifically to facilitate knowledge management.
The goal of an effective academic information system is to display information that is useful to others as a rating or reference in decision making. Academic information system performance management is primarily an effort to re-automate the academic performance management process by installing software designed specifically for it. Its display is called a dashboard; although it still relies on shape and colour, its simple design lets the user quickly identify, for each unit, whether performance is in a good or a weak position.
The performance management process in education administration is easily lost in the many piles of paper of our administrative processes. Imagine how many pieces of paper must be printed and managed to monitor the monthly performance of each division or to take a policy decision when a semester of lectures is complete. When evaluating lectures at Raharja, it usually takes a long time because the input data must first be analysed. Automation through the performance dashboard streamlines all of that: the process becomes much more efficient, without the piles of paper stacking up at every corner of our desks.
Whether an information technology system succeeds in an organization depends on several factors. Based on theories and the results of previous research they had examined, DeLone and McLean (1992) developed a complete but simple (parsimonious) model, which they call the DeLone and McLean model of information systems success (D&M IS Success Model).
Problem Analysis
After analysing the facts of the development of information technology, especially in the field of information systems, the authors identified the technologies being applied and framed the research problem of this study.
The model proposed for testing is the DeLone and McLean model of IS success (D&M IS Success Model), which reflects the interdependence of six measures of information system success. The six elements or factors are:
1. System quality
2. Information quality
3. Use
4. User satisfaction
5. Individual impact
6. Organizational impact
The problems examined in this research are:
1. Which factors contribute significantly to the effectiveness of the academic performance information system at Raharja University.
2. How well the model tested in this research explains effectiveness at the academic level.
Discussion
Behavioural aspects in the acceptance of information technology
([Syria 1999], 17) The use of information technology (IT) by a company is determined by many factors, one of which is the characteristics of the IT users. Differences in the characteristics of IT users are also influenced by aspects of perception, attitude and behaviour in accepting and using IT. A system user is a human being whose behaviour has a psychological dimension, which makes the behavioural aspect of users an important factor for everyone using information technology.
Use as a measure of the system
According to DeLone and McLean, many researchers have used system use as a measure of success. The impact caused by a system arises only if the system is used, and the system should be genuinely useful to be successful ([Seddon 1994], 92). If use is forced, the frequency of use of the system and of the information presented will decline, so success is not achieved ([Seddon 1994], 93). The perceived benefit of an information system is the degree to which a person believes that using the system will improve his or her performance ([Seddon 1994], 93).
Use focuses on actual use, the extent of use in the job, and the number of information systems used in the job ([Almutairi 2005], 114). Usage is defined as the result of the user's interaction with information ([DeLone 1992], 62).
The DeLone and McLean Information System Success Model
Measuring the success or effectiveness of information systems is essential for understanding the value and power of management actions and of investment in information systems. DeLone and McLean stated that (1) system quality measures technical success, (2) information quality measures IS success, and (3) system use, user satisfaction, individual impact and organizational impact measure the effectiveness of that success. Shannon and Weaver stated that (1) the technical level of communication measures how accurately and efficiently the communication system conveys information, (2) the semantic level measures how well the information in the IS conveys the intended meaning, and (3) the effectiveness level measures the influence of the information on the recipient/user ([DeLone 2003], 10).
Based on the DeLone-McLean and Shannon-Weaver statements above, it can be concluded that the six success dimensions are interrelated. The model treats the information system as a process: it is first built with several features that can be grouped into levels of system and information quality; next, users experience these features and are either satisfied or dissatisfied with the system or with the information it produces; the use of the system and its information then has an impact on each individual in his or her work behaviour, and these individual impacts together have an impact on the organization ([DeLone 2003], 11). This description is illustrated in Figure 2-1.
Figure 2-1 Information Systems Success Model DeLone
and McLean
In the DeLone and McLean Information System Success Model, system quality and information quality, individually or together, affect both use and user satisfaction ([Livari 2005], 9). This indicates that use and user satisfaction are related and directly affect individual impact, which ultimately affects the organization. DeLone and McLean state that system quality characteristics are obtained from the information system itself, while information quality characteristics are obtained from the information it produces ([Livari 2005], 9).
Development of the path diagram
After the theoretical model is built, it is described in a path diagram. Causal relations are usually expressed in the form of equations, but in SEM (as operated in AMOS) the causal relations are depicted in a path diagram; the program then converts the diagram into equations, and the equations are estimated.
The purpose of constructing the path diagram is to make it easier for the researcher to see the causal relations to be tested. Relations between constructs are expressed with arrows; an arrow pointing from one construct to another shows a causal relation.
In this research, the path diagram is constructed as shown in the figure below.
DeLone and McLean describe individual impact as an indication that the information system has given the user a better understanding of the decision context, increased decision-making productivity, produced a change in user activity, or changed the decision maker's perception of the usefulness and importance of the information system ([Livari 2005], 9).
Display screen
The prototype shown in Figure 2.1 displays the effectiveness of the academic performance information system on the Raharja University website: the Head of Department can view teaching activities, students active in lectures, lecturers actively filling their classes, and the completeness of the teaching materials provided at each meeting. The Head of Department can also oversee active students, the improvement of lecturer quality, and the teaching and learning process (PBM).
Figure 2-2. The path diagram of the research model
Conversion of the path diagram into equations. After steps 1 and 2 are done, the researcher can begin to convert the model specification into a series of equations, namely:
Structural equations
These equations are formulated to express the causal relations between the different constructs, formed from the exogenous and endogenous latent variables of the measurement model. Their form is:
P  = γ11 KS + γ12 KI + β12 KP + ζ1   (1)
KP = γ21 KS + γ22 KI + β21 P + ζ2    (2)
DI = β31 P + β32 KP + ζ3             (3)
Measurement model specification equations
The researcher determines which variables measure which constructs, together with the matrices showing the hypothesized correlations between the constructs or variables. The indicator equations for the exogenous and endogenous latent variables are:
Measurement of the exogenous indicator variables:
X1 = λ11 KS + δ1
X2 = λ21 KS + δ2
X3 = λ31 KS + δ3
X4 = λ41 KS + δ4
X5 = λ51 KS + δ5
X6 = λ61 KI + δ6
X7 = λ71 KI + δ7
X8 = λ81 KI + δ8
X9 = λ91 KI + δ9
Measurement of the endogenous indicator variables:
y1 = λ11 P + ε1
y2 = λ21 P + ε2
y3 = λ31 P + ε3
y4 = λ41 KP + ε4
y5 = λ51 KP + ε5
y6 = λ62 KP + ε6
y7 = λ72 DI + ε7
y8 = λ82 DI + ε8
y9 = λ93 DI + ε9
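As a rough illustration only (not the authors' actual AMOS/LISREL setup), the measurement and structural model above could be specified in Python with the semopy package, which uses lavaan-like syntax. The indicator names, the data file and every other detail below are assumptions.

# Hedged sketch: a DeLone & McLean style SEM specification with semopy.
# Indicator names and the data file are illustrative assumptions.
import pandas as pd
import semopy

model_desc = """
KS =~ X1 + X2 + X3 + X4 + X5
KI =~ X6 + X7 + X8 + X9
P  =~ y1 + y2 + y3
KP =~ y4 + y5 + y6
DI =~ y7 + y8 + y9
P  ~ KS + KI + KP
KP ~ KS + KI + P
DI ~ P + KP
"""

data = pd.read_csv("questionnaire_responses.csv")   # assumed survey data
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())    # parameter estimates, standard errors, p-values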
An empirical application of the DeLone and McLean model in the Kuwaiti private sector found that (1) 29% of the variance in information quality is accounted for by system quality; the F test is significant, with F = 38.36 at an alpha level < 0.01, and the positive beta value of 0.53 indicates that information quality has a significant positive relation with system quality, so statistically the relationship between system quality and information quality is positive. (2) Information quality and system quality significantly affect user satisfaction: 43% of user satisfaction is accounted for by the two variables; the F test is significant, with F = 34.98 at alpha < 0.01, and the positive beta values of 0.41 for information quality and 0.31 for system quality indicate that both variables have a significant positive influence on user satisfaction at p < 0.01. (3) 9% of individual impact is accounted for by IS use; the F test is significant, with F = 9.90 at alpha < 0.01, and the positive beta value of 0.32 indicates that IS use has a significant positive impact on individuals ([Almutairi 2005], 116).
Conclusion
Based on the research results and their interpretation, the following conclusions can be drawn. Testing the relationship between system quality and user satisfaction shows that system quality has a significant influence on user satisfaction, so when system quality is improved, user satisfaction will also increase. Testing the relationship between information quality and user satisfaction shows that information quality has a significant relationship with user satisfaction, so when information quality is improved, user satisfaction will also increase. Testing the relationship between use and user satisfaction, in both directions, shows that the two have a significant mutual influence: when one variable increases, the other will also increase. Testing the relationship between use and individual impact shows that use has a significant effect on individual impact, so when use increases, individual impact will also increase. Testing the relationship between user satisfaction and individual impact shows that user satisfaction has a significant influence on individual impact, so when user satisfaction improves, individual impact will also increase.
REFERENCES
[Almutairi 2005] Almutairi, Helail, “An Empirical Application of the DeLone and McLean Model in the Kuwaiti
Private Sector”, Journal of Computer Information Systems,
ProQuest Computing, 2005.
[Banker 1998] Banker, Rajiv D., et.al., “Software Development Practices, Software Complexity and Software Maintenance Performance: A Field Study”, Journal of Management Science, ABI / Inform Global, 1998.
[DeLone 1992] DeLone, William H. and Ephraim R. McLean,
“Information Systems Success: The Quest for dependent
Variable”, Journal of Information Systems Research, The
Institute of Management Sciences, 1992.
[DeLone 2003] DeLone, William H. and Ephraim R. McLean,
“The DeLone and McLean Model of Information Systems
Success: A Ten-Year Update”, Journal of Management Information Systems, ME Sharpe Inc., 2003.
[Doll 1994] Doll, William J., et.al., “A Confirmatory Factor
Analysis of the End-User Computing Satisfaction Instrument,” MIS Quarterly, University of Minnesota, 1994.
[Ghozali 2004] Imam Ghozali, "Structural Equation Model, Theory, Concepts and Applications with the Program Lisrel 8.54", Publisher Undip, Semarang, 2004.
[Goodhue 1995] Goodhue, Dale L. and Ronald L. Thompson, “Task-Technology Fit and Individual Performance”,
MIS Quarterly, University of Minnesota, 1995.
[Haavelmo 1944] Haavelmo, T., The Probability Approach
in Econometrica. Econometrica, 1944.
[Hair 1998] Hair, J. F., Multivariate Data Analysis, New Jersey, Prentice Hall, 1998.
[Hamilton 1981] Hamilton, and Norman L. Scott Chervany,
“Evaluating Information System Effectiveness - Part 1:
Comparing Evaluation Approaches,” MIS Quarterly, University of Minnesota, 1981.
[Hayes 2002] Hayes, Mary, “Quality First”, Information
Week, 2002.
[Ishman 1996] Ishman, Michael D., “Information measuring Success at the Individual Level in Cross-Cultural Environments”, Information Resources Management Journal,
ABI / Inform Global, 1996.
[Jerry81] Fitzgerald, Jerry, Ardra F. Fitzgerald and Warren D. Stallings, Jr., Fundamentals of Systems Analysis (second edition, New York: John Wiley & Sons, 1981).
[Joreskog 1967] Joreskog, K. G., Some Contribution to
Maximum Likelihood Factor Analysis, Psychometrika, 1967.
[Kim 1988] Kim, Chai and Stu Westin, “Software Maintainability: Perceptions of EDP Professionals,” MIS Quarterly,
ABI / Inform Global, 1988.
[Lee 1995] Lee, Sang M., et.al., “An Empirical Study of the
Relationships among End-User Information Systems Acceptance, Training, and Effectiveness”, Journal of Management Information Systems, ABI / Inform Global, 1995.
[Lin 2004] Lin, Fei Hui and Jen Her Wu, “An Empirical
Study of End-User Computing Acceptance Factors in Small
and Medium Enterprises in Taiwan: Analyzed by Structural Equation Modeling”, The Journal of Computer Information Systems, ABI / Inform Global , 2004.
[O’Brian 2003] James A. O’Brien, “Introduction to Information System”, Eleventh Edition, Mc Graw Hill, 2003
[O’Brien 2005] O’Brien, James A., Introduction to Information Systems, 12th ed., McGraw-Hill, New York, 2005.
[Pasternack 1998] Pasternack, Andrew, “Hung Up on Response Time”, Journal of Hospitals & Health Networks,
ABI / Inform Global, 1998.
[Rai 2002] Rai, Arun, et.al., “Assessing the Validity of IS
Success Models: An Empirical Test and Theoritical Analysis”, Journal of Information Systems Research, ProQuest
Computing, 2002.
[Tonymx 60] Neuschel, R. F., Management by System (second edition, New York: McGraw-Hill, 1960).
[Riduwan 2004] Riduwan, Method & Technique Developing Thesis, First Printed, Alfabeta, Bandung, 2004.
[Sandjaja 2006] Sandjaja, B, and Albertus Heriyanto, Research Guide, Reader Achievements, Jakarta, 2006.
[Satzinger 1998] Satzinger, John W. and Lorne Olfman,
“User Interface Consistency Across End-User Applications: The Effects on Mental Models”, Journal of Management Information Systems, ABI / Inform Global, 1998.
[Seddon 1994] Seddon, Peter B. and Min Yen Kiew, “A
Partial Test and Development of DeLone and McLean’s
Model of IS Success”, University of Melbourne, 1994.
[Livari 2005] Livari, Juhani, “An Empirical Test of the
DeLone-McLean Model of Information System Success,”
The Database for Advances in Information Systems,
ProQuest Computing, 2005.
[Lucas 1975] Lucas, Henry C.Jr., “Performance and the Use
of an Information System”, Journal of Management Science, Application Series, 1975.
[Ngo 2002] Ngo, David Chek Ling, et.al., “Evaluating Interface Esthetics,” Journal of Knowledge and Information
Systems, Verlag London Ltd.., 2002.
[Nur 2000] Nur Indriantoro, Computer Anxiety of Influence Skills Lecturer In Use of Computers, Journal Accounting and Auditing (JAAI) Vol.3 No.1, FE UII, Yogyakarta,
2000.
Paper
Saturday, August 8, 2009
16:00 - 16:20
Room L-210
The Concept of Model-View-Controller (MVC) as a Solution for Software Development
(a case study on the development of on-line test software)
Ermatita, Huda Ubaya, Dwiroso Indah
Computer Science Faculty of Sriwijaya University Palembang-Indonesia.
E-mail: [email protected]
Abstract
Software development is a complex task and requires adaptation to accommodate the needs of the user. To make software easier to change during maintenance, a concept has been developed for software construction: the model-view-controller pattern, an architecture that helps facilitate the development and maintenance of software. In this architecture the three layers, the model, the view and the controller, are developed independently, which provides convenience in development and maintenance; the architecture can also present a view that is simple and attractive to the user. An on-line test system is software that requires interaction with the user and adaptive maintenance, because an on-line test system needs software development that can accommodate rapidly growing needs. This paper analyses the Model-View-Controller pattern and attempts to apply it to the development of on-line test system software.
Keywords: Model-View-Controller, architecture, pattern, on-line test
I. INTRODUCTION
1.1 Background
Software development is a complex task. The software development process, from concept to implementation, is known as the System Development Life Cycle (SDLC), which includes stages such as requirements analysis, design, code generation, implementation, testing and maintenance (Pressman, 2002; 37-38). Software development also requires an architecture or pattern that can help in building the software.
Currently, many software development utilizing a
concept of programming by using the Model-viewcontroller pattern, which is the architecture with a lot of
help in the development of software that is easy to
maintenance, especially those based interaction with the
GUI (Graphical User Interface) and the web. This is in
accordance with the statement Ballangan (2007) : “MVC
pattern is one of the common architecture especially in
the development of rich user interactions GUI Application.
Its main idea is to decouple the model. The interaction
between the view and the model is managed by the
controller. “
In addition, Boedy, B. (2008) [1] states that MVC is a programming concept that has been applied widely of late. Applying MVC to build an application eases the work once the application enters the maintenance phase, and the development and integration process becomes easier to do. The basic idea of MVC is actually very simple, namely to try to separate the model, the view, and the controller.
The concept of software development with the model, view and controller architecture is very helpful in the development of on-line test software. This is because software development with the model-view-controller concept makes maintenance easier and the appearance more attractive, so that user interaction with the software is more attractive and easier.
This study focuses on GUI-based software and applies the Model-View-Controller pattern to the development of the On-line Testing software at the Computer Science Faculty of Sriwijaya University, with considerations related to the concept in terms of quality, flexibility and ease of maintenance (expansion or improvement), as cited by much of the literature.
From the background described above, the research is conducted with the title: "The concept of Model-View-Controller (MVC) in the case study of on-line test software development solutions".
1.2. Problem Formulation
The research was conducted in order to address the question of how the Model-View-Controller (MVC) architecture eases the construction of software, in this case study an application for taking exams on-line.
1.3 Objectives and Benefits
1.3.1 Research Objectives
This study aims to understand the development of a software architecture based on the model-view-controller pattern, through a case study implementing it in on-line test system software.
1.3.2 Research Benefits
The benefit obtained from this research is:
1. Understanding the concept of the MVC approach in the development of on-line test software.
1.4 Research Method
The steps carried out in this research are:
1. Study of the literature on the concept of the Model-View-Controller architectural pattern.
2. Implementing the MVC architectural pattern in the development of on-line test software.
II. OVERVIEW OF REFERENCES
2.1 Definition
Popadyn, 2008 [8] defines: Model-View-Controller is an architectural pattern used in software engineering. Nowadays such complex patterns are gaining more and more popularity. The reason is simple: user interfaces are becoming more and more complex, and a good way of fighting against this complexity is using architectural patterns, such as MVC.
The MVC pattern consists of three layers, as in the research by Passetti (2006): the pattern separates responsibilities across the three components, and each one has only one responsibility.
• The view is responsible only for rendering the UI
elements. It gives you a presentation of the model. In
most implementations the view usually gets the state and
the data it needs to display directly from the model.
• The controller is responsible for interacting between the
view and the model. It takes the user input and figures out
what it means to the model.
• The model is responsible for business behaviors,
application logic and state management. It provides an
interface to manipulate and retrieve its state and it can
send notifications of state changes to observers.
Each layer has its own responsibility, and the layers are integrated with one another. The MVC abstraction can be graphically represented as follows.
Figure 1. MVC abstraction
Events typically cause a controller to change a model,
or view, or both. Whenever a controller changes a model’s
data or properties, all dependent views are automatically
updated. Similarly, whenever a controller changes a view,
for example, by revealing areas that were previously
hidden, the view gets data from the underlying model to
refresh itself.
2.2 Common Workflow
The common workflow of the pattern is shown on the
next diagram.
Figure 2. Common workflow MVC pattern
Control flow generally works as follows:
· The user interacts with the user interface in some way (e.g. presses buttons, enters some input information, etc.).
· A controller handles the input event from the user interface.
· The controller notifies the model of the user action, possibly resulting in a change in the model state.
· A view uses the model to generate an appropriate user interface. The view gets its own data from the model. The model has no direct knowledge of the view.
· The user interface waits for further user interactions, which then start a new cycle.
A minimal code sketch of one such cycle is given below.
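As an illustration of this cycle (not taken from the paper; the class and method names are invented for an on-line test scenario), a minimal Python sketch could look like this:

class ExamModel:
    """Model: holds the application state and business data."""
    def __init__(self):
        self.questions = ["2 + 2 = ?", "Capital of Indonesia?"]
        self.answers = {}

    def record_answer(self, index, answer):
        self.answers[index] = answer              # state change requested by the controller

class ExamView:
    """View: renders the model's state; it only reads from the model."""
    def render(self, model):
        for i, question in enumerate(model.questions):
            given = model.answers.get(i, "-")
            print(f"Q{i + 1}: {question}  [answer: {given}]")

class ExamController:
    """Controller: translates user input into operations on the model."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle_input(self, index, answer):
        self.model.record_answer(index, answer)   # update the model
        self.view.render(self.model)              # the view refreshes itself from the model

if __name__ == "__main__":
    controller = ExamController(ExamModel(), ExamView())
    controller.handle_input(0, "4")               # one user interaction = one MVC cycle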
2.3 Model-View-Controller Architecture
In the development of software with the model-view-controller architectural pattern, the software is divided into three parts, namely the model, the view and the controller.
Biyan (2002) points out: MVC is a design pattern that enforces the separation between the input, processing, and output of an application. To this end, an application is divided into three core components: the model, the view, and the controller. Each of these components handles a discrete set of tasks.
a. Model
The model is a representation of the data involved in a transaction process. Whenever a method or function of the application needs to access the data, it does not interact with the data source directly but must go through the model first. Only the model is allowed to interact directly with the data source.
b. View
The view is the presentation layer or user interface (display) for the user of the application. The data needed by the user is formatted in such a way that it can be presented in a view whose format is adjusted to the user requirements.
c. Controller
The controller is the logic aspect of the application. The controller determines the business processes of the application being built. The controller responds to each user input by directing the model and the view so that the appropriate request from the user can be met well.
Each layer is interconnected with and mutually dependent on the others, as disclosed by Anderson, D.J. (2000): as we are building each View on demand, it must request data from the model every time it is instantiated. Hence, there is no notification from the Model to the View; the View object that is created must request any necessary data from the model.
Views and Controllers together can be considered the Presentation Layer in a Web Application. However, as we will see, it is easy to cleanly separate Views from Controllers in a server-side Web Application.
Figure 3. Client to Presentation Layer Interaction
showing the MVC Separation at the Server
The Model, on the other hand, is separate from the
Presentation Layer Views and Controllers. It is there to
provide Problem Domain services to the Presentation
Layer including access to Persistent Storage. Both View
and Controller may message down to the model and it in
turn will send back appropriate responses.
Some research on MVC in this field has been done, among others:
Sauer, in the research entitled "MVC-Based Modeling Support for Embedded Real-Time Systems", states that "effects of real-time requirements on the model, view, controller and communication components have to be identified, e.g. in the scenario for signalling a warning message and handling it by the user, or in the case of exception handling. It has to be answered which time constraints apply to each component", so that the effects of real-time requirements on the model, view and controller can be identified quickly.
Ogata, in their paper, describes the design of a Model-View-Controller framework, called Event-driven Active MVC, which is based on the active object. This model treats each input by using an event-driven mechanism and places the processing objects in the active display. The model is very suitable for applications that focus on the GUI and has been implemented in Smalltalk.
Nutaro et al. (2004) propose a design pattern that supports the construction of adaptable simulation software via an extension of the Model/View/Control design pattern. The resulting Model/Simulator/View/Control pattern incorporates key concepts from the DEVS modeling and simulation methodology in order to promote a separation of modeling, simulation, and distributed computing issues. The advantage of this approach to simulation software design is considered in the context of other documented attempts to promote component-based simulation development [6].
Veit and Herrmann (2003) propose to use the model-view-controller paradigm as a benchmark for AOSD approaches, since it combines a set of typical problems concerning the separation and integration of concerns [11].
Morse, S.F. et al. (2004) introduce a Model-View-Controller Java Application Framework. Their paper proposes a simple application framework in Java that follows a Model-View-Controller design and that can be used in introductory and core courses to introduce elements of software engineering. Although it is focused on applications having a graphical interface, it may be modified to support command-line programs. The application framework is presented in the context of the development tools Apache Ant and JUnit [5].
With software development through the MVC approach, maintenance and evolution of the system are easier to do because the software is divided into layers; the software is divided into three layers.
By applying the MVC concept, the source code of an application becomes tidier and easier to maintain and develop. In addition, program code that has already been written can be reused for other applications by changing only some parts. Basically, MVC separates the data processor, referred to as the Model, from the user interface (UI) or HTML template, in this case called the View. Meanwhile, the Controller acts as the binder between the Model and the View, handling requests so that the user's data is displayed correctly.
2.4 Online Testing
Testing, as one of the activities in teaching and learning, can be done anywhere with the help of information technology. This activity is called the on-line test.
In an on-line test, the need for many variations of questions and a large supply of questions is very important; in addition, maintenance of the software is needed to accommodate adaptation to the developing needs of the user.
Effectiveness and a good level of security are required for the question documents; in addition, the test also requires a good and reliable appearance when presenting the test questions. Therefore, a good development concept and architecture are needed to support the development of this on-line test software.
III. RESULTS AND DISCUSSION
3.1. Analysis of Needs
The result of the needs analysis, in the form of functional requirements, consists of three main menus: the login process, the main page, and the lecturer and student menus.
3.2. MVC Architecture Concept Analysis for the Online Testing Software
The Model-View-Controller architectural pattern can help build the project more effectively. In this pattern, development is done by dividing the software into Model, View and Controller components. This division is useful to separate the parts of the application, which eases application development and maintenance. This is in line with the statement that the Model-View-Controller design assists software with similar division needs; Model-View-Controller is an analysis-class concept carried through to the design phase [7].
In a system equipped with software, the changes are most often in the user interface. The user interface is the part that deals directly with the user and with how the user interacts with the application, so it is the focus of changes made for ease of use.
Business logic embedded in a complex user interface makes changes to the user interface more complex, and simple mistakes occur. A change in one part has the potential to affect the software as a whole.
MVC provides a solution to this problem by dividing the application into separate sections, Model, View and Controller, splitting the parts and creating a system of interaction between them.
Figure 5. MVC architecture is applied to the software
test online
3.2.1. Model layer
The model layer in the MVC pattern represents the data used by the application and the business processes associated with it.
The Domain Model is a representation of the model layer. All model classes are in the model layer, including the classes that support it (DAO classes).
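A minimal sketch of a domain-model class and its DAO for the on-line test, assuming a hypothetical questions table (all names here are invented for illustration):

import sqlite3
from dataclasses import dataclass

@dataclass
class Question:
    """Domain-model object for one test question (hypothetical fields)."""
    question_id: int
    text: str
    answer_key: str

class QuestionDAO:
    """Data Access Object: the only class that talks to the data source."""
    def __init__(self, connection):
        self.connection = connection

    def find_all(self):
        rows = self.connection.execute(
            "SELECT question_id, text, answer_key FROM questions")
        return [Question(*row) for row in rows]

# Usage sketch with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE questions (question_id INTEGER, text TEXT, answer_key TEXT)")
conn.execute("INSERT INTO questions VALUES (1, '2 + 2 = ?', '4')")
print(QuestionDAO(conn).find_all())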
3.2.2. View layer
This layer contains all the details of the user interface implementation. Here, the components provide a graphical representation of the internal application process flow and handle the application's user interaction. No other layer interacts with the user, only the View.
In the Online Testing System software being developed, the view layer consists of the files containing the display for the user, which consist mainly of HTML script.
3.2.3. Controller layer
The controller layer in MVC provides the detailed program flow of each layer and transition; it is responsible for gathering the events made by the user from the View and for updating the components of the model using the data entered by the user.
In the Online Testing System software being developed, there is only one controller class, the Front Controller class. One Controller class is used because the scope of the software is still small, so the controller does not need separation.
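A small sketch of a Front Controller that dispatches every request of the on-line test application through a single class (routes and handler names are hypothetical):

class FrontController:
    """Single entry point that maps a request name to its handler."""
    def __init__(self):
        self.routes = {}

    def register(self, action, handler):
        self.routes[action] = handler

    def dispatch(self, action, **params):
        handler = self.routes.get(action)
        if handler is None:
            return "404: unknown action"
        return handler(**params)

front = FrontController()
front.register("login", lambda user, password: f"logging in {user}")
front.register("start_exam", lambda exam_id: f"starting exam {exam_id}")
print(front.dispatch("login", user="student1", password="secret"))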
IV. CONCLUSION
The results from this research are:
1. Software development can be divided into several layers.
2. Dividing the layers in software development with the Model-View-Controller architectural concept can assist in system maintenance and evolution.
3. Applying the Model-View-Controller architecture to the development of the on-line testing system is very helpful for further development, because this on-line test software strongly depends on a reliable user interface.
REFERENCES
[1] Boedy, B., 2008, Model-View-Controller, available on http://MVC/Model-view-controller.html
[2] David J. Anderson, 2000, Using MVC Pattern in Web Interactions, http://www.uidesign.net/Articles/Papers/UsingMVCPatterninWebInter.html
[3] Hariyanto, B., 2004, "Object-oriented Systems Engineering", Bandung: Informatika. Cackle, B., 2002, The MVC design pattern brings about better organization and code reuse, Oct 30, 2002.
[4] Mathiassen, L., Madsen, A.M., Nielsen, P.A., and Stage, A., 2000, "Object Oriented Analysis & Design", Issue 1, Forlaget Marko, Denmark.
[5] Morse, S.F., and Anderson, C.L., 2004, Design and Application: Introducing Software Engineering Principles in Introductory CS Courses: Model-View-Controller Java Application Framework, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.73.9588
[6] Nutaro, A. and Hammonds, P., 2004, Combining the Model/View/Control Design Pattern with the DEVS Formalism to Achieve Rigor and Reusability in Distributed Simulation, JDMS, Vol. 1, Issue 1, April 2004, pp. 19-28, © 2004 The Society for Modeling and Simulation International.
[7] Nugroho, A., 2002, "Analysis and Design of Object-Oriented Systems", Bandung: Informatika. Passetti, 2006, The MVC Pattern, © 2006 P & P Software, www.pnp-software.com, http://images.google.co.id/imgres?imgurl=http://www.pnp-software.com/eodisp/images/mvc-original
[8] Popadiyn, P., "Exploring the Model-View-Controller (MVC) Pattern", posted by Pencho Popadiyn on Dec 17, 2008.
[9] Pressman, R.S., 2002, "Software Engineering: A Practitioner's Approach", translated by CN. Harnaningrum (Book 1), Yogyakarta: ANDI. Sutabri, T., 2004, "Analysis of Information Systems", Yogyakarta: ANDI.
[10] O'Brien, James A., 2003, Introduction to Information Systems: Essentials for the E-business Enterprise, 11th edition, McGraw-Hill, Boston.
[11] Veit, M. and Herrmann, S., 2003, "Model View Controller and Object Teams: A Perfect Match of Paradigms", available on http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.12.5963
[12] Model-View-Controller Pattern, Copyright © 2002 eNode, Inc. All Rights Reserved, http://www.enode.com/x/markup/tutorial/mvc.html
[13] MVC Pattern (MVC Framework), http://209.85.173.132/search?q=cache:e2SN1LcxrZEJ:www.javapassion.com/j2ee/MVCPatternAndFrameworks_speakernoted.pdf+MVC+pattern&hl=en&ct=clnk&cd=14&gl=id
[14] ——, 2006, MVC Pattern (MVC Framework), http://www.javapassion.com/j2ee/MVCPatternAndFrameworks_speakernoted.pdf
[15] http://www.mercubuana.ac.id/sistem.php, accessed 4 September 2008; http://id.wikipedia.org/wiki/Perangkat_lunak, "software", accessed 4 September 2007.
[16] file:///D:/MVC%20teori/Mengenal%20MVC%20di%20Zend%20Framework%20%97%20Pemrograman%20Web.htm
Paper
Saturday, August 8, 2009
13:55 - 14:15
Room AULA
Signal Checking of Stegano Inserted on Image Data Classification by
NFES-Model
M. Givi Efgivia
Lecturer, STMIK Muhammadiyah Jakarta
Safaruddin A. Prasad
Lecturer in Physics, FMIPA, UNHAS, Makassar
Al-Bahra L.B.
Lecturer, STMIK Raharja, Tangerang
E-mail : [email protected], [email protected]
E-mail : [email protected]
Abstract
In this paper, we propose an identification method for land cover from remote sensing data that combines neuro-fuzzy and expert systems. This combination is called the Neuro-Fuzzy Expert System Model (NFES-Model). A neural network (NN), as part of the neuro-fuzzy system, has the ability to recognize complex patterns and to classify them into many desired classes. However, the neural network might produce misclassifications. By adding a fuzzy expert system to the NN using a geographic knowledge base, misclassification can be decreased, so that the classification result is improved compared with a neural network approximation. An image data classification result may contain secret information inserted by a steganography method and other encryption. To detect the secret information, we use a fast Fourier transform method to check for the existence of that information by a signal analyzing technique.
Keywords: steganography, knowledge-based, neuro-fuzzy, expert system, signal analyzing.
1. Introduction
Neuro-fuzzy expert system model (NFES – Model) can be
divided into two sub-systems which consist of neuro-fuzzy
system and expert system. Neuro-fuzzy system is a combination of neural networks and fuzzy systems, where each
has independent areas. The connections to each other are
merely marginal but both bring benefit for the solution of
many problems.
Lotfi A. Zadeh introduced the concept of fuzzy sets in
1965. In 1974, E.H. Mamdani invented a fuzzy inference
procedure, thus setting the stage for initial development
and proliferation of fuzzy system applications. Logic programming also played an important role in disseminating
the idea of fuzzy inference, as it emphasizes the importance of non-numerical knowledge over traditional mathematical models [4].
Expert systems are computer programs which use symbolic knowledge to simulate the behavior of human experts, and they are a topical issue in the field of artificial
intelligence (AI). However, people working in the field of
AI continue to be confused about what AI really is, as pointed out by Schank [6]. In other words, there are attempts to confer properties (or attributes) on a computer system under the guise of AI, but the practitioners find difficulty in defining these properties! It is generally accepted that an
expert system is useful when it reaches the same conclusion as an expert [7].
The most recent wave of fuzzy expert system technology
uses consolidated hybrid architectures, what we call Synergetic AI. These architectures developed in response to
the limitations of previous large-scale fuzzy expert systems.
The NFES-Model is developed and implemented to analyze the land cover classification of Maros District in South Sulawesi Province, Indonesia.
Fuzzy logic is used to analyze the remotely sensed data for land cover classification since Maros District has a complex geography; the remotely sensed image has various geometrical distortions caused by the effect of the complex earth surface, such as the shadows of hills.
Remotely sensed image data sampled from a satellite includes specific problems such as large image data size,
difficulty in extracting characteristics of image data and a
quantity of complex geographical information in a pixel
due to its size of 30 m2. In the past, we have used a statistical method such as a maximum likelihood method without
considering these problems. The maximum likelihood
method identifies a recognition structure by a statistical
method using the reciprocal relation of density value distribution per one category. This method is based on an
assumption that image data follows the Gaussian distribution.
An image data classification may have secret information inserted into it, for intelligence requirements or as information-hiding art on the image data, by the steganography method. Now, if a government institution obtains such received image data, the question arises: does that image data contain secret information?
The signal of an image data classification, or of other image data, can be checked by using a fast Fourier transform algorithm. This checking is mainly aimed at uncovering whether secret content has been inserted into the received image data.
2. Neuro-Fuzzy Expert System (NFES)
2.1. Architecture of NFES-Model
Figure 1 shows the NFES-Model architecture as a neural network (NN) architecture with four hidden layers, one input layer, and one output layer. The NFES-Model architecture exhibits a parallel structure and shows how the data flow in the model, for learning (backward path) and classification (forward path) respectively. In the image data processing, the classification result is improved and the image classification can be visualized.
Each layer in the NFES-Model (Figure 1) is associated with a certain stage of the fuzzy inference processing. In detail, each layer is explained as follows:
Layer-1 : The input layer. Each neuron in this layer transmits external crisp signals directly to the next layer. That is,

$y_i^{(1)} = x_i^{(1)}$

Layer-2 : The fuzzification layer. Neurons in this layer represent fuzzy sets used in the antecedents of fuzzy rules. A fuzzification neuron receives a crisp input and determines the degree to which this input belongs to the neuron's fuzzy set. The activation function of a membership neuron is set to the function that specifies the neuron's fuzzy set. We use triangular sets, and therefore the activation functions for the neurons in layer-2 are set to triangular membership functions. A triangular membership function can be specified by two parameters {a, b}.
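As an illustration only, assuming a centre/width parameterisation for the two parameters {a, b} (the paper's exact definition may differ), a triangular membership function could be implemented as:

def triangular_mf(x, a, b):
    """Triangular membership function with centre a and base width b (an assumed form).

    The degree is 1 at x = a and falls linearly to 0 at x = a - b/2 and x = a + b/2.
    """
    if b <= 0:
        raise ValueError("width b must be positive")
    return max(0.0, 1.0 - 2.0 * abs(x - a) / b)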
Figure-1. The architecture of NFES-Model
Layer-3 : The fuzzy rule layer. Each neuron in this layer corresponds to a single fuzzy rule. A fuzzy rule neuron receives inputs from the fuzzification neurons that represent fuzzy sets in the rule antecedents. For instance, neuron R1, which corresponds to Rule-1, receives inputs from neurons PR1, PR2, and PR3.
In a neuro-fuzzy system, intersection can be implemented by the product operator. Thus, the output of neuron i in layer-3 is obtained as

$y_i^{(3)} = x_{1i}^{(3)} \times x_{2i}^{(3)} \times \cdots \times x_{ki}^{(3)}$

for example, $y_{R1}^{(3)} = \mu_{PR1} \times \mu_{PR2} \times \mu_{PR3} = \mu_{R1}$.
Layer-4 : The output membership layer. Neurons in this layer represent fuzzy sets used in the consequents of fuzzy rules. Each output membership neuron combines all its inputs using the fuzzy union operation, which can be implemented by the probabilistic OR ($\oplus$). That is,

$y_{Oi}^{(4)} = x_{1i}^{(4)} \oplus x_{2i}^{(4)} \oplus \cdots \oplus x_{ki}^{(4)}$

for example, $y_{O1}^{(4)} = \mu_{R2} \oplus \mu_{R3} \oplus \mu_{R4} \oplus \mu_{R6} \oplus \mu_{R7} \oplus \mu_{R8} = \mu_{O1}$.

Layer-5 : The defuzzification layer. Each neuron in this layer represents a single output of the neuro-fuzzy expert system. It combines all the layer-4 outputs in a union operation over the product-operation results; this is called a sum-product composition:

$y = \dfrac{\mu_{O1} a_{O1} b_{O1} \oplus \mu_{O2} a_{O2} b_{O2} \oplus \cdots \oplus \mu_{O7} a_{O7} b_{O7}}{\mu_{O1} b_{O1} \oplus \mu_{O2} b_{O2} \oplus \cdots \oplus \mu_{O7} b_{O7}}$

The result of the defuzzification becomes the input for the neuron in the next layer.

Layer-6 : The output networking layer. The neuron in this layer accumulates the whole processing series of the NFES network. In the NFES implementation, the neuron in layer-6 appears as the classification map.

For each input variable used in the network, we must establish how many fuzzy sets are used for the domain partition of the variable. Given the domain partition and the linguistic terms for each variable, we can then perform the classification and obtain the classification result [5].
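A toy sketch of the forward pass through layers 3-5 described above (the dictionaries, names and rule wiring are illustrative assumptions, not the authors' implementation):

def prob_or(values):
    """Probabilistic OR used as the fuzzy union: a (+) b = a + b - a*b."""
    result = 0.0
    for v in values:
        result = result + v - result * v
    return result

def nfes_forward(memberships, rule_inputs, output_rules, a, b):
    """Toy forward pass over layers 3-5.

    memberships : dict fuzzification neuron -> firing degree (layer-2 output)
    rule_inputs : dict rule neuron -> list of fuzzification neurons feeding it
    output_rules: dict output membership neuron -> list of rule neurons feeding it
    a, b        : dicts of output membership parameters used in the sum-product composition
    """
    # Layer-3: each rule neuron multiplies the degrees of its antecedents
    rule_strength = {}
    for rule, inputs in rule_inputs.items():
        strength = 1.0
        for neuron in inputs:
            strength *= memberships[neuron]
        rule_strength[rule] = strength
    # Layer-4: each output membership neuron combines its rules with probabilistic OR
    mu = {o: prob_or([rule_strength[r] for r in rules]) for o, rules in output_rules.items()}
    # Layer-5: sum-product (weighted-average) defuzzification
    num = sum(mu[o] * a[o] * b[o] for o in mu)
    den = sum(mu[o] * b[o] for o in mu)
    return num / den if den else 0.0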
2.2. Algorithm NFES
An algorithm of the NFES-Model for land cover identification is presented. The NFES algorithm can be written in detail as follows:
Step-1 : Determine the number of m membership functions for the k inputs.
Step-2 : Generate the rules for the j-th class.
Step-3 : Perform training and calculate the error in the j-th class ($\epsilon_j$) with the formula

$\epsilon_j = \dfrac{\sum_{k=1}^{3} \sum_{i=1}^{N} (x_i^k - G_i^k)^2}{3N}$

where N = number of pixels in the j-th class, x = value of a pixel in the classification image, and G = value of a pixel in the ground-truth image. The index three (3) indicates that the input data consist of three channels.
Step-4 : IF $\epsilon_j > \epsilon_t$ THEN return to Step-2 (t = tolerance).
Step-5 : Repeat Step-4 until $k_m$ iterations.
Step-6 : IF $\epsilon_j > \epsilon_t$ THEN return to Step-1.
Step-7 : IF $\epsilon_j < \epsilon_t$ THEN $C = \{x \mid x \in C_j\}$. This expression shows that C is a set whose elements are x such that x is an element of the j-th class.
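A minimal sketch of the Step-3 error calculation, assuming the classification and ground-truth pixels of one class are arranged channel by channel (function and variable names are mine, not the paper's):

import numpy as np

def class_error(classified, ground_truth):
    """Per-class training error for a 3-channel image.

    classified, ground_truth: arrays of shape (3, N) holding the pixel values of the
    classification image and the ground-truth image for the N pixels of the class.
    """
    classified = np.asarray(classified, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    n_channels, n_pixels = classified.shape          # expected: 3 channels
    return ((classified - ground_truth) ** 2).sum() / (n_channels * n_pixels)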
3. The Experimental Design
Figure 2 shows the land cover classification procedure scheme of the NFES-Model. The image data classification using the neuro-fuzzy expert system (NFES) is divided into three parts, namely pre-processing by the fuzzy c-mean method, pattern recognition by the neuro-fuzzy system, and checking by knowledge representation.
3.1. Pre-processing by Fuzzy C-Mean
Clustering implies a grouping of pixels in multispectral space. Pixels belonging to a particular cluster are therefore spectrally similar. Fuzzy C-Mean (FCM) is one grouping method based on Euclidean distance. The group of Prasad [6] used FCM algorithms for land cover classification, as did the group of Sangthongpraow [7].
If x1 and x2 are two pixels whose similarity is to be checked, then the Euclidean distance between them is
$d(x_1, x_2) = \lVert x_1 - x_2 \rVert = \left\{ (x_1 - x_2)^t (x_1 - x_2) \right\}^{1/2} = \left\{ \sum_{i=1}^{N} (x_{1i} - x_{2i})^2 \right\}^{1/2}$   (1)

where N is the number of spectral components.
Figure-2. The NFES-Model procedure scheme for land cover classification
A common clustering criterion or quality indicator is the sum of squared error (SSE) measure, defined as:

$SSE = \sum_{C_i} \sum_{x \in C_i} (x - M_i)^t (x - M_i) = \sum_{C_i} \sum_{x \in C_i} \lVert x - M_i \rVert^2$   (2)

where $M_i$ is the mean of the i-th cluster and $x \in C_i$ is a pattern assigned to that cluster [6][7].
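A small sketch of the SSE computation of equation (2), assuming the patterns and cluster means are held in NumPy arrays (names are illustrative):

import numpy as np

def sum_squared_error(patterns, labels, means):
    """SSE clustering criterion of equation (2).

    patterns: array (n_samples, n_features); labels: cluster index per sample;
    means: array (n_clusters, n_features) of cluster means M_i.
    """
    patterns = np.asarray(patterns, dtype=float)
    means = np.asarray(means, dtype=float)
    diffs = patterns - means[np.asarray(labels)]     # x - M_i for each pattern
    return float((diffs ** 2).sum())                 # sum of squared Euclidean distances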
3.2. Pattern Recognition by Neuro-Fuzzy System
In this part, the processing is done in four steps. The first step is fuzzification of the crisp values. The second step is editing of the membership functions, including determining the number of membership functions for each input. The third step is the training and testing process. The fourth step is defuzzification for the pre-classification requirement.
The next step is checking the pre-classified result: is the classification optimal or not? If the pre-classified result, or the next classified result, is not yet optimal, the knowledge base is updated and the classification is checked again in a loop. Optimization of the classification is done by counting the number of misclassifications. If the misclassification value reaches the desired level, the checking is stopped and the final classification is obtained. From the final classification result, we check the image for the existence of signal noise or inserted secret information. This signal checking uses the FFT (Fast Fourier Transform).
Figure 3. Landsat ETM7 image of false color composite (band-1, band-2, band-3) in the year 2001 of Marusu District in South Sulawesi.
3.3. The Checking by Knowledge Representation
Table-1 shows the premise categories for the production rules of the network structure of the NFES-Model.
Table-1. Premise categories for production rules
With the premise categories for the production rules, the attribute items become "what pixel value corresponds to the mean symbol in Table-1?". Because the network structure consists of three inputs, the attribute items become 18 kinds (3 x 6). The inference result by the forward chaining method reduces the rules to only 8 rules. If R is the value of pixels in band-1, G is the value of pixels in band-2, and B is the value of pixels in band-3, then the eight rules are:
1. IF R least value AND G least value AND B least value OR R small value THEN classified is Hutan (forest)
2. IF R least value AND G least value OR G small value OR G medium small AND B small value OR B medium small OR B medium value THEN classified is Air (water)
3. IF R least value OR R small value AND G least value OR G small value AND B least value THEN classified is Tegalan/kebun (garden)
4. IF R small value OR R medium small OR R medium value AND G least value OR G small value OR G medium small AND B small value OR B medium small OR B medium value THEN classified is Tambak (embankment)
5. IF R small value OR R medium small OR R medium value AND G small value OR G medium small OR G medium value AND B least value OR B small value OR B medium small THEN classified is Sawah (paddy field)
6. IF R medium small OR R medium value AND G small value OR G medium small AND B medium small OR B medium value THEN classified is Pemukiman (urban)
7. IF R medium value AND G medium value OR G large value AND B large value THEN classified is Lahan gundul (bare land)
8. IF R large value AND G medium value OR G large value AND B medium value OR B large value THEN classified is Awan (cloud).
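As a rough illustration of how such production rules can be evaluated by forward chaining (the AND/OR grouping and the label names are one possible reading of the rules above, not the authors' code):

def classify_pixel(r_cat, g_cat, b_cat):
    """Toy forward-chaining classifier over the linguistic categories of Table-1.

    r_cat, g_cat, b_cat are the linguistic labels ('least', 'small', 'medium small',
    'medium', 'large') assigned to the pixel values of band-1, band-2 and band-3.
    Only rules 1 and 2 are sketched; the remaining rules follow the same pattern.
    """
    # Rule 1: forest (Hutan)
    if (r_cat in ('least', 'small')) and g_cat == 'least' and b_cat == 'least':
        return 'Hutan (forest)'
    # Rule 2: water (Air)
    if r_cat == 'least' and g_cat in ('least', 'small', 'medium small') \
            and b_cat in ('small', 'medium small', 'medium'):
        return 'Air (water)'
    return None  # fall through to the remaining rules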
The visualization of all rules in the computer screen can be
seen as in figure-4.
Figure-4. The network structure of NFES-Model created
by rule production
3.4. Fast Fourier Transform Algorithm
If a function f(t) is periodic with period T, then it can be expressed as an infinite sum of complex exponentials in the manner

$f(t) = \sum_{n=-\infty}^{\infty} F_n e^{j n \omega_0 t}, \qquad \omega_0 = \frac{2\pi}{T}$   (3)

in which n is an integer, $\omega_0$ is an angular frequency, and the complex expansion coefficients $F_n$ (Fourier series) are given by

$F_n = \frac{1}{T} \int_{-T/2}^{T/2} f(t)\, e^{-j n \omega_0 t}\, dt$   (4)

The transform itself, which is equivalent to the Fourier series coefficients of (4), is defined by

$F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j \omega t}\, dt$   (5)

Let the sequence $\phi(k)$, k = 0, ..., K-1, be the set of K samples taken of f(t) over the sampling period 0 to $T_0$. The samples correspond to times $t_k = k T_0 / K$. The continuous function f(t) is replaced by the samples $\phi(k)$ and $\omega$ is replaced by $r\omega_0$, with r = 0, 1, ..., K-1; thus $F(\omega)$ becomes $F(r\omega_0)$. The time variable t is replaced by $k T_0 / K$, k = 0, ..., K-1. With these changes (5) can be written in sampled form as

$F(r\omega_0) = \frac{T_0}{K} \sum_{k=0}^{K-1} \phi(k)\, e^{-j 2\pi r k / K}, \qquad r = 0, \dots, K-1$   (6)

with

$\omega_0 = \frac{2\pi}{T_0}$   (7)

It is convenient to consider the reduced form of (6):

$F(r) = \sum_{k=0}^{K-1} \phi(k)\, W^{rk}, \qquad W = e^{-j 2\pi / K}, \quad r = 0, \dots, K-1$   (8)

Assume K is even; in fact the algorithm to follow will require K to be expressible as $K = 2^m$, where m is an integer. From $\phi(k)$ form two sequences Y(k) and Z(k), each of K/2 samples. The first contains the even-numbered samples and the second the odd-numbered samples,

Y(k): f(0), f(2), ..., f(K-2)
Z(k): f(1), f(3), ..., f(K-1)

so that Y(k) = f(2k) and Z(k) = f(2k + 1), k = 0, ..., (K/2)-1. Equation (8) can then be written as

$F(r) = B(r) + W^{r} C(r)$   (9)

where B(r) and C(r) will be recognized as the discrete Fourier transforms of the sequences Y(k) and Z(k).
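A compact recursive radix-2 decimation-in-time FFT that follows this even/odd splitting can be sketched as follows (an illustrative implementation, not the code used by the authors):

import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT.

    x: list of complex samples; len(x) must be a power of two (K = 2**m).
    Returns the K-point DFT F(r) = sum_k x[k] * exp(-2j*pi*r*k/K).
    """
    K = len(x)
    if K == 1:
        return x[:]                              # a single sample is its own DFT
    even = fft(x[0::2])                          # B(r): DFT of the even-numbered samples Y(k)
    odd = fft(x[1::2])                           # C(r): DFT of the odd-numbered samples Z(k)
    out = [0j] * K
    for r in range(K // 2):
        w = cmath.exp(-2j * cmath.pi * r / K)    # twiddle factor W^r
        out[r] = even[r] + w * odd[r]            # F(r)       = B(r) + W^r C(r)
        out[r + K // 2] = even[r] - w * odd[r]   # F(r + K/2) = B(r) - W^r C(r)
    return out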
A supervised learning algorithm of the NFES adapts the fuzzy sets continuously, in a cycle over the learning sets, until the final criterion is satisfied, for example when the number of misclassifications reaches an acceptable value, or when the error value cannot decrease any further.
Table-2 presents the land cover classification result for Marusu District, South Sulawesi, Indonesia, and the land cover area, under the assumption that each pixel covers a 30 m2 area of land. The classification uses NFES in the form of a knowledge-based expert system. The development of the rule base refers to two sources, namely the map information (terrain form) and geographic knowledge.
As an implementation of the NFES-Model, the image classification result is shown in Figure-5, where the image classification consists of water area (Air), forest area (Hutan), paddy field area (Sawah), embankment area (Tambak), garden area (Tegalan/Kebun), urban area (Pemukiman), bare land area (Lgdl), and cloud area (Awan). From the calculation result, we obtained the tabulation of the classification result shown in Table-2.
The training error obtained by the back-propagation method is 6.6365 for 100 iterations and 0.68957 for 1000 iterations. With the proposed method we obtain an error of 0.00013376.
Table-2. Calculation Result
Objects: Water (Air), Forest (Hutan), Paddy field (Sawah), Embankment (Tambak), Garden (Kebun), Urban (Pemukiman), Bare land (Lahan gundul/Lgdl), Cloud cover (Awan).
Survey area: 483 x 381 pixels. Percent classified: 99.20 %. Percent unclassified: 0.80 %.
Figure-5. The land cover classification of Marusu, South Sulawesi, Indonesia
4. Discussion and Result
Verification of the results using the NFES-Model for land cover classification has shown a decrease in misclassification. Using an artificial neural network approximation, i.e. a back-propagation neural network (BPNN), the misclassification is up to 20% (the investigation obtained 12.29%), whereas using the NFES-Model, with a test case using Landsat-ETM7 data of Marusu District, South Sulawesi, the misclassification is only 0.8%.
The signal checking of the original image, of the image data classification, and of the image data classification with the inserted stegano/secret information is shown in Figure-6 and Figure-7.
Figure-6a. The signal checking of an original image
Figure-6b. The signal checking of the image data classification
Figure-6c. The signal checking of the image data classification with the stegano
Figure-7a. The image data classification without the stegano
Figure-7b. The image data classification with the stegano
5. Conclusion
REFERENCES
[1] Funabashi, M. et al., Fuzzy and neural hybrid expert system: Synergetic AI, IEEE Expert, 1995, pp. 32-40.
[2] Skidmore et al., An operational GIS expert system for mapping forest soil, Photogrammetric Engineering & Remote Sensing, 1996, Vol. 62, No. 5, pp. 501-511.
[3] Maeda, A. et al., A fuzzy-based expert system building tool with self-tuning capability for membership function, Proc. World Congress on Expert Systems, Pergamon Press, New York, 1991, pp. 639-647.
[4] Murai, H., Omatu, S., Remote sensing image analysis using a neural network and knowledge-based processing, Int. J. Remote Sensing, 1997, Vol. 18, No. 4, pp. 811-828.
[5] Jang, J.S.R., ANFIS: Adaptive-Network-based Fuzzy Inference Systems, IEEE Transactions on Systems, Man, and Cybernetics, 1993, Vol. 23, No. 3, pp. 665-685.
[6] Prasad, S.A., Sadly, M., Sardy, S., Landsat TM Image Data Classification of Land Cover by Fuzzy C-Mean, Proc. of the Int. Conf. on Opto-electronics and Laser Applications ICOLA'02, pp. D36-D39, October 2-3, 2002, Jakarta, Indonesia. (ISSN: 9798575-03-2)
[7] Sangthongpraow, U., Thitimajshima, P., and Rangsangseri, Y., Modified Fuzzy C-Means for Satellite Image Segmentation, GISdevelopment.net, 1999.
[8] Enbutu, I. et al., Integration of multi-AI paradigms for intelligent operation support systems: Fuzzy rule extraction from a neural network, Water Science and Technology, 1994, Vol. 28, No. 11-12, pp. 333-340.
[9] Nauck, U., Kruse, R., Design and implementation of a neuro-fuzzy data analysis tool in Java, Diploma Thesis, Braunschweig, 1999.
[10] Simpson, J.J. and Keller, R.H., An Improved Fuzzy Logic Segmentation of Sea Ice, Clouds, and Ocean in Remotely Sensed Arctic Imagery, Remote Sens. Environ., 1995, Vol. 54, pp. 290-312.
Paper
Saturday, August 8, 2009
16:00 - 16:20
Room L-212
A THREE PHASE SA–BASED HEURISTIC FOR SOLVING
A UNIVERSITY EXAM TIMETABLING PROBLEM
Mauritsius Tuga
Department of Informatics Engineering, Universitas Katolik Widya Mandira, Kupang
Email: [email protected]
Abstract
As a combinatorial problem, the university examination timetabling problem is known to be NP-complete, and is defined as the assignment of a set of exams to resources (timeslots and rooms) subject to a set of constraints. The set of constraints can be categorized into two types: hard and soft. Hard constraints are those constraints that must compulsorily be fulfilled. Soft constraints are non-compulsory requirements: even though they can be violated, the objective is to minimize the number of such violations. The focus of this paper is on the optimization problem, where the objective is to find a feasible solution, i.e. a solution without hard constraint violations, with minimum soft constraint violations. Some heuristics based on simulated annealing (SA) are developed using three neighborhood structures to tackle the problem. All heuristics contain three phases: first a feasible solution is sought using a constructive heuristic, followed by the implementation of SA heuristics using a single neighborhood. In Phase 2, a hybrid SA is used to further minimize the soft constraint violations. In Phase 3 the hybrid SA is run again several times using different random seeds to proceed on the solution provided by Phase 1. The heuristics are tested on the instances found in the literature and the results are compared with several other authors. In most cases the performance of the heuristics is comparable to the current best results, and they can even improve the state-of-the-art results in many instances.
Keywords: Computational Intelligence, Simulated Annealing, Heuristic/Metaheuristic, Algorithm, University Examination Timetabling, Timetabling/Scheduling, Combinatorial Optimization
1. INTRODUCTION
The goal of an Examination Timetabling Problem (ExTP) is the assignment of exams to timeslots and rooms, and the main requirement is that neither students nor invigilators should be assigned to more than one room at the same time. Among the most representative variants of ExTPs, the uncapacitated and capacitated ones must be cited. In the uncapacitated version, the number of students and exams in any time slot is unlimited. Meanwhile, the capacitated version imposes limitations on the number of students assigned to every timeslot. Also, another point worthy of mention is that the uncapacitated version of examination timetabling is divided into two problems, i.e. uncapacitated with and without cost. The uncapacitated-without-cost problem can be transformed into a Graph Coloring Problem and has been addressed by the authors in [16]. In this paper the uncapacitated-with-cost problem will be addressed. A more complete list of examination timetabling variants can be found in [12].
2. Problem Formulation and Solution Representation
Any instance of this problem will contain a set of events or exams, a set of resources and a set of constraints. For an instance I, let V = {v1, v2, ..., vn} be the set of events (exams) that have to be scheduled. Let G = (V, E) be a graph whose vertices are the set V, and {vi, vj} ∈ E(G) if and only if events vi and vj are in conflict. The graph G is called the conflict graph of I.
The set of rooms is denoted by R, the set of timeslots is denoted by W, and let B = R x W be the Cartesian product over the set of rooms and the set of time slots. The set B is the resource set of the instance. In our implementation, each member of B is represented by a unique integer, in such a way that it is easy to recover which room and time slot a resource belongs to.
For each event vi, there is a specific set Di ⊆ B which contains all candidate resources that could be used by event vi; in general, Di ≠ Dj for i ≠ j. Let the domain matrix D be an adaptive matrix containing |V| rows, where each row is made of the set Di. This matrix will be used to control the hard constraints. The soft constraints will be represented as an objective function, as described in Section 5.
The problem is to assign each event to a resource from its domain while minimizing the objective function. A solution or timetable is represented by a one-dimensional array L, where L(i) = b means that event or exam i is assigned to resource b.
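For example, a resource b of B = R x W could be encoded and decoded as follows (a small illustrative sketch; the encoding order is an assumption):

def resource_index(room, timeslot, n_timeslots):
    """Encode a (room, timeslot) pair from B = R x W as a single integer b."""
    return room * n_timeslots + timeslot

def resource_pair(b, n_timeslots):
    """Recover which room and timeslot the resource b belongs to."""
    return divmod(b, n_timeslots)        # (room, timeslot)

# With 12 timeslots, room 3 / timeslot 5 maps to b = 41 and back.
b = resource_index(3, 5, 12)
assert resource_pair(b, 12) == (3, 5)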
3. Techniques applied to Examination
Timetabling Problems
There are a significant number of researchers focusing on this problem. In fact, this problem has been attracting the highest number of researchers in the timetabling research area [12]. Some approaches that produced the best solution(s) for at least one of the instances we are using, according to the survey in [12], will be discussed next.
Merlot et al. [11] used a three-stage approach: constraint programming to generate initial solutions, followed by Simulated Annealing and Hill Climbing.
Yang et al. [17] applied a Case Base Reasoning (CBR)
strategy to solve the problem. To minimize the soft
constraint violations, they used the so called “Great
Deluge” algorithm which is another metaheuristic very
similar to SA. Abdullah et al. [2] implemented a local search
based heuristic using a large neighborhood structure
called Ahuja-Orlin’s neighborhood structure. The
“Exponential Monte Carlo” acceptance criterion was used
to deal with non-improving solutions. An application of
heuristics based on multi-neighborhood structures was
used by Burke et al. [4]. In this strategy, they applied local
searches using several neighborhood structures.
4. SA-based Heuristics
In this work the timetabling process is carried out in three
phases.
4.1 Phase 1. Constructive Heuristic (CH) + Single SA.
A Constructive Heuristic (CH) that is similar to the one in
[15,16] is used to find initial feasible solutions.
The feasible solutions found by this heuristic will be
further processed by a SA-based method to minimize the
number of soft constraint violations. Many kinds of SA
using the following three neighborhood structures are
tested.
1. Simple neighborhood: This neighborhood contains
solutions that can be obtained by simply changing the
resource of one event.
2. Swap neighborhood: Under the simple neighborhood,
an exam is randomly chosen and a new resource is allocated
to it. However, this may involve some bias as there might
be events which do not have any valid resources left at
one stage of the search. This might create a disconnected
search space. In the swap neighborhood, the resources
of two events are exchanged, overcoming the
disconnection of the search space that might occur in the
simple neighborhood.
3. Kempe chain neighborhood operates over two selected
timeslots and was used in [4,11,14] to tackle examination
timetabling problems. It swaps the timeslot of a subset of
events in such a way that the feasibility is maintained.
Figure 1. A bipartite graph induced by the events assigned to Timeslot T1 and T2 before moving Event 3 to Timeslot T2
To illustrate the idea, assume that a solution
of an examination timetabling problem assigns Event
1,3,5,7,9 to Timeslot T1 and Event 2,4,6,8,10 to Timeslot
T2 (Figure 1). The lines connecting two events indicate
the corresponding events are in conflict. By considering
events as vertices and the lines as edges, this assignment
induces a bipartite graph. A Kempe chain is actually a
connected subgraph of the graph. If we choose for
example Event 3 in T1 to move to T2, then to keep the
feasibility, all events in the chain containing Event 3 have
to be reassigned. In this case, Event 2 and 8 have to be
moved to Timeslot T1 and Event 7 has to be moved to T2.
After the move is made, another feasible assignment is
obtained (Figure 2).
Figure 2. A bipartite graph induced by the events assigned in Timeslot T1 and T2 after moving Event 3 in Figure 1 to Timeslot T2
This example shows how a Kempe chain move can be made, triggered by Event 3 in Timeslot T1.
Different new feasible solution may be obtained if we
choose another event as trigger. However, if the chosen
trigger in the last example is Event 7 instead of Event 3, we
will obtain the same chain, and end up with the same new
feasible solution.
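To make the Kempe chain move concrete, the following is a rough Python sketch under simplifying assumptions (it tracks only the timeslot of each event, not the room, and the data-structure names are mine, not the authors'):

from collections import deque

def kempe_chain_move(slot_of, conflicts, trigger, t1, t2):
    """Kempe chain move between timeslots t1 and t2, triggered by one event.

    slot_of  : dict event -> current timeslot
    conflicts: dict event -> set of conflicting events (the conflict graph)
    trigger  : the event in t1 chosen to move to t2
    Returns a new assignment in which every event in the chain containing
    `trigger` has its timeslot swapped between t1 and t2, keeping feasibility.
    """
    in_pair = {e for e, s in slot_of.items() if s in (t1, t2)}
    chain, frontier = {trigger}, deque([trigger])
    while frontier:                       # BFS over the bipartite subgraph induced by t1 and t2
        e = frontier.popleft()
        for nb in conflicts[e]:
            if nb in in_pair and nb not in chain:
                chain.add(nb)
                frontier.append(nb)
    new_slot = dict(slot_of)
    for e in chain:                       # swap the timeslots of every event in the chain
        new_slot[e] = t2 if slot_of[e] == t1 else t1
    return new_slot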
Based on these three neighborhood structures some SA
heuristics are developed, called Simple SA, Swap SA
and Kempe SA each of which uses simple, swap and
Kempe chain neighborhood respectively. This group of
SAs is referred to as single SA as they use only one
neighborhood structure. After conducting some tests it
becomes clear that the SA using the Kempe chain
neighborhood structure is the most suitable SA to be used
in this Phase. The use of the CH followed by Kempe SA
will be referred to as Phase 1 heuristic.
4.2 Phase 2. Hybrid Simulated Annealing
The use of Single SAs and a Hybrid SA called HybridSA
are tested in this Phase. HybridSA is a SA using two
neighborhood structures, embedded by the Kempe Chain
Hill Climbing Heuristics ( KCHeuristics).
The neighborhoods used in the SA part are the simple neighborhood and the swap neighborhood. The pseudocode for HybridSA is presented in Figure 3.
Figure 3. The pseudocode for HybridSA
Based on some tests, the SA process using a Kempe chain neighborhood in the first stage, followed by the HybridSA in the second stage, seems to be the best combination for the problem. The implementation of the Phase 1 heuristic followed by the HybridSA will be called the Phase 2 heuristic.
4.3 Phase 3. Extended HybridSA
The Extended HybridSA is essentially an extension of
Phase 2. In this phase the Hybrid SA in the second phase
is rerun several times fed by the same good solution
provided by the Phase 1 heuristic. That is, after running
Phase 1 and Phase 2 several times, the solution found by
the Phase 1 heuristic which gives the best solution in
Phase 2 will be recorded. Subsequently, the HybridSA is
rerun 10 times using the recorded solution with different
random seeds. This will be referred to as
Phase 3 heuristic.
The reason behind this experiment is that the solutions
produced by the Kempe SA in the first phase must be
good solutions representing a promising area.
This area should thus be exploited more effectively to
find other good solutions. We assume that this can be
done by the HybridSA procedure effectively.
4.4 Cooling schedule
The cooling schedule used in all scenarios of the SA
heuristics are very sensitive. Despite many authors have
investigated this aspect it turned out that none of their
recommendations was suitable for the problem at hand.
Some preliminary tests then had to be carried out to tune
in the cooling schedule.
1. Initial temperature:
is the average increase in cost and is the acceptance ratio,
defined as the number of accepted nonimproving solutions
divided by the number of attempted non-improving
solutions. The acceptance ratio is chosen at random within
the interval [0.4, 0.8] . A random walk is used to obtain
the .
2. Cooling equation: Many cooling equations are tested
and found out that the best one is the following.
The value for _ is chosen within the interval [0.001,
0,005].
3. Number of trials: The number of trials in each temperature
level is set to a|V| where a is linearly increased. Initially, a
is initially set to 10.
As in [16] there is a problem of determining the temperature
when a solution is passed to a Simulated Annealing
heuristic. The problem is to set a suitable temperature so
the annealing process can continue to produce other good
solutions. We have developed an alternative temperature
estimator based on the data obtained from running the
single SA heuristic using simple neighborhood structures
for each instance. We found that using a quadratic
function that relates temperature and cost seems to be
the most suitable one.
This is empirically justified based on observations
gathered from the tests.
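The overall SA loop can be sketched as follows; the cooling constants and the cooling equation shown here are illustrative assumptions, not the tuned values or the exact equation used in this work:

import math
import random

def simulated_annealing(initial, cost, neighbour, t0, alpha=0.003,
                        trials_per_level=100, max_levels=500):
    """Generic SA skeleton: accept worse moves with the Boltzmann probability."""
    current, best = initial, initial
    temperature = t0
    for _ in range(max_levels):
        for _ in range(trials_per_level):
            candidate = neighbour(current)
            delta = cost(candidate) - cost(current)
            if delta <= 0 or random.random() < math.exp(-delta / temperature):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        temperature *= 1.0 - alpha        # one possible cooling equation (an assumption)
    return best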
5. Computational Experiments
5.1 The Instances
The instances used for the examination timetabling
problem derive from real problems taken from institutions
worldwide and can be downloaded from http://
www.cs.nott.ac.uk/~rxq/data.htm. These instances were
introduced by Carter et al. [8] in 1996 and in turn were
taken from eight Canadian academic institutions; from the
King Fahd University of Petroleum and Minerals in
Dhahran; and from Purdue University, Indiana, USA. The
number of exams ranges from 81 to 2,419 and the number of students to be scheduled ranges from 611 to 30,032. The number of timeslots varies from instance to instance and is part of the original requirements posed by the university.
In this paper we address the uncapacitated-with-cost problem, where the objective is to find a feasible solution with the lowest cost. In this problem, the hard constraint is that there is no student conflict: any number of events can be assigned to a timeslot as long as there is no student clash. The soft constraint states that students should not sit exams scheduled too close to one another, i.e. the conflicting exams have to be spread out as far as possible. The objective function for this problem is known as the proximity cost and was posed by Carter et al. [8]. Let S be the set of students and Vi be the set of exams taken by student i, and let vij denote the j-th exam of student i. Given an exam timetable L, the objective is the proximity cost computed over all pairs of exams taken by the same student, as sketched below.
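As a sketch, the usual Carter et al. proximity cost can be computed as follows (the weights 16, 8, 4, 2, 1 for slot distances 1 to 5 and the per-student normalization follow the common benchmark convention and are assumed here):

def proximity_cost(timeslot_of, exams_of_student):
    """Proximity cost of a timetable in the Carter et al. benchmark convention.

    timeslot_of      : dict exam -> timeslot index given by the timetable L
    exams_of_student : dict student -> list of exams V_i taken by that student
    Two exams taken by the same student and scheduled s slots apart (1 <= s <= 5)
    contribute a weight of 2**(5 - s); the total is divided by the number of students.
    """
    weights = {1: 16, 2: 8, 3: 4, 4: 2, 5: 1}
    total = 0
    for exams in exams_of_student.values():
        for i in range(len(exams)):
            for j in range(i + 1, len(exams)):
                gap = abs(timeslot_of[exams[i]] - timeslot_of[exams[j]])
                total += weights.get(gap, 0)
    return total / len(exams_of_student)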
5.2 Computational Results and Analysis
All tests were run on a PC Pentium IV 3.2 GHz running under Linux. In this problem the execution time for the tests was relaxed: we used the number of iterations in which no improvement could be found as the stopping criterion. However, to avoid an extremely long execution, for each instance we set the maximum time length at 10,000 seconds.
Experiments conducted by other authors reportedly spent
more time than we used for each instance. Abdullah et al.
[2] reported that their approaches were run overnight for
each instance.
The CH process is able to generate the feasible initial
solutions for all instances using the given number of
timeslots within just a few seconds. The three heuristics
are effective in tackling the problem.
The use of HybridSA in Phase 2 significantly improves the solutions found in the first phase in all instances. However, to reach those costs, the CPU time consumed is almost two times larger than that of Phase 1. The CPU time for these two phases varies considerably depending on the instance, from as fast as 152 seconds up to 10,000 seconds.
Instance  Burke et al.[18]  Merlot et al.[11]  Burke et al.[7]  Burke et al.[6]  Asmuni et al.[3]  Kendall et al.[10]
Car91     4.65              5.1                5.0              4.8              5.29              5.37
Car92     4.1               4.3                4.3              4.2              4.56              4.67
Ear83     37.05             35.1               36.2             35.4             37.02             40.18
Hec92     11.54             10.6               11.6             10.8             11.78             11.86
Kfu93     13.9              13.5               15.0             13.7             15.81             15.84
Lse91     10.82             10.5               11.0             10.4             12.09
Pur93     -                 -                  -                -                -
Rye92     -                 -                  -                8.9              10.35
Sta83     168.93            157.3              161.9            159.1            160.42            157.38
Tre92     8.35              8.4                8.4              8.3              8.67              8.39
Uta92     3.2               3.5                3.4              3.4              3.57
Ute92     25.83             25.1               27.4             25.7             27.78             27.6
Yor83     37.28             37.4               40.8             36.7             40.66

Instance  Yang et al.[17]  Abdullah et al.[2]  Burke et al.[5]  Burke et al.[4]  HybridSA  Ext.HybridSA
Car91     4.5              5.2                 5.36             4.6              4.49      4.42
Car92     3.93             4.4                 4.53             4.0              3.92      3.88
Ear83     33.7             34.9                37.92            32.8             32.78     32.77
Hec92     10.83            10.3                12.25            10.0             10.47     10.09
Kfu93     13.82            13.5                15.2             13.0             13.01     13.01
Lse91     10.35            10.2                11.33            10.0             10.15     10.00
Pur93     -                -                   -                -                4.71      4.64
Rye92     8.53             8.7                 -                -                8.16      8.15
Sta83     158.35           159.2               158.19           159.9            157.03    157.03
Tre92     7.92             8.4                 8.92             7.9              7.75      7.74
Uta92     3.14             3.6                 3.88             3.2              3.17      3.17
Ute92     25.39            26.0                28.01            24.8             24.81     24.78
Yor83     36.35            36.2                41.37            37.28            34.86     34.85

Table 1. Comparison between the normalized cost obtained by the SA heuristics and some current results on the Examination Uncapacitated with Cost Problem. The values marked in bold italic indicate the best cost for the corresponding instance.
Also, it is to be noted that the Extended HybridSA heuristic
in Phase 3 gives another contribution to tackle the problem,
as it can still improve the solution found in Phase 2.
Unfortunately due to the limitation on the report length
no table can be presented to compare the results gained
from each phase. However, in Table 1 we present our best
results alongside results found in the literature. The data
for the following table was taken from the survey
conducted by Qu et al. [12]. Note that there are actually
more reports in this same problem presented in [12].
However, as pointed out by Qu et al. [12], some of the
reports might have used different versions of the data
set, or even addressed a different problem. We therefore
do not include those reports in the comparison shown in
Table 1.
Table 1 shows that compared to the other methods, the
three-phase SA is very robust in handling this problem. It
is able to produce the best solution found so far and
sometimes improves the quality of the solutions already
found in many instances.
It only fails to improve the result for the instance Uta92, by a very small difference from the current best cost. Note that, due to the different application of the rounding system, we do not know the exact relative position of the achievement on the instances Hec92, Kfu93 and Lse91 compared to those found by Burke et al. [4]. It is also worth mentioning that, even without the Extended HybridSA, the HybridSA heuristic itself can improve the current state-of-the-art results in many instances (Table 1).
6. CONCLUSION
With this data set the HybridSA in collaboration with the
Phase 1 heuristic performed well and the results were
comparable to other methods. However, this approach was
not robust enough to handle the problem.
The extended HybridSA seemed to be a better choice for
this kind of problem. It could match the best cost found
so far in the literature, and even improve the quality of the
solutions for many instances.
7. REFERENCES
[1] S. Abdullah, E.K. Burke, B. McCollum, 2005, An
Investigation of Variable Neighborhood Search for
University Course Timetabling, in Proceedings
of Mista 2005: The 2nd Multidisciplinary
Conference on Scheduling: Theory and
Applications, New York, pp.413-427.
[2] S. Abdullah, S. Ahmadi, E.K. Burke, M. Dror,(2007),
Investigating Ahuja-Orlin’s Large Neighborhood
Search Approach for Examination Timetabling,
Operations Research Spectrum 29, pp. 351-372.
[3] H. Asmuni, E.K. Burke, J. Garibaldi and B. McCollum,
in: E. Burke and M. Trick (Eds.): PATAT
2004,Lecture Notes in Computer Science, 3616,
Springer-Verlag Berlin Heidelberg 2005, (2005) pp.
334-353.
[4] E.K. Burke, A.J. Eckersley, B. McCollum, S. Petrovic, R.
Qu, ,(2006).Hybrid Variable Neighborhood
Approaches to University Exam Timetabling,
Technical Report NOTTCS-TR-2006-2, School of
CSiT University of Nottingham.
[5] E.K. Burke, B. McCollum, A. Meisels, S. Petrovic, R. Qu,
A Graph-Based Hyperheuristic for Educational
Timetabling Problems, European Journal of
Operational
Research 176(2007) pp.177-192.
[6] E.K. Burke, Y. Bykov, J. Newall, S. Petrovic, A Time-Predefined Local Search Approach to Exam
Timetabling Problems. IIE Transactions 36 6(2004)
pp. 509-528.
[7] E.K. Burke, J.P. Newall, (2004) Solving Examination
Timetabling Problems through Adaption of
Heuristic Orderings, Annals of Operational
Research 129 pp. 107-134.
[8] M.W. Carter, G. Laporte, S.Y. Lee, (1996) Examination
Timetabling: Algorithmic Strategies and
Applications,Journal of Operational Research
Society ,47 pp.373-383.
[9] S. Even, A. Itai, A. Shamir, (1976) On the Complexity of
Timetabling and Multicommodity Flow Problems,
Siam Journal on Computing 5(4) pp. 691-703.
[10] G. Kendall , N.M. Hussin, An Investigation of a Tabu
Search based Hyperheuristic for Examination
Timetabling, In: Kendall G., Burke E., Petrovic S.
(eds.), Selected papers from Multidisciplinary
Scheduling; Theory and Applications, (2005) pp.
309- 328.
[11] L.T.G. Merlot, N. Boland, B.D. Hughes, P.J. Stuckey,
(2001) A Hybrid Algorithm for the Examination
Timetabling Problems, in: E. Burke and W. Erben
(Eds.): PATAT 2000,Lecture Notes in Computer
Science, 2079, Springer-Verlag Berlin Heidelberg
2001, pp 322-341.
[12] R. Qu, E.K. Burke, B. McCollum, (2006).A Survey of
Search Methodologies and Automated
Approaches for Examination Timetabling,
Computer Science Technical Report No. NOTTCSTR-2006-4
[13] J. Thompson and K. Dowsland. General colling
schedules for a simulated annealing based
timetabling system. In E. Burke and P. Ross,
editors, Proceedings of PATAT’95, volume 1153
of Lecture Notes in Computer Science, pages 345–
363. Springer-Verlag, Berlin, 1995
[14] J.M. Thompson, K.A. Dowsland, (1998), A Robust Simulated Annealing Based Examination Timetabling System, Computers & Operations Research 25(7/8), pp. 637-648.
[15] M. Tuga, R. Berretta, A. Mendes, (2007), A Hybrid Simulated Annealing with Kempe Chain Neighborhood for the University Timetabling Problem, Proc. 6th IEEE/ACIS International Conference on Computer and Information Science, 11-13 July 2007, Melbourne, Australia, pp. 400-405.
[16] M. Tuga, Iterative Simulated Annealing Heuristics for Minimizing the Timeslots Used in a University Examination Problem, submitted to IIS09, Yogyakarta, 2009.
[17] Y. Yang, S. Petrovic, (2004), A Novel Similarity Measure for Heuristic Selection in Examination Timetabling, in: E.K. Burke, M. Trick (Eds.), Selected Papers from the 5th International Conference on the Practice and Theory of Automated Timetabling III, Lecture Notes in Computer Science 3616, Springer-Verlag, pp. 377-396.
[18] E.K. Burke, J.P. Newall, Enhancing Timetable Solutions with Local Search Methods, in: E. Burke and P. De Causmaecker (Eds.), PATAT 2002, Lecture Notes in Computer Science 2740, Springer-Verlag, (2003), pp. 195-206.
Paper
Saturday, August 8, 2009
16:25 - 16:45
Room L-212
QoS Mechanism With Probability for IPv6-Based IPTV Network
Primantara H.S, Armanda C.C, Rahmat Budiarto
School of Computer Sciences, Universiti Sains Malaysia, Penang, Malaysia
[email protected], [email protected], [email protected],
Tri Kuntoro P.
School of Computer Science, Gajah Mada University, Yogyakarta, Indonesia
[email protected]
Abstract
The convergence of IP and television networks, known as IPTV, is gaining popularity. Unfortunately, today's IPTV has limitations: it typically runs over dedicated private IPv4 networks and mostly does not consider quality of service. With the availability of higher-speed Internet access and the deployment of the IPv6 protocol with its advanced features, IPTV will become broadly accessible with better quality. IPv6 supports quality of service through its traffic class and flow label attributes. Existing implementations of these IPv6 attributes only differentiate multicast multimedia streams from non-multicast traffic, or provide the same quality of service for a single multicast stream along its delivery path regardless of the number of subscribers. A problem arises when multiple multicast streams are sent over links of limited bandwidth and the numbers of subscribers behind the routers differ. A priority-based quality of service for multiple multicast streams is therefore needed. This paper proposes a QoS mechanism to overcome the problem. The proposed mechanism consists of a QoS structure, carried in an IPv6 QoS extension header generated by the IPTV provider, and a QoS algorithm executed in the routers. Using a 70% criteria level and five mathematical function models for the number of subscribers, our experiments show that the proposed mechanism works well with acceptable throughput.
Index Terms— IPTV, IPv6, Multiple Multicast Streams, QoS Mechanism
I. INTRODUCTION
The convergence of two prominent network technologies, the Internet and television, known as Internet Protocol Television (IPTV), has gained popularity in recent years; in July 2008 a Reuters television survey reported that one out of five Americans watched online television [1]. With the availability of higher-speed Internet connections, IPTV is well positioned to deliver better quality. IPTV provides digital television programs that are distributed to subscribers over the Internet. Unlike a conventional television network, IPTV allows subscribers to interactively select the television programs offered by an IPTV provider [2]. Subscribers can view the programs either on a computer or on a normal television with a set-top box (STB) connected to the Internet.
Ramirez [3] states that there are two types of services offered to IPTV subscribers. First, an IPTV provider offers its content in the same way conventional television does: the IPTV broadcaster streams content continuously over the network, and subscribers may select a channel interactively. These data streams are sent as multicast. The other type is Video on Demand (VOD), in which an IPTV provider offers content to be downloaded partly or entirely until the video data are ready for subscribers to view. These data are sent as unicast.
Currently, IPTV is mostly operated over privately managed IPv4 networks. Such an IPTV deployment does not provide quality of service (QoS) for its network performance and simply uses best effort [2]. Since a privately managed network offers large and very reliable bandwidth for delivering the provider's multicast streams (channels) [4] and subscribers are located relatively close to the IPTV provider [2], network performance is excellent. In the near future, however, IPv4 address space will no longer be available. Moreover, quality of service becomes necessary as IPTV is likely to be deployed over the open public Internet, rather than over a private network, in order to reach more subscribers.
The solution to this problem is the IPv6 protocol. IPv6 not only provides a much larger address space, it also offers additional features such as security, a simpler IP header for faster routing, extension headers, mobility and quality of service (QoS) [5,6]. QoS in IPv6 is provided through the flow label or traffic class attributes of the IPv6 header [5]. In addition, since IPTV operates like standard television, where multiple viewers may watch the same channel (multicast stream) from the same provider, the IPTV stream has to be multicast in order to save bandwidth and to simplify stream delivery. IPv6 is capable of providing such multicast stream deliveries.
In addition to unicast VOD deliveries, an IPTV provider serves multiple channels, each sending a multicast stream. On the other hand, subscribers may view more than one channel, from the same or from different IPTV providers. Therefore, each multicast stream (channel) may have a different number of subscribers. Furthermore, even within a single multicast stream, the number of subscribers under one router may differ from that under another.
The current IPv6 QoS, which employs the flow label or traffic class attributes of the IPv6 header, is suitable for a single multicast stream delivery. An IPTV provider, however, needs to broadcast multiple multicast streams (channels). Given the varying numbers of subscribers joining these streams, the QoS of each multicast stream should be differentiated accordingly. Thus, the current IPv6 QoS cannot be applied to multiple multicast streams, even when Per Hop Behavior (PHB) is used on each router [7,8,9], because every multicast stream would receive the same quality of service at each router, regardless of how many subscribers of that stream exist behind the routers.
The solution is a mechanism that provides QoS for multiple multicast streams while also taking into account the number of subscribers joining each stream. The proposed mechanism consists of a new IPv6 QoS extension header (IPv6 QoS header, for short) as the QoS structure, and a QoS algorithm executed in the routers. The IPv6 QoS header is constructed by the IPTV provider and attached to every data packet of a multicast stream, and the QoS algorithm is run on each router to handle the data packets that carry the IPv6 QoS header.
The main focus of this research is the design of the QoS mechanism and the evaluation of its performance using the NS-3 network simulator with respect to the usual QoS measurements of throughput, delay and jitter [10,11]. To simulate the role of the number of subscribers, five mathematical function models are used.
II. RELATED WORK
A high-level IPTV architecture consists of four main parts: content provider, IPTV service provider, network provider and subscribers [12]. First, a content provider supplies a range of content, such as video and live streams of "traditional" television. Second, an IPTV service provider (IPTV provider, for short) delivers this content to its subscribers. Third, a network provider offers the network infrastructure to reliably deliver packets from the IPTV provider to the subscribers. Finally, subscribers are the users or clients who access the IPTV content. A typical IPTV infrastructure consisting of these four parts is shown in Figure 1.
Figure 1. A Typical IPTV Infrastructure [13]
Several studies on IPTV QoS performance and multicast structure have been conducted. An Italian IPTV provider sends about 83 multicast streams [4] over a very reliable network; each multicast stream in standard video format requires about 3 Mbps of bandwidth [2,4]. Two types of QoS exist: Integrated Services (IntServ), which is end-to-end based, and Differentiated Services (DiffServ), which is per-hop based [14,15]. An interesting study of the size and structure of multicast trees on the Internet was conducted by Dolev et al. [16]. The observed multicast trees were Single Source Multicast trees built as Shortest Path Trees (SPT). By observing about 1000 receivers in a multicast tree, the authors found that most clients were 6 hops away from the root, and that the largest observed distance was about 10 hops.
III. QoS MECHANISM
The proposed QoS mechanism consists of two parts: a QoS structure in the form of an IPv6 QoS extension header, and a QoS algorithm executed in each router.
III.1 IPv6 QoS Extension Header on Multicast Stream Packet
Every data packet of a multicast stream carries the IPv6 QoS header so that it can be processed by the intermediate routers all the way to the IPTV subscribers. The structure of the IPv6 QoS header, and its location within the IPv6 datagram, is shown in Figure 2.
Based on these values, an intermediate router knows how to prioritize the forwarding of an incoming multicast stream.
III.2 QoS Mechanism on Intermediate Router
The QoS mechanism operates as a queuing and scheduling algorithm that enforces a forwarding policy to realize DiffServ. Every link connected to a router has its own independent queuing and scheduling, so an incoming multicast stream may be copied into several queuing and scheduling processes. The queuing and scheduling algorithm is composed of three parts, as follows.
a. Switching and queuing of any incoming stream
This part places the stream into the appropriate queue by reading the QoS value in the IPv6 QoS extension header. The algorithm for switching and queuing is shown in Figure 3.
Figure 3. Switching and Queuing Algorithm
Figure 2. IPv6 QoS extension header
Each IPv6 QoS header, which is derived from the standard IPv6 extension header format, holds a number of QoS value structures. A QoS value defines a network address (64 bits), a netmask (4 bits) and a QoS field (4 bits). The network address is the address of the "next" link connected to the router, and the netmask is the corresponding network mask. The QoS field is the priority level, calculated with the formula in equation (1).
QoS value = ⌊(Ndw / Ntot) × 16⌋    (1)
where:
Ndw : the number of subscribers located "under" the router
Ntot : the total number of subscribers requesting the stream
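For example, if a stream has 100 requesting subscribers in total and 40 of them are located under a given router, the QoS value carried for that router is ⌊(40/100) × 16⌋ = 6; for a router with 90 of the 100 subscribers below it, the value would be ⌊(90/100) × 16⌋ = 14.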
b. Queue
The queue consists of N priority levels. Each level is a queue that can hold incoming multicast streams to be forwarded. The priority levels are based on the QoS value, as shown in Figure 4.
Figure 4. Queue Priority Levels
c. Scheduling
Scheduling selects the queue from which a queued multicast datagram is dequeued and forwarded to the corresponding link. The algorithm is shown in Figure 5, and an illustrative sketch of the three parts follows the figure caption.
Figure 5. Scheduling Algorithm
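As a rough illustration of the three parts above, the following Python sketch (our own illustration, not the authors' NS-3 implementation; the class and function names are ours) classifies incoming packets into priority queues keyed by the QoS value read from the extension header, and dequeues from the highest non-empty level first.

# Sketch of per-link switching, queuing and scheduling (illustrative only)
from collections import deque

NUM_LEVELS = 16  # one queue per QoS priority level

class LinkScheduler:
    def __init__(self):
        # index 15 = highest priority, index 0 = lowest
        self.queues = [deque() for _ in range(NUM_LEVELS)]

    def enqueue(self, packet):
        # "switching and queuing": read the QoS value from the
        # IPv6 QoS extension header and place the packet accordingly
        level = min(packet["qos_value"], NUM_LEVELS - 1)
        self.queues[level].append(packet)

    def dequeue(self):
        # "scheduling": forward from the highest non-empty priority queue
        for level in range(NUM_LEVELS - 1, -1, -1):
            if self.queues[level]:
                return self.queues[level].popleft()
        return None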
IV. EXPERIMENTS
The experiments are conducted with the NS-3 network simulator to measure IPTV performance with respect to the QoS measurements (delay, jitter and throughput). Before the experiments, several steps are carried out: configuring the network topology, setting up the QoS mechanism on each router, and configuring the five mathematical function models that represent the numbers of subscribers joining the multicast streams.
IV.1 Network Topology
The network topology for our simulation is configured as in Figure 6. Each link shown in Figure 6 is configured at 150 Mbps, except those for the local area networks (LAN1, LAN2 and LAN3) and L01. Some links are not strictly necessary, as the multicast tree does not contain any loop.
In this simulation, an IPTV multicast stream server generates about 50 multicast streams and 8 unicast flows, representing IPTV channels and VODs respectively. Each multicast stream, and likewise each unicast flow, is generated as a constant bit rate (CBR) source of 3 Mbps. The total bandwidth required to send all traffic (58 × 3 Mbps = 174 Mbps) therefore exceeds the available capacity of the links, so some traffic will not be forwarded by a router.
IV.2 Setting Up the QoS Mechanism on Routers
Each router in this simulation is equipped with the QoS mechanism. The mechanism is composed of 16 QoS priority level queues, or 17 if there is unicast traffic, which is placed in the lowest level. In addition, the criteria level is set to 70%: once 70% of the total queue capacity is occupied, the next incoming packet is placed into a queue priority level chosen probabilistically according to its QoS value (see the sketch below). A 70% criteria level is thus a mix of priority and probability with a tendency towards the priority mechanism, reflecting that priority is considered more important than probability.
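A minimal sketch of how such a criteria-level policy could be realized is shown below, reusing the LinkScheduler sketch from Section III. It is our own illustration, not the authors' NS-3 code, and it reflects only one plausible reading of the probabilistic rule: below the 70% occupancy threshold a packet goes straight to the queue matching its QoS value; above it, admission becomes probabilistic and biased towards higher QoS values.

import random

CRITERIA_LEVEL = 0.70   # 70% occupancy threshold
MAX_QOS = 16

def place_packet(scheduler, packet, total_capacity):
    # occupancy across all priority queues of this link
    occupied = sum(len(q) for q in scheduler.queues)
    if occupied < CRITERIA_LEVEL * total_capacity:
        # below the threshold: pure priority placement
        scheduler.enqueue(packet)
    else:
        # above the threshold: probabilistic placement, biased by QoS value
        # (higher QoS values are more likely to be admitted)
        if random.random() < packet["qos_value"] / MAX_QOS:
            scheduler.enqueue(packet)
        # otherwise the packet is not queued (dropped)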
IV.3 Models of the Numbers of Subscribers
To ease the evaluation of the QoS measurements, the numbers of subscribers of the multicast streams are modeled with five mathematical function models. Each model defines how the numbers of subscribers, represented by the QoS priority levels, are related to the number of multicast streams. For example, the constant model assigns the same number of multicast streams to each QoS priority level; with three multicast streams per level, the 16 QoS priority levels account for a total of 48 multicast streams.
Table 1. Five Mathematical Function Models
V. RESULTS AND DISCUSSION
The result of the experiments is shown in Figure 7, which presents the tracing file; the highlighted part shows the content of a simulated multicast stream data packet.
Figure 6. Network Topology
Figure 7. Tracing File of Simulated Network
The other results are based on the QoS measurements at a node in the nearest network receiving multiple multicast streams and at a node in the same network receiving unicast streams. The results are shown in Figures 8 to 10.
Figure 9. Average Jitter of a Node Receiving Multicast
Streams and a Node Receiving Unicast Traffic
Figure 8. Average Delay of a Node Receiving Multicast
Streams and a Node Receiving Unicast Traffic
Figure 10. Throughput of a Node Receiving Multicast
Streams and a Node Receiving Unicast Traffic
The most important result is throughput, because it reflects the reliability of the network. Delay and jitter do not disrupt the network considerably and can be compensated by providing more buffering at the subscribers' nodes.
Throughputs of the multicast streams are above 55%, and throughputs of the unicast traffic are about 35% to 60%. Average delays of the multicast streams depend on the mathematical function model, whereas average delays of the unicast traffic are almost the same for all flows. The average jitter for both types of traffic is relatively low, less than 50 µs.
VI. CONCLUSION
The proposed QoS mechanism works well, as expected. Based on the experiments, with a 70% criteria level and five mathematical function models for the subscribers, all types of traffic can be successfully forwarded with throughputs of about 35% to 74%. However, the throughputs of the unicast traffic are lower than those of the multicast streams, because the unicast traffic is placed in the lowest queue priority level.
References
[1] Reuters, 2008, "Fifth of TV viewers watching online: survey", 29 July 2008, [online], www.reuters.com/article/internetNews/idUSN2934335520080729?sp=true
[2] Weber, J., and Newberry, T., 2007, "IPTV Crash Course", New York: McGraw-Hill.
[3] Ramirez, D., 2008, "IPTV Security: Protecting High-Value Digital Contents", John Wiley.
[4] Imran, K., 2007, "Measurements of Multicast Television over IP", Proceedings of the 15th IEEE Workshop on Local and Metropolitan Area Networks.
[5] Hagen, S., 2006, "IPv6 Essentials", O'Reilly.
[6] Zhang, Y., and Li, Z., 2004, "IPv6 Conformance Testing: Theory and Practice", IEEE.
[7] RFC 2597, 1999, "Assured Forwarding PHB Group".
[8] RFC 3140, 2001, "Per Hop Behavior Identification Codes".
[9] RFC 3246, 2002, "An Expedited Forwarding PHB (Per-Hop Behavior)".
[10] Maalaoui, K., Belghith, A., Bonnin, J.M., and Tezeghdanti, M., 2005, "Performance Evaluation of QoS Routing Algorithms", Proceedings of the ACS/IEEE 2005 International Conference on Computer Systems and Applications, January 03-06, 2005, IEEE Computer Society.
[11] McCabe, J. D., 2007, "Network Analysis, Architecture, and Design", 3rd ed., Morgan Kaufmann.
[12] ATIS-0800007, 2007, "ATIS IPTV High Level Architecture", ATIS IIF.
[13] Harte, L., 2007, "IPTV Basics: Technology, Operation, and Services", [online], http://www.althosbooks.com/ipteba1.html
[14] RFC 2210, 1997, "The Use of RSVP with IETF Integrated Services".
[15] RFC 4094, 2005, "Analysis of Existing Quality-of-Service Signaling Protocols".
[16] Dolev, D., Mokryn, O., and Shavitt, Y., 2006, "On Multicast Trees: Structure and Size Estimation", IEEE/ACM Transactions on Networking, Vol. 14, No. 3, June 2006.
Paper
Saturday, August 8, 2009
16:25 - 16:45
Room L-210
Sampling Technique in Control System for Room Temperature Control
Hany Ferdinando, Handy Wicaksono, Darmawan Wangsadiharja
Dept. of Electrical Engineering, Petra Christian University, Surabaya - Indonesia
[email protected]
Abstract
A control system uses a sampling frequency (or sampling time) to set the time interval between two consecutive processing steps. There are at least two alternatives for the sequence of these steps. In the first, the controller reads the present value from the sensor, calculates the control action and then sends the actuation signal. In the other, the controller reads the present value, sends the actuation signal based on the previous calculation and then calculates the present control action. Both techniques have their own advantages and disadvantages. This paper evaluates which alternative is best for a slow-response plant, i.e. a room temperature control system. The plant is controlled by an AVR microcontroller connected to a PC via RS-232 for data acquisition. The experiments show that the order of the processes has no effect for this slow-response plant.
Keywords— sampling, temperature, control
I. Introduction
The sampling frequency sets the time interval between two consecutive actions; for 1 kHz, the time interval is 1 ms. This interval must be shorter than the dynamic behavior of the plant. A good control system must choose an appropriate sampling frequency to obtain good performance, but this is not enough: the order of the actions within one sampling period must also be considered. This paper evaluates the idea in [1]; the goal of the experiment is to implement that idea and compare it with the ordinary ordering.
The sampling time might be disturbed by delays for various reasons. This gives the system sampling jitter, control latency jitter, etc. When the delay equals the sampling time, we have a problem [2].
If the processes inside one sampling period have their own periods, we speak of multirate sampling control [3]. There, each process has its own sampling time, so an acceptable delay can be inserted to compensate the control action. But what if the delay exceeds the limit?
Since the sampling time and the ordering of the processes have a large influence, we have to choose the sampling time appropriately and consider the processes and the possibility of delay in the system. Considering this delay and all processes within one sampling period, [1] proposes another idea to keep the interval between the processes constant.
A. Alternative I
This alternative is shown in figure 1. The controller sends the control action computed for the current situation, which makes the control action relevant to the present value; this is the advantage of the method. Unfortunately, this technique cannot guarantee that the time interval between two control actions is constant, because the controller may receive an interrupt that makes the control-action calculation take longer than normal; this is the disadvantage of this alternative.
B. Alternative II
This alternative uses a slightly different order: the controller reads the present value and sends the previously calculated control action [1]. After sending the previous result, the controller calculates the new control action and keeps the result until the next 'tick'. Figure 2 shows the timing diagram for this alternative, and the sketch after this section illustrates both orderings.
This alternative guarantees that the time interval discussed above is constant for all processes, but the controller applies the previously calculated control action to the current situation. This may look unreasonable, since the current situation is handled with a calculation based on the previous situation.
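A minimal pseudocode sketch of the two orderings is given below. It is our own illustration, not the authors' AVR firmware; read_sensor, pid_step, actuate and wait_for_tick are hypothetical helpers.

# Illustrative sketch of the two sampling alternatives (assumed helpers)
def control_loop_alternative_1(read_sensor, pid_step, actuate, wait_for_tick):
    while True:
        wait_for_tick()            # start of a sampling period
        y = read_sensor()          # read present value
        u = pid_step(y)            # calculate control action
        actuate(u)                 # actuate: the interval to actuation varies
                                   # if pid_step is delayed by interrupts

def control_loop_alternative_2(read_sensor, pid_step, actuate, wait_for_tick):
    u_prev = 0.0                   # control action from the previous period
    while True:
        wait_for_tick()
        y = read_sensor()          # read present value
        actuate(u_prev)            # actuate immediately with the previous result
        u_prev = pid_step(y)       # calculate now, apply at the next tick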
II. Design Of The System
Figure 3b. No interrupt, method 2
The incubator was made of acrylic with dimensions 40×30×30 cm. It uses an LM35 [4] as the temperature sensor and three bulbs controlled via TRIACs [5,6] as heaters. The AVR microcontroller [7] reads the LM35 voltage with its internal ADC and actuates the heaters. The control algorithm used in the AVR is a parallel PID controller tuned with the Ziegler-Nichols method [8]; an illustrative sketch of such a PID step follows. The AVR (programmed with BASCOM AVR [9]) communicates with a PC (programmed with Visual Basic 5) via serial communication for data acquisition. The PC also issues the start command.
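For completeness, a discrete parallel PID step of the kind mentioned above could look like the following sketch. This is our own illustration, not the authors' firmware; the gains kp, ki and kd stand for values obtained from Ziegler-Nichols tuning, which the paper does not list.

# Minimal discrete parallel PID step (illustrative; gains are placeholders)
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # parallel form: the three terms are computed independently and summed
        return self.kp * error + self.ki * self.integral + self.kd * derivative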
Figure 1. Alternative 1
Figures 3a and 3b show that both methods have the same response. Method 1 reaches stability after 11 minutes, while method 2 does so in 10 minutes.
The next experiments involve disturbances, namely opening a window (figures 4a and 4b) and dry ice (figures 5a and 5b).
Figure 4a. No interrupt, method 1 with disturbance (window)
Figure 2. Alternative 2
III. EXPERIMENTAL RESULTS
All experiments start and end at the same temperatures, i.e. 30°C and 32°C, and not all results are presented in this paper.
Figure 3a. No interrupt, method 1
Figure 4b. No interrupt, method 2 with disturbance (window)
The experiments in figures 4a and 4b show that the system can handle the disturbance in 4.1 and 4.35 minutes for methods 1 and 2, respectively.
Figure 5a. No interrupt, method 1 with disturbance (dry
ice)
Both experiments in figures 6a and 6b have the same settling time, i.e. 9 minutes.
Figure 5b. No interrupt, method 2 with disturbance (dry
ice)
time to handle this disturbance, i.e. 7 minutes.
Table 1. Summary of figure 3 to 5
Figure 7b. Interrupt type 1, method 2 with disturbance (window)
The experiments in figures 7a and 7b need 3.8 and 6 minutes to handle this disturbance for methods 1 and 2, respectively.
Next are experiments with a high-priority Interrupt Service Routine (ISR). It is simulated with a random delay whose value does not exceed a certain limit, so that all processes still fit within one sampling period; this will be called interrupt type 1.
Figure 8a. Interrupt type 1, method 1 with disturbance (dry
ice)
Figure 6a. Interrupt type 1, method 1
Figure 8b. Interrupt type 1, method 2 with disturbance (dry ice)
The experiments in figures 8a and 8b need 4 and 5.7 minutes to handle this disturbance for methods 1 and 2, respectively.
Table 2. Summary of figure 6 to 8
Figure 6b. Interrupt type 1, method 2
Now the random delay is set to more than one sampling period. This kind of interrupt will be called interrupt type 2.
Figure 7a. Interrupt type 1, method 1 with disturbance (window)
Figure 9a. Interrupt type 2, method 1
The settling times for the experiments in figures 9a and 9b are 10 and 6 minutes for methods 1 and 2, respectively.
Figure 11a. Interrupt type 2, method 1 with disturbance
(dry ice)
Figure 9b. Interrupt type 2, method 2
Figure 11b. Interrupt type 2, method 2 with disturbance
(dry ice)
The experiments in figures 10a and 10b need 5.1 and 5.4 minutes to handle this disturbance for methods 1 and 2, respectively.
handle this disturbance.
Table 3. Summary of figure 9 to 11
IV. Discussion
Figure 10a. Interrupt type 2, method 1 with disturbance
(window)
Table 1 summarizes the experiments without interrupts; the responses are almost the same. These results make sense because the plant is categorized as a slow-response plant, so the order of the processing and actuating steps has no effect. Besides, the sampling time is small enough compared to the dynamic behavior of the plant.
The experiments involving the simulated ISR (random delay) showed almost the same results. The disturbance created by opening the window made the experiments give different results; this is due to the homogeneous air condition inside the chamber. If the authors hoped for a different result, then Table 2 is not satisfying: the system never loses its sampling time and its performance remains good.
Figure 10b. Interrupt type 2, method 2 with disturbance
(window)
V. Conclusions
From the experiments the authors conclude that the order of the processes within one sampling period has no effect for this plant, and the same is expected for other slow-response plants. The sampling time used in this project is small enough compared to the dynamic behavior of the plant. It would be interesting to use a larger sampling time, because the second method uses the actuation signal from the previous sampling period.
With this result, it would be interesting to repeat the experiment with a fast-response plant such as motor speed control. The main point is to evaluate the performance of the system using the second method.
The disturbance signals, i.e. the window and the dry ice, give different results. The authors recommend dry ice, since it is more stable than opening the window. In order to obtain a faster response for the heating process, it is advisable to replace the bulbs with a heater.
References
[1] Amerongen, J., T. J. A. De Vries. Digital Control Engineering. Faculty EE-Math-CS, Department of Electrical Engineering, Control Engineering Research Group, University of Twente. 2003.
[2] Gambier, A. "Real-time Control System: A Tutorial". Proceedings of the 5th Asian Control Conference. Australia, 2004.
[3] Fujimoto, H. and Y. Hori. "Advanced Digital Motion Control Based on Multirate Sampling Control". Proceedings of the 15th Triennial World Congress of the International Federation of Automatic Control (IFAC), Barcelona, Spain, 2002.
[4] National Semiconductor. LM35 Datasheet. November 2000. Accessed on March 17, 2006. <http://www.alldatasheet.com/pdf/8866/NSC/LM35.html>
[5] Adinegara Astra, Arief. "Kendali Rumah Jarak Jauh Memanfaatkan Radio HT" (Remote Home Control Using HT Radio). Undergraduate final project. Surabaya: Universitas Kristen Petra, 2004.
[6] Fairchild Semiconductor. MOC3020 Datasheet. Accessed on March 18, 2006. <http://www.alldatasheet.com/pdf/27235/TI/MOC3020.html>
[7] Atmel Corporation. ATMega8 Datasheet. October 2004. Accessed on March 15, 2006. <http://www.alldatasheet.com/pdf/80258/ATMEL/ATmega8-16PI.html>
[8] Ogata, K. Modern Control Engineering, 4th ed. Upper Saddle River, NJ, 2002.
[9] MCS Electronics. BASCOM v1.11.7.8 User Manual. 2005. Accessed on April 25, 2006. <http://avrhelp.mcselec.com/bascom-avr.html>
Paper
Saturday, August 8, 2009
15:10 - 15:30
Room L-212
Multivariate Statistics Using Auto Generated Probabilistic Combination in Forming the Study Plan Schedule
Suryo Guritno
Untung Rahardja
Hidayati
Computer Science Department
Universitas Gadjah Mada
Indonesia
[email protected]
STMIK RAHARJA
Raharja Enrichment Centre (REC)
Tangerang - Banten, Republic of Indonesia
[email protected]
STMIK RAHARJA
Raharja Enrichment Centre (REC)
Tangerang - Banten, Republic of Indonesia
[email protected]
Abstract
In support of the continuing evolution of information technology, multivariate statistics has always been one of the fundamental platforms on which the principal foundations of the field are laid. The purpose of implementing information systems in organizations is, in general, to help improve performance and to serve customers more effectively and efficiently. Universities, for example, use an information system to form the Study Plan Schedule (JRS) of every student each semester, in order to avoid errors and data redundancy. Unexpectedly, however, problems related to room and time conflicts occur everywhere. To anticipate this, an earlier experiment applied the concept of permutation combination, an application of the multivariate statistics branch of science in IT. The concept helped reduce the number of conflicts by more than 50%. However, it is not effective enough and does not match the actual needs, since it cannot detect in advance whether the status of a conflict is permanent. The method proposed here is called Auto Generated Probabilistic Combination (AGPC). AGPC generates different orderings of objects without repeating an object within an ordering, where the ordering criteria are based on the conflict status and the permutation is performed per conflicting item. This article identifies at least four issues regarding the basic permutation combination concept, defines the Auto Generated Probabilistic Combination (AGPC) method as a new means of tackling conflicts, and describes the implementation of AGPC in SIS-OJRS at Raharja University. The AGPC method is important to develop, especially in the JRS process, because it gives an effective warning when the conflict status is permanent and performs the re-permutation process. It can be said that the AGPC method eliminates approximately 80% of the original operational work, since the conflicting schedules no longer need to be cancelled manually.
Index Terms— JRS, Conflicts, Permutation Combination, AGPC
I. Introduction
The preparation of the Study Plan Schedule (JRS) is one of the regular activities carried out every semester in a university environment. It is conducted for all students to determine the courses they will take in that semester. Scheduling is a complex process, since it must assign schedules to students and lecturers while demand is concentrated at particular times. In order to avoid errors and data redundancy, the university set a policy to use an information system to generate the Study Plan Schedule (JRS). This did not run smoothly enough, however, because the data in the system led to a rather complex problem: schedule conflicts. Initially the problem was handled manually by cancelling conflicting schedules, but as time went on it became impossible to keep doing this manually. A concept that can be applied to the conflict problem is known as Permutation Combination. It was developed from the statistical principles of permutation and combination. It works by rearranging the order in which a student's courses are processed, and the class schedule is then determined based on that order. For example, suppose a student takes four courses A, B, C and D. If they are first ordered as ABCD, course A picks a schedule first, and the subsequent courses B, C and D are tied to the schedules chosen in the previous order: course B must find a schedule that does not conflict with course A, and so on for courses C and D. However, the order will still produce a conflict if, for instance, the only class available for course B happens to coincide with the schedule already taken for course A. Basically, the first course in the order has the main priority.
II. Platform Theory
A. JRS
According to the PSMA administration of Gunadarma University, filling in the Study Plan Card (KRS) is an activity undertaken by all active students to determine the courses taken in each semester. According to the SOP script of Bogor Agricultural Institute (IPB), the study plan is the process by which a student determines the educational activities to be carried out in the coming semester; these activities include courses, training, seminars, field practice, community service (KKN), internships and the final project.
· KRS (Study Plan Card) lists the courses a student will take in the coming semester (including the inter-semester period).
· KSM (Student Study Card) contains the list of courses taken in the semester the student is currently running, based on the KRS.
· The SOP script of UIN Sunan Kalijaga also defines the KPRS (Study Plan Change Card), which records changes to the courses selected by the student in the KRS.
B. Permutation
According to the encyclopedia, a permutation is a rearrangement of items into a different ordering. If there is an alphabetic string abcd, then the string can be rewritten in a different order: acbd, dacb, and so on. In total there are 24 ways to write these four letters in orders that differ from one another.
abcd abdc acbd acdb adbc adcb
bacd badc bcad bcda bdac bdca
cabd cadb cbad cbda cdab cdba
dabc dacb dbac dbca dcab dcba
Each new string contains the same elements as the original string abcd, just written in a different order. So each new string with a different order is called a permutation of abcd.
1) How many permutations there can be
To make a permutation of abcd, assume that there are four cards, one for each letter, which we want to arrange. There are 4 empty boxes that we want to fill, one card per box:
Card          Empty Box
—————         ———————
a b c d       [ ][ ][ ][ ]
We then fill each box with a card. Of course, a card that has already been used cannot be used in two places at once. The process is described as follows.
In the first step, we have 4 choices for the card to insert.
Card          Box
—————         ———————
a b c d       [ ][ ][ ][ ]
              ^ 4 choices: a, b, c, d
Now 3 cards remain, so we have 3 remaining choices for the card to insert in the second box.
Card          Box
—————         ———————
a * c d       [b] [ ] [ ] [ ]
              ^ 3 choices: a, c, d
Because two cards have been used, for the third box we have two remaining choices.
Card          Box
—————         ———————
a * c *       [b] [d] [ ] [ ]
              ^ 2 choices: a, c
For the last box, we have only one choice.
Card          Box
—————         ———————
a * * *       [b] [d] [c] [ ]
              ^ 1 choice: a
In the final condition all boxes are filled.
Card          Box
—————         ———————
* * * *       [b] [d] [c] [a]
At each step, the number of options is reduced. So the number of all possible permutations is 4 × 3 × 2 × 1 = 24. With 5 cards, in the same way, there are 5 × 4 × 3 × 2 × 1 = 120 possibilities. Generalizing, the number of permutations of n elements is n!.
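As a quick illustration (our own sketch, not part of the SIS-OJRS code), the enumeration above can be reproduced with Python's standard library:

# Enumerate and count the permutations of "abcd"
from itertools import permutations
from math import factorial

perms = ["".join(p) for p in permutations("abcd")]
print(len(perms))        # 24, i.e. 4!
print(perms[:6])         # ['abcd', 'abdc', 'acbd', 'acdb', 'adbc', 'adcb']
print(factorial(5))      # 120 possibilities for five cards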
III. Problems
The permutation combination method can overcome conflicts to some extent; it has even succeeded in lowering the number of conflicts by more than 50% of the original. After that, however, the result is not optimal: the method is not effective and does not match the actual needs. The permutation combination method only sorts objects into an order different from the previous one. Thus, it merely reorders the courses, without looking at the number of classes opened for each course. Basically, if there are two conflicting courses and the second course has only one open class whose schedule happens to coincide, then no matter how many times the permutation is performed the result will certainly remain a conflict. This is why the permutation combination method is said to be ineffective: it cannot detect beforehand that the conflict status is permanent. Moreover, the permutation combination method may not meet the requirement because the generated course order does not match what it should be. For example, suppose a student takes four courses ABCD and there happens to be a conflict between two of them, courses C and D. If the permutation produces the order BACD, can the conflict status disappear? Most likely not, because the conflict is clearly between courses C and D, while the permutation starts from course B, which is not in conflict and therefore does not need a schedule change. The problem, then, is that this permutation combination method does not meet the requirement because it is not sensitive to the conflict status.
Based on this argument, there are four issues underlying this article, namely:
1. Can the method be used effectively to overcome conflicts?
2. Can the method report in advance that a conflict status is permanent?
3. Can the method be sensitive to the conflict status and permute only the courses that are in conflict?
4. Can the method go beyond permuting the order of the courses and also permute the classes opened for each course?
IV. Troubleshooting
The problems described above can be overcome by applying the Auto Generated Probabilistic Combination (AGPC) method. The following are five characteristics of AGPC as applied to the handling of conflicts in the Study Plan Schedule (JRS); a high-level sketch of this flow is given after the worked example below.
1. The classes available for each conflicting course are collected as input data.
2. If every conflicting course has only one open class, a warning is issued and the permutation is not performed.
3. The permutation is performed over all classes of each conflicting course, followed by a search for classes for the other, non-conflicting courses.
4. If a conflict-free schedule is generated, the permutation is stopped.
5. If all permutations have been tried and every generated schedule still contains conflicts, all schedules are displayed so that one can be selected to replace the old schedule, or the permutation can be repeated.
To handle conflicts effectively, an examination is needed to assess whether the initial conflict status is permanent or not. If it is, a warning is given that the conflict status cannot be removed, so the permutation process need not be performed; this corresponds to points [1] and [2]. Furthermore, AGPC works differently from permutation combination in general: AGPC starts the permutation from the courses that are in conflict, which addresses the first issue above and makes AGPC more effective. The permutation performed by AGPC is not based on the courses but on the classes of each conflicting course; this corresponds to point [3]. Here is an example:
Suppose there is a conflicting schedule consisting of 5 courses: A, B, C, D and E, of which 3 courses are in conflict: A, B and C. After review, course A has 2 open classes, A1 and A2; course B has 1 open class, B1; and course C has 2 open classes, C1 and C2. The number of permutations for the three courses is obtained from the following formula:
P = (classes of course A) × (classes of course B) × (classes of course C)
  = 2 × 1 × 2 = 4
The orderings, also generated by the sketch that follows, are:
1. A1, B1, C1
2. A1, B1, C2
3. A2, B1, C1
4. A2, B1, C2
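These four combinations are simply the Cartesian product of the class options of the conflicting courses; a small sketch of ours, with made-up data matching the example:

# Generate the class combinations for the conflicting courses A, B and C
from itertools import product

class_options = {"A": ["A1", "A2"], "B": ["B1"], "C": ["C1", "C2"]}
combinations = list(product(*class_options.values()))
print(len(combinations))   # 4 = 2 x 1 x 2
for combo in combinations:
    print(combo)           # ('A1', 'B1', 'C1'), ('A1', 'B1', 'C2'), ...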
The process then continues with the determination of classes for the other, non-conflicting courses, adjusted to the study schedules determined earlier. When an ordering produces a schedule that is free of conflicts, the permutation process is stopped, even if not all permutations have been generated, and the system offers the option of making this schedule permanent and replacing the previous one; this explains point [4]. Likewise, if the permutation completes and every generated schedule still contains conflicts, this corresponds to point [5]: the system displays all the generated schedules, and the user is given the option of selecting one of them or running the re-permutation process again.
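The overall AGPC flow described above can be summarized in the following sketch. It is our own pseudocode, assuming the helper named in the comments, and it is not the SIS-OJRS ASP code reproduced later in this paper.

from itertools import product

def agpc(conflicting_courses, other_courses, class_options, find_free_class):
    # class_options[c] lists the open classes of course c;
    # find_free_class(course, chosen) returns a non-conflicting class or None.
    options = [class_options[c] for c in conflicting_courses]
    if all(len(o) == 1 for o in options):
        return None, "warning: conflict is permanent, permutation skipped"
    candidates = []
    for combo in product(*options):           # permute classes of conflicting courses
        chosen = dict(zip(conflicting_courses, combo))
        ok = True
        for course in other_courses:          # then fit the non-conflicting courses
            cls = find_free_class(course, chosen)
            if cls is None:
                ok = False
            chosen[course] = cls
        if ok:
            return chosen, "conflict-free schedule found, permutation stopped"
        candidates.append(chosen)
    return candidates, "all combinations still conflict, user chooses or repeats"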
The Auto Generated Probabilistic Combination (AGPC) method of handling conflicts has been implemented at Raharja University in the SIS OJRS (Online JRS) information system. Student Information Services, commonly abbreviated SIS, is a system developed by Raharja University to provide optimal information services to students. The development of SIS is also part of Raharja University's contribution to computer science and the IT world in particular. SIS has been developed in several versions, each a continuation of the previous one. SIS OJRS (Online Study Plan Schedule) is the fourth version of SIS. As its name suggests, SIS OJRS serves the students' scheduling needs, namely preparing the JRS (Study Plan Schedule). The end results produced by SIS OJRS are the JRS (Study Plan Schedule) and the KST (Study Card Stay) with a minimal number of conflicts, ideally down to 0%. Therefore, to reduce conflicts effectively, the Auto Generated Probabilistic Combination (AGPC) method is applied.
Figure 2. Warning on the AGPC
It is a different case, however, if at least one of the two courses has more than one open class: the permutation still runs. When the permutation process produces a schedule that does not conflict, the permutation is stopped and the resulting ordering of schedules is displayed, as shown next.
Figure 3. AGPC log view when a conflict-free schedule ordering is found
In the figure, only 2 permutation steps are produced, and the second one contains a conflict-free schedule ordering. When the OK button is clicked, that schedule is approved to replace the old schedule in the student's KST. The new KST is displayed after the OK button is executed.
Figure 1. Study Cards Stay (KST) on the SIS OJRS
The picture above is the student's KST view. From this page one can see which classes a student obtained this semester, on what day, in which room, at what time, and the status of each class, i.e. whether it conflicts or not. At the top of the KST there is also a text link "Auto_List_Repair", whose function is to run the AGPC method. If the two conflicting courses listed in the KST, MT103 and PR183, each have only 1 open class, a warning appears stating that the conflict status is permanent, so the conflict cannot be eliminated and the permutation does not run. The warning is shown next.
Figure 4. Study Card Stay (KST) in SIS OJRS after AGPC
There is one more condition in the AGPC method: when the permutation process completes but every generated schedule ordering still contains conflicts, all of those schedules are displayed as well, as shown next.
The table above is the main table storing the initial data required for the permutation. Its fields follow the existing system: the fields Ke, Kode_MK, NIM, No, Kelas and Jml_Kelas describe the data that must be filled in before the permutation process is actually executed. The content of the Jml_Kelas field determines whether the permutation is feasible or not. If it is feasible, the permutation process starts and its results are inserted into the table CT_List2.
Figure 5. AGPC log view when no conflict-free schedule ordering is found
In the "Ket" column shown in the figure above, every schedule is marked as "conflict". When a conflict status is clicked, the corresponding conflicting schedule is approved to replace the old schedule in the student's KST. However, to repeat the AGPC method based on the conflict status of one of the schedules, it is enough to click the "Repeat" button listed under the "conflict" status, and the permutation is then executed again.
A. Database
SIS OJRS at Raharja University is implemented on a SQL Server database. The database server integrates the various databases used by Student Information Services (SIS), Green Orchestra (GO), Raharja Multimedia Edutainment (RME), etc. SIS OJRS uses the same database as Student Information Services (SIS).
In this database, the tables needed for the AGPC process are created. There are two types of tables that must be prepared: CT_List, which contains the class permutation data for the conflicting courses, and CT_List2, which contains the combined permutation data of the classes for both the conflicting and the non-conflicting courses.
Figure 6. Structure of the table CT_List
Figure 7. Structure of the table CT_List2
B. Listing Program
Initial check
'Delete this student's existing data from CT_List and CT_List2
Sql=”Delete from CT_List where NIM=’”&trim(strnim)&”’”
set rs=conn.execute(Sql)
Sql2=”Delete
from
CT_List2
where
NIM=’”&trim(strnim)&”’”
set rs2=conn.execute(Sql2)
'look up the student's data
Sqla=”select * from sumber_mahasiswa where
NIM=’”&trim(strnim)&”’”
set rsa=conn.execute(Sqla)
'Loop over conflicting classes
'select the KST entries that are in conflict
Sql3=”select * from CT_11_7_12_2 where
NIM=’”&trim(strnim)&”’ and Bentrok=1"
set rs3=conn.execute(Sql3)
Nom2=1
while not rs3.eof
'count the classes available for this course code
Sql4=”select COUNT(Kelas) as jum1 FROM (select Kelas
from
CT_11_7_12_2
where
Kode_MK=’”&trim(rs3(“Kode_MK”))&”’
and
Shift=”&rsa(“Shift”)&” and NIM is null GROUP BY
Kelas)DERIVEDTBL”
set rs4=conn.execute(Sql4)
'select the classes for this course code
Sql5=”select Distinct Kode_MK,Kelas FROM
CT_11_7_12_2
where
Kode_MK=’”&trim(rs3(“Kode_MK”))&”’
and
Shift=”&rsa(“Shift”)&” and NIM is null GROUP BY
Kode_MK,Kelas ORDER BY Kode_MK,Kelas”
set rs5=conn.execute(Sql5)
Nom=1
while not rs5.eof
'check whether the data already exists
Sql6=”select
*
from
CT_List
where
Kode_MK=’”&trim(rs5(“Kode_MK”))&”’
and
NIM=’”&trim(strnim)&”’ and Kelas=”&rs5(“Kelas”)&””
set rs6=conn.execute(Sql6)
'if the data does not exist yet
If rs6.eof then
'insert the data
Sql7="Insert into CT_List(Ke,Kode_MK,NIM,No,Kelas,Jml_Kelas) Values("&Nom&",'"&trim(rs5("Kode_MK"))&"','"&trim(strnim)&"',"&Nom2&","&rs5("Kelas")&","&rs4("jum1")&")"
set rs7=conn.execute(Sql7)
Nom=Nom+1
Else
Nom=Nom
end if
rs5.movenext
wend
Nom2=Nom2+1
rs3.movenext
wend
Counting the number of permutations
'number of loop iterations
Sql8=”select distinct Kode_MK,Jml_Kelas,NIM from
CT_List where NIM=’”&trim(strnim)&”’”
set rs8=conn.execute(Sql8)
Nom3=1
Nom3a=1
while not rs8.eof
Nom4=rs8(“Jml_Kelas”)
Nom3=Nom3*Nom4
Nom3a=Nom3a+1
rs8.movenext
wend
Nom3b=Nom3a-1
Warning
'if there is only one possible combination
If Nom3=1 then
<div align="center">
<p><font color="#FF0000" size="3" face="tahoma"><strong>BENTROK TIDAK DAPAT DIHILANGKAN</strong></font></p>
<p><strong><font color="#000000" size="3" face="tahoma">&lt;&lt;— <a href="Tampil_Kelas_Shift4.asp?NIM=<%= strnim %>" target="_self">Back To KST</a></font></strong></p>
</div>
Permutation Run
elseif Nom3 > 1 then
'if there are exactly 2 conflicting courses
if Nom3b=2 then
Nom5=1
'select the classes where No=1
Sql9a=”select distinct * from CT_List where
NIM=’”&trim(strnim)&”’ and No=1 order by Ke”
set rs9a=conn.execute(Sql9a)
while not rs9a.eof
'select the classes where No=2
Sql9b=”select distinct * from CT_List where
NIM=’”&trim(strnim)&”’ and No=2 order by Ke”
set rs9b=conn.execute(Sql9b)
while not rs9b.eof
'insert the course code for No=1
Sql10a="insert into CT_List2(Ke,Kode_MK,Kelas,NIM,No,Jml_Kelas) values("&Nom5&",'"&trim(rs9a("Kode_MK"))&"',"&rs9a("Kelas")&",'"&trim(strnim)&"',"&rs9a("No")&","&rs9a("Jml_Kelas")&")"
set rs10a=conn.execute(Sql10a)
'insert the course code for No=2
Sql10b="insert into CT_List2(Ke,Kode_MK,Kelas,NIM,No,Jml_Kelas) values("&Nom5&",'"&trim(rs9b("Kode_MK"))&"',"&rs9b("Kelas")&",'"&trim(strnim)&"',"&rs9b("No")&","&rs9b("Jml_Kelas")&")"
set rs10b=conn.execute(Sql10b)
'insert the other (non-conflicting) courses
Sql11=”select distinct Kode_MK from CT_11_7_12_2
where NIM=’”&trim(strnim)&”’ and Bentrok=0"
set rs11=conn.execute(Sql11)
Nom6=3
while not rs11.eof
'count the classes opened for this course code that are not yet full
Sql12=”select COUNT(Kelas) as jum2 FROM (select TOP
100 PERCENT Kelas from CT_11_7_12_2 where
(Kode_MK=’”&trim(rs11(“Kode_MK”))&”’ and NIM is
null) GROUP BY Kelas ORDER BY Kelas) DERIVEDTBL”
set rs12=conn.execute(Sql12)
'insert the course code
Sql13=”Insert
into
CT_List2(Ke,Kode_MK,NIM,No,Jml_Kelas)
Values(“&Nom5&”,’”&trim(rs11(“Kode_MK”))&”’,’”&trim(strnim)&”’,”&Nom6&”,”&rs12(“jum2”)&”)”
set rs13=conn.execute(Sql13)
Nom6=Nom6+1
rs11.movenext
wend
'retrieve the KST
'look up the student's data
Sql14=”select * from sumber_mahasiswa where
NIM=’”&trim(strnim)&”’”
set rs14=conn.execute(Sql14)
'get the ordered list
Sql15=”select
*
from
CT_List2
where
NIM=’”&trim(strnim)&”’ and Ke=”&Nom5&” Order By
No”
set rs15=conn.execute(Sql15)
while not rs15.eof
If isnull(rs15(“Kelas”)) then
'search the view for a class that does not conflict
Sql16=”select * from View_CT_11_7_12_2 where NIM is
Null and Kode_MK=’”&trim(rs15(“Kode_MK”))&”’ and
Shift=”&rs14(“Shift”)&” and (Kode_Waktu Not in (select
Kode_Waktu
from
CT_List2
where
NIM=’”&trim(strnim)&”’ and Ke=”&Nom5&” and
Kode_Waktu is not null)) and (Kode_Waktu2 Not in (select
Kode_Waktu
from
CT_List2
where
NIM=’”&trim(strnim)&”’ and Ke=”&Nom5&” and
Kode_Waktu is not null)) and (Kode_Waktu Not in (select
Kode_Waktu2
from
CT_List2
where
NIM=’”&trim(strnim)&”’ and Ke=”&Nom5&” and
Kode_Waktu2 is not null)) and (Kode_Waktu2 Not in (select Kode_Waktu2 from CT_List2 where
NIM=’”&trim(strnim)&”’ and Ke=”&Nom5&” and
Kode_Waktu2 is not null)) order by No”
set rs16=conn.execute(Sql16)
'if a non-conflicting class is found
If not rs16.eof then
'update the class
Sql17="update CT_List2 set Kelas="&rs16("Kelas")&", Kode_Waktu='"&trim(rs16("Kode_Waktu"))&"', Kode_Waktu2='"&trim(rs16("Kode_Waktu2"))&"', Bentrok=0 where NIM='"&trim(rs15("NIM"))&"' and Ke="&rs15("Ke")&" and Kode_MK='"&trim(rs15("Kode_MK"))&"' and No="&rs15("No")&""
set rs17=conn.execute(Sql17)
'if no non-conflicting class is found
elseif rs16.eof then
'take any class
Sql17="select * from View_CT_11_7_12_2 where NIM is Null and Kode_MK='"&trim(rs15("Kode_MK"))&"' and shift="&rs14("Shift")&" order by No"
set rs17=conn.execute(Sql17)
'update the class
Sql18="update CT_List2 set Kelas="&rs17("Kelas")&", Kode_Waktu='"&trim(rs17("Kode_Waktu"))&"', Kode_Waktu2='"&trim(rs17("Kode_Waktu2"))&"', Bentrok=1 where NIM='"&trim(rs15("NIM"))&"' and Ke="&rs15("Ke")&" and Kode_MK='"&trim(rs15("Kode_MK"))&"' and No="&rs15("No")&""
set rs18=conn.execute(Sql18)
end if
else
'search the view for a class that does not conflict
Sql16=”select * from View_CT_11_7_12_2 where NIM is
Null and Kode_MK=’”&trim(rs15(“Kode_MK”))&”’ and
Shift=”&rs14(“Shift”)&” and Kelas=”&rs15(“Kelas”)&”
and (Kode_Waktu Not in (select Kode_Waktu from
CT_List2 where NIM=’”&trim(strnim)&”’ and
Ke=’”&Nom5&”’ and Kode_Waktu is not null)) and
(Kode_Waktu2 Not in (select Kode_Waktu from CT_List2
where NIM=’”&trim(strnim)&”’ and Ke=’”&Nom5&”’ and
Kode_Waktu is not null)) and (Kode_Waktu Not in (select
Kode_Waktu2
from
CT_List2
where
NIM=’”&trim(strnim)&”’ and Ke=’”&Nom5&”’ and
Kode_Waktu2 is not null)) and (Kode_Waktu2 Not in (select Kode_Waktu2 from CT_List2 where
NIM=’”&trim(strnim)&”’ and Ke=’”&Nom5&”’ and
Kode_Waktu2 is not null)) order by No”
set rs16=conn.execute(Sql16)
'if a non-conflicting class is found
If not rs16.eof then
'update the class
Sql17="update CT_List2 set Kelas="&rs16("Kelas")&", Kode_Waktu='"&trim(rs16("Kode_Waktu"))&"', Kode_Waktu2='"&trim(rs16("Kode_Waktu2"))&"', Bentrok=0 where NIM='"&trim(rs15("NIM"))&"' and Ke="&rs15("Ke")&" and Kode_MK='"&trim(rs15("Kode_MK"))&"' and No="&rs15("No")&""
set rs17=conn.execute(Sql17)
'if no non-conflicting class is found
elseif rs16.eof then
Sql16a="select * from View_CT_11_7_12_2 where NIM is Null and Kode_MK='"&trim(rs15("Kode_MK"))&"' and Kelas="&rs15("Kelas")&" order by No"
set rs16a=conn.execute(Sql16a)
'update the class
Sql17="update CT_List2 set Kode_Waktu='"&trim(rs16a("Kode_Waktu"))&"', Kode_Waktu2='"&trim(rs16a("Kode_Waktu2"))&"', Bentrok=1 where NIM='"&trim(rs15("NIM"))&"' and Ke="&rs15("Ke")&" and Kode_MK='"&trim(rs15("Kode_MK"))&"' and No="&rs15("No")&""
set rs17=conn.execute(Sql17)
end if
end if
'count the conflicts
Sql19=”select sum(Bentrok) as jum_bentrok from CT_List2
where
NIM=’”&trim(rs15(“NIM”))&”’
and
Ke=”&rs15(“Ke”)&””
set rs19=conn.execute(sql19)
Ke_brp=rs15(“Ke”)
If rs19(“jum_bentrok”) <> 0 then
Sql20=”select
*
from
CT_List2
where
NIM=’”&trim(rs15(“NIM”))&”’ and Ke=”&rs15(“Ke”)&”
and Bentrok=1"
set rs20=conn.execute(Sql20)
while not rs20.eof
Sql21=”select
*
from
CT_List2
where
NIM=’”&trim(rs20(“NIM”))&”’ and Ke=”&rs20(“Ke”)&”
and Kode_Waktu=’”&trim(rs20(“Kode_Waktu”))&”’ and
Bentrok=0
and
Kode_MK<>’”&trim(rs20(“Kode_MK”))&”’ or
NIM=’”&trim(rs20(“NIM”))&”’ and Ke=”&rs20(“Ke”)&”
and Kode_Waktu2=’”&trim(rs20(“Kode_Waktu2”))&”’
and
Bentrok=0
and
Kode_MK<>’”&trim(rs20(“Kode_MK”))&”’ or
NIM=’”&trim(rs20(“NIM”))&”’ and Ke=”&rs20(“Ke”)&”
and Kode_Waktu=’”&trim(rs20(“Kode_Waktu2”))&”’ and
Bentrok=0
and
Kode_MK<>’”&trim(rs20(“Kode_MK”))&”’ or
NIM=’”&trim(rs20(“NIM”))&”’ and Ke=”&rs20(“Ke”)&”
and Kode_Waktu2=’”&trim(rs20(“Kode_Waktu”))&”’ and
Bentrok=0
and
Kode_MK<>’”&trim(rs20(“Kode_MK”))&”’”
set rs21=conn.execute(Sql21)
while not rs21.eof
Sql22=”update CT_List2 set Bentrok=1 where
NIM=’”&trim(rs21(“NIM”))&”’ and Ke=”&rs21(“Ke”)&”
and Kode_MK=’”&trim(rs21(“Kode_MK”))&”’”
set rs22=conn.execute(Sql22)
rs21.movenext
wend
rs20.movenext
wend
end if
rs15.movenext
wend
Sql23a=”select sum(Bentrok) as jum_bentrok2 from
CT_List2 where NIM=’”&trim(strnim)&”’ and
Ke=”&Ke_brp&””
set rs23a=conn.execute(Sql23a)
If rs23a(“jum_bentrok2”) = 0 then
response.redirect(“Tampil_CT_List2.asp?rsnim=”&strnim)
end if
Nom5=Nom5+1
rs9b.movenext
wend
rs9a.movenext
wend
response.redirect(“Tampil_CT_List2.asp?rsnim=”&strnim)
V. Conclusion
Auto Generated Probabilistic Combination (AGPC) is an important part of the Study Plan Schedule (JRS) process. Because AGPC is sensitive to the conflict status on which its permutation is based, it is exactly right for handling the core conflict problem. In addition, AGPC can give an early warning when a conflict status is permanent, so that the permutation does not need to be performed. This is very helpful because, besides saving disk capacity, AGPC also gives a clear indication of which schedules have a conflict status that cannot be removed.
References
[1] Andi (2005). Aplikasi Web Database ASP Menggunakan Dreamweaver MX 2004. Yogyakarta: Andi Offset.
[2] Bernard R. Suteja (2006). Membuat Aplikasi Web Interaktif Dengan ASP. Bandung: Informatika.
[3] Untung Rahardja (2007). Pengembangan Students Information Services di Lingkungan Perguruan Tinggi Raharja. Laporan Pertanggung Jawaban. Tangerang: Perguruan Tinggi Raharja.
[4] Anonymous (2009). Standard Operating Procedure (SOP), Institut Pertanian Bogor.
[5] Anonymous (2009). Standard Operating Procedure (SOP) Penyusunan KRS dan KPRS, UIN Sunan Kalijaga.
[6] Santoso (2009). Materi I: Permutasi dan Kombinasi. Accessed on 5 May 2009 from: http://ssantoso.blogspot.com/2009/03/materi-i-permutasi-dan-kombinasi.html
Paper
Saturday, 8 August 2009
15:10 - 15:30
Room M-AULA
EVALUATION OF INFORMATION TECHNOLOGY GOVERNANCE BASED ON THE COBIT FRAMEWORK: A PEMBANGUNAN JAYA SCHOOL CASE STUDY
Dina Fitria Murad, Mohammad Irsan
Information System Department – Faculty of Computer Study STMIK Raharja
Jl. Jenderal Sudirman No. 40, Tangerang 15117 Indonesia
email: [email protected]
ABSTRACT
Pembangunan Jaya School, which operates under its parent education foundation, cannot run its business processes without IT support. IT is used mainly to support services to students and parents. Over time the supporting IT resources have changed; the changes generally involve replacing applications and upgrading the supporting infrastructure. These changes are intended to obtain more value from IT so that service processes are supported better. This research analyses the school's IT governance and develops a proposed improvement of that governance using the COBIT standard (Control Objectives for Information and related Technology), with emphasis on the Deliver and Support (DS) domain, which covers the actual delivery of required services. Data were collected through questionnaires, to capture the state of the running system, and through interviews, to identify future expectations.
Keywords: IT governance, COBIT
1. BACKGROUND
Information technology (IT) changes rapidly, and this creates opportunities to exploit it. These developments open opportunities to innovate IT-based products or services, so that an organisation can keep improving or at least stay competitive. IT is used to support the organisation's business processes so that those processes run as expected. However, the use of IT does not only bring benefits; it can also introduce risk. To produce good-quality information services, aligning information technology with the organisation is an absolute requirement. In general, implementing IT carries a high cost, covering hardware and software procurement as well as implementation and maintenance of the system as a whole. These investments are made in the expectation that the strategic goals will be reached and that the IT strategy is defined in line with the overall business plan and strategy.
The organisation's objectives will be reached if IT planning and strategy are implemented in harmony with the planning and strategy of the business. IT implementation that is aligned with the institution's goals can only be achieved when it is supported by good IT governance from the planning phase through implementation and evaluation. IT governance is an integral part of corporate management, covering the leadership, structures and processes in the organisation that ensure IT supports the organisation's strategy and objectives as a whole. Implementing IT governance is expected to bring many benefits, for example:
1. Reducing risk
2. Aligning IT with business objectives
3. Strengthening IT as a main business unit
4. More transparent business operations
5. Increased effectiveness and efficiency
Pembangunan Jaya School, which operates under its parent education foundation, cannot run its business processes without IT support. IT is used mainly to support services to students and parents. Over time the supporting IT resources have changed; the changes generally involve replacing applications and upgrading the supporting infrastructure, and no IT audit has been carried out. These changes are intended to obtain more value from IT so that service processes are supported better. The hardware and software infrastructure available in every unit, namely PCs (workstations), servers, printers and so on, is generally used to run the applications that support daily operations.
For the implementation of IT governance to be effective, the organisation needs to assess how far its current IT governance has progressed and identify the improvements that can be made. This applies to every process that needs to be governed within IT and to the IT governance process itself. Using a maturity model makes this assessment easier, through a pragmatic, structured approach with a scale that is easy to understand and consistent.
2. THEORETICAL BASIS
2.1 An information system is a set of organised procedures which, when executed, provide information to support decision making and operations in an organisation (Henry C. Lucas in [JOGIYANTO 2003], p. 14). Another view states that an information system within an organisation serves the daily transaction-processing needs, supports operations and provides specific reports to external parties that need them ([ROBERT 2003], p. 6). An information system can also be defined as a series of formal procedures by which data are collected, processed into information and distributed to users [HALL 2001].
2.2 COBIT (Control Objectives for Information and related Technology) is meant as a set of control objectives for information and the related technology. COBIT was first introduced in 1996 as a tool prepared for managing information technology (an IT governance tool). It was developed as a generally applicable standard and is accepted as good practice for IT control and security.
2.3 The definition of IT governance taken from the IT Governance Institute is as follows: IT governance is the responsibility of executives and the board of directors, and consists of the leadership, organisational structures and processes that ensure the enterprise's IT sustains and extends the organisation's strategies and objectives [IGI 2005].
The purpose of IT governance is to direct IT efforts so that IT performance meets the following objectives [IGI 2003]:
a) IT is aligned with the enterprise and the promised benefits are realised.
b) IT enables the enterprise to exploit opportunities and maximise benefits.
c) IT resources are used responsibly.
d) IT-related risks are managed appropriately.
The IT governance framework, shown in the following figure, depicts a governance process that begins with setting the enterprise's IT objectives, which give the initial direction. A series of IT activities is then carried out, followed by measurement.
Figure I. IT governance framework [IGI 2003]
The measurement results are weighed against the objectives, which may lead to redirection of the IT activities and, where necessary, a change of objectives [IGI 2003].
COBIT integrates good IT practices and provides a framework for IT governance that helps in understanding and managing the risks and benefits associated with IT. Implementing COBIT as an IT governance framework can therefore provide the following benefits [IGI 2005]:
a) Better alignment, based on a business focus.
b) A view, understandable to management, of what IT does.
c) Clear ownership and responsibility, based on process orientation.
d) General acceptance by third parties and regulators.
e) Shared understanding among all stakeholders, based on a common language.
f) Fulfilment of the COSO (Committee of Sponsoring Organisations of the Treadway Commission) requirements for the IT control environment.
To understand the COBIT framework, it is necessary to know the main characteristics on which the framework is built and the principle underlying it. The main characteristics of the COBIT framework are that it is business focused, process oriented, controls based and measurement driven, while the principle underlying it is [IGI 2005]:
"to provide the information that the organisation requires to achieve its objectives, the organisation needs to manage and control its IT resources using a structured set of processes that deliver the required information services."
Academic concept
"Learning is enjoyable": all of the school's flagship programmes are packaged and delivered with the aim of educating students to be motivated, enthusiastic and fond of the teaching and learning process, and to feel comfortable and happy while studying, so that they can absorb more knowledge and develop their potential as optimally as possible. In this way students are also expected to be able to produce and discover quality innovations and to develop optimally.
The curriculum and flagship programmes are presented with an emphasis on effectiveness and on a dynamism that follows the students' development. In the process, teaching and learning activities lead students to enjoy learning and to be independent and creative in facing and finding solutions to problems.
Figure II. Managerial teamwork
3. STUDY
Up to now there has been no proper evaluation of the IT governance involved in the use of hardware and software at Pembangunan Jaya School; a measurement against a standard is therefore needed. The standard used is COBIT (Control Objectives for Information and related Technology).
Respondents were selected according to the IT services and their relation to the services delivered to users. The respondents include staff from the IT or logistics function and structural management. This follows from the functions themselves: the IT function acts as the provider of IT services, while the other functions act as service users.
Respondents are grouped into IT and non-IT functions. The IT respondents are further divided into a senior group and a staff group. The senior IT group covers structural management and IT professionals, while the IT staff group covers IT staff who deal directly with the delivery of the IT services currently running. The non-IT group covers structural management in the work units of the education foundation.
This grouping of respondents is meant to obtain a sufficiently varied view and to help in selecting the IT processes. The total number of respondents is shown in Table I below.
Respondent    Total
IT (staff)      6
Non-IT          4
Total          10

Table I. Respondents for process selection
This identification produces the selected processes; it is the process-selection activity, in which the IT processes are chosen from the DS domain.
3.1 Sampling Method
The technique used is snowball sampling, a data-collection approach that starts from a few people who meet the criteria to be subjects; those subjects then act as sources of information about other people who can be included in the sample.
3.2 Data Collection Method
The selection of IT processes from the DS domain is based on the level of importance of each process. Information on this level of importance is obtained from the parties concerned. To support that data collection, a questionnaire was used. The questionnaire developed here is based on the Management Awareness diagnostic [IGI 2000].
In line with the information required, the questionnaire emphasises the assessment of importance levels. To help respondents understand each process whose importance they are asked to assess, every process in the questionnaire is accompanied by a description containing a brief statement of the process goal, as in the IT process descriptions in COBIT [IGI 2005].
3.3 Instrumentation
The IT processes to be selected and assessed are the processes in the DS domain. This domain covers the actual delivery of required services, including service delivery, security and continuity management, support services for users, and management of data and operational facilities. The IT processes in the DS domain comprise:
1) DS1 Define and Manage Service Levels
2) DS2 Manage Third-Party Services
3) DS3 Manage Performance and Capacity
4) DS4 Ensure Continuous Service
5) DS5 Ensure Systems Security
6) DS6 Identify and Allocate Costs
7) DS7 Educate and Train Users
8) DS8 Manage Service Desk and Incidents
9) DS9 Manage the Configuration
10) DS10 Manage Problems
11) DS11 Manage Data
12) DS12 Manage the Physical Environment
13) DS13 Manage Operations
3.4 Data Analysis Technique
To support the analysis used to determine which processes will be selected, the following steps are carried out:
a. Calculate, for each IT process, the percentage of assessments at each level of importance.
b. Based on the importance-level percentages, identify how strongly each process is needed.
c. Based on the information obtained, carry out a further review to select the required processes, and perform the same percentage calculation for every other IT process.
The level of need for the IT processes can be seen in the following table, where each rating has the following meaning:
a. 1: very unimportant
b. 2: unimportant
c. 3: somewhat important
d. 4: important
e. 5: very important
Code   Process                                        Need for the process (%)
                                                      1    2    3    4    5
DS1    Define and Manage Service Levels
DS2    Manage Third-Party Services
DS3    Manage Performance and Capacity
DS4    Ensure Continuous Service
DS5    Ensure Systems Security
DS6    Identify and Allocate Costs
DS7    Educate and Train Users
DS8    Manage Service Desk and Incidents
DS9    Manage the Configuration
DS10   Manage Problems
DS11   Manage Data
DS12   Manage the Physical Environment
DS13   Manage Operations
Total

Table II. Percentage of need for each process
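As an illustration only (not part of the original study), the percentage calculation in step (a) of Section 3.4 can be sketched as follows in C; the respondent counts are hypothetical, chosen so that 10 respondents rating a process 0/0/0/3/7 yield the 0%/0%/0%/30%/70% pattern seen later for DS1.

#include <stdio.h>

#define NUM_LEVELS 5   /* rating levels 1..5 */

/* Compute, for one IT process, the percentage of respondents
   that chose each importance level. */
void rating_percentages(const int counts[NUM_LEVELS], double pct[NUM_LEVELS])
{
    int total = 0;
    for (int i = 0; i < NUM_LEVELS; i++)
        total += counts[i];
    for (int i = 0; i < NUM_LEVELS; i++)
        pct[i] = total > 0 ? 100.0 * counts[i] / total : 0.0;
}

int main(void)
{
    /* hypothetical answers of 10 respondents for one process */
    int counts[NUM_LEVELS] = {0, 0, 0, 3, 7};
    double pct[NUM_LEVELS];

    rating_percentages(counts, pct);
    for (int i = 0; i < NUM_LEVELS; i++)
        printf("level %d: %.0f%%\n", i + 1, pct[i]);
    return 0;
}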
After tracing the need for each process from DS1 through DS13, the next stage (Table III) is to determine the rating percentage of each process. Where the percentages are equal, the decision is based only on the views of the senior IT respondents and the non-IT respondents, since these sufficiently represent the service-provider side and the IT service users.
Code   Process                                        Need for the process (%)
                                                      IT Senior   IT Staff   Non-IT   Total
DS1    Define and Manage Service Levels
DS2    Manage Third-Party Services
DS3    Manage Performance and Capacity
DS4    Ensure Continuous Service
DS5    Ensure Systems Security
DS6    Identify and Allocate Costs
DS7    Educate and Train Users
DS8    Manage Service Desk and Incidents
DS9    Manage the Configuration
DS10   Manage Problems
DS11   Manage Data
DS12   Manage the Physical Environment
DS13   Manage Operations

Table III. Rating percentages of the processes
3.4.1 Management Awareness
Management awareness assessment is carried out to learn the views and expectations of the managers, the users and the management towards the providers and users of the IT services.
3.4.2 Measurement of the Process Maturity Level
Based on the identification results above, the next stage is to measure the maturity level of those processes. Besides producing an assessment of the current condition, this measurement also produces an assessment of the expected condition.
In the process-assessment activity, the object of assessment is the selected processes, and the result is an assessment of the current and expected maturity levels together with the improvement targets to be pursued. This is illustrated in the following figure:
Figure III. IT process assessment
To obtain the process assessment, data collection was carried out by interview. The interviews were designed on the basis of the following:
(1) The maturity model for the IT process concerned.
(2) The maturity attribute table.
These were attached to the questionnaire to make it easier for respondents to give their assessment. Based on these considerations, the resulting interview guide is shown in Attachment B.
The respondents involved in the interviews are mainly from the IT function, whose daily work is hands-on and who know the problems related to the processes being selected. To support the analysis, the data obtained from the questionnaire are processed as follows (a sketch of the averaging appears after this list):
(1) Calculate the average of all respondents' scores for each attribute, both for the current condition and for the expected condition.
(2) Obtain the maturity level of the process by averaging over all attributes, again for the current and the expected condition.
(3) Represent both conditions for each attribute of the process in a diagram.
(4) Identify the improvement objectives for the selected process.
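As a hedged illustration (not taken from the paper), steps (1) and (2) amount to a simple two-stage average; the number of respondents and the scores below are hypothetical placeholders.

#include <stdio.h>

#define NUM_ATTRS       6   /* AC, PSP, TA, SE, RA, GSM */
#define NUM_RESPONDENTS 4   /* hypothetical number of interviewees */

/* Average the scores given by all respondents for one attribute (step 1). */
double attribute_average(const double scores[NUM_RESPONDENTS])
{
    double sum = 0.0;
    for (int r = 0; r < NUM_RESPONDENTS; r++)
        sum += scores[r];
    return sum / NUM_RESPONDENTS;
}

int main(void)
{
    /* hypothetical maturity scores (0..5) per attribute and respondent */
    double scores[NUM_ATTRS][NUM_RESPONDENTS] = {
        {3, 3, 2, 3}, {2, 3, 3, 2}, {3, 2, 2, 3},
        {3, 3, 3, 2}, {2, 2, 3, 3}, {3, 3, 2, 2}
    };
    double maturity = 0.0;

    for (int a = 0; a < NUM_ATTRS; a++)
        maturity += attribute_average(scores[a]);
    maturity /= NUM_ATTRS;   /* step (2): average over all attributes */

    printf("process maturity level: %.2f\n", maturity);
    return 0;
}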
Maturity attribute                               Maturity level
                                                 Current   Expected
AC    Awareness and Communication
PSP   Policies, Standards and Procedures
TA    Tools and Automation
SE    Skills and Expertise
RA    Responsibility and Accountability
GSM   Goal Setting and Measurement
Average

Table IV. Maturity level measurement results for the selected DS processes
4. ANALYSIS AND INTERPRETATION
4.1 Analysis of the Information Technology Management Condition at Pembangunan Jaya School
To determine the condition of information technology management at Pembangunan Jaya School, several analyses are carried out, consisting of:
a. Analysis of the position of the IT function
b. Analysis of management awareness
c. Analysis of the maturity level
The explanation and result of each analysis are described below.
4.2 Analysis of the Position of the IT Function
The information technology unit can take full responsibility for managing information technology as a whole (initiation, planning, implementation, monitoring and control).
4.3 Analysis of Management Awareness
Management awareness is identified by distributing the management awareness questionnaire to all units. The form of the questionnaire can be seen in Attachment A. The list of respondents of the management awareness questionnaire can be seen in the following table.
No.   Respondent        Total
1     SPJ Principal       1
2     IT Supervisor       1
3     IT, TK Unit         1
4     TU, TK Unit         1
5     IT, SD Unit         1
6     TU, SD Unit         1
7     IT, SMP Unit        1
8     TU, SMP Unit        1
9     IT, SMA Unit        1
10    TU, SMA Unit        1
      TOTAL              10

Table V. Questionnaire respondents
A total of 10 questionnaires were distributed and all 10 were returned. The importance scale used in the management awareness questionnaire has five levels: "very unimportant", "unimportant", "somewhat important", "important" and "very important". The recapitulation of the management awareness questionnaire results by level of need for each process is shown in the following table (1 = very unimportant, 2 = unimportant, 3 = somewhat important, 4 = important, 5 = very important):

Code   How important is this process for the business?    1     2     3     4     5
DS1    Define and manage service levels                   0%    0%    0%   30%   70%
DS2    Manage third-party services                        0%    0%    0%   20%   80%
DS3    Manage performance and capacity                    0%    0%   10%   20%   70%
DS4    Ensure continuous service                          0%   10%   10%   20%   60%
DS5    Ensure systems security                            0%    0%    0%   30%   70%
DS6    Identify and allocate costs                        0%    0%    0%   20%   80%
DS7    Educate and train users                            0%    0%   10%   20%   70%
DS8    Manage service desk and incidents                 10%    0%    0%   30%   60%
DS9    Manage the configuration                           0%    0%    0%   40%   60%
DS10   Manage problems and incidents                      0%    0%    0%   30%   70%
DS11   Manage data                                       30%    0%    0%    0%   70%
DS12   Manage the physical environment (facilities)       0%    0%    0%   40%   60%
DS13   Manage operations                                  0%    0%    0%   20%   80%

Table VI. Recapitulation of the management awareness questionnaire results by level of need for each process
The recapitulation above can be simplified by merging the levels "very unimportant", "unimportant" and "somewhat important" into the level "not needed", and merging the levels "important" and "very important" into the level "needed". The simplified result is shown in the following table:
Code   Process                                        Not needed   Needed
DS1    Define and manage service levels                    0%       100%
DS2    Manage third-party services                         0%       100%
DS3    Manage performance and capacity                    10%        90%
DS4    Ensure continuous service                          20%        80%
DS5    Ensure systems security                             0%       100%
DS6    Identify and allocate costs                         0%       100%
DS7    Educate and train users                            10%        90%
DS8    Manage service desk and incidents                  10%        90%
DS9    Manage the configuration                            0%       100%
DS10   Manage problems and incidents                       0%       100%
DS11   Manage data                                         0%       100%
DS12   Manage the physical environment (facilities)        0%       100%
DS13   Manage operations                                   0%       100%

Table VII. Simplified recapitulation of the management awareness questionnaire results by level of need for each process
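As a simple arithmetic check (not part of the paper), the simplification just shown amounts to summing percentage columns 1-3 and columns 4-5 of Table VI; the DS3 row is used here as the example.

#include <stdio.h>

int main(void)
{
    /* DS3 row of Table VI: levels 1..5 in percent */
    double ds3[5] = {0.0, 0.0, 10.0, 20.0, 70.0};

    double not_needed = ds3[0] + ds3[1] + ds3[2];  /* levels 1-3 merged */
    double needed     = ds3[3] + ds3[4];           /* levels 4-5 merged */

    printf("DS3: not needed %.0f%%, needed %.0f%%\n", not_needed, needed);
    return 0;
}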
A chart of the management awareness recapitulation by level of need for the IT processes of Pembangunan Jaya School can be seen in the following figure.
Figure IV. Chart of the management awareness questionnaire recapitulation by level of need for each process
The recapitulation of the management awareness questionnaire in the table above is analysed further on the assumption that the level with the larger percentage determines whether a process is important or unimportant within the IT management process. The processes that need to be present in the IT management model of Pembangunan Jaya School can be seen in the following table:
Code   Process                                        Not needed   Needed
DS1    Define and manage service levels                               v
DS2    Manage third-party services                                    v
DS3    Manage performance and capacity                                v
DS4    Ensure continuous service                                      v
DS5    Ensure systems security                                        v
DS6    Identify and allocate costs                                    v
DS7    Educate and train users                                        v
DS8    Manage service desk and incidents                              v
DS9    Manage the configuration                                       v
DS10   Manage problems and incidents                                  v
DS11   Manage data                                                    v
DS12   Manage the physical environment (facilities)                   v
DS13   Manage operations                                              v

Table VIII. Processes that need to be present in the IT management model of Pembangunan Jaya School
4.4 Maturity Level Analysis
The maturity level analysis is carried out by assessing the maturity level with reference to the maturity model of the COBIT Management Guidelines. The COBIT maturity model has six levels for an IT process, namely:
a. 0 - Non-existent: the management process is not applied at all.
b. 1 - Initial / Ad hoc: the management process is carried out in an ad hoc, unorganised way.
c. 2 - Repeatable: the process is carried out repeatedly in a similar way.
d. 3 - Defined Process: the process is largely documented, with monitoring and periodic reporting.
e. 4 - Managed and Measurable: the process is monitored and measured.
f. 5 - Optimised: best practice is applied in the management process.
Every available IT process is evaluated using this maturity model and compared with the target maturity level derived from the vision, the objectives and the interview results. It can be concluded that, to support the achievement of Pembangunan Jaya School's goals, the maturity level should be at least level 4 (Managed and Measurable).
From the interviews with the respondents, the answers and statements obtained during the maturity measurement can be seen in Attachment B. The maturity levels identified range from level 1 (Initial / Ad hoc) to level 4 (Managed and Measurable). The maturity level assessment results can be seen in the following table:
Code   IT process                                     Maturity level
DS1    Define and manage service levels                     4
DS2    Manage third-party services                          4
DS3    Manage performance and capacity                      3
DS4    Ensure continuous service                            2
DS5    Ensure systems security                              3
DS6    Identify and allocate costs                          3
DS7    Educate and train users                              4
DS8    Manage service desk and incidents                    2
DS9    Manage the configuration                             2
DS10   Manage problems and incidents                        3
DS11   Manage data                                          2
DS12   Manage the physical environment (facilities)         3
DS13   Manage operations                                    1

Table VIII. Maturity level assessment results

Based on the interview results and the findings reflecting the respondents' opinions, the maturity level measurements shown in the table above were obtained.
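As an illustration only (not part of the paper's method), the gap between each process's current maturity level in the table above and the target of level 4 mentioned earlier can be tabulated as follows; the current levels are taken from the table, everything else is a generic sketch.

#include <stdio.h>

#define NUM_PROCESSES 13
#define TARGET_LEVEL  4   /* minimum expected maturity level */

int main(void)
{
    const char *codes[NUM_PROCESSES] = {
        "DS1", "DS2", "DS3", "DS4", "DS5", "DS6", "DS7",
        "DS8", "DS9", "DS10", "DS11", "DS12", "DS13"
    };
    /* current maturity levels taken from the table above */
    int current[NUM_PROCESSES] = {4, 4, 3, 2, 3, 3, 4, 2, 2, 3, 2, 3, 1};

    for (int i = 0; i < NUM_PROCESSES; i++) {
        int gap = TARGET_LEVEL - current[i];
        if (gap > 0)
            printf("%-5s current level %d, gap to level %d: %d\n",
                   codes[i], current[i], TARGET_LEVEL, gap);
    }
    return 0;
}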
4.5 Recommendations
The recommendations for closing the maturity level gaps are directed at the steps that must be taken to reach the expected maturity level. For an IT process currently at maturity level 1, the recommendations first aim at reaching level 2, then continue towards level 3 and finally towards level 4; the same applies to processes at other maturity levels. The maturity gaps in the IT management processes of Pembangunan Jaya School can be closed through the following activities.
1. DS3 Manage performance and capacity
Recommendations to reach maturity level 4 (Managed and Measurable):
a. Set up an internal corporate forum to find solutions, together with senior management, to problems that arise in performance and capacity management.
b. Acquire processes and software to measure system performance and capacity and compare them with the service levels already defined.
c. Use automated tools to monitor specific resources such as disk storage, network servers and network gateways.
d. Routinely update the skill requirements for all performance and capacity management processes, supported by membership and certification.
e. Provide formal training for the staff involved in performance and capacity management according to plan, carry out knowledge sharing, and evaluate the effectiveness of the training against its strategic goals.

2. DS4 Ensure continuous service
Recommendations to reach maturity level 3 (Defined Process):
a. Communicate the requirements for continuous service consistently.
b. Document the strategy based on system criticality and business impact.
c. Report periodically on continuous-service testing.
d. Use highly available components and apply system redundancy.
e. Maintain a tight inventory of systems and main components.
f. Have individuals follow the service standards and receive training.
g. Define and establish accountability for continuity planning and testing.
h. Establish goals and measurements for ensuring continuous service and link them to the business objectives.
i. Measure and monitor the process.

3. Apply the IT balanced scorecard to the key performance measurements.
Recommendations to reach maturity level 4 (Managed and Measurable):
a. Establish a procedure to ensure that continuous service is understood throughout the organisation and that the necessary actions are widely accepted.
b. Collect, analyse, report and act on structured data about service continuity.
c. Provide formal training for the continuous-service processes.
d. Apply accountability and standards for continuous service.
e. Take changes in the business, the results of continuity and continuous-service testing and good internal practice into account in maintenance activities.
f. Classify service discontinuity incidents and make the improvement goals for each incident known to all interested parties.
4. DS5 Ensure systems security
Recommendations to reach maturity level 4 (Managed and Measurable):
a. Complement the security policy and practices with specific security baselines.
b. Perform IT security impact and risk analyses consistently.
c. Make intrusion testing a standard, formalised process that leads to improvement.
d. Coordinate the IT security processes with the security function of the whole organisation.
e. Standardise user identification, authentication and authorisation.
f. Use cost/benefit analysis to support the sizing of security measures.
g. Certify the security staff.
h. Establish, manage and enforce responsibility for IT security clearly.
i. Link IT security reporting to the business objectives.

5. DS6 Identify and allocate costs
Recommendations to reach maturity level 4 (Managed and Measurable):
a. Evaluate and monitor costs, and take action when the process does not work effectively or efficiently.
b. Improve the cost management process continuously and apply good internal practice.
c. Identify direct and indirect costs and report them periodically, in a largely automated way, to management, the business process owners and the users.
d. Involve all internal cost management expertise.
e. Define the accountability for cost management of information services, make it thoroughly understood at all levels and support it with formal training.
f. Link the reporting of service costs to the business objectives and the service level agreements.
6. DS8 Assist and advise users (manage service desk and incidents)
Recommendations to reach maturity level 3 (Defined Process):
a. Standardise and document the procedures and carry out informal training.
b. Create Frequently Asked Questions (FAQs) and user guidance.
c. Trace questions and problems, even if manually, and have individuals follow them up.
d. Identify and document comprehensively the skill requirements for assisting and advising users.
e. Develop a formal training plan.
f. Provide formal training for the staff.

7. Escalate problems.
Recommendations to reach maturity level 4 (Managed and Measurable):
a. Establish and communicate procedures for communicating, escalating and resolving problems.
b. Give help desk staff direct interaction with the staff who manage the problems.
c. Automate the tools and techniques with a centralised knowledge base of problems and solutions.
d. Train the help desk personnel and improve the process through the use of software built for the specific task.
e. Define responsibilities clearly and monitor their effectiveness.
f. Identify the root causes of problems, report trends, and take periodic corrective action on problems.
g. Improve the process and apply good internal practice.
8. DS9 Manage the configuration
Recommendations to reach maturity level 3 (Defined Process):
a. Make the need for accurate and complete configuration information understood and applied.
b. Document, standardise and communicate the procedures and working practices.
c. Implement similar configuration management tools across all platforms.
d. Use automation to help trace changes to equipment and software.
e. Have the configuration data used by the related processes.
f. Identify and document comprehensively the skill requirements for managing the configuration.
g. Develop a plan for, and carry out, formal training.
h. Establish ownership of and responsibility for configuration management and place it under the control of the responsible party.
i. Establish a number of goals and measurements for configuration management.
j. Apply the IT balanced scorecard to the basic performance measurements.

9. Supervise the management of the configuration.
Recommendations to reach maturity level 4 (Managed and Measurable):
a. Communicate the configuration management procedures and standards and merge them into training; monitor, trace and report any deviation that occurs.
b. Make the configuration management system support good distribution and release management.
c. Apply exception analysis and physical verification consistently and identify their root causes.
d. Use largely automated tools as the main technology for enforcing standards and increasing stability.
e. Routinely update the skill requirements for all configuration management processes, supported by membership and certification.
f. Carry out formal training of the staff on data management according to plan and carry out knowledge sharing.
g. Evaluate the effectiveness of the training against its strategic goals.
h. Define configuration management responsibility clearly, establish it and communicate it within the organisation.
i. Have the goal-attainment and performance indicators agreed by the users and monitored through a defined process, linked to the business objectives and the IT strategic plan.
j. Apply the IT Balanced Scorecard to assess configuration management performance and improve the configuration management process on an ongoing basis.
10. DS10 Manage problems and incidents
Recommendations to reach maturity level 4 (Managed and Measurable):
a. Make the problem management process understood at all levels of the organisation.
b. Document, communicate and measure the methods and procedures to achieve effectiveness.
c. Integrate problem and incident management with all related processes, such as change, availability and configuration management, and assist customers in managing data, facilities and operations. The tools in use are largely up to date and their use is starting to follow the strategic tool standardisation.
d. Refine, maintain and develop knowledge and skills to an ever higher level, because the information service function is viewed as an asset and a main contributor to achieving the IT objectives.
e. Make responsibility and ownership clear and recognised.
f. Test the ability to respond to incidents periodically.
g. Identify, record, report and analyse most problems and incidents for continuous improvement, and report them to the stakeholders.

11. DS11 Manage data
Recommendations to reach maturity level 3 (Defined Process):
a. Socialise the understanding of the data management requirements so that the need is understood and accepted throughout the enterprise.
b. Issue a form of management-level mandate so that effective steps can be taken in the data management processes.
c. Define and document a number of procedures as the basis for basic data management activities, such as backup/restore and equipment/media disposal.
d. Set up a strategy for the use of standard tools to automate the data management system.
e. Use appropriate tools for backup/restore and equipment/media disposal.
f. Identify and document comprehensively the skill requirements for managing data.
g. Plan and carry out formal training.
h. Establish ownership of and accountability for data management, and place data integrity and security problems under the control of the accountable party.
i. Establish a number of goals and measurements for data management linked to the business objectives.
12. Carry out observation and measurement of the process and apply the IT balanced scorecard to the key performance measurements.
Recommendations to reach maturity level 4 (Managed and Measurable):
a. Socialise the data management requirements throughout the organisation, together with the necessary actions.
b. Set up a periodic internal corporate forum to find solutions, together with senior management, to problems that arise in data management.
c. Provide comprehensive procedures for the data management process that refer to standards, apply good internal practice, are formalised and widely socialised, and are supported by knowledge sharing.
d. Use up-to-date tools in line with the tool standardisation plan, integrate them with the other tools, and use them to automate the main steps in managing data.
e. Routinely update the skill requirements for all data management processes, supported by membership and certification.
f. Carry out formal training of the staff on data management according to plan and carry out knowledge sharing.
g. Evaluate the effectiveness of the training against its strategic goals.
h. Define responsibility and ownership of data management clearly, establish it and communicate it within the organisation; maintain a culture of giving appreciation to motivate this role.
i. Have the goal-attainment and performance indicators agreed by the users and monitored through a defined process, linked to the business objectives and the IT strategic plan.
j. Apply the IT Balanced Scorecard to assess data management performance and improve the data management process on an ongoing basis.
13. DS12 Manage the physical environment (facilities)
Recommendations to reach maturity level 4 (Managed and Measurable):
a. Make the need to maintain a controlled processing environment fully understood, reflected in the organisation structure and the budget allocation.
b. Merge the recovery of processing resources into the organisation's risk management process.
c. Establish an integrated plan for the whole organisation, with integrated testing, and merge the lessons learned into the strategic reviews.
d. Attach standard control mechanisms to restrict access to facilities and to handle safety and environmental factors.
e. Document the physical security and environmental requirements, and monitor and tightly control access.
f. Use integrated information to optimise insurance coverage and the related costs.
g. The tools in use are largely up to date and their use is starting to follow the strategic tool standardisation.
h. Integrate a number of tools with one another and use them to automate the main steps in managing the facilities.
i. Routinely update the skill requirements for all facilities management processes, supported by membership and certification.
j. Carry out formal training of the staff on facilities management according to plan and carry out knowledge sharing.
k. Evaluate the effectiveness of the training against its strategic goals.
l. Establish and communicate responsibility and ownership.
m. Fully train the facility staff for emergency situations, as well as in health and safety practices.
n. Have management monitor the effectiveness of controls and compliance with the applicable standards.
14. DS13 Manage operations
Recommendations to reach maturity level 2 (Repeatable):
a. Instil full organisational awareness of the key role that IT operations play in providing the IT support function.
b. Communicate the need for coordination between users and system operations.
c. Provide operational support, operating standards and training for the IT operators.
d. Allocate the budget for tools on a case-by-case basis.

15. Apply ownership of and responsibility for operations management.
Recommendations to reach maturity level 3 (Defined Process):
a. Socialise the requirements of computer operations management within the organisation.
b. Allocate resources and provide on-the-job training.
c. Define, document and formally communicate the repetitive functions to the operations personnel and the customers.
d. Tightly control the handover of new work into operations and use formal policies to reduce the number of unscheduled events.
e. Expand and standardise the use of automated scheduling and other tools to limit operator intervention.
f. Identify and document comprehensively the skill requirements for managing operations.
g. Develop a plan for, and carry out, formal training.
h. Identify the other IT support activities and define the assignment of the related responsibilities.

16. Record events and the results of completed tasks, even though reporting to management may still be inconsistent or absent.
Recommendations to reach maturity level 4 (Managed and Measurable):
a. Make the requirements of operations management understood throughout the organisation and have the necessary actions widely accepted.
b. Resolve and correct deviations from the norms quickly.
c. Establish formal service and maintenance agreements with the vendors.
d. Support operations through resource budgets for capital expenditure and human resources.
e. Optimise the use of operations resources and establish the completion of work and tasks.
f. Make efforts to increase the level of process automation as a means of ensuring continuous improvement.
g. Carry out and formalise training as part of staff career development.
h. Define the responsibility for support and computer operations clearly and establish its ownership.
i. Document schedules and tasks and communicate them to the IT function and the business clients.
j. Align with the problem and availability management processes, supported by analysis of the causes of failures and errors.
k. Standardise the daily measurement and monitoring of activities against the agreed service levels and performance.
5. CONCLUSION
Based on the examination of the hypotheses, several conclusions can be drawn:
1. The IT processes in the DS domain are fundamentally needed and should be applied. This is shown by the high assessment results for options 4 (important) and 5 (very important) for every IT process.
2. All processes in the Delivery and Support domain need to be carried out in the IT management of Pembangunan Jaya School. Most of the IT processes are best handled by the IT work unit of Pembangunan Jaya School.
3. Most of the current process maturity levels have not yet reached the expected target. To reach the expected target, alignment steps are needed, carried out by applying the recommendations to each process that has a maturity level gap.
4. Based on the existing gaps, improvement targets are set covering processes DS3, DS4, DS5, DS6, DS8, DS9, DS10, DS11, DS12 and DS13.
5. The processes to be included in the IT management model are selected from the processes with the lowest maturity level and the highest management expectation. Process DS13 (Manage operations) has the lowest maturity level and the highest management expectation.
Suggestions that can be drawn from this research include:
1. The IT management model design for Pembangunan Jaya School needs to be refined through the feedback and input obtained during implementation.
2. For every attainment phase, the actions needed to close the existing gaps should be carried out; for each maturity attribute the necessary actions are given.
3. This IT governance proposal should be reviewed periodically and adjusted in line with technological progress.
REFERENCES
1. [IGI 2000] The COBIT Steering Committee and the IT Governance Institute, COBIT (3rd Edition) Implementation Tool Set, IT Governance Institute, 2000.
2. [IGI 2003] The IT Governance Institute, Board Briefing on IT Governance, 2nd Edition, IT Governance Institute, 2003.
3. [IGI 2005] The IT Governance Institute, COBIT 4.0: Control Objectives, Management Guidelines, Maturity Models, IT Governance Institute, 2005.
4. [GULDENTOPS 2003] Guldentops, E., Maturity Measurement: First the Purpose, Then the Method, Information Systems Control Journal, Volume 4, Information Systems Audit and Control Association, 2003.
5. [INDRAJIT 2000] Indrajit, Eko R., Information Systems Management and Information Technology, Gramedia, Jakarta, 2000.
6. [JERRY 2001] Jerry FitzGerald, Fundamentals of Systems Analysis, 2001.
7. [JOGIYANTO 2003] Jogiyanto, Analisis dan Desain, Andi, Yogyakarta, 2003.
8. [KHALIL 2000] Khalil, Tarek M., Management of Technology: The Key to Competitiveness and Wealth Creation, International ed., McGraw-Hill, 2000.
9. [PEDERIVA 2003] Pederiva, A., The COBIT Maturity Model in a Vendor Evaluation Case, Information Systems Control Journal, Volume 3, Information Systems Audit and Control Association, 2003.
10. [ROBERT 2003] Robert, A., Accounting Information Systems, Prentice Hall, New Jersey, 2003.
11. [REINGOLD 2005] Reingold, S., Refining IT Processes Using COBIT, Information Systems Control Journal, Volume 3, Information Systems Audit and Control Association, 2005.
12. [VAN 2004] Van Grembergen, W., De Haes, S., Guldentops, E., Structures, Processes and Relational Mechanisms for IT Governance, in Strategies for Information Technology Governance, Van Grembergen, W. (ed.), Idea Group Inc., 2004.
13. [WEBER 1998] Weber, Ron, Information Systems Control and Audit, Prentice Hall, 1998.
Paper
Saturday, August 8, 2009
16:50 - 17:10
Room L-212
ELECTRONIC CONTROL OF HOME APPLIANCES WITH IP BASED
MODULE USING WIZNET NM7010A
Asep Saefullah, Augury El Rayeb
STMIK Raharja - Tangerang
[email protected]
Abstract
The use of remote control systems has been increasing in line with the era of globalization, in which people move widely and quickly. The public already knows that an electronic home appliance can be controlled remotely using a remote control. The problem with remote control is the limited distance between the signal emitted by the remote and the signal received by the appliance: when the distance between the appliance and the remote/controller exceeds the tolerated limit, control is no longer possible. To solve this problem, the system must be designed using a network technology that can be accessed anywhere and at any time, as long as a network is available. The Internet is used through the TCP/IP-based network module starter kit NM7010A-LF as a bridge between an AVR microcontroller system and the computer network for controlling the electronic equipment; the AVR microcontroller system works as a web server. The result is a prototype that can control home electronic appliances remotely, unhindered by distance, place and time, which ultimately improves the effectiveness, efficiency and comfort of control.
Keywords: NM7010A, TCP/IP, AVR, Electronic Home Appliances
I. INTRODUCTION
In general, people use a tool such as a remote controller to control electronic equipment, for example televisions, audio and video equipment, cars, and so on. The use of a remote control is constrained by the limited reach of the signal it emits towards the receiver, so control is limited by distance: when the distance between the controlled equipment and the controller exceeds the tolerance limit, the equipment no longer works as desired.
The Internet is an extensive global network of interconnections that uses the TCP/IP protocol for packet exchange (a packet-switching communication protocol); it can be accessed by anyone, anywhere, and can be used for various purposes. The Internet also has a large influence on science and on world views.
To solve the problem of the limited range of a remote control, the Internet is the technology to implement. The application uses the TCP/IP Starter Kit, based on the NM7010A-LF network module, as a bridge between the AVR microcontroller system and the computer network. The AVR microcontroller system functions as a web server in this long-distance remote controller.
The TCP/IP Starter Kit is a development tool for TCP/IP based on the NM7010A network module; it functions as a bridge between a microcontroller and the Internet or an intranet without requiring a computer. TCP/IP is suitable for embedded applications that require communication with the Internet or an intranet.
The hypothesis is that a remote control system that uses the Internet as its medium produces a control system that is no longer limited by distance and time. Control can be performed anywhere and at any time. A control system that uses the TCP/IP starter kit interface combined with microcontroller technology can make remote control more practical and efficient.
A. PROBLEMS
The limited control distance of a remote control is caused by the limited power it radiates, so control cannot be done from a much greater distance, let alone from another place or city. To control electronic equipment remotely with such a device, the distance between the controller and the controlled equipment must lie within the coverage of the remote control. The Internet is an appropriate solution, because it is not limited by distance: control can be carried out wherever there is an Internet connection.
To establish communication between the equipment to be controlled and the controller, a device is needed that supports IP-based communication, and that device is the Wiznet NM7010A module. The next problem is how this module (Wiznet NM7010A) can communicate with the microcontroller, which is the device that drives the actuators (the electronic home appliances).
Figure 1. Diagram of the embedded IP system
A computer is used to write the program for the AVR microcontroller, while an RS-485 converter circuit mediates between the computer and the AVR microcontroller for controlling the home electronic appliances. The software required for writing the program is as follows:
1. Windows 98, ME, XP or 2000
2. CodeVision AVR Downloader
3. BASCOM-AVR
II. DISCUSSION
Internet technology can be used to overcome distance in a remote control system, for example when controlling equipment in the electricity industry. Without being restricted by space and time, control can be carried out from anywhere that has Internet access. By controlling the system remotely through the Internet, control is no longer limited to a local scope but becomes global.
Control can be done anywhere and at any time without an officer/operator having to be present. Testing and supervision via the Internet make the control system more practical and efficient. The diagram of the remote control system via the Internet Protocol (IP) can be seen below.
Each sub-system in this design has a function and tasks related to the others; the six sub-blocks of the system to be developed are as follows:
1. Computer
2. Power supply
3. RS-485 converter
4. AVR microcontroller
5. Driver
6. Electronic home appliances
Each component requires DC power: the RS-485 converter, the microcontroller and the driver. The system therefore requires a power supply. The schematic of the power supply is shown below.
Figure 2. Schematic of the power supply
The RS-485 converter module is used as an interface between the microcontroller and the Internet via Ethernet. It is a network module based on the NM7010A with the following specifications:
a. Based on the NM7010A, which handles the internal communication protocols (TCP, IP, UDP, ICMP, ARP) and Ethernet (DLC, MAC).
b. Uses the I2C interface for communication with the microcontroller.
c. The I2C address can be selected from the 128 available options (0, 2, 4, ..., 252, 254).
d. Equipped with LED status indicators for the network (collision/link, 10/100 activity, full/half duplex).
e. Requires a 5 V DC power supply and has an on-board 3.3 V DC regulator rated at 300 mA.
f. Compatible with DT-AVR Low Cost Series controller systems and also supports other controllers.
The outputs of the microcontroller control the drivers, which in turn control the electronic home appliances; the appliances are the final load being controlled. Before reaching the actuators, the microcontroller output has to be amplified by the driver.
A. ATmega8535 AVR Microcontroller
The AVR microcontroller is one of Atmel's flagship microcontroller architectures. The architecture was designed with various advantages over Atmel's earlier microcontrollers. One of these advantages is In-System Programming, so the AVR microcontroller chip can be programmed directly in the target application circuit. In addition, the AVR has a Harvard architecture, with separate memories and buses for data and program, and a single-level pipeline, so instructions execute quickly and efficiently.
Figure 4. Diagram of the TCP/IP Starter Kit NM7010A
Figure 3. AT90S8535 AVR microcontroller pin-out configuration
(Source: Prasimax Mikron Technology Development Center, 2007)
A. Hardware Design
The hardware design of the Wiznet NM7010A network module is shown in figures 4 and 5. Figure 5 is the schematic diagram of the NM7010A, while the block diagram is shown in figure 4. The design uses the NM7010A network module as a bridge between the DT-AVR Low Cost Micro System and the computer network to create a simple web server. The program was developed using the BASCOM-AVR compiler, version 1.11.8.1; this compiler provides commands that support the interface with the NM7010A module.
Figure 5. Schematic of the TCP/IP Starter Kit NM7010A
A. I2C (Inter-Integrated Circuit) Communication
The TCP/IP Starter Kit NM7010A is connected to the DT-AVR Low Cost Micro System using the I2C protocol, with two lines, SDA (Serial Data) and SCL (Serial Clock), for sending data serially. SCL is the line used to synchronise data transfer on the I2C bus, while SDA is the data line.
Several devices can be connected to the same I2C bus, with SCL and SDA connected to all of them, but only one device controls SCL: the master device. The SCL and SDA lines are connected to pull-up resistors with values between 1K and 47K (1K, 1.8K, 4.7K, 10K, 47K).
With the pull-ups, the SCL and SDA lines are open drain, which means a device only needs to drive the signal to 0 (LOW) to make the line LOW, and to leave it floating (no signal) for the pull-up resistor to make the line HIGH. On an I2C bus only one device acts as master (although on some devices more than one master is possible on the same bus) and there are one or more slave devices. Only the master device may control the SCL line, which means a data transfer must first be initiated by the master through a series of clock pulses (not by the slaves, although there is one exception called clock stretching). The slave devices only respond to what the master requires. A slave can send and receive data to and from the master after the master initiates the transfer.
The master device must perform an initialisation before transferring or sending/receiving data with its slave devices. The initialisation begins with the START signal (a high-to-low transition on the SDA line while the SCL line is high, symbol S in figure 7); the data are then transferred, and afterwards the STOP signal (a low-to-high transition on the SDA line while the SCL line is high, symbol P in figure 7) indicates the end of the data transfer.
Any number of bytes may be sent in one data transfer; there is no rule limiting it. To transfer data consisting of 2 bytes, the first byte is sent and the second byte follows.
One clock pulse (on the SCL line) is generated for every data bit (on the SDA line) that is transferred, so sending 8 bits requires 9 clock pulses to be generated (1 extra bit for the ACK). The sequence for the receiving device before it provides the ACK signal is as follows: when the sender has finished sending the last bit (the 8th bit), it releases the SDA line to the pull-up (recall the open-drain description) so that it becomes HIGH. When this happens, the receiver must pull SDA LOW while the 9th clock pulse is HIGH.
If SDA remains HIGH when the 9th clock pulse arrives, the signal is defined as Not Acknowledge (NACK). The master device can then generate a STOP signal to finish the transfer, or repeat the START signal to begin a new data transfer.
Figure 8. Data (byte) transfer on the I2C bus
(Source: UM10204 I2C-bus specification and user manual)
Figure 6. START / STOP signals
(Source: UM10204 I2C-bus specification and user manual)
Each byte in the transfer must be followed by an Acknowledge bit (ACK) from the receiver, indicating that the data were received successfully. Bytes are sent starting from the MSB. While a bit is being sent, the clock (SCL) is set HIGH and then LOW. Bits placed on the SDA line must be stable during the HIGH period of the clock (SCL); the LOW or HIGH state of the data line (SDA) may only change while SCL is LOW.
Figure 7. Bit transfer on the I2C bus
(Source: UM10204 I2C-bus specification and user manual)
To implement the I2C protocol, samples are taken from Peter Fleury's I2C routines and the CodeVision AVR I2C routines (written in the C language). The first thing that happens in the communication is that the master sends the START signal. This informs the slave devices connected to the I2C bus that a data transfer is about to be carried out by the master and that they must be ready to monitor which address will be called. The master then sends the address of the slave device it wants to access. The slave with the matching address continues the transaction; the other slaves can ignore it and wait for the next signal. Once the addressed slave has been found, the master informs it of the internal address or register number to be written to or read from; the meaning of this location or register number depends on the slave device being accessed. After sending the slave address and then the internal register address to access, the master sends the data bytes. The master can keep sending data bytes to the slave; they are stored byte by byte in the registers, the slave automatically incrementing the internal register address after each byte. When the master has finished writing all data to the slave, it sends a STOP signal to terminate the transaction. For the implementation of the I2C code, the example I2C routines for AVR from Peter Fleury and the I2C routines provided in CodeVision AVR are used.
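As a rough sketch only, the write transaction described above can be expressed with Peter Fleury's avr-gcc i2cmaster routines roughly as follows; it assumes Fleury's library files are added to the project, the 8-bit slave address 0xCC matches the DIP-switch setting mentioned in the next section, and the register address and data byte are hypothetical placeholders rather than the real NM7010A register map.

#include "i2cmaster.h"   /* Peter Fleury's I2C master library for avr-gcc */

#define SLAVE_ADDR 0xCC  /* 8-bit I2C address set by DIP switch J3 (assumption) */

/* Write one byte to one internal register of the slave device:
   START, slave address (write), register address, data byte, STOP. */
void slave_write_byte(unsigned char reg, unsigned char data)
{
    i2c_start_wait(SLAVE_ADDR + I2C_WRITE);  /* START + slave address, write mode */
    i2c_write(reg);                          /* internal register address */
    i2c_write(data);                         /* data byte to store */
    i2c_stop();                              /* STOP ends the transaction */
}

int main(void)
{
    i2c_init();                      /* configure the AVR TWI hardware */
    slave_write_byte(0x00, 0x55);    /* hypothetical register and value */
    for (;;) { }                     /* nothing else to do in this sketch */
    return 0;
}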
A. DESIGN
On the NM7010A module, set DIP switch J3 on the TCP/IP Starter Kit for the I2C address = CCH: set switches 2, 3, 6, 7 to the OFF position and switches 4, 5, 8 to the ON position. After the module has been connected and wired correctly, open NM7010A.BAS in BASCOM-AVR and change line 50 of the program to fit the computer network that will be used. For example:
- For a computer network that has a gateway, with the network setting values:
Gateway = 192.168.1.2
Subnet Mask = 255.255.255.0
IP = 192.168.1.88 (the IP number of the TCP/IP Starter Kit module)
change line 50 of the program to become:
Config Tcpip = Int0 , Mac = 12.128.12.34.56.78 , Ip = 192.168.1.88 , Submask = 255.255.255.0 , Gateway = 192.168.1.2 , Localport = 1000 , Tx = $55 , Rx = $55 , Twi = &HCC , Clock = 300000
- For a computer network that does not have a gateway, with the network setting values:
Subnet Mask = 255.255.255.0
Module IP = 192.168.1.88 (the IP number of the TCP/IP Starter Kit module)
change line 50 of the program to become:
Config Tcpip = Int0 , Mac = 12.128.12.34.56.78 , Ip = 192.168.1.88 , Submask = 255.255.255.0 , Gateway = 0.0.0.0 , Localport = 1000 , Tx = $55 , Rx = $55 , Twi = &HCC , Clock = 300000
After that, re-compile NM7010A.BAS and download the result of the compilation into the DT-AVR Low Cost Micro System using the DT-HiQ AVR In-System Programmer (ISP) or another device that supports the ATmega8535 microcontroller. Then connect it to the computer network and run Microsoft Internet Explorer from a computer connected to the same network. Type http://<IP number>/index.htm (for example http://192.168.1.88/index.htm) in the Address bar; Microsoft Internet Explorer will then show the pages of this embedded web server.
Figure 9. NM7010A-LF module
B. Software Design
A microcontroller is an electronic component whose performance depends on the program loaded into it. Before the microcontroller is used in the electronic system, it must first be filled with a program created by the programmer. The software used here to write the program listing is BASCOM-AVR, a BASIC compiler for the AVR.
The flowchart of the program for the target block is as follows:
Figure 10. Flowchart Program to Target Block
A. Code Vision Atmega 8535
The CodeVisionAVR software is a C cross-compiler, so the program can be written in the C language. By using the C programming language, the design time (developing time) is expected to become shorter. After the program has been written in C and the compilation result contains no errors (error free), the hex file can be downloaded to the microcontroller. AVR microcontrollers support downloading through the ISP (In-System Programming) system.
A. The Process of the Program
In general, the process of the NM7010A.BAS program is as follows:
1. The program resets the NM7010A module by hardware, activates the microcontroller's interrupt and initializes the NM7010A module in I2C communication mode.
2. It then declares the program variables that will be used, among others:
- Shtml, a string 15 characters long, to save the suffix of the received command.
- Ihitcounter, an integer, to store the number of visitors to this web server.
3. The program gets the status of socket 0.
4. If the status of socket 0 = 06h (established), then:
a) The program checks the Rx buffer of the NM7010A module, and if there is data in the Rx buffer the program reads it.
b) If the data received is the command "GET", the program saves the suffix that follows the command into the variable Shtml.
c) The program checks whether the Rx buffer is empty; if it is not empty, the program returns to step 4.a.
d) If the Rx buffer is empty, the program sends "HTTP/1.0 200 OK <CR><LF>" (OK sign) and also sends "Content-Type: text/html <CR><LF>" (the format of the HTML body that will be sent).
e) If Shtml = "/index.htm", the program sends the body of index.htm and increases the variable Ihitcounter by 1.
f) The program clears the variable Shtml, closes socket 0 and returns to step 3.
5. When the status of socket 0 = 07h (wait connection close), the program closes socket 0 and returns to step 3.
6. If the status of socket 0 = 00h (connection closed), the program opens port 80h on socket 0, starts listening to the network from socket 0, and then returns to step 3 (a sketch of this polling loop is given below).
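A language-neutral sketch of this polling loop in C is shown below (the actual program is NM7010A.BAS, written in BASCOM-AVR). The helper functions socket_status(), rx_data_available(), read_rx_byte(), send_string(), close_socket(), open_and_listen() and send_index_page() are hypothetical wrappers around the module's I2C registers, used only to make the control flow explicit.

/* C-style sketch of the NM7010A.BAS polling loop described in steps 3-6.
   All hardware-access helpers are assumed wrappers, declared extern here. */
#include <string.h>

#define SOCK_ESTABLISHED 0x06
#define SOCK_CLOSE_WAIT  0x07
#define SOCK_CLOSED      0x00

extern unsigned char socket_status(void);
extern int  rx_data_available(void);
extern char read_rx_byte(void);
extern void send_string(const char *s);
extern void close_socket(void);
extern void open_and_listen(void);          /* open the port on socket 0 and listen */
extern void send_index_page(int hits);      /* send the body of index.htm */

void web_server_loop(void)
{
    char shtml[16] = "";                     /* suffix that follows the GET command */
    int  hit_counter = 0;                    /* number of visitors to this web server */

    for (;;) {
        switch (socket_status()) {           /* step 3: get the status of socket 0 */
        case SOCK_ESTABLISHED:               /* step 4 */
            while (rx_data_available())      /* steps 4.a-4.c: read the request */
                read_rx_byte();              /* (GET-suffix parsing into shtml omitted) */
            send_string("HTTP/1.0 200 OK\r\n");            /* step 4.d */
            send_string("Content-Type: text/html\r\n\r\n");
            if (strcmp(shtml, "/index.htm") == 0)          /* step 4.e */
                send_index_page(++hit_counter);
            shtml[0] = '\0';                 /* step 4.f: clear Shtml and close socket 0 */
            close_socket();
            break;
        case SOCK_CLOSE_WAIT:                /* step 5 */
            close_socket();
            break;
        case SOCK_CLOSED:                    /* step 6 */
            open_and_listen();
            break;
        default:
            break;
        }
    }
}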
Figure 11. Web page of the embedded-IP web server, in MS Internet Explorer
The web page of this application consists of a header, text and a visitor counter, as shown in Figure 11. The application can be developed into something more complex, for example to send data from sensors and to control devices through the computer network.
A. Writing the Program for the Microcontroller
A microcontroller is an electronic component whose performance depends on the program that has been loaded into it. For the microcontroller to work and support the system as desired, it must first be filled with a correct program, both in terms of the language used and of how the program is loaded.
Before the microcontroller is used in the electronic system, it must be filled with a program created by the programmer. The purpose of this is to make the embedded IC work as desired. The software used to write the program listing is BASCOM-AVR, a BASIC compiler chosen because it has several advantages compared to other software.
After writing the program listing in the BASCOM-AVR text editor, the text is stored in a file named, for example, MOTOR DC.BAS; this must be done because the software only works on files with the extension *.BAS.
The next step is to compile the BASIC file into a hex file: MOTOR DC.BAS is compiled into MOTOR DC.HEX by pressing the F7 key on the keyboard or via the menu. This *.HEX file will be inserted or downloaded into the ATmega8535 IC (microcontroller). The steps above can be seen in figure 12 below.
The hex file opened in the application will be recognized by the software and then downloaded into the microcontroller: click the Chip menu and select Auto program, as seen below.
Figure 12. Program Compilation
After those steps are done, the compilation step leaves several files, namely MOTOR DC.BAS and MOTOR DC.HEX (in this example, *.bas as source code and *.hex as compiled code) and several other supporting files. At this stage the process of writing and compiling the program is complete.
A. Downloading the Program into the Microcontroller
At this step, an ATmega8535 IC that is initially empty is filled with the program. For an IC that already contains a program, the old program must be deleted first before it is filled with the new program. To begin, open BASCOM-AVR for the ATmega8535 microcontroller, then select the device that will be used, namely the AT8535.
Figure 13. Choosing a Device (AT8535)
After selecting the device from the compiler tab, the application asks for the hex file that will be downloaded into the selected device (the ATmega8535 microcontroller). The hex file that will be entered into the microcontroller IC is, in this case, MOTOR DC.HEX (for example).
Figure 14. Process of filling the program into the microcontroller
The downloading process begins with "erase Flash & EEPROM Memory", which means the software deletes the microcontroller's internal memory before putting the program into it. In this deletion process, when the percentage reaches 100% the internal memory has been erased completely and is empty. If the percentage has not reached 100% but the software shows an error sign, the erase process has failed; this is usually caused by an error in the hardware downloader.
After the erase is finished, the software automatically performs "Verify Flash Memory" and starts downloading the hex file to fill the program into the microcontroller. As with the deletion, the process is shown as a percentage of progress; 100% indicates that the program has been fully written into the microcontroller. The appearance of an error sign indicates that the process failed, which is usually caused by errors in the hardware downloader. If the steps above are done correctly, the microcontroller (ATmega8535) is ready and can be used to run the system as desired.
A. Downloading the Program into the Microcontroller with the Starter Kit
The compiler commonly used with AVR microcontrollers is a C compiler. CodeVision provides an editor to create a working program in the C language; after the compilation process, we can put the program into the memory of the microcontroller using a utility provided by CodeVision AVR.
The ISP (In-System Programming) device types supported by CodeVision AVR come in many variations, among others: Kanda Systems STK200+/300, Atmel STK500/AVRISP, Dontronics DT006, and others. To make the "de Kits AVR ISP Programmer Cable" work with CodeVision AVR, the following configuration is done:
- Run the CodeVision AVR software.
- Choose the Settings menu → Programmer.
- Choose the type of ISP programmer → Kanda Systems STK200+/300.
- Then click the OK button.
Figure 15. Choosing the ISP Programmer in CodeVision AVR
Once CodeVision AVR has been configured, test the "de Kits AVR ISP Programmer Cable" by connecting it to the target board, and to the PC through the LPT port, as shown in the following picture (figure 16):
Figure 16. ATmega8535 Programmer Cable Connection
The black housing is connected to the ISP header on the target board according to the pin layout. The pin layout of the black housing of the "de Kits AVR ISP Programmer Cable" is shown in figure 17. Because the black housing has a symmetrical form, the only guideline is the triangle mark on one side of the housing; the pin closest to the marker is the VCC pin 2.
Figure 17. AT89S51 pin configuration
To perform a test of the "de Kits AVR ISP Programmer Cable", start a new project as follows:
- Place the AVR ISP Programmer Cable on the target board that is already connected to the target microcontroller.
- Select the Tools menu → Chip Programmer, or press Shift + F4.
- In the Chip Programmer window select Read → Chip Signature.
- When the AVR ISP Programmer Cable works well and the microcontroller ID is not damaged, the target microcontroller type will appear as in the picture below (figure 18).
Figure 18. de KITS AVR ISP Programmer Cable test with Read Chip Signature
(source: www.innovativeelectronics.com)
This process can only be done when a project is open. Press Shift+F9 and download to the target board by clicking the Program button. After that the microcontroller is ready to use.
III. SUMMARY
Based on the tests and the design of the control module it can be summarized that:
a) A DC motor, as an electronic home appliance, can be controlled through the internet using the NM7010A network module. The DC motor can be controlled remotely by pressing the F5 button on the keyboard of a computer connected to the internet, and the direction of the DC motor's rotation can be controlled.
b) By using the TCP/IP Starter Kit based on the NM7010A network module as a bridge between the DT-AVR Low Cost Micro and the network or internet, electronic home appliances can be controlled remotely without any barrier of distance and time.
LITERATURES
1. Asep Saefullah, Bramantyo Yudi W, (2008), Perancangan Sistem Timer Lampu Lalu Lintas Dengan Mikrokontroler AVR, CCIT Journal, Vol. 2 No. 1, STMIK Raharja.
2. Paulus Andi Nalwan, (2003), Panduan Praktis Tehnik Antar Muka dan Pemrogaman Mikrokontroler AT89C51, Gramedia, Jakarta.
3. Untung Rahardja, Asep Saefullah, (2009), Simulasi Kecepatan Mobil Secara Otomatis, CCIT Journal, Vol. 2 No. 2, STMIK Raharja.
4. Widodo Budiharto, (2005), Perancanan Sistem Dan Aplikasi Mikrokontroller, PT. Elex Media Komputindo, Jakarta.
5. Widodo Budiharto, Sigit Firmansyah, (2005), Elektronika Digital Dan Mikroprosessor, Penerbit Andi, Yogyakarta.
6. Wiznet7010A, (2009), http://www.wiznet.co.kr/en/pro02.php?&ss[2]=2&page=1&num=98, accessed on June 3rd, 2009.
Paper
Saturday, August 8, 2009
15:10 - 15:30
Room L-210
DIAGNOSTIC SYSTEM OF DERMATITIC BASED ON FUZZY LOGIC
USING MATLAB 7.0
Eneng Tita Tosida, Sri Setyaningsih, Agus Sunarya
Computer Science Department, Faculty of Mathematic & Natural Sciences, Pakuan University
1) [email protected], 2) [email protected], 3) [email protected]
Abstract
A diagnostic system for Dermatitic based on fuzzy logic was constructed with seven indication variables. These variables have different intervals and are used for determining the status of the domains in the membership functions of the variables. The domain classification is identified as: very light, light, medium, heavy, chronic. The classification was obtained by intuition and confirmed with the expert. The membership functions were built based on a fuzzy rule base consisting of 193 rules and form part of the fuzzification input. The system was implemented using MATLAB 7.0. The output was a Dermatitic diagnosis divided into: Static Dermatitic (10-20%), Seboreic (21-35%), Perioral Dermatitic (36-45%), Numular Dermatitic (46-55%), Herphetymorfic Dermatitic (66-80%), Athopic Dermatitic (81-90%), Generalyseate Expoliate Dermatitic (91-97%).
Keywords: dermatitic, fuzzy logic, domain, membership function, fuzzy rule base.
1. INTRODUCTION
The application of computer for disease diagnostic is
helpful with fast and accurate result. In disease diagnostic,
paramedics often seem to be doubt full since some of
diseases have indication that almost the same. Therefore
model of Fuzzy logic needed to solve the problem.
Fuzzy logic (obscure logic) is a logic which faced with
half of true concept between 0 and 1. Development of
theories shows that fuzzy logic can be used to model any
systems including Dermatitic Diagnostic
Crisp input is converted into fuzzy data through the fuzzy membership functions by fuzzification; conversely, defuzzification converts the output back into the desired data, i.e. the result of the Dermatitic diagnosis.
The selected programming language is MATLAB 7.0, equipped with the Fuzzy Logic Toolbox, which forms the fuzzy inference system (FIS). To facilitate interaction between users and the system, MATLAB 7.0 provides a Graphic User Interface (GUI) using script *.m files.
2. PROBLEM ANALYSIS
The diagnosis of Dermatitic is based on the examination of physical indications and the medical patient's complaints, which are then defined as fuzzy variables. The indication variables include itchiness, redness, swelling, skin scab, skin scale, skin blist and skin rash. For determining the domains of the fuzzy associations, direct interviews with the expert were used.
Table 1. The Fuzzy variables
3. FORMING FUZZY ASSOCIATIONS AND THE MEMBERSHIP FUNCTIONS OF THE SYSTEM INPUT-OUTPUT VARIABLES
The function model for the starting and ending fuzzy regions of the variables was a shoulder-form curve, while for the crossings a triangle curve was used (Kusumadewi, 2004). The domains of the fuzzy associations that were formed can be seen in Table 2.
Table 2. Fuzzy Associations
The membership function for the itchiness input variable can be seen in Figure 3.
Figure 3. Itchiness membership function
The membership function of the whole itchiness input variable is defined as :
From the rule above, the itchiness variable can be categorized as: very light, light, medium, heavy, chronic. The same process is carried out for all the membership functions of the input variables, shown in Figures 4-9 respectively.
Figure 4. Redness membership function
Figure 5. Swelling membership function
Figure 6. Skin scab membership function
Figure 7. Skin scale membership function
Figure 8. Skin blist membership function
Figure 9. Skin rash membership function
The membership function of the disease output variable can be seen in Figure 10.
Figure 10. Disease membership function
The membership function of the disease output variable can be defined as :
Based on the modeling process and the verification by the expert/physician for the Dermatitic diagnostic system, 193 fuzzy rules were obtained.
4. DESIGN AND SYSTEM IMPLEMENTATION
The flowchart of the Dermatitic diagnostic system based on fuzzy logic is shown in Figure 11.
Figure 11. Flowchart of system
The defuzzification process aims to change a fuzzy value into a crisp one. In the Mamdani rule composition there are several defuzzification methods; one of them is the Centroid Method (Composite Moment). With this method, the crisp value of the output variable is computed by finding the value z* (the centre of gravity) of its membership function:

z^{*} = \frac{\int \mu_C(z)\, z \, dz}{\int \mu_C(z)\, dz}

The results of the system implementation step with MATLAB 7.0 are shown in Figures 12-14, respectively:
Figure 12. Membership Function
Figure 13. Rule Editor
Figure 14. Fuzzyfication process
The running of the main form of the programme can be seen in Figure 15:
Figure 15. Main form and Diagnostic process
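As a rough illustration only (the toolbox performs this integration internally), a discrete version of the centroid calculation can be sketched in C as below; the sample points and membership degrees are hypothetical and are not taken from the paper's rule base.

/* Discrete illustration of Centroid (Composite Moment) defuzzification:
   z* = sum(mu(z_i) * z_i) / sum(mu(z_i)).  The arrays below are made-up
   sample data; MATLAB's Fuzzy Logic Toolbox does the equivalent internally. */
#include <stdio.h>

double centroid(const double z[], const double mu[], int n)
{
    double num = 0.0, den = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        num += mu[i] * z[i];    /* weighted sum over the output domain points */
        den += mu[i];           /* total membership "area" */
    }
    return (den > 0.0) ? num / den : 0.0;
}

int main(void)
{
    double z[]  = { 10, 20, 30, 40, 50, 60, 70, 80, 90 };              /* output domain (%) */
    double mu[] = { 0.0, 0.1, 0.35, 0.65, 0.65, 0.35, 0.1, 0.0, 0.0 }; /* clipped output set */
    printf("z* = %.2f\n", centroid(z, mu, 9));
    return 0;
}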
On the diagnostic form there were seven inputs (each with value = 65); these values were entered by the user and each had a certain fuzzy value. The diagnostic result was 75, obtained from the FIS, and Herphetymorfic Dermatitic was the kind of disease given as the diagnostic result.
5. VALIDATION
The following case is a comparison between the expert and the programme output data. If itchiness = 17%, redness = 24%, swelling = 15%, skin scab = 13%, skin scale = 25%, skin blist = 19% and skin rash = 22%, then the membership degrees of each input are found according to the membership functions above. The fuzzy membership values of the itchiness variable in each association are:
Very light fuzzy association [17] = 0.65, found from f[17] = (30-17)/20 = 0.65
Light fuzzy association [17] = 0.35, found from f[17] = (17-10)/20 = 0.35
Medium fuzzy association [17] = 0
Heavy fuzzy association [17] = 0
Chronic fuzzy association [17] = 0
The same procedure is applied to all the input variables.
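The short C sketch below only mirrors the two calculations shown above for itchiness = 17; the breakpoints 10 and 30 are inferred from the divisions by 20, while the real breakpoints of every fuzzy set are those defined in Table 2.

/* Mirrors the membership calculations above for itchiness = 17, assuming
   that "very light" falls linearly from 1 at x = 10 to 0 at x = 30 and
   "light" rises linearly from 0 at x = 10 to 1 at x = 30 (inferred values). */
#include <stdio.h>

static double falling(double x, double lo, double hi)    /* descending shoulder edge */
{
    if (x <= lo) return 1.0;
    if (x >= hi) return 0.0;
    return (hi - x) / (hi - lo);
}

static double rising(double x, double lo, double hi)      /* ascending edge */
{
    if (x <= lo) return 0.0;
    if (x >= hi) return 1.0;
    return (x - lo) / (hi - lo);
}

int main(void)
{
    double x = 17.0;
    printf("very light[17] = %.2f\n", falling(x, 10.0, 30.0));  /* (30-17)/20 = 0.65 */
    printf("light[17]      = %.2f\n", rising(x, 10.0, 30.0));   /* (17-10)/20 = 0.35 */
    return 0;
}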
6. Fuzzy Inference
The fuzzy inference process uses the min-max rule: the highest value is then taken from the result of the first calculation using the OR operation.
Based on the result, it could be determined that the selected rule was rule no. 65; therefore the diagnostic result was 65%, i.e. the medical patient suffers from Static Dermatitic. The output of the system is shown in Figure 16.
Figure 16. Output of diagnostic
The output produced a certain Dermatitic diagnosis. One of the problems in this research was determining the fuzzy membership functions when building the system, since no standard form had yet been released by the expert; therefore the results obtained in the real-data examination did not fully agree with part of the output data of the programme.
7. REFERENCES
[1] Abdurohman, A., Bab 2, http://www.geocities.com/arsiparsip/tatf/ta-bab2.htm, 2001.
[2] Goebel, G., An Introduction To Fuzzy Control Systems, Public Domain, http://www.faqs.org/faqs/, 2003.
[3] Gunaidi, AA., The Shortcut MATLAB Programming, Informatika, Bandung, 2006.
[4] Kristanto, A., Kecerdasan Buatan, Graha Ilmu, Yogyakarta, 2004.
[5] Kusumadewi, S., Analisis & Desain Sistem Fuzzy Menggunakan Toolbox Matlab, Graha Ilmu, Yogyakarta, 2002.
[6] Kusumadewi, S. & Purnomo, H., Aplikasi Logika Fuzzy untuk pendukung keputusan, Graha Ilmu, Yogyakarta, 2004.
[7] Marimin, Teori dan Aplikasi sistem pakar dalam teknologi manajerial, IPB Press, Bogor, 2005.
[8] Panjaitan, L.W., Dasar-dasar Komputasi Cerdas, C.V ANDI OFFSET, Yogyakarta, 2007.
[9] Piattini, Galindo & Urrutia, Fuzzy Databases Modeling, Design and Implementation, Idea Group Publishing, London, 2007.
[10] Sugiharto, A., Pemrograman GUI (Graphic User Interface) dengan MATLAB, C.V ANDI OFFSET, Yogyakarta, 2006.
[11] Sumathi, Sivanandam & Deepa, Introduction to Fuzzy Logic using MATLAB with 304 figures and 37 tables, Springer, Berlin, 2007.
[12] http://www.fuzzytech.com, accessed 25 December 2007.
[13] http://www.medicastore.com\med\kategori_pyk18e5.html, accessed 6 June 2007.
[14] http://www.medicastore.com/med/subkategori_pyk04f0.html?idktg=14&UID=20071118183038202.182.51.230, accessed 1 January 2008.
Paper
Saturday, August 8, 2009
14:20 - 14:40
Room AULA
Design of Precision Targeting for an Unmanned Underwater Vehicle (UUV) Using a Simple PID Controller1
Sutrisno, Tri Kuntoro Priyambodo, Aris Sunantyo, Heru SBR
a Departement of Mechanical and Industrial Engineering, Gadjah Mada University, Jl. Grafika 2 Yogyakarta 52281. Email: [email protected], Phone: 08122698931
b Faculty of Mathematics and Natural Sciences, Gadjah Mada University
c Departement of Geodetical Engineering, Gadjah Mada University
ABSTRACT
Precision targeting for a torpedo using a simple PID controller has been performed to obtain a solution model. The system is assumed to have a two-dimensional character, so that the mechanical control action is performed solely by the rudder. A GPS/IMU system is employed in the model to provide the exact location and the current trajectory direction, and is used to compare the instantaneous correct direction with the instantaneous current direction. This difference drives the PID control system to give the correct deflection angle of the rudder. Some parameters of the PID controller have to be well tuned, employing several schemes including the Routh-Hurwitz stability criterion.
Keywords: Torpedo, UUV, PID Controller, Precision Targeting
Nomenclature:
θ : drift angle
G : centre of gravity
F : rudder force
T : propeller thrust
Rx : resistance (drag)
Ry : additional resistance due to turning motion
J : polar moment of inertia from M' to G
Δθ : Δθ = θ0 − θ1
α : rudder deflection angle
ρ : instantaneous radial curvature
Introduction
About two-thirds of the earth is covered by oceans, and about 37% of the world population lives within 100 km of the ocean (Cohen et al., 1997). The ocean is generally overlooked as we focus our attention on land and atmospheric issues, and we have not been able to explore the full depth of the ocean and its abundant living and non-living resources. However, a number of complex issues due to the unstructured, hazardous undersea environment make it difficult to travel in the ocean. The demand for advanced underwater robot technologies is growing and will eventually lead to fully autonomous, specialized, reliable underwater robotic vehicles. A self-contained, intelligent, decision-making AUV is the goal of current research in underwater vehicles.
Hwang et al. (2005) have proposed an intelligent scheme to integrate an inertial navigation system / global positioning system (GPS) with a constructive neural network (CNN) to overcome the limitation of current schemes, namely Kalman filtering. The results have been analyzed in terms of positioning accuracy and learning time. The preliminary results indicate that the positioning accuracy was improved by more than 55% when the multi-layer feed-forward neural network and the CNN-based scheme were implemented.
Huang and Chiang (2008) have proposed a low-cost attitude-determination GPS/INS integrated navigation system. It consists of an ADGPS receiver, an NCU, and a low-cost MEMS IMU. The flight test results show that the proposed ADGPS/INS integrated navigation system gives reasonable navigation performance even when anomalous GPS data are provided.
Koh et al. (2006) have discussed the design of a control module for a UUV. Using modelling, simulation and experiment, the vehicle model and its parameters were identified. The model controller gain values were designed using a non-linear optimization approach. Swimming pool tests have shown that the control module was able to provide reasonable depth- and heading-keeping action.
Yuh (2000) has surveyed some key areas in the current state of the art of underwater robotic technology. Great efforts have been made in developing AUVs to overcome the challenging scientific and engineering problems caused by the unstructured and hazardous ocean environment. In 1990, 30 new AUVs had been built worldwide. With the development of new materials, advanced computing and sensory technologies, as well as theoretical advancement, R&D activities in the AUV community increased. However, this is just the beginning for more advanced, yet practical and reliable AUVs.
PROBLEM DEFINITION
The focal point of this paper is the development of Indonesian defense technology. Indonesian defense technology should not depend strongly on foreign technology; we have to develop our own. The components of our military technology should be obtainable on the open market without any fear of becoming the victim of an embargo.
Fig. 2. Several possible trajectories for a UUV as its current direction θ1 is steered toward the reference direction θ0. The response can be a) a wrong trajectory due to not enough correction capability, b) and c) enough correction such that the response has a smooth or a sinusoidal character, and d) a wrong trajectory due to too much correction capability.
Therefore we have to initiate our basic defense technology ourselves, creating products based on an alternative strategy: avoiding further dependence on foreign technology while further strengthening our basic military technology.
In this paper we present the development of a basic torpedo steering control using a simple controller, where the end result should nevertheless have a high precision capability.
Abdel-Hamid et al. (2006) have employed an offline pre-defined fuzzy model to improve the performance of integrated inertial measurement units (IMU) utilizing micro-electro-mechanical sensors (MEMS). The fuzzy model has been used to predict the position and velocity errors, which are inputs to a Kalman filter during a GPS signal outage. The test results indicate that the proposed fuzzy model can [...] during a GPS signal outage.
Fig. 1. A torpedo is one of the unmanned underwater vehicles, one branch of defence technology.
MODELING SOLUTIONS
The torpedo system has a hull, at which the centre of gravity is located and on which the centrifugal force and acceleration act. The system, assumed to be two-dimensional in character, has a rudder through which the resulting control action is applied so that the movement is directed toward the right target.
a. The Governing Equations
The complete system of forces acting on the torpedo vessel at any instant is shown in Fig. 4, where ρ is the instantaneous radius of curvature of the path. Let the components of F and R in the directions of the X and Y axes be denoted by the corresponding subscripts, and let the inertia forces be denoted as shown; then we have the governing equations (Rossell and Chapman, 1958):

M' \frac{dv_i}{dt} \cos\theta = M' \frac{v_i^2}{\rho} \sin\theta + R_x + F_x - T   (1)
M' \frac{dv_i}{dt} \sin\theta = - M' \frac{v_i^2}{\rho} \cos\theta + R_y - F_y   (2)
J' = F p - R q = N   (3)

Fig. 4. The complete system of the forces acting on the vessel at any instant. The applied forces are the rudder force F, the hull resistance R, and the propeller thrust T.
b. For steady turning:
M' \frac{v_i^2}{\rho} \sin\theta + R_x + F_x - T = 0   (4)
- M' \frac{v_i^2}{\rho} \cos\theta + R_y - F_y = 0   (5)
F p - R_y a = N, so that R_y = (F p - N)/a   (6)

c. The Result Equation for Controlling
From (5) one finds
- M' \frac{v_i^2}{\rho} \cos\theta + F \cos\alpha = 0,
then
…(8)
and from (4) one finds
…(9)
therefore
…(10)

Fig. 3. Deflection of the rudder (α) as a response of the control system. The rudder produces rudder lift and rudder drag.
d. Controlled System
The negative-feedback control system for the whole torpedo is illustrated in Fig. 5. The unmanned underwater vehicle dynamics, such as those of the torpedo, have been modelled as an input (α) - output (θ) system.
In the torpedo vehicle several different ways can be used to measure the current direction (θ1), such as a GPS and IMU system. The measured current direction (θ1) is compared with the reference direction (θ0), and one then finds the instantaneous angle difference Δθ:
Δθ = θ0 - θ1
We use a combination of GPS/IMU to determine the current torpedo direction, by calculating the reference direction between the target reference point and the current torpedo location measured by the GPS/IMU system. The current torpedo direction is measured as the tangent to the current trajectory. The resulting instantaneous direction angle difference is input to the PID controller.
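The paper does not give the discrete control law itself; purely as an illustration, a standard textbook PID acting on the heading error Δθ = θ0 − θ1 and producing the rudder deflection α can be sketched in C as below. The gains K, τi, τD, the sampling period and the saturation limits are hypothetical placeholders, not the authors' tuned values.

/* Illustrative discrete PID on the heading error (not the authors' tuned
   controller): alpha = K*(e + integral(e)/tau_i + tau_d*de/dt),
   with e = theta_ref - theta_current.  All numbers are placeholders. */

typedef struct {
    double K, tau_i, tau_d;    /* proportional gain, integral and derivative times */
    double dt;                 /* sampling period of the GPS/IMU measurements [s]  */
    double integral, prev_e;   /* controller state */
} pid_ctrl;

double pid_rudder(pid_ctrl *c, double theta_ref, double theta_cur)
{
    double e = theta_ref - theta_cur;            /* delta-theta from GPS/IMU */
    double deriv, alpha;

    c->integral += e * c->dt;
    deriv = (e - c->prev_e) / c->dt;
    c->prev_e = e;

    alpha = c->K * (e + c->integral / c->tau_i + c->tau_d * deriv);

    /* saturate to a plausible mechanical rudder range (placeholder limits) */
    if (alpha >  0.6) alpha =  0.6;              /* roughly +-35 degrees, in radians */
    if (alpha < -0.6) alpha = -0.6;
    return alpha;                                /* rudder deflection angle alpha */
}

int main(void)
{
    pid_ctrl c = { 1.2, 5.0, 0.4, 0.1, 0.0, 0.0 };  /* hypothetical K, tau_i, tau_d, dt */
    double alpha = pid_rudder(&c, 0.0, 0.2);        /* heading currently 0.2 rad off target */
    (void)alpha;                                    /* would be sent to the rudder actuator */
    return 0;
}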
d. Sensitivity criteria
The complete control system using a simple PID controller to control the torpedo dynamics so that it hits the target precisely is presented in Fig. 5. The instantaneous direction angle difference (Δθ) that drives the PID controller to produce the precise rudder deflection angle (α) is illustrated in Fig. 6. The system presented in Fig. 6, or formulated in Eq. (10), contains some functional characteristics of the rudder (F, Fx, Fy) and of the hull drag (Rx, Ry) which have to be supplied with actual data. Finally, the parameters to be adjusted for the PID controller, K, τi and τD, can be solved for using the root-locus method, the Routh stability criterion and the Hurwitz stability criterion.
CONCLUSIONS
In conclusion, precision targeting for unmanned underwater vehicles such as a torpedo using a simple controller has been designed. It consists of a PID controller manipulating the control surface to obtain the right direction toward the target precisely. The process block diagram has to be analyzed using fluid-flow force balances; from the resulting fluid dynamics of the torpedo system, the PID controller can be designed and tuned using the Routh-Hurwitz stability criterion.
Fig. 5. The complete control system using a simple PID controller to control the UUV dynamics to hit the target.
Fig. 6. The instantaneous direction angle difference drives the PID controller to produce the precise rudder deflection angle.
REFERENCES
Cohen, JE and Small, C (1997) Estimates of coastal populations, Science, Vol. 278, pp. 1211-1212.
Huang, YW and Chiang, KW (2008) An intelligent and [...] system, GPS Solution, Vol. 12, pp. 135-146.
Hwang, Oh, Lee, Park, and Rizos (2005) Design of a low-cost attitude determination GPS-INS integrated navigation system, GPS Solution, Vol. 9, pp. 294-311.
Koh, Lau, Seet, and Low (2006) A Control Module Scheme for UUV, J. Intell. Robot System, Vol. 46, pp. 43-45.
Rossell, HE and Chapman, LB (1958) Principles of Naval Architecture, Vol. II, New York.
Yuh, J (2000) Design and control of Autonomous Under[...], p. 24.
[...] modeling, GPS Solution, Vol. 10, pp. 1-11.
(Footnotes)
1 Paper presented at the International Conference on Creative Communication and Innovative Technology 2009 (ICCIT-09), August 8th, 2009, Jl. Jenderal Sudirman No. 40, Tangerang, Banten, Indonesia.
Paper
Saturday, August 8, 2009
16:45 - 17:10
Room L-210
Ontology implementation within e-Learning Personalization System
for Distance Learning in Indonesia
Bernard Renaldy Suteja
Jurusan Teknik Informatika, Fakultas Teknologi Infomasi UK. Maranatha; [email protected]
Suryo Guritno
Ilmu Komputer Universitas Gadjah Mada; [email protected]
Retantyo Wardoyo
Ilmu Komputer Universitas Gadjah Mada; [email protected]
Ahmad Ashari
Elektronika dan Instrumentasi Universitas Gadjah Mada; [email protected]
Abstract
Website is the realization of the internet technology. Nowadays, as seen from the usage trend, the website has evolved. At
the beginning, website merely adopts the need for searching and browsing information. The initial step of the raise of this
website is often recognized as web 1.0 technology. At present, web 2.0 technology, which enables well web-to-web
interaction, has come. Kinds of interaction such as changing information (sharing), in the form of document (slideshare),
picture (flickr), or video (youtube), information exploitation (wikipedia), and also online communities creation (weblog,
web forum) are principally services that involve communities (the core of web 2.0). These matters bring impacts caused by the rise of social interaction in the virtual world (the internet), which is followed by the appearance of learning interaction and anywhere-anytime training, termed e-Learning. Basically, online learning requires a self-learning method and a learning habit which, unfortunately, only a few Indonesian human resources possess. This condition is made worse by present e-Learning systems that focus merely on delivering the same learning substance content to every learner, abandon the cognitive aspect, do not offer an interactive self-learning experience, and also abandon the adaptation of the system to the user. Therefore, successful e-Learning in Indonesia needs an e-Learning system that applies web 2.0 technology, which urges the learner to participate actively, and a system that stresses personalization, such as comprehension ability, adaptivity to the learner's capability level, and knowledge-resource support.
Within the constructed e-Learning system, ontology is going to be applied as the representation of meaning of knowledge
formed by the learner who uses the system.
Keywords: e-Learning, personalization e-Learning, adaptive e-Learning, ontology
1. Background
The vast use of internet in the present time by people in
developed countries and developing countries like Indonesia has changed the way of living especially in each
operational activity. According to Internet World Stat, Indonesian netters reach 20 million up to 2007 and this number is recorded on the list number 14 after Canada. Internet
has changed the paradigm of place and distance that is
previously seemed far to be nearer. Therefore the use is
badly needed in Indonesia that geographically has thousands island.
Web site is the realization of the internet. As seen from the
usage trend up to now the web has evolved. At the beginning, website merely adopts the need for searching and
browsing information. The initial step of the raise of this
website is often recognized as web 1.0 technology. At
present, web 2.0 technology, which enables well web-toweb interaction, has come. Kinds of interaction such as
changing information (sharing), in the form of document
(slideshare), picture (flickr), or video (youtube), information exploitation (wikipedia), and also online communities
creation (weblog, web forum) are principally a service that
involves communities (the core of web 2.0). These matters
bring impacts like the increasing number of social interactions in virtual world wide (the internet) that is followed by
the appearance of learning interaction and training anywhere-anytime which is termed as e-Learning.
The development of e-Learning itself has successfully dragged the attention of many parties like industry and education. The existence of e-Learning in industry
has increased employees’ competency. For instance,
Mandiri Bank has launched Learning Management System (LMS) to train about 18 thousand employees spread
over 700 branches (Swa Magazine, 2003). To add, CISCO,
PT SAP Indonesia, PT Telekomunikasi Indonesia and IBM
Indonesia have applied e-learning system to develop their
human resources (Sanjay Bharwani, 2004). As well as in
education, e-Learning has given the change point of view
for teaching-learning process. Based on ASTD (American
Society for Training and Development) survey result in
2004, 90% of US Universities have more than 10.000 students who use e-Learning. While in business, the percentage reaches 60% (Ryann Ellis, 2004).
Simply, e-Learning in education is a process of
teaching-learning through a computer connected to the
internet, which all facilities provided in the learning venue
are functionally changeable with certain applications.
Learning substances are downloadable, while interaction
between teacher and students in the form of assigning
tasks can be done intensively in the form of discussion or
video conference.
In Indonesia the regulation from government or
related department to support the realization of e-Learning
for education is implied in Decree no.20 Year 2003 about
National Education System clause 31 and the National
Education Minister’s Decree and Act no. 107/U/2001 about
PTJJ, specifically permits education manager in Indonesia
to manage education through PTJJ by using IT.
Similar to e-Learning development, vendors of
system development appear, starting from open source
based system such as Moodle, Dokeos, Sakai etc to proprietary like Blackboard (Web CT). The vast development
of open source based system is due to the small amount of
e-Learning system investment. The investment includes
hardware and software if it is compared to learning conventionally. To mention, several universities in Indonesia
and overseas have applied this e-Learning system.
Yet, it is not a guarantee that the increasing number of e-Learning system supports the learning transformation or learning application itself. In 2000, a study held
by the Forrester Group showed that 68% refused the e-Learning training concept. Meanwhile, another study indicated
that from all registered e-Learning participant 50%-80%
did not accomplish the training (Delilo, 2000). It is similar
with e-Learning system application in Indonesia. The worst
thing is mostly established e-Learning systems are unusable at the end. Basically, online learning requires selflearning method and learning habit, which is –unfortunately- possessed by a few Indonesian human resources
only. This condition is being worst by the present e-Learning system that focuses merely on the delivery process of
the same learning substance content toward the existing
learner, abandon the cognitive aspect and it does not offer
approach or an interactive self-learning experience and also
abandon the adaptation aspect of user with the system.
Therefore, successful e-Learning in Indonesia needs eLearning system that applies web 2.0 technology which
urges the learner to actively participate and supported by
the system which stresses on the personalization such as
comprehensive ability, adaptive to levels of learner’s capability and possessing knowledge resources support.
Online learning that needs self-learning method
and habit to learn will be realized into an e-Learning system by using web 2.0 technology (wiki, blog, flickr, and
youtube) which focuses on the communities employed
service. Content learning will be collected from knowledge
resources web 2.0 based in which the metadata is managed using pedagogy ontology.
Ontology, a knowledge representation on a
knowledge base that is formed later, is used as a part toward system user in the formed social network. To sum up,
e-Learning system that stresses on the personalization such
as the ability to accommodate cognitive aspect of the user,
understandable and adaptive toward various users -at the
end- is capable to increase learner motivation of e-Learning system user.
2. Theoretical Review
2.1. e-Learning and Content
Electronic learning or e-Learning is a self-learning process facilitated and supported through the use of ICT [1]. Generally, based on their interactivity, the e-Learning systems being developed nowadays are classified into 2 groups:
• Static learning. The system user can only download the needed learning substance (content), while the administrator can only upload substance files. An actual learning situation, such as communication, is absent in this system. The system is useful for those who can learn by themselves from the readers supplied on the system in the form of HTML, PowerPoint, PDF or video. If it is used, the system can functionally support teaching-learning activities done face-to-face in class.
• Dynamic learning. The facilities offered are more varied than in the first system: discussion forum, chatting, e-mail, learning evaluation tools, user management and electronic substance management are available. These enable the users (students) to learn in a learning environment similar to a classroom situation. This second system can be used to help the transformation of the learning paradigm from teacher-centered to student-centered. It is no longer the instructor who actively delivers the substance or asks students about indigestible substance; here the students are trained to learn critically and actively. The e-Learning system that is developed may use a collaborative learning approach or learning from the process of solving a given problem (problem-based learning).
The relation between the learning conditions and the appropriate facilities can be seen in the table below (adopted from Distance Learning and Sun Microsystems [2]):
Picture 1. Comparison of distance learning
e-Learning content is any digital resource that is used to support the learning process. E-Learning content can be categorized into 2 parts:
- textual, including text-based content such as plain text and PDF
- non-textual, including multimedia content such as audio, visual material and animation
Textual content can be found easily through a search engine (like Google or Yahoo) by typing keywords; however, only a skillful person can obtain the needed content from the found results and then combine it. Non-textual content is not so simple: it is hard to find even when a search engine is used.
Personalization is the next step of the e-Learning evolution. According to Paulo Gomes et al., learners may have various cognitive styles, and efficiency is created by the proper use of the e-Learning system for different backgrounds and capability levels. There are two personalization models: on-line personalization (picture 2), which observes student interaction through the system continuously in real time, always giving the appropriate substance content (Paulo, 2006).
Picture 2. On-line personalization model
Off-line personalization (picture 3) works by combining the provided student data, which is analyzed later to obtain course content change recommendations.
Picture 3. Off-line personalization model
With the appearance of Semantic Web technology, metadata may be added to e-Learning content (including pedagogy attributes) and later organized into an ontology, so that it becomes easier to distribute, discover and use the content in a better way. In this way it is not only humans who can easily find and organize the needed content, but also smart agents. A smart agent in the application will find and organize content from heterogeneous content sources and then combine it into customized courseware according to specific criteria and other rules. This customized courseware refers to groups of content (sourced from heterogeneous content) where related content and pedagogy are supported (Renaldy and Azhari, 2008).
2.2. e-Learning standardization
There are e-Learning standards that must be used as a reference for system development:
2.2.1. LTSC
LTSC was established by the Institute of Electrical and Electronics Engineers (IEEE), which has created many technology standards for electrical engineering, information technology and science. The aim of LTSC is to form accreditation of technical standards, give training recommendations, and act as a reference in learning technology.
2.2.2. IMS
IMS is an important organization in the e-Learning community: it is a consortium of academic institutions, companies and governments formed to build and support open specifications for learning distribution and content development, and also for student exchange among different systems.
2.2.3. ADL
ADL created the Shareable Courseware Object Reference Model (SCORM). SCORM is a standard specification for reusability and interoperability of learning content [7]. SCORM focuses on two important aspects of learning-content interoperability:
- defining an aggregation model to wrap learning content;
- defining an API usable for communication between the learning content and the system that uses it.
SCORM divides the learning technology into:
- Learning Management System (LMS)
- Shareable Content Objects (SCOs)
Picture 4. Components of SCORM 1.2.
Picture 4. Component of SCROM 1.2.
There are many tools that use SCORM, such as eXe-Learning.
Picture 6. Implementation of SCORM in Moodle
2.3. Semantic Web Technology
The semantic web is the development of the next web generation, commonly termed the evolution of the WWW (World Wide Web), introduced in 2002. The semantic web is defined as a group of technologies that enable the computer to comprehend the meaning of information based on metadata, namely information about the content. With the existence of metadata, the computer is expected to interpret the input so that the result displayed is more detailed and exact. The metadata format defined by the W3C (World Wide Web Consortium) is the Resource Description Framework (RDF). Each RDF unit has three components, namely subject, predicate and object. The subject and object are entities referred to by the text, while the predicate is the component that explains the point of view of the subject, which is explained by the object; for example, in the triple "Course X - assessed_by - Quiz Y", "Course X" is the subject, "assessed_by" the predicate and "Quiz Y" the object. The most interesting thing about RDF is that an object can itself become a subject that is explained later by another object. In this way an object or input can be explained clearly, in detail, and in accordance with the will of the user who gives the input.
To reach this goal, it is necessary to give meaning to each piece of content (as attributes), which is used by semantic web technology in several layers:
Picture 7. Semantic Web layers
Picture 5. Use of SCORM in eXe-Learning
Its use in e-Learning is already supported, for example by the open-source e-Learning platform Moodle.
- XML Layer, represents the data
- RDF Layer, represents the meaning of the data
- Ontology Layer, represents general rules/agreements about the meaning of the data
- Logic Layer, applies intelligent reasoning with meaningful data.
Semantic web technology can be used to build a system that collects e-Learning content from different sources to be processed, organized and shared with users or artificial agents by using an ontology. There are three important technologies involved in the semantic web, namely: eXtensible Markup Language (XML), Resource Description Framework (RDF), and the Ontology Web Language (OWL).
2.3. Ontology Web
Ontology has many definitions, as explained in various sources including those given by scientists. Neches et al. give the first definition of ontology: an ontology is a definition of the basic understanding and vocabulary relations of an area, together with rules for combining terminology and relations to define the vocabulary.
Gruber's definition, which is the one mostly used, is: "Ontology is an explicit specification of a conceptualization."
Barnaras, in the CACTUS project, defines ontology based on its development: "ontology gives an understanding for the explicit explanation of concepts toward knowledge representation on a knowledge base" [5].
There are also books that define ontology; one of them is "The Semantic Web", which defines ontology as:
• XML provides a syntax for structured document output, but imposes no semantic constraints on XML documents.
• XML Schema is a language for restricting the structure of XML documents.
• RDF is a data model for objects (resources) and their relations; it provides simple semantics for that data model, and it may be serialized in XML syntax.
• RDF Schema is a vocabulary for describing properties and classes of RDF resources, with semantics for generalization hierarchies of properties and classes.
• OWL adds more vocabulary for describing properties and classes, such as relations between classes (for instance, disjointness), cardinality (e.g. "exactly one"), equality, richer typing of properties, characteristics of properties (for example, symmetry), and enumerated classes.
Each of the languages that make up an ontology, as explained above, has a certain position in the ontology structure. Each layer adds function and complexity to the previous layer, and a user with the lowest processing function may still comprehend, although not completely, an ontology placed above it [4].
1) A branch of metaphysics that focuses on the nature of and the relationships among beings
2) A particular theory about the nature of being
Ontology is a theory about the meaning of an object, the properties of an object, and the relations among objects that may occur in a knowledge domain. From a philosophical point of view, ontology is the study of what exists; it is a concept that systematically explains everything that is real. In the field of Artificial Intelligence (AI), ontology has two related definitions: first, ontology is a representational vocabulary specialized for a domain or a certain subject of discussion; second, ontology is a body of knowledge for describing a certain discussion. Generally, ontology is used in Artificial Intelligence (AI) and knowledge representation. All fields of knowledge may use the ontology method to connect and communicate with one another for information exchange among different systems.
Picture 8. Ontology layers
In every layer, each part has its own function:
• XML has the function of storing the web page content
• RDF is the layer that represents the semantics of the web page
• The ontology layer explains the vocabulary of the domain
• The logic layer retrieves the wanted data
In order to be usable, an ontology must be expressed in a concrete notation. An ontology notation is a formal language for the creation of an ontology. The components that make up its structure are:
Based on ontology and semantic web technology, an education-related platform can be planned and flexibly created, named the OntoEdu architecture. There are 5 components in OntoEdu:
- user adaptation: receives parameters from the user related to the adaptive transformation of the system;
- auto composition: is responsible for assigning tasks in response to the user;
- education ontology: involves the activity ontology and the content ontology;
- service module: a dynamic model used to boost learning distribution;
- content module: a dynamic model used to boost learning content distribution.
3. e-Learning System Design
Picture 9. Ontology-based website organization
2.3. OntoEdu
In OntoEdu, ontology is used to describe the concepts of communication and the relationships of the education platform. Inside OntoEdu there are 2 kinds of ontology involved: the content ontology and the activity ontology.
The education ontology is the core module that rules the other components. By using ontology, OntoEdu may 'learn' knowledge from education specialists and information specialists, and so may automatically wrap it into the wanted content (user content) [3].
The development of smart or intelligent agents for the personalization of e-Learning systems takes part in the e-Learning evolution itself. An agent has the capability to do a task on behalf of something or somebody else. Therefore, by introducing into the e-Learning system an intelligent-agent concept that is assigned to analyze the profile, knowledge quality and capacity of the learner, a more personal and understandable e-Learning system can be obtained. The introduced intelligent agent analyzes the existing learning models; therefore it can be categorized as an intelligent tutoring system. An intelligent tutoring system applies a pedagogical learning strategy: explaining content consecutively, deciding the kinds of feedback received and how the learning substance is delivered and explained.
The agent manages the knowledge resources of the existing web 2.0 technology in the knowledge repository, and their representation in the system is based on the ontology of the learner as well as of the teacher.
Picture 10. OntoEdu layers
Picture 11. Personalization e-Learning Framework
The arrangement is done by creating an e-Learning system that applies asynchronous facilities such as course content, discussion forum, mailing list and e-mail. Next, it is developed into synchronous facilities such as quizzes, chatting and video-conferencing. Then an ontology-based agent is developed, which organizes tags and folksonomies in order to manage the web 2.0 knowledge resources shown in Picture 12, such as wiki, flickr and youtube, which can be used to support course content in the e-Learning system.
The e-Learning system that is built applies the 5 main concepts of an Intelligent Learning System (Picture 14), namely Student Model, Pedagogical Module, Communication Model, Domain Knowledge and Expert Model. In the pedagogical module, the best achievement module is helped by an agent teacher character that is capable of knowing the level of learning capability and of giving motivation in the form of feedback to the e-Learning user, as well as feedback to the teacher involved about the course that is managed in the e-Learning system.
Picture 14. Main concept intelligent learning system
Picture 12. Web 2.0 knowledge resources
In order to support personalization, e-Learning users can customize the e-Learning interface, including managing the knowledge resources collected by themselves or suggested by the system. The knowledge resources are managed by an intelligent agent that applies ontology to represent the knowledge; an illustration of the formation of this knowledge is given in Picture 13.
3.1. Ontology Architecture
A prototype of e-Learning is arranged as follows, using an ontology on education, especially on the teaching part.
In the creation of this ontology, the initial steps are searching and web browsing, then categorizing the discovered substance, and finally processing it by identifying and defining the main concepts and the metadata content [8]. The result of the categorization leads to the following domain concepts for the ontology:
• Courses: identifying a course with its syllabus, notes and course works.
• Teaching material: such as tutorials (articles that explain tasks in detail), lectures (lecture notes/slides in various forms/formats), lab material, books (online books), tools (ready-to-use software), code samples, worked examples and white papers.
• Assessments: quizzes (brief queries with brief answers), Multiple Choice Questions (MCQ), exam tests with open questions, and other forms of test.
• Support Materials: collections (all sources such as homepages and portals), background reading (basic knowledge), forums, and resources that support learning.
• Experts: identified as the experienced-teacher community.
• Institutions: including organizations of teacher resources, experts of the field, and universities/colleges.
Picture 13. Ontology resulting from knowledge representation
As an example, the ontology of courses and experts, which are related to each other, can be obtained as follows:
Picture 15. e-Learning ontology design scheme
3.2. The Use of the Altova SemanticWorks Tool
The Altova SemanticWorks tool is used to arrange the ontology. With Altova SemanticWorks, ontology development is done visually with pictures [6]; RDF, RDFS and OWL documents, with syntax checking, can be created and changed, together with anything related to semantics.
Picture 17. Ontology diagram of Courses and Experts
The representation in RDF of the Courses ontology is:
Picture 18. RDF schema of the Courses and Experts ontology
It appears that the Courses domain is correlated through the property 'assessed_by' with the Assessments domain and through the property 'support_by' with the Support Materials domain.
Picture 16. Ontology based on the RDF metadata format.
3.2. Ontology Testing with pOWL
The compatibility of the produced ontology can be tested by using pOWL. pOWL is a web-based application that is used for collaborative semantic web creation. pOWL has SQL query ability and is based on an API that handles the RDF, RDFS and OWL layers.
Picture 19. Initial view of pOWL
The Classes, Properties and Instances of the created ontology can be seen as follows:
Picture 20. View of Classes, Properties and Instances
Picture 21. Class diagram of the system built
3.2. Ontology-based e-Learning system design
The system is built based on object-oriented programming using the LAMP technology stack and the Prado framework. The class diagram of the system is as follows:
The class E-LearningPage is a descendant of the class TPage. Class E-LearningPage provides methods related to the page (web page), such as page change, page initialization and the display of page content. Next is the description of the existing methods on E-LearningPage and the class TUser. The class E-LearningUser is a descendant of the class TUser; it is used to meet the need for information about the data of the user who logs in. The class E-LearningDataModule is a descendant of the class TModule; it is used to fulfill the need for a connection with the database.
4. System Implementation
At the beginning of operation, the system requests user authentication. Each registered account has different access rights and a different ontology created from the knowledge formed at each learning level of the user.
Picture 24. User Profile view
4.1. The management of exercises and examinations
The following is the implementation of the user interface for managing exercises in the e-Learning application to be developed. The presented exercise is combined with the analyses made by the ontology agent of the learner's capability and of the formed knowledge repository, so the exercise given to the learner will match the learner's level of adaptation.
Picture 22. Initial system view
Initially, it displays the news, events and newest articles that are administered by the system administrator.
Picture 23. View of news, events and articles
A user who logs in to the system will find a pull-down menu systematically arranged at the top. These menus depend heavily on the access rights of the user. The Profile Setup will be the basic reference of the agent in analyzing the capability grade and the adaptation of the system to the user.
Picture 25. Tampilan Manage Soal dan Ujian
264
4.2. Manage Assignment Implementation
The following is the user interface implementation for managing assignments in the e-Learning application to be developed. The presented assignment is combined with the ontology agent's analysis of the learner's capability and the formed knowledge repository, so that the learner receives assignments that match the learner's capability level and adaptation. The assignment given by the agent appears together with correlated Web 2.0 knowledge resources such as wikis, Flickr, and so on.
Picture 26. Manage Assignments view

5. Conclusion
The ontology built in this research can create a well-organized e-Learning environment, especially in the use of e-Learning content. In the future, efforts are expected to broaden the development of the ontology domains in order to achieve good integration of the e-Learning system itself and with other systems. The architecture of the e-Learning prototype may use ontologies for education, especially teaching and learning. The conclusions on the use of ontology within the e-Learning system development are:
· Increases learning quality
· Leads the teacher and the learner to relevant information
· Demonstrates the efficiency of retrieval in the e-Learning system (the time consumed to obtain information)
· Creates an agent that handles the ontology-based knowledge repository
· Eases access to the needed information
· Improves and maximizes teaching and/or learning for the user

References
[1] Arouna Woukeu, Ontological Hypermedia in Education: A Framework for Building Web-based Educational Portals, 2003.
[2] Chakkrit Snae and Michael Brueckner, Ontology-Driven E-Learning System Based on Roles and Activities for Thai Learning Environment, 2007.
[3] Cui Guangzuo, Chen Fei, Chen Hu, Li Shufang, OntoEdu: A Case Study of Ontology-based Education Grid System for E-Learning, 2004.
[4] Emanuela Moreale and Maria Vargas-Vera, Semantic Services in e-Learning: an Argumentation Case Study, 2004.
[5] I Wayan Simri Wicaksana et al., Pengujian Tool Ontology Engineering, 2006.
[6] Kerstin Zimmermann, An Ontology Framework for e-Learning in the Knowledge Society, 2006.
[7] Nophadol Jekjantuk, Md Maruf Hasan, E-Learning Content Management: An Ontology-based Approach, 2007.
[8] S.R. Heiyanthuduwage and D. D. Karunaratne, A Learner Oriented Ontology of Metadata to Improve Effectiveness of Learning Management Systems, 2006.
265
Paper
Saturday, August 8, 2009
16:25 - 16:45
Room L-211
The Wireless Gigabit Ethernet Link Development at TMR&D
Azzemi Ariffin, Young Chul Lee, Mohd. Fadzil Amirudin, Suhandi Bujang, Salizul Jaafar, Noor Aisyah Mohd. Akib
System Technology Program, Telekom Research & Development Sdn. Bhd., TMR&D Innovation Centre, Lingkaran Teknokrat Timur, 63000 Cyberjaya, Selangor Darul Ehsan, MALAYSIA [email protected]
Division of Marine Electronics and Communication Engineering, Mokpo National Maritime University (MMU), 571 Chukkyo-dong, Mokpo, Jeonnam, KOREA 530-729 [email protected]
Abstract:
TMR&D is developing a millimeter-wave point-to-point (PTP) Wireless Gigabit Ethernet Communication System. A 60 GHz Low Temperature Co-Fired Ceramics (LTCC) System-on-Package (SoP) transceiver capable of gigabit data rates has been built and demonstrated, with a size of 17.76 x 17.89 mm². A direct Amplitude-Shift Keying (ASK) modulation and demodulation scheme is adopted for the 60 GHz-band transceiver. A BER of 1×10⁻¹² for a data rate of 1.25 gigabit-per-second (Gbps) on 2.2 GHz bandwidth at 1.4 km was demonstrated. This paper reports a new PTP link installed at the TMR&D site to demonstrate wireless gigabit operation and the performance of its key components. The link will operate in the 57.65-63.35 GHz band, incorporating TMR&D's millimeter-wave LTCC SoP RF transceiver module. This LTCC SoP RF transceiver is suitable for short-range wireless networking systems, security camera TV, video conferencing, streaming video such as HDTV (high definition television), and low-power wireless downloading systems.
Key-Words: - PTP link, Wireless gigabit, LTCC, SoP, RF Transceiver.
1 Introduction
Every new generation of wireless networks requires more and more cell sites that are closer and closer together, combined with fast-growing demand for transmission link capacity. Millimeter-wave (MMW) radio has recently attracted a great deal of interest from the scientific world, industry, and global standardisation bodies, due to a number of attractive features of MMW for providing multi-gigabit transmission rates. Wireless broadband access is attractive to operators because of its low construction cost, quick deployment, and flexibility in providing access to different services. It is expected that MMW radios can find numerous indoor and outdoor applications in residential areas, offices, conference rooms, corridors, and libraries. MMW is suitable for in-home applications such as audio/video transmission, desktop connection, and support of portable devices, while outdoor PTP MMW systems connecting cell sites one kilometer apart or closer will offer a huge backhaul capacity.
266
The increasing demand for high-data-rate communications has driven the development of MMW broadband wireless systems. Demands for high-speed multimedia data communications, such as huge data file transmission and real-time high-definition TV signal streaming, are markedly increasing; for example, Gigabit Ethernet networks are now beginning to be widely used. Wireless transmission at 1 Gbps and greater data rates is very attractive [1-2]. Carrier frequencies of wireless communications are also increasing from 2.4 GHz and 5 GHz to MMW bands such as 60 GHz [3]. For wireless communications applications, there has been tremendous interest in utilising the 60 GHz band because of the unlicensed wide bandwidth available, the maximisation of frequency reuse due to absorption by oxygen (O₂), and the short wavelength, which allows very compact passive devices. Commercial wireless PTP links have started to become available in the 57-64 GHz band [4] and in the 71-76 and 81-86 GHz bands. PTP links at 60 GHz can be used in wireless backhaul for mobile phone networks and are able to provide data rates of up to 1 Gbps. Sections of the 57-64 GHz band are available in many countries for unlicensed operation [5-6].
According to the International Telecommunication Union (ITU) Radio Regulations, the bands 55.78-66 GHz, 71-76 GHz, 81-86 GHz, 92-94 GHz and 94.1-100 GHz are available for fixed and mobile services in all three ITU regions, as depicted in Fig. 1. In Europe, the 59-66 GHz band has been allocated for mobile services in general. In the USA and Canada, the 57-64 GHz band is assigned as an unlicensed band. In Japan, the 59-64 GHz band has been made available on an unlicensed basis for millimeter-wavelength image/data systems. In Korea, the 57-64 GHz band is assigned as an unlicensed band.
Fig. 1: Worldwide 60 GHz Band Allocation
The usefulness of 60 GHz PTP links is limited, however, by the additional propagation loss due to O₂ absorption in this band. The specific attenuation of 10-15 dB/km due to atmospheric O₂ makes the 60 GHz band unsuitable for long-range (> 2 km) communications, so it can be dedicated entirely to short-range (< 1 km) communications. For short distances to be bridged in an indoor environment (< 50 m) the 10-15 dB/km attenuation has no significant impact [7].
TMR&D is developing a PTP ultra-broadband wireless link with a data rate of up to 1.25 Gbps on 2.2 GHz bandwidth (BW), using the MMW 60 GHz frequency band. This Wireless Gigabit Ethernet link functions as a media converter, connecting a fiber link to a full-duplex wireless link seamlessly at 1.25 Gbps in both directions. The data input and output interface is a 1000BASE-SX optical transceiver module with LC connectors. Millimeter waves permit more densely packed communication links, providing very efficient spectrum utilisation, and they can increase the spectral efficiency of transmissions within a restricted frequency band.
We completed our first Wireless Gigabit Ethernet system (PTP link demonstrator) incorporating TMR&D's LTCC SoP RF transceiver module in December 2008. It operates in the 57.65-63.35 GHz band and is suitable for ASK data rates of 1 Gbps over a maximum line-of-sight (LOS) path of 1.4 km at BER < 10⁻¹². Outdoor propagation data has been collected since January 2009. Three sites at different distances have been tested, i.e. 0.8 km, 0.9 km and 1.4 km. The V-band transceiver modules include GaAs MMICs together with the IF baseband in a metal housing. The entire LTCC SoP RF transceiver module was designed by TMR&D's researchers; the LTCC fabrication was outsourced to a third party. The transmitter output is 10 dBm and the receiver NF is 8 dB. The antennas were commercially purchased, low-cost Cassegrain types with 48 dBi gain and a beamwidth of 0.6 deg (Figure 2).
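For orientation only, a simple link-budget sketch (not from the paper) using the figures quoted above (10 dBm transmit power, two 48 dBi antennas, a 1.4 km path at 60 GHz, and 10-15 dB/km oxygen absorption) suggests a received power on the order of -40 dBm, consistent with the receiver input level used in the module tests reported later:

import math

def fspl_db(distance_km, freq_ghz):
    # Free-space path loss in dB (d in km, f in GHz).
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

tx_power_dbm = 10.0       # transmitter output quoted in the paper
antenna_gain_dbi = 48.0   # Cassegrain antenna gain, each end
distance_km = 1.4         # longest demonstrated LOS path
o2_loss_db_per_km = 12.5  # mid-range of the 10-15 dB/km O2 attenuation

rx_power_dbm = (tx_power_dbm + 2 * antenna_gain_dbi
                - fspl_db(distance_km, 60.0)
                - o2_loss_db_per_km * distance_km)
print(f"Estimated received power: {rx_power_dbm:.1f} dBm")  # roughly -42 dBm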
Nowadays the 60 GHz band is considered for providing wireless broadband communication, and R&D on 60 GHz technology is very competitive worldwide. Research and development in the 60 GHz band is mandatory and urgent for a future national broadband system.
2 Wireless Gigabit Research at TMR&D
TMR&D has involved several years in microwave and
fabrication process facility. We developed numerous
MMICs for a range of purposes including MMW satellite
receiver, LMDS and MVDS applications. We also developed RF Transceiver for 3G Node B Base Station and IEEE
802.16d WiMAX Subscriber Station. These projects were
funded by the Telekom Malaysia under Basic Research
grant.
Fig. 2: Millimetre wave links at the TMR&D Innovation
Centre, Cyberjaya. A pre-commercial 60 GHz link
267
Table 1: TM MMW PTP Wireless Gigabit Ethernet Communication System Specification
Table 1 shows the system specification of the TM MMW PTP Wireless Gigabit Ethernet Communication System. This system can be used as a PTP link, establishing high-speed backbone networks such as backbone links or wireless backhaul. The system can also be used for satellite, broadcasting and observation purposes. Figure 3 illustrates the PTP system, suitable for applications requiring a high-capacity PTP link of up to 1.25 Gbps Wireless Gigabit Ethernet. The outdoor units are therefore optimised for Ethernet radio links or mobile communication backhauls.
Fig. 3: 60GHz PTP Wireless Gigabit Ethernet Link
This PTP system is among the fastest wireless solutions for PTP links in IP networks, such as Fast and Gigabit Ethernet applications. The interconnection between two endpoints in the last mile can be easily deployed and installed. Current solutions, including voice, leased lines or optical fibers, are expensive to configure and very difficult or impossible to deploy when high data rates are required, so the performance of such links is usually limited.
This system can be deployed with full-duplex security systems, and it can be used as a wireless link between buildings in a downtown or campus area where higher speed is required. A backup link for optical fiber is easily installed when a system needs to be replaced, so services can continue seamlessly even if there are problems on the link path. Other applications of 60 GHz are as below:
a. Wireless high-definition multimedia interface (HDMI). Uncompressed video can be wirelessly transmitted from a DVD player to a flat screen [8].
b. Fast upload and download of high-definition movies. Users can download high-definition movies from a video kiosk onto their mobile device, or at home can download a movie from their mobile device onto a computer.
c. Wireless docking station. A laptop computer can be wirelessly connected to the network, the display, an external drive, the printer, a digital camera, etc.
3 Millimeter-Wave Front End
TMR&D's advanced ASK (Amplitude Shift Keying) transceiver module has high quality and superior performance for transmitting ultra-high-speed digital data at millimeter wave. The maximum data rate is 1.25 Gbps on 2.2 GHz of bandwidth for Gigabit Ethernet applications. We have developed low-cost multi-chip modules (MCMs) based on multilayer LTCC technologies: 60 GHz transmission lines (CPW, MSL, eMSL), BPFs, patch antennas, active modules (PA, LNA, multiplier (MTL), Tx, and Rx), and L-band LPFs. Utilising these technologies, we have developed a 60 GHz-band broadband wireless transceiver named MyTraX (LTCC SoP Transceiver). The block diagram shown in Figure 4 includes the antenna, the diplexer and the ASK LTCC SoP transceiver, with the optical transceiver connected.
Fig. 4: 60GHz Point-to-Point Transceiver block diagram
The ASK LTCC SoP transceiver module consists of a receiver (Rx) block and a transmitter (Tx) block. The transceiver is based on the ASK modulation method, in which a carrier wave is switched either ON or OFF. The Rx block consists of a low noise amplifier (LNA), a demodulator and a low pass filter (LPF), whereas the Tx block consists of a frequency doubler, a modulator (mixer) and a power amplifier (PA).
In the Rx block, the signal received from the antenna is downconverted to Intermediate Frequency (IF) signals and then recovered to the original signals via the baseband. In the Tx block, the IF signals from the baseband are fed in, upconverted to the 60 GHz band, and then transmitted through the antenna (Figure 5).
Fig. 5: Block diagram of the ASK LTCC SoP Transceiver
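As a rough illustration of the modulation principle only (not of the actual RF hardware), the following Python/NumPy sketch generates an on-off keyed carrier from a bit stream and recovers the bits by simple envelope detection; the sample rate, carrier frequency and threshold are arbitrary illustrative values:

import numpy as np

fs = 1_000_000        # sample rate (arbitrary, for illustration)
f_carrier = 100_000   # carrier frequency (scaled down from 60 GHz for the sketch)
samples_per_bit = 100

bits = np.array([1, 0, 1, 1, 0, 1, 0, 0])

# ASK / on-off keying: the carrier is switched ON for a '1' and OFF for a '0'.
t = np.arange(len(bits) * samples_per_bit) / fs
carrier = np.sin(2 * np.pi * f_carrier * t)
baseband = np.repeat(bits, samples_per_bit)
tx_signal = baseband * carrier

# Non-coherent demodulation: rectify, average over each bit period, threshold.
envelope = np.abs(tx_signal).reshape(len(bits), samples_per_bit).mean(axis=1)
rx_bits = (envelope > envelope.max() / 2).astype(int)

print("sent:    ", bits)
print("received:", rx_bits)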
TMR&D is developing several kinds of LTCC (Low Temperature Co-Fired Ceramics) MMW modules in the 60 GHz band. These LTCC modules have superior RF performance, so systems equipped with them can operate more stably. Using multi-layer LTCC-based SoP technology, various research efforts have been made towards compact SoP RF systems. For the 1.25 Gbps wireless Ethernet link, the fabricated 60 GHz Tx and Rx modules were downsized to 13.82 x 6.55 mm² (Figure 6) and 11.02 x 4.31 mm² (Figure 7), respectively. The integration of Tx and Rx produces the LTCC SoP transceiver with a size of 17.76 x 17.89 mm².
Fig. 6: LTCC SoP Transmitter module
Fig. 7: LTCC SoP Receiver module
Figure 8 shows the frequency response of the Tx output power with an IF sweep from 10-1500 MHz and the LO at 58.752 GHz. The peak output power of the ASK-modulated 60 GHz-band signal is plotted versus frequency. The output power is 13 dBm. No resonance or oscillation problems occur in the Tx module. The measured frequency spectrum of the LTCC Tx module is shown in Figure 9.
Fig. 8: Frequency response of peak output power for the Tx module
269
Isolation between the Tx and Rx positions needs to be considered, as it is required to avoid signal losses during transmission. An isolation of 80 dBc is required between the Tx and Rx blocks. When the Tx and Rx blocks are placed in the same area of the transceiver module, the isolation requirement should still be satisfied. No resonance or oscillation problems occur in the transceiver (TxRx) module. Figure 11 shows the TxRx module test result and Figure 12 shows the transceiver module in a metal housing.
Fig. 9: Measured frequency spectrum of the LTCC Tx module
For high Rx sensitivity, low-noise and high-gain components should be chosen. The Rx IF output test is plotted versus frequency, with the sweep frequency from 10 MHz to 1500 MHz. The IF output is marked at -1.33 dBm with an input power of -40 dBm. No resonance or oscillation problems occur in the Rx module. The Rx performance is shown in Figure 10. The NRZ eye pattern is shown at the IF output level with a data rate of 1.25 Gbps.
Fig. 11: LTCC SoP Transceiver module test result
IF Output (Input Power=-40dBm)
Fig. 12: LTCC SoP Transceiver module in a metal
housing
4 Conclusion
IF Output Level (Input Power=-40dBm, 1.25Gbps NRZ
Eye-Pattern)
Fig. 10: LTCC SoP Rx with 7 order LPF module
270
MMW technologies are becoming important for the high-data-rate communications of the future, and research efforts are being made to reduce the cost of MMW front ends. TMR&D has developed the key components to make a PTP Wireless Gigabit Ethernet Communication System possible. It is now integrating the RF LTCC components into a complete link demonstrator for a pilot testbed, to test the Wireless Gigabit Ethernet link and streaming video such as HDTV (high definition television) or TiVo systems, and to obtain outdoor propagation data. The 60 GHz band is available unlicensed worldwide. This 60 GHz technology can enable new businesses and business models, such as corporations and wireless hotspots providing Gigabit Ethernet connectivity, as well as video, to their customers. One of the main applications of these radios is the replacement of fiber at the last mile.
References:
[1] K. Ohata, K. Maruhashi, M. Ito, and T. Nishiumi, "Millimeter-Wave Broadband Transceivers", NEC Journal of Advanced Technology, Vol. 2, No. 3, July 2005, pp. 211-216.
[2] K. Ohata, K. Maruhashi, M. Ito, et al., "Wireless 1.25 Gb/s Transceiver Module at 60 GHz-Band", ISSCC 2002, Paper 17.7, pp. 298-299.
[3] T. Nagatsuma, A. Hirata, T. Kosugi, and H. Ito, "Over-100GHz millimeter-wave technologies for 10Gbit/s wireless link", Workshop WM1 Notes of 2004 IEEE Radio and Wireless Conference, September 2004.
[4] Stapleton L (Terabeam), "Terabeam GigalinkTM 60 GHz Equipment Overview (Invited)", Intern. Joint Conf. of 2004, pp. 7-28.
[5] Federal Communications Commission, "Amendment of Parts 2, 15 and 97 of the Commission's Rules to Permit Use of Radio Frequencies Above 40 GHz for New Radio Applications", FCC 95-499, ET Docket No. 94-124, RM-8308, Dec. 15, 1995. Available via: ftp://ftp.fcc.gov/pub/Bureaus/Engineering_Technology/Orders/1995/fcc95499.txt
[6] http://www.gigabeam.com
[7] P.F.M. Smulders, "60 GHz Radio: Prospects and Future Directions", Proceedings Symposium IEEE Benelux Chapter on Communications and Vehicular Technology, 2003, pp. 1-8.
[8] Harkirat Singh, Jisung Oh, Chang Yeul Kweon, Xiangping Qin, Huai-Rong Shao and Chiu Ngo, "A 60 GHz Wireless Network for Enabling Uncompressed Video Communication", IEEE Communications Magazine, December 2008, pp. 71-78.
271
Paper
Saturday, August 8, 2009
14:45 - 15:05
Room M-AULA
Implementation of IMK in Educational Measurement
Saifuddin Azwar
Faculty of Psychology
Gadjah Mada University
Yogyakarta, Indonesia
Untung Rahardja
Faculty of Information System
Raharja University
Tangerang, Indonesia
[email protected]
Siti Julaeha
Faculty of Information System
Raharja University
Tangerang, Indonesia
[email protected]
Abstract
The development of science and technology, following the development of the age, demands that each student not only have good intellectual acumen, but also very high discipline and dedication; students must also be committed to the rules that apply, because otherwise they will slip and be eliminated from the competition of the world of work. Unfortunately, universities in general pay little attention to this problem. To date, the university issues only a list of grades containing the Cumulative Performance Index (IPK, the GPA) as the single value students receive to describe or determine their success after a 4 (four) year course at university. To answer the challenges of the world of work, an IMK value list must sit side by side with the GPA value list in order to provide a comprehensive assessment of students. Therefore, this article presents a problem-solving methodology, which includes identifying at least some fundamental problems of the long-standing methods of student assessment, defining an EQ assessment method through the IMK value list, designing the IMK value list through a flowchart, and finally building the IMK value list with Macromedia Dreamweaver MX. The end result of this article is a draft assessment of student discipline which we call IMK. IMK is the average of the Student Quality Index (IMM) of each semester. This IMK plays a role in measuring a student's EQ continuously over 4 (four) years, and its value should be captured in the form of an IMK value list.
Keywords: EQ, IMK value list, IMM, IPK.
I. Introduction
At this time, the old paradigm that intellectual intelligence (IQ) is the only measure of ingenuity, often used as the parameter of the success of human resources, is in decline, with the emergence of a paradigm in which other intelligences also determine the success of a person in his or her life. Based on a survey conducted by Lohr, cited by Krugman in the article "On The Road with Chairman Lou" (The New York Times, 26/06/1994), IQ alone is surely not enough to explain someone's success. [Raha081]
College graduates have long been measured largely by the Cumulative Performance Index (IPK, the GPA), identical to IQ, as the graduation indicator. The fact is that after 4 (four) years of study at university, students are declared to have passed with only one measuring index, the Cumulative Performance Index. Does the GPA tell a graduate's prospective employer whether the candidate can meet the defined requirements for accepting new employees? If stakeholders run their own series of selection tests and do not look at the candidate's GPA, does that mean the higher education system does not "link and match" with its users?
In the end we realize that it is the EQ that users most want to test before graduates are accepted by a company. The problem is that stakeholders themselves feel that the series of tests run by a company may not necessarily reflect the real EQ: a candidate may seem to have a "good attitude" in the short term, yet after being accepted and working for a long time may turn out to have a "bad attitude".
To answer the challenges of the world of work, each university can take advantage of ICT to capture the IMM, a form of continuous EQ assessment of the student, so that in the end it can issue a Cumulative Quality Index (IMK), which measures the emotional side of graduates in addition to the intellectual one and can also be used as an educational index measured through attendance.
II. PROBLEMS
In order to answer the challenge of graduate quality, universities need an assessment system that addresses behaviour, discipline and commitment to the rules in force. This is a challenge that must be faced in the current era of globalization, where companies do not assess candidates only in terms of versatility and the ability to absorb lecture material, but also in terms of personal discipline. Therefore, a practical, smooth and accurate EQ measurement facility is required, whose results mirror the EQ of a student over the 4 (four) year course at university. [Raha3072]
We need to realize that to date there is no instrument capable of measuring EQ accurately. However, during the 4 (four) years of educating students, universities have a great opportunity and potential to educate as well as to measure the student's EQ.
Based on that contention, there are 2 main issues motivating this article:
1. How can IMK be measured so that it can serve as a tool in educational measurement?
2. What output should students receive as a form of measurement of a student's EQ value at the university?
III. LITERATURE REVIEW
Much previous research has been conducted on educational measurement; in developing the educational measurement performed in this study, that research is used as part of the literature underlying the research method to be conducted. The purposes include identifying gaps, avoiding re-creating what already exists (reinventing the wheel), identifying methods that have already been developed, continuing previous research, and knowing other people who specialize in the same area as this research. Some of the literature review is as follows:
1. Research by I Made Suartika, Department of Mechanical Engineering, Faculty of Engineering, Mataram University: "Design and Implementation of a Performance Measurement System With the Integrated Performance Measurement Systems Method (Case Study: Department of Mechanical Engineering, Mataram University)". To ensure the quality of education in the Department of Mechanical Engineering, this research designed a performance measurement system (SPK) integrated with the IPMS (Integrated Performance Measurement Systems) method. With the IPMS method, the Key Performance Indicators (KPI) required by the Department of Mechanical Engineering are determined by stakeholders through four stages, namely: stakeholder requirement identification, external monitoring, determination of objectives, and identification of KPIs. The research shows that there is no neat and organized book-keeping system, no adequate database system, an administration system that has not been organized, no effective evaluation of the suitability of curriculum development for the quality of graduates needed, a lack of control over the implementation of the curriculum and syllabus in the teaching-learning process, and other problems. The solutions proposed to improve the performance of the Department of Mechanical Engineering are: departmental management of services, learning management, and management of relations with the outside world. To measure the level of success, efficiency and effectiveness of the activities carried out, a departmental performance measurement system (SPK) and its implementation are needed.
2. Research by Chahid Fourali, Research Department, City & Guilds, 1 Giltspur Street, London EC1A 9DD: "Using Fuzzy Logic in Educational Measurement: The Case of Portfolio Assessment." This paper highlights the relevance of a relatively new quantitative methodology known as fuzzy logic to the task of measuring educational achievement. The paper introduces the principles behind fuzzy logic and describes how these principles can be applied by educators in the field of portfolio evidence assessment. Currently, portfolio assessment is considered a step in measuring performance. Finally, the article argues that although fuzzy logic has been successful in industry, its contribution can be very important in the social sciences; at the least it should provide social scientists with tools that are more relevant to field investigation.
3. Research by Liza Fajarningtyas, Indah Baroroh and Naning A.W., ST, MT, Department of Industrial Engineering, Institute of Technology Sepuluh Nopember: "Performance Measurement System Design for Higher Education Institutions Using the Balanced Scorecard Method." The research was conducted to measure the performance of a higher education institution with the Balanced Scorecard method. The Balanced Scorecard is not only used by companies in business and industry; the method is also used in higher education. This work argues that the Balanced Scorecard method has more benefits than the existing performance measurement system. With the Balanced Scorecard measuring the performance of the running system, the university can better arrange plans in accordance with its strategic vision and mission so that it can continue to grow. However, this method is only used to measure the performance of a university institution, not the performance of the students themselves. Meanwhile, the work presented here measures the performance of a student by using linear regression and correlation between GPA and IMK. Therefore, the Balanced Scorecard is not the method used to measure this performance.
4. Research by Dawn M. VanLeeuwen, Assistant Professor, New Mexico State University: "Assessing the Reliability of Measurements with Generalizability Theory: An Application to Inter-Rater Reliability." This research introduces the application of Generalizability Theory to assess the reliability of measurement. The theory can be used to assess reliability in the presence of several sources of error. In particular, the application of Generalizability Theory to the measurement of ratings involves several considerations, and some applications can measure variables that are low. The theory also relates to the object being measured; usually the measured objects are human error and the rating condition or environment. However, in this theory GT is only used to measure errors that occur in humans and the environment outside the system, so the scope is too broad.
5. Further research was done by Glen Van Der Vyver, University of Southern Queensland, Toowoomba, Australia: "The Search for the Adaptable ICT Student". The research was conducted on troubleshooting in the business world and in the field of technology. ICT itself is short for Information and Communication Technologies, and the research conducted by Glen Van Der Vyver is on ICT.
IV. TROUBLESHOOTING
Like the GPA, IMM will be recorded continuously. So that students know their discipline value, university management should capture all IMM data in the form of an IMK value list. From this list of values, the average is calculated and then packaged into a Cumulative Quality Index (IMK) value. This IMK should be trusted as a quantitative measure of the behaviour, discipline and emotion of a student during the 4 (four) years of study at the university. Moreover, the IMK value should become the definitive picture of a student's EQ level.
To answer all of the above problems, a new assessment system has been launched through the Student Quality Index (IMM).
The Student Quality Index (IMM) is a system prepared to measure and know the level of a student's attendance discipline using Online Attendance (AO). Through AO, the presence of all students during the Teaching-Learning Activities (KBM) is recorded in full, so that student lateness is also recorded.
The level of discipline described by IMM is a measurement of a student's emotional level (EQ), recorded continuously from time to time, in the same way that the university records grades each semester until the student finally obtains a Cumulative Performance Index (IPK).
So that each student can measure his or her academic ability, university management usually provides each student with a list of grades. From this list, an average of all the obtained values is generated, which is called the GPA. To date, the GPA has been the quantitative value used to measure the level of a student's intellect in absorbing all the lecture material.
With the IMK value list combined with the GPA value list, it is expected that universities will be able to answer the challenges of the world of work. Users of graduates no longer need to run filtering tests to know a graduate's EQ level, because the EQ is clearly reflected in the IMK value list measured by the university's management for its students over the 4 (four) years.
A. Designing Algorithms
1. Algorithm for updating the IMK value list
Var
Main ()
{
  Select Database Genap 20072008
  Select NIM, Nama_Mhs, Kode_Kelas, Mata_Kuliah, Sks, IMM Into DMQ From A_View_IMM_All_Detail
  Select Database Ganjil 20072008
  Select NIM, Nama_Mhs, Kode_Kelas, Mata_Kuliah, Sks, IMM From A_View_IMM_All_Detail
  Repeat for each record read
    Select NIM, Kode_Kelas From DMQ Where NIM, Kode_Kelas match the record
    If the record is not found in DMQ Then
      Add the record to DMQ
    Else
      Update the DMQ record Where NIM, Kode_Kelas match
    End If
  Until all records are processed
}
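The upsert logic of this algorithm can be sketched in Python as follows; the in-memory lists stand in for the two semester databases and the DMQ data mart, and the record values are purely illustrative:

# Hypothetical records as they might come from A_View_IMM_All_Detail.
genap_20072008 = [
    {"NIM": "0712345", "Kode_Kelas": "AL101", "Sks": 4, "IMM": 95.0},
]
ganjil_20072008 = [
    {"NIM": "0712345", "Kode_Kelas": "AL101", "Sks": 4, "IMM": 90.0},
    {"NIM": "0712345", "Kode_Kelas": "BS102", "Sks": 2, "IMM": 100.0},
]

dmq = {}  # the DMQ data mart, keyed by (NIM, Kode_Kelas)

# Load the even-semester data, then merge the odd-semester data:
# insert the record if the key is new, otherwise update it.
for source in (genap_20072008, ganjil_20072008):
    for rec in source:
        key = (rec["NIM"], rec["Kode_Kelas"])
        if key not in dmq:
            dmq[key] = rec          # add data to DMQ
        else:
            dmq[key].update(rec)    # update the existing DMQ record

for key, rec in dmq.items():
    print(key, rec["IMM"])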
2. Algorithm for the IMK value list
Figure 1. Flowchart of the IMK value list
Var
  Char strNIM
  Float AM, Total_AM, Total_SKS, IMK
Main ()
{
  strNIM = request("NIM")
  Select NIM, Nama_Mhs, Kode_Kelas, Mata_Kuliah, Sks, IMM Where NIM = strNIM
  Repeat for each record
    AM = IMM * SKS
    Total_AM = Total_AM + AM
  Until all records are processed
  Select Sum(Sks) As jum_sks Where NIM = strNIM
  Total_SKS = jum_sks
  IMK = Total_AM / Total_SKS
}
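In other words, IMK is the credit-weighted average of a student's IMM values, IMK = sum(IMM_i * SKS_i) / sum(SKS_i). A small Python sketch with invented numbers:

def compute_imk(records):
    """records: list of (IMM, SKS) pairs for one student (illustrative data)."""
    total_am = sum(imm * sks for imm, sks in records)   # AM = IMM * SKS
    total_sks = sum(sks for _, sks in records)
    return total_am / total_sks

# Example: three classes with IMM values 90, 100 and 80 and credits 4, 2 and 3.
print(compute_imk([(90, 4), (100, 2), (80, 3)]))  # about 88.9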
B. Design Through the Program Flowchart
C. Application Program
The software used to create the IMK value list program is ASP, because ASP is a framework that can be used to create dynamic web pages. ASP is used in many applications related to databases, using either a Microsoft Access database, SQL Server or an Oracle database. The scripting language most widely used in writing ASP is VBScript. [Raha207]
The ASP pages are built with Macromedia Dreamweaver MX and are dynamic, using a database connection. SQL (Structured Query Language) is used to connect ASP with the database.
ASP (Active Server Pages) is, more precisely, a Component Object Model (COM) object, not a programming language as we often think. ASP was developed on the basis of ISAPI and consists of 6 (six) simple objects. However, because these are combined with other Microsoft technologies, the objects become very useful. The six objects are Application, Session, Response, Request, Server and ObjectContext. [Andi05]
SQL is the abbreviation of Structured Query Language. This language is the standard used to access relational databases. At this time a lot of software uses SQL as the language for accessing its data; such software is usually called an RDBMS (Relational Database Management System).
275
The database used is SQL Server, which is designed for use in client-server, intranet and internet environments. SQL Server does not provide the ability to create forms, reports, and so forth; it provides only the database, access rights (privileges), security and everything related to database management. The data types that can be used in SQL Server are almost the same as in Microsoft Access, just with different names; the following table lists the naming conversion from Access to SQL Server:
Table 1. Naming conversion of data types between Access and SQL Server
D. Program Listing
The IMK value list is a program that uses the Data Mart Query (DMQ) method, so the program listings shown include the listing for updating the IMK value list and the listing for displaying the IMK value list. The program listings follow:
a. Program listing for updating the IMK value list
Figure 2. Program listing for updating the IMK value list
Figure 3. Program listing for displaying the IMK value list
V. IMPLEMENTATION
The concept of assessing student discipline through the Student Quality Index (IMM) has been implemented at Raharja University. IMM is the result of a fusion between the Raharja Multimedia Edutainment (RME) version 1 program and Online Attendance (AO).
276
THE SCREEN
The screens (interfaces) of the Student Quality Index (IMM) have been integrated with several information systems, such as Raharja Multimedia Edutainment (RME) version 1, Online Attendance (AO), and the Leadership Panel. The interfaces consist of:
a. IMM on the RME Interface
Figure 4. IMM on the RME Interface
In the picture above there is a quantitative IMM number: 1246. That number is the number of active students in the current semester who are following the Teaching-Learning Activities (KBM). When the value is clicked, it opens a URL that lists all active students in detail together with their IMMTH, IMMT and IMMG values. That URL has the interface shown in the image below.
Figure 5. List of all students
In the interface above, IMMTH is a value that describes the student's level of attendance in the KBM, IMMT is a value that describes the student's punctuality in entering the classroom, while IMMG is the average of IMMTH and IMMT. To see the detailed discipline values of a student for each class taken this semester, click on the student's name. The next image shows that interface.
Figure 6. Recapitulation of IMM per student
The interface above shows the IMMTH, IMMT and IMMG values for all classes taken by a student, in this case "AHMAD ZAMZAMI" as one example. The values above mean the following:
· IMMTH = 100 means AHMAD ZAMZAMI never missed a lecture.
· IMMT = 100 means AHMAD ZAMZAMI was always present in the classroom on time.
· IMMG = 100 means AHMAD ZAMZAMI is a highly disciplined student, because he was never absent and never late in the KBM.
IMM itself is the average of the IMMG values of all classes taken by a student in one semester. IMM is enshrined in the form of an "IMM Charter". The IMM Charter contains not only the student's own IMM value, but also the highest IMM, the lowest IMM, the IMM standard deviation, the average IMM and, most importantly, the IMM ranking. From the ranking, the level of discipline of a student can be measured relative to all students active at this time; if the student is the best student, then he or she should get ranking "1". The next image is the "IMM Charter" interface.
Figure 7. Charter IMM
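The statistics printed on the IMM Charter (highest, lowest, average, standard deviation and ranking) can be sketched in Python as follows; the IMM values used here are invented for illustration:

import statistics

# Hypothetical semester IMM values per student, keyed by NIM.
imm = {"0712345": 100.0, "0712346": 92.5, "0712347": 88.0, "0712348": 95.0}

print("highest IMM:", max(imm.values()))
print("lowest IMM: ", min(imm.values()))
print("average IMM:", statistics.mean(imm.values()))
print("IMM std dev:", round(statistics.stdev(imm.values()), 2))

# Ranking: rank 1 goes to the student with the highest IMM.
ranking = sorted(imm, key=imm.get, reverse=True)
for rank, nim in enumerate(ranking, start=1):
    print(f"rank {rank}: student {nim} (IMM {imm[nim]})")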
277
b. IMM on the Leadership Panel Interface
Unlike the previous interfaces, the IMM panel interface for the leadership specifically describes the Cumulative Quality Index (IMK) value of each student. IMK is the average of a student's IMM values over all the semesters the student has completed. IMK is packaged in an "IMK Value List" format, which can be seen in the picture below.
Figure 8. IMK value list
This is the list of values that should be given to students every semester as a form of self-evaluation of the EQ assessment of their Teaching-Learning Activities (KBM).
VI. CONCLUSION
Based on the description above, it is concluded that the IMK value list is suitable to be developed within a university environment. Through the IMK value list, the university can prove to users of its graduates that the graduates actually have competencies measured not only by their intellectual value but also by their emotional value.
REFERENCES
[1] Andi. Aplikasi Web Database ASP Menggunakan Dreamweaver MX 2004. Yogyakarta: Andi Offset. 2005.
[2] Fourali, Chahid. Using Fuzzy Logic in Educational Measurement: The Case of Portfolio Assessment. London: Research Department, City & Guilds. 1997.
[3] Rahardja, Untung. Analisis Kelayakan Investasi Digital Dashboard pada Manajemen Akademik Perguruan Tinggi: Studi Kasus pada Perguruan Tinggi Raharja. Thesis, Program Studi Magister Teknologi Informasi, Fakultas Ilmu Komputer, Universitas Indonesia, Jakarta. 2007.
[4] Rahardja, Untung, Maimunah, Hidayati. Metode Pencarian Data dengan Menggunakan Intelligence Auto Find System (IAFS). CCIT Journal, Edisi 1 Vol. 1. Tangerang: Perguruan Tinggi Raharja. 2007.
[5] Rahardja, Untung, Fitria Murad, Dina. Usul Penelitian Hibah Bersaing Perguruan Tinggi. Tangerang: Perguruan Tinggi Raharja. 2007.
[6] Rahardja, Untung. Proposal Perancangan IMM pada Perguruan Tinggi. Tangerang: Perguruan Tinggi Raharja. 2008.
[7] Suartika, I Made, P. Suwignjo, and B. Syairuddin. Perancangan dan Implementasi Sistem Pengukuran Kinerja dengan Metode Integrated Performance Measurement Systems. Mataram: University of Mataram. 2008.
[8] Van Der Vyver, Glen. The Search for the Adaptable ICT Student. Toowoomba, Australia: University of Southern Queensland. 2009.
[9] VanLeeuwen, Dawn M. Assessing Reliability of Measurements with Generalizability Theory: An Application to Inter-Rater Reliability. New Mexico State University. 1999.
Paper
Saturday, August 8, 2009
16:25 - 16:45
Room L-211
Strategic Study and Recommendation Information System Model
for Universities
Henderi, Sugeng Widada, Euis Siti Nuraisyah
Information Technology Department – Faculty of Computer Study STMIK Raharja
Jl. Jenderal Sudirman No. 40, Tangerang 15117 Indonesia
email: [email protected]
Abstract
Strategic information systems planning is done to set the policy for planning and implementing information technology that supports business processes, solving problems with a high level of accuracy and identifying system procedures that are less precise. To achieve these purposes, an organization's information systems planning strategy should be carried out through the following stages: goals and problems, critical success factors (CSFs), SWOT analysis, technology and strategy impact analysis, the strategic vision of the system, and a review of the organization's business process model. For higher education, the strategic plan and information system model are developed through those stages and include the information services to be provided. Determining the strategic information systems plan and selecting the model to be developed next are driven by specific needs and determining factors. This paper discusses the foundations and stages of strategic information systems planning for universities as organizations in general. To clarify the discussion, this paper also briefly explains the implementation of an information systems strategy in a university organization.
1. INTRODUCTION
The development of the social economy and technology has an impact on developments in the education sector. Among other things, this development has made the educational environment very competitive, encouraging educational institutions to improve their ability to compete, to win the competition or at least to survive. The utilization of information technology is one decisive aspect of that competition.
The response to these changes will impact the organizational structure, corporate culture, organizational strategy, existing regulations and procedures, information technology, management and the business processes of universities. This arises as universities strive to attract potential students, both academically and in funding terms, by ensuring access for all prospective students.
In addition to easing access for prospective students, the use of information technology should be broadly useful for the business processes and performance of universities. The success of a university in building or developing information technology depends on the accuracy of the information systems strategic planning by management, the method used in construction and development, and maintenance carried out consistently and evaluated periodically. This is in line with the general definition of strategic information systems planning, namely planning the implementation of information technology in an organization from the perspective of top management, with attention to the interests of management, including the direct interests of the company's leaders [1].
2. SCOPE
The study was conducted to discuss the foundations and stages of creating strategic information systems planning for universities, which can be used as a reference for the development of information systems that improve the value of performance and the competitiveness of the organization.
279
3. DISCUSSION
3.1 Strategic Planning of Information System
Strategic planning of information systems functions as a framework for the creation or development of computerized information systems in organizations, which in its implementation can be executed separately and developed or modified using appropriate supporting tools. The right approach to information systems planning ultimately allows the organization to achieve coordination between the systems built, to give maximum support to the development of the next information system, to ease long-term changes in the system, and in turn to identify the precise use of computers to reach the organization's targets.
In strategic information systems planning, a university should be able to see the strategic information needed to run the organization effectively and as well as possible, and to show how strategic information technology can be used to improve the capability of the organization. Planning must also be related to top management's strategic objectives and critical success factors (CSFs), and to how information technology can be used to create new opportunities and provide competitive advantage. The idea of planning a university's strategic information system is to hold a high-level view of the university's functions and its needs for data and information.
3.1.1 Pyramid for Strategic Information System Planning
A methodology that can be used in formulating a strategic perspective on the information required by universities is the pyramid of strategic information systems planning, which consists of: (a) goal and problem analysis, (b) critical success factor analysis, (c) technology impact analysis, and (d) the strategic vision of the system [2]. The stages of the method are implemented top-down and comprise a top management perspective and an MIS (management information system) planner perspective, depicted in the form of a pyramid as follows:
Figure 1.b. Pyramid for strategic planning of information systems, MIS planner perspective
a. Goals and Problems Analysis. Conducted to obtain a structured representation of the goals and problems of a university, which is then associated with: (1) the units or departments of the university, (2) the motivation of individual managers (MBO), and (3) the needs and the information systems.
b. Critical Success Factor (CSFs) Analysis. Identifies the fields or sections the university must run well so that everything goes smoothly. The CSF analysis can further be described as: (1) the critical assumption set, (2) the critical decision set, and (3) the critical information set. Critical means it must be considered as playing a major role in, or affecting and determining, success or failure.
c. Technology Impact Analysis. The analysis of the impact of technology is conducted to examine the links between very rapid changes in technology and the university's business opportunities and threats. From this study, priorities and sequences of opportunities and threats are identified (determining which opportunities must be used first and which threats most need a solution), so that executives can view them and make decisions or take appropriate action.
d. View of the Strategic Systems (Strategy Systems Vision). This is done in order to assess strategic opportunities to create or develop a new system so that the university can compete better. Strategic systems can be obtained from restructuring or from changes in the university's business.
e. The Model of Functions of the Higher Education. The review is intended to map the business functions of the university and correlate them with the organizational units, locations and entities where the data is stored. The mapping is done with matrices in the computer systems.
Figure 1.a. Pyramid for strategic planning of information systems, top management perspective
280
f. Entity-Relationship Modeling. Activities to create a map of the relationships between entities, which is a study of the data that must be stored in the company's database.
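A minimal sketch (not from the paper) of what such entity-relationship modeling might look like for a few typical university entities, using plain Python dataclasses; the entity and attribute names are illustrative assumptions:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Student:
    nim: str            # student identification number
    name: str

@dataclass
class Course:
    code: str
    title: str
    credits: int

@dataclass
class Enrollment:
    # Relationship entity linking a Student to a Course, with an optional grade.
    student: Student
    course: Course
    grade: Optional[float] = None

# Example instances of the entities and their relationship.
budi = Student(nim="0712345", name="Budi")
algorithms = Course(code="IF101", title="Algorithms", credits=4)
enrollments: List[Enrollment] = [Enrollment(student=budi, course=algorithms)]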
3.1.2 Analysis of the Sequence of Strategic Information Systems Planning
To be able to formulate good strategic information systems planning in accordance with the needs of a university, and one easily adjusted in the future, the analysis needs to be done systematically and logically. The sequence analysis can be done using the following information strategy planning sequence [2]:
3.1.3 Pyramid of the Information System
In addition to the sequence analysis, so that the strategic planning can produce a blueprint for developing the information systems needed by the university effectively and efficiently, the strategic planning should also consider the following information system pyramid:
Figure 3. Pyramid of the Information System
Based on the pyramid image above, it appears that there are four activities for each information system, whether viewed from a functional or an enterprise perspective. The description of each activity is as follows:
Figure 2. Sequence analysis of strategic information systems planning
From Figure 2, the sequence analysis of the planning, it appears that the analysis begins with activities to create a model (a view) of the overall scope of the information system capable of "serving" the needs of the whole organization (in this case the university). This model can then be followed up, in parallel (or sequentially), in two blocks of task/work specification in the strategic information systems planning, namely the strategic planning analysis/logical block (left block) and the physical planning block from the MIS (management information system) perspective. The image also shows that the implementation and completion of the activities in each block form a series, so that the completion of every activity is highly influenced by the completion of an earlier one.
However, in reality the implementation and specification of each type of task or job in every block does not always follow this order; some activities can even be carried out and completed in parallel. This is possible because the sequence analysis of the implementation of strategic information systems planning at a university depends on the size and completeness of the university, whether it is local, national or multinational, differences between types, and differences in management.
Table 1. Description of activities based on the layers of the information system pyramid, functionally, in the perspective of data and activities
3.2 Direction of the Strategic (Development) Plan of Information Systems for Universities
So far, universities have used information technology to support various functions in the basic dynamics of the academic process. Given the weaknesses and shortcomings of the existing systems, a university is recommended to build a comprehensive and integrated system capable of capturing, processing, and presenting valid and up-to-date information on any activity that occurs in the academic process. In this way, the information expected by the campus community can be realized.
3.3 Conceptual Model for Strategic Information Systems Planning in Universities (Recommendation)
Strategic information systems planning for universities covers the information services as a whole. The following is a recommended conceptual model of strategic information systems planning for universities. The figure shows, in general, that the information system services are divided into two, namely internal services and external services [3]. Internal services also restrict access from the outside world (for security).
3.4 Model (Strategy) for Developing a Specific University Information System
Based on the service model shown in Figure 4, universities will develop information systems that are suitable and in accordance with their needs. Development of the information system is done by first decomposing the model and simplifying the planning strategy design shown in Figure 4. This simplification is not intended to change the perspective that the system initially built remains a "sub-system" of the other systems. The sub-systems represent the business processes of the university. In this simplification, the "repository" as a data warehouse is deliberately left out, on the grounds that each sub-system can be developed independently. The simplified model is described as follows.
Figure 4. Business process model of information systems for the higher education strategic planning framework (model and supporting services)
Figure 4 above is still global and functional/logical in nature, so it does not describe the allocation of physical partitions within the university organization, such as whether the system will be built in a distributed or a centralized way.
If the implementation of the strategic information systems plan is to be built in a distributed way, then the distributed data processing systems (hardware, data, processes) at the locations where the end-users work should be integrated. Conversely, if the system will be centralized, the computing capability should consider the geographic layout of the center and the university. If the campus currently consists of several geographically separate buildings, the information system must be built in a distributed way; thus each study program, or the central office, may have its own service hardware. This model must then still be equipped with the necessary description of the data integration used between the information sub-systems. Integration is usually needed because top management still requires integration of data between the sub-systems, which can take the form of an integrator or data warehouse application that accesses data from all sub-systems [4].
282
Figure 5. Simplified model of university information system development
Based on the development model in Figure 5, the modules of the university information system to be developed consist of [5]:
Figure 7. Table of university information system modules
Based on the detailed explanation above, it appears that the development of the information system at STMIK Raharja is also a complex system derived from strategic information systems planning. The development and implementation will be carried out (built) from a variety of modules or sub-systems, with attention to several factors or parameters and the implementation strategy.
4. Conclusion
To be able to build and implement information systems that match the specific needs of higher education so that they serve the needs of the academic community, the university must carry out strategic information systems planning with attention to the university's goals and problems, critical success factors (CSFs), technology impact analysis, the strategic vision of the system, and a review of the model of higher education functions.
The strategic information systems plan of a university is expected to become a reference in building the information system, both in terms of university management at all levels, for ease of monitoring and decision making. For the outside world (society), it should provide clear information on the activities undertaken and the services offered to the public, so that, for example, someone choosing a university already knows about its educational goals, curriculum, facilities and so forth. For all consumers, it should make information services easier. Real implementation in the field still has its own challenges, because in many cases it brings many changes and requires considerable resources.
REFERENCES
[1] Laudon, Kenneth C., Laudon, Jane P., 2002, Management Information Systems: Managing the Digital Firm, Prentice Hall International, Inc., New York.
[2] Martin, James, Information Engineering, Book II: Planning and Analysis, Prentice Hall, Englewood Cliffs, New Jersey, 1990.
[3] Liem, Inggriani, "Model Information System for Universities," Materials for the Upgrading of SIM, Kopertis Region IV, Bandung, 2003.
[4] Inmon, W. H., Imhoff, C. & Sousa, Corporate Information Factory, Wiley Computer Publishing, 1998.
[5] Henderi, "MIS: Tools for Colleges in Healthily Facing Competitors," Scientific Seminar Paper, Raharja University, December 2003.
283
Author Index
E
A
Aan Kurniawan
66
Edi Winarko
72, 167
Abdul Manan
159
Eneng Tita Tosida
Agus Sunarya
246
Ermatita
167, 188
Ahmad Ashari
255
Euis Sitinur Aisyah
174, 279
Al-Bahra Bin Ladjamudin
193
Aris Martono
148
G
Aris Sunantyo
251
Gede Rasben Dantes
Armanda C.C
206
Asef Saefullah
119, 139, 237
246
32
H
Augury El Rayeb
237
Handy Wicaksono
212
Azzemi Ariffin
266
Hany Ferdinando
212
Henderi
B
Bernard Renaldy Suteja
Bilqis Amaliah
255
99
Heru SBR
251
Hidayati
217
Huda Ubaya
188
J
C
Chastine Fatichah
52, 11, 148, 279
87, 99
Jazi Eko Istiyanto
Junaidi
45, 92
174
D
Danang Febrian
Darmawan Wangsadiharja
61
212
M
Maimunah
Diah Arianti
99
Mauritsius Tuga
Diyah Puspitaningrum
52
Mauridhi Hery Purnomo
Dina Fitria Murad
225
200
40
Mohammad Irsan
225
266
Djiwandou Agung Sudiyono Putro
80
Mohd. Fadzil Amiruddin
Djoko Purwanto
40
Muhamad Yusup
Dwiroso Indah
188
284
119, 148
72
Muhammad Tajuddin
159
M. Givi Efgivia
193
Author Index
U
N
Nenet Natasudian Jaya
159
Noor Aisyah Mohd. Akib
266
Nurina Indah Kemalasari
87
Untung Rahardja
45, 72, 92, 109, 217, 272
V
Valent Setiatmi
45
P
Padeli
183
W
Primantara H.S
206
Widodo Budiharto
40
Wiwik Anggraeni
61
R
Rahmat Budiarto
Retantyo Wardoyo
206
109, 167, 255
Y
Yeni Nuraeni
126
Young Chul Lee
S
Safaruddin A. Prasad
193
Saifuddin Azwar
272
Z
Salizul Jaafar
266
Zainal A. Hasibuan
Sarwosri
133
Shakinah Badar
109
Siti Julaeha
272
92
Sri Setyaningsih
246
Sugeng Santoso
139, 174, 183
Sugeng Widada
279
Suhandi Bujang
266
Suryo Guritno
Sutrisno
66, 159
80
Sfenrianto
Sri Darmayanti
266
217, 255
251
T
Tri Kuntoro Priyambodo
Tri Pujadi
206, 251
103
285
Schedule

Location
DRIVING DIRECTIONS TO GREEN CAMPUS RAHARJA TANGERANG BY CAR