Advances in Computer Science:
an International Journal
Vol. 4, Issue 4, July 2015
© ACSIJ PUBLICATION
www.ACSIJ.org
ISSN : 2322-5157
ACSIJ Reviewers Committee 2015
Prof. José Santos Reyes, Faculty of Computer Science, University of A Coruña, Spain
Dr. Dariusz Jacek Jakóbczak, Technical University of Koszalin, Poland
Dr. Artis Mednis, Cyber-Physical Systems Laboratory Institute of Electronics and Computer
Science, Latvia
Dr. Heinz DOBLER, University of Applied Sciences Upper Austria, Austria
Dr. Ahlem Nabli, Faculty of Sciences of Sfax, Tunisia
Prof. Zhong Ji, School of Electronic Information Engineering, Tianjin University, Tianjin, China
Prof. Noura AKNIN, Abdelmalek Essaadi University, Morocco
Dr. Qiang Zhu, Geosciences Dept., Stony Brook University, United States
Dr. Urmila Shrawankar, G. H. Raisoni College of Engineering, Nagpur, India
Dr. Uchechukwu Awada, Network and Cloud Computing Laboratory, School of Computer
Science and Technology, Dalian University of Technology, China
Dr. Seyyed Hossein Erfani, Department of Computer Engineering, Islamic Azad University,
Science and Research branch, Tehran, Iran
Dr. Nazir Ahmad Suhail, School of Computer Science and Information Technology, Kampala
University, Uganda
Dr. Fateme Ghomanjani, Department of Mathematics, Ferdowsi University Of Mashhad,
Iran
Dr. Islam Abdul-Azeem Fouad, Biomedical Technology Department, College of applied
Medical Sciences, SALMAN BIN ABDUL-AZIZ University, K.S.A
Dr. Zaki Brahmi, Department of Computer Science, University of Sousse, Tunisia
Dr. Mohammad Abu Omar, Information Systems, Limkokwing University of Creative
Technology, Malaysia
Dr. Kishori Mohan Konwar, Department of Microbiology and Immunology, University of
British Columbia, Canada
Dr. S.Senthilkumar, School of Computing Science and Engineering, VIT-University, INDIA
Dr. Elham Andaroodi, School of Architecture, University of Tehran, Iran
Dr. Shervan Fekri Ershad, Artificial intelligence, Amin University of Isfahan, Iran
Dr. G. Umarani Srikanth, S.A. Engineering College, Anna University, Chennai, India
Dr. Senlin Liang, Department of Computer Science, Stony Brook University, USA
Dr. Ehsan Mohebi, Department of Science, Information Technology and Engineering,
University of Ballarat, Australia
Sr. Mehdi Bahrami, EECS Department, University of California, Merced, USA
Dr. Sandeep Reddivari, Department of Computer Science and Engineering, Mississippi State
University, USA
Dr. Chaker Bechir Jebari, Computer Science and information technology, College of Science,
University of Tunis, Tunisia
Dr. Javed Anjum Sheikh, Assistant Professor and Associate Director, Faculty of Computing
and IT, University of Gujrat, Pakistan
Dr. ANANDAKUMAR.H, PSG College of Technology (Anna University of Technology), India
Dr. Ajit Kumar Shrivastava, TRUBA Institute of Engg. & I.T, Bhopal, RGPV University, India
ACSIJ Published Papers are Indexed By:
Google Scholar
EZB, Electronic Journals Library ( University Library of Regensburg, Germany)
DOAJ, Directory of Open Access Journals
Bielefeld University Library - BASE ( Germany )
Academia.edu ( San Francisco, CA )
Research Bible ( Tokyo, Japan )
Academic Journals Database
Technical University of Applied Sciences ( TH - WILDAU Germany)
AcademicKeys
WorldCat (OCLC)
TIB - German National Library of Science and Technology
The University of Hong Kong Libraries
Science Gate
OAJI, Open Academic Journals Index (Russian Federation)
Harvester Systems, University of Ruhuna
J. Paul Leonard Library, San Francisco State University
OALib, Open Access Library
Université Joseph Fourier (France)
CIVILICA (Iran)
CiteSeerX, Pennsylvania State University (United States)
The Collection of Computer Science Bibliographies (Germany)
Indiana University (Indiana, United States)
Tsinghua University Library (Beijing, China)
CiteFactor
OAA, Open Access Articles (Singapore)
Index Copernicus International (Poland)
Scribd
QOAM, Radboud University Nijmegen (Nijmegen, Netherlands)
Bibliothekssystem Universität Hamburg
The National Science Library, Chinese Academy of Sciences (NSLC)
Universia Holding (Spain)
Technical University of Denmark (Denmark)
TABLE OF CONTENTS
A Survey on the Privacy Preserving Algorithm and techniques of Association Rule
Mining – (pg 1-6)
Maryam Fouladfar, Mohammad Naderi Dehkordi
« ACASYA »: a knowledge-based system for aid in the storage, classification,
assessment and generation of accident scenarios. Application to the safety of rail
transport systems – (pg 7-13)
Dr. Habib HADJ-MABROUK, Dr. Hinda MEJRI
Overview of routing algorithms in WBAN – (pg 14-20)
Maryam Asgari, Mehdi Sayemir, Mohammad Shahverdy
An Efficient Blind Signature Scheme based on Error Correcting Codes – (pg 21-26)
Junyao Ye, Fang Ren, Dong Zheng, Kefei Chen
Multi-lingual and -modal Applications in the Semantic Web: the example of Ambient
Assisted Living – (pg 27-36)
Dimitra Anastasiou
An Empirical Method to Derive Principles, Categories, and Evaluation Criteria of
Differentiated Services in an Enterprise – (pg 37-45)
Vikas S Shah
A comparative study and classification on web service security testing approaches
– (pg 46-50)
Azadeh Esfandyari
Collaboration between Service and R&D Organizations – Two Cases in Automation
Industry – (pg 51-59)
Jukka Kääriäinen, Susanna Teppola, Antti Välimäki
Load Balancing in Wireless Mesh Network: a Survey – (pg 60-64)
Maryam Asgari, Mohammad Shahverdy, Mahmood Fathy, Zeinab Movahedi
Mobile Banking Supervising System- Issues, Challenges & Suggestions to improve
Mobile Banking Services – (pg 65-67)
Dr.K.Kavitha
A Survey on Security Issues in Big Data and NoSQL – (pg 68-72)
Ebrahim Sahafizadeh, Mohammad Ali Nematbakhsh
Classifying Protein-Protein Interaction Type based on Association Pattern with
Adjusted Support – (pg 73-79)
Huang-Cheng Kuo, Ming-Yi Tai
Digitalization Boosting Novel Digital Services for Consumers – (pg 80-92)
Kaisa Vehmas, Mari Ervasti, Maarit Tihinen, Aino Mensonen
GIS-based Optimal Route Selection for Oil and Gas Pipelines in Uganda – (pg 93-104)
Dan Abudu, Meredith Williams
Hybrid Trust-Driven Recommendation System for E-commerce Networks – (pg 105-112)
Pavan Kumar K. N, Samhita S Balekai, Sanjana P Suryavamshi, Sneha Sriram, R. Bhakthavathsalam
Correlated Appraisal of Big Data, Hadoop and MapReduce – (pg 113-118)
Priyaneet Bhatia, Siddarth Gupta
Combination of PSO Algorithm and Naive Bayesian Classification for Parkinson
Disease Diagnosis – (pg 119-125)
Navid Khozein Ghanad, Saheb Ahmadi
Automatic Classification for Vietnamese News – (pg 126-132)
Phan Thi Ha, Nguyen Quynh Chi
Practical applications of spiking neural network in information processing and learning
– (pg 133-137)
Fariborz Khademian, Reza Khanbabaie
ACSIJ Advances in Computer Science: an International Journal, Vol. 4, Issue 4, No.16 , July 2015
ISSN : 2322-5157
www.ACSIJ.org
A Survey on the Privacy Preserving Algorithm and
techniques of Association Rule Mining
Maryam Fouladfar1, Mohammad Naderi Dehkordi2
Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Isfahan, Iran
1 [email protected], 2 [email protected]
Abstract
In recent years, data mining has become a popular analysis tool for extracting knowledge from large collections of data. One of the great challenges of data mining is finding hidden patterns without revealing sensitive information. Privacy-preserving data mining (PPDM) is the answer to this challenge: it is a major research area concerned with protecting sensitive data or knowledge while still allowing data mining techniques to be applied efficiently. Association rule hiding is one of the PPDM techniques used to protect the association rules generated by association rule mining. In this paper we provide a survey of association rule hiding methods for privacy preservation. Various algorithms have been designed for this purpose in recent years; we summarize them and survey the currently existing techniques for association rule hiding.
Keywords: Association Rule Hiding, Data Mining, Privacy Preservation Data Mining.
1. Motivation
Computers have promised us a fountain of wisdom but delivered a deluge of information. This huge amount of data makes it crucial to develop tools to discover what is called hidden knowledge; these tools are called data mining tools. Data mining thus promises to discover what is hidden, but what if that hidden knowledge is sensitive, and its owners would not be happy if it were exposed to the public or to adversaries? This problem is the motivation for this paper.
2. Introduction
The problem of privacy-preserving data mining has become more important in recent years because of the increasing ability to store personal data about users and the increasing sophistication of data mining algorithms that leverage this information. A number of data mining techniques have been developed successfully to extract knowledge in order to support a variety of domains: marketing, weather forecasting, medical diagnosis, and national security. It is still a challenge, however, to mine certain kinds of data without violating the data owners' privacy; for example, how to mine patients' private data is an ongoing problem in health-care applications. As data mining becomes more pervasive, privacy concerns are increasing. Commercial organizations are also concerned with the privacy issue. Most organizations collect information about individuals for their own specific needs. Very frequently, however, different units within an organization may find it necessary to share information. In such cases, each organization or unit must be sure that the privacy of the individual is not violated and that sensitive business information is not revealed. Consider, for example, a government, or more precisely one of its security branches, interested in developing a system for determining, from passengers whose baggage has been checked, those who must be subjected to additional security measures. The data indicating the necessity for further examination derives from a wide variety of sources such as police records, airports, banks, general government statistics, and passenger information records, which generally include personal information, demographic data, flight information, and expenditure data. In most countries this information is regarded as private, and to avoid intentionally or unintentionally exposing confidential information about an individual, it is against the law to make such information freely available. While various means of preserving individual information have been developed, there are ways of circumventing these methods. In our example, in order to preserve privacy, passenger information records can be de-identified before the records are shared with anyone who is not permitted to access the relevant data directly. This can be accomplished by deleting unique identity fields from the dataset. However, even if this information is
deleted, there are still other kinds of information, personal or behavioral, that, when linked with other available datasets, could potentially identify subjects. To avoid these types of violations, we need privacy-preserving data mining algorithms. We review recent work on these topics. This paper first covers the data mining background, while its main part introduces the different approaches and algorithms of privacy-preserving data mining for sanitizing sensitive knowledge in the context of mining association rules or itemsets, with brief descriptions. We concentrate on the different classifications of privacy-preserving data mining approaches.
3. Privacy Preserving Data Mining Concepts
Today, as the use of data mining technology increases, securing information against disclosure to unauthorized parties is one of the most important issues in preserving the privacy of data mining [1]. Privacy is the state or condition of being isolated from the view or presence of others [2]; in data mining it means that we are able to conceal sensitive information from revelation to the public [1]. Therefore, to protect sensitive rules from unauthorized publishing, privacy-preserving data mining (PPDM) has become a focus of the data mining and database security fields [3].
3.1 Association Rule Mining Strategy
Association rules are an important class of regularities within data which have been extensively studied by the data mining community. The problem of mining association rules can be stated as follows. Given that I = {i1, i2, ..., im} is a set of items, T = {t1, t2, ..., tn} is a set of transactions, each of which contains items of the itemset I. Each transaction ti is a set of items such that ti ⊆ I. An association rule is an implication of the form X → Y, where X ⊂ I, Y ⊂ I and X ∩ Y = Ø. X (or Y) is a set of items, called an itemset. In the rule X → Y, X is called the antecedent and Y the consequent; the value of the antecedent implies the value of the consequent. The antecedent, also called the "left-hand side" of a rule, can consist either of a single item or of a whole set of items, and the same applies to the consequent, also called the "right-hand side". Often, a compromise has to be made between discovering all itemsets and computation time. Generally, only those itemsets that fulfil a certain support requirement are taken into consideration. Support and confidence are the two most important quality measures for evaluating the interestingness of a rule. The support of the rule X → Y is the percentage of transactions in T that contain X ∪ Y; it determines how frequently the rule is applicable to the transaction set T. The support of a rule is given by formula (1):

supp(X → Y) = |X ∪ Y| / n    (1)

where |X ∪ Y| is the number of transactions that contain all the items of the rule and n is the total number of transactions. The confidence of a rule is the percentage of transactions containing X which also contain Y. It is given by (2):

conf(X → Y) = |X ∪ Y| / |X|    (2)

where |X| is the number of transactions that contain X. Confidence is a very important measure for determining whether a rule is interesting or not. The process of mining association rules consists of two main steps. The first step identifies all the itemsets contained in the data that are adequate for mining association rules; these combinations have to appear with at least a certain frequency and are thus called frequent itemsets. The second step generates rules out of the discovered frequent itemsets. All rules whose confidence is greater than the minimum confidence are regarded as interesting.
3.2 Side Effects
As presented in Fig. 1, R denotes all association rules in the database D, SR the sensitive rules, ~SR the non-sensitive rules, and R' the rules discovered in the sanitized database D'. The circles numbered 1, 2 and 3 mark the possible problems: respectively, the sensitive association rules that fail to be censored, the legitimate rules that are accidentally lost, and the artificial association rules created by the sanitization process.
Fig. 1 Side effects of the sanitization process

The percentage of sensitive information that is still discovered after the data has been sanitized gives an estimate of the hiding failure parameter. Most of the privacy-preserving algorithms that have been developed are designed with the goal of obtaining zero hiding failure; thus, they hide all the patterns considered sensitive. However, it is well known that the more sensitive information we hide, the more non-sensitive information we miss. Thus, some PPDM algorithms have recently been developed which allow one to choose the amount of sensitive data that should be hidden, in order to find a balance between privacy and knowledge discovery. For example, in [4], Oliveira and Zaiane define the hiding failure (HF) as the percentage of restrictive patterns that are discovered from the sanitized database. It is measured as (3):

HF = #RP(D') / #RP(D)    (3)

where #RP(D) and #RP(D') denote the number of restrictive patterns discovered from the original database D and the sanitized database D' respectively. Ideally, HF should be 0. In their framework, they give a specification of a disclosure threshold φ, representing the percentage of sensitive transactions that are not sanitized, which allows one to find a balance between the hiding failure and the number of misses. Note that φ does not control the hiding failure directly, but indirectly, by controlling the proportion of sensitive transactions to be sanitized for each restrictive pattern.

When quantifying the information loss caused to the other data usages, it is useful to distinguish between lost information, representing the percentage of non-sensitive patterns (i.e., association or classification rules) which are hidden as a side effect of the hiding process, and artifactual information, representing the percentage of artifactual patterns created by the adopted privacy-preserving technique. For example, in [4], Oliveira and Zaiane define two metrics, misses cost and artifactual pattern, which correspond to lost information and artifactual information respectively. In particular, misses cost measures the percentage of non-restrictive patterns that are hidden after the sanitization process. This happens when some non-restrictive patterns lose support in the database due to the sanitization process. The misses cost (MC) is computed as (4):

MC = (#~RP(D) - #~RP(D')) / #~RP(D)    (4)

where #~RP(D) and #~RP(D') denote the number of non-restrictive patterns discovered from the original database D and the sanitized database D' respectively. In the best case, MC should be 0%. Notice that there is a compromise between the misses cost and the hiding failure in their approach: the more restrictive patterns they hide, the more legitimate patterns they miss. The other metric, artifactual pattern (AP), is measured as the percentage of the discovered patterns that are artifacts. The formula is (5):

AP = (|P'| - |P ∩ P'|) / |P'|    (5)

where |X| denotes the cardinality of X, P is the set of patterns discovered from D and P' the set discovered from D'. According to their experiments, their approach does not produce any artifactual patterns, i.e., AP is always 0. In the case of association rules, the lost information can be modeled as the set of non-sensitive rules that are accidentally hidden by the privacy preservation technique, referred to as lost rules; the artifactual information, instead, represents the set of new rules, also known as ghost rules, that can be extracted from the database after the application of a sanitization technique.
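As a small illustration of metrics (3) to (5) (the helper names and the toy pattern sets below are invented, not taken from [4]), the values can be computed directly from the pattern sets mined before and after sanitization:

```python
def hiding_failure(restrictive_before, restrictive_after):
    """HF = #RP(D') / #RP(D), formula (3)."""
    return len(restrictive_after) / len(restrictive_before)

def misses_cost(nonrestrictive_before, nonrestrictive_after):
    """MC = (#~RP(D) - #~RP(D')) / #~RP(D), formula (4)."""
    return (len(nonrestrictive_before) - len(nonrestrictive_after)) / len(nonrestrictive_before)

def artifactual_patterns(patterns_before, patterns_after):
    """AP = (|P'| - |P ∩ P'|) / |P'|, formula (5): fraction of ghost patterns."""
    ghost = patterns_after - patterns_before
    return len(ghost) / len(patterns_after)

# Frozensets stand in for mined patterns (rules or itemsets); data is made up.
P  = {frozenset({"a"}), frozenset({"b"}), frozenset({"a", "b"})}   # mined from D
P_ = {frozenset({"a"}), frozenset({"b"})}                           # mined from D'
print(hiding_failure({frozenset({"a", "b"})}, set()))  # 0.0 -> fully hidden
print(artifactual_patterns(P, P_))                     # 0.0 -> no ghost patterns
```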
4. Different Approaches in PPDM
Many approaches have been proposed in PPDM for censoring sensitive knowledge or sensitive association rules [5,6]. Two classifications of the existing sanitizing algorithms and sanitizing techniques of PPDM are shown in Fig. 2.
M. Atallah et al. [13] tried to deal with the problem of limiting the disclosure of sensitive rules. They attempt to selectively hide some frequent itemsets from large databases with as little impact as possible on other, non-sensitive frequent itemsets. They try to hide sensitive rules by modifying the given database so that the support of a given set of sensitive rules, mined from the database, decreases below the minimum support value.
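The general support-reduction idea can be sketched as follows; this is an illustrative greedy sketch under invented data, not the heuristic of [13] itself:

```python
import random

def lower_support(transactions, sensitive_itemset, min_support):
    """Greedy sketch of support reduction: remove one item of the sensitive
    itemset from supporting transactions until its support falls below
    min_support. Illustration only, not the algorithm of [13]."""
    n = len(transactions)
    supporting = [t for t in transactions if sensitive_itemset <= t]
    while supporting and len(supporting) / n >= min_support:
        victim = supporting.pop()                                  # a supporting transaction
        victim.discard(random.choice(sorted(sensitive_itemset)))   # drop one of its items
    return transactions

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
lower_support(db, {"a", "b"}, min_support=0.5)
```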
Fig. 2 Classification of PPDM approaches. Sanitizing algorithms are divided into data-sharing approaches (item restriction-based, item addition-based, item obfuscation-based) and pattern-sharing approaches (rule restriction-based). Sanitizing techniques comprise heuristic-based techniques (data distortion, data blocking), border-based techniques, exact techniques, reconstruction-based techniques and cryptography-based techniques.

N. Radadiya et al. [14] proposed an algorithm called ADSRRC, which improves the DSRRC algorithm. DSRRC cannot hide association rules with multiple items in the antecedent (L.H.S.) and consequent (R.H.S.), so ADSRRC uses a count of the items in the consequents of the sensitive rules and also modifies the minimum number of transactions, in order to hide the maximum number of sensitive rules while maintaining data quality.

Y. Guo [15] proposed a framework with three phases: mining the frequent itemsets, performing a sanitization algorithm over the frequent itemsets, and generating the released database by FP-tree-based inverse frequent set mining.
Sanitizing Algorithms
Data-sharing: In the data-sharing technique, data are communicated between parties without analysis or any statistical techniques. In this approach, the algorithms change the database by producing distorted data in the database [6,7,8].
Pattern-sharing: In the pattern-sharing technique, the algorithm sanitizes the rules that are mined from the data set [6,8,9].
Y. Jain et al. [16] proposed two algorithms, ISL (Increase Support of Left-hand side) and DSR (Decrease Support of Right-hand side), to hide useful association rules in transaction data. In the ISL method, the confidence of a rule is decreased by increasing the support value of the left-hand side (L.H.S.) of the rule, so items from the L.H.S. of a rule are chosen for modification. In the DSR method, the confidence of a rule is decreased by decreasing the support value of the right-hand side (R.H.S.) of a rule, so items from the R.H.S. of a rule are chosen for modification. Their algorithm prunes the number of hidden rules with the same number of transactions scanned, less CPU time and fewer modifications.
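A rough sketch of the DSR idea (invented data and function names; not the published algorithm of [16]) looks like this:

```python
def support(itemset, transactions):
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(lhs, rhs, transactions):
    return support(lhs | rhs, transactions) / support(lhs, transactions)

def dsr_hide(transactions, lhs, rhs, min_conf):
    """DSR-style hiding sketch: delete an R.H.S. item from transactions that
    support the whole rule until conf(lhs -> rhs) falls below min_conf."""
    for t in transactions:
        if confidence(lhs, rhs, transactions) < min_conf:
            break
        if (lhs | rhs) <= t:            # transaction supports the rule
            t.discard(next(iter(rhs)))  # remove one consequent item
    return transactions

db = [{"a", "b"}, {"a", "b"}, {"a", "b"}, {"a"}]
dsr_hide(db, lhs={"a"}, rhs={"b"}, min_conf=0.6)
print(confidence({"a"}, {"b"}, db))  # below 0.6 after hiding
```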
Sanitizing Techniques
Heuristic-Based: Heuristic-based techniques decide how to select the appropriate data for modification. Since optimal selective data modification or sanitization is an NP-hard problem, heuristics are used to address the complexity. The heuristic modification methods include perturbation, which is accomplished by altering an attribute value (e.g., changing a 1-value to a 0-value, or adding noise), and blocking, which replaces an existing attribute value with a "?" [10,11,12]. Several of the approaches surveyed above, such as [13], [14] and [16], are of this kind.
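The two modification operators just described, perturbation and blocking, can be illustrated with a toy binary transaction table (all names and data invented):

```python
# Each transaction records item presence as 1/0; sanitization modifies these values.
db = [
    {"bread": 1, "milk": 1, "butter": 0},
    {"bread": 1, "milk": 0, "butter": 1},
]

def perturb(transaction, item):
    """Distortion: change a 1-value to a 0-value for the chosen item."""
    transaction[item] = 0

def block(transaction, item):
    """Blocking: replace the item's value with the unknown symbol '?'."""
    transaction[item] = "?"

perturb(db[0], "milk")
block(db[1], "butter")
print(db)
```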
Border-based: In this approach, using the concept of borders, the algorithm preprocesses the sensitive rules so that the minimum number of them is censored; database quality is thereby maintained and the side effects are minimized [14,9].
Exact: In this approach, the hiding problem is formulated as a constraint satisfaction problem (CSP), whose solution gives the minimum number of transactions that have to be sanitized in the original database. The CSP is then solved with binary integer
programming (BIP), using solvers such as ILOG CPLEX, GNU GLPK or XPRESS-MP [14,9]. Although this approach gives better solutions than the other approaches, the high time complexity of solving the CSP is a major problem. Gkoulalas-Divanis and Verykios proposed an approach for finding an optimal solution to the rule hiding problem [17].
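To give a flavour of the exact formulation, the following toy sketch finds the minimum set of transactions to sanitize by exhaustive search; a real implementation would express the same constraints as a binary integer program and hand them to a solver, and the data and names here are invented:

```python
from itertools import combinations

def min_transactions_to_sanitize(transactions, sensitive_itemsets, min_support):
    """Brute-force stand-in for the exact formulation: the smallest set of
    transactions whose sanitization pushes every sensitive itemset below
    min_support. Only usable on tiny, made-up instances."""
    n = len(transactions)
    for k in range(n + 1):
        for removed in combinations(range(n), k):
            ok = True
            for s in sensitive_itemsets:
                supp = sum(1 for i, t in enumerate(transactions)
                           if i not in removed and s <= t) / n
                if supp >= min_support:
                    ok = False
                    break
            if ok:
                return set(removed)   # indices of transactions to sanitize

db = [{"a", "b"}, {"a", "b"}, {"b", "c"}, {"a", "c"}]
print(min_transactions_to_sanitize(db, [{"a", "b"}], min_support=0.5))  # {0} or {1}
```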
Reconstruction-Based: A number of recently proposed techniques address the issue of privacy preservation by perturbing the data and then reconstructing the distributions at an aggregate level in order to perform the association rule mining. That is, these algorithms first perturb the data and then reconstruct the distributions. Depending on the method used to reconstruct the distributions and on the data types, the corresponding algorithms differ. Some of the approaches used are as follows.
Agrawal et al. [18] used a Bayesian algorithm for distribution reconstruction on numerical data. Then, Evfimievski et al. [19] proposed a uniform randomization approach for reconstruction-based association rule mining on categorical data. Before sending a transaction to the server, the client takes each item and, with probability p, replaces it by a new item not originally present in this transaction. This process is called uniform randomization; it generalizes Warner's "randomized response" method. The authors of [20] improved on the Bayesian-based reconstruction procedure by using an EM algorithm for distribution reconstruction.
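The uniform randomization step described above can be sketched as follows (item names are invented, and the sketch assumes at least one item of the universe is absent from the transaction):

```python
import random

def uniform_randomize(transaction, item_universe, p):
    """Each item of the transaction is, with probability p, replaced by an item
    not originally present in the transaction (after the description of [19])."""
    out = set()
    for item in transaction:
        if random.random() < p:
            out.add(random.choice(sorted(item_universe - transaction)))
        else:
            out.add(item)
    return out

universe = {"a", "b", "c", "d", "e"}
print(uniform_randomize({"a", "b"}, universe, p=0.3))
```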
Chen et al. [21] first proposed a Constraint-based Inverse Itemset Lattice Mining procedure (CIILM) for hiding sensitive frequent itemsets. Their data reconstruction is based on the itemset lattice. Another emerging privacy-preserving data sharing method, related to inverse frequent itemset mining, is inferring the original data from a given collection of frequent itemsets. This idea was first proposed by Mielikainen [22], who showed that finding a dataset compatible with a given collection of frequent itemsets is NP-complete.

An FP-tree based method for inverse frequent set mining, based on the reconstruction technique, is presented in [23]. The approach is divided into three phases: the first phase uses a frequent itemset mining algorithm to generate all frequent itemsets, with their supports and support counts, from the original database D; the second phase runs the sanitization algorithm over the frequent itemsets FS and obtains the sanitized frequent itemsets FS'; the third phase generates the released database D' from FS' using an inverse frequent set mining algorithm. This algorithm is, however, very complex, as it involves generating a modified dataset from the frequent sets.
Cryptography-Based: In many cases, multiple parties may wish to share aggregate private data without leaking any sensitive information at their end. This requires secure cryptographic protocols for sharing the information across the different parties [24,25,26,27]. One of the approaches used is as follows.

The paper by Assaf Schuster et al. [28] presents a cryptographic privacy-preserving association rule mining algorithm in which all of the cryptographic primitives involve only pairs of participants. The advantage of this algorithm is its scalability; the disadvantage is that a rule cannot be confirmed correct before the algorithm has gathered information from k resources. Thus, candidate generation occurs more slowly, and hence the convergence of the recall is delayed. The number of manager consultation messages is also high.
5. Conclusion
We have presented a classification, an extended description and a clustering of the various privacy-preserving algorithms for association rule mining. The work surveyed here indicates the ever-increasing interest of researchers in securing sensitive data and knowledge from malicious users. At present, privacy preservation is still at an early stage of development: many privacy-preserving algorithms for association rule mining have been proposed, but privacy-preserving technology needs further research because of the complexity of the privacy problem.
References
[1] S.R.M. Oliveira, O.R. Zaiane, Y. Saygin, "Secure association rule sharing", in: Advances in Knowledge Discovery and Data Mining, Proc. of the 8th Pacific-Asia Conference (PAKDD 2004), Sydney, Australia, 2004, pp. 74-85.
[2] E. Dasseni, V.S. Verykios, A.K. Elmagarmid, and E. Bertino, "Hiding Association Rules by using Confidence and Support", in: Proc. of the 4th Information Hiding Workshop, 2001, pp. 369-383.
[3] V.S. Verykios, A. Elmagarmid, E. Bertino, Y. Saygin, and E. Dasseni, "Association rule hiding", IEEE Transactions on Knowledge and Data Engineering, 16(4), 2004, pp. 434-447.
[4] S.R.M. Oliveira, O.R. Zaiane, "Privacy preserving frequent itemset mining", in: IEEE ICDM Workshop on Privacy, Security and Data Mining, vol. 14, 2002, pp. 43-54.
[5] S.R.M. Oliveira, O.R. Zaiane, "A unified framework for protecting sensitive association rules in business collaboration", Int. J. Bus. Intell. Data Min., 1, 2006, pp. 247-287.
[6] A. HajYasien, "Preserving privacy in association rule mining", Ph.D. Thesis, Griffith University, 2007.
[7] S.R.M. Oliveira, O.R. Zaiane, Y. Saygin, "Secure association rule sharing", in: Advances in Knowledge Discovery and Data Mining, Springer, 2004, pp. 74-85.
[8] V.S. Verykios, A. Gkoulalas-Divanis, "A Survey of Association Rule Hiding Methods for Privacy", Chapter 11 in: Privacy-Preserving Data Mining, 2008, pp. 267-289.
[9] A. Gkoulalas-Divanis, V.S. Verykios, Association Rule Hiding for Data Mining, Springer, 2010.
[10] S.R.M. Oliveira, O.R. Zaiane, "Privacy preserving frequent itemset mining", in: Proc. of the IEEE International Conference on Privacy, Security and Data Mining, vol. 14, 2002, pp. 43-54.
[11] V.S. Verykios, E.D. Pontikakis, Y. Theodoridis, L. Chang, "Efficient algorithms for distortion and blocking techniques in association rule hiding", Distributed and Parallel Databases, 22, 2007, pp. 85-104, doi: 10.1007/s10619-007-7013-0.
[12] Y. Saygin, V.S. Verykios, C. Clifton, "Using unknowns to prevent discovery of association rules", ACM SIGMOD Record, 30, 2001, pp. 45-54.
[13] M. Atallah, E. Bertino, A. Elmagarmid, et al., "Disclosure limitation of sensitive rules", in: Proc. of the 1999 Workshop on Knowledge and Data Engineering Exchange, 1999.
[14] N.R. Radadiya, N.B. Prajapati, K.H. Shah, "Privacy Preserving in Association Rule Mining", 2, 2013, pp. 208-213.
[15] Y. Guo, "Reconstruction-based association rule hiding", in: Proc. of the SIGMOD 2007 Ph.D. Workshop on Innovative Database Research, 2007, pp. 51-56.
[16] Y.K. Jain, V.K. Yadav, G.S. Panday, "An Efficient Association Rule Hiding Algorithm for Privacy Preserving Data Mining", Int. J. Comput. Sci. Eng., 3, 2011, pp. 2792-2798.
[17] A. Gkoulalas-Divanis, V.S. Verykios, "An integer programming approach for frequent itemset hiding", in: Proc. of the 15th ACM International Conference on Information and Knowledge Management, ACM Press, New York, USA, 2006, pp. 748-757.
[18] C. Clifton, M. Kantarcioglu, X. Lin, and M.Y. Zhu, "Tools for privacy preserving distributed data mining", SIGKDD Explorations, 4(2), 2002.
[19] A. Evfimievski, R. Srikant, R. Agrawal, J. Gehrke, "Privacy Preserving Mining of Association Rules", in: Proc. of SIGKDD 2002, Edmonton, Alberta, Canada, 2002.
[20] D. Agrawal and C.C. Aggarwal, "On the design and quantification of privacy preserving data mining algorithms", in: Proc. of the 20th Symposium on Principles of Database Systems, Santa Barbara, California, USA, May 2001.
[21] X. Chen, M. Orlowska, and X. Li, "A new framework for privacy preserving data sharing", in: Proc. of the 4th IEEE ICDM Workshop: Privacy and Security Aspects of Data Mining, IEEE Computer Society, 2004, pp. 47-56.
[22] T. Mielikainen, "On inverse frequent set mining", in: Proc. of the 3rd IEEE ICDM Workshop on Privacy Preserving Data Mining, IEEE Computer Society, 2003, pp. 18-23.
[23] Z. Shang, J.D. Hamerlinck, "Secure Logistic Regression of Horizontally and Vertically Partitioned Distributed Databases", in: Data Mining Workshops, ICDM Workshops 2007, Seventh IEEE International Conference, 28-31 Oct. 2007, pp. 723-728.
[24] W. Du, M. Atallah, "Secure Multi-party Computation: A Review and Open Problems", CERIAS Tech. Report 2001-51, Purdue University, 2001.
[25] I. Ioannidis, A. Grama, M. Atallah, "A secure protocol for computing dot-products in clustered and distributed environments", in: Proc. of the International Conference on Parallel Processing, 18-21 Aug. 2002, pp. 379-384.
[26] A. Sanil, A. Karr, X. Lin, and J. Reiter, "Privacy preserving analysis of vertically partitioned data using secure matrix products", Journal of Official Statistics, 2007.
[27] M. Kantarcioglu, C. Clifton, "Privacy-preserving distributed mining of association rules on horizontally partitioned data", in: Proc. of the ACM SIGMOD Workshop on Research Issues on Data Mining and Knowledge Discovery (DMKD'02), Madison, Wisconsin, 2002, pp. 24-31.
[28] A. Schuster, R. Wolff, B. Gilburd, "Privacy-Preserving Association Rule Mining in Large-Scale Distributed Systems", in: Proc. of the Fourth IEEE International Symposium on Cluster Computing and the Grid, 2004.
ACSIJ Advances in Computer Science: an International Journal, Vol. 4, Issue 4, No.16 , July 2015
ISSN : 2322-5157
www.ACSIJ.org
« ACASYA »: a knowledge-based system for aid in the storage,
classification, assessment and generation of accident scenarios.
Application to the safety of rail transport systems
Dr. Habib HADJ-MABROUK1, Dr. Hinda MEJRI2
1 Ability to supervise research, French Institute of Science and Technology for Transport, Land and Networks (IFSTTAR), France
[email protected]
2 Assistant Professor, Higher Institute of Transport and Logistics, University of Sousse, Tunisia
[email protected]
Abstract
Various researches in artificial intelligence are conducted to understand the problem of the transfer of expertise. Today we perceive two major independent research activities: knowledge acquisition, which aims to define methods, inspired especially by software engineering and cognitive psychology, to better understand the transfer of expertise, and machine learning, which proposes the implementation of inductive, deductive, abductive or analogy-based techniques to equip systems with learning abilities. The development of a knowledge-based support system, "ACASYA", for the safety analysis of guided transport systems led us to use both approaches jointly and in a complementary manner. The purpose of this tool is, first, to evaluate the completeness and consistency of accident scenarios and, secondly, to contribute to the generation of new scenarios that could help experts to conclude on the safe character of a new system. "ACASYA" consists of three learning modules, CLASCA, EVALSCA and GENESCA, dedicated respectively to the classification, evaluation and generation of accident scenarios.
Key-words: Transport system, Safety, Accident scenario, Acquisition, Assessment, Artificial intelligence, Expert system, Machine learning.
1. Regulatory context of research
Railway safety, formerly within the competence of the Member States alone and long overlooked by the European Union, is gradually becoming a nearly exclusive field of Community policy, in particular by means of the interoperability project. The European interest thus appears in the creation of Community institutions such as the European Railway Agency (ERA), with which France will have to collaborate, but also in the installation of safety checking and evaluation tools such as the statistical statement of rail transport or the common safety goals and methods. These measures will be binding on France, as was the case for the introduction of the railway infrastructure manager and as for the national authorities of safety (NAS). Parallel to this European dash, one also notes an awakening in France since decree 2000-286 of 30 March 2000 on railway security, which replaces the decree of 22 March 1942, which had hitherto been the only legal reference on the matter.
France is also setting up new mechanisms, contained in laws and regulations, in order to improve the level of security. We note the introduction of independent bodies or technical services (ITS) in charge of certification and of the technical organization of investigation, and even the decree related to the physical and professional ability conditions of staff. Concerning the aptitude of staff, it should be stressed that the next challenge for Europe lies in the necessary harmonization of working conditions, which is at the same time a requirement for safety and for interoperability.
This study thus shows that safety, from a theoretical and legal perspective, undergoes and will undergo many changes. We notice in particular the presence of a multiplicity of actors who support and share the responsibility for railway safety in France and Europe.
Whether they are public or private, they all have obligations to respect and are partly subject to the control of independent bodies.
2. Introduction
As part of its missions of expertise and technical assistance, IFSTTAR evaluates the safety files of guided transportation systems. These files include several hierarchical safety analyses such as the preliminary analysis of risks (PAR), the functional safety analysis (FSA), the analysis of failure modes, their effects and their criticality (AFMEC), and the analysis of the impact of software errors [2], [3]. These analyses are carried out by the manufacturers. It is advisable to examine them with the greatest care, since their quality conditions, in the end, the safety of the users of the transport systems. Independently of the manufacturer, the experts of IFSTTAR carry out complementary safety analyses. They are led to imagine new potential accident scenarios in order to improve the exhaustiveness of the safety studies. In this process, one of the difficulties consists in finding the abnormal scenarios that can lead to a particular potential accident. This is the fundamental point which justified this work.
The development of the knowledge base of a KBS requires the use of knowledge acquisition techniques and methods in order to collect, structure and formalize knowledge. Knowledge acquisition alone has not made it possible to extract effectively some types of expert knowledge needed to analyse and evaluate safety. Therefore, the use of knowledge acquisition in combination with machine learning appears to be a very promising solution. The approach adopted to design and implement the tool "ACASYA" involved the following two main activities:
• Extracting, formalizing and storing hazardous situations to produce a library of standard cases which covers the entire problem, called a historical scenario knowledge base; this process entailed the use of knowledge acquisition techniques,
• Exploiting the stored historical knowledge in order to develop safety analysis know-how which can assist experts in judging the thoroughness of the manufacturer's suggested safety analysis; this second activity involves the use of machine learning techniques.
While cognitive psychology and software engineering have produced support methods and tools for knowledge acquisition, the exploitation of these methods remains limited in a complex industrial context. We consider that, located downstream, machine learning can advantageously contribute to completing and strengthening the conventional means of knowledge acquisition.
The ACASYA tool [4], which is the subject of this paper, provides assistance in particular during the phase in which the completeness of the functional safety analysis (FSA) is evaluated. Generally, the aim of the FSA is to ensure that all safety measures have been considered in order to cover the hazards identified in the preliminary hazard analyses and, therefore, that all safety measures are taken into account to cover potential accidents. These analyses provide safety criteria for system design and for the implementation of hardware and software safety. They also impose safety criteria related to the sizing, operation and maintenance of the system. They can bring out adverse safety scenarios that call the specification into question.
The application of the knowledge acquisition means, described further in [5], led primarily to the development of a generic model for representing accident scenarios and to the establishment of a historical knowledge base of scenarios that includes about sixty scenarios for the risk of collision.
Knowledge acquisition nevertheless faces the difficulty of extracting the expertise involved in each step of the safety evaluation process. This difficulty stems from the complexity of the expertise, which naturally encourages the experts to express their know-how through significant examples or accident scenarios experienced on automated transport systems already certified or approved. Consequently, the expertise must be elicited from examples. Machine learning [6], [7] makes it possible to facilitate the transfer of knowledge, in particular from experimental examples. It contributes to the development of KBS knowledge bases while reducing the intervention of the knowledge engineer.
3. Approach used to develop the "ACASYA" system
The modes of reasoning used in the context of safety analysis (inductive, deductive, analogical, etc.) and the very nature of safety knowledge (incomplete, evolving, empirical, qualitative, etc.) mean that a conventional computing solution is unsuitable and that the use of artificial intelligence techniques seems more appropriate. The aim of artificial intelligence is to study and simulate human intellectual activities. It attempts to create machines capable of performing intellectual tasks, and has the ambition of giving computers some of the functions of the human mind: learning, recognition, reasoning or linguistic expression. Our research has involved three specific aspects of artificial intelligence: knowledge acquisition, machine learning and knowledge-based systems (KBS).
Indeed, the experts generally consider that it is simpler to describe experimental examples or cases than to make their decision-making processes explicit. The introduction of automatic learning systems operating on examples allows new knowledge to be generated that can help the expert to solve a particular problem. The expertise of a field is not only held by the experts but also, implicitly, distributed and stored in a mass of historical data that the human mind finds difficult to synthesize. Extracting from this mass of information knowledge that is relevant for an explanatory or decisional aim is one of the objectives of automatic learning.
Learning from examples is, however, insufficient to acquire all the know-how of the experts; it requires the application of knowledge acquisition to identify the problem to be solved and to extract and formalize the knowledge accessible by the usual means of acquisition. In this sense, each of the two approaches can compensate for the weaknesses of the other. To improve the expertise transfer process, it is thus interesting to reconcile these two approaches.
Our approach is to exploit, by learning, the base of example scenarios, in order to produce knowledge that can help the experts in their mission of evaluating system safety.
4. The "ACASYA" system of aid to safety analysis
The ACASYA system [1], [4] is based on the combined utilization of knowledge acquisition techniques and machine learning. This tool has two main characteristics. The first is the consideration of the incremental aspect, which is essential to achieve a gradual improvement of the knowledge learned by the system. The second characteristic is man/machine co-operation, which allows experts to correct and supplement the initial knowledge produced by the system. Unlike the majority of decision-making aid systems, which are intended for a non-expert user, this tool is designed to co-operate with experts in order to assist them in their decision making. The organization of ACASYA reproduces as much as possible the strategy adopted by the experts. Summarized briefly, safety analysis involves an initial recognition phase during which the scenario in question is assimilated to a family of scenarios known to the expert; this phase requires a definition of scenario classes. In a second phase, the expert evaluates the scenario in an attempt to evolve unsafe situations which have not been considered by the manufacturer. These situations provide a stimulus to the expert in formulating new accident scenarios.
4.1. Functional organization of the "ACASYA" system
As shown in figure 1, this organization consists of four main modules. The first, the formalization module, deals with the acquisition and representation of a scenario and is part of the knowledge acquisition phase. The three other modules, CLASCA, EVALSCA and GENESCA, following the general principle described previously, cover the problems of classification, evaluation and generation respectively.
Fig. 1: Functional organization of the ACASYA system [1]. A formalization module produces the static and dynamic descriptions of a new scenario; CLASCA (scenario classification) assigns it to a class Ck; EVALSCA (scenario evaluation) confronts it with the historical scenario knowledge base and outputs the summarized failures likely to induce a system fault; GENESCA (scenario generation) produces generated scenarios, which are then validated to give validated scenarios.
4.2. Functional architecture of the "CLASCA" system mock-up
CLASCA [8] is a learning system which uses examples in order to find classification procedures. It is inductive, incremental and dedicated to the classification of accident scenarios. In CLASCA, the learning process is non-monotonic, so that it is able to deal with incomplete accident scenario data, and, on the other hand, interactive (supervised), so that the knowledge produced by the system can be checked and the expert is assisted in formulating his expertise. CLASCA incrementally develops disjunctive descriptions of the classes of historical scenarios, with the dual purpose of characterizing a set of unsafe situations and of recognizing and identifying a new scenario which is submitted to the experts for evaluation. CLASCA contains five main modules (figure 2):
1. A scenario input module;
2. A predesign module which is used to assign values to the parameters and learning constraints which are
required by the system; these parameters mainly affect the relevance and quality of the learned knowledge and the convergence speed of the system;
3. An induction module for learning the descriptions of scenario classes;
4. A classification module, which aims to deduce the class membership of a new scenario from the previously induced class descriptions, by referring to an adequacy rate;
5. A dialogue module for explaining the reasoning of the system and recording the decision of the experts. For justification, the system keeps track of the deduction phase in order to construct its explanation. Following this justification of the classification decisions, the expert decides either to accept the proposed classification (in which case CLASCA will learn the scenario) or to reject it. In the second case it is the expert who decides what subsequent action should be taken: he may, for example, modify the learning parameters, create a new class, edit the description of the scenario or put the scenario aside for later inspection.
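As a purely illustrative sketch (the descriptor names, example data and the simple matching ratio used as an adequacy rate are invented, and this is not the CLASCA algorithm itself), the idea of incremental, disjunctive class descriptions and classification by an adequacy rate can be expressed as:

```python
# class name -> list of stored example descriptor sets (a disjunctive description)
class_descriptions = {}

def learn(class_name, scenario):
    """Incrementally add a classified scenario (a set of descriptor/value pairs)
    to the disjunctive description of its class."""
    class_descriptions.setdefault(class_name, []).append(set(scenario))

def classify(scenario, threshold=0.5):
    """Propose the class whose stored examples best cover the new scenario if the
    best adequacy rate reaches the threshold; otherwise defer to the expert."""
    best_class, best_rate = None, 0.0
    for name, examples in class_descriptions.items():
        rate = max(len(scenario & ex) / len(ex) for ex in examples)
        if rate > best_rate:
            best_class, best_rate = name, rate
    return (best_class, best_rate) if best_rate >= threshold else (None, best_rate)

learn("initialization", {("zone", "terminus"), ("function", "initialization")})
print(classify({("zone", "terminus"), ("function", "initialization"), ("actor", "driver")}))
```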
Fig. 2: Architecture of the CLASCA system mock-up. Historical scenarios and the accident scenario to classify enter through the scenario input module; a predesign module sets the learning parameters, the classification parameters and the acceptability conditions for a scenario; the learning (induction) module builds the currently learned knowledge (descriptions of scenario classes) from the base of historical scenarios; the classification (deduction) module proposes a classification of a new scenario, which is submitted to validation and expert decision, leading to enrichment of the base or to adjustment of the parameters.

4.3. Functional architecture of the "EVALSCA" system mock-up
The objective of the EVALSCA module [1], [4] is to confront the list of summarized failures (sf) proposed in the scenario to evaluate with the list of archived historical summarized failures, in order to stimulate the formulation of unsafe situations not considered by the manufacturer. An sf is a generic failure, resulting from the combination of a set of elementary failures having the same effect on the system behaviour. This evaluation approach makes it possible to draw the expert's attention to possible failures not taken into account during the design phase, which could endanger the safety of the transportation system. In this sense, it can promote the generation of new accident scenarios.
The second level of processing considers the class deduced by CLASCA in order to evaluate the consistency of the scenario. The evaluation approach is centred on the summarized failures involved in the new scenario to evaluate. The evaluation of this type of scenario involves the two modules below [4] (figure 3):
• A mechanism for learning CHARADE rules [9], which makes it possible to deduce sf recognition functions and so to generate a base of evaluation rules;
• An inference engine which exploits the above base of rules in order to deduce which sfs are to be considered in the new scenario to assess.
These two steps are detailed hereafter.
Fig. 3: Architecture of the EVALSCA system mock-up [3]
4.3.1. Learning of summarized failure recognition functions
This learning phase attempts, using the base of examples formed previously, to generate a system of rules reflecting the recognition functions of the summarized failures. The purpose of this stage is to generate a recognition function for each sf associated with a given class. The sf recognition function is a production
rule which establishes a link between a set of facts (the parameters which describe a scenario, or descriptors) and the sf fact. There is a logical dependency relationship, which can be expressed in the following form:
If    Principle of cantonment (PC)
and  Potential risks or accidents (R)
and  Functions related to the risk (FRR)
and  Geographical zones (GZ)
and  Actors involved (AI)
and  Incident functions (IF)
Then Summarized failures (SF)

A base of evaluation rules can be generated for each class of scenarios. Any generated rule must contain the SF descriptor in its conclusion. It proved necessary to use a learning method which allows production rules to be generated from a set of historical examples (or scenarios). The specification of the properties required of the learning system and the analysis of existing systems led us to choose the CHARADE mechanism [9]. Its ability to generate automatically a system of rules, rather than isolated rules, and to produce rules suited to developing sf recognition functions, makes CHARADE particularly interesting. A sample of rules generated by CHARADE is given below; they relate to the initialization sequence class.

If    Actors involved = operator_itinerant
and  Incident functions = instructions
and  Elements involved = operator_in_cc
Then Summarized failures = SF11 (invisible element in the zone of completely automatic driving)

If    Principle of cantonment = fixed_cantonment
and  Functions related to the risk = initialization
and  Functions related to the risk = train_localization
and  Functions related to the risk = alarm_management
and  Geographical zones = terminus
and  Actors involved = AD_with_redundancy
and  Incident functions = instructions
Then Summarized failures = SF10 (erroneous re-establishment of safety frequency/high voltage permission)
4.3.2. Deduction of the summarized failures to be considered in the scenario to evaluate
During the previous step, the CHARADE module created a system of rules from the current base of learning examples, relative to the class Ck proposed by the CLASCA system. The sf deduction stage first requires a transfer phase, in which the generated rules are transferred to an expert system in order to construct a scenario evaluation knowledge base. This knowledge base contains (figure 3):
• The base of rules, which is split into two parts: a current base of rules, which contains the rules that CHARADE has generated in relation to the class suggested by CLASCA at the instant t, and a store base of rules, composed of the list of historical bases of rules. Once a scenario has been evaluated, the current base of rules becomes a store base of rules;
• The base of facts, which contains the parameters describing the manufacturer's scenarios to evaluate and which is enriched, over the course of inference, with deduced facts or descriptors.
This scenario evaluation knowledge base (base of facts and base of rules), exploited by forward chaining by an inference engine, generates the summarized failures which must be involved in the description of the scenario to evaluate.
The plausible sfs deduced by the expert system are analyzed and compared to the sfs which have actually been considered by the scenario to evaluate. This confrontation can reveal one or more sfs not taken into account in the design of the protective equipment and likely to affect the safety of the transport system. Such a suggestion may assist in generating unsafe situations which have not been foreseen by the manufacturer during the specification and design phases of the system.
4.4. Functional architecture of the "GENESCA" system mock-up
In complement to the two previous levels of processing, which involve the static description of the scenario (descriptive parameters), the third level [10] involves in particular the dynamic description of the scenario (the Petri net model), as well as three mechanisms of reasoning: induction, deduction and abduction. The aid in the generation of a new scenario is based on the injection of an sf, declared possible by the previous level, into a particular sequencing of the Petri net marking evolution.
This generation approach includes two distinct processes: static generation and dynamic generation (figure 4). The static approach seeks to derive new static descriptions of scenarios from the evaluation of a new scenario. It exploits, by automatic learning, the whole set of historical scenarios in order to give an opinion on the static description of a new scenario.
If the purpose of the static approach is to reveal the static elements which describe the general context in which the new scenario takes place, the dynamic approach is concerned with creating dynamics in this context in order to suggest sequences of events that could lead to a potential accident. The method consists, initially, in characterizing by learning the knowledge implied in the dynamic descriptions of the historical scenarios of the same class as the scenario to evaluate, and in representing it by a "generic" model. The next step is to animate this generic model by simulation in order to discover eventual scenarios that could lead to one or more adverse safety situations.
More precisely, the dynamic approach involves two principal phases (figure 4):
• A modeling phase, which must make it possible to work out a generic model of a class of scenarios; the modeling attempts to transform a set of Petri nets into rules written in propositional logic;
• A simulation phase, which exploits the previous model to generate possible dynamic descriptions of scenarios.

Fig. 4: Approach to aid the generation of embryos of accident scenarios

During the development of the GENESCA model, we met with methodological difficulties. The model produced does not yet make it possible to generate new, relevant and exploitable scenarios systematically, but only embryos of scenarios which will stimulate the imagination of the experts in the formulation of accident scenarios. Given the absence of prior work in this field and the originality and complexity of the problem, this difficulty was predictable, and solutions are under investigation.
References
[1] Hadj-Mabrouk H., "Apport des techniques d'intelligence artificielle à l'analyse de la sécurité des systèmes de transport guidés", Revue Recherche Transports Sécurité, no. 40, INRETS, France, 1993.
[2] Hadj-Mabrouk H., "Méthodes et outils d'aide aux analyses de sécurité dans le domaine des transports terrestres guidés", Revue Routes et Transports, Montréal-Québec, vol. 26, no. 2, pp. 22-32, été 1996.
[3] Hadj-Mabrouk H., "Capitalisation et évaluation des analyses de sécurité des automatismes des systèmes de transport guidés", Revue Transport Environnement Circulation, Paris, TEC no. 134, pp. 22-29, janvier-février 1996.
[4] Hadj-Mabrouk H., "ACASYA: a learning system for functional safety analysis", Revue Recherche Transports Sécurité, no. 10, France, September 1994, pp. 9-21.
[5] Angele J., Sure Y., "Evaluation of ontology-based tools workshop", 13th International Conference on Knowledge Engineering and Knowledge Management (EKAW 2002), Sigüenza, Spain, 30 September 2002, pp. 63-73.
[6] Cornuéjols A., Miclet L., Kodratoff Y., "Apprentissage artificiel : concepts et algorithmes", Eyrolles, août 2002.
[7] Ganascia J.-G., "L'intelligence artificielle", Le Cavalier Bleu, mai 2007.
[8] Hadj-Mabrouk H., "CLASCA, un système d'apprentissage automatique dédié à la classification des scénarios d'accidents", 9ème colloque international de fiabilité & maintenabilité, La Baule, France, 30 mai-3 juin 1994, pp. 1183-1188.
[9] Ganascia J.-G., "AGAPE et CHARADE : deux mécanismes d'apprentissage symbolique appliqués à la construction de bases de connaissances", Thèse d'État, Université Paris-Sud, mai 1987.
[10] Mejri L., "Une démarche basée sur l'apprentissage automatique pour l'aide à l'évaluation et à la génération de scénarios d'accidents. Application à l'analyse de sécurité des systèmes de transport automatisés", Thèse de doctorat, Université de Valenciennes, 6 décembre 1995, 210 p.
Overview of routing algorithms in WBAN
Maryam Asgari 1, Mehdi Sayemir 2, Mohammad Shahverdy 3
1 Computer Engineering Faculty, Islamic Azad University, Tafresh, Iran, [email protected]
2 Safahan Institute of Higher Education, Esfahan, Iran, [email protected]
3 Computer Engineering Faculty, Islamic Azad University, Tafresh, Iran, [email protected]
Abstract
The development of wireless computer networks and advances in the fabrication of integrated electronic circuits, key elements in building miniature sensors, make it possible to use wireless sensor networks to monitor the environment in and around the bodies of humans and animals. This field of research is called the wireless body area network (WBAN), and the IEEE has assigned two standards to it, IEEE 802.15.6 and IEEE 802.15.4. WBANs aim to facilitate, accelerate and improve the accuracy and reliability of medical care, and because of their wide range of challenges many studies have been devoted to this field. According to IEEE 802.15.6, the topology of a WBAN is a star and one-hop and two-hop communications are supported; however, due to changes in body position and the different postures the human body takes (for example walking, running or sitting), connecting nodes to the sink or PDA in one or two hops is not always possible. The possibility of using multi-hop communication, and consequently the existence of multiple paths between source and destination, raises the question of over which path, and through which neighbour, the transmitter should send its data to the receiver. Many routing algorithms have been proposed to address this question, and in this article we evaluate them.
Keywords: Routing Algorithms, WBAN
1. Introduction
According to the latest censuses and statistical analyses, the population of the world is increasing; on the other hand, with the development of medical technologies and social security, increased life expectancy and therefore the aging of the population are inevitable [1]. The aging of the population, however, causes problems such as the need for medical care for the elderly, and thus leads to increased medical costs. Research on this subject shows that by 2022 medical expenses will account for about 20% of America's GDP, which is in itself a major problem for the government. As further evidence of this claim, medical costs in America grew from about $250 billion in 1980 to $1.85 trillion in 2004, and this despite the fact that 45 million people in America have no health insurance. These statistics lead the researcher to one conclusion: health systems must change so that the cost of treatment is lowered and health care in the form of prevention is raised [2, 3, 4, 5, 6, 7].
WBANs have been introduced to increase the speed and accuracy of health care and to improve the quality of human life while saving costs. The sensors of a WBAN are placed inside or on the body. In both cases the nodes need to communicate wirelessly with the sink, and the resulting radiation can increase the temperature of the nodes and their surroundings over long periods and thereby harm the body and cause serious injury to the surrounding tissues [1]. Broadly speaking, any proposal to reduce the amount of damage to the tissues is based on the following two rules:
1. reducing the power of the signals sent between the transmitter and the sink;
2. using multi-hop communication instead of single-hop communication.
It is clear that the lower the power of the transmitted signals, the smaller the area around the node that is damaged; but as the transmission power is lowered, the link between transmitter and sink is more likely to disconnect, in other words link reliability is reduced.
Because the connection must be kept active and the applications are highly sensitive, guaranteeing reliable links is a high priority, and link reliability must be ensured at a high level. All of these challenges make it inevitable to replace single-hop connections to the sink with multi-hop connections.
As mentioned, in the IEEE 802.15.6 standard the topology of a WBAN is a star, so the connection between the nodes and the sink (hub) is one- or two-hop. Because the human body goes through different motions within a short time (running, walking, sitting, sleeping, ...), there is always a chance that the connection between a node and the sink breaks and the network becomes partitioned [8, 9, 10]. One solution is for the nodes to increase their signal power, but, as mentioned, this raises the temperature of the nodes and injures the tissues surrounding them; using multi-hop communication is therefore inevitable [11, 12, 13, 14].
Thus, whatever the reason and whatever the position of the wireless nodes, replacing the single-hop connection to the sink with multi-hop communication is a useful step. The ability to use multi-hop communication, and consequently the existence of multiple paths between source and destination, raises the question of over which path, and through which relay, the transmitter should send its data to the receiver. Many routing algorithms have been proposed to address this question, and in this article we evaluate them.
The rest of this article is organized as follows: Section 2 is devoted to the use of WBANs in the medical field, Section 3 describes the routing problems in WBANs, Section 4 analyzes some well-known routing algorithms, a comparison and assessment are provided in Section 5, and the conclusion is given in Section 6.
2. Usage of WBAN in medical field
Due to the growth of technology, the use of medical care services will undergo a massive transformation in the health field. It is expected that WBANs will significantly change health-care systems and enable doctors to detect illnesses faster and more accurately and to have more initiative in times of crisis [2, 3]. Statistics show that more than 30% of the causes of death in developed countries are due to cardiovascular problems; if monitoring technology is used, this number can be greatly reduced. For example, with a WBAN the blood pressure, body temperature and heart rate of the patient, which are all vital signs, can be monitored continuously. After measuring them, the WBAN sensors can send the vital signals to a device connected to the Internet, for example a cell phone; the cell phone can forward the data over its Internet connection to the doctor or the medical team, who can then decide what needs to be done.
The uses of WBANs in the medical field can be divided into three categories:
1. Wearable WBANs: this clothing, or more formally this wearable equipment, normally consists of cameras, sensors for checking vital signs, sensors for communicating with a central unit and sensors for monitoring the area surrounding the person. For military use, for example, soldiers equipped with such clothes can be tracked and their activity, tiredness and vital signs measured; athletes wearing them can check their medical symptoms online and at will, which lowers the risk of injury. As another example, a patient may be allergic to certain substances or gases; with this type of clothing the patient can be alerted before a dangerous disorder occurs.
2. WBANs placed inside the body: statistics show that in 2012, 4.6 percent of the world's population, nearly 285 million people, suffered from diabetes, and it is expected that by 2030 this figure will reach 438 million. Research also shows that, if the disease is not controlled, many problems, such as loss of vision, threaten the patient. Sensors and actuators embedded in the body, such as a syringe that injects a suitable dose of insulin into the patient's body when necessary, can greatly facilitate the control of diabetes. In addition, one of the leading causes of death worldwide is cancer, and it is predicted that by 2020 more than 15 million people will die from this disease. If implanted WBANs are used, the growth of cancer cells can be monitored, so that controlling tumour growth and reducing the death toll become easily accessible.
3. Remote control of tools and medical equipment: the ability of WBAN sensors to connect to the Internet makes it possible to network tools and medical equipment and to control the equipment acceptably from a distance, which is known as ambient assisted living (AAL); in addition to saving time, costs are greatly reduced.
3. Routing challenges in WBAN
So far, numerous routing algorithms have been presented for ad hoc networks and wireless sensor networks. WBANs are very similar to MANETs in the placement and motion of the nodes; of course, the movement of the nodes in a WBAN is usually grouped, meaning that all network nodes move while keeping their position with respect to one another, whereas in a MANET each node moves independently of the others. In addition, energy consumption is more restricted in WBANs, because inserting a node or replacing its battery, especially when the node is placed inside the patient's body, is much harder than replacing a node in a traditional sensor network, since surgery is usually required. Hence longevity is even more important in WBANs; also, the rate of topology change and the speed of WBAN nodes are far greater than in sensor networks. Based on the above, routing protocols designed for MANETs and WSNs are not usable in WBANs. The challenges raised in WBANs are summarized below:
1. Body movements: the movement of nodes caused by changes in the position of the human body creates serious problems for providing service in a WBAN, because the quality of the communication channel between nodes, as a function of time and of changes in body posture, is not stable. As a result, an appropriate routing algorithm must be able to adapt itself to a variety of changes in the network topology.
2. Temperature and interference: the temperature of a node usually increases with its computing activities and its communication with other nodes, and this increase in temperature may damage the human body. A good routing algorithm must manage the data-sending schedule so that a given node is not always chosen as the relay node.
3. Reduced energy consumption: a good routing algorithm must be able to use intermediate nodes as relay nodes instead of sending the data directly to a remote destination, so that it spreads the overhead in power usage between different nodes and thereby prevents early node death.
4. Increased longevity: a good routing algorithm must be able to select the data-transfer paths so that the total lifetime of the network increases.
5. Efficient communication radius: a good routing algorithm should consider an efficient communication radius for the nodes. The higher the communication range of a node, the higher its energy usage; but if the communication range of a node is very low, there is a chance that the node loses contact with the other nodes and the network splits into several pieces. Also, if the radius of communication is very low, the number of routing options towards the destination is usually reduced; this forces a node to use the same path repeatedly, which raises the temperature of the neighbouring node and increases its energy usage.
6. Finite number of hops: as mentioned before, the number of hops in the WBAN standard must be one or two. Using higher-quality channels can increase the reliability of packet delivery, but at the same time the number of hops usually increases; although the IEEE 802.15.6 standard imposes restrictions on the number of hops, routing algorithms usually pay no attention to these limitations.
7. Use in heterogeneous environments: WBANs usually consist of different sensors with different data-transfer rates. Routing algorithms must be able to provide quality of service in a variety of different applications.
4. The routing algorithms in WBAN
So far, numerous routing algorithms have been proposed for WBANs, and each tries to resolve some of the basic challenges posed in the previous section.
4.1. OFR & DOR routing algorithms
The OFR routing algorithm is the same flooding algorithm used in other types of networks. As the name suggests, in this algorithm no routing is done to send a packet from transmitter to receiver: the transmitter sends a copy of the packet to its neighbours, and each neighbour (essentially every node in the network) forwards the packet to its own neighbours after receiving it. With this method,
multiple copies of a packet arrive at the receiver. The receiver therefore keeps the first copy, the one with the smallest delay, and discards the other copies. The OFR method is usable in different settings; for example, it has high reliability (because it uses the full potential of the network) and a small delay, but because it uses too many resources its energy usage and the temperature it creates rise, and it also has low throughput.
On the opposite side of the OFR method is the DOR method, whose behaviour is completely opposite: in the DOR routing algorithm the sender only transmits its data to the receiver when a direct communication link is established between them, and if such a link is unavailable the transmitter holds its data in its buffer until the link is established. In other words, there is no routing in DOR. Unlike OFR, the DOR algorithm uses few resources, but because it does not benefit from multi-hop communication it suffers a large, and sometimes unacceptable, delay; for this reason its only use is in networks that are not sensitive to delay. Moreover, as the distance between transmitter and receiver increases, the sender may not be able to deliver its data to the receiver at all.
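The sketch below contrasts the two baseline forwarding policies on a toy time-slotted contact trace; the node names, link schedule and delay model are invented, so it illustrates the idea rather than any particular implementation.

# Contrast of the two baseline policies on a toy time-slotted contact trace.
# links[t] is the set of node pairs that can hear each other in slot t (invented data).
links = {
    0: {("src", "a")},
    1: {("a", "sink")},
    2: {("src", "sink")},
}

def ofr_delivery_slot(src="src", sink="sink"):
    """Opportunistic flooding: every holder of the packet forwards it to every current neighbour."""
    holders = {src}
    for t in sorted(links):
        for u, v in links[t]:
            if u in holders or v in holders:
                holders |= {u, v}
        if sink in holders:
            return t
    return None

def dor_delivery_slot(src="src", sink="sink"):
    """Direct-only routing: the source buffers the packet until it meets the sink itself."""
    for t in sorted(links):
        if (src, sink) in links[t] or (sink, src) in links[t]:
            return t
    return None

print("OFR delivers in slot", ofr_delivery_slot())   # slot 1, via node a
print("DOR delivers in slot", dor_delivery_slot())   # slot 2, only on direct contact

The difference between the two delivery slots is exactly the delay penalty DOR pays for refusing to relay through intermediate nodes.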
The DOR and OFR algorithms are of little practical use by themselves, but thanks to their low processing overhead and their other characteristics they are usually used as baselines against which other algorithms are compared.
4.2. PRPLC routing algorithm
In this algorithm [15] a metric known as the link likelihood factor (LLF) is defined. Each node has the duty of calculating the LLF of its links to the sink and to the other nodes and of sharing this information with the other nodes. This factor describes the quality of the channel between the transmitter and the other nodes; a higher value for a link indicates that the link is more likely to be up in the next period of time. As you know, there is always the possibility that the quality of the channel between two nodes drops temporarily due to changes in the body and reverts to normal after a few moments; for example, assume that the communication between two nodes, one on the wrist and the other on the chest, is fine in the normal posture, but when the person puts his hand behind his back this channel is effectively disturbed. The PRPLC algorithm uses a time window in the calculation of the LLF in order to ignore such instantaneous channel changes; in other words, when calculating the LLF it considers not only the current state of the channel but also the state of the channel during the t time units preceding the current state. Obviously, the larger the value of t, the smaller the impact of instantaneous channel quality; by choosing t large enough, instantaneous changes of channel quality hardly affect the LLF at all.
The quality factor of the link between nodes i and j at time slot t is denoted L_ij(t); it always lies between zero (no connection) and one (full connection), and after each time slot it is updated via relation (1):

L_ij(t) = L_ij(t-1) + (1 - L_ij(t-1)) * w_ij(t)   if the link (i, j) is up in slot t
L_ij(t) = L_ij(t-1) * w_ij(t)                     otherwise                            (1)

In each time slot in which the link between nodes i and j is up, L_ij thus increases at a rate governed by w. As mentioned, the choice of w has a great impact on the behaviour of the PRPLC algorithm: the lower w is, the more slowly L_ij rises while the link is connected, but when the channel loses its connection L_ij decreases quickly. It is desirable that w be chosen so that, for channels that have been connected for a long time, L_ij decreases slowly and increases quickly, and vice versa: for channels that have been disconnected for a long time and have poor quality, L_ij increases slowly and decreases quickly. In other words, w must be updated in each time slot; relation (2) shows how w is updated:
w_ij(t) = (1 / W) * sum of lambda_ij(r) over the last W time slots r                    (2)
In this relation W is the length of the time window, and lambda_ij(r) equals 1 if the channel between i and j was connected in time slot r and 0 otherwise. When node i wants to send data to a destination d and node j is in the neighbourhood of node i, node i sends its data to node j if the LLF between j and the destination is better than its own; in other words, considering that j is in a better LLF position with respect to the destination, node i prefers to send its data to the destination via node j.
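A minimal sketch of this bookkeeping for a single link is given below; it follows relations (1) and (2) as stated above, with an invented connectivity trace, and is meant only to show how the LLF reacts to postural disconnections.

# Sketch of the PRPLC link-likelihood bookkeeping for one link (i, j),
# following relations (1) and (2) above; the connectivity trace is invented.
W = 8                      # length of the sliding time window
history = []               # lambda values: 1 if the link was up in a slot, 0 otherwise
llf = 0.0                  # L_ij, kept between 0 (no connection) and 1 (full connection)

def update(link_up):
    """Update w over the window, then the LLF according to relation (1)."""
    global llf
    history.append(1 if link_up else 0)
    window = history[-W:]
    w = sum(window) / len(window)          # relation (2): windowed fraction of 'up' slots
    if link_up:
        llf = llf + (1.0 - llf) * w        # relation (1), link up: rise at rate w
    else:
        llf = llf * w                      # relation (1), link down: decay at rate w
    return llf

trace = [1, 1, 1, 1, 0, 0, 1, 1]           # e.g. a short postural disconnection
for up in trace:
    print(round(update(up), 3))

A link with a long connected history keeps a high w, so its LLF drops only slowly during a brief posture change, which is the behaviour described above.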
4.3. ETPA routing algorithm
In [17] an energy-aware routing algorithm, ETPA, has been presented; it simultaneously takes into account the measured temperature and the transmission power.
This multi-hop algorithm uses a cost function for choosing the best neighbour: the cost of each neighbour is a function of its temperature, its energy level and the signal strength received from it. To reduce interference and to eliminate channel listening time, the algorithm uses TDMA; each time frame is divided into N slots, where N is the number of network nodes, and each node sends in its own time slot. At the beginning of each period (comprising four time frames), each node, say node j, sends a hello message to its neighbouring nodes; each node then measures the signal power received from each neighbour and records it in a table. After the hello messages have been exchanged, each node is able to calculate the cost of sending via each neighbour, and the data are then sent through the cheapest neighbour.
Equation (3) expresses the cost of sending from node j to node i as a weighted combination, with a non-negative factor a, of the signal power received at node i (normalized by the highest received power), the remaining energy of the node (normalized by its initial energy) and its temperature (normalized by the highest temperature permitted in a node). While sending, each node chooses the neighbour with the lowest cost and forwards the packet to that node. If no suitable neighbour is found, the transmitter stores the packet in its buffer and recomputes the possibility of sending in the next time frame; ETPA suggests that packets remaining in the buffer for more than two time frames be discarded. The simulation results show that this algorithm performs well.
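Since equation (3) could not be reproduced exactly here, the sketch below assumes a simple normalized weighted combination of the three quantities named in the text (received signal power, remaining energy and temperature); the weights and numerical values are invented, and the sketch only shows how a node would rank its neighbours in each period.

# Sketch of ETPA-style neighbour ranking. The exact cost formula of [17] is not
# reproduced here; a normalized weighted sum of the three quantities named in the
# text is assumed, purely for illustration.
P_MAX, E_0, T_MAX = 1.0, 100.0, 40.0          # highest received power, initial energy, temperature limit
WEIGHTS = (1.0, 1.0, 1.0)                      # non-negative weighting factors (assumed)

def cost(received_power, energy, temperature):
    a, b, c = WEIGHTS
    return (a * (1 - received_power / P_MAX)   # weaker signal   -> higher cost
            + b * (1 - energy / E_0)           # drained battery -> higher cost
            + c * (temperature / T_MAX))       # hot relay       -> higher cost

# Per-period neighbour table built from the hello messages (invented values).
neighbours = {
    "n1": dict(received_power=0.9, energy=80.0, temperature=30.0),
    "n2": dict(received_power=0.6, energy=95.0, temperature=25.0),
}
best = min(neighbours, key=lambda n: cost(**neighbours[n]))
print("cheapest relay this period:", best)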
4.4. BAPR routing algorithm
As we saw, the PRPLC algorithm tries to minimize the effect of instantaneous fluctuations of channel quality on its channel-quality estimation function. This way of viewing the channel has a big drawback: topology changes that occur because of changes in body position do not affect the channel-quality measurement quickly. In other words, although transient events such as a momentary blocking do not affect the link factor in PRPLC, an event that lasts for a long time only slowly affects the LLF. Considering that in situations such as walking or running the posture of the body is constantly changing, the PRPLC algorithm essentially loses its efficiency, because the inertia of the moving body plays no role in the calculation of the LLF. Inertial measurement sensors can easily collect data on the acceleration and the direction of motion of the body; in addition, such sensors can detect sudden changes in body movement, and hence sudden changes in the quality of the links.
The BAPR algorithm [16] is, in summary, a routing method that combines the information used by the relay-node selection algorithms seen so far (such as ETPA and PRPLC) with inertial motion information. In this algorithm each node has a routing table whose records have three fields: the first is the ID of the destination node, the second the ID of the relay node, and the third the connection fee, that is, the cost assigned to the link between the transmitting node and the relay node. Unlike routing algorithms in MANETs, BAPR can keep several records for the same destination. In BAPR the relay node is chosen according to the connection fee, with the highest-fee nodes having priority; the reason for this choice is that, given the way the fee is calculated in BAPR, a link with a higher fee has higher reliability, and since BAPR wants to improve the chance of delivering the packet successfully, it chooses the relay node with the highest fee.
In BAPR, information about the inertia of the body's motion and about the local topology is taken into account when calculating the connection cost. The cost function of the algorithm uses the inertial motion data to cover immediate changes in the network topology, and the topology history to cover long-term changes; that is why in BAPR, when topology changes are fast, the information about the movements of the body is more valuable, and otherwise the history of the link is more important. The algorithm assumes that the momentum vector of the body can be measured by inertial measurement sensors.
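The following sketch shows the table lookup described above, keeping several candidate relays per destination and preferring the record with the highest connection fee; the table contents are invented, and in BAPR the fee itself would be computed from the inertial and historical information discussed in the text.

# Sketch of BAPR relay selection: several records per destination are allowed,
# and the relay with the highest connection fee is preferred. Table values are
# invented; in BAPR the fee would combine link history with body-motion data.
routing_table = [
    # (destination, relay, connection_fee)
    ("sink", "wrist_node", 0.42),
    ("sink", "chest_node", 0.87),
    ("sink", "hip_node",   0.55),
]

def choose_relay(destination):
    candidates = [rec for rec in routing_table if rec[0] == destination]
    if not candidates:
        return None
    return max(candidates, key=lambda rec: rec[2])[1]   # highest fee = most reliable link

print(choose_relay("sink"))    # -> chest_node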
5. Comparison and Analysis
In this part of the article we review, analyze and evaluate the routing algorithms described in the previous section. The most important criteria for evaluating the performance of a routing algorithm in WBANs are longevity and energy efficiency, reliability, successful delivery rate and packet delay. We therefore appraise the BAPR, ETPA and PRPLC algorithms with respect to these criteria, and use the OFR and DOR algorithms as baselines against which to measure their performance.
5.1. Average rate of successful delivery
As Figure 1 shows, the rate of successful message delivery in the OFR algorithm is higher than in every other algorithm, and the BAPR algorithm is in second place with only a small difference from OFR. As can be seen, the delivery rate of BAPR is about 30 percent higher than that of the PRPLC algorithm, which is a significant improvement.
Fig1. The average rate of successful delivery
5.2. Average end to end delay
The end-to-end delay is comparable in all the algorithms except DOR. Since the OFR algorithm uses flooding, its delay is a lower bound for routing algorithms; in other words, no routing algorithm can have a smaller delay than OFR. On this basis, the delay of all three algorithms PRPLC, ETPA and BAPR is acceptable.
Figure 2: The average end-to-end delay
5.3. The average number of hops
The number of hops of a message in a sense represents the amount of resources used. Thus, as expected, the number of hops in the OFR routing algorithm is higher than in the other algorithms, while the DOR algorithm has the minimum number of hops (exactly one hop). The number of hops of the BAPR algorithm is better than that of PRPLC and ETPA, although the difference between BAPR and ETPA is not large. It should be mentioned that the number of hops is calculated only over packets that have been delivered successfully, so the dropping of packets in PRPLC and ETPA prevents the hop count of these algorithms from growing.
Figure 3: Average number of hops
5.4. Other parameters
A class of routing algorithms pays no attention to the temperature generated by the nodes, which in some cases can even damage the body tissues of the patient, whereas ETPA pays special attention to this issue by placing the temperature of the relay nodes in its cost function. On the other hand, the BAPR routing algorithm, unlike the algorithms named so far, needs extra equipment, namely inertial measurement sensors. Although the OFR algorithm performs acceptably most of the time, it is never used in practice because it consumes too many network resources. Moreover, the processing overheads of ETPA and BAPR are high compared with OFR and DOR, while the PRPLC algorithm has a medium processing overhead compared with the other algorithms.
6. Conclusions
Due to the growing population and increasing life expectancy, traditional methods of treatment will no longer be efficient, because they impose heavy costs on the economy of a country. Considering that prevention and care are among the simplest ways to reduce deaths and medical costs, WBANs have been introduced for monitoring a patient's vital parameters and for injecting the materials the patient's body needs at specific times. The standard created for WBANs, known as IEEE 802.15.6, suggests a star topology and one- and multi-hop communications for sending data from the nodes to the sink. However, due to the change of body position during the day, a single-hop connection of the nodes to the sink will not stay continuously connected; to solve this problem, the use of multi-hop communication has been proposed. Since multi-hop communication goes hand in hand with the problem of route selection, much research has been done on routing algorithms in WBANs. In this article we reviewed some of the routing algorithms proposed for WBANs, discussed their strengths and weaknesses, and finally compared them with each other.
References
[1] A. Milenkovic, C. Otto, and E. Jovanov, "Wireless sensor networks for personal health monitoring: Issues and an implementation," Computer Communications (Special issue: Wireless Sensor Networks: Performance, Reliability, Security, and Beyond), vol. 29, pp. 2521-2533, 2006.
[2] C. Otto, A. Milenkovic, C. Sanders, and E. Jovanov, "System architecture of a wireless body area sensor network for ubiquitous health monitoring," J. Mob. Multimed., vol. 1, pp. 307-326, Jan. 2005.
[3] S. Ullah, P. Khan, N. Ullah, S. Saleem, H. Higgins, and K. Kwak, "A review of wireless body area networks for medical applications," arXiv preprint arXiv:1001.0831, 2010.
[4] M. Chen, S. Gonzalez, A. Vasilakos, H. Cao, and V. Leung, "Body area networks: A survey," Mobile Networks and Applications, vol. 16, pp. 171-193, 2011.
[5] K. Kwak, S. Ullah, and N. Ullah, "An overview of IEEE 802.15.6 standard," in 3rd International Symposium on Applied Sciences in Biomedical and Communication Technologies (ISABEL), pp. 1-6, Nov. 2010.
[6] S. Ullah, H. Higgin, M. A. Siddiqui, and K. S. Kwak, "A study of implanted and wearable body sensor networks," in Proceedings of the 2nd KES International Conference on Agent and Multi-Agent Systems: Technologies and Applications, Berlin, Heidelberg, pp. 464-473, Springer-Verlag, 2008.
[7] E. Dishman, "Inventing wellness systems for aging in place," Computer, vol. 37, pp. 34-41, May 2004.
[8] J. Xing and Y. Zhu, "A survey on body area network," in 5th International Conference on Wireless Communications, Networking and Mobile Computing (WiCom '09), pp. 1-4, Sept. 2009.
[9] S. Wang and J.-T. Park, "Modeling and analysis of multi-type failures in wireless body area networks with semi-Markov model," IEEE Communications Letters, vol. 14, pp. 6-8, Jan. 2010.
[10] K. Y. Yazdandoost and K. Sayrafian-Pour, "Channel model for body area network (BAN)," Networks, p. 91, 2009.
[11] M. Shahverdy, M. Behnami, and M. Fathy, "A New Paradigm for Load Balancing in WMNs," International Journal of Computer Networks (IJCN), vol. 3, issue 4, p. 239, 2011.
[12] "IEEE P802.15.6/D0 draft standard for body area network," IEEE Draft, 2010.
[13] D. Lewis, "IEEE P802.15.6/D0 draft standard for body area network," in 15-10-0245-06-0006, May 2010.
[14] "IEEE P802.15-10 wireless personal area networks," July 2011.
[15] M. Quwaider and S. Biswas, "Probabilistic routing in on-body sensor networks with postural disconnections," in Proceedings of the 7th ACM International Symposium on Mobility Management and Wireless Access (MobiWAC), pp. 149-158, 2009.
[16] S. Yang, J. L. Lu, F. Yang, L. Kong, W. Shu, and M. Y. Wu, "Behavior-Aware Probabilistic Routing for Wireless Body Area Sensor Networks," in Proceedings of the IEEE Global Communications Conference (GLOBECOM), Atlanta, GA, pp. 4444-4449, Dec. 2013.
[17] S. Movassaghi, M. Abolhasan, and J. Lipman, "Energy efficient thermal and power aware (ETPA) routing in body area networks," in 23rd IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Sept. 2012.
An Efficient Blind Signature Scheme based on Error Correcting Codes
Junyao Ye 1,2, Fang Ren 3, Dong Zheng 3 and Kefei Chen 4
1 Department of Computer Science and Engineering, Shanghai Jiaotong University, Shanghai 200240, China, [email protected]
2 School of Information Engineering, Jingdezhen Ceramic Institute, Jingdezhen 333403, China, [email protected]
3 National Engineering Laboratory for Wireless Security, Xi'an University of Posts and Telecommunications, Xi'an 710121, China, [email protected]
4 School of Science, Hangzhou Normal University, Hangzhou 310000, China, [email protected]
Abstract
Cryptography based on the theory of error correcting codes and on lattices has received wide attention in recent years. Shor's algorithm showed that, in a world where quantum computers are assumed to exist, number-theoretic cryptosystems are insecure, so it is important to design suitable, provably secure post-quantum signature schemes. Code-based public key cryptography has the property of resisting attacks by post-quantum computers. We propose a blind signature scheme based on the Niederreiter PKC in which the signature is blind to the signer; our scheme has the same security as the Niederreiter PKC. Performance analysis shows that the blind signature scheme is correct and that it has the properties of blindness, unforgeability and non-repudiation; in addition, it is more efficient than a blind signature scheme based on RSA. In the near future we will focus our research on group signatures and threshold ring signatures based on error correcting codes.
Keywords: Code-based PKC, Blind Signature, Unforgeability, Non-repudiation, Error Correcting Codes.
1. Introduction
Digital signature algorithms are among the most useful and recurring cryptographic schemes. Cryptography based on the theory of error correcting codes and on lattices has received wide attention in recent years, not only because of its interesting mathematical background but also because of Shor's algorithm [1], which showed that in a world where quantum computers are assumed to exist, number-theoretic cryptosystems are insecure. It is therefore of utmost importance to ensure that suitable, provably secure post-quantum signature schemes are available for deployment, should quantum computers become a technological reality.
The concept of blind signature was first proposed by Chaum et al. [2] in CRYPTO '82. In a blind signature mechanism, the user can get a valid signature without revealing the message or any related information to the signer; moreover, the signer is unable to link the signature to the corresponding signing session later. In 1992, Okamoto proposed a blind signature scheme [3] based on the Schnorr signature [4]. In 2001, Chien et al. [5] proposed a partially blind signature scheme based on the RSA public key cryptosystem. In 2007, Zheng Cheng et al. [6] proposed a blind signature scheme based on elliptic curves. A variety of mature blind signature schemes are used in electronic cash systems [7]. A hash function [8] compresses a message of arbitrary length to a fixed length; a secure hash function has the properties of one-wayness and collision resistance, which are widely used in digital signatures.
There are many blind signature schemes at present, but the development of post-quantum computers poses a huge threat to them. Code-based public key cryptography can resist attacks by post-quantum algorithms. Until now, only one work [9] has addressed blind signatures based on error correcting codes: the authors of [9] proposed a conversion from signature schemes connected to coding theory into blind signature schemes and gave formal security reductions to combinatorial problems not connected to number theory. This was the first blind signature scheme that cannot be broken by quantum computers cryptanalyzing the underlying signature scheme with Shor's algorithm [1]. In this paper we propose a blind signature scheme based on the Niederreiter [10] public key cryptosystem. Our scheme achieves the blindness, unforgeability and non-repudiation of a blind signature scheme, and finally we analyze its security.
The remainder of this paper is organized as follows.
Section 2 discusses theoretical preliminaries for the
presentation. Section 3 describes the digital signature,
blind signature and RSA blind scheme. Section 4 describes
the proposed blind signature based on Niederreiter PKC.
Section 5 formally analyses the proposal scheme and
proves that the scheme is secure and efficient. We
conclude in Section 6.
2. Preliminaries
We now recapitulate some essential concepts from coding
theory and security notions for signature schemes.
2.1 Coding Theory
The idea is to add redundancy to the message in order to be able to detect and correct errors. We use an encoding algorithm to add this redundancy and a decoding algorithm to reconstruct the initial message: as shown in Fig. 1, a message of length k is transformed into a message of length n with n > k.
Fig. 1: Encoding process. The message m is encoded into a codeword c, the channel adds a noise vector e, and the received word is y = c + e.
Definition 1 (Linear Code). An (n, k)-code over F_q is a linear subspace C of the linear space F_q^n of dimension k. Elements of F_q^n are called words, and elements of C are codewords. We call n the length and k the dimension of C.
Definition 2 (Hamming Distance, Weight). The Hamming distance d(x, y) between two words x, y is the number of positions in which x and y differ, that is, d(x, y) = |{i : x_i ≠ y_i}|, where x = (x_1, ..., x_n) and y = (y_1, ..., y_n). Here |S| denotes the number of elements, or cardinality, of a set S. In particular, d(x, 0) is called the Hamming weight of x, where 0 is the all-zero vector. The minimum distance of a linear code is the minimum Hamming distance between any two distinct codewords.
Definition 3 (Generator Matrix). A generator matrix of an (n, k)-linear code C is a k x n matrix G whose rows form a basis of the vector subspace C. We call a code systematic if it can be characterized by a generator matrix of the form G = (I_k | A), where I_k is the k x k identity matrix and A is a k x (n-k) matrix.
Definition 4 (Parity-check Matrix). A parity-check matrix of an (n, k)-linear code C is an (n-k) x n matrix H whose rows form a basis of the orthogonal complement of the vector subspace C, i.e. it holds that C = {c in F_q^n : H c^T = 0}.
2.2 SDP and GDP
A binary linear error-correcting code of length n and dimension k, denoted [n, k]-code for short, is a linear subspace of F_2^n having dimension k. If its minimum distance is d, it is called an [n, k, d]-code. An [n, k]-code C is specified either by a generator matrix G or by an (n-k) x n parity-check matrix H, as C = {mG : m in F_2^k} = {c in F_2^n : H c^T = 0}.
The syndrome decoding problem (SDP), as well as the closely related general decoding problem (GDP), are classical problems in coding theory and are known to be NP-complete [11].
Definition 5 (Syndrome decoding problem). Let r, n and w be integers, and let (H, w, s) be a triple consisting of an r x n matrix H, an integer w < n, and a vector s in F_2^r. Does there exist a vector e in F_2^n of weight wt(e) ≤ w such that H e^T = s?
Definition 6 (General decoding problem). Let k, n and w be integers, and let (G, w, c) be a triple consisting of a k x n matrix G, an integer w < n, and a vector c in F_2^n. Does there exist a vector m in F_2^k such that wt(c - mG) ≤ w?
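As a small illustration of Definition 5, and of why the problem becomes infeasible at cryptographic sizes, the following sketch searches exhaustively for a low-weight error vector matching a given syndrome over GF(2); the matrix, syndrome and weight bound are toy values.

# Exhaustive search for the syndrome decoding problem of Definition 5 (toy parameters).
# For cryptographic n, r, w this search space explodes, which is what the hardness relies on.
import numpy as np
from itertools import combinations

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])          # r x n parity-check matrix (toy example)
s = np.array([1, 1, 0])                        # target syndrome
w = 2                                          # weight bound

def solve_sdp(H, s, w):
    r, n = H.shape
    for weight in range(1, w + 1):
        for support in combinations(range(n), weight):
            e = np.zeros(n, dtype=int)
            e[list(support)] = 1
            if np.array_equal(H @ e % 2, s):
                return e
    return None

print(solve_sdp(H, s, w))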
2.3 Niederreiter Public Key Cryptosystem
A dual encryption scheme is the Niederreiter [10] cryptosystem, which is equivalent in terms of security to the McEliece cryptosystem [12]. The main difference between the McEliece and Niederreiter cryptosystems lies in the description of the codes: the Niederreiter encryption scheme describes codes through parity-check matrices. Both schemes, however, have to hide any structure through a scrambling transformation and a permutation transformation. The Niederreiter cryptosystem consists of three algorithms.
KeyGen(1^λ):
1. Choose n, k and t according to the security parameter λ;
2. Randomly pick a parity-check matrix H of an [n, k, 2t+1] binary Goppa code;
3. Randomly pick an n x n permutation matrix P;
4. Randomly pick an (n-k) x (n-k) invertible matrix S;
5. Calculate H' = S H P;
6. Output pk = (H', t) and sk = (S, H, P, γ), where γ is an efficient syndrome decoding algorithm for the chosen Goppa code.
Encrypt(pk, m): a constant-weight encoding algorithm φ maps bit strings to words of length n and constant weight t.
1. Calculate c = H' φ(m)^T;
2. Output c.
Decrypt(sk, c):
1. Calculate S^(-1) c = H P φ(m)^T and apply the syndrome decoder γ to recover P φ(m)^T;
2. Calculate φ(m)^T = P^(-1) (P φ(m)^T);
3. Output m = φ^(-1)(φ(m)).
The security of the Niederreiter PKC and of the McEliece PKC are equivalent: an attacker who can break one is able to break the other, and vice versa [12]. In the following, by "Niederreiter PKC" we refer to the dual variant of the McEliece PKC, and by "GRS Niederreiter PKC" to Niederreiter's proposal to use GRS codes. The advantage of this dual variant is the smaller public key size, since it is sufficient to store the redundant part of the matrix H'. The disadvantage is that the constant-weight mapping algorithm slows down encryption and decryption; in a setting where we only want to send random strings this disadvantage disappears, as the random string can be derived by applying a secure hash function.
3. Digital Signatures and Blind Signatures
3.1 Digital Signature
Under a protocol among all related parties, digital signatures are used in private communication. All messages can be signed and verified so as to ensure their integrity and non-repudiation. The concept of digital signatures originally comes from cryptography and denotes a method in which a sender's messages are processed via a hash function so that they are kept secure when transmitted. In particular, when a one-way hash function is applied to a message, the related digest, called the message digest, is generated. A one-way hash function is a mathematical algorithm that takes a message of any length as input and produces an output of fixed length; because of the one-way property, it is impossible for a third party to recover the message from it. The two phases of the digital signature process are described in the following.
1. Signing Phase:
The sender first feeds his message or data into a one-way hash function and obtains the corresponding message digest as output. The message digest is then encrypted with the private key of the sender, which produces the digital signature of the message. Finally, the sender sends his message or data along with the related digital signature to a receiver.
2. Verification Phase:
Once the receiver has the message as well as the digital signature, he repeats the same process as the sender, feeding the message into the one-way hash function to obtain a first message digest. He then decrypts the digital signature with the sender's public key to obtain a second message digest, and finally verifies whether the two message digests are identical.
When data are transmitted through the Internet, it is better for the data to be protected by a cryptosystem beforehand, to prevent them from being tampered with by an illegal third party. Basically, an encrypted document is sent, and it is impossible for an unlawful party to obtain the contents of the message unless he obtains the private key needed to decrypt it. Under a mutual protocol between senders and receivers, each sender holds a private key to process the messages he sends out, and a public key is used by the receiver to check the sent-out messages. When the two message digests are verified to be identical, the recipient can trust the received text message; thus the security of the data transmission can be ensured.
3.2 Blind Signature
In a blind signature, the signer signs the requester's message while knowing nothing about it; moreover, no one except the requester learns the correspondence of the message-signature pair. A short description of a blind signature follows.
1. Blinding Phase:
The requester first chooses a random number, called a blind factor, to mask his message so that the signer will be blind to the message.
2. Signing Phase:
When the signer gets the blinded message, he encrypts the blinded message directly with his private key and sends the resulting blind signature back to the requester.
3. Unblinding Phase:
The requester uses his blind factor to recover the signer's digital signature from the blind signature.
4. Signature Verification Phase:
Anyone can use the signer's public key to verify whether the signature is valid.
3.3 RSA Blind System
The first blind signature protocol, proposed by Chaum, is based on the RSA system [2]. Each requester first chooses a random blind factor r and supplies the blinded message m' = m·r^e mod n to the signer. Note that n is the product of two large secret primes p and q, and e is the public key of the signer, with the corresponding secret key d satisfying e·d = 1 mod (p-1)(q-1). The integer r is called a blind factor because the signer is blind to the message after the computation of m' = m·r^e mod n. On receiving m', the signer produces a signature s' on it directly, where s' = (m')^d mod n, and returns the signed message to the requester. The requester strips the signature to yield an untraceable signature s, where s = s'·r^(-1) mod n, and announces the pair (m, s). Finally, anyone can use the signer's public key e to verify whether the signature is valid by checking that the formula s^e = m mod n holds.
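Since Chaum's RSA construction is standard, a compact sketch can make the four phases explicit; the tiny primes below are obviously insecure and serve only to show the blinding and unblinding algebra.

# Chaum-style RSA blind signature with toy parameters (not secure; illustrative only).
import math, random

p, q = 61, 53
n = p * q
phi = (p - 1) * (q - 1)
e = 17                                   # public exponent of the signer
d = pow(e, -1, phi)                      # corresponding private exponent

def blind(m):
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (m * pow(r, e, n)) % n, r     # m' = m * r^e mod n

def sign_blinded(m_blind):
    return pow(m_blind, d, n)            # s' = (m')^d mod n

def unblind(s_blind, r):
    return (s_blind * pow(r, -1, n)) % n # s = s' * r^-1 mod n = m^d mod n

def verify(m, s):
    return pow(s, e, n) == m % n         # check s^e = m mod n

m = 1234                                 # in practice, a hash of the message
m_blind, r = blind(m)
s = unblind(sign_blinded(m_blind), r)
print(verify(m, s))                      # True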
4. Proposed Blind Signature Scheme
4.1 Initialization Phase
We randomly choose an irreducible polynomial g(x) of degree t over the finite field GF(2^m) and obtain an irreducible Goppa code. The generator matrix of the Goppa code is a k x n matrix G, and the corresponding parity-check matrix H has order (n-k) x n. We then choose an invertible matrix S of order (n-k) x (n-k) and a permutation matrix P of order n x n, and let H' = S H P. The private key is (S, H, P), and the public key is H'.
4.2 Proposed Blind Signature Scheme
There are two parties in the proposed blind signature scheme, the requester and the signer: the requester wants a signature on a message, and the signer can turn the message into a signature. Before the message is signed, the requester hashes it in order to hide its information.
1. Hash Phase
Assume the message m is a sequence of n dimensions, denoted m = (m_1, m_2, ..., m_n). We use a secure hash function h, for example MD5, to obtain the message digest h(m).
2. Blinding Phase
The requester randomly chooses an invertible matrix B as the blinding factor and computes the blinded digest B(m) from h(m) using B. He then sends the blinded message B(m) to the signer.
3. Signing Phase
After the signer has received the blinded message B(m), he computes the signature s_B of B(m) with his private key (S, H, P) and sends s_B to the requester.
4. Unblinding Phase
After the requester has received the signature s_B, he uses the invertible matrix B to remove the blinding and recover the signature s of h(m).
5. Verification Phase
Anyone can verify whether the signature is valid by computing H'·s, where H' is the public key of the signer: if it equals h(m), then s is a valid blind signature of the message m; otherwise, the signature is rejected.
5. Performance Analysis
5.1 Correctness
If the requester and the signer execute the process according to the above protocol, then the signature s is the correct signature of the message m signed by the signer, and anyone can verify the correctness with the public key: since the verification value computed from s and H' equals h(m), s is a valid signature of the message m.
5.2 Security Analysis
The blind signature scheme is based on the Niederreiter PKC, so the security of the proposed signature scheme reduces to the security of the Niederreiter PKC. Several methods have been proposed for attacking McEliece's system [13], [14], etc. Among them, the attack with the least complexity is to repeatedly select k bits at random from the n-bit ciphertext vector c in the hope that none of the selected k bits is in error; if there is no error in them, the information word can be recovered from the submatrix of G obtained by choosing the same k columns. If anyone could decompose the public key H', he would obtain S, H and P, and the blind signature scheme would be broken. However, there are far too many ways of decomposing H': the numbers of possible choices of S, H and P are astronomically large [15], so when n and t are large the computation is impossible and the decomposition attack is infeasible.
At present, the most efficient method of attacking the Niederreiter PKC is solving linear equations. Under such
an attack, the work factor grows rapidly with the code parameters; for the parameter sizes commonly proposed for the Niederreiter PKC, the work factor is so large that we consider the Niederreiter PKC secure enough. That is to say, the blind signature scheme is secure because it is based on the Niederreiter PKC, and the two have the same security.
5.3 Blindness
The blind factor B is chosen randomly by the requester; only the requester knows B, and others cannot obtain it in any way. The blinding step computes the blinded message from h(m) and B, and because of the privacy and randomness of B, the blinded message B(m) is unknown to the signer.
5.4 Unforgeability
From the signing process we can see that no one else can forge the signer's signature. If someone wants to forge the signer's signature, he must first obtain the blinded message B(m) from the requester and then forge a
signature. In order to forge a signature, the adversary encounters two obstacles. One is the blind factor B, which is random and secret: only the requester knows B. The other is that, even if the adversary obtains the blinded message B(m), he does not know the signer's private key (S, H, P), so it is impossible for him to forge a signature satisfying the verification equation. The requester himself cannot forge the signer's signature either: in the first step the message is hashed to h(m), and the hash function is not invertible.
5.5 Non-repudiation
The signature is produced with the signer's private key (S, H, P), which no one else can obtain; therefore the signer cannot deny his signature at any time.
5.6 Untraceability
After the signature-message pair (m, s) is published, even though the signer holds the signature information, he cannot link the blind signature to the blinded message B(m); that is to say, he cannot trace the original message m.
5.7 Compared with RSA
Fig2 Signature Time
We compare the blind signature time of RSA and of our scheme, as shown in Fig. 2, for four situations: plaintext lengths of 128 bits, 256 bits, 512 bits and 1024 bits. From Fig. 2 we can conclude that the signing time of our scheme is smaller than that of the RSA scheme, so our blind signature scheme based on the Niederreiter PKC is very efficient.
6. Conclusions
We propose a blind signature scheme based on the Niederreiter PKC whose security is based on the security of the Niederreiter PKC. First, we use a hash function to hash the message m and obtain the message digest h(m); we then randomly select an invertible matrix B as the blind factor, blind h(m) and obtain the blinded message B(m). After the signer has received B(m), he signs it with his private key; the user then unblinds what he receives and obtains the signature. By constructing the invertible matrix carefully, we can ensure that the signature is correct and verifiable. The performance analysis shows that the blind signature scheme is correct and that it has the properties of blindness, unforgeability and non-repudiation. The security of our scheme is the same as the security of the Niederreiter PKC, and in addition its efficiency is higher than that of the signature scheme based on RSA. Code-based cryptography can resist attacks by post-quantum computers, so the scheme is very applicable and worth considering. In the near future we will focus our research on group signatures and threshold ring signatures based on error correcting codes.
Acknowledgments
We are grateful to the anonymous referees for their invaluable suggestions. This work is supported by the National Natural Science Foundation of China (No. 61472472) and by the JiangXi Education Department (Nos. GJJ14650 and GJJ14642).
References
[1] P. W. Shor, "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer," SIAM J. Comput., 26:1484, 1997.
[2] D. Chaum, "Blind signatures for untraceable payments," Advances in Cryptology: Proceedings of Crypto 1982, Heidelberg: Springer-Verlag, 1982, pp. 199-203.
[3] T. Okamoto, "Provably secure and practical identification schemes and corresponding digital signature schemes," CRYPTO '92, 1992, pp. 31-52.
[4] C. P. Schnorr, "Efficient Identification and Signatures for Smart Cards," in Advances in Cryptology, CRYPTO '89, LNCS, pp. 239-252, Springer, 1989.
[5] H. Y. Chien, J. K. Jan, and Y. M. Tseng, "RSA-based partially blind signature with low computation," IEEE 8th International Conference on Parallel and Distributed Systems, Kyongju: IEEE Computer Society, 2001, pp. 385-389.
[6] Zheng Cheng, Guiming Wei, Haiyan Sun, "Design on blind signature based on elliptic curve," Chongqing University of Posts and Telecommunications, 2007, (1): 234-239.
[7] T. Okamoto, "An efficient divisible electronic cash scheme," in CRYPTO, pp. 438-451, 1995.
[8] I. Damgard, "A design principle for hash functions," Crypto '89, LNCS 435, pp. 416-427.
[9] R. Overbeck, "A Step Towards QC Blind Signatures," IACR Cryptology ePrint Archive 2009: 102, 2009.
[10] H. Niederreiter, "Knapsack-type cryptosystems and algebraic coding theory," Problems of Control and Information Theory, 1986, 15(2): 159-166.
[11] E. Berlekamp, R. McEliece, and H. van Tilborg, "On the Inherent Intractability of Certain Coding Problems," IEEE Transactions on Information Theory, IT-24(3), 1978.
[12] Y. Li, R. Deng, and X. Wang, "On the equivalence of McEliece's and Niederreiter's public-key cryptosystems," IEEE Transactions on Information Theory, vol. 40, pp. 271-273, 1994.
[13] T. R. N. Rao and K.-H. Nam, "Private-key algebraic-coded cryptosystems," Proc. Crypto '86, pp. 35-48, Aug. 1986.
[14] C. M. Adams and H. Meijer, "Security-related comments regarding McEliece's public-key cryptosystem," Proc. Crypto '87, Aug. 1987.
[15] P. J. Lee and E. F. Brickell, "An Observation on the Security of McEliece's Public-Key Cryptosystem," Lecture Notes in Computer Science, 330: 275-280, 1988.
Junyao Ye is a Ph.D. student in Department of Computer Science
and Engineering, Shanghai JiaoTong University, China. His
research interests include information security and code-based
cryptography.
Fang Ren received his M.S. degree in mathematics from
Northwest University, Xi’an, China, in 2007. He received his Ph.D.
degree in cryptography from Xidian University, Xi’an, China, in
2012. His research interests include Cryptography, Information
Security, Space Information Networks and Internet of Things.
Dong Zheng received his Ph.D. degree in 1999. From 1999 to
2012, he was a professor in Department of Computer Science and
Engineering, Shanghai Jiao Tong University, China. Currently, he
is a Distinguished Professor in National Engineering Laboratory for
Wireless
Security,
Xi’an
University
of
Posts
and
Telecommunications, China. His research interests include
subliminal channel, LFSR, code-based systems and other new
cryptographic technology.
Kefei Chen received his Ph.D. degree from Justus Liebig
University Giessen, Germany, in 1994. From 1996 to 2013, he was
a professor in Department of Computer Science and Engineering,
Shanghai Jiao Tong University, China. Currently, he is a
Distinguished Professor in School of Science, Hangzhou Normal
university, China. His research interests include cryptography and
network security.
Multi-lingual and -modal Applications in the Semantic Web:
the example of Ambient Assisted Living
Dimitra Anastasiou
Media Informatics and Multimedia Systems, Department of Computing Science,
University of Oldenburg, 26121 Oldenburg, Germany
[email protected]
Abstract
Applications of the Semantic Web (SW) are often related only to
written text, neglecting other interaction modalities and a large
portion of multimedia content that is available on the Web today.
Processing and analysis of speech, hand and body gestures, gaze,
and haptics have been the focus of research in human-human
interactions and have started to gain ground in human-computer
interaction in the last years. Web 4.0 or Intelligent Web, which
follows Web 3.0, takes these modalities into account. This paper
examines challenges that we currently face in developing multilingual and -modal applications and focuses on some current and
future Web application domains, particularly on Ambient
Assisted Living.
Keywords: Ambient Assisted Living, Multimodality,
Multilinguality, Ontologies, Semantic Web.
1. Introduction
Ambient Assisted Living (AAL) promotes intelligent
assistant systems for a better, healthier, and safer life in the
preferred living environments through the use of
Information and Communication Technologies (ICT).
AAL systems aim to support elderly users in their
everyday life using mobile, wearable, and pervasive
technologies. However, a general problem of AAL is the
digital divide: many senior citizens and people with
physical and cognitive disabilities are not familiar with
computers and accordingly the Web. In order to meet the
needs of its target group, AAL systems require natural
interaction through multilingual and multimodal
applications. Already in 1991 Krüger [1] said that “natural
interaction” means voice and gesture. Another current
issue to bring AAL systems into the market is
interoperability to integrate heterogeneous components
from different vendors into assistance services. In this
article, we will show that Semantic Web (SW)
technologies can go beyond written text and can be applied
to design intelligent smart devices or objects for AAL, like
a TV or a wardrobe.
More than 10 years ago, Lu et al. [2] provided a review
about web-services, agent-based distributed computing,
semantics-based web search engines, and semantics-based digital libraries. The challenges of the SW at that time were the development of ontologies, formal semantics of SW languages, and trust and proof models. Zhong et al. [3] were in search of the "Wisdom Web" and Web Intelligence where "the next-generation Web will help people achieve better ways of living, working, playing, and learning." The challenges described in [2] have now been sufficiently addressed, whereas the vision presented in [3] has not yet gained ground. d'Aquin et al. [4] presented the long-term goal of developing the SW into a large-scale enabling infrastructure for both data integration and a new generation of intelligent applications with intelligent behavior. They added that some of the requirements of an application with large-scale semantics are to exploit heterogeneous knowledge sources and combine ontologies and resources. In our opinion, multimedia data belong to such heterogeneous sources. The intelligent behavior of next-generation applications can already be found in some new research fields, such as AAL and the Internet of Things.
Many Web applications nowadays offer user interaction in different modalities (haptics, eye gaze, hand, arm and finger gestures, body posture, voice tone); a few examples are presented here. Wachs et al. [5] developed GESTIX, a hand gesture tool for browsing medical images in an operating room. As for gesture recognition, Wachs et al. [6] pointed out that no single method for automatic hand gesture recognition is suitable for every application; each algorithm depends on each user's cultural background, application domain, and environment. For example, an entertainment system does not need the gesture-recognition accuracy required of a surgical system. An application based on eye gaze and head pose in an e-learning environment is developed by Asteriadis et al. [7]. Their system extracts the degree of interest and engagement of students reading documents on a computer screen. Asteriadis et al. [7] stated that eye gaze can also be used as an indicator of selection, e.g. of a particular exhibit in a museum, or a dress at a shop window, and may assist or replace mouse and keyboard interfaces in the presence of severe handicaps.
This survey paper presents related work on multilingual and multimodal applications within the field of the Semantic
Web. We discuss challenges of developing such
applications, such as the Web accessibility by senior
people. This paper is laid out as follows: in Sect. 2 we
present how the multilingual and multimodal Web of Data
is envisioned. Sect. 3 presents the challenges of
developing multi-lingual and -modal applications. In Sect.
4 we look at some current innovative applications,
including Wearable Computing, Internet of Things, and
Pervasive Computing. The domain of AAL and its
connection with the SW and Web 4.0 is presented in detail
along with some scenarios in Sect. 5. Finally, we
summarize the paper in Sect. 6.
2. Multi-linguality and -modality in the
Semantic Web
Most SW applications are based on ontologies; regarding
the multilingual support in ontologies, W3C recommends
in the OWL Web Ontology Language Use Cases and
Requirements [8] that the language should support the use
of multilingual character sets. The impact of the
Multilingual Semantic Web (MSW) is a multilingual “data
network” where users can access information regardless of
the natural language they speak or the natural language the
information was originally published in (Gracia et al. [9]).
Gracia et al. [9] envision the multilingual Web of Data as a
layer of services and resources on top of the existing
Linked Data infrastructure adding multilinguality in:
i) linguistic information for data and vocabularies in
different languages (meaning labels in multiple
languages and morphological information);
ii) mappings between data with labels in different
languages (semantic relationships or translation
between lexical entries);
iii) services to generate, localize, link, and access Linked
Data in different languages.
Other principles, methods, and applications towards the
MSW are presented by Buitelaar and Cimiano [10].
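To make the idea of multilingual labels on Linked Data more concrete, the following minimal sketch (using the rdflib Python library; the resource URI and the labels are invented for illustration and are not taken from [9] or [10]) attaches labels in several languages to one resource and retrieves the label matching a user's language:

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDFS

g = Graph()
wardrobe = URIRef("http://example.org/aal/Wardrobe")   # hypothetical resource

# One concept, labelled in three languages
g.add((wardrobe, RDFS.label, Literal("wardrobe", lang="en")))
g.add((wardrobe, RDFS.label, Literal("Kleiderschrank", lang="de")))
g.add((wardrobe, RDFS.label, Literal("armoire", lang="fr")))

def label_for(resource, language):
    """Return the label in the requested language, if one exists."""
    for label in g.objects(resource, RDFS.label):
        if label.language == language:
            return str(label)
    return None

print(label_for(wardrobe, "de"))   # -> Kleiderschrank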
As far as multimodality is concerned, with the
development of digital photography and social networks, it
has become a standard practice to create and share
multimedia digital content. Lu et al. [5] stated that this
trend for multimedia digital libraries requires
interdisciplinary research in the areas of image processing,
computer vision, information retrieval, and database
management. Traditional content-based multimedia
retrieval techniques often describe images/videos based on
low-level features (such as color, texture, and shape), but
their retrieval is not satisfactory. Here the so-called
Semantic Gap becomes relevant, defined by Smeulders et
al. [11] as a “lack of coincidence between the information
that one can extract from the visual data and the
interpretation that the same data has for a user in a given
situation." Besides, multimodality may refer to multimodal devices, like PC, mobile phone, or PDA. In this paper, though, by "multimodality" we refer to multimodal input/output:
i) Multimodal input by human users (in)to Web applications, including modalities like speech, body gestures, touch, eye gaze, etc.; for processing purposes, this input involves recognition of these modalities;
ii) Multimodal output by Web applications to human users; this involves face tracking, speech synthesis, and gesture generation. Multimodal output can be found in browser-based applications, e.g. gestures performed by virtual animated agents, but it is even more realistic for it to be performed by pervasive applications, such as robots.
2.1 Breaking the digital divide: heterogeneous target group
Apart from the so-called "computer-literate" people, there are people who do not have the skills, the abilities, or the knowledge to use computers and accordingly the Web. The term "computer literacy" came into use in the mid-1970s and usually refers to basic keyboard skills, plus a working knowledge of how computer systems operate and of the general ways in which computers can be used [12]. The senior population was largely bypassed by the first wave of computer technology; however, they find it more and more necessary to be able to use computers (Seals et al. [13]). In addition to people with physical or cognitive disabilities, people with temporary impairments (e.g. having a broken arm) or young children often cannot use computers efficiently. All the above groups profit by the interaction with multimodal systems, where recognition of gesture, voice, eye gaze or a combination of modalities is implemented. For "computer-literate" people, multimodality brings additional advantages, like naturalness, intuitiveness, and user-friendliness. To give some examples, senior people with Parkinson's have difficulties controlling the mouse, so they prefer speech; deaf-mute people are dependent on gesture, specifically sign language. Sign language, as with any natural language, is based on a fully systematic and conventionalized language system. Moreover, the selection of the modality, e.g. speech or gesture, can also be context-dependent. In a domestic environment, when a person has a tray in their hand, (s)he might use speech to open the door. Thus, as the target group of the Web is very heterogeneous, current and future applications should be context-sensitive, personalized, and adaptive to the target group's skills and preferences.
2.2 Multimodal applications in the Semantic Web
Historically, the first multimodal system was the “Put that
there” technique developed by Bolt [14], which allowed
the user to manipulate objects through speech and manual
pointing. Oviatt et al. [15] stated that real multimodal
applications range from map-based and virtual reality
systems for simulation and training over field medic
systems for mobile use in noisy environments, through to
Web-based transactions and standard text-editing
applications. One type of multimodal application is the
multimodal dialogue system. Such systems are applicable both in
desktop and Web applications, but also in pervasive
systems, such as in the car or at home (see AAL scenarios
in 5.2.2). Smartkom [16] is such a system that features
speech input with prosodic analysis, gesture input via
infrared camera, recognition of facial expressions and
emotional states. On the output side, the system features a
gesturing and speaking life-like character together with
displayed generated text and multimedia graphical output.
Smartkom provides full “symmetric multimodality”,
defined by Wahlster [17] as the possibility that all input
modes are also available for output, and vice versa.
Another multimodal dialogue system is VoiceApp
developed by Griol et al. [18]. All applications in this
system can be accessed multimodally using traditional
GUIs and/or by means of voice commands. Thus, the
results are accessible to motor handicapped and visually
impaired users and are easier to access by any user in
small hand-held devices where GUIs are in some cases
difficult to employ.
He et al. [19] developed a dialogue system called Semantic
Restaurant Finder that is both multimodal and
semantically rich. Users can interact through speech,
typing, or mouse clicking and drawing to query restaurant
information. SW services are used, so that restaurant
information for different cities/countries/languages can be constructed, as ontologies allow the information to be shared.
Apart from dialogue systems, many web-based systems
are multimodal. In the assistive domain, a portal that offers
access to products is EASTIN (www.eastin.eu) [20]. It has a multilingual
(users should forward information requests, and receive
results, in their native language) and multimodal (offering
a speech channel) front-end for end-users. Thurmair [20]
tested the usability of the portal and found that most
people preferred to use free text search.
3. Challenges in developing multi-lingual and -modal applications
In this section we discuss some challenges for multi-lingual and -modal applications from a development
perspective. A basic challenge and requirement of the
future Web is to provide Web accessibility to everybody,
bearing in mind the heterogeneous target group. Web
accessibility means to make the content of a website
available to everyone, including the elderly and people
with physical or cognitive disabilities. According to a
United Nations report [21], 97% of websites fail to meet
the most basic requirements for accessibility by using units
of measurement (such as pixels instead of percentages),
which restrict the flexibility of the page layout, the font
size or both. Today worldwide 650 million people have a
disability and approximately 46 million of these are
located in the EU. By 2015 20% of the EU will be over 65
years of age, the number of people aged 60 or over will
double in the next 30 years and the number aged 80 or
over will increase by 10% by 2050. These statistics
highlight the timeliness and importance of the need to
make the Web accessible to more senior or impaired
people. W3C has published a literature review [22] related
to the use of the Web by older people to look for
intersections and differences between the accessibility
guidelines and recommendations for web design and
development issues that will improve accessibility to older
people. W3C has a Web Accessibility Initiative [23],
which has released accessibility guidelines, categorized
into:
i) Web Content: predictable and navigable content;
ii) User Agents: access to all content, user control of how
content is rendered, and standard programming
interfaces, to enable interaction with assistive
technologies;
iii) Authoring Tools: HTML/XML editors, tools that
produce multimedia, and blogs.
Benjamins et al. [24] stated that the major challenges of
SW applications, in general, concern: (i) the availability of
content, (ii) ontology availability, development and
evolution, (iii) scalability, (iv) multilinguality, (v)
visualization to reduce information overload, and (vi)
stability of SW languages. As far as multilinguality is
concerned, they state that any SW approach should
provide facilities to access information in several
languages, allowing the creation and access to SW content
independently of the native language of content providers
and users. Multilinguality plays an important role at
various levels [24]:
i) Ontologies: WordNet, EuroWordnet etc., might be
explored to support multilinguality;
ii) Annotations: proper support is needed that allows
providers to annotate content in their native language;
iii) User interface: internationalization and localization
techniques should make the Web content accessible in
several languages.
As far as the challenges related to multimodal applications
are concerned, He et al. [19] pointed out that the existing
multimodal systems are highly domain-specific and do not
allow information to be shared across different providers.
In relation with the SW, Avrithis et al. [25] stated that
there is a lot of literature on multimodality in the domains
of entertainment, security, teaching or technical
documentation, however the understanding of the
semantics of such data sources is very limited. Regarding
the combination of modalities, Potamianos & Perakakis
[26], among other authors, stated that multimodal
interfaces pose two fundamental challenges: the
combination of multiple input modalities, known as the
fusion problem and the combination of multiple
presentation media, known as the fission problem. Atrey et
al. [27] provided a survey about multimodal fusion for
multimedia analysis. They made several classifications
based on the fusion methodology and the level of fusion
(feature, decision, and hybrid). One other challenge of
multimodal systems is low recognition. Oviatt & Cohen
[28], on comparing GUIs with multimodal systems, stated
that, whereas input to GUIs is atomic and certain, machine
perception of human input, such as speech and gesture, is
uncertain; so any recognition-based system's interpretations are probabilistic. This means that events, such as
object selection, which were formerly basic events in a
GUI (point an object by touching it) are subject to
misinterpretation in multimodal systems. They see that the
challenge for system developers is to create robust new
time-sensitive architectures that support human
communication patterns and performance, including
processing users' parallel input and managing the
uncertainty of recognition-based technologies.
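As a toy illustration of decision-level (late) fusion and of the probabilistic nature of recognition-based input discussed above, the sketch below combines the label probabilities of two hypothetical recognizers (speech and gesture) by weighted averaging; the labels, scores, and weights are invented for illustration, and real systems use far more elaborate fusion strategies:

# Hypothetical posterior probabilities from two recognizers for the same user turn
speech_scores  = {"open_door": 0.55, "close_door": 0.30, "no_command": 0.15}
gesture_scores = {"open_door": 0.70, "close_door": 0.10, "no_command": 0.20}

# Confidence weights per modality (e.g. lower weight for the noisier channel)
weights = {"speech": 0.6, "gesture": 0.4}

def fuse(speech, gesture, w):
    """Decision-level fusion: weighted average of per-label probabilities."""
    labels = set(speech) | set(gesture)
    return {label: w["speech"] * speech.get(label, 0.0)
                   + w["gesture"] * gesture.get(label, 0.0)
            for label in labels}

fused = fuse(speech_scores, gesture_scores, weights)
decision = max(fused, key=fused.get)
print(decision, round(fused[decision], 3))   # -> open_door 0.61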
Apart from the above challenges, an additional challenge
is twofold: i) develop multi-lingual and -modal
applications in parallel and ii) tie them with a language-enhanced SW. Today there are not many applications that
combine multiple modalities as input and/or output and
support many natural languages at the same time. Cross
[29] states that current multimodal applications typically
provide user interaction in only a single language. When a
software architect desires to provide user interaction in
more than one language, they often write a multimodal
application for each language separately and provide a
menu interface to a user that permits the user to select the
language that the user prefers. The drawback is that having
multiple versions of the same multimodal application in
various languages increases complexity, which leads to an
increased error rate and additional costs.
4. Current domain applications
In the last years the usage of the Web has shifted from desktop applications and home offices to smart devices at home, in entertainment, the car, or in the medical domain. Some of the latest computing paradigms are the following:
• Wearable computing is concerned with miniature electronic devices that are worn on the body or woven into clothing and access the Web, resulting in intelligent clothing. A commercial product is the MYO armband by Thalmic Labs (https://www.thalmic.com/myo/), with which users can control presentations, video content, games, browse the Web, create music, edit videos, etc. MYO detects gestures and movements in two ways: 1) muscle activity and 2) motion sensing. The most recent Apple Watch (https://www.apple.com/watch/) is designed around simple gestures, such as zooming and panning, but also senses force (Force Touch). Moreover, a heart rate sensor in the Apple Watch can help improve overall calorie tracking.
• The Internet of Things (IoT) refers to uniquely identifiable objects and their virtual representations in an Internet structure. Atzori et al. [30] stressed that the IoT shall be the result of the convergence of three visions: things-oriented, Internet-oriented, and semantic-oriented visions. Smart semantic middleware, reasoning over data, and semantic execution environments belong to the semantic-oriented visions. A recent survey of the IoT from an industrial perspective is published by Perera et al. [31], who stated that "despite the advances in HCI, most of the IoT solutions have only employed traditional computer screen-based techniques. Only a few IoT solutions really allow voice or object-based direct communications." They also observe a trend in smart home products towards increasing use of touch-based interactions.
• Pervasive context-aware systems: Pervasive/ubiquitous computing means that information processing is integrated into everyday objects and activities.
Henricksen et al. [32] explored the
characteristics of context in pervasive systems: it
exhibits temporary characteristics, has many alternative
representations, and is highly interrelated. Chen et al.
[33] developed the Context Broker Architecture, a
broker agent that maintains a shared model of context
for all computing entities in the space and enforces the
privacy policies defined by the users when sharing
their contextual information. They believe that a
requirement for realizing context-aware systems is the
ability to understand their situational conditions. To
achieve this, it requires contextual information to be
represented in ways that are adequate for machine
processing and reasoning. Chen et al. [33] believe that
SW languages are well suited for this purpose for the
following reasons: i) RDF and OWL have rich
expressive power that are adequate for modeling
various types of contextual information, ii) context
ontologies have explicit representations of semantics;
systems with the ability to reason about context can
detect inconsistent context knowledge (result from
imperfect sensing), iii) SW languages can be used as
meta-languages to define other special purpose
languages, such as communication languages for
knowledge sharing.
• Location-based services and positioning systems:
Positioning systems have a mechanism for determining
the location of an object in space, from sub-millimeter
to meter accuracy. Coronato et al. [34] developed a
service to locate mobile entities (people/devices) at any
time in order to provide sets of services and
information with different modalities of presentation
and interaction.
• Semantic Sensor Web: Sheth et al. [35] proposed the Semantic Sensor Web (SSW), where sensor data are annotated with semantic metadata that increase interoperability and provide contextual information essential for situational knowledge. The SSW is the answer to the lack of integration and communication between networks, which often isolates important data streams [35] (a small sketch of such an annotation is given after this list).
• Ambient Assisted Living (AAL): AAL is a research
domain that promotes intelligent assistant systems for a
better, healthier, and safer life in the preferred living
environments through the use of Information and
Communication Technologies (ICT). More information
on AAL is provided in the next Section.
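The sketch below illustrates, in rdflib, the kind of semantic annotation of a sensor reading that the Semantic Sensor Web idea implies; it uses terms in the style of the W3C SOSA/SSN vocabulary, but the sensor, property, and values are invented for illustration and this is not the modelling of [35]:

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/aal/")      # hypothetical home namespace

g = Graph()
obs = EX["observation/42"]

# Annotate one temperature reading with machine-interpretable context
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX.KitchenTemperatureSensor))
g.add((obs, SOSA.observedProperty, EX.AirTemperature))
g.add((obs, SOSA.hasFeatureOfInterest, EX.Kitchen))
g.add((obs, SOSA.hasSimpleResult, Literal(21.5, datatype=XSD.double)))
g.add((obs, SOSA.resultTime, Literal("2015-06-08T12:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))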
5. Ambient Assisted Living
The aging population phenomenon is the primary
motivation of AAL research. From a commercial
perspective, AAL is rich in terms of technology (from telehealth systems to robotics) but also in terms of
stakeholders (from service providers to policy makers,
including core technology or platform developers)
(Jacquet et al. [36]). The program AAL JP [37] is a
funding initiative that aims to create a better quality of life
for older adults and to strengthen the industrial
opportunities in Europe through the use of ICT. In the next
sections we will discuss the connection between AAL and
the SW and Web 4.0, the reason why ontologies play a
role in AAL, and the way AAL is realized along with two
scenarios.
5.1 Ambient Assisted Living and Web 3.0 – Web 4.0
According to Eichelberg & Lipprandt [38], the typical
features of AAL systems, as standard interactive systems, are adaptivity, individualization, self-configuration and learning aptitude. These features have traditionally been achieved with methods developed within the field of Artificial Intelligence (AI). However, they believe that the Internet has been the driving force for the further development of those methods and mentioned the problems of Web services: i) integration and evaluation of data sources, like ambient technical devices; ii) cooperation between services of different kinds, such as device services; and iii) interoperability of the above mentioned services. These problems are very similar to the interoperability issues arising between AAL system components; hence Eichelberg & Lipprandt [38] state that the success of AAL systems is tightly coupled with the progress of Semantic Web technologies. The goal of using SW technologies in AAL is to create interoperability between heterogeneous devices (products from different vendors) and/or IT services to promote cooperation between AAL systems and the emergence of innovative business models [38].
Web 4.0, the so-called Intelligent/Symbiotic Web, follows Web 2.0 (the Social Web) and Web 3.0 (the Semantic Web) and is about knowledge-based linking of services and devices. It is about intelligent objects and environments, intelligent services, and intelligent products. Thus there is a tight connection between Web 4.0 and AAL, since AAL is realized in smart and intelligent environments. In such environments, intelligent things and services are available, such as sensors that monitor the well-being of users and transfer the data to caregivers, robots that drive users to their preferred destination, TVs that can be controlled through gestures, etc.
Aghaei et al. [39] point out that Web 4.0 will be about a linked Web that communicates with humans in a similar manner to how humans communicate with each other, for example, taking the role of a personal assistant. They believe that it will be possible to build more powerful interfaces, such as mind-controlled interfaces. Murugesan [40] stated that Web 4.0 will harness the power of human and machine intelligence on a ubiquitous Web in which both people and computers not only interact but also reason and assist each other in smart ways.
Moreover, Web 4.0 is characterized by the so-called ambient findability. Google allows users to search the Web and their desktops, and this concept can also be extended to the physical world. Some examples are to tag physical objects, such as a wallet or documents, but even people or animals, with the mobile phone. Users can use Google to see what objects have been tagged, and Google can also locate the objects for the user. In this concept, RFID-like technology, GPS and mobile phone tricorders are needed. Also here the connection between findability and AAL is present, as smart objects with RFID are an important component of AAL and the IoT (see scenarios in 5.2.2).
To sum up, Web 4.0 can support AAL by linking
intelligent things and services through Web technology
keyed to sensors, like RFID and GPS. However, according
to Eichelberg & Lipprandt [38], until the integration of
AAL systems into the era of Web 4.0, there is still
significant progress needed concerning the semantic
technologies. For instance, development of tools for
collaborative development of formal semantic knowledge
representations; integration of domain experts and
standardization; ontology matching, reasoning and
evaluation as well as semantic sensor networks and "Semantic Enterprise" methods for the migration of IT processes in linked systems. Information on how ontologies are related to AAL is given in 5.2.1. An
example project which combines sensor networks is SHIP
(Semantic Heterogeneous Integration of Processes) [41]. It
combines separate devices, components and sensors to
yield one coherent, intelligent and complete system. The
key concept of SHIP is a semantic model, which brings
together the data of the physical environment and the
separate devices to be integrated. One application domain
of SHIP is the Bremen Ambient Assisted Living Lab (www.baall.org), where heterogeneous services and devices are combined in integrated assistants.
5.2 Realization and evaluation of AAL
AAL is primarily realized in domestic environments, i.e.
the houses of senior people. Homes equipped with AAL
technology are called smart homes. Moreover, AAL
systems can be applied in hospitals and nursing homes.
Generally speaking, the objective of AAL systems is to
increase the quality of life of the elderly, maintain their
well-being and independence. However, achieving these
outcomes requires the involvement of third parties (e.g.
caregivers, family) through remote AAL services. Nehmer
et al. [42] distinguished three types of remote AAL
services: emergency treatment, autonomy enhancement,
and comfort services.
The projects funded by the AAL JP programme cover
solutions for prevention and management of chronic
conditions of the elderly, advancement of social
interaction, participation in the self-serve society, and
advancement of mobility and (self-) management of daily
life activities of the elderly at home. Thus AAL is
multifaceted with specific sub-objectives depending on the
kind of application to be developed.
Regarding the involvement of end users in AAL, the
project A2E2 [43] involves users in several phases of the
project, including focus groups, pilots, and an
effectiveness study. Three groups are used: elderly clients,
care professionals, and care researchers. Users are
interviewed to find out which requirements they have on
the particular interface to be developed. In the project CARE [44], which develops a fall detector, more than 200 end users in Austria, Finland, Germany and Hungary were questioned regarding the need for a fall detector; they answered that the current fall detectors (wearable systems) are not satisfactory and do not have high acceptance in the independent living context. Thus, generally speaking, end users are involved in current AAL-related projects either through answering questionnaires or participating in user studies. Their involvement includes analysis of technical achievements/requirements of the developed product, acceptance and usability of the prototypes, and also often ergonomic, cognitive, and psychological aspects.
As for the adoption of AAL systems by end users, this depends on various aspects, such as an application's obtrusiveness and the willingness of users. In many AAL systems, bio-sensors (activity, blood pressure and weight sensors) are employed to monitor the health conditions of the users. Sensors/cameras are placed at home, so that the seniors' activities are monitored and shared between informal carers, families and friends. The assisted have to decide whether their well-being should be monitored in order to avoid undesired situations, but also to keep the technology as unobtrusive as possible, so that they preserve dignity and maintain privacy and confidentiality. Weber [45] stated that an adequate legal framework must take the technology of the IoT into account and would be established by an international legislator, supplemented by the private sector according to specific needs.
Sun et al. [46] referred to some other challenges of AAL systems: i) dynamics of service availability and ii) service mapping. The Service Oriented Architecture, which supports the connection of various services, tackles the dynamicity problem. For service mapping, ontology libraries are required to precisely describe the services. There should be a so-called "mutual assistance community" where a smart home is managed by a local coordinator to build up a safe environment around the assisted people, and the elderly should find themselves with a more active living attitude [46].
5.2.1 Ontologies and AAL
AAL applications are trans-disciplinary, because they mix
automatic control with modeling of user behavior. Thus,
the ability to reuse knowledge and integrate several
knowledge domains is particularly important [36].
Furthermore, AAL is a very open and changing field, so
extensibility is key. In addition, an AAL environment
requires a standard way of exchanging knowledge between
software and hardware devices. Therefore [36] believe that
ontologies are well adapted to these needs:
i) are extensible to take into account new applications;
ii) provide a standard infrastructure for sharing knowledge;
iii) semantic relationships, such as equivalence, may be
expressed between various knowledge sources, thus
permitting easy integration.
Jacquet et al. [36] presented a framework in which
ontologies enable the expression of users' preferences in
order to personalize the system behavior: it stores
preferences and contains application-specific modules.
Another ontology-centered design is used in the
SOPRANO Ambient Middleware (SAM) [47]. SAM
receives user commands and sensor inputs, enriches them
semantically and triggers appropriate reactions via
actuators in a smart home. The ontology is used as a
blueprint for the internal data models of the components,
for communication between components, and for
communication between the technical system and the
typically non-technical user.
In AAL there is often a problem of disambiguation
between complex situations and simple sensors events. For
example, if the person does not react to a doorbell ring, it
may indicate that they have a serious problem, or
alternatively it may indicate that they are unavailable, e.g.
taking a bath [48]. Therefore, Muñoz et al. [48] proposed
an AAL system based on a multi-agent architecture
responsible for analyzing the data produced by different
types of sensors and inferring what contexts can be
associated to the monitored person. SW ontologies are
adopted to model sensor events and the person's context.
The agents use rules defined on such ontologies to infer
information about the current context. In the case that
agents discover inconsistent contexts, argumentation
techniques are used to disambiguate the situation by
comparing the arguments that each agent creates. In their
ontology, concepts represent rooms, home elements, and
sensors along with relationships among them.
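A minimal sketch of this idea, assuming a hypothetical home namespace and deliberately simplified events (it is not the ontology or rule set of [48]): sensor events are stored as RDF triples and a SPARQL query plays the role of a rule that flags a possibly problematic context.

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/aal/")   # hypothetical vocabulary
g = Graph()

# Two simplified sensor events for the monitored person
g.add((EX.event1, RDF.type, EX.DoorbellRing))
g.add((EX.event1, EX.answered, Literal(False)))
g.add((EX.event2, RDF.type, EX.BathroomOccupancy))

# "Rule": an unanswered doorbell is only alarming if no other
# activity (e.g. taking a bath) explains the missing reaction.
query = """
PREFIX ex: <http://example.org/aal/>
ASK {
  ?ring a ex:DoorbellRing ; ex:answered false .
  FILTER NOT EXISTS { ?activity a ex:BathroomOccupancy }
}
"""
result = g.query(query)
print("raise alarm:", result.askAnswer)   # -> False, the bathroom sensor explains the silence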
Furthermore, Hois [49] designed different modularized
spatial ontologies applied to an AAL application. This
application has different types of information to define: (1)
architectural building elements (walls), (2) functional
information of room types (kitchen) and assistive devices
(temperature sensors), (3) types of user actions (cooking),
(4) types of furniture or devices inside the apartment and
their conditions (whether the stove is in use), and (5)
requirements and constraints of the AAL system
(temperature regulations). Hois [49] designed different,
but related, ontologies to manage this heterogeneous
information. Their interactions determine the system's
characteristics and the way it identifies potential abnormal
situations implemented as ontological query answering in
order to monitor the situation in concrete contexts.
Last but not least, in the project OASIS, one of the
challenges was to achieve interoperability spanning
complex services in the areas of Independent Living,
Autonomous Mobility and Homes and Workplaces,
including AAL. Due to the diversity of types of services,
Bateman et al. [50] suggested the support of cross-domain networked ontologies. They developed the OASIS Common Ontological Framework, a knowledge representation paradigm that provides: (i) methodological principles for developing interoperable ontologies, (ii) a 'hyper-ontology' that facilitates formal semantic interoperability across ontologies and (iii) an appropriate software infrastructure for supporting heterogeneous ontologies.
5.2.2 AAL Scenarios
Two AAL scenarios will now be presented that
demonstrate how multi-lingual and -modal applications,
coupled with SW and Web 4.0, can improve the quality of
life of senior citizens:
• Scenario 1: John is 70 years old and lives alone in a
smart home equipped with intelligent, height-adaptable
devices. He just woke up and wants to put on his clothes.
His wardrobe suggests to him to wear brown trousers
and a blue pullover. Then he goes to the supermarket for
his daily shopping. He comes back and puts his
purchased products into the fridge. The fridge registers
the products. Then he wants to take a rest and watch TV.
He lies on the bed; the bed is set to his favourite position
with the headrest and footrest being set slightly higher.
While he was at the supermarket, his daughter called
him. The TV informs him about this missed call. Then
he wants to cook his favourite meal; he goes to the
kitchen and the kitchen reminds him about the recipe
going through all the steps. The next day, when he goes
again to the supermarket, his mobile reminds him that he
has to buy milk.
• Scenario 2: Svetlana is from Ukraine and lives together with Maria, 85 years old, from England, at Maria's smart home. Svetlana is a caregiver, i.e. she cooks, cleans,
helps in shopping, etc. Svetlana does not speak English
very well; thus she speaks in Ukrainian to Maria, but
also to electronic devices (TV, oven, etc.) and Maria
hears it back in English. Alternatively to the voice
commands, they can control the devices through a GUI
or through haptics on the devices where this is available.
The above scenarios have a lot of hardware and software
requirements, some of which are currently under
development in the project SyncReal (http://www.syncreal.de) at the University of
Bremen; we will study these scenarios in more detail in the
subsequent paragraphs.
• Intelligent devices: these are the wardrobe, fridge,
cupboards, bed, and TV. The clothes in the wardrobe are
marked with an RFID tag (IoT – Web 4.0) and the
wardrobe can suggest to the user what to wear through
these RFIDs and motion sensors (Beins [51]). This is
useful, among other benefits, for people with memory
deficit or visual impairments. It can also remind people
to wash clothes if there are not many clothes left in the
wardrobe. Similarly, the fridge and the cupboards
register all products that are placed in and taken out by
storing them in a database. This information is then
transferred to other devices, such as mobile phones, so
that John could see the next day that there is no more
milk in the fridge (Voigt [52]). The bed is set
automatically to a specific height position every time
that he wants to watch TV (context-sensitive).
• Semantic integration of ambient assistance: John
could see the missed call of his daughter on the TV
owing to formal semantic modeling and open standards;
the semantic interoperability allows the integration of the
telephone with the TV.
• Speech-to-speech dialogue system: the language barrier
between Svetlana and Maria is not a problem due to the
speech-to-speech technology that is implemented in the
home system. It includes three technologies: i) speech
recognition, ii) Machine Translation, iii) speech
synthesis; advantages and drawbacks of all three technologies have to be balanced. The dialogue system is also
multimodal, giving the possibility to interact either through a GUI, voice commands, or haptics. It can be
applied not only in electronic appliances, but also in
robots. Information about speech-to-speech translation in
AAL can be found in Anastasiou [53].
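To make the shape of such a speech-to-speech pipeline explicit, the following sketch chains the three technologies mentioned above; the functions recognize_speech, translate, and synthesize_speech are placeholders for whichever recognizer, machine translation engine, and synthesizer a concrete home system would integrate (no real engine is assumed here):

def recognize_speech(audio, language):
    """Placeholder: return the recognized text for an utterance."""
    raise NotImplementedError("plug in a speech recognizer here")

def translate(text, source, target):
    """Placeholder: return the machine-translated text."""
    raise NotImplementedError("plug in a machine translation engine here")

def synthesize_speech(text, language):
    """Placeholder: return synthesized audio for the text."""
    raise NotImplementedError("plug in a speech synthesizer here")

def speech_to_speech(audio, source_lang="uk", target_lang="en"):
    """Chain recognition -> translation -> synthesis, keeping the
    intermediate text so it can also be shown on a GUI."""
    text = recognize_speech(audio, source_lang)
    translated = translate(text, source_lang, target_lang)
    return synthesize_speech(translated, target_lang), translated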
6. Summary and Conclusion
In this paper we focused on multimodal applications of the SW and presented some challenges involved in the development of multi-lingual and -modal applications. We provided some examples of current and future application domains, focusing on AAL. As there are large individual differences in people's abilities and preferences to use different interaction modes, multi-lingual and -modal interfaces will increase the accessibility of information through ICT technology for users of different ages, skill levels, cognitive styles, sensory and motor impairments, or native languages. ICT and SW applications gain ground rapidly today in everyday life and are available to a broader range of everyday users and usage contexts. Thus the needs and preferences of many users should be taken into account in the development of future applications. High customization and personalization of applications is needed, both because the limitations of challenged people can vary significantly and change constantly, and in order to minimize the learning effort and cognitive load.
AAL can efficiently combine multimodality and SW applications in the future to increase the quality of life of the elderly and challenged people. The AAL market is changing and is expected to boom in the next few years as a result of demographic developments and R&D investment by industries and stakeholders. Currently the ICT for AAL is very expensive; projects test AAL prototypes in living labs that can be applied in domestic environments in the future. The technology is still often obtrusive (motion sensors), although researchers are working towards a goal of invisible technology. In addition, often the data is "noisy", as it is based on fuzzy techniques, probabilistic systems, or Markov-based models. Generally speaking, in regards to the future of intelligent ambient technologies, not only intelligent devices (Web 4.0), but also semantic interoperability between devices and IT services (Web 3.0) are necessary. In our opinion, as emphasized by the term "semantic", the SW should be context-sensitive, situation-adaptive, negotiating, clarifying, meaningful, and action-triggering. All these aspects are important both for SW-based dialogue systems and multimodal interfaces including various input and output modalities. We share the opinion of O'Grady et al. [54], in their vision of evolutionary AAL systems, about the necessity for an adaptive (robust and adapting in real time), open (not proprietary AAL systems), scalable (integration of additional hardware sensors), and intuitive (support for many interaction modalities) software platform that incorporates autonomic and intelligent techniques.
References
[1] Krueger, M.W., Artificial Reality, Second Ed. Addison,
Wesley, Redwood City CA, 1991.
[2] Lu, S., Dong, M., Fotouhi, F., “The Semantic Web:
opportunities and challenges for next-generation Web
applications”, Information Research, 2002, 7 (4).
[3] Zhong, N., Liu, J., Yao, Y., In search of the wisdom web.
Computer, 2002, 37 (11), pp. 27-31.
[4] D‟Aquin, M.D., Motta, E., Sabou, M., Angeletou, S.,
Gridinoc, L., Lopez, V., Guidi, D., “Toward a New
Generation of Semantic Web Applications”, IEEE Intelligent
Systems, 2008, pp. 20-28.
[5] Wachs, J., Stern, H., Edan, Y., Gillam, M., Feied, C., Smith,
M., and Handler, J., “A hand-gesture sterile tool for browsing
MRI images in the OR”, Journal of the American Medical
Informatics Association, 2008, 15, pp. 3321-3323.
[6] Wachs, J.P., Kölsch, M., Stern, H., Edan, Y., “Vision-based
hand-gesture applications”, Communications of the ACM,
2011, 54 (2), pp. 60-71.
[7] Asteriadis, S., Tzouveli, P., Karpouzis, K. Kollias, S.,
“Estimation of behavioral user state based on eye gaze and
head pose-application in an e-learning environment.”,
Multimed Tools Appl, 2009, 41, pp. 469-493.
[8] OWL Web Ontology Language Use Cases and Requirements: http://www.w3.org/TR/webont-req/, June 2015
[9] Gracia, J., Montiel-Ponsoda, E., Cimiano, P. et al. "Challenges for the multilingual Web of Data", Journal of Web Semantics, 2012, 11, pp. 63-71.
[26] Potamianos, A., Perakakis, “Human-computer interfaces to
multimedia content: a review”, in Maragos, P., Potamianos,
A. Gros, P. (Eds), Multimodal Processing and Interaction:
Audio, Video, Text, 2008, pp. 49-90.
[27] Atrey, P.K., Hossain, M.A, El Saddik, A., Kankanhalli,
M.S. “Multimodal fusion for multimedia analysis: a
survey” Multimedia Systems, 2010, 16, pp. 345-379.
[28] Oviatt, S., Cohen, P. “Multimodal interfaces that process
what comes naturally”, Communications of the ACM,
2000, 43 (3), pp. 45-52.
[29] Cross, C.W., Supporting multi-lingual user interaction with
a multimodal application, Patent Application Publication,
United States, Pub. No: US/2008/0235027, 2008.
[30] Atzori, L., Iera, A., Morabito, G. “The Internet of Things: A
survey”, in Computer Networks, 2010, 54, pp. 2787-2805.
[31] Perera, C., Liu, C.H., Jayawardena, S., Chen, M., “A Survey
on Internet of Things From Industrial Market Perspective”,
in IEEE Access, 2015, 2, pp. 1660-1679.
[32] Henricksen, K., Indulska, A., Rakotonirainy, A. “Modeling
context information in pervasive computing systems”, in
Proceedings of the 1st International Conference on
Pervasive Computing, 2002, pp. 167-180.
[33] Chen, H., Finin, T., Joshi, A. “Semantic Web in the Context
Broker Architecture”, in Proceedings of the Second IEEE
Annual Conference on Pervasive Computing and
Communications, 2010, pp. 277-286.
[34] Coronato, A., Esposito, M., De Pietro, G. “A multimodal
semantic location service for intelligent environments: an
application for Smart Hospitals”, in Personal and
Ubiquitous Computing, 2009, 13 (7), pp. 527-538.
[35] Sheth, A., Henson, C., Sahoo, S. “Semantic Sensor Web”
IEEE Internet Computing, 2008, pp. 78-83.
[36] Jacquet, C., Mohamed, A., Mateos, M. et al. “An Ambient
Assisted Living Framework Supporting Personalization
Based on Ontologies. Proceedings of the 2nd International
Conference on Ambient Computing, Applications, Services
and Technologies, 2012, pp. 12-18.
[37] Ambient Assisted Living Joint Programme (AAL JP):
http://www.aal-europe.eu/. Accessed 5 May 2015
[38] Eichelberg, M., Lipprandt, M. (Eds.), Leitfaden interoperable Assistenzsysteme - vom Szenario zur
Anforderung. Teil 2 der Publikationsreihe “Interoperabilität
von AAL-Systemkomponenten”. VDE-Verlag, 2013.
[39] Aghaei, S., Nematbakhsh, M.A., Farsani, H.K. “Evolution
of the World Wide Web: From Web 1.0 to Web 4.0.”, in
International Journal of Web & Semantic Technology,
2010, 3 (1), pp. 1-10.
[40] Murugesan, S. “Web X.0: A Road Map. Handbook of
Research on Web 2.0, 3.0, and X.0: Technologies,
Business, and Social Applications”, in Information Science
Reference, 2010, pp. 1-11.
[41] Autexier, S., Hutter, D., Stahl, C. “An Implementation,
Execution and Simulation Platform for Processes in
Heterogeneous Smart Environments”, in Proceedings of the
4th International Joint Conference on Ambient Intelligence,
2013.
[42] Nehmer, J., Becker, M., Karshmer, A., Lamm, R. “Living
assistance systems: an ambient intelligence approach”, in
Proceedings of the 28th International Conference on
Software Engineering, ACM, New York, NY, USA, 2006,
pp. 43-50.
[10] Buitelaar, P., Cimiano, P. (Eds.), Towards the Multilingual
Semantic Web, Principles, Methods and Applications,
Springer, 2014.
[11] Smeulders, A., Worring, M., Gupta, A., Jain, R., "Content-Based Image Retrieval at the End of the Early Years", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22 (12), pp. 1349-1380.
[12] Computerized Manufacturing Automation: Employment, Education, and the Workplace (Washington, D.C.: U.S. Congress, Office of Technology Assessment, OTA-CIT-235), 1984.
[13] Seals, C.D., Clanton, K., Agarwal, R., Doswell, F., Thomas,
C.M.: Lifelong Learning: Becoming Computer Savvy at a
Later Age. Educational Gerontology, 2008, 34 (12), pp.
1055-1069.
[14] Bolt, R.A.: Put-that-there: Voice and gesture at the graphics
interface. ACM Computer Graphics, 1980, 14 (3), pp. 262-270.
[15] Oviatt, S.L., Cohen, P.R., Wu, L. et al. “Designing the user
interface for multimodal speech and gesture applications:
State-of-the-art systems and research directions for 2000
and beyond”, in Carroll, J. (Ed.), Human-Computer
Interaction in the New Millennium, 2000, 15 (4), pp. 263-322.
[16] Wahlster, W., Reithinger, N., Blocher, A. “SmartKom:
Multimodal communication with a life-like character”, in
Proceedings of the 7th European Conference on Speech
Communication and Technology, 2001, pp. 1547-1550.
[17] Wahlster, W. “Towards Symmetric Multimodality: Fusion
and Fission of Speech, Gesture and Facial Expression”, in
Günter, A., Kruse, R., Neumann, B. (Eds.): KI 2003: Advances in Artificial Intelligence. Proceedings of the 26th German Conference on Artificial Intelligence, 2003, pp. 1-18.
[18] Griol, D., Molina, J.M., Corrales, V., “The VoiceApp
System: Speech Technologies to Access the Semantic
Web”, in Advances in Artificial Intelligence, 2011, pp.
393-402.
[19] He, Y.; Quan, T., Hui, S.C. “A multimodal restaurant finder
for semantic web”, in Proceedings of the 4th International
Conference on Computing and Telecommunication
Technologies, 2007.
[20] Thurmair, G. Searching with ontologies – searching in
ontologies: Multilingual search in the Assistive Technology
domain. Towards the Multilingual Semantic Web, 2013.
[21] United Nations Open Audit of Web Accessibility:
http://www.un.org/esa/socdev/enable/documents/fnomensa
rep.pdf
[22] Web Accessibility for Older Users: A Literature Review:
http://www.w3.org/TR/wai-age-literature/. Accessed 25
Aug 2013
[23] W3C Web Accessibility Initiative (WAI):
http://www.w3.org/WAI/
[24] Benjamins, V.R., Contreras, J., Corcho, O., Gómez-Pérez,
A. “Six Challenges for the Semantic Web”, in KR2002
Semantic Web Workshop, 2002.
[25] Avrithis, Y., O'Connor, N.E., Staab, S., Troncy, R.
“Introduction to the special issue on “semantic
multimedia”, in Multimedia Tools Applic, 2008, 39, pp.
143-147.
[43] A2E2 project: http://www.a2e2.eu/. Accessed 11 June 2015
[44] CARE project: http://care-aal.eu/en. Accessed 11 June
2015
[45] Weber, R.H., “Internet of Things – New security and
privacy challenges”, Computer Law & Security Review,
2010, 26 (1), pp. 23-30.
[46] Sun, H., De Florio, V., Gui, N., Blondia, C. “Promises and
Challenges of Ambient Assisted Living Systems”, in
Proceedings of the 6th International Conference on
Information Technology: New Generations, 2009, pp.
1201-1207.
[47] Klein, M., Schmidt, A., Lauer, R. “Ontology-centred design
of an ambient middleware for assisted living: The case of
SOPRANO”, in the 30th Annual German Conference on
Artificial Intelligence, 2007.
[48] Muñoz, A., Augusto, J.C., Villa, A., Botia, J.A. “Design and
evaluation of an ambient assisted living system based on an
argumentative multi-agent system, Pers Ubiquit Comput
15: 377-387, (2011)
[49] Hois, J. “Modularizing Spatial Ontologies for Assisted
Living Systems”, in Proceedings of the 4th International
Conference on Knowledge Science, Engineering and
Management, 2010, 6291, pp. 424-435.
[50] Bateman, J., Castro, A., Normann, I., Pera, O., Garcia, L.,
Villaveces, J.M. OASIS common hyper-ontological
framework, OASIS Project, Tech. Rep., 2009.
[51] Beins, S.: Konzeption der Verwaltung eines intelligenten
Kleiderschranks, Bachelor Thesis, Fachbereich 3:
Mathematik und Informatik, University of Bremen, 2013.
[52] Voigt, M. Entwicklung einer mittels Barcode-Lesegerätes automatisierten Einkaufsliste, Bachelor Thesis, Fachbereich 3: Mathematik und Informatik, University of Bremen, 2013.
[53] Anastasiou, D. “Speech-to-Speech Translation in Assisted
Living. Proceedings of the 1st Workshop on Robotics in
Assistive Environments”, in the 4th International
Conference on Pervasive technologies for Assistive
Environments, 2011.
[54] O'Grady, M.J., Muldoon, C., Dragone, M., Tynan, R.,
O'Hare, G.M.P. “Towards evolutionary ambient assisted
living systems”, Journal of Ambient Intelligence and
Humanized Computing, 2010, 1 (1), pp. 15-29.
Dr. Dimitra Anastasiou finished her PhD in 2010 within five
years on the topic of “Machine Translation“ at Saarland
University, Germany. Then she worked for two years as a postdoc in the project “Centre for Next Generation Localisation” at
the University of Limerick, Ireland. There she designed
guidelines for localisation and internationalisation as well as file
formats for metadata, led the CNGL metadata group, and
was a member of the XML Interchange File Format (XLIFF)
Technical Committee. In the next two years she continued with
the project “SFB/TR8 Spatial Cognition” at the University of
Bremen, Germany. Her research focused on multimodal and
multilingual assistive environments and improvement of dialogue
systems in relation to assisted living environments. She ran
user studies in the “Bremen Ambient Assisted Living Lab
(BAALL)” with participants interacting with intelligent devices and
a wheelchair/robot and did a comparative analysis of cross-lingual spatial spoken and gesture commands. Currently she is
working at the University of Oldenburg, Germany in the DFG
project SOCIAL, which aims at facilitating spontaneous and informal communication in spatially distributed groups by exploiting smart environments and ambient intelligence. In 2015 she was awarded a Marie Curie Individual Fellowship grant on the topic of Tangible User Interfaces. In the last years, she has supervised numerous BA, MA and PhD students. In total she has published a book (PhD version), 17 journal/magazine papers, 32 papers in conference and workshop proceedings, and she is editor of 6 workshop proceedings. In addition, she is a member of 13 programme committees for conferences and journals (such as the Journal of Information Science and the Computer Standards and Interfaces journal). She has 6 years of teaching experience, mainly in the field of Computational Linguistics.
An Empirical Method to Derive Principles, Categories, and
Evaluation Criteria of Differentiated Services in an Enterprise
Vikas S Shah
Wipro Technologies, Connected Enterprise Services
East Brunswick, NJ 08816, USA
[email protected]
BPs' associations to their activities and reorganize based on either changes to existing BP requirements or new BP requirements [5] and [19]. It allows accommodating the desired level of alterations and the respective associations in the BPs across the enterprise by means of combining the capabilities of more granular services or nested operations.
Abstract
Enterprises are leveraging the flexibilities as well as
consistencies offered by the traditional service oriented
architecture (SOA). The primarily reason to imply SOA is its
ability to standardize way for formulating separation of concerns
and combining them to meet the requirements of business
processes (BPs). Many accredited research efforts have proven
the advantages to separate the concerns in the aspects of one or
more functional architectures such as application, data, platform,
and infrastructure. However, there is not much attention to
streamline the approach when differentiating composite services
derived utilizing granular services identified for functional
architectures. The purpose of this effort is to provide an
empirical method to rationalize differentiated services (DSs) in
an enterprise. The preliminary contribution is to provide abstract
principles and categories of DS compositions. Furthermore, the
paper represents an approach to evaluate velocity of an enterprise
and corresponding index formulation to continuously monitor the
maintainability of DSs.
Keywords: Business Process (BP) Activities, Differentiated Services (DSs), Enterprise Entities, Maintainability, Requirements, and Velocity of an Enterprise.
1. Introduction
Traditionally, services of SOA are composed to associate enterprise entities and corresponding operations to business process (BP) activities. The concept of DSs is fairly novel; it introduces the level of variation necessary to accommodate all the potential scenarios that are required to be included within diversified business processes [4] and [7]. DSs are services with similar functional characteristics, but with additional capabilities, different service quality, different interaction paths, or different outcomes [5]. DSs provide the ability to capture precise interconnectivity and subsequently the integration between BPs and the operations of enterprise entities [12].
Typically, BP association with enterprise entities begins with assessments of the goals and objectives of the events required to accomplish the BP requirements. After modeling, BPs are implemented and consequently deployed to the platform of choice in an enterprise. The DSs have the built-in ability to considerably amend the BPs' associations to their activities and to reorganize based on either changes to existing BP requirements or new BP requirements [5] and [19]. This allows accommodating the desired level of alteration and the respective associations in the BPs across the enterprise by combining the capabilities of more granular services or nested operations.
DSs deliver the framework to place and update BPs as well as other important capabilities for monitoring and managing an enterprise. They offer enterprises accelerated time-to-market, increased productivity and quality, reduced risk and project costs, and improved visibility. Enterprises often underestimate the amount of change required to adopt the concept of DSs. [15], [16], and [17] indicate that DSs are usually architected, updated, and built based on ongoing changes in the enterprise. For example, a newly introduced digital electric meter product will be added to the product database, and the service to "capture the meter data remotely" gets updated explicitly and in composition with the data service to formalize the capabilities of the new product. Primary concerns, such as the update to the data service and the behavior of the digital electric meter during an outage, are not addressed or are only realized during later stages, when the specific event occurs pertaining to the specific BP.
Consequently, the entire purpose of DSs and their association with the enterprise entities can be misled. It indicates that feasibility analysis and navigation of the complex cross-functional changes of BPs associated with the enterprise entities are essential before updating DSs.
The analysis presented in this paper identifies core
characteristics of DSs and their association to the modeled
BPs of an enterprise. The paper presents an approach to
rationalize the relationship between the DSs and the
desired variability in BP activities. The goal is to
streamline and evaluate association between the BP
requirements and baseline criteria to incorporate them into
DSs. It sets the principles, categories, and evaluation
criteria for DSs to retain the contexts and characteristics of
DSs in an enterprise during various levels of updates.
In Section 2, the primary concerns of DSs and a corresponding review of past research efforts are presented. Section 3 provides a methodology to institute DSs in an enterprise and derives preliminary principles. Identified meta-level categories of DSs are enumerated in Section 4; the classification of DSs is based on the characteristics as well as the anticipated behavior of the DSs. Section 5 presents the evaluation method for the velocity of change in an enterprise considering 7 different BPs. Section 6 proposes and derives practical criteria to indicate the maintainability of DSs depending on their classification. Section 7 presents conclusions and future work.
2. Literature Reviews and Primary Concerns
of Introducing DSs
BPs assist businesses in making decisions in order to manage the enterprise. Using a combination of BP activities, associated metrics, and benchmarks, organizations can identify the enterprise entities that are most in need of improvement. There has been increasing adoption of BPs to derive granular-level principles for an enterprise in recent years [2], [18], [22] and [29]. The Open Group Architecture Framework (TOGAF) [31] reserves business architecture as one of the initial phases to define BPs. The Supply Chain Council's Supply Chain Operations Reference-model (SCOR), the Tele-Management Forum's Enhanced Telecom Operations Map (eTOM), and the Value Chain Group's Value Reference Model (VRM) framework are prominent examples of specifying BPs.
The result of the decision process is a set of principles and key value propositions that provide differentiation and competitive advantages. Various attempts have been made either for specific use cases [34] or for abstract standardization [32] and [25]. Rationalized principles have a much longer life span. These principles directly or indirectly reflect and address the uncertainties of an enterprise. The principles should consider all the levels as well as categories of uncertainties identified or evaluated during the BP activities. In [14], three types of uncertainties are illustrated with examples.
However, widely accepted enterprise architecture (EA) and other frameworks [27] and [2] do not address the complexities of implementing the desired variability in BPs and the corresponding BP activities. They are highly deficient in specifying the synergies of DSs to BPs in an enterprise. BP management suite providers also offer either inherent SOA and EA capabilities or third-party integration adapters [8]. As specified in [3], [6], and [11], this is primarily to eliminate the friction between BPM, anticipated variations in services, and enterprise architecture modeling. The most prevalent examples are the Oracle SOA suite [24], Red Hat JBoss BPM and Fuse products, the OpenText BPM suite, the IBM BPM suite [21], and Tibco software, as indicated in [8]. BP management suites are still struggling to achieve their full enterprise potential with best practices to implement and update DSs.
 State uncertainty relates to the unpredictability of whether or when a certain change may occur. An example of state uncertainty is the initiation of the outage process (by the utility corporation providing the outage-to-restoration services).
 Effect uncertainty relates to the inability to predict the nature of the impact of a change. During an outage due to unforeseen weather conditions, it is absolutely unpredictable to know the locations or areas of impact.
 Response uncertainty is defined as a lack of knowledge of response options and/or an inability to predict the consequences of a response choice. Generally, a utility provider has guidelines for restoration during outages; however, the response is unpredictable in situations that have never been faced before, such as undermined breaks in the circuits.
The BP requirements are usually grouped to formulate the future state of an enterprise. These requirements drive the vision and guide the decisions to introduce DSs. Various research efforts [20], [23], and [33] indicate that the decisions are based, in some way or other, on the following criteria.
 Existing product or service offerings and their enhancements, support, and maintenance. For example, DSs associated with the online payment BP have to consider the product subscribed to or in use by the customer.
 New products or services that will enhance revenue or gain new market share in the current or near-term timeframe. The most prominent DS example is replacing the electric meter with a smart meter for a specific set of customers.
 Innovation related to future trends and competition, that is, product and service offerings that require immediate development but will not contribute to revenue until outlying years. DSs deployed for prospect search and surveys to investigate interest in advanced smart grid products are examples.
 Exit strategies for existing product or service offerings, proactively determining the end of life of products or services. In many cases, the previous products and services either need to be discontinued or advanced significantly. The foremost example is the videocassette recorder.
DSs need to implement these uncertainties either by proactively initiating a change or by reactively responding to the change. The conclusion of various studies [9], [10], [18], and [22] indicates that the first step to consistently implement and update DSs is to define principles. These principles govern maintaining DSs in correlation with the enterprise entities and the advancement of BP activities.
3. Deriving Principles of DSs
The analysis of primary concerns and the literature review illustrated in Section 2 justify that the method for deriving principles of DSs should fundamentally focus on the BP requirements, the identified and placed BP activities, and the interdependencies between events of BP activities. The BP requirements have to be reviewed to certify their legitimacy and candidature for diversification to form the DSs' specification. Figure 1 presents the sequence of steps performed to identify principles of DSs in an enterprise and to architect DSs in adherence to BP requirements.
BP Requirements and Initiation: The first step is to validate the alignment of BP requirements with the business goals of an enterprise. Stage 0 (initiation) is defined to reiterate and evaluate BP requirements at each phase (or step). When an ambiguity is identified in a BP requirement at any step, due to the responsibilities associated with the corresponding step, Stage 0 is initiated. Stage ACN is defined to analyze the business impact, conflict of interest (if any exists), and notification across the enterprise.
Discovering and Assessing Architecture Artifacts: When an enterprise receives alterations or new BP requirements, it needs to assess the impact in terms of the other architectures associated with the enterprise (BP architecture, integration architecture, and system architecture). The responsibility of this step is to identify the need for introducing or updating architecture artifacts based on the process map (that is, the association of services to the BPs or their activities). Primarily, it is accountable for identifying whether any sublevel BPs (within existing BPs) and any additional BP activities are required to be introduced. The need for introducing additional sublevel BPs or BP activities may be due either to critical or major advancements in BP requirements or to changes necessary to other associated architecture artifacts (including integration and system architectures). The other major responsibility of this step is to check the availability of services for diversification based on BP requirements. It is also liable for specifying the desired level of updates and the interdependencies with enterprise entities associated with the services (DSs or other types).
Defining and Evolving Service Architecture: It is the primary step to define, update, version, and deploy DSs. The DS gets evolved and advanced accommodating the desired level of diversification identified in the previous step. The responsibilities of this step also include evaluating the potential uncertainties and the alternate paths that need to be derived in adherence to the identified uncertainties. The decision whether to introduce an additional DS, an additional operation to existing DSs, or changes to the operations of existing DSs has to be reached during this step. Modeling to map DSs with BP activities and streamlining their implementation are part of this phase of a DSs-enabled enterprise.
Fig. 1 Steps to identify principles of DSs and architect DSs in an
enterprise.
Associating Service Administration Paradigms: Specifying and resolving the interdependencies of DSs with the participant enterprise entities are the responsibilities of this step. It needs to ensure that DSs adhere to the availability of the enterprise resources and their defined Service Level Agreements (SLAs). Configuring, monitoring, and supporting DSs in association with enterprise entities (including any failure condition or resolution of uncertainties) are also accountabilities of this step, so that it can inform the principles of DSs in an enterprise and provide informed architecture decisions for DSs.
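The staged review flow described above (Stage 0, Stage ACN, and the approval gates between the business process, integration, and systems architects in Figure 1) can be viewed as a small state machine over a registered BP requirement. The following sketch is only illustrative: the stage names are taken from the text, while the transition rules and function names are assumptions made for the example.

```python
# Illustrative sketch of the staged review flow (Stage 0 / Stage ACN / approvals).
# Stage names follow the text; the transitions and helpers are assumptions.
from enum import Enum

class Stage(Enum):
    STAGE_0 = "initiation"        # reiterate and evaluate the BP requirement
    STAGE_ACN = "analysis"        # analyze business impact, conflicts, notification
    ARCHITECT_REVIEW = "review"   # BP / integration / systems architect approvals
    ITERATE = "iterate"           # rework and re-enter the review
    APPROVED = "approved"         # requirement accepted for DS design

def next_stage(stage: Stage, approved: bool, ambiguous: bool) -> Stage:
    """Advance a registered BP requirement through the review gates."""
    if ambiguous:                 # any ambiguity sends the requirement back to Stage 0
        return Stage.STAGE_0
    if stage is Stage.STAGE_0:
        return Stage.STAGE_ACN
    if stage is Stage.STAGE_ACN:
        return Stage.ARCHITECT_REVIEW if approved else Stage.STAGE_0
    if stage is Stage.ARCHITECT_REVIEW:
        return Stage.APPROVED if approved else Stage.ITERATE
    if stage is Stage.ITERATE:
        return Stage.ARCHITECT_REVIEW
    return stage

# Example: an unambiguous requirement that passes both gates.
s = Stage.STAGE_0
for ok in (True, True, True):
    s = next_stage(s, approved=ok, ambiguous=False)
print(s)  # Stage.APPROVED
```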
Following are the principles derived to identify, specify, develop, and deploy DSs in an enterprise, based on the steps necessary to achieve the BP requirements. Each step identified in Figure 1 reveals and constitutes the foundation for deriving the principles of DSs in relationship with the BP requirements.
 Specification of a DS's operations into information that can be utilized in BPs in the context of concrete activities. The most prominent example is the BP activity "generate invoice", which needs a DS that retrieves and combines the information of the purchased products and their current pricing.
 Deterministic specification of the relationship between BP activities and enterprise entities in a DS. In the example of the BP activity "generate invoice", if any discount has to be applied then it needs to be correlated with the pricing of the product.
 Precise definition of the BP activity's events that can be emulated, monitored, and optimized through a DS. The BP activity "generate invoice" requires the request to be validated before retrieving the other related information.
 Impact of people, processes, and product (or service) offerings as metadata associated with the DS. The BP activity "generate invoice" can only be initiated by a specific role associated with an employee (for example, manager) or triggered by another activity such as "completed order".
 Specification and governance of the SLAs of a DS in the context of the associated BP activity. "The invoice should be generated within 3 seconds of completing the order" is an example of an SLA.
 Regular placement and evaluation of governance paradigms for a DS in association with the BP activity to address uncertainties. The BP activities "cancel order" or "return an item (or product)" can occur after the invoice has been generated. If those activities are not defined, and the capabilities for updating, canceling, or revising invoicing are not defined, then they need to be introduced.
4. Identified Categories of DSs
Due to the increasing availability and development of SOA and BP platforms [26] and [28], services are being characterized in numerous different ways. The most utilized classification methodology is by functional architecture type, such as platform services, data services, application services, and infrastructure services. Another approach is to classify industry-segment-specific services such as healthcare services, utility services, and payment services. Certain enterprises are also inclined to introduce a custom classification of the services due to the unavailability of standards as well as for rationalization.
The identified principles of DSs indicate that DSs are required to react to the set of events associated with BP activities. DSs are independently built or composed utilizing one or more types of services placed in an enterprise. DSs need to be categorized such that each type can be streamlined based on its characteristics and governed based on the type of SLAs associated with it. Following is the list of identified categories of DSs based on their characteristics.
Competency Services: DSs that participate in satisfying one or more competencies of the core business offerings are categorized as competency services. Certain features between different versions of the same product line are generic and essential; however, some features need to be distinguished in the DS.
Relationship Services: DSs presenting the external and internal relationships of the enterprise entities with the roles associated with the entities, such as customer, partner, and supplier. An example of such a DS is that the relationship of an order with a customer differs from that with a vendor, and the corresponding actions need to differ in the operations of the DS.
Collaboration Services: Any DS offering collaboration among varied enterprise entities and BP activities is considered a participant of the collaboration service category. A calendar request to schedule a product review meeting is the type of collaboration service where participants can be either reviewer, moderator, or optional.
Common Services: When an enterprise gains maturity, it needs to have standardized audit, log, and monitoring capabilities. These standardized DSs fall into the category of common services. They are built to be utilized consistently across multiple sets of BP activities with a specific objective to monitor. Generating an invoice and recording the amount paid for an order are different BP activities; however, the number of items purchased is the same and is required to be monitored as well as verified between the BP activities.
Framework Services: Framework services increase awareness of the enterprise's technology architecture capabilities. A DS built to search the metadata associated with application services, data services, platform services, or infrastructure services is an example of a framework service. The DSs differ in terms of what type of metadata can be searched for which kind of service.
Governance Services: DSs deployed to ensure policies and practices are the governance services. Most diversifications of the security related services, including role based entitlement, are participants of the governance services.
Organizational Services: Organization culture has various impacts on the BP activities. DSs that offer a common understanding of the organization culture as well as corporate processes are the organizational services. Ordering and utilizing office supplies for different departments is an example of an organizational service. In this example, the DS differs in terms of the accessibility of the type of supplies to the particular department.
Strategic Services: DSs that participate in making a decision that impacts strategic direction and corporate goals are categorized as strategic services. Financial-analysis-based selection of marketing segments and budgeting based on available statistics of annual spending are the types of strategic services.
Conditional Services: Certain BP activities require special attention and business logic dedicated to a particular condition. The DSs built, updated, and maintained to accommodate such scenarios are subject to this classification. A credit card with special privileges for purchases over the allocated limit is an example of such DSs.
Automation Services: They are the services defined and utilized to introduce the desired level of automation, yielding additional business value for new or existing BP activities. Typically, automation related services require stronger bonding and maturity at the BP activities. The service to send an email notification for approval versus the service for online approval is the classical example of such DSs.
DSs can be associated with multiple categories. However, an alias to the DS is utilized for the secondary category such that it can be independently monitored and audited. Optional common header elements (or metadata) of DSs are introduced to capture the runtime metrics for the DSs. Following is the additional information that DSs provide at runtime for further evaluation (a sketch of such a header record follows the list):
 Instance identification of the DS.
 Category of the DS.
 BP name and activity utilizing the DS.
 Registered consumer group and associated role using the DS.
 Service's probability of failure (recursively identified from the audit logs).
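The header elements listed above are only named in the text; a minimal sketch of what such a runtime header record could look like is given below. The class and field names are assumptions introduced for illustration, not part of the paper's specification.

```python
# Illustrative runtime header for a differentiated service (DS).
# Field names are assumptions derived from the bullet list above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DSRuntimeHeader:
    instance_id: str            # instance identification of the DS
    category: str               # e.g. "Competency", "Relationship", ...
    bp_name: str                # BP utilizing the DS
    bp_activity: str            # BP activity utilizing the DS
    consumer_group: str         # registered consumer group
    consumer_role: str          # associated role using the DS
    failure_probability: float  # recursively estimated from audit logs
    aliases: List[str] = field(default_factory=list)  # secondary categories

header = DSRuntimeHeader(
    instance_id="invoice-ds-013",
    category="Competency",
    bp_name="Invoicing",
    bp_activity="generate invoice",
    consumer_group="billing",
    consumer_role="manager",
    failure_probability=0.02,
    aliases=["Common"],
)
print(header.category, header.failure_probability)
```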
5. Evaluating Velocity of an Enterprise
The experimental evaluation is based on a set of 62 DSs out of 304 services (including functional architecture type services as well as industry-segment-specific services besides the dedicated DSs). The services are built in the Oracle SOA suite [24], which has internal capabilities to map and generate relationships with BP activities. 4 iterations of development, updates, and deployment have been conducted for the following 7 BPs. The BP activities and DSs are derived based on the severity of the BP requirements.
BP# 1: Customer enrollment and registration
BP# 2: Manage customer information, inquiry, and history
BP# 3: Purchase order
BP# 4: Payment processing and account receivables
BP# 5: Invoicing
BP# 6: Notification and acceptance of terms
BP# 7: Account management
The velocity of the enterprise is a representation of the rapid changes and updates necessary to achieve the BP requirements. The changes can be achieved by updating or introducing either DS operations, DSs, BP activities, or sublevel BPs. Correspondingly, the velocity is based on four types of ratios, as specified below. The ratios are representations of the level of change necessary to achieve the goals of the BP requirements.
 DSs' Ratio (DSR) = (Additional composite services / Total number of services)
 DS Operations' Ratio (OPR) = (Additional accumulative number of DS operations / Accumulative number of DS operations)
 BP Activities' Ratio (AR) = (Additional BP activities / Total number of BP activities)
 Sublevel BPs' Ratio (SBR) = (Additional sublevel BPs / Total number of sublevel BPs)
The velocity evaluation presented in Eq. (1) also introduces an impact factor corresponding to each ratio, that is, c (critical), h (high), m (medium), and l (low). The assigned values for the impact factors are c = 10, h = 7, m = 4, and l = 2 to indicate a finite value for the severity of an update. There is absolutely no constraint against revisiting the allocation of
severity to the update impact factors during subsequent iterations of updates to BP requirements and the corresponding deployment cycle. It should be based on the findings as well as the severity of the BP requirements in consideration.

VELOCITY = \frac{ \sum_{BP=1}^{\#BPs} \left( m \cdot DSR + l \cdot OPR + h \cdot AR + c \cdot SBR \right) }{ \#BPs }    (1)

In Eq. (1), #BPs represents the total number of participant BPs forming the DSs-enabled enterprise (7 in this case). When there is a need to introduce or update a sublevel BP due to a BP requirement, it is considered a critical (c) change to the enterprise, whereas the update to or introduction of a DS operation is considered the lowest category of change, that is, low (l).
Table 1 provides the implementation-based analysis and the computed velocity of the 4th deployment iteration of BP requirements corresponding to the 7 BPs (as described above). Following are the acronyms utilized in Table 1.
 #DSs: total number of participant DSs for the BP.
 #OPs: accumulative number of DS operations involved.
 #As: total number of BP activities for the BP.
 #SBPs: total number of sublevel BPs of the BP.
 #A-CSs: sum of new and updated DSs for the BP in iteration 4.
 #A-OPs: sum of new and updated DS operations introduced to the BP in iteration 4.
 #A-A: sum of new and updated BP activities introduced to the BP in iteration 4.
 #A-SBPs: sum of new and updated sublevel BPs introduced to the BP in iteration 4.
DS operations, DSs, BP activities, and sublevel BPs that are reused across multiple BPs are counted at each and every instance for the purpose of accuracy in evaluating the velocity.

Table 1: Velocity of the enterprise in iteration 4
BP# | #DSs (#A-CSs) | #OPs (#A-OPs) | #As (#A-A) | #SBPs (#A-SBPs)
1   | 7(0)  | 20(2) | 8(1)  | 2(0)
2   | 12(3) | 28(7) | 15(0) | 3(0)
3   | 18(4) | 42(7) | 22(2) | 5(1)
4   | 8(2)  | 15(3) | 15(2) | 3(0)
5   | 5(1)  | 12(2) | 10(1) | 3(0)
6   | 3(0)  | 8(1)  | 7(0)  | 1(0)
7   | 9(2)  | 16(3) | 14(1) | 2(0)

VELOCITY (of Iteration 4) = 1.52

As such, there is no maximum limit set for the velocity; however, the present deployment iteration's velocity score can be considered as the baseline for subsequent iterations. The progressive values of the velocity are indicated in Figure 2 for each iteration (1 through 4) pertaining to the 7 BPs in consideration.
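To make the formula concrete, the sketch below recomputes a velocity score from per-BP counts in the style of Eq. (1) and Table 1. It is only an illustration of the arithmetic: the exact counting rules the author used (and hence the reported value of 1.52) are not fully specified in the paper, so the number produced by this sketch should not be expected to reproduce the published result.

```python
# Illustrative recomputation of the velocity score of Eq. (1).
# Impact factors as given in the text: c=10, h=7, m=4, l=2.
C, H, M, L = 10, 7, 4, 2

# (#DSs, #A-CSs, #OPs, #A-OPs, #As, #A-A, #SBPs, #A-SBPs) per BP, from Table 1.
table1 = {
    1: (7, 0, 20, 2, 8, 1, 2, 0),
    2: (12, 3, 28, 7, 15, 0, 3, 0),
    3: (18, 4, 42, 7, 22, 2, 5, 1),
    4: (8, 2, 15, 3, 15, 2, 3, 0),
    5: (5, 1, 12, 2, 10, 1, 3, 0),
    6: (3, 0, 8, 1, 7, 0, 1, 0),
    7: (9, 2, 16, 3, 14, 1, 2, 0),
}

def velocity(rows):
    total = 0.0
    for ds, a_cs, ops, a_ops, acts, a_a, sbps, a_sbps in rows.values():
        dsr = a_cs / ds        # DSs' ratio
        opr = a_ops / ops      # DS operations' ratio
        ar = a_a / acts        # BP activities' ratio
        sbr = a_sbps / sbps    # sublevel BPs' ratio
        total += M * dsr + L * opr + H * ar + C * sbr
    return total / len(rows)   # average over the participating BPs

print(round(velocity(table1), 2))
```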
6. Formulating the DSs Maintainability Index (DSMI)
There is no obvious solution for evaluating the maintainability of DSs. The primary reason is the little to no effort spent on defining a maturity model and standardization for DSs. SOA maturity models and governance are applied at the more operational aspects of the functional architecture type services [30] and [13]. The other types of metrics presented in [1] and [32] measure agility irrespective of the maintainability concerns of DSs. The DSMI is an effort to compute and continuously monitor the maintainability of DSs. Oracle SOA suite capabilities are utilized to monitor and log DSs. Service registry features are embraced to define, govern, and monitor the SLAs as well as the metadata associated with the DSs.
6.1 Paradigms to Derive the Inverted DSMI
The paradigms used to formulate the DSMI are described below for each type of DS.
Business continuity (BUC): It is to determine whether the introduced or updated DSs are able to continue the day-to-day business activities after the deployment (or iteration). The evaluation criterion for the BUC paradigm is to monitor the number of unique support tickets created for the type of DSs in context. For example, new customer registration is producing errors due to inaccuracies in the validation of the customer account number and/or customer identification. The inverted ratio for BUC specific to the set of DSs associated with the DS type is derived below.
iBUC<DS type> = (# of unique support tickets by the customer / #DSs deployed for <DS type>)
Operational risk (ORI): Operational risks basically evaluate the DS-level continuation of the enterprise operations. Typically, it is traced by the number of failures that occurred for the DSs in the production cycle of the present deployment iteration. A specific example is a change purchase order request DS that failed due to an ambiguous condition occurring within the dedicated DSs. The inverted ratio for ORI specific to the set of DSs associated with the DS type is derived below.
iORI<DS type> = (# of unique operational failures / #DSs deployed for <DS type>)
The iORI ratio is also compared against the failures of the previous deployment iteration. The DS header contains the probability of failure, and it is automated to some extent to gain an indicative operational risk at runtime (as stated in Section 4).
SLA Factorization (SPR): Scalability, reliability, and performance (SPR) are bundled to evaluate SLA factorization. The SLAs defined in consideration of the desired SPR for each type of DS are configured and monitored. The SPR is identified based on the number of violations by the particular category of DSs in the present deployment iteration. A 4 second delay (when the SLA is set for a maximum of 3 seconds) in sending the order confirmation to the vendor for a specific product due to heavy traffic is an example of an SLA violation. The inverted ratio for SPR specific to the set of DSs associated with the DS type is derived below.
iSPR<DS type> = (# of unique SPR specific SLA violations / #DSs deployed for <DS type>)
Consistency (COS): Consistency can be evaluated from many different aspects. The primary objective of this criterion is to assess the scope of the DS across multiple BP activities. Due to the BP requirements, the specification of the DS needs to incorporate high level interactions with enterprise entities and the underlying events of BP activities. The consistency of a DS is derived based on the number of BP activities utilizing the specific type of DSs in consideration. The most prominent example is that the order delivery confirmation and status need to be sent to the customer, the vendor, and account receivables. The inverted ratio for COS specific to the set of DSs associated with the DS type is derived below.
iCOS<DS type> = (# of BP activities utilizing DSs of <DS type> / #DSs deployed for <DS type>)
Extendibility and Continuous Improvements (ECI): Extensibility and continuous improvement of the DSs are evaluated based on the customization required to accomplish the BP requirements. It is computed considering the number of additional custom models as well as implementations needed in the context of the BP activity and enterprise entity. The primary objective is whether the respective DSs are able to accommodate these customizations within the dilemma of their dependencies with existing enterprise entities. "If the payment is not received within 6 months then it needs to be sent for collection and the vendor also needs to be notified" is an example of the extendibility of DSs associated with the payment processing and account receivables BP. The inverted ratio for ECI specific to the set of DSs associated with the DS type is derived below.
iECI<DS type> = (# of alternate BP flows accustomed in DSs of <DS type> / #DSs deployed for <DS type>)
If "n" stands for the number of DS types identified in an enterprise (it is 10 in this case, based on Section 4), then the inverted DSMI can be computed based on Eq. (2). #Paradigms (the number of paradigms) impacting the DSMI is 5, as described above.

Inverted DSMI = \frac{1}{DSMI} = \frac{ \frac{\sum_{1}^{n} iBUC}{n} + \frac{\sum_{1}^{n} iORI}{n} + \frac{\sum_{1}^{n} iSPR}{n} + \frac{\sum_{1}^{n} iCOS}{n} + \frac{\sum_{1}^{n} iECI}{n} }{ \#Paradigms }    (2)

Table 2 below presents the DSMI computed in iteration 4 for the identified and deployed 7 BPs (as described in Section 5).

Table 2: DSMI in iteration 4
DS Type (# of DSs) | iBUC | iORI | iSPR | iCOS | iECI
Competency (6)     | 0.33 | 0.83 | 0.5  | 0.5  | 0.67
Relationship (12)  | 0.25 | 0.67 | 0.5  | 1.5  | 0.5
Collaboration (4)  | 0.25 | 0    | 0.25 | 0.5  | 0.25
Common (7)         | 0.29 | 0.14 | 0.42 | 2    | 0.29
Framework (8)      | 0.5  | 0.25 | 0.75 | 0.5  | 0.38
Governance (6)     | 0.33 | 0.5  | 0.33 | 1.5  | 0.83
Organizational (7) | 0.29 | 0.14 | 0.42 | 0.71 | 0.86
Strategic (7)      | 0.14 | 0    | 0.14 | 1    | 0.42
Conditional (5)    | 0.6  | 0.8  | 0.4  | 0.4  | 0.2
Automation (3)     | 0.33 | 0.67 | 1.67 | 0.67 | 2

Actual DSMI (of Iteration 4) = 1.76
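As a quick check of Eq. (2), the sketch below averages each inverted paradigm ratio over the n = 10 DS types of Table 2, sums the five averages, divides by the 5 paradigms, and inverts the result, which reproduces the reported DSMI of about 1.76. The dictionary layout is an assumption made for the example.

```python
# Recomputing the inverted DSMI of Eq. (2) from the Table 2 ratios.
# Rows: DS type -> (iBUC, iORI, iSPR, iCOS, iECI).
table2 = {
    "Competency":     (0.33, 0.83, 0.50, 0.50, 0.67),
    "Relationship":   (0.25, 0.67, 0.50, 1.50, 0.50),
    "Collaboration":  (0.25, 0.00, 0.25, 0.50, 0.25),
    "Common":         (0.29, 0.14, 0.42, 2.00, 0.29),
    "Framework":      (0.50, 0.25, 0.75, 0.50, 0.38),
    "Governance":     (0.33, 0.50, 0.33, 1.50, 0.83),
    "Organizational": (0.29, 0.14, 0.42, 0.71, 0.86),
    "Strategic":      (0.14, 0.00, 0.14, 1.00, 0.42),
    "Conditional":    (0.60, 0.80, 0.40, 0.40, 0.20),
    "Automation":     (0.33, 0.67, 1.67, 0.67, 2.00),
}

n = len(table2)                      # number of DS types (10)
paradigms = 5                        # BUC, ORI, SPR, COS, ECI
# Average each paradigm's inverted ratio over the n DS types.
averages = [sum(row[i] for row in table2.values()) / n for i in range(paradigms)]
inverted_dsmi = sum(averages) / paradigms
dsmi = 1.0 / inverted_dsmi
print(round(dsmi, 2))                # ~1.76, matching the reported value
```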
6.2 Analysis and Observations of the Evaluation
Figure 2 provides the progress of the velocity and the DSMI through iteration 4 for the 7 BPs deployed, advanced, and monitored. The finite numbers indicate a significant reduction in velocity over the iterations: a 58% reduction in velocity in deployment iteration 4 compared to iteration 3.
Fig. 2 Computed velocities and DSMI for all deployment iterations in production.
The graph also indicates an increase in the DSMI over the iterations. The DSMI of deployment iteration 4 is improved by 21% compared to iteration 3. The results directly illustrate that continuous monitoring and improvements, in terms of reducing the number of issues reported by the business users, immediate resolution of the causes of service failures, accurate modeling of DSs with respect to the BP requirements, and precision in test scenarios, decrease the velocity of the enterprise and stabilize the DSMI.
Essentially, it concludes that a larger number of BP activities utilizing a single DS and a larger number of alternate paths included in a single DS decrease the level of maintainability of the DSs; however, they increase the consistency and extendibility of the DSs. Contrarily, introducing a larger number of DSs also increases the additional level of SLA associations and uncertainties; however, it introduces an increased level of flexibility and agility in the enterprise. It is a trade-off that the enterprise has to decide during the assessment of the DSs architecture (the 2nd step described in Section 3, Figure 1).
7. Conclusions
The perception of SOA is receiving wide acceptance due to its ability to accommodate and respond to BP related requirements and changes while providing operational visibility to an enterprise. DSs are the means to accommodate the uncertainties of BPs such that an enterprise may be able to gain an acceptable level of agility and completeness. As such, there is limited to no standardization available to derive and maintain the qualities of DSs. In this paper, we presented the necessity of rationalizing DSs and their principles. The research effort proposes an empirical method to derive and evolve the principles of identifying and placing DSs. The categorization and the corresponding implementation of BP requirements into the DSs are identified and applied. Formulae to evaluate the velocity of an enterprise and assessment criteria to monitor the maintainability of deployed DSs in terms of an index are illustrated with an example implementation and validated in a number of actual deployment iterations.
The rationalization achieved utilizing the methodology to derive and place principles of DSs increases consistency and predictability across multiple units as well as entities of an enterprise. The measurable implications due to changes in BP requirements and assessable maintainability are accomplished due to the classification and evaluation methodologies of DSs. The subsequent step is to determine more granular levels of DS types that can be leveraged in multifaceted BP scenarios. The underlying primary goal remains intact, that is, to evolve, retain, and stabilize the maintainability of DSs.
Acknowledgments
Vikas Shah wishes to recognize the Wipro Technologies' Connected Enterprise Services (CES) sales team for supporting the initiative. Special thanks to the Wipro Technologies' Oracle practice for providing the opportunity of applying the conceptually identified differentiated principles, types, and measurements in 4 different iterations.
References
[1] A. Qumer and B. Henderson-Sellers, “An Evaluation of the
Degree of Agility in Six Agile Methods and its Applicability
for Method Engineering,” In: Information and Software
Technology Volume 50 Issue 4, pp. 280 – 295, March 2008.
[2] Alfred Zimmermann, Kurt Sandkuhl, Michael Pretz,
Michael Falkenthal, Dierk Jugel, Matthias Wissotzki,
“Towards and Integrated Service-Oriented Reference
Enterprise Architecture,” In: Proceedings of the 2013
International Workshop on Ecosystem Architectures, pp.
26-30, 2013.
[3] Andrea Malsbender, Jens Poeppelbuss, Ralf Plattfaut, Björn
Niehaves, and Jörg Becker, “How to Increase Service
Productivity: A BPM Perspective,” In: Proceedings of
Pacific Asia Conference on Information Systems 2011, July
2011.
[4] Anirban Ganguly, Roshanak Nilchiani, and John V. Farr,
“Evaluating Agility in Corporate Enterprise,” In:
International Journal of Production Economics Volume 118
Issue 2, pp. 410 – 423, April 2009.
[5] Aries Tao Tao and Jian Yang, “Context Aware
Differentiated Services Development with Configurable
Business Processes,” In: 11th IEEE International Enterprise
Distributed Object Computing Conference 2007, Oct 2007.
[6] Anne Hiemstra, Pascal Ravesteyn, and Johan Versendaal,
“An Alignment Model for Business Process Management
and Service Oriented Architecture,” In: th International
Conference on Enterprise Systems, Accounting and
Logistics (6th ICESAL ’09), May 2009.
[7] Bohdana Sherehiy, Waldemar Karwowski, and John K.
Layer, “A Review of Enterprise Agility: Concepts,
Frameworks, and Attributes,: In: International Journal of
Industrial Ergonomics 37, pp. 445 – 460, March 2007.
[8] Clay Richardson and Derek Miers, “How The Top 10
Vendors Stack Up For Next-Generation BPM Suites,” In:
The Forrester Wave: BPM Suites, Q1 2013, For: Enterprise
Architecture Professionals, March 2013.
[9] Daniel Selman, “5 Principles of Agile Enterprise in 2012,”
In: IBM Operational Decision Manager Blog, Dec 2011.
[10] Dean Leffingwell, Ryan Martens, and Mauricio Zamora,
“Principles of Agile Architecture,” Leffingwell, LLC . &
Rally Software Development Corp., July 2008.
[11] Douglas Paul Thiel, “Preserving IT Investments with BPM
+ SOA Coordination,” Technology Investment Management
Library, SenseAgility Group, November 2009.
[12] Florian Wagner, Benjamin Klöpper, and Fuyuki Ishikawa,
“Towards Robust Service Compositions in the Context of
Functionally Diverse Services,” In: Proceedings of the 21st
international conference on World Wide Web, pp. 969-978,
April 2012.
[13] Fred A. Cummins, “Chapter 9: Agile Governance,” In
Book: Building the Agile Enterprise: With SOA, BPM and
MBM, Morgan Kaufmann, July 28, 2010.
[14] Haitham Abdel and Monem El-Ghareeb, “Aligning Service
Oriented Architecture and Business Process Management
Systems to Achieve Business Agility,” Technical Paper,
Department of Information System, Mansoura University,
EGYPT, 2008.
[15] Harry Sneed, Stephan Sneed, and Stefan Schedl, “Linking
Legacy Services to the Business Process Model,” In: 2012
IEEE 6th International Workshop on the Maintenance and
Evolution of Service-Oriented and Cloud-Based Systems,
August 2012.
[16] Imran Sarwar Bajwa, Rafaqut Kazmi, Shahzad Mumtaz, M.
Abbas Choudhary, and M. Shahid Naweed, “SOA and BPM
Partnership: A paradigm for Dynamic and Flexible Process
and I.T. Management,” In: International Journal of
Humanities and Social Sciences 3(3), pp.267-273, Jul 2009.
[17] Imran Sarwar Bajwa, “SOA Embedded in BPM: High Level
View of Object Oriented Paradigm,” In: World Academy of
Science, Engineering & Technology, Issue 54, pp.209-312,
May 2011.
[18] Jean-Pierre Vickoff, “Agile Enterprise Architecture
PUMA,” Teamlog, October 2007.
[19] Leonardo Guerreiro Azevedo, Flávia Santoro, Fernanda
Baião, Jairo Souza, Kate Revoredo, Vinícios Pereira, and
Isolda Herlain, “A Method for Service Identification from
Business Process Models in A SOA Approach,” In:
Enterprise, Business-Process and Information Systems
Modeling, LNCS Volume 29, pp. 99-112, 2009.
[20] Marinela Mircea, “Adapt Business Process to Service
Oriented Environment to Achieve Business Agility,” In:
Journal of Applied Quantitative Methods . Winter 2010,
Vol. 5 Issue 4, p679-691, 2010.
[21] Martin Keen, Greg Ackerman, Islam Azaz, Manfred Haas,
Richard Johnson, JeeWook Kim, Paul Robertson, “Patterns:
SOA Foundation - Business Process Management
Scenario,” IBM WebSphere Software Redbook, Aug 2006.
[22] Mendix Technology, “7 Principles of Agile Enterprises:
Shaping today’s Technology into Tomorrow’s Innovation,”
Presentation, October 2013.
[23] Michael Rosen, Boris Lublinsky, Kevin T. Smith, and Marc
J. Balcer, “Overview of SOA Implementation
Methodology,” In Book: Applied SOA: SOA and Design
Strategies, John Wiley & Sons, Web ISBN: 0-470223-65-0,
June 2008.
[24] Oracle Inc., Oracle Application Integration Architecture:
Business Process Modeling and Analysis, Whitepaper,
2013.
[25] Paul Harmon, “What is Business Architecture,” In: Business
Process Trends Vol. 8, Number. 19, November 2010.
[26] Petcu, D. and Stankovski, V., “Towards Cloud-enabled
Business Process Management Based on Patterns, Rules and
Multiple Models,” In: 2012 IEEE 10th International
Symposium on Parallel and Distributed Processing with
Applications (ISPA), July 2012.
[27] Ralph Whittle, “Examining Capabilities as Architecture,”
In: Business Process Trends, September 2013.
[28] Ravi Khadka, Amir Saeidi, Andrei Idu, Jurriaan Hage,
Slinger Jansen, “Legacy to SOA Evolution: A Systematic
Literature Review,” Technical Report UU-CS-2012-006,
Department of Information and Computing Sciences,
Utrecht University, Utrecht, The Netherlands, March 2012.
[29] Razmik Abnous, “Achieving Enterprise Process Agility
through BPM and SOA,” Whitepaper, Content Management
EMC, June 2008.
[30] Scott W Ambler and Mark Lines, “Disciplined Agile
Delivery: A Practitioner’s Guide to Agile Software Delivery
in the Enterprise,” IBM Press, ISBN: 0132810131, 2012.
[31] The Open Group: TOGAF Version 9.1 Standards,
December 2011.
http://pubs.opengroup.org/architecture/togaf9-doc/arch/
[32] Tsz-Wai Lui and Gabriele Piccoli, “Degree of Agility:
Implications for Information Systems Design and Firm
Strategy,” In Book: Agile Information Systems:
Conceptualization, Construction, and Management,
Routledge, Oct 19, 2006.
[33] Vishal Dwivedi and Naveen Kulkarni, “A Model Driven
Service Identification Approach for Process Centric
Systems,” In: 2008 IEEE Congress on Services Part II,
pp.65-72, 2008.
[34] Xie Zhengyu, Dong Baotian, and Wang Li, “Research of
Service Granularity Base on SOA in Railway Information
Sharing Platform,” In: Proceedings of the 2009 International
Symposium on Information Processing (ISIP’09), pp. 391-395, August 21-23, 2009.
Vikas S Shah received the Bachelor of Engineering degree in
computer engineering from Conceicao Rodrigues College of
Engineering, University of Mumbai, India in 1995, the M.Sc.
degree in computer science from Worcester Polytechnic Institute,
MA, USA in 1998. Currently he is Lead Architect in Connected
Enterprise Services (CES) group at Wipro Technologies, NJ, USA.
He has published several papers on integration architecture, real-time enterprises, architecture methodologies, and management
approaches. He headed multiple enterprise architecture initiatives
and research ranging from startups to consulting firms. Besides
software architecture research and initiatives, he is extensively
supporting pre-sales solutions, risk management methodologies,
and service oriented architecture or cloud strategy assessment as
well as planning for multinational customers.
A comparative study and classification on web service
security testing approaches
Azadeh Esfandyari
Department of Computer, Gilangharb branch, Islamic Azad University, Gilangharb, Iran
[email protected]
Abstract
Web Services testing is essential to achieve the goal of scalable, robust and successful Web Services, especially in business environments where hundreds of Web Services may work together. This relatively new way of software development brings out new issues for Web Service testing to ensure the quality of services that are published, bound, invoked and integrated at runtime. Testing services poses new challenges to traditional testing approaches. The dynamic scenario of Service Oriented Architecture (SOA) is also altering the traditional view of security and causes new risks. The great importance of this field has attracted the attention of researchers. In this paper, in addition to presenting a survey and classification of the main existing web service testing approaches, web service security testing research and its issues are investigated.
2. Overview and a classification of web service
testing approaches
Keywords: web service security testing, WSDL
The Web Services world is moving fast, producing new specifications all the time and different applications, and hence introducing more challenges to develop more adequate testing schemes. The challenges stem mainly from the fact that Web Services applications are distributed applications with runtime behaviors that differ from more traditional applications. In Web Services, there is a clear separation of roles between the users, the providers, the owners, and the developers of a service and the piece of software behind it. Thus, automated service discovery and ultra-late binding mean that the complete configuration of a system is known only at execution time, and this hinders integration testing [2]. To give an overview of web service testing approaches I use the classification proposed by [2]. But it seems that this classification is not sufficient for categorizing all existing approaches; therefore a new classification is introduced.
In [2] the existing web service testing approaches are classified into 4 classes, excluding the approaches that are based on formal methods and data gathering:
 WSDL-Based Test Case Generation Approaches
 Mutation-Based Test Case Generation Approaches
 Test Modeling Approaches
 XML-Based Approaches
All Mutation-Based test case generation approaches that are referred to in [2], like [3, 4], are based on WSDL and can be placed in the first class. Also, there are approaches that, in addition to considering the WSDL specification, use other scenarios to cope with the limitations of WSDL-specification-
1. Introduction
The Web Services are modular, self-described and self-contained applications. With open standards, Web Services enable developers to build applications based on any platform, with any modular component and any programming language. More and more corporations now expose their information as Web Services and, what is more, it is likely that Web Services are used in mission critical roles, therefore performance matters. Consumers of web services will want assurances that Web Services won't fail to return a response within a certain time period. So Web Services testing is all the more important to meet the consumers' needs. Web Services testing is different from traditional software testing. In addition, traditional testing processes and tools do not work well for testing Web Services; therefore, testing Web Services is difficult and poses many challenges to traditional testing approaches, due to the above mentioned reasons and mainly because Web Services are distributed applications with numerous runtime behaviors.
Generally, there are two kinds of Web Services: the Web Services used in an Intranet and the Web Services used on the Internet. Both of them face security risks, since messages could be stolen, lost, or modified. Information protection is the complex of means directed at assuring information safety. In practice it should include maintenance of the integrity, availability, and confidentiality of the information and of the resources that are used for data input, saving, processing and transferring [1]. To achieve reliable Web services, which can be integrated into compositions or consumed without any risk in an open network like the Internet, more and more software development companies rely on testing activities. In particular, security testing approaches help to detect vulnerabilities in Web services in order to make them trustworthy. The rest of the paper is organized as follows: Section II presents an overview and a classification of web service testing approaches. Section III summarizes web service security testing approaches and issues. Finally, Section IV gives a conclusion of the paper.
based test case generation, so the introduction of a new category seems necessary. The proposed classification is:
 WSDL-Based Test Case Generation Approaches
 Test Modeling Approaches
 XML-Based Approaches
 Extended Test Case Generation Approaches
2.1 WSDL-Based Test Case Generation Approaches
These approaches essentially present solutions for generating test cases for web services based only on the Web Services Description Language (WSDL). Research activities in this category are really extensive and are not all included in this paper. Two WSDL-based approaches are introduced in the following.
Hanna and Munro in [5] present a solution for test case generation depending on a model of the XML schema datatypes of the input message parameters that can be found in the WSDL specification of the Web Service under test. They consider the role of the application builder and the broker in testing web services. This framework uses just boundary value testing techniques.
Mao in [6] proposes a two level testing framework for Web Service-based software. At the service unit level, a combinatorial testing method is used to ensure a single service's reliability by extracting interface information from the WSDL file. At the system level, the BPEL specification is first converted into a state diagram, and then a state transition-based test case generation algorithm is presented.
Obviously, the research efforts that generate web service test cases from WSDL by using various testing techniques, like black box and random testing techniques and so on, are placed in this category.
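To make the boundary-value idea in [5] concrete, the sketch below reads a toy WSDL/XSD fragment and emits boundary test values for an integer input parameter. The WSDL fragment, element names, and the choice of boundaries are assumptions introduced for illustration; they are not taken from the cited work.

```python
# Illustrative boundary-value test data generation from an XSD type in a WSDL.
# The WSDL fragment and the boundary rules are assumptions for this example.
import xml.etree.ElementTree as ET

WSDL = """<definitions xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <types>
    <xsd:schema>
      <xsd:element name="quantity">
        <xsd:simpleType>
          <xsd:restriction base="xsd:int">
            <xsd:minInclusive value="1"/>
            <xsd:maxInclusive value="100"/>
          </xsd:restriction>
        </xsd:simpleType>
      </xsd:element>
    </xsd:schema>
  </types>
</definitions>"""

XSD = "{http://www.w3.org/2001/XMLSchema}"

def boundary_values(wsdl_text):
    """Return boundary test inputs for each restricted integer element."""
    root = ET.fromstring(wsdl_text)
    cases = {}
    for elem in root.iter(XSD + "element"):
        name = elem.get("name")
        restriction = elem.find(f".//{XSD}restriction")
        if restriction is None:
            continue
        lo = int(restriction.find(XSD + "minInclusive").get("value"))
        hi = int(restriction.find(XSD + "maxInclusive").get("value"))
        # classic boundary values: just outside, on, and just inside each bound
        cases[name] = [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]
    return cases

print(boundary_values(WSDL))   # {'quantity': [0, 1, 2, 99, 100, 101]}
```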
2.2 Test Modeling Approaches
Model-based testing is a kind of black-box testing, where the test experiments are automatically generated from the formally described interface specification, and subsequently also automatically executed [7].
Frantzen et al. [7] discuss on a running example how coordination protocols may also serve as the input for model-based testing of Web Services. They propose to use Symbolic Transition Systems and the underlying testing theory to approach modelling and testing the coordination.
Feudjio and Schieferdecker in [8] introduced the concept of test patterns as an attempt to apply the design pattern approach, broadly applied in object-oriented software development, to model-driven test development. Pattern driven test design effectively allows tests targeting semantic aspects, which are highly critical for service availability testing, unlike other approaches that focus on syntactical correctness, to be designed at an early stage to drive the product development process and to help uncover failures prior to the deployment of services [2].
Tsai et al. [9] present a Web Services testing approach based on a stochastic voting algorithm that votes on the outputs of the Web Service under test. The algorithm uses the idea of k-means clustering to handle multi-dimensional data with deviations. The heuristic is based on local optimization and may fail to find the globally optimal results. Furthermore, the algorithm assumes that the allowed deviation is known, which may be hard to determine because the deviation is application dependent.
2.3 XML-Based Approaches
Tsai et al. [10] proposed an XML-based object-oriented (OO) testing framework to test Web Services rapidly. They named their approach Coyote. It consists of two parts: a test master and a test engine. The test master allows testers to specify test scenarios and cases as well as various analyses such as dependency analysis and completeness and consistency checking, and converts WSDL specifications into test scenarios. The test engine interacts with the Web Services under test and provides tracing information. The test master maps WSDL specifications into test scenarios, performs test scenario and case generation, performs dependency analysis, and checks completeness and consistency. A WSDL file contains the signature specification of all the Web Service methods, including method names and input/output parameters, and the WSDL can be extended so that a variety of test techniques can be used to generate test cases. The test master extracts the interface information from the WSDL file and maps the signatures of the Web Services into test scenarios. The test cases are generated from the test scenarios in the XML format, which is interpreted by the test engine in the second stage.
Di Penta et al. [11] proposed an approach to complement service descriptions with a facet providing test cases, in the form of XML-based functional and nonfunctional assertions. A facet is an (XML) document describing a particular property of a service, such as its WSDL interface. Facets to support service regression testing can either be produced manually by the service provider or by the tester, or can be generated from unit test cases of the system exposed as a service.
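As a small illustration of the XML-based style of test description used by the approaches in this class, the sketch below builds a toy XML test case for a web service operation and serializes it. The element and attribute names are invented for the example and do not correspond to the actual Coyote or facet schemas.

```python
# Illustrative XML test case for a web service operation.
# The element/attribute names are invented; real frameworks define their own schemas.
import xml.etree.ElementTree as ET

def build_test_case(operation, inputs, expected):
    case = ET.Element("testCase", {"operation": operation})
    req = ET.SubElement(case, "request")
    for name, value in inputs.items():
        ET.SubElement(req, "param", {"name": name}).text = str(value)
    ET.SubElement(case, "assertion", {"type": "equals"}).text = str(expected)
    return case

case = build_test_case("getQuote", {"symbol": "ACME", "currency": "USD"}, 42.0)
print(ET.tostring(case, encoding="unicode"))
# <testCase operation="getQuote"><request><param name="symbol">ACME</param>...
```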
2.4 Extended Test Case Generation Approaches
Because WSDL provides weak support for the semantic aspects of web services, some approaches do not confine themselves only to WSDL-based test case generation.
Damiani et al. [12], in order to guarantee the quality of the given services, propose a collaborative testing framework in which different parties participate. They proposed a novel approach that uses a third party certifier as a trusted entity to perform all the needed tests on behalf of the user and certify that a particular service has been tested successfully to satisfy the user's needs.
The open model scenario is a way to overcome the limitations of WSDL specification based test case generation [12].
Since the service source code is generally not available, the certifier can gain a better understanding of the service behavior starting from its model. The benefit of such a strategy is to allow the certifier to identify the critical areas of the service and therefore design test cases to check them [12].
2.5 Web service testing tools
Many tools have been implemented for testing Web Services. The next subsections briefly describe three selected tools.
• SoapUI Tool
This tool is a Java based open source tool. It can work under any platform provided with a Java Virtual Machine (JVM). The tool is implemented mainly to test Web Services such as SOAP, REST, HTTP, JMS and other based services. Although SoapUI concentrates on functionality, it also considers performance, interoperability, and regression testing [13].
• PushToTest Tool
One of the objectives of this open source tool is to support reusability and sharing between people who are involved in software development by providing a robust testing environment. PushToTest is primarily implemented for testing Service Oriented Architecture (SOA), Ajax, Web applications, Web Services, and many other applications. This tool adopts a methodology which is used in many reputed companies. The methodology consists of four steps: planning, functional test, load test, and result analysis. PushToTest can determine the performance of Web Services, and report the broken ones. Also, it is able to recommend some solutions to the problems of performance [14].
• WebInject Tool
This tool is used to test Web applications and services. It can report the testing results in real time, and monitor applications efficiently. Furthermore, the tool supports a set of multiple cases, and has the ability to analyze these cases in reasonable time. Practically, the tool is written in Perl, and works with the platforms which have a Perl interpreter. The architecture of the WebInject tool includes the WebInject Engine and a Graphical User Interface (GUI), where the test cases are written in XML files and the results are shown in HTML and XML files [15].
3. Web service security testing overview and
related work
Web services play an important role for the future of the
Internet, for their flexibility, dynamicity, interoperability,
and for the enhanced functionalities they support. The
price we pay for such an increased convenience is the
introduction of new security risks and threats, and the need
of solutions that allow to select and compose services on
the basis of their security properties [16]. This dynamic
and evolving scenario is changing the traditional view of
security and introduces new threats and risks for
applications. As a consequence, there is the need of
adapting current development, verification, validation, and
certification techniques to the SOA vision [17].
To achieve reliable Web services, which can be integrated
into compositions or consumed without any risk in an
open network like the Internet, more and more software
development companies rely on software engineering, on
quality processes, and quite obviously on testing activities.
In particular, security testing approaches help to detect
vulnerabilities in Web services in order to make them
trustworthy.
Concerning Web service security testing, few dedicated works have been proposed. In [18], a passive method based on a monitoring technique aims to filter SOAP messages by detecting malicious ones, in order to improve the Web Service's availability. Mallouli et al. also
proposed, in [19], a passive testing method which analyzes
SOAP messages with XML sniffers to check whether a
system respects a policy. In [20], a security testing method
is described to test systems with timed security rules
modelled with Nomad. The specification is augmented by
means of specific algorithms for basic prohibition and
obligation rules only. Then, test cases are generated with
the "TestGenIF" tool. A Web Service is illustrated as an
example. In [21], a security testing method dedicated to stateful Web Services is proposed. Security rules are
defined with the Nomad language and are translated into
test purposes. The specification is completed to take into
account the SOAP environment while testing. Test cases
are generated by means of a synchronous product between
test purposes and the completed specification.
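As a rough illustration of the passive-testing idea behind [18, 19] (a monitor inspecting SOAP traffic against security rules), the Python sketch below checks captured SOAP envelopes against two hardcoded rules; the rules, limits and element handling are simplifying assumptions, not the Nomad notation or the actual tools used in those works.

import xml.etree.ElementTree as ET

SOAP_ENV = "{http://schemas.xmlsoap.org/soap/envelope/}"
MAX_BODY_CHILDREN = 10  # assumed prohibition rule: reject oversized bodies

def check_soap_message(envelope_xml):
    """Return a list of rule violations found in one SOAP envelope (passive check only)."""
    violations = []
    root = ET.fromstring(envelope_xml)
    body = root.find(".//" + SOAP_ENV + "Body")
    if body is None:
        return ["missing SOAP Body"]
    if len(list(body)) > MAX_BODY_CHILDREN:
        violations.append("body has too many elements (possible oversized-payload attack)")
    # Assumed obligation-style rule: every request must carry a non-empty header.
    header = root.find(".//" + SOAP_ENV + "Header")
    if header is None or not list(header):
        violations.append("missing security header")
    return violations

sample = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body><getQuote><symbol>ACME</symbol></getQuote></soapenv:Body>
</soapenv:Envelope>"""
print(check_soap_message(sample))  # -> ['missing security header']

A real monitor would sit on the message path (or on a mirrored port) and raise alarms or drop traffic instead of printing, but the rule-checking core has this shape.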
2.5 Web service testing tools
Many tools have been implemented for testing Web Services. The next subsections briefly describe the three selected tools.
• SoapUI Tool
This tool is a Java-based open source tool. It can work on any platform provided with a Java Virtual Machine (JVM). The tool is implemented mainly to test Web Services such as SOAP, REST, HTTP, JMS and other service technologies. Although SoapUI concentrates on functionality, it also considers performance, interoperability, and regression testing [13].
• PushToTest Tool
One of the objectives of this open source tool is to
support the reusability and sharing between people
who are involved in software development through
providing a robust testing environment. PushToTest is primarily intended for testing Service-Oriented Architecture (SOA), Ajax, Web applications, Web Services, and many other applications. This tool adopts
the methodology which is used in many reputed
companies. The methodology consists of four steps:
planning, functional test, load test, and result analysis.
PushToTest can determine the performance of Web
Services, and report the broken ones. Also, it is able to
recommend some solutions to the problems of
performance [14].
• WebInject Tool
This tool is used to test Web applications and services.
It can report the testing results in real time, and
monitor applications efficiently. Furthermore, the tool supports multiple test cases and has the ability to analyze these cases in reasonable time. Practically, the tool is written in Perl and works on platforms which have a Perl interpreter. The architecture of the WebInject tool includes the WebInject Engine and a Graphical User Interface (GUI), where the test cases are written in XML files and the results are shown in HTML and XML files [15].
Some researchers (e.g., ANISETTI et al. [17]) focused on security certification. They believe that certification techniques can play a fundamental role in the service-based ecosystem. However, existing certification techniques are not well suited to the service scenario: they usually consider static and monolithic software, provide certificates in the form of human-readable statements, and
consider system-wide certificates to be used at deployment and installation time. By contrast, in a service-based environment, we need a certification solution that can support the dynamic nature of services and can be integrated within the runtime service discovery, selection, and composition processes [22].
Four classes of web service testing approaches have been introduced. Considering this classification when security is concerned, the classes that are based only on WSDL specifications are of little use for security testing, since ignoring WSCL and implementation details does not allow the definition of accurate attack models and test cases. Because of its level of abstraction, the fourth class can include approaches that, by modeling the service completely, are able to produce fine-grained test cases that can be used to certify the security properties of a service (e.g., ANISETTI et al. [16]). Although some security concepts (for instance, reliability) are not taken into account in [16] and the complexity of its processes is high, it seems to be the most comprehensive approach in the area of web service security certification.
To certify that a given security property holds for a service, two main types of certification processes are of interest: test-based certification and model-based certification.
According to Damiani et al. [23], test-based certification is a process producing evidence-based proofs that a (white- and/or black-box) test carried out on the software has given a certain result, which in turn shows that a given high-level security property holds for that software. Model-based certification can provide formal proofs based on an abstract model of the service (e.g., a set of logic formulas or a formal computational model such as a finite state automaton).
ANISETTI et al. [16] propose a test-based security certification scheme suitable for the service ecosystem. The scheme is based on formal modeling of the service at different levels of granularity and provides a model-based testing approach used to produce the evidence that a given security property holds for the service. The proposed certification process is carried out collaboratively by three main parties: (i) a service provider that wants to certify its services; (ii) a certification authority managing the overall certification process; and (iii) a Lab accredited by the certification authority that carries out the property evaluation. The service model, generated by the certification authority using the security property and the service specifications, is defined at three levels of granularity: a WSDL-based model, a WSCL-based model and an implementation-based model. The certification authority sends the service model together with the service implementation and the requested security property to the accredited Lab. The accredited Lab generates the evidence needed to certify the service on the basis of the model and the security property and returns it to the certification authority. If the evidence is sufficient to prove the requested property, the certification authority awards a certificate to the service, which includes the certified property, the service model, and the evidence. They also propose matching and comparison processes that return a ranking of services based on the assurance level provided by their service certificates. Because it supports the dynamic comparison and selection of functionally equivalent services, the solution can easily be integrated within a service-based infrastructure.
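The following fragment is a loose Python sketch of how such a certificate and the assurance-based ranking described above might be represented; the fields, the mapping of model levels to scores and the scoring formula are illustrative assumptions, not the data model defined in [16].

from dataclasses import dataclass

# Assumed ordering of model granularity: deeper models yield stronger evidence.
LEVEL_SCORE = {"WSDL": 1, "WSCL": 2, "implementation": 3}

@dataclass
class Certificate:
    service: str
    security_property: str   # e.g. "confidentiality of user data"
    model_level: str          # "WSDL", "WSCL" or "implementation"
    passed_tests: int
    total_tests: int

    def assurance(self) -> float:
        """Toy assurance score combining model depth and test evidence."""
        coverage = self.passed_tests / self.total_tests if self.total_tests else 0.0
        return LEVEL_SCORE[self.model_level] * coverage

def rank_services(certs, required_property):
    """Rank functionally equivalent services holding the required property by assurance."""
    matching = [c for c in certs if c.security_property == required_property]
    return sorted(matching, key=lambda c: c.assurance(), reverse=True)

certs = [
    Certificate("svcA", "confidentiality of user data", "WSDL", 18, 20),
    Certificate("svcB", "confidentiality of user data", "implementation", 45, 50),
]
print([c.service for c in rank_services(certs, "confidentiality of user data")])  # ['svcB', 'svcA']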
References
[1] Li, Y., Li, M., & Yu, J. (2004). Web Services Testing, the Methodology, and the Implementation of the Automation Testing Tool. In Grid and Cooperative Computing (pp. 940-947).
[2] Ladan, M. I. (2010). Web services testing approaches: A
survey and a classification. In Networked Digital
Technologies (pp. 70-79).
[3] Siblini, R., & Mansour, N. (2005). Testing web services.
In Computer Systems and Applications, 2005. The 3rd
ACS/IEEE International Conference on (p. 135).
[4] Andre, L., & Regina, S. (2009). V.: Mutation Based Testing
of Web Services.IEEE Software.
[5] Hanna, S., & Munro, M. (2007, May). An approach for
specification-based test case generation for Web services.
In Computer Systems and Applications, 2007. AICCSA'07.
IEEE/ACS International Conference on (pp. 16-23).
[6] Mao, C. (2009, August). A specification-based testing
framework for web service-based software. In Granular
Computing, 2009, GRC'09. IEEE International Conference
on (pp. 440-443).
[7] Frantzen, L., Tretmans, J., & de Vries, R. (2006, May).
Towards model-based testing of web services.
In International Workshop on Web Services–Modeling and
Testing (WS-MaTe 2006) (p. 67).
[8] Feudjio, A. G. V., & Schieferdecker, I. (2009). Availability
testing for web services.
[9] Tsai, W. T., Zhang, D., Paul, R., & Chen, Y. (2005,
September). Stochastic voting algorithms for Web services
group testing. In Quality Software, 2005.(QSIC 2005).
Fifth International Conference on (pp. 99-106).
[10] Tsai, W. T., Paul, R., Song, W., & Cao, Z. (2002). Coyote:
An xml-based framework for web services testing. In High
Assurance Systems Engineering, 2002. Proceedings. 7th
IEEE International Symposium on (pp. 173-174).
[11] Di Penta, M., Bruno, M., Esposito, G., Mazza, V., &
Canfora, G. (2007). Web services regression testing.
In Test and Analysis of web Services (pp. 205-234).
[12] Damiani, E., El Ioini, N., Sillitti, A., & Succi, G. (2009, July). Ws-certificate. In Services-I, 2009 World Conference on (pp. 637-644).
[13] "SoapUI tool" , http://www.SoapUI.org.
[14] "PushToTest tool", http://www.PushToTest.com.
4. Conclusions
This paper has reviewed the main issues and related work on Web Service testing and Web Service security testing.
[15] "WebInject", http://www.WebInject.org/.
[16] Anisetti, M., Ardagna, C. A., Damiani, E., & Saonara, F.
(2013). A test-based security certification scheme for web
services. ACM Transactions on the Web (TWEB), 7(2), 5.
[17] Anisetti, M., Ardagna, C., & Damiani, E. (2011, July). Fine-grained modeling of web services for test-based security certification. In Services Computing (SCC), 2011 IEEE International Conference on (pp. 456-463).
[18] Gruschka, N., & Luttenberger, N. (2006). Protecting web services from DoS attacks by SOAP message validation. In Security and privacy in dynamic environments (pp. 171-182).
[19] Mallouli, W., Bessayah, F., Cavalli, A., & Benameur, A. (2008, November). Security rules specification and analysis based on passive testing. In Global Telecommunications Conference, 2008. IEEE GLOBECOM 2008. IEEE (pp. 1-6).
[20] Mallouli, W., Mammar, A., & Cavalli, A. (2009, December). A formal framework to integrate timed security rules within a TEFSM-based system specification. In Software Engineering Conference, 2009. APSEC'09. Asia-Pacific (pp. 489-496).
[21] Salva, S., Laurençot, P., & Rabhi, I. (2010, August). An approach dedicated for web service security testing. In Software Engineering Advances (ICSEA), 2010 Fifth International Conference on (pp. 494-500).
[22] Damiani, E., & Manã, A. (2009, November). Toward ws-certificate. In Proceedings of the 2009 ACM workshop on Secure web services (pp. 1-2).
[23] Damiani, E., Ardagna, C. A., & El Ioini, N. (2008). Open
source systems security certification. Springer Science &
Business Media.
Collaboration between Service and R&D Organizations – Two
Cases in Automation Industry
Jukka Kääriäinen1, Susanna Teppola1 and Antti Välimäki2
1
VTT Technical Research Centre of Finland Ltd.
Oulu, P.O. Box 1100, 90571, Finland
{jukka.kaariainen, susanna.teppola}@vtt.fi
2
Valmet Automation Inc.
Tampere, Lentokentänkatu 11, 33900, Finland
[email protected]
services? In this article, the objective is not to describe the
service development process, but rather to try to
understand and collect industrial best practices that
increase the collaboration and transparency between the
Service and R&D organizations so that customers can be
serviced better and more promptly.
Abstract
Industrial automation systems are long-lasting, multi-technological systems that need industrial services in order to keep the system up-to-date and running smoothly. The Service organization needs to work jointly internally with R&D and externally with customers and COTS providers so as to operate efficiently. This paper focuses on Service and R&D collaboration. It presents a descriptive case study of how the working relationship between the Service and R&D organizations has been established in two example industrial service cases (upgrade and audit cases). The article reports the collaboration practices and tools that have been defined for these industrial services. This research provides, for other companies and research institutes that work with industrial companies, practical real-life cases of how Service and R&D organizations collaborate. Other companies would benefit from studying the contents of the cases presented in this article and applying these practices in their particular context, where applicable.
Keywords: Automation systems, Industrial service, Lifecycle, Transparency, Collaboration.
This article intends to discuss the collaboration between
the Service and R&D organizations using two cases that
provide practical examples about the collaboration, i.e.
what the collaboration and transparency between the
Service and R&D organizations mean in a real-life
industrial environment. In addition, the paper reports what
kind of solutions the company in the case study uses to
effectuate the collaboration.
The paper is organized as follows. In the next section, the background and need for Service and R&D collaboration are stated. In section 3, the case context and research process are introduced. In section 4, two industrial service processes are introduced, which are the cases for analyzing Service and R&D collaboration. In section 5, the cases are analyzed from the Service and R&D collaboration viewpoint. Finally, section 6 discusses the results and draws conclusions.
1. Introduction
Industrial automation systems are used in various industrial
segments, such as power generation, water management
and pulp and paper. The systems comprise HW and SW
sub-systems that are developed in-house or are COTS (Commercial Off-The-Shelf) components. Since these systems have a long useful life, automation system providers offer various kinds of lifecycle services for their customers in order to keep their automation
systems running smoothly.
2. Background
In the digital economy, products and services are linked
more closely to each other. The slow economic growth
during recent years has boosted the development of
product-related services even more – they have brought
increasing revenue for the manufacturing companies in
place of traditional product sales [3, 4]. The global market
for product and service consumption is constantly growing
[5]. In 2012, the overall estimate for service revenues
accrued from automation products like DCS, PLC,
Integrated service/product development has been studied
quite a bit, e.g. in [1, 2]. However, there is less information available on how, in practice, the needs of the Service
organization could be taken into account during product
development. What kind of Service/R&D collaboration
could improve the quality and lead time of the industrial
based on a generic product platform, and a new version of
this product platform is released annually. Automation
system vendors are also using HW and SW COTS
components in their systems, for instance, third-party
operating systems (e.g. Windows). Therefore, automation
systems are dependent on, for instance, the technology
roadmaps of operating system providers. The (generic)
main sub-systems in an automation system include: Control
Room, Engineering Tools, Information Management and
Process Controllers.
SCADA, etc. amounted to nearly $15 billion [6].
Customers are more and more interested in value-added services compared to the basic products themselves. Therefore, companies and business ecosystems need the ability to
adapt to the needs of the changing business environment.
The shift from products to services has been taking place
in the software product industry from 1990 onwards [7].
The importance of service business has been understood
for a while, but more systematic and integrated product
and service development processes are needed [8]. During
recent years the focus has shifted towards understanding
the customer’s needs and early validation of the success of
developed services [9]. Furthermore, the separation of
service and R&D organization may cause communication
problems that need to be tackled with new practices and
organizational units [10].
Engineering tools are used to configure the automation
system so as to fit the customer’s context. This includes,
for instance, the development of process applications and
related views. Automation systems have a long life and
they need to be analyzed, maintained and updated, if
necessary. Therefore, the case company offers, for instance,
upgrade and audit services to keep the customers’
automation systems up-to-date. Each update will be
analyzed individually so as to find the optimal solution for
the customer based on the customer’s business needs.
Service operation is highly distributed since the case
company has over 100 sales and customer service units in
38 countries serving customers representing various
industrial segments in Europe, Asia, America, Africa and
Australia.
Technical deterioration (technology, COTS, standards, etc.)
of the systems that have a long lifetime (such as
automation systems) is a problem in industry. The
reliability of technical systems will decrease over time if
companies ignore industrial services. “For a typical
automation/IT system, only 20-40 percent of the
investment is actually spent on purchasing the system; the
other 60-80 percent goes towards maintaining high
availability and adjusting the system to changing needs
during its life span” [11]. This is a huge opportunity for
vendors to increase their industrial service business.
Automation system providers offer their automation systems and related industrial services in order to keep customers' industrial processes running smoothly. These industrial services need to be performed efficiently. Therefore, there should be systematic and effective service processes, with supporting IT systems, in a global operational environment. Furthermore, there should be collaboration practices between the R&D and Service organizations so that systems can be serviced efficiently and are service friendly. All this requires a deeper understanding of how the Service and R&D organizations should operate to enable this collaboration.
Because of the demands of customer-specific tailoring,
there are many customer-specific configurations (i.e. the
customer-specific variants of an automation system) in the
field containing sub-systems from different platform
releases (versions). Therefore, the Service organization
(the system provider) needs to track each customer
configuration of an automation system and detect what
maintenance, optimization, upgrades are possible for each
customer to keep the customer’s automation solutions
running optimally.
The case company aims to better understand the collaboration between the Service organization and the R&D organization. For other companies and research institutes, this research provides a descriptive case study of how the collaboration between the Service and R&D organizations has been established in two example service cases (the upgrade and audit cases). Therefore, the research approach is bottom-up. These cases were selected for this study since the company personnel who work in this research project have in-depth knowledge about these services. We first studied these two service processes and then analyzed what kinds of activities can be found to enable transparency between the Service and R&D organizations in these cases. We selected this approach since each industrial service seems to have its own needs for collaboration, and therefore one first needs to understand the service process itself. We have
3. Case context and research process
This work was carried out within the international research
projects Varies (Variability in Safety-Critical Embedded
Systems) [12] and Promes (Process Models for
Engineering of Embedded Systems) [13]. The case
company operates in the automation systems industry. The
company offers automation and information management
application networks and systems, intelligent field control
solutions, and support and maintenance services. The case
focuses on the automation system product sector and
includes the upgrade and audit service. Typically, the
customer-specific, tailored installation of the system is
adapted the approach defined by Charalampidou et al. [14] as a frame for the process descriptions. The research was carried out as follows:
1. The Upgrade-service process description was composed using company interviews and workshops (case 1).
2. The Audit-service process description was composed using company interviews and workshops (case 2).
3. A case analysis was performed that combined cases 1 and 2, and additional interviews/workshops were held to understand the service/R&D collaboration behind the service processes. Two persons who work at the service/R&D interface in case 1 and case 2 were interviewed and the results were discussed.
4. Finally, the results of case 1, case 2 and the case analysis were reviewed and modified by representatives of the case company.

4. Industrial cases

Industrial automation systems are used in various industrial segments, such as power generation, water management and pulp and paper production. The systems comprise HW and SW sub-systems that are developed in-house or are COTS (Commercial Off-The-Shelf) components. Since these systems have a long lifetime, automation system providers offer different kinds of industrial services for their customers in order to keep their automation systems running smoothly.
In this article, we present two cases related to industrial services, both of which are sub-processes of the maintenance main process. The first is the Upgrade-service and the second is the Audit-service. These cases represent process descriptions that have been created in cooperation with the case company in order to document and systematize their service processes. These process descriptions have been utilized in order to identify the interfaces between the Service and R&D organizations.

4.1 Case 1: Upgrade–service

This section presents the Upgrade-service process (Fig. 1). The Upgrade-service is provided for a customer to keep their automation systems up and running. A detailed description and demonstration of the Upgrade-service process has been presented in [15]. Phases are divided into activities that represent collections of tasks that will be carried out by the workers (e.g. the Service Manager). One worker has the responsibility (author) for the activity, and other workers act as contributors. Activities create and use artefacts that will be retrieved from or stored in tools (information systems).
The Upgrade-service process is divided into six activities. The first four form the Upgrade Planning phase; the last two represent the subsequent steps, namely the implementation of the upgrades and the follow-up. This case focuses on the Upgrade Planning phase of the Upgrade-service process. The process is presented as a sequence of activities to keep the presentation simple, even though in real life parallelism and loops/iterations are also possible. For instance, new customer needs may emerge during price negotiations, and these will be investigated in a new upgrade planning iteration.

“Identify upgrade needs” activity:
The process starts with the identification of upgrade needs. The input for an upgrade need may come from various sources, for instance directly from the customer, from a service engineer working on-site at the customer's premises, or from a component end-of-life notification. The Service Manager is responsible for collecting and documenting upgrade needs originating from internal or external sources.
Fig. 1 Description of the Upgrade Planning Process.
“Identify installed system” activity:
The Service Manager is responsible for carrying out the “Identify installed system” activity. In this activity, the customer's automation system configuration information (i.e. the customer-specific installed report) is retrieved from the InstalledBase tool. The information is collected automatically from the automation system (via a network connection to the customer's automation system) and manually during a site visit, if needed. The updated information is stored in the InstalledBase tool.

“Analyze the system/compose LC (lifecycle) plan” activity:
In the “Analyze the system/compose LC plan” activity the Service Manager is responsible for analyzing the immediate and future upgrade needs of the customer's automation system. The InstalledBase tool contains lifecycle plan functionality, which means that the tool contains lifecycle rules related to the automation systems. The lifecycle rules are composed by a product manager who works at the Service interface in collaboration with the R&D organization. The Service Manager generates a lifecycle report from the InstalledBase tool and starts to modify it based on negotiations with the customer.

“Negotiations” activity:
In the “Negotiations” activity, the Service Manager modifies the lifecycle plan based upon the maintenance budgets and downtime schedules of the customer. The customer extranet is the common medium for the vendor and the customer to exchange lifecycle plans and other material. The final lifecycle plan presents the life cycle of each part of the system, illustrating for a single customer what needs to be upgraded and when, and at what point in time a larger migration might be needed. The plan supports the customer in preparing for the updates, for instance by predicting costs, schedules for downtimes, rationale for management, etc. Based on the negotiations and the offer, the upgrade implementation starts according to the contract. Additionally, the Service Manager is responsible for periodically re-evaluating the upgrade needs.
4.2 Case 2: Audit–service
This section presents the Audit-service process (Fig. 2). The Audit-service is used to determine the status of an automation system or piece of equipment. Systematic practices, processes and tools for collecting the information allow a repeatable, high-quality service that forms the basis for subsequent services. The Audit-service might launch, for instance, upgrade, optimization or training services. Again,
as in the Upgrade-service case process description, phases are divided into activities that represent collections of tasks that will be carried out by the workers (e.g. the Service Manager). One worker has the responsibility (author) for the activity, and other workers act as contributors. Activities create and use artefacts that will be retrieved from or stored in tools (information systems). The Audit-service process is divided into five activities.

Plan audit -activity:
This activity is used to identify, agree and document the scope and needs for the audit, which enables a systematic audit. The planning starts when there is a demand for the service or, for example, a service agreement states that the audit will be done periodically. The service staff creates the audit plan with the customer; it contains information about the scope/needs for the audit, the customer contact/team, the customer's arrangements to ensure a successful audit (availability of key persons during the audit, data analysis and reporting/presentation, visits, remote connections, safety/security), the schedule, resources, reporting/presentation practices, etc. Furthermore, the service staff documents the audit plan and makes it visible to the customer.

Office Research -activity:
The purpose of Office Research is to carry out audit activities that can be done remotely. In this activity the service staff collects remote diagnostics according to the audit checklist. They further collect information about the customer-specific product installation. The output of the activity is data that is ready for data analysis.
Fig. 2 Description of the Audit Process.
Field research -activity (optional):
The purpose of Field Research is to acquire supplementary information/data during site visits. This activity is optional and is used if the Office Research activity is not sufficient. In this activity, the service staff carries out field research tasks according to product-specific checklists (checking instruments, gathering information about maintenance and training needs, remarks concerning the configuration, visual checks, checking the function of the product, corrosion, etc.). Furthermore, the staff collects additional installation information from the customer premises (the customer-specific product installation), if needed.
Data analysis/report -activity:
The purpose of Data analysis/Report is to analyze the collected data and prepare a report that can be communicated to the customer. The data analysis task analyses the audit data and observations. The service staff utilize the audit checklist and consult R&D in the analysis, if needed. Depending upon the audit, the analysis may cover, for example, maintenance, part or product obsolescence, replacements, inventory, needs for training, etc. During the analysis the customer should reserve time and contacts to answer questions that may arise concerning the audit data. The service staff and the Service Manager define recommendations based upon the audit. The Service Manager identifies sales leads related to the audit results. In addition, the staff will update the installation information in InstalledBase if discrepancies have been observed. The audit report will contain an introduction, the definition of scope, the results, and conclusions and recommendations. The report will be reviewed internally and stored in the Customer Extranet, and the customer will be informed (well in advance, in order to allow time for the customer to check the report).

Presentation/communicate/negotiations -activity:
This activity presents the results to the key stakeholders and agrees on future actions/roadmaps. The Service Manager agrees with the customer on the time and participants of the result presentation event. The results will be presented and discussed. The Service Manager negotiates about the recommendations and defines actions/roadmaps based on the recommendations (the first step towards price and content negotiations).

5. Case analysis

Integrated service/product development has been studied a lot. However, there is less information available on how, in practice, the needs of the Service organization could be taken into account during product development. What kind of service/R&D collaboration could improve the quality and lead time of the industrial services? The target is not to describe the service development process but to try to understand and collect industrial best practices that increase service/R&D collaboration and transparency so that customers can be served better and faster. Naturally, these practices are highly service dependent, since each service needs different things from the R&D organization. During the interviews it became obvious that already in the product platform Business Planning phase there has to be an analysis activity covering how new proposed features of the system will be supported by services and what kinds of effects there are on different services (e.g. compatibility). Therefore, already in the system business planning phase one should consider technical support, product/technology lifecycle and version compatibility issues from the service viewpoint before the implementation starts.

Based on the cases above, the following service/R&D collaboration practices were identified. Basically, both cases highlight communication and information transparency between the organizational units.

5.1 Case 1: collaboration related to Upgrade -service

In case 1 there was a nominated person who works at the service/R&D interface, i.e. a Product Manager who works at the Service interface (Fig. 3). This person defines and updates the life-cycle rules document, which contains, for example, the following information:
- how long each technology will be supported; in other words, e.g. how long the particular version of each operating system (OS) will be supported (product & service/security packs), along with considerations of whether there is any possibility of extended support.
- hardware/software compatibility (e.g. OS version vs. individual workstations).
- compatibility information showing how different sub-systems are compatible with each other (compatible, compatibility restrictions, not compatible).
- other rules or checklists covering what needs to be considered when conducting upgrades (conversions of file formats, etc.).
Fig. 3 Collect lifecycle rules -process.
The rules are used by the Service function in order to understand the lifecycle effects on the system. For instance, in the Upgrade Planning process, in the upgrade analysis activity, this information is used to compose life cycle plans for customers. The product manager who works at the Service interface coordinates the composition of the lifecycle rules. These rules originate from internal and external sources. External information is collected from third-party system providers (COTS providers); it comes, for instance, from operating system providers (a roadmap that shows how long operating system versions will be supported). Internal lifecycle information originates from R&D (the product creation and technology research functions) and defines the in-house developed automation system components and their support policy. Furthermore, lifecycle information about system dependencies is also important (compatibility information). Dependency information shows the dependencies between sub-systems and components so as to detect how changes in one sub-system may escalate to other sub-systems in a customer's configuration. Finally, the rules are also affected by the company's overall lifecycle policy (i.e. the policy on how long, and how, the company decides to support the systems). Some of these rules are implemented in the InstalledBase tool, which partly automates the generation of the lifecycle plan. However, since every customer tends to be unique, some rules need to be applied manually depending on the upgrade needs.
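To illustrate how such partly automated rule checking might look, here is a minimal Python sketch; the rule format, the two-year planning horizon and all component names are illustrative assumptions, not the actual InstalledBase implementation.

from dataclasses import dataclass
from datetime import date

# Hypothetical rule and component records; the real InstalledBase data model is not public.
@dataclass
class SupportRule:
    component: str          # e.g. "Windows Server 2008"
    supported_until: date   # end of vendor/company support
    replacement: str        # suggested upgrade target

@dataclass
class InstalledComponent:
    name: str
    version: str

def compose_lifecycle_plan(installed, rules, horizon_years=2):
    """Flag installed components whose support ends within the planning horizon."""
    horizon = date.today().replace(year=date.today().year + horizon_years)
    by_name = {r.component: r for r in rules}
    plan = []
    for comp in installed:
        rule = by_name.get(f"{comp.name} {comp.version}")
        if rule and rule.supported_until <= horizon:
            plan.append((comp, rule.supported_until, rule.replacement))
    return plan

# Example: one customer configuration checked against two rules.
rules = [
    SupportRule("Windows Server 2008", date(2015, 7, 14), "Windows Server 2012"),
    SupportRule("Operator Interface 4.1", date(2018, 1, 1), "Operator Interface 5.0"),
]
installed = [InstalledComponent("Windows Server", "2008"),
             InstalledComponent("Operator Interface", "4.1")]
for comp, until, repl in compose_lifecycle_plan(installed, rules):
    print(f"{comp.name} {comp.version}: support ends {until}, plan upgrade to {repl}")

Customer-specific exceptions (extended support contracts, file-format conversions, etc.) would still be handled manually on top of such automated output, as noted above.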
Based on this case we could compose a task list for the Product Manager who works at the service interface. The Product Manager's task is to increase and facilitate communication between the R&D and Service organizations (collaboration between organizational units):
1. Coordinates the collection, maintenance and documentation of the lifecycle rules in cooperation with R&D to support upgrade planning.
2. Communicates to R&D how they should prepare for lifecycle issues (how should R&D take service needs into account?).
3. Defines the lifecycle policy with company management.
4. Ensures that Service Managers create lifecycle plans for their customers. The objective is that there are as many lifecycle plans as possible (every customer is a business opportunity).
5. Participates in lifecycle support decision making together with R&D (e.g. replacement/spare part decisions, compatibility between platform releases). For instance:
- decisions concerning how long the company provides support for different technologies/components, and decisions on which technologies will be used (smaller changes vs. more significant changes); the Service organization makes the decision in cooperation with R&D.
- decisions about compatibility: Service provides the needs/requirements for compatibility (based on the effects on the service business) and R&D tells what is possible, so that needs and possibilities are combined and the compatibility that is optimal for the upgrade business is achieved (the Service organization makes the decision).
- determining the volume/quantity of components in the field (checked from InstalledBase), which affects the content of the next platform release and what support is needed from the service viewpoint.
5.2 Case 2: Collaboration related to Audit -service

In case 2, the Service staff utilizes audit checklists that have been prepared collaboratively by Service and R&D, and consults R&D in the audit analysis, if required. The training team that works in the Service organization is responsible for coordinating the collection and maintenance of the audit checklists (Fig. 4). The checklists are composed and maintained in cooperation with R&D. The checklists are product-specific, since different issues need to be checked depending on the product type. Furthermore, the checklists require constant updates as the product platforms evolve; e.g. different issues need to be checked for products of different ages.

Fig. 4 Compose audit checklist -process.

6. Discussion and conclusions

The importance of industrial services has increased and there need to be systematic practices/processes to support service and product development. This has also been indicated in other studies, e.g. in [1, 2]. However, there is less information available concerning how, in practice, the needs of the Service organization could be taken into account during product development. What kind of service/R&D collaboration could improve the quality and lead time of the industrial services? In this article, the objective is not to describe the service development process but rather to try to understand and collect industrial best practices that increase the collaboration and transparency between the Service and R&D organizations so that customers can be served better and faster.

This article has discussed the collaboration and transparency of the Service and R&D organizations using two cases that give practical examples of the collaboration, i.e. what the collaboration and transparency between the Service and R&D organizations mean in a real-life industrial environment. Furthermore, the article has reported what kinds of solutions the case company uses to realize the collaboration.

The article shows that in the case company service needs were taken into account already in the business planning phase of the product development process. Furthermore, there were roles and teams that worked between the Service and R&D organizations to facilitate the interaction between them. The approach has some similarities to the solution presented in [10]; similarly, their case study indicated that there was a need for units that work in between the organizations and enable the interaction.

Based on this research, it is possible to better understand the interfaces and needs between the Service and R&D organizations. With this information it is possible to begin to improve the collaboration practices and solutions in the case company. For other companies and research institutes that work with industrial companies, this research provides practical real-life cases of how Service and R&D organizations collaborate. This research is based on a bottom-up approach studying two cases, and therefore the results are limited, since the collaboration is service dependent. This study does not try to explain why the case company has ended up with these practices and solutions, nor claim that these practices are directly applicable to other companies. However, we described the case context in fairly detailed terms in section 3 and the Service processes in section 4. Therefore, this article provides industrial companies with a good basis to compare their operational environment with the one presented here and to apply the collaboration practices when appropriate and applicable. For us, this study creates a basis for further research on the collaboration needs of other industrial services, for instance preventive maintenance services, optimization services and security assessment services.

Acknowledgments
This research has been done in the ITEA2 project named Promes [13] and the Artemis project named Varies [12]. This research is funded by Tekes, the Artemis joint undertaking, Valmet Automation and VTT. The authors would like to thank all contributors for their assistance and cooperation.

References
[1] A. Tukker, U. Tischner. “Product-services as a research field: past, present and future. Reflections from a decade of research”, Journal of Cleaner Production, 14, 2006, Elsevier, pp. 1552-1556.
[2] J.C. Aurich, C. Fuchs, M.F. DeVries. “An Approach to Life Cycle Oriented Technical Service Design”, CIRP Annals - Manufacturing Technology, Volume 53, Issue 1, 2005, pp. 151-154.
[3] H.W. Borchers, H. Karandikar. “A Data Warehouse approach for Estimating and Characterizing the Installed Base of Industrial Products”. International conference on Service systems and service management, IEEE, Vol. 1, 2006, pp. 53-59.
[4] R. Oliva, R. Kallenberg. “Managing the transition from products to services”. International Journal of Service Industry Management, Vol. 14, No. 2, 2003, pp. 160-172.
[5] ICT for Manufacturing, The ActionPlanT Roadmap for
Manufacturing 2.0.
[6] K. Sundaram. “Industrial Services- A New Frontier for
Business Model Innovation and Profitability”, Frost and
Sullivan,
https://www.frost.com/sublib/display-marketinsight.do?id=287324039 (accessed 24th June 2015).
[7] M.A. Cusumano. “The Changing Software Business: Moving from Products to Services”. Published by the IEEE Computer Society, January, 0018-9162/08, 2008, pp. 20-27.
[8] J. Hanski, S. Kunttu, M. Räikkönen, M. Reunanen.
Development of knowledge-intensive product-service
systems. Outcomes from the MaintenanceKIBS project.
VTT, Espoo. VTT Technology : 21, 2012.
[9] M. Bano, D. Zowghi. “User involvement in software
development and system success: a systematic literature
review”. Proceedings of the 17th International Conference on
Evaluation and Assessment in Software Engineering, 2003,
pp. 125-130.
[10] N. Lakemond, T. Magnusson. “Creating value through integrated product-service solutions: Integrating service and product development”, Proceedings of the 21st IMP conference, Rotterdam, Netherlands, 2005.
[11] L. Poulsen. “Life-cycle and long-term migration planning”.
InTech magazine (a publication of the international society of
automation), January/February 2014, pp. 12-17.
[12] Varies -project web-site: (Variability In Safety-Critical
Embedded Systems) http://www.varies.eu/ (accessed 24th
June 2015).
[13] Promes -project web-site: (Process Models for Engineering
of Embedded Systems) https://itea3.org/project/promes.html
(accessed 24th June 2015).
[14] S. Charalampidou, A. Ampatzoglou, P. Avgeriou. “A
process framework for embedded systems engineering”.
Euromicro Conference series on Software Engineering and
Advanced Applications (SEAA'14), IEEE Computer Society,
27-29 August 2014, Verona, Italy.
[15] J. Kääriäinen, S. Teppola, M. Vierimaa, A. Välimäki. ”The
Upgrade Planning Process in a Global Operational
Environment”, On the Move to Meaningful Internet Systems:
OTM 2014 Workshops, Springer Berlin Heidelberg, Lecture
Notes in Computer Science (LNCS), Volume 8842, 2014, pp
389-398.
Dr. Jukka Kääriäinen works as a senior scientist in VTT
Technical Research Centre of Finland in Digital systems and
services -research area. He has received a Ph.D. degree in 2011
in the field of Computer Science. He has over 15 years of
experience with software configuration management and lifecycle
management in industrial and research projects. He has worked
as a work package manager and project manager in various
European and national research projects.
Mrs. Susanna Teppola (M.Sc.) has worked as a Research Scientist at VTT Technical Research Centre of Finland since 2000. Susanna has over fifteen years' experience in ICT; her current research interests lie in the areas of continuous software engineering, software product/service management and variability. In these areas Susanna has conducted and participated in many industrial and industry-driven research projects and project preparations at both national and international level.

Dr. Antti Välimäki works as a senior project manager in Valmet Automation as a subcontractor. He has worked in many positions from designer to development manager in R&D and Service in Valmet/Metso Automation. He received a Ph.D. degree in 2011 in the field of Computer Science. He has over 20 years of experience with quality management and automation systems in industrial and research projects.
Load Balancing in Wireless Mesh Network: a Survey
Maryam Asgari1, Mohammad Shahverdy2, Mahmood Fathy3, Zeinab Movahedi4
1
Department of Computer Engineering, Islamic Azad University-Prof. Hessabi Branch, Tafresh, Iran
[email protected]
2
Department of Computer Engineering, Islamic Azad University- Prof. Hessabi Branch,Tafresh and PHD student of
University of Science and Technology, Tehran Iran
[email protected]
3, 4
Computer Engineering Faculty, University of Science and Technology Tehran,Iran
[email protected],[email protected]
Abstract
Wireless Mesh Network (WMN) is a state-of-the-art networking standard for the next generation of wireless networks. These networks are built on a set of wireless routers which forward each other's packets in a multi-hop manner. All users in the network can access the Internet via gateway nodes. Because of the high traffic load towards a gateway node, it can become congested. A load balancing mechanism is required to balance the traffic among the gateways and prevent the overloading of any gateway. In this paper, we investigate different load balancing techniques in wireless mesh networks for avoiding congestion at gateways, and we also survey the effective parameters used in these techniques.
Keywords: clustering, Gateway, Load Balancing, Wireless Mesh Network (WMN).

1. Introduction

Wireless mesh networking is a new paradigm for next generation wireless networks. Wireless mesh networks (WMNs) consist of mesh clients and mesh routers, where the mesh routers form a wireless infrastructure/backbone and interwork with the wired networks to provide multi-hop wireless Internet connectivity to the mesh clients. Wireless mesh networking has emerged as a self-organizing and auto-configurable wireless networking technology to supply adaptive and flexible wireless Internet connectivity to mobile users.
This idea can be used with different wireless access technologies such as IEEE 802.11, 802.15 and 802.16-based wireless local area network (WLAN), wireless personal area network (WPAN), and wireless metropolitan area network (WMAN) technologies. Potential WMN applications include home networks, enterprise networks, community networks, and intelligent transport system networks such as vehicular ad hoc networks.
Wireless local area networks (WLANs) are used to give mobile clients broadband access to the fixed network within the network coverage [1]. The clients in a WLAN use wireless access points that are interconnected by a wired backbone network to connect to external networks. Thus, the wireless part of the network covers only a single hop of the path, and clients need to be within a single hop to establish connectivity with a wireless access point. Setting up such networks therefore requires access points and a suitable backbone, and as a result the deployment of large-scale WLANs is very costly and time consuming. However, WMNs can provide wireless network coverage of large areas without depending on a wired backbone or dedicated access points [1, 2]. WMNs are the next generation of wireless networks, providing services without requiring any fixed infrastructure. WMNs can diminish the limitations and improve the performance of modern wireless networks such as ad hoc networks, wireless metropolitan area networks (WMANs), and vehicular ad hoc networks [2, 3, 4 and 5].
WMNs are multi-hop wireless networks which provide Internet access everywhere to a large number of users. WMNs are dynamically self-configured, and all the nodes in the network automatically establish and maintain mesh connectivity among themselves in an ad hoc style. These networks are typically implemented at the network layer through the use of ad hoc routing protocols when the routing path changes. This characteristic brings many advantages to WMNs, such as low cost, easy network maintenance and more reliable service coverage.
A wireless mesh network has different members, such as access points, desktops with wireless network interface cards (NICs), laptops, Pocket PCs, cell phones, etc. These members can be connected to each other via multiple hops. In a full mesh topology this feature brings many advantages to WMNs, such as low cost, easy network maintenance and more reliable service coverage. In the mesh topology, one or multiple mesh routers can be connected to the Internet. These routers can serve as gateways (GWs) and provide Internet connectivity for the entire mesh network. One of the most important challenges in these networks arises at a GW when the number of nodes connected to the Internet via that GW suddenly increases. This means that the GWs become a bottleneck of the network and the
performance of the network strongly decreases [4, 5, and
6].
A mesh router hearing the route request uses the
information in the RREQ to establish a route back to the
RREQ generator.
2.Related Work
The problem of bottleneck in wireless mesh networks is an
ongoing research problem although much of the literature
[7, 8, 9, 10] available, addresses the problem without an
introducing method for removing bottleneck and/or a welldefined way to prevent congestion. In [11], the authors
proposed the Mesh Cache system for exploiting the
locality in client request patterns in a wireless mesh
network .The Mesh Cache system alleviates the congestion
bottleneck that commonly exists at the GW node in
WMNs while providing better client throughput by
enabling content downloads from closer high throughput
mesh routers. There is some papers related to optimization
problems on dynamic and static load balancing across
meshes [11].Optimal load balancing across meshes is
known to be a hard problem. Akyildiz et al.[12]
exhaustively survey the research issues associated with
wireless mesh networks and discusses the requirement to
explore multipath routing for load balancing in these
networks. However, maximum throughput scheduling and
load balancing in wireless mesh networks is an unexplored
problem. In this paper we survey different load balancing
schemes in wireless mesh networks and briefly introduce
some parameters witch they used in their approaches.
Fig.1 Broadcasting RREQs[13]
During the path selection phase a source should decide
which path is the best one among the multiple pathsfigured
out in the first phase. The path selection can be prioritized
in following order:
(a)If there exist multiple paths to a source’s primary
gateway then, take the path with minimum hop count and
if there is still a tie, we can randomlyopt a path.
(b)If there is no path to source’s primary gateway but a
several paths to secondary gateways then take the path
with minimum hop count and if there is still a tie opt a
path randomly.
3.Load Balancing Techniques
Increasing Load in a wireless mesh network causes
Congestion and it is lead to different problems like packet
drop, high end to end delay, throughput decline etc.
various techniques have been suggested that considers load
balancing are discussed below.
As is clear, congestion control here is based on a bandwidth estimation technique, so the available bandwidth on a link must be identified. The consumed bandwidth information can be piggybacked onto the "Hello" message that is used to maintain local connectivity among nodes. Each host in the network determines its consumed bandwidth by monitoring the packets it sends onto the network. A mesh router can detect the congestion risk on each of its links with this bandwidth estimation technique: a link is at risk of congestion whenever its available bandwidth is less than a threshold value. If a link cannot handle more traffic, no more requests are accepted over that link. The primary benefit of this protocol is that it simplifies the routing algorithm, but it needs precise knowledge of the bandwidth of each link.
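A minimal sketch of the threshold test described above is given below; the field names (capacity, consumed bandwidth reported by neighbours, local consumption) are illustrative assumptions rather than fields defined by the surveyed protocol.

from dataclasses import dataclass

@dataclass
class Link:
    capacity: float                # raw link bandwidth (e.g. Mbit/s)
    consumed_by_neighbours: float  # bandwidth reported via piggybacked "Hello" messages
    consumed_local: float          # bandwidth this router itself sends on the link

def congestion_risk(link: Link, threshold: float) -> bool:
    # A link is at risk when its available bandwidth falls below the threshold.
    available = link.capacity - link.consumed_by_neighbours - link.consumed_local
    return available < threshold

A router would then simply refuse new route requests over links for which congestion_risk(...) is True.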
3.1 Hop-Count Based Congestion-Aware Routing [13]
In this routing protocol, each mesh router rapidly finds multiple paths to the Internet gateways based on the hop-count metric. Each mesh router is equipped with a bandwidth estimation technique that allows it to forecast congestion risk; the router then selects the link with the highest available bandwidth for forwarding packets. The multipath routing protocol consists of two phases: a route discovery phase and a path selection phase.
In the route discovery phase, whenever a mesh router tries to find a route to an Internet gateway, it initiates a route discovery process by sending a route request (RREQ) to all its neighbors. The generator of the RREQ marks the packet with its sequence number to avoid transmitting duplicate RREQs. A mesh router hearing the route request uses the information in the RREQ to establish a route back to the RREQ generator.
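A short sketch of this RREQ handling (duplicate suppression through the sequence number and learning of the reverse route) is shown below; the message fields and the forward callback are assumptions for illustration.

seen_rreqs = set()       # (origin_id, sequence_number) pairs already processed
reverse_routes = {}      # origin_id -> neighbour the RREQ arrived from

def handle_rreq(rreq, received_from, neighbours, forward):
    key = (rreq["origin"], rreq["seq"])
    if key in seen_rreqs:
        return                      # duplicate RREQ: drop it
    seen_rreqs.add(key)
    # Learn the route back to the RREQ generator.
    reverse_routes[rreq["origin"]] = received_from
    # Rebroadcast to all neighbours except the one the RREQ came from.
    for n in neighbours:
        if n != received_from:
            forward(n, rreq)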
3.2 Distributed Load Balancing Protocol [14]
In this protocol the gateways coordinate to reroute flows from congested gateways to other, underutilized gateways. The technique also considers interference, which makes it suitable for practical scenarios, achieving good results
and improving on shortest-path routing. Here the mesh network is divided into domains. A domain di can be defined as a set of routers which receive Internet traffic together with the gateway which serves them. A specific capacity is assigned to each domain and is compared against the load in the domain; the domain is considered overloaded if its load exceeds the sustainable capacity. To avoid congestion in a domain, the traffic can be rerouted. This technique does not impose any routing overhead on the network.
Fig. 2 Mesh network divided into domains for load balancing [14]

3.3 Gateway-Aware Routing [15]
In [15] a gateway-aware routing solution is proposed that selects a gateway for each mesh router based on the multihop route in the mesh as well as the capability of the gateway. A composite routing metric is designed that picks high-throughput routes in the presence of multiple gateways. The metric is able to identify the congested part of each path and select a suitable gateway. The gateway capacity metric is defined as the time needed to transmit a packet of size S on the uplink and is expressed by

gwETT = ETXgw * S / Bgw      (1)

where ETXgw is the expected transmission count for the uplink and Bgw is the capacity of the gateway. For forwarding packets, a Gateway-Aware Routing Metric (GARM) is defined as follows:

GARM = B * Mi + (1 - B) * (mETT + gwETT)      (2)

This gateway-aware routing metric has two parts: the first part accounts for the bottleneck capacity and the second part accounts for the delay of the path. The parameter B is used to balance these two factors. The gateway with the minimum GARM value is chosen as the default gateway for balancing the load. This scheme overcomes the disadvantage of requiring accurate bandwidth estimation suggested in [6] and also improves network throughput.
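As a concrete illustration of how the two formulas above combine, the sketch below computes gwETT and GARM for a set of candidate gateways and picks the one with the minimum value; the variable names and the example numbers are assumptions for illustration, not values from [15].

def gw_ett(etx_gw, packet_size_bits, b_gw_bps):
    # Eq. (1): time to push a packet of size S over the gateway uplink.
    return etx_gw * packet_size_bits / b_gw_bps

def garm(beta, m_i, m_ett, gwett):
    # Eq. (2): weighted combination of the bottleneck term Mi and the path delay (mETT + gwETT).
    return beta * m_i + (1.0 - beta) * (m_ett + gwett)

candidates = {
    "GW1": {"etx": 1.2, "b_gw": 10e6, "m_i": 0.4, "m_ett": 0.010},
    "GW2": {"etx": 1.0, "b_gw": 6e6,  "m_i": 0.7, "m_ett": 0.006},
}
S = 12000    # packet size in bits (assumed)
beta = 0.5   # balancing factor (assumed)
best = min(candidates,
           key=lambda g: garm(beta, candidates[g]["m_i"], candidates[g]["m_ett"],
                              gw_ett(candidates[g]["etx"], S, candidates[g]["b_gw"])))
print("default gateway:", best)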
3.4 DGWLBA: Distributed Gateway Load Balancing Algorithm [16]
In [16] the gateways execute DGWLBA to attain load balancing. DGWLBA starts by assigning all routers to their nearest gateway, which is called the NGW solution. The next steps consist in trying to reroute flows from an overloaded domain d1 to an uncongested domain d2 such that the overload of both domains is reduced.

Fig. 3 WMNs divided into 3 domains, each having capacity 25 [16]

If a domain is overloaded, its sinks are checked in descending order of distance to their serving gateway. This gives preference to border sinks: the farther a sink is from its serving gateway, the less it will harm the other flows of its domain if it is rerouted, and its path to other domains will be shorter, thus improving performance. For the same reason, when a sink is chosen, candidate domains are checked in ascending order of distance to the sink. Next, to perform the switching of domains, the overload after the switch must be less than the overload before the switch (lines 9-11). Lastly, the cost of switching is checked: nGWs is the gateway nearest to sink s, and the switch is performed only if the cost is less than the switching threshold Ds (line 12). This rule takes contention into account, because it prevents the establishment of long paths, which suffer from intra-flow interference and increase inter-flow interference in the network, and it gives preference to border sinks. Hence this approach balances load in overloaded domains while considering congestion and interference.

ALGORITHM
for each gateway GWi do di = { }
  for each sink s do
    if distance(s, GWi) = minimum then
      add sink s to di
for each domain d1 in D do
  if load(d1) > Cd1 then
    for each sink s in d1 do
      for each domain d2 in D do
        if d1 = d2 then continue
        ovld_before = ovld(d1) + ovld(d2)
        ovld_after  = ovld(d1 - {s}) + ovld(d2 U {s})
        if ovld_after < ovld_before then
          if dist(s, GW2) / dist(s, nGWs) < Ds then
            d1 = d1 - {s}
            d2 = d2 U {s}
            break
    if load(d1) <= Cd1 then
      break
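A compact executable reading of the pseudocode above is sketched below. The overload and load models are simplified assumptions (load is taken as the number of sinks, overload as the excess over capacity), so this is an illustration of the control flow in [16] rather than a faithful re-implementation.

def dgwlba(domains, capacity, dist, nearest_gw, delta_s):
    # domains: dict gw_id -> set of sinks; capacity: dict gw_id -> Cd
    # dist(sink, gw) -> float; nearest_gw(sink) -> gw_id; delta_s: switching threshold
    overload = lambda sinks, cap: max(0, len(sinks) - cap)   # assumed load model
    for d1, sinks1 in domains.items():
        if len(sinks1) <= capacity[d1]:
            continue                                         # d1 is not overloaded
        # Border sinks first: descending distance to the serving gateway.
        for s in sorted(sinks1, key=lambda x: dist(x, d1), reverse=True):
            # Candidate domains in ascending distance to the sink.
            for d2 in sorted(domains, key=lambda g: dist(s, g)):
                if d1 == d2:
                    continue
                before = overload(sinks1, capacity[d1]) + overload(domains[d2], capacity[d2])
                after = overload(sinks1 - {s}, capacity[d1]) + overload(domains[d2] | {s}, capacity[d2])
                if after < before and dist(s, d2) / dist(s, nearest_gw(s)) < delta_s:
                    sinks1.discard(s)
                    domains[d2].add(s)
                    break
            if len(sinks1) <= capacity[d1]:
                break
    return domains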
3.5 Load Balancing in WMNs by Clustering [17]
In [17] the authors propose a load balancing scheme for WMNs based on clustering. In the first step all nodes are clustered in order to control their workload. If the workload on a GW grows up to the maximum capacity of that GW, the cluster is broken; with respect to the gateway capacity, gateway overload can therefore be predicted. Because selecting a new GW and establishing a route table is time consuming, a third scheme is proposed in which GW selection and route-table creation are done before the cluster is broken. The authors also consider several parameters for selecting the new GW of the new cluster, combined into a single score, G_Value (Eq. (3) in [17]): the power of the node, its processing power, its constancy (the time the node actively stays in the cluster), its velocity (speed), and its distance to the centre of the cluster. G_Value is calculated for each node in a cluster, and the node with the larger G_Value is more suitable to become the GW.

Fig. 4 Breaking a cluster [17]

Although the paper considers most of the design aspects of the proposed infrastructure, it leaves some open issues and questions, for instance surveying load balancing of multi-channel GWs in clustered wireless mesh networks and finding the maximum throughput of nodes in cluster-based wireless mesh networks. Another open issue is using fuzzy logic for breaking the clusters.
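The exact algebraic form of Eq. (3) is not reproduced here; the sketch below only illustrates the idea of ranking cluster members by a combined score in which power, processing power and constancy count in favour of a node while velocity and distance count against it. The weighting shown is an assumption for illustration, not the formula from [17].

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    power: float        # remaining energy of the node
    processing: float   # processing power
    constancy: float    # time the node has actively stayed in the cluster
    velocity: float     # node speed
    distance: float     # distance to the cluster centre

def g_value(n: Node) -> float:
    # Assumed combination: favourable factors multiplied, unfavourable ones divided out.
    return (n.power * n.processing * n.constancy) / max(n.velocity * n.distance, 1e-9)

def pick_new_gateway(cluster):
    # The node with the largest G_Value is the most suitable new GW.
    return max(cluster, key=g_value)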
4. Conclusion
Load balancing is one of the most important problems in wireless mesh networks that needs to be addressed. The nodes in a wireless mesh network tend to communicate with gateways to access the Internet, so the gateways have the potential to become bottleneck points. Load balancing is essential to utilize all the available paths to the destination and to prevent overloading the gateway nodes. In this paper we surveyed different load balancing schemes with various routing metrics that can be employed to tackle load overhead in the network. Table 1 summarizes the load balancing techniques surveyed in this paper.
Table 1: Summary of the surveyed techniques

Technique | Metric | Advantages | Issues not addressed
Hop-Count Based Congestion-Aware Routing | Hop count | No routing overhead | Computational overhead; accurate bandwidth information required
Distributed Load Balancing Protocol | Hop count | No routing overhead | Computational overhead
Gateway-Aware Routing | GARM | No routing overhead, high throughput | Computational overhead
Distributed Gateway Load Balancing (DGWLBA) | Queue length | Low end-to-end delay | Routing and computational overhead
Load Balancing in WMNs by Clustering | Queue length | Low end-to-end delay | Cluster initial formation parameters
References
[1] Bicket, J., Aguayo, D., Biswas, S., Morris, R., "Architecture and evaluation of an unplanned 802.11b mesh network", in: Proceedings of the 11th ACM Annual International Conference on Mobile Computing and Networking (MobiCom), ACM Press, Cologne, Germany, 2005, pp. 31-42.
[2] Aoun, B., Boutaba, R., Iraqi, Y., Kenward, G., "Gateway Placement Optimization in Wireless Mesh Networks with QoS Constraints", IEEE Journal on Selected Areas in Communications, vol. 24, 2006.
[3] Hasan, A.K., Zaidan, A.A., Majeed, A., Zaidan, B.B., Salleh, R., Zakaria, O., Zuheir, A., "Enhancement Throughput of Unplanned Wireless Mesh Networks Deployment Using Partitioning Hierarchical Cluster (PHC)", World Academy of Science, Engineering and Technology 54, 2009.
[4] Akyildiz, I.F., Wang, X., Wang, W., "Wireless mesh networks: a survey", Computer Networks 47, Elsevier, 2005, pp. 445-487.
[5] Jain, K., Padhye, J., Padmanabhan, V.N., Qiu, L., "Impact of interference on multihop wireless network performance", in: Proceedings of ACM MobiCom, 2003, pp. 66-80.
[6] Akyildiz, I., Wang, X., "A survey on wireless mesh networks", IEEE Communications Magazine, vol. 43, no. 9, 2005, pp. s23-s30.
[7] Manoj, B.S., Ramesh, R., "Wireless Mesh Networking", Chapter 8: Load Balancing in Wireless Mesh Networks, 2006, p. 263.
[8] Saumitra Das, M., Himabindu Pucha, Charlie Hu, Y., "Mitigating the Gateway Bottleneck via Transparent Cooperative Caching in Wireless Mesh Networks", NSF grants CNS-0338856 and CNS-0626703, 2006.
[9] Jangeun, J., Mihail, L., "The Nominal Capacity of Wireless Mesh Networks", IEEE Wireless Communications, vol. 10, no. 5, 2003, pp. 8-14.
[10] Abu, R., Vishwanath, R., Dipak, G., John, B., Wei, L., Sudhir, D., Biswanath, M., "Enhancing Multi-hop Wireless Mesh Networks with a Ring Overlay", SECON workshop, 2008.
[11] Horton, G., "A multi-level diffusion method for dynamic load balancing", Parallel Computing 19, 1993, pp. 209-229.
[12] Akyildiz, I., Wang, X., Wang, W., "Wireless Mesh Networks: A Survey", Computer Networks Journal 47, Elsevier, 2005, pp. 445-487.
[13] Hung Quoc, V., Choong Seon, H., "Hop-Count Based Congestion-Aware Multi-path Routing in Wireless Mesh Network", International Conference on Information Networking, 2008, pp. 1-5.
[14] Gálvez, J.J., Ruiz, P.M., Skarmeta, A.F.G., "A Distributed Algorithm for Gateway Load-Balancing in Wireless Mesh Networks", Wireless Days (WD '08), 1st IFIP, 2008, pp. 1-5.
[15] Prashanth, A.K., David, L., Elizabeth, M., "Gateway-aware Routing for Wireless Mesh Networks", IEEE International Workshop on Enabling Technologies and Standards for Wireless Mesh Networking (MeshTech), San Francisco, 2010.
[16] Gupta, B.K., Patnaik, S., Yang, Y., "Gateway Load Balancing in Wireless Mesh Networks", International Conference on Information System Security and Cognitive Science, Singapore, 2013.
[17] Shahverdy, M., Behnami, M., Fathy, M., "A New Paradigm for Load Balancing in WMNs", International Journal of Computer Networks (IJCN), Volume 3, Issue 4, 2011.
Mobile Banking Supervising System- Issues, Challenges
& Suggestions to improve Mobile Banking Services
Dr.K.Kavitha
Assistant Professor, Department of Computer Science,
Mother Teresa Women’s University, Kodaikanal
[email protected]
Abstract
Banking is one of the largest financial sectors and constantly strives to provide better customer service. To improve service quality, banking services have been extended to mobile technology, and mobile banking now plays a vital role in the banking sector. This technology gives customers access to all of their account information and saves time: customers can avail themselves of financial services such as credit, debit, money transfer and bill payment on their mobiles using this application, instead of spending time in a bank branch. Almost all banks now provide financial services through mobile phones, but the majority of people still do not prefer this service over ATM or online banking because of the risk factor. The main objective of this paper is to discuss the benefits of mobile banking, its issues, and suggestions for improving mobile banking services.
Keywords: ATM, Online Service, Mobile Banking, Risk
Rating, MPIN, Log Off
1. Introduction
The Mobile Banking System allows customers to avail themselves of all financial services through phones or tablets. Traditional mobile banking services were offered through SMS, which is called SMS banking: whenever the customer made a transaction, either debit or credit, an SMS was sent to the customer accordingly. That service covers only two transaction types, credit and debit, and obtaining further information costs money for each SMS. New technology has rapidly changed the traditional way of doing banking, and banking services have been extended to it. Using smartphones, customers can download and use mobile applications for further financial services, which saves them from going to the branch premises and provides more services. The usage of these technological financial services was tested with 50 customers, and most of them indicated that the riskiest service is mobile banking rather than ATM or online services, as shown in Table 1.

Table 1 - Technology Usage Survey
Service | Preferable | Risk Factor
ATM | 25 | 30
Online Banking | 20 | 40
Mobile Banking | 5 | 75

Figure 1 - Technology Usage Survey Chart
The above table and figure show that the major risk is perceived in the third category, mobile banking; ATM and online services are seen as carrying minimal risk compared with mobile banking. The survey also indicates that the most preferred service is the ATM, with online services as the next option. This paper studies the benefits, limitations and suggestions to improve mobile banking services.
2. Related Work
Renju Chandran [1] suggested some ideas and presented three steps to run and improve mobile banking services effectively. The author presented the benefits, limitations and problems faced by the customer during mobile banking transactions and suggested a method for improving the service. Aditya Kumar Tiwari et al. [2] discussed the advantages, drawbacks, security issues and challenges of mobile banking services and proposed some ideas towards solving mobile banking security. V. Devadevan [3] discussed mobile compatibility, the mindset about mobile banking acceptance and security issues. The author concluded from the study that the evolution of new technologies in communication systems and mobile devices is a major factor and challenge behind frequently changing mobile banking solutions, and suggested creating awareness among existing customers and providing special benefits for mobile bankers, which would increase uptake of the service. MD. Tukhrejul Inam et al. [4] described the present condition of mobile banking services in Bangladesh and showed the prospects and limitations of mobile banking in their country, suggesting that Bangladeshi banks adopt mobile banking services to make their customers' lives easier. "Mobile phones target the world's nonreading poor" [5] discussed modern cell phone usage and functionality.
3. Objectives of Mobile Banking System
The following steps are discussed in the next sections:
1. Benefits and limitations of mobile banking
2. Identification of the major issue in mobile banking
3. A suggestion proposed to improve mobile banking services

3.1 Benefits & Limitations of Mobile Banking
Benefits of Mobile Banking
i. Reduce Timing
Instead of going to the bank premises and waiting in a queue to check account transactions, customers can check all details through their mobile phones.
ii. Mini Statement
In offline mode, customers can see their transaction details through a mini statement by using the MPIN.
iii. Security
During transactions such as amount transfers, an SMS verification code is provided to check that the person is authorised.
iv. Availability
Customers can avail themselves of all the services through mobile phones at any time.
v. Ease of Use
The applications are user friendly; customers can access all the financial services with little knowledge of the mobile application.
Limitations of Mobile Banking
i. Compulsory Internet Connection & Tower Problem
An Internet connection is necessary to avail these services; customers residing in rural areas may not be able to use them because of tower problems or line breakage.
ii. Anti-Virus Software Updation
Many customers are not aware of anti-virus software, so spyware may affect their mobiles.
iii. Forget to Log Off
If a customer's mobile phone is stolen, an unauthorised person can reveal all the transaction details.
iv. Mobile Compatibility
Only the latest mobile phones are suited to availing these services.
v. Spend Nominal Charge
For regular usage, customers have to spend some nominal charges for transactions.

3.2 Identify the Major Issue in Mobile Banking
Customers mostly prefer ATM and online services; mobile banking is not preferred by many because of the above limitations, and customers have to be aware of mobile banking services before usage. The awareness of and risk in mobile banking was tested with 50 customers; compared with the other risk factors, most of them pointed out the "forget to log off" issue among these limitations, as follows.

Table 2 - Risk Ratings in Mobile Banking (50 customers rated each issue on a 5-to-1 scale; the issues rated were Compulsory Internet Connection & Tower Problem, Anti-Virus Software Updation, Forget to Log Off & Misuse of Mobile Phones, Mobile Compatibility, and Spend Nominal Charge; 35 of the 50 customers rated Forget to Log Off & Misuse of Mobile Phones as the highest risk)
Figure 2 - Risk Ratings in Mobile Banking Chart
The collected data also indicate that 70% of the customers mentioned "Forget to Log Off & Misuse of Mobile Phones due to theft" as the major risk factor in the above list. The author suggests an idea to improve mobile banking services in the next section.
3.3 Suggestion proposed to improve the Mobile
Banking Services
In mobile banking applications, whenever we need to avail ourselves of financial services we have to enter a user name and password to access our account transactions, and after completing the task customers have to log off. Sometimes, with regular usage, customers may forget or postpone logging off while the mobile application stays signed in to the corresponding customer's account. If the customer's mobile phone is then stolen, hackers can reveal all the transaction details very easily, which becomes a very big issue. The banking sector has to avoid this type of problem by using new emerging technologies. At the same time, customers also have to be aware of these services: how to use the apps, what security measures are taken by the banking sector, and how to avoid major risks from unauthorised persons.
4. Proposed Mobile Banking Supervising System [MBSS]
This paper suggests implementing a Mobile Banking Supervising System [MBSS] alongside mobile banking applications to protect and keep track of all the sensitive information. To track all the transactions, MBSS keeps a stopwatch for regularly monitoring the services: log-in timing, transaction particulars, and whether the user logs off or skips the application. Everything is monitored by MBSS. If the customer skips the mobile app or forgets to log off and stays signed in, an automatic log-off function is invoked by MBSS. The clock time limits are not fixed; customers can change the limits at any time.

Figure 3 - MBSS Model Design (log-in process, transaction process, monitoring of all transactions, and invocation of the log-off service when the user skips the app or the time expires)
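A minimal sketch of the automatic log-off idea is given below, assuming a session object with a configurable idle limit; it illustrates the supervising behaviour described above and is not the paper's implementation.

import time

class SessionSupervisor:
    # Tracks activity and forces log-off after a configurable idle period.
    def __init__(self, log_off, idle_limit_seconds=120):
        self.log_off = log_off                  # callback that ends the banking session
        self.idle_limit = idle_limit_seconds    # customers may change this limit
        self.last_activity = time.monotonic()

    def record_activity(self):
        # Called on log-in and on every monitored transaction.
        self.last_activity = time.monotonic()

    def check(self):
        # Called periodically; invokes the log-off service when the limit expires.
        if time.monotonic() - self.last_activity > self.idle_limit:
            self.log_off()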
5. Conclusion
Mobile banking is a convenient financial service for customers, who can carry out all account transactions such as bill payment, credit, debit and fund transfer. It offers many benefits and is easy to use, but it still has some limitations. This paper discussed the major issues faced by customers and the banking sector with mobile banking services and suggested an idea for protecting account information from unauthorised persons through a Mobile Banking Supervising System.
References
[1] Renju Chandran, "Pros and Cons of Mobile Banking", International Journal of Scientific Research Publications, Volume 4, Issue 10, October 2014, ISSN 2250-3153.
[2] Aditya Kumar Tiwari, Ratish Agarwal, Sachin Goyal, "Imperative & Challenges of Mobile Banking in India", International Journal of Computer Science & Engineering Technology, Volume 5, Issue 3, March 2014, ISSN 2229-3345.
[3] V. Devadevan, "Mobile Banking in India - Issues & Challenges", International Journal of Engineering Technology and Advanced Engineering, Volume 3, Issue 6, June 2013, ISSN 2250-2459.
[4] MD. Tukhrejul Inam, MD. Baharul Islam, "Possibilities and Challenges of Mobile Banking: A Case Study in Bangladesh", International Journal of Advanced Computational Engineering and Networking, Volume 1, Issue 3, May 2013, ISSN 2321-2106.
[5] L. S. Dialing, "Mobile Phones target the world's non reading poor", Scientific American, Volume 296, Issue 5, 2007.
A Survey on Security Issues in Big Data and NoSQL
Ebrahim Sahafizadeh1, Mohammad Ali Nematbakhsh2
1
Computer engineering department, University of Isfahan
Isfahan,81746-73441,Iran
[email protected]
2
Computer engineering department, University of Isfahan
Isfahan,81746-73441,Iran
[email protected]
Abstract
This paper presents a survey on security and privacy issues in big
data and NoSQL. Due to the high volume, velocity and variety of big
data, security and privacy issues are different in such streaming data
infrastructures with diverse data format. Therefore, traditional
security models have difficulties in dealing with such large scale
data. In this paper we present some security issues in big data and
highlight the security and privacy challenges in big data
infrastructures and NoSQL databases.
Keywords: Big Data, NoSQL, Security, Access Control
2.1 Big Data
Big data is a term that refers to the collection of large data sets which are often described by multiple 'V' characteristics. In [8] seven characteristics are used to describe big data: volume, velocity, variety, value, veracity, volatility and complexity, whereas [9] does not mention volatility and complexity. Each property is described below.
Volume: Volume refers to the size of the data. The size of data in big data is very large, usually at terabyte and petabyte scale.
Velocity: Velocity refers to the speed of producing and processing data. In big data this rate is very high.
Variety: Variety refers to the different types of data in big data. Big data includes structured, unstructured and semi-structured data, and the data can be in different forms.
Veracity: Veracity refers to the trustworthiness of the data.
Value: Value refers to the worth derived from big data.
Volatility: "Volatility refers to how long the data is going to be valid and how long it should be stored" [8].
Complexity: "A complex dynamic relationship often exists in big data. The change of one data might result in the change of more than one set of data triggering a rippling effect" [8].
Some researchers define the important characteristics of big data as volume, velocity and variety; in general, the characteristics of big data are expressed as these three Vs.
1. Introduction
The term big data refers to high-volume, high-velocity and high-variety information that requires new forms of processing. Due to these properties, sometimes referred to as the 3 'V's, it becomes difficult to process big data using traditional database management tools [1]. A new challenge is to develop novel techniques and systems to extensively exploit the large volume of data, and many information management architectures have been developed towards this goal [2].
With the development of new technologies and the increasing use of big data in several scopes, security and privacy have become a recognized challenge in big data, and there are many security and privacy issues around it [1, 2, 3, 4, 5, 6]. In [7] the top ten security and privacy challenges in big data are highlighted; some of these challenges are secure computation, secure data storage, granular access control and data provenance.
2.2 NoSQL
The term NoSQL stands for "Not only SQL" and it is used for
modern scalable databases. Scaling is the ability of the system
to increase throughput when the demands increase in terms of
data processing. To support big data processing, the platforms
incorporate scaling in two forms of scalability: horizontal
scaling and vertical scaling [10].
Horizontal Scaling: in horizontal scaling the workload
distributes across many servers. In this type of scalability
multiple systems are added together in order to increase the
throughput.
In this paper we focus on research on access control in big data and on security issues in NoSQL databases. In section 2 we give an overview of big data and NoSQL technologies, in section 3 we discuss security challenges in big data and describe some access control models for big data, and in section 4 we discuss security challenges in NoSQL databases.
2. Big Data and NoSQL Overview
In this section we have an overview on Big Data and NoSQL.
The attribute relationship methodology proposed in [3] and [4] is another method to enforce security in big data. Protecting the valuable information is the main goal of this methodology, so [4] focuses on attribute relevance in big data as the key element for extracting the information; it is assumed that an attribute with higher relevance is more important than the other attributes. [3] uses a graph to model the attributes and their relationships: attributes are expressed as nodes, a relationship is shown by an edge between two nodes, and the method selects the attributes to protect from this graph. The method proposed in [4] is as follows:
generalize the properties. Next, compare the correlation
between attributes and evaluate the relationship. Finally
protect selected attributes that need security measures based
on correlation evaluation" [4] and the method proposed in [3]
is as follow:
"All attributes are represented as circularly arranged nodes.
Add the edge between the nodes that have relationships. Select
the protect nodes based on the number of edge. Determine the
security method for protect nodes" [3].
A suitable data access control method for big data in the cloud is attribute-based encryption [1]. A new scheme enabling efficient access control based on attribute encryption is proposed in [1] as a technique to ensure the security of big data in the cloud. Attribute encryption is a method that allows data owners to encrypt data under an access policy such that only users who have permission to access the data can decrypt it. The problem with attribute-based encryption discussed in [1] is policy updating: when the data owner wants to change the policy, the data has to be transferred back from the cloud to the local site and re-encrypted under the new policy, which causes high communication overhead. The authors in [1] focus on solving this problem and propose a secure policy updating method.
Hadoop is an open source framework for storing and processing big data. It uses the Hadoop Distributed File System (HDFS) to store data on multiple nodes. Hadoop does not authenticate users, and there is no data encryption or privacy in Hadoop. HDFS has no strong security model, and users can directly access data stored in the data nodes without any fine-grained authorization [13, 16]. The authors in [16] present a survey on the security of Hadoop and analyze its security problems and risks. Some security mechanism challenges mentioned in [16] are the large scale of the system, partitioning and distributing files through the cluster, and executing tasks from different users on a single node. In [13] the authors describe some security risks in Hadoop and propose a novel access control scheme for storing data. This scheme includes creating and distributing access tokens, gaining access tokens and accessing blocks. The same scheme is also used with secure sharing storage in the cloud. It can help the data owners control and audit access to their data, but the owners need to update the access token when the metadata of the file blocks changes.
Vertical Scaling: in vertical scaling more processors, more
memory and faster hardware are installed within a single
server.
The main advantages of NoSQL is presented in [11] as the
following: "1) reading and writing data quickly; 2) supporting
mass storage; 3) easy to expand; 4) low cost". In [11] the data
models that studied NoSQL systems support are classified as
Key-value, Column-oriented and Document. There are many
products claim to be part of the NoSQL database, such as
MongoDB, CouchDB, Riak, Redis, Voldermort, Cassandera,
Hypertable and HBase.
Apache Hadoop is an open source implementation of Google
big table [12] for storing and processing large datasets using
clusters of commodity hardware. Hadoop uses HDFS which is
a distributed file system to store data across clusters. In section
6 we have an overview of Hadoop and discuss an access
control architecture presented for Hadoop.
3. Security Challenges and Access Control
Model
There are many security issues around big data. In [7] the top ten security and privacy challenges in big data are presented. Secure computation in distributed frameworks is a challenge which concerns security in map-reduce functions. Secure data storage and transaction logs require new mechanisms to prevent unauthorized access to data stores and to maintain availability. Granular access control is another challenge in big data: the problem is preventing access to data by users who should not have access, and in this case traditional access control models have difficulty dealing with big data. Some mechanisms for handling access control in big data are proposed in [2, 3, 4, 13, 14].
Among the security issues in big data, data protection and access control are recognized as the most important in [4]. Shermin [14] presents an access control model for NoSQL databases as an extension of the traditional role-based access control model. In [15] the security issues of two of the most popular NoSQL databases, Cassandra and MongoDB, are discussed and their security features and problems outlined. The main problems mentioned in [15] for both Cassandra and MongoDB are the lack of encryption support for data files, weak authentication between clients and servers, simple authentication, and vulnerability to SQL injection and DoS attacks. It is also mentioned that neither of them supports RBAC or fine-grained authorization. In [5] the authors have a look
at NIST risk management standards and define the threat
source, threat events and vulnerabilities. The vulnerabilities
defined in [5] in term of big data are Insecure computation,
End-point input validation/filtering, Granular access control,
Insecure data storage and communication and Privacy
preserving data mining and analytics.
In some cases in big data it is necessary to have an access control model based on the semantic content of the data. To enforce access control in such content-centric big data sharing, a Content-Based Access Control (CBAC) model is presented in [2]. In this case the semantic content of the data plays the major role in access control decision making: "CBAC makes access control decisions based on the content similarity between user credentials and data content dynamically" [2].
4. Security Issues in NoSQL Databases
NoSQL stands for "Not Only SQL", and NoSQL databases are not meant to replace traditional databases; rather, they are suitable for adopting big data when traditional databases are not appropriate [17]. NoSQL databases are classified as key-value databases, column-oriented databases, document-based databases and graph databases.
4.4 HBase
HBase is an open source column-oriented database modeled after Google's Bigtable and implemented in Java. HBase can manage structured and semi-structured data, and it uses a distributed configuration and write-ahead logging. HBase relies on SSH for inter-node communication. It supports user authentication by the use of SASL (Simple Authentication and Security Layer) with Kerberos, and it supports authorization by ACLs (Access Control Lists) [17].
4.1 MongoDB
MongoDB is a document-based database that manages collections of documents. MongoDB supports complex data types and has high-speed access to huge data [11]; flexibility, power, speed and ease of use are the four properties mentioned for it in [18]. All data in MongoDB is stored as plain text and there is no encryption mechanism to encrypt the data files [19], which means that any malicious user with access to the file system can extract the information from the files. MongoDB uses SSL with X.509 certificates for secure communication between the user and the MongoDB cluster and for intra-cluster authentication [17], but it does not support authentication and authorization when running in sharded mode [15]. Passwords are hashed with the MD5 algorithm, which is not a very secure algorithm. Since MongoDB uses JavaScript as an internal scripting language, the authors in [15] show that MongoDB is vulnerable to a scripting injection attack.
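As a small, driver-independent illustration of the injection risk mentioned above, the sketch below shows how unvalidated input can smuggle a query operator into a document filter and how restricting input to plain strings avoids it; the field name and the find_one call are assumptions for the example, not a specific MongoDB API guarantee.

def unsafe_filter(user_supplied):
    # If user_supplied is the JSON object {"$ne": None} instead of a string,
    # the filter matches any user: an operator-injection attack.
    return {"username": user_supplied}

def safe_filter(user_supplied):
    # Accept only plain strings, so operator objects cannot be injected.
    if not isinstance(user_supplied, str):
        raise ValueError("username must be a plain string")
    return {"username": user_supplied}

# A login check would then call something like collection.find_one(safe_filter(value)).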
4.5 HyperTable
Hypertable is an open source, high-performance column-oriented database that can be deployed on HDFS. It is modeled after Google's Bigtable and uses a single big table to store data [20]. Hypertable does not support data encryption or authentication [19]. It does not tolerate the failure of a range server: if a range server crashes, it is not able to recover the lost data [20]. Even though Hypertable uses the Hypertable Query Language (HQL), which is similar to SQL, no injection vulnerabilities are known for it [19], and no denial of service attack has been reported for Hypertable [19].
4.6 Voldemort
Voldemort [23] is a key-value NoSQL database used at LinkedIn. This type of database matches keys with values, and the data is stored as key-value pairs. Voldemort supports data encryption if it uses BerkeleyDB as the storage engine. There is no authentication or authorization mechanism in Voldemort, and it does not support auditing either [21].
4.2 CouchDB
CouchDB is a flexible, fault-tolerant document-based NoSQL database [11]. It is an open source Apache project and it runs on the Hadoop Distributed File System (HDFS) [19]. CouchDB does not support data encryption [19], but it supports authentication based on both passwords and cookies [17]. Passwords are hashed using the PBKDF2 algorithm and are sent over the network using the SSL protocol [17]. CouchDB is vulnerable to script injection and denial of service attacks [19].
4.7 Redis
Redis is an open source key value database. Data encryption is
not supported by Redis and all data stored as plain text and the
communication between Redis client and server is not
encrypted [19]. Redis does not implement access control, so it
provides a tiny layer of authentication. Injection is impossible
in Redis, since Redis protocol does not support string escaping
concept [22].
4.3 Cassandra
Cassandra is an open source distributed storage system for managing big data. It is a key-value NoSQL database which is used at Facebook. The properties mentioned for Cassandra in [11] are the flexibility of its schema, support for range queries, and high scalability. All passwords in Cassandra are hashed with the MD5 function, so the password protection is very weak. If a malicious user can bypass client authorization, the user can extract the data, because there is no authorization mechanism in inter-node message exchange [17]. Cassandra is vulnerable to a denial of service attack because it uses one thread per client [19], and it does not support inline auditing [15]. Cassandra uses a query language called the Cassandra Query Language (CQL), which is similar to SQL; the authors of [15] show that an injection attack like SQL injection is possible on Cassandra through CQL. Cassandra also has problems in managing inactive connections [19].
4.8 DynamoDB
DynamoDB is a fast and flexible NoSQL database used at Amazon. It supports both the key-value and the document data model [24]. Data encryption is not supported in DynamoDB, but the communication between client and server uses the HTTPS protocol. Authentication and authorization are supported by DynamoDB, and requests need to be signed using HMAC-SHA256 [21].
4.9 Neo4J
Neo4j [25] is an open source graph database. Neo4j does not support data encryption, authorization or auditing. The communication between client and server is based on the SSL protocol [21].
5. Conclusion
With the increasing use of NoSQL in organizations, security has become a growing concern. In this paper we presented a survey of security and privacy issues in big data and NoSQL. We gave an overview of big data and NoSQL databases and discussed the security challenges in this area. Due to the high volume, velocity and variety of big data, traditional security models have difficulties in dealing with such large-scale data. Some researchers have presented new access control models for big data, which were introduced in this paper.
In the last section we described security issues in NoSQL databases. As mentioned, most NoSQL databases lack data encryption; to obtain a more secure database, sensitive fields need to be encrypted. Some of the databases are vulnerable to injection, and sufficient input validation is needed to overcome this vulnerability. Some of them have no authentication mechanism and some have only weak authentication, so stronger authentication mechanisms are needed. CouchDB uses the SSL protocol and HBase uses SASL, while Hypertable, Redis and Voldemort have no authentication and the other databases have weak authentication. MongoDB and CouchDB are vulnerable to injection, and Cassandra and CouchDB are vulnerable to denial of service attacks. Table 1 briefly summarizes this comparison.
Table 1: The comparison between NoSQL databases

DB | Data model | Authentication | Authorization | Data encryption | Auditing | Communication protocol | Potential for attack
MongoDB | Document | Not supported (in sharded mode) | Not supported | Not supported | - | SSL | Script injection
CouchDB | Document | Supported | - | Not supported | - | SSL | Script injection and DoS
Cassandra | Key/Value | Supported | Not supported | Not supported | Not supported | SSL | Injection (via CQL) and DoS
HBase | Column oriented | Supported | Supported | Not supported | - | SSH | -
HyperTable | Column oriented | Not supported | - | Not supported | - | - | Not reported for DoS and injection
Voldemort | Key/Value | Not supported | Not supported | Supported (with BerkeleyDB engine) | Not supported | - | -
Redis | Key/Value | Tiny layer | Not supported | Not supported | - | Not encrypted | -
DynamoDB | Key/Value and Document | Supported | Supported | Not supported | - | HTTPS | -
Neo4J | Graph | - | Not supported | Not supported | Not supported | SSL | -
References
[1] K. Yang, "Secure and Verifiable Policy Update Outsourcing for Big Data Access Control in the Cloud", IEEE Transactions on Parallel and Distributed Systems, Issue 99, 2014.
[2] W. Zeng, Y. Yang, B. Lou, "Access control for big data using data content", IEEE International Conference on Big Data, pp. 45-47, 2013.
[3] S. Kim, J. Eom, T. Chung, "Big Data Security Hardening Methodology Using Attributes Relationship", International Conference on Information Science and Applications (ICISA), pp. 1-2, 2013.
[4] S. Kim, J. Eom, T. Chung, "Attribute Relationship Evaluation Methodology for Big Data Security", International Conference on IT Convergence and Security (ICITCS), pp. 1-4, 2013.
[5] M. Paryasto, A. Alamsyah, B. Rahardjo, Kuspriyanto, "Big-data security management issues", 2nd International Conference on Information and Communication Technology (ICoICT), pp. 59-63, 2014.
[6] J. H. Abawajy, A. Kelarev, M. Chowdhury, "Large Iterative Multitier Ensemble Classifiers for Security of Big Data", IEEE Transactions on Emerging Topics in Computing, Volume 2, Issue 3, pp. 352-363, 2014.
[7] Cloud Security Alliance, "Top Ten Big Data Security and Privacy Challenges", www.cloudsecurityalliance.org, 2012.
[8] K. Zvarevashe, M. Mutandavari, T. Gotora, "A Survey of the Security Use Cases in Big Data", International Journal of Innovative Research in Computer and Communication Engineering, Volume 2, Issue 5, pp. 4259-4266, 2014.
[9] M. D. Assuncau, R. N. Calheiros, S. Bianchi, A. S. Netto, R. Buyya, "Big Data computing and clouds: Trends and future directions", Journal of Parallel and Distributed Computing, 2014.
[10] D. Singh, C. K. Reddy, "A survey on platforms for big data analytics", Journal of Big Data, 2014.
[11] J. Han, E. Haihong, G. Le, J. Du, "Survey on NoSQL Database", 6th International Conference on Pervasive Computing and Applications (ICPCA), pp. 363-366, 2011.
[12] F. Chang, J. Dean, S. Ghemawat, W. C. Hsieh, D. A. Wallach, "Bigtable: A Distributed Storage System for Structured Data", Google, 2006.
[13] C. Rong, Z. Quan, A. Chakravorty, "On Access Control Schemes for Hadoop Data Storage", International Conference on Cloud Computing and Big Data, pp. 641-645, 2013.
[14] M. Shermin, "An Access Control Model for NoSQL Databases", M.Sc. thesis, The University of Western Ontario, 2013.
[15] L. Okman, N. Gal-Oz, Y. Gonen, E. Gudes, J. Abramov, "Security Issues in NoSQL Databases", IEEE 10th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), pp. 541-547, 2011.
[16] M. RezaeiJam, L. Mohammad Khanli, M. K. Akbari, M. Sargolzaei Javan, "A Survey on Security of Hadoop", 4th International Conference on Computer and Knowledge Engineering (ICCKE), pp. 716-721, 2014.
[17] A. Zahid, R. Masood, M. A. Shibli, "Security of Sharded NoSQL Databases: A Comparative Analysis", Conference on Information Assurance and Cyber Security (CIACS), pp. 1-8, 2014.
[18] A. Boicea, F. Radulescu, L. I. Agapin, "MongoDB vs Oracle - database comparison", Third International Conference on Emerging Intelligent Data and Web Technologies (EIDWT), pp. 330-335, 2012.
[19] P. Noiumkar, T. Chomsiri, "A Comparison the Level of Security on Top 5 Open Source NoSQL Databases", 9th International Conference on Information Technology and Applications (ICITA 2014), 2014.
[20] A. Khetrapal, V. Ganesh, "HBase and Hypertable for large scale distributed storage systems: A Performance evaluation for Open Source BigTable Implementations", Dept. of Computer Science, Purdue University, http://cloud.pubs.dbs.uni-leipzig.de/node/46, 2008.
[21] K. Grolinger, W. A. Higashino, A. Tiwari, M. A. M. Capretz, "Data management in cloud environments: NoSQL and NewSQL data stores", Journal of Cloud Computing: Advances, Systems and Applications, 2013.
[22] http://redis.io/topics/security
[23] http://www.project-voldemort.com
[24] http://aws.amazon.com/dynamodb
[25] http://neo4j.com
Ebrahim Sahafizadeh received a B.S. in Computer Engineering (Software) from Kharazmi University of Tehran in 2001 and an M.S. in Computer Engineering (Software) from the Iran University of Science & Technology, Tehran, in 2004. He is a Ph.D. student at the University of Isfahan and a faculty member and lecturer in the Department of Information Technology, Payame Noor University, Boushehr.
MohammadAli Nematbakhsh received a B.S. in Electrical Engineering from Louisiana Tech University, USA, in 1981, an M.S. in Electrical and Computer Engineering from the University of Arizona, USA, in 1983, and a Ph.D. in Electrical and Computer Engineering from the University of Arizona, USA, in 1987. He worked at Micro Advanced Computer, Phoenix, AZ, 1982-1984, and at Toshiba Co., USA and Japan, 1988-1993, and has been with the Computer Engineering Department, University of Isfahan, since 1993.
Classifying Protein-Protein Interaction Type based on Association
Pattern with Adjusted Support
Huang-Cheng Kuo and Ming-Yi Tai
Department of Computer Science and Information Engineering
National Chiayi University
Chia-Yi City 600, Taiwan
[email protected]
Abstract
Proteins carry out their functions by means of interaction. There are two major types of protein-protein interaction (PPI): obligate interaction and transient interaction. In this paper, residues with geographical information on the binding sites are used to discover association patterns for classifying the protein interaction type. We use the support of a frequent pattern as its inference power. However, because the number of transient examples is much smaller than the number of obligate examples, this imbalance needs to be adjusted for. Three methods of applying association patterns to classify PPI type are designed. In the experiments the three methods give almost the same results, and we reduce the drop in correct rate caused by the type imbalance.
Keywords: Protein-Protein Interaction, Association Pattern Based Classification, Type Imbalance
1. Introduction
Protein-protein interaction refers to an event generated by a change in physical contact between two or more proteins. Protein-protein interaction occurs when a number of proteins combine into an obligate protein complex or a transient protein complex. An obligate complex continues to maintain its quaternary structure and its function continues to take effect; a transient complex does not maintain its structure and separates at the end of its function. Protein-protein interaction occurs mainly on the binding surface of the proteins, and the residues on the binding surface play an important role in deciding the type of protein-protein interaction. The residue distribution affects the contacting orientation and thus determines the binding energy, which is important for the interaction type.
In this paper, an association pattern method is proposed for classifying protein-protein interaction type. Instead of considering all the residues on the binding surface of a protein complex as one transaction, we generate several small transactions from a protein complex, where each small transaction contains residues that are geographically close to each other. The binding surface of a protein complex is usually curved, so some residues belong to the protein with the concave-shaped binding site and the others are on the protein with the convex-shaped binding site. A transaction is a tuple <R, L>, where R is a set of residues of one protein and L is a set of residues of the other protein, and the residues of a transaction are geographically close to each other. Patterns from obligate protein complexes and from transient protein complexes are mined separately [1].
In this paper, we assume proteins are in complex form. With the association patterns, proteins can be indexed under the patterns, so that biologists can quickly screen proteins that interact with a certain type of interaction.
2. Related Works
The ultimate goal of this work is for a user to input a transient-binding protein and then quickly screen out, from the data library, an experimental direction for the biological experimenter. As for how to predict the protein interaction type, researchers have proposed machine learning classification methods to design such system modules.
Mintseris et al. showed that the classification of protein complexes can rely only on limited information about the interacting pair, namely the quantities of the various atom-atom contacts between the two proteins, called the Atomic Contact Vector. It gives good accuracy, but there are two drawbacks: (1) the feature vector (171 dimensions) raises the curse-of-dimensionality problem, and (2) it focuses only on atom contacts and does not consider the shape of the contact surface. The shape of the contact surface of the protein affects the contacting area and the types of atom contacts.
Other, non-association-rule methods, such as SVM, reach about 99% correct rate, and the k-nearest-neighbor method also reaches about 93% accuracy [13]. Lukman et al. divided interactions into three categories, crystal packing, transient and obligate, using 4,661 transient and 7,985 obligate protein complexes, and mined bipartite-graph patterns of binding-surface dipeptides for each category. Finding which patterns, or patches, lie on the joint surface can give good accuracy in deciding whether proteins interact. Some researchers use the opposite operation: starting from a collection of known interaction surfaces, they look for proteins with similar surfaces, because knowing which surfaces interact with each other makes it possible to infer that a protein-to-protein relationship exists [2, 3].
Besides the recognition of interaction type, protein complexes are also a popular field of study in pharmaceutical research. In pharmaceutical drug design, the main goal is analyzing protein-protein interaction [4, 5]: the aim is to find a gap (pocket) on the protein where another protein or compound can dock. The protein binding site carries information such as shape, notch depth and electrical distribution. The binding site is the main location where disease-related chemistry occurs and where compounds and proteins bond with each other. Therefore, when researchers design a drug, they look for existing molecules or synthesize new compounds; when a compound is placed in the protein gap, it is rotated and tried at many different angles, looking for the orientation that fills the gap as much as possible and produces a good binding force between the molecules. In this way a compound with the highest degree of matching to the protein's notch can be found. This is called docking.
Protein-protein interaction networks can help us to understand the function of proteins [6, 7]. Basic experiments can determine the role of a protein, but because of the huge amount of protein data we cannot carry out experiments one by one, so predicting interactions between proteins has become a very important issue. In order to predict more accurately whether an interaction between proteins exists [8], biologists proposed using protein combinations to increase the accuracy. The joint surface between the interacting proteins is called the protein domain.
Domains are the functional units of the protein binding surface. Usually a binding surface contains more than one domain, and the surface properties can be divided into the following categories: hydrophobicity, electrical properties, residue packing, shape, curvature and conserved residues [9, 10, 11]. In this work we use information about residues. Park et al. also use classification association rules to predict the interaction type, dividing interactions into four categories, Enzyme-inhibitors, Non Enzyme-inhibitors, Hetero-obligomers and Homo-obligomers [12], with 147 complexes in total. The association rules use 14 features of the binding surface, including domains and numerical characteristics such as average hydrophobicity, residue propensity, number of amino acids, and number of atoms, and achieve about a 90 percent correct rate.
3. Data Preparation
The 3D coordinate positions of the proteins are derived from the RCSB Protein Data Bank. The identities of the protein complexes come from several sources:
1. 209 protein complexes collected by Mintseris and Weng [3].
2. Protein complexes obtained from the PDB web site [14]. The type of these complexes is then determined using the NOXclass website. NOXclass uses an SVM to classify protein-protein interactions into three types: biological obligate, biological transient and crystal packing. We keep only the protein complexes which are classified as biological obligate or biological transient. The claimed accuracy rate is about 92%, so we use the data classified by NOXclass as genuine data for the experiment. We collected a total of 243 protein complexes [15] in this way.
3.1 The Binding Surface Residues
A protein complex is composed of two or more proteins, where in the PDB [14] each protein is called a chain. All the chains in a protein complex share one coordinate system, and each residue of each chain is labelled sequentially with the coordinates of its more important atoms. Because a complex uses a common coordinate system, the relative positions between the chains can be found. Whether two chains bind together can be determined from the positions of their residues, but the PDB does not indicate which residues are on the bonding surface [16]; it is therefore necessary to make further inquiries through other repositories or to decide algorithmically.
There have been many studies on predicting which residues are on the binding surface from protein sequence data [17]. In our research, the atom-to-atom distance between two
residues is used to decide whether the two residues are on the binding surfaces: they are binding-surface residues if there exists an atom-to-atom distance which is less than a threshold. The distance threshold is 5 Å in this paper [18, 19, 20].
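A compact way to apply this 5 Å rule is sketched below; the coordinate data structures are assumptions for illustration (in practice the atom coordinates would be parsed from the PDB file).

import math

def min_atom_distance(res_a, res_b):
    # res_a, res_b: lists of (x, y, z) atom coordinates for two residues.
    return min(math.dist(a, b) for a in res_a for b in res_b)

def binding_surface_residues(chain1, chain2, threshold=5.0):
    # chain1, chain2: dicts residue_id -> list of atom coordinates.
    # Returns the residues of each chain that lie on the binding surface.
    surface1, surface2 = set(), set()
    for r1, atoms1 in chain1.items():
        for r2, atoms2 in chain2.items():
            if min_atom_distance(atoms1, atoms2) < threshold:
                surface1.add(r1)
                surface2.add(r2)
    return surface1, surface2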
Fig. 1 Associative Classification Mining (flow: input PDB data, search for residue set pairs, partition them into transactions, keep itemsets with support above 1%, and obtain the association rules)
We input a dataset of PDB files[14], then find the
residues on the binding site for each complex and get a
pair of residue sets, one set on the convex side, the other
one on the other side. And partition each of the pairs of
residue sets into transactions for association rule mining
in next box. Finally if the transaction's value of supper is
less than 1%, then delete this transaction.
Delete
Fig 2. Applying the rules to classify
For each association rule mined from the PDB data [14] we compute its confidence. If the confidence of a rule is less than 0.6, the rule is deleted. Next, we check whether the same rule appears with different interaction types; if so, the rule with the lower confidence is deleted.
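As a concrete illustration of the two filtering steps above, the following self-contained sketch mines class association rules from labeled residue transactions and applies the support and confidence cut-offs as well as the removal of conflicting rules. The thresholds come from the text; the data layout and function names are our own.

from collections import Counter
from itertools import combinations

def mine_class_rules(transactions, min_support=0.01, min_confidence=0.6, max_len=2):
    # transactions: list of (itemset, class_label) pairs, where itemset is an
    # iterable of residue items. Returns {(antecedent, class_label): confidence}.
    n = len(transactions)
    itemset_count = Counter()   # occurrences of an antecedent regardless of class
    labeled_count = Counter()   # occurrences of (antecedent, class_label)
    for items, label in transactions:
        for size in range(1, max_len + 1):
            for combo in combinations(sorted(set(items)), size):
                itemset_count[combo] += 1
                labeled_count[(combo, label)] += 1

    rules = {}
    for (combo, label), count in labeled_count.items():
        support = count / n
        confidence = count / itemset_count[combo]
        if support >= min_support and confidence >= min_confidence:
            rules[(combo, label)] = confidence

    # conflict removal: keep only the higher-confidence rule when the same
    # antecedent predicts different classes
    best = {}
    for (combo, label), conf in rules.items():
        if combo not in best or conf > best[combo][1]:
            best[combo] = (label, conf)
    return {(combo, label): conf for combo, (label, conf) in best.items()}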
3.2 Obtaining Data
Frequent pattern mining can yield results such as: for protein complexes of the same type, charged residues commonly appear on the concave binding surface, and hydrophilic (or hydrophobic) residues on the convex binding surface. Association rule mining can yield results such as: polar residues
on the concave binding surface tend to co-occur with hydrophilic (or hydrophobic) residues on the convex binding surface. The interaction in a protein complex takes place mostly at the binding surfaces. The features describing a binding surface can include the physical and chemical properties of its amino acids, as well as numerical features such as accessible surface area (ASA). Numerical features can be discretized into intervals and then used as items for association rule mining. At this stage, however, we simply take the residues themselves, with their basic types and frequencies, as the input of association rule mining.

To form transactions, the binding surface residues of the two proteins are projected onto the interface plane. Starting from a circle of radius 10 Å, the radius is successively increased to form concentric rings [21]. Each ring is cut into sectors of roughly equal area, and the residues falling into one sector form one transaction. This method groups nearby residues into the same transaction, but its disadvantage is that residues are rigidly assigned by sector boundaries, so two similar residues lying on either side of a boundary end up in different transactions.

Another way is to use the tertiary structure coordinates directly and to take each binding surface residue as a reference point. For example, take a residue r on the concave binding surface: the residues on the concave side that are close to r are grouped together, and the residues on the convex side that are close to r are then added to the same transaction. The advantage is that residues close to each other are always put into one transaction; the drawback is that a residue may appear repeatedly in several transactions.

We also find that the number of biological transient complexes is significantly smaller than the number of obligate complexes, which distorts rule generation and the computation of support during mining. The support of biological transient rules is underrated, so we adjust the values for biological transient complexes to put biological obligate and biological transient on a fair footing.
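The residue-centred grouping described above can be sketched as follows; the representative-atom layout and the neighbourhood radius used here are our own illustrative assumptions, not values fixed by the paper.

def residue_transactions(concave, convex, radius=10.0):
    # concave, convex: dicts mapping residue id -> (label, (x, y, z)), where the
    # coordinate is a representative atom of the residue.
    # Builds one transaction per concave residue r: r itself, nearby concave
    # residues, and nearby convex residues. A residue may appear in several
    # transactions, as noted in the text.
    limit = radius ** 2
    def close(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) < limit

    transactions = []
    for rid, (label, pos) in concave.items():
        items = [label]
        items += [l for other, (l, p) in concave.items() if other != rid and close(pos, p)]
        items += [l for _, (l, p) in convex.items() if close(pos, p)]
        transactions.append(items)
    return transactions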
3.3 Associative Classification Rule

Different types of protein complexes show different frequency distributions of amino acids on their binding surfaces, which is why we believe association rules can be used as a basis for predicting the interaction type. In addition, a binding interface is formed by the combination of two complementary, uneven surfaces.

Take the amino acid arg as an example: for one type of complex, if the concave binding surface contains phe and ser, then in 90% of the cases arg appears on the opposite surface. From the
association rule mining we obtain the rule {phe, ser} → {arg}, with a support of 1.9% and a confidence of 90%. After mining, we obtain rules that connect the amino acid patterns on one side of a binding surface with those on the other side, both for surfaces of a given complex type and for surfaces that do not belong to that type. Such rules can assist prediction by giving the likelihood that a pair of surfaces of a certain type bind to each other. If one side of an interface contains phe and ser and the other side contains arg, the likelihood that this is a binding surface of that type increases. Conversely, if the amino acid patterns on the two sides match rules of the other types, that likelihood is reduced. In addition, the positions of the amino acids affect the applicability of a rule: if phe and ser are very far apart, then even when arg appears on the other surface the applicability of the rule should be discounted; on the contrary, if they are very close, the influence of the rule should be raised. When judging a pair of binding surfaces we therefore consider the overall impact of the amino acid patterns that match the type and of those that do not [22, 23]. Our method of obtaining and using association rules is divided into two steps:
1. Delete unimportant or conflicting association rules.
2. Select association rules to classify unknown objects.
The usual form of a class association rule is X => C, where X is an itemset and C is a category. Our association rules have the form <X, Y> => C, where X and Y are itemsets representing the binding residues on the concave and convex surfaces, and C is the interaction category [24].
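Such a two-sided rule can be represented directly in code. The sketch below shows one possible data structure and a match test against a pair of observed residue sets; the field names and the example type label are our own illustration, not taken from the paper.

from dataclasses import dataclass

@dataclass(frozen=True)
class ClassAssociationRule:
    concave: frozenset    # itemset X: residues expected on the concave surface
    convex: frozenset     # itemset Y: residues expected on the convex surface
    category: str         # C: the interaction type
    support: float
    confidence: float

    def matches(self, concave_residues, convex_residues):
        # the rule applies when both itemsets are contained in the observed surfaces
        return (self.concave <= set(concave_residues)
                and self.convex <= set(convex_residues))

# hypothetical example: <{phe, ser}, {arg}> => obligate with support 1.9%, confidence 90%
rule = ClassAssociationRule(frozenset({"phe", "ser"}), frozenset({"arg"}),
                            "obligate", 0.019, 0.90)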
4. Association Rules Deletion

The data are transactions or relations; each case in the training data set (hereinafter referred to as DB) is attached to a category. In the following we use two categories, P and N, as an illustration. A class association rule (hereinafter referred to as CAR) has the form X => Y, where X is an itemset and Y is a type.

The algorithm in [25] depends on sorting the class association rules by confidence; rules with the same confidence are sorted according to their supports. The class association rules are then selected in this sorted order. In the selection process, the cases of the database that satisfy the conditions of the current class association rule (called r) are deleted. The database after deletion is called DB'. Suppose r is of category P.
The number of class-N cases that satisfy the conditions of r is then counted as the error of r. The error of DB' is determined by the majority class: if the majority of cases in DB' belong to class P, then the number of class-N cases remaining in DB' is the error of DB'. Each time a class association rule is picked, a pair of errors is therefore obtained, and class association rules keep being selected until the total number of errors is lowest.
Its algorithm is as follows:

1  R = sort(R);
2  for each rule r in R in sequence do
3      temp = empty;
4      for each case d in D do
5          if d satisfies the conditions of r then
6              store d.id in temp and mark r if it correctly classifies d;
7      if r is marked then
8          insert r at the end of C;
9          delete all the cases with the ids in temp from D;
10         select a default class for the current C;
11         compute the total number of errors of C;
12     end
13 end
14 Find the first rule p in C with the lowest total number of errors and drop all the rules after p in C;
15 Add the default class associated with p to end of C, and return C (our classifier).
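In Python, this selection procedure can be sketched roughly as follows; the rule and case representations are a simplification of our own and not an exact reimplementation of [25].

from collections import Counter

def build_classifier(sorted_rules, training_cases):
    # sorted_rules: list of (matches, label) pairs already sorted by confidence
    #               (and by support for ties), where matches(case) -> bool.
    # training_cases: list of (case, label) pairs.
    # Returns (selected_rules, default_class).
    remaining = list(training_cases)
    selected, totals = [], []
    cumulative_rule_errors = 0

    for matches, label in sorted_rules:
        covered = [(c, y) for c, y in remaining if matches(c)]
        if not any(y == label for _, y in covered):
            continue  # r is not marked: it classifies no remaining case correctly
        selected.append((matches, label))
        remaining = [cy for cy in remaining if not matches(cy[0])]
        cumulative_rule_errors += sum(1 for _, y in covered if y != label)
        counts = Counter(y for _, y in remaining)
        default = counts.most_common(1)[0][0] if counts else label
        default_errors = sum(1 for _, y in remaining if y != default)
        totals.append((cumulative_rule_errors + default_errors, default))

    if not selected:
        return [], Counter(y for _, y in training_cases).most_common(1)[0][0]
    best = min(range(len(totals)), key=lambda i: totals[i][0])
    return selected[:best + 1], totals[best][1]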
5. Applying Rules for Classification
When classifying an object of unknown category, there are several methods of selecting among the matching rules: confidence sum, higher confidence, and the number of qualified rules. The methods are as follows (a code sketch of all three is given after this list):

1. Confidence sum: If the object matches a number of rules Ri and Rj, where Ri is the set of obligate-type rules and Rj is the set of non-obligate-type rules, let X be the sum of confidences of Ri and Y the sum of confidences of Rj. If X > Y, we surmise the object is of obligate type; otherwise we surmise it is of transient type.

2. Higher confidence: If the object matches a set of rules R containing both obligate and non-obligate rules, sort R in descending order of confidence and determine which type is in the majority among the top few rules. If the obligate type is in the majority, we surmise the object is of obligate type; otherwise we surmise it is of transient type.

3. The number of qualified rules: If the object matches a number of rules Ri and Rj, where Ri is the set of obligate-type rules and Rj is the set of non-obligate-type rules, and the number of rules in Ri is larger than in Rj, we surmise the object is of obligate type; otherwise we surmise it is of transient type.
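The three methods can be written down compactly for rules that carry a predicted type and a confidence value; the function names, the type labels and the top-k cut-off of the higher-confidence method are our own assumptions for illustration.

from collections import Counter

def confidence_sum(matched):
    # matched: list of (type_label, confidence) pairs for the rules the object
    # satisfies, with type_label either "obligate" or "transient"
    x = sum(c for t, c in matched if t == "obligate")
    y = sum(c for t, c in matched if t == "transient")
    return "obligate" if x > y else "transient"

def higher_confidence(matched, top_k=5):
    # look at the top-k rules by confidence and take the majority type among them
    top = sorted(matched, key=lambda tc: tc[1], reverse=True)[:top_k]
    counts = Counter(t for t, _ in top)
    return "obligate" if counts["obligate"] >= counts["transient"] else "transient"

def qualified_rule_count(matched):
    # compare the number of matching rules of each type
    counts = Counter(t for t, _ in matched)
    return "obligate" if counts["obligate"] > counts["transient"] else "transient"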
6. Support Adjustment

When predicting the PPI type, we find that, no matter which selection method is used, the predictions are almost always of the obligate type: almost all rules are obligate rules, and there are almost no transient rules. We judge that this is caused by the gap between the amounts of obligate and transient data, which results in a lower number of transient rules that are more likely to be filtered out. We therefore adjust for this imbalance numerically, using the following formula:

C(x) = P(x ∩ obligate) × R / ( P(x ∩ obligate) × R + P(x ∩ non-obligate) )    (1)

where R is the ratio of transient to obligate complexes, x is a rule, x ∩ obligate denotes that a PPI is obligate and contains rule x, and x ∩ non-obligate denotes that a PPI is transient and contains rule x.

Table 1. The number of non-obligate rules before and after support adjustment.

Factor    Before support adjustment    After support adjustment
1.0       2                            11
1.1       1                            14
1.2       1                            14
1.3       1                            18
1.4       1                            34
1.5       0                            64
1.6       0                            95
1.7       1                            173
1.8       1                            291
1.9       2                            449
Table 1 shows the number of transient rules for factors in the range from 1.0 to 2.0. Before the adjustment, transient rules are rare and in some cases there are none at all. After the adjustment, as Table 1 shows, the number of transient rules has increased.
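Formula (1) can be computed directly from class-wise counts; the sketch below uses count and parameter names of our own choosing.

def adjusted_confidence(obligate_count, transient_count, n_obligate, n_transient, factor=None):
    # obligate_count, transient_count: transactions of each type containing rule x
    # n_obligate, n_transient: total numbers of transactions of each type
    # factor: the weight R; by default the ratio of transient to obligate data, as in the text
    total = n_obligate + n_transient
    p_obligate = obligate_count / total        # P(x ∩ obligate)
    p_non_obligate = transient_count / total   # P(x ∩ non-obligate)
    r = factor if factor is not None else n_transient / n_obligate
    denominator = p_obligate * r + p_non_obligate
    return p_obligate * r / denominator if denominator else 0.0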
7. Experiment and Discussion

In Figure 3 we find that the correct rate is low before the data are counterbalanced, because the number of non-obligate rules is much smaller than the number of obligate rules. From Figure 4, the difference between the results of the number-of-qualified-rules method and the confidence-sum method is not large: the sum of confidences tends to grow as the number of matching rules grows, so the two methods behave similarly. The higher-confidence method has a lower accuracy rate at the beginning, because when the factor is relatively small the number of transient rules is not large, and a single high-confidence rule then easily dominates the judgment. For larger factors the number of transient rules grows considerably, which raises the accuracy of the higher-confidence method.

Fig 3. The correct rate of non-obligate data prediction.

Fig 4. The result of the proposed method.
8. Conclusions
The amount of protein data is enormous, and it is coupled with uncertainty from environmental variation. Determining a protein-protein interaction in a wet lab takes a lot of time and money, so many researchers turn to using known information to predict protein interactions in order to reduce the number of proteins that have to be tested.

We use a class association rule method for classifying the protein-protein interaction type, and we compared several methods for screening the associated rules. Due to type imbalance, where there are many more obligate protein complexes than transient protein complexes, the interestingness measures of the mined rules are distorted; we have designed a method to adjust for this effect.

The proposed method can further be used to screen proteins that might have a certain type of protein-protein interaction with a query protein. For biologists, it may take much less time to explore, and it saves experimental effort. It can also benefit pharmaceutical research and development, where determining protein interactions experimentally costs a lot of time and money; a system that can quickly provide a list of candidate subjects is a great help.
References
[1] S. E. Ozbabacan, H. B. Engin, A. Gursoy, O. Keskin, “Transient Protein-Protein Interactions,” Protein Engineering, Design & Selection, Vol. 24, No. 9, pp. 635-648, 2011.
[2] Ravi Gupta, Ankush Mittal and Kuldip Singh, “A Time-Series-Based Feature Extraction Approach for Prediction of Protein Structural Class,” EURASIP Journal on Bioinformatics and Systems Biology, pp. 1-7, 2008.
[3] Julian Mintseris and Zhiping Weng, “Atomic Contact Vectors in Protein-Protein Recognition,” PROTEINS: Structure, Function, and Genetics, Vol. 53, pp. 629-639, 2003.
[4] S. Grosdidier, J. Fernández-Recio, “Identification of Hot-spot Residues in Protein-protein Interactions by Computational Docking,” BMC Bioinformatics, 9:447, 2008.
[5] J. R. Perkins, I. Diboun, B. H. Dessailly, J. G. Lees, C. Orengo, “Transient Protein-Protein Interactions: Structural, Functional, and Network Properties,” Structure, Vol. 18, No. 10, pp. 1233-1243, 2010.
[6] Florian Goebels and Dmitrij Frishman, “Prediction of Protein Interaction Types based on Sequence and Network Features,” BMC Systems Biology, 7(Suppl 6):S5, 2013.
[7] Huang-Cheng Kuo and Ping-Lin Ong, “Classifying Protein Interaction Type with Associative Patterns,” IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology, pp. 143-147, 2013.
[8] Nurcan Tuncbag, Gozde Kar, Ozlem Keskin, Attila Gursoy and Ruth Nussinov, “A Survey of Available Tools and Web Servers for Analysis of Protein-Protein Interactions and Interfaces,” Briefings in Bioinformatics, Vol. 10, No. 3, pp. 217-232, 2009.
[9] R. P. Bahadur and M. Zacharias, “The Interface of Protein-protein Complexes: Analysis of Contacts and Prediction of Interactions,” Cellular and Molecular Life Sciences, Vol. 65, No. 7-8, 2008.
[10] Huang-Cheng Kuo, Ping-Lin Ong, Jung-Chang Lin, Jen-Peng Huang, “Prediction of Protein-Protein Recognition Using Support Vector Machine Based on Feature Vectors,” IEEE International Conference on Bioinformatics and Biomedicine, pp. 200-206, 2008.
[11] Huang-Cheng Kuo, Ping-Lin Ong, Jia-Jie Li, Jen-Peng Huang, “Predicting Protein-Protein Recognition Using Feature Vector,” International Conference on Intelligent Systems Design and Applications, pp. 45-50, 2008.
[12] M. Altaf-Ul-Amin, H. Tsuji, K. Kurokawa, H. Ashahi, Y. Shinbo, and S. Kanaya, “A Density-periphery Based Graph Clustering Software Developed for Detection of Protein Complexes in Interaction Networks,” International Conference on Information and Communication Technology, pp. 37-42, 2007.
[13] Sung Hee Park, José A. Reyes, David R. Gilbert, Ji Woong Kim and Sangsoo Kim, “Prediction of Protein-protein Interaction Types Using Association Rule based Classification,” BMC Bioinformatics, Vol. 10, January 2009.
[14] Protein Data Bank [http://www.rcsb.org/pdb/home/home.do]
[15] NOXclass [http://noxclass.bioinf.mpi-inf.mpg.de/]
[16] Biomolecular Object Network Databank [http://bond.unleashedinformatics.com/]
[17] Chengbang Huang, Faruck Morcos, Simon P. Kanaan, Stefan Wuchty, Danny Z. Chen, and Jesus A. Izaguirre, “Predicting Protein-protein Interactions from Protein Domains Using a Set Cover Approach,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, Vol. 4, No. 1, pp. 78-87, 2007.
[18] Frans Coenen and Paul Leng, “The Effect of Threshold Values on Association Rule based Classification Accuracy,” Data & Knowledge Engineering, Vol. 60, No. 2, pp. 345-360, 2007.
[19] John Richard Davies, “Statistical Methods for Matching Protein-ligand Binding Sites,” Ph.D. Dissertation, School of Mathematics, University of Leeds, 2009.
[20] S. Lukman, K. Sim, J. Li, Y.-P. P. Chen, “Interacting Amino Acid Preferences of 3D Pattern Pairs at the Binding Sites of Transient and Obligate Protein Complexes,” Asia-Pacific Bioinformatics Conference, pp. 14-17, 2008.
[21] Huang-Cheng Kuo, Jung-Chang Lin, Ping-Lin Ong, Jen-Peng Huang, “Discovering Amino Acid Patterns on Binding Sites in Protein Complexes,” Bioinformation, Vol. 6, No. 1, pp. 10-14, 2011.
[22] Aaron P. Gabow, Sonia M. Leach, William A. Baumgartner, Lawrence E. Hunter and Debra S. Goldberg, “Improving Protein Function Prediction Methods with Integrated Literature Data,” BMC Bioinformatics, 9:198, 2008.
[23] Mojdeh Jalali-Heravi, Osmar R. Zaïane, “A Study on Interestingness Measures for Associative Classifiers,” ACM Symposium on Applied Computing, pp. 1039-1046, 2010.
[24] Xiaoxin Yin and Jiawei Han, “CPAR: Classification based on Predictive Association Rules,” SIAM International Conference on Data Mining, pp. 331-335, 2003.
[25] B. Liu, W. Hsu, Y. Ma, “Integrating Classification and Association Rule Mining,” KDD Conference, pp. 80-86, 1998.
Digitalization Boosting Novel Digital Services for Consumers
Kaisa Vehmas1, Mari Ervasti2, Maarit Tihinen3 and Aino Mensonen4
1
VTT Technical Research Centre of Finland Ltd
Espoo, PO BOX 1000, 02044, Finland
[email protected]
2
VTT Technical Research Centre of Finland Ltd
Oulu, PO BOX 1100, 90571, Finland
[email protected]
3
VTT Technical Research Centre of Finland Ltd
Oulu, PO BOX 1100, 90571, Finland
[email protected]
4
VTT Technical Research Centre of Finland Ltd
Espoo, PO BOX 1000, 02044, Finland
[email protected]
Abstract
Digitalization has changed the world. The digital revolution has promoted the Internet, and more recently mobile network infrastructure, as the technological backbone of our society. Digital technologies have become more integrated across all sectors of our economy and society, and create novel possibilities for economic growth. Today, customers are more and more interested in value-added services, compared to the basic products of the past. The use of novel digital services, including mobile services, has increased both at work and during free time. However, it is important to understand the needs and expectations of the end users and to develop future services with them. This paper focuses on pointing out the importance of user involvement and co-design in digital service development and on providing insights into the transformation caused by the digital revolution. Experiences and effects of user involvement and co-design are introduced in detail via two case studies from the traditional retail domain.
Keywords: digital services, digitalization, user involvement, co-design, retail.
1. Introduction

The digital revolution is everywhere and it is continually changing and evolving. Information technology (IT) innovations, such as the Internet, social media, mobile phones and apps, cloud computing, big data, e-commerce, and the consumerization of IT, have already had a transformational effect on products, services, and business processes around the world [1]. In fact, information and communications technology (ICT) is no longer a specific sector, but the foundation of all modern innovative economic systems [2]. Digitalization is one of the successful themes for economic growth: data is often considered a catalyst for overall economic growth, innovation and digitalization across all economic sectors. For example, in Europe the Big Data sector is growing by 40% per year, seven times faster than the IT market [3].

Digitalization is affecting people's everyday lives, and changing the world. The pervasive nature of technology in consumers' lives also causes a rapid change in the business landscape [4]. The value of the ICT sector's manufacturing and services will increase faster than the world economy on average [5]. Thus, companies have to move their business into digital forms. Business models must change to support and improve new business opportunities, which are created together with the services. In order to build up an excellent digital service that meets the customers' needs, participatory design of the service is inevitable [6].

To be successful, innovative solutions must take into account opportunities provided by new technology, but they cannot lose sight of the users. In practice, companies have understood how important it is to understand the needs and expectations of the end users of the product or service. Users are experts on user experience and thus are a significant source of innovation [7]. Involving different stakeholders in the value chain, from the very start of the development process, increases customer acceptance, gives the developers new development ideas and gives the users the feeling that their voices have been heard. The interaction with the customer is the key issue. That is, keeping customers satisfied in a way that they feel that the service provider listens to them and appreciates their
opinions and activities is of major importance. This will
make it possible for companies to obtain and keep loyal
customers. [8]
2.1 Digital Services Boosting the Finnish Economy
In the beginning of the DS program DIGILE, Finnish
industry and Academia of Finland (http://www.aka.fi/en)
together identified four themes which would lead the
Finnish economy towards having the best means to reach
an advantageous position in the global market for mobile
services. The themes were 1) small and medium
enterprises (SME) services, 2) financial services, 3)
educational services, and 4) wellness services. The main
aim of the DS program was to create and begin to
implement various digital services, service platforms and
technologies which promote new or enhanced services, as
well as to ensure maintenance of new services in selected
areas. The structure of the DS program is presented in
Figure 1.
This paper focuses on pointing out the importance of user
involvement and co-design in digital services development
and providing insights of transformation caused by digital
revolution. Experiences and effects of user involvement
and co-design are introduced with details via two case
studies from the traditional retail domain. The research
was done as a part of large Digital Services (DS) program
(http://www.digital-services.fi) facilitated by DIGILE
(http://www.digile.fi), one of Finland‘s Strategic Centers
for Science, Technology and Innovation. DIGILE points
out that ICT-based digital services are the most important
way to provide added value to customers. Thus, DIGILE is
focused on promoting the development of digital service
know-how for business needs.
In the DS program, work was conducted in a true
partnership model, meaning that the program provided a
pool of complementary platforms where partners shared
and trialed their innovations and enabler assets. The
mission was accomplished by creating new innovative
services in these selected sectors and by recognizing the
need of enablers in their context. The ecosystem thinking
played a central role during the whole program.
The case studies described in this paper, Case A and Case
B, are introduced in detail for illustrating user involvement
and co-design while developing new digital services for a
traditional retail sector. In Case A, novel omnichannel
services for the customers were integrated into the retail
processes to better serve and meet the needs of the store‘s
rural customers closer to their homes. Customers living in
rural areas were known not to have access to the larger
selections of the retailer‘s online stores. The second, Case
B, aimed to understand consumer attitudes towards novel
digital service points in hypermarkets. Customers were
able to test the first version of a novel user interface to be
used in digital service points. The case studies emphasized
the importance of user involvement and co-design while
developing new digital services.
Fig. 1 The structure of the Digital Services program.
This paper is structured in the following way. In the
second chapter background information about the DIGILE
Digital Services program and digitalization in general are
given. Also, relevant literature concerning the retail sector
and the context of the case studies are introduced. The
third chapter presents the research approaches used with
the two case studies. In the fourth chapter case study
findings are introduced and discussed. Finally, in the fifth
chapter the main findings are summarized and concluded.
The key objectives for the SME services were optimizing
service creation tools for SMEs and sharing of know-how
in service building to enable a new class of service
development. The SME theme targeted creation of a pool
of companies implementing key services by or for the
SME sector. SMEs supported the whole services
ecosystem by utilizing and trialing service platforms
offered by the program. A pull of additional platform
features was created, and SMEs acted to create new
service products for their business. Rapid prototyping and
iterative research methods were utilized.
2. Background
In the case of financial services the program concentrated
on services and service enablers which bring added value
to the companies, customers, as well as to consumers, and
linked money transactions and financial services smoothly
to the ecosystem market. The goal was to introduce
mechanisms and platforms for service developers,
enabling financial transactions in the services and to
develop safe and flexible trust enablers, as well as cost-
In this chapter, the DS program is introduced with some
examples of developed digital services in order to build a
complete picture of where the case studies were carried
out. After that, an overview of digitalization and its‘ effect
on digital services are presented. Finally, digitalization in
the retail sector, the environment of our case studies, is
introduced in detail.
changes the world and affects people‘s everyday lives. The
huge impact of digitalization and the Internet will be felt in
different industrial and associated sectors, for example, 3D
printing, smart city services (e.g. lighting control,
temperature optimization), predictive maintenance
solutions, intelligent logistics, smart factories, etc.
efficient banking and payment tools, for both existing and
new innovative mobile services.
The goal of educational services was to increase the
utilization and availability of e-learning materials and
services in education. It was not only concentrated on
support for mobile and pervasive learning, but on high
quality services that could be easily integrated into
everyday processes of the ordinary school day. User
perspectives played an important role; this was seen as
especially important in cases that aim at the global market
or developing services for challenged learners.
Digitalization also means that businesses make use of
electronic information exchange and interactions. For
example, digitalization in factories allows end-to-end
transparency over the entire manufacturing process, so that
individual customer requirements can be implemented
profitably and the produced solutions managed throughout
their life cycle. Existing businesses can be modernized
using new technologies, which potentially also generate
entirely new types of businesses. Evans and Annunziata
[10] highlight the promise of the Industrial Internet by
stating that it is the 3rd innovation wave – after the
Industrial Revolution (1st wave) and the Internet
Revolution (2nd wave). The growing importance of
context-awareness, targeting enriched experience, intuitive
communication services and an increasingly mobile
society, requires intelligent services that are smart, but
invisible to users. Hernesniemi [11] argued that the value
of the ICT sector‘s manufacturing and services will
increase faster than the world economy on average. For
example, e-Commerce is growing rapidly in the EU at an
average annual growth rate of 22%, surpassing EUR 200
billion in 2014 and reaching a share of 7% of total retail
sales [12].
In the case of wellness services the aim was to create a
wellness ecosystem with common platform access and the
capability to develop enablers and tools for integrating
different categories of value adding services and
technologies. It was also targeted towards developing
components and capabilities for integrating technologies
for automatic wellness data collection. Data analysis will
be facilitated by developing and enabling the integration of
tools for professional wellness data analysis and content
delivery.
During 2012-2015, 85 organizations (53 SMEs, 19 large
companies and 13 research organizations) in total
participated in the DS program. The program exceeded the
goals by achieving 27 highlighted new services and 18
features. In addition, three new companies were
established. One of the successful examples of results
achievement is Personal Radio. This offers consumers new
services and personal content from different sources based
on recommendation engine technology. The ecosystem
thinking was an enabling asset: there have been several
companies involved in trying to create the service, e.g.,
companies to take care of content production and delivery,
speech synthesis, audio search, content analysis, payment
system, concept design, user interface, business models,
user experience and mobile radio player. In addition, in the
wellness services domain several wellbeing services were
developed. For example, novel mobile services to prevent
illnesses, such as memory disorder or work-related
musculoskeletal disorders, were developed. A novel mobile alternative to traditional marital therapy and coaching methods is now also available. In this paper, two pilot cases
are introduced as examples of digitalization and
developing digital services in the retail sector.
In the digital economy, products and services are linked
more closely to each other. The slow economic growth
during recent years has boosted the development of
product-related services even more – these services have
brought increasing revenue for the manufacturing
companies in place of traditional product sales. The global
market for product and service consumption is steadily
growing. Today consumers are key drivers of technology
and change as new digital tools, e.g., comparison websites,
social media, customization of goods and services and
mobile shopping, have empowered them [13]. Customers
are more and more interested in value-added services
compared to the basic products themselves.
Now, companies around the world are not only willing to
use digital technologies to obtain transformation—they
must [14]. Companies are working towards achieving
digital transformation, but still most are lacking experience
with emerging digital technologies and they are skeptical.
The key issue is to respond effectively and quickly to
newly available technologies in order to gain better
customer experiences and engagement, streamlined
operations and new lines of business. Accordingly, digital
services are a strong global trend in the world: long-term
2.2 Digitalization Effects on Services and Business
The digital revolution has promoted the Internet and more
recently mobile networks infrastructures as the
technological backbone of our society. The Internet and
digital technologies have become more integrated across
all sectors of our economy and society [9]. Digitalization
development is moving the weight of economic value
creation from agriculture, to goods, and then services. The
service sector is the fastest growing segment of global
economies. Figure 2 illustrates the trend in the USA.
of these displays is to provide better service for customers
and promote sales. From a retail perspective, these
displays can be seen as one of the many channels that aim
to capture people‘s attention and affect their customer
behavior.
ICT has a remarkable impact on the development of
services; ICT enables completely new services, increases
the efficiency of service production, enhances the
availability of services, and increases the profitability of
service business. Kettunen et al. [15] have identified six
megatrends in ICT and ICT-enabled services: 1) data-intensiveness, 2) decentralized system architectures, 3)
fusion of real and virtual, 4) disappearing (or hidden)
human interface, 5) web-based organization of work and
life, and 6) increasing need to manage social robustness.
For example, the advantage of the data-intensiveness is
that the service providers can automatically collect and
analyze a large amount of customer or process data, and
also combine it with other data that is available free of
charge. This helps service providers to develop their
services, e.g., by customizing the services and creating
new ones.
2.3 Digitalization of the Retail Sector
The retail sector is considered one of the most rapid
technology-adoptive sectors (e.g., [19]). Over the years,
retailers have learned how to design their stores to better
meet shoppers‘ needs and to drive sales. In addition, the
technical infrastructure that supports most retail stores has
grown enormously [20]. The retail industry has evolved
from traditional physical stores, through the emergence of
electronic commerce, into a combination of physical and
digital channels. Seeing the future of retailing is quite
complex and challenging; busy customers expect that
companies use innovative approaches to facilitate their
shopping process efficiently and economically, along with
providing value-added shopping experiences. People no
longer only go shopping when they need something: the
experience of shopping is becoming more important [21].
There are a number of challenges and opportunities
retailers face on their long-term radar, such as changes in
consumer behavior and consumer digitalization. These
drivers affecting the retail sector should be a key
consideration for retailers of all shapes and sizes [22].
It is likely that the power of the consumer will continue to
grow [23], and from the demand side, consumers will be
empowered to direct the way in which the revolution will
unfold [24]. The focus on buying behavior is changing
from products to services [25]. Thus, the established
retailers will need to start considering how they can more
effectively integrate their online and off-line channels to
provide customers with the very highest levels of service.
Fig. 2 Long term industry trends [16].
The use of mobile services has increased both at work and
during free time, and social communities are increasingly
formed. People are able and willing to generate content
and the line between business and private domains is
increasingly blurred [17]. The idea of everybody having
their own personal computer is being reborn and has
evolved into everyone having their own personal cloud to
store and share their data and to use their own applications
[18]. This is driving a power shift away from personal
devices toward personal services.
It is now widely recognized that the Internet‘s power,
scope and interactivity provide retailers with the potential
to transform their customers‘ shopping experiences, and in
so doing, strengthen their own competitive positions [26].
Frost & Sullivan [27, 28] predicts that by 2025, nearly
20% of retail will happen through online channels, with
global online retail sales reaching $4.3 trillion. Thus,
retailers are facing digitalization of the touch-point and
consumer needs [29]. By 2025, 80 billion devices will
connect the world with each person carrying five
connected devices [30]. Mobile and online information
technology make consumers more and more flexible in
terms of where and how they wish to access retailer
information and where and how to purchase products.
Consumer behavior is changing as a growing number of
smarter, digitally-connected, price-conscious consumers
Currently, digital signage is widely used in different
environments to deliver information about a wide array of
topics with varying content formats. Digital signs are
generally used in public spaces, transportation systems,
sport stadiums, shopping centers, health care centers etc.
There are also a growing number of indoor digital displays
in shopping centers and retail stores. The underlying goal
All these needs and requirements must come together as a
unified, holistic solution, and retailers should be able to
exploit the channel-specific capabilities in a meaningful
way [43].
exploit multiple shopping channels, thus making the
multichannel retail approach an established shopping
behavior [31]. Described as channel agnostic, modern
consumers do not care whether they buy online, via mobile
or in-store as long as they get the product they want, when
they want it, at the right price. A new behavior of test-and-buy-elsewhere is becoming more common [32] and
retailers must adapt to the buying behavior of these
“channel-hoppers” [33]. Aubrey and Judge [34] talk about ‘digital natives’ who are highly literate in all things digital,
and their adoption of technology is easy and distinctive.
3. Developing New Digital Services
In this chapter the research approaches for our case studies
are introduced in detail. The case studies emphasize user
involvement and co-design while developing new digital
services.
However, simply “adding digital” is not the answer for
retailers – yet that is an approach too often taken [35]. For
traditional retailers to survive, they must pursue a strategy
of an integrated sales experience that blends online and in-store experiences seamlessly, leading to the merger of a
web store and a physical store [36]. According to Frost &
Sullivan [37], the retail model will evolve from a
single/multiple channel model to an integrated hybrid
cross-channel model, identified as bricks and clicks. Thus,
shoppers of the future float seamlessly across mobile,
online and real-world platforms [38].
3.1 Case Study A
In this research context the retailer wanted to integrate
novel services and adapt retail processes to better serve
and meet the needs of the store‘s rural customers closer to
their homes. Customers living in rural areas were known
not to have access to the retailer‘s online store‘s larger
selection. In the development of the novel service concept
the utilization of Internet possibilities and the importance
of sales persons guiding and socializing alongside the
customers at the physical store were also emphasized [44].
Adoption of both online and physical channels, to sell
simultaneously through multiple marketing channels, is
referred to as multichannel retailing [39]. Today, in an
ever digitizing world the line between channels is fading
as the different channels are no longer separate and
alternative means for delivering shopping services, but
customers increasingly use them as complements to each
other, or even simultaneously. Hence, the term
multichannel is not enough to describe this phenomenon,
and instead the new concept of omnichannel is adopted
[40]. Omnichannel is defined as “an integrated sales experience that melds the advantages of physical stores with the information-rich experience of online shopping”.
The customers connect and use the offered channels as
best fits their shopping process, creating their unique
combinations of using different complementary and
alternative channels. In an omnichannel solution the
customer has a possibility to seamlessly move between
channels which are designed to support this “channel-hopping”.
Case study A was conducted in the context of developing
and piloting a novel omnichannel service concept for a
Finnish retail chain (described in more detail in [45]). A
starting point for the new service was a need to provide a
wider selection of goods for the customers of a small,
distant rural store. The store is owned by a large national
co-operative retail chain.
The service concept was based on the idea of providing
customers with the selection available in large stores by
integrating an e-commerce solution within the service of a
rural store. This was practically done by integrating the
service provider‘s digital web store to the service
processes of the small brick-and-mortar store. Burke [46]
suggests that retailers who want to web-enable their store
should optimize the interface to the in-store environment
instead of just providing web access. Thus, one of the
driving design principles of our case study was to achieve
a seamless retail experience by a fusion of web and
physical retail channels. The novelty of the service concept
was in how it was integrated to the service processes of a
physical store, i.e., how the different channels were used
together to create a retail experience that was as seamless
as possible.
Payne and Frow [41] examined how multichannel
integration affects customer relationship management and
stated that it is essential to integrate channels to create
positive customer experiences. They pointed out how a
seamless and consistent customer experience creates trust
and leads to stronger customer relationships as long as the
experience occurs both within channels and between them.
Technology-savvy consumers expect pre-sales information, during-sales services and after-sales support
through a channel customized to their convenience [42].
A co-design process was used in the service design. The
build-and-evaluate design cycle involved a small group of
researchers and the employees of the retail company. The
researchers were active actors in the design process,
participating in the service concept design and facilitating
co-design activities. Technical experts of the retail
organization were involved in specification of how the
developed solution would best integrate with the existing
infrastructures of the organization, and how the new
solutions related to the strategic development agenda of
other related omnichannel solutions. Retail experts were
involved in designing the customer journey, tasks of the
staff, the service solution‘s visual and content design, and
internal and external communication required by the
service.
screen (located on the wall above the terminal) that
advertised the new retail service concept and directed the
customers in its use.
3.1.1. Research Approach of Case Study A
The focus of the research was on more closely
investigating and analyzing the customer and personnel
service experience and deriving design implications from
the gained information for developing and improving the
service concept further. The user experience data
collection methods and the number of stakeholders for
each method are listed in Table 1.
The pilot study was conducted in a small rural store that
was part of the service provider‘s retail chain, located in
the city of Kolari (www.kolari.fi) in northern Finland, with
a population of 3,836. The customers visiting the physical
store could access the selection of goods otherwise not
available through a web store interface. The study was
launched with a goal of eventually scaling up the digital
retail service concept to other small rural stores of the
service provider. The retail service included a touch screen
customer terminal located inside the physical store (see
Figure 3).
The research study was focused on the two main user
groups: store customers and personnel. Altogether 35 store
customers were interviewed, and of these 10 also
experimented with the service hands-on by going through
the controlled usability testing. The ages of the study
participants among the customers varied from 21 years to
73 years. Altogether six members of the store personnel
participated in the interviews.
Table 1. Summary of the data collection methods and number of
participants for each method.
Data collection method                      Number of participants
Interviews with store customers             35 store customers
Usability testing                           10 store customers
Paper questionnaires                        10 returned questionnaires
Group interviews with store personnel       6 members of store personnel
Phone calls                                 1 store superior
Automatic behaviour tracking                ~484 service users
A set of complementary research methods were used to
monitor and analyze the retail experience. The interviews
were utilized as a primary research method, accompanied
by in-situ observation at the store and a questionnaire
delivered to customers. These qualitative research methods
were complemented with quantitative data achieved
through a customer depth sensor tracking system installed
inside the store. Interviews were utilized to research
customer and personnel attitudes and expectations towards
the novel service concept, motivations for the service
adoption and usage, their service experiences, and ideas
for service improvement. Two types of structured
interviews were done with the customers: a) General
interview directed for all store customers, and b) interview
focusing on the usability aspects of the service (done in the
context of the usability testing). Usability testing,
accompanied with observations, was conducted to gain
insights into the ways customers used the service.
Fig. 3 The digital retail service inside the store.
The customers could use the terminal for browsing,
comparing and ordering goods from the retail provider‘s
web store selections. The two web stores accessible
through the customer terminal were already existing and
available for any customers through the Internet
connection. In addition, the retailer piloted the marketing
and selling of their own campaign products through a new
web store interface available on the customer terminal.
The customers could decide whether they wanted their
product order delivered to a store (the delivery was then
free of charge) or directly to their home. After placing the
order, the customer paid for the order at a cash register at
the store alongside their other purchases. The customer
terminal was also accompanied by a large information
Paper questionnaires were distributed for the customers
who had ordered goods through the service, with a focus
on gathering data of their experiences with the service
ordering process. Also a people tracking system based on
one tablet device). Consumers were able to freely
comment on their experience and they were also
interviewed after testing the novel service prototype.
depth sensor was used to automatically observe the
customers. The special focus of the people-tracking was to
better understand the in-store customer behavior, and to
collect data in more detail of the number of customers
using the service through the customer terminal, and of the
duration and timing of the service use.
3.2 Case Study B
In Case B, the retailer‘s goal was to improve customer
service in the consumer goods trades by implementing
novel digital service points in the stores. Generally, using
these displays customers were able to browse the selection
of consumer goods and search detailed information about
the products. On displays customers were also able to see
advertisements and campaign products available at the
store. Customer displays help consumers in making
purchase decisions by providing guides and selection
assistant. In addition to that, customers can get help to find
certain products from the bigger store by utilizing a map
service. It has also been planned that customers could use
the displays to find a wider selection of consumer goods
from the online shop and place the online order in the
store.
Fig. 4 Test setup in the study.
4. Results and Findings from the Case Studies
In this chapter the main results and findings of the case
studies are presented in detail for introducing user
involvement in the development process.
This case study aimed to understand consumer attitudes
towards digital service points in Prisma hypermarkets. The
research was divided into three tasks:
4.1 Results of Case Study A
The findings from Case Study A are analyzed from the
viewpoint of two end-user groups, namely the rural store
customers and personnel.
1. Digital service points as a service concept
2. Type and location of the digital service point in the
store
3. Online shopping in the store.
4.1.1 Store Customers
Altogether, 35 customers of the store were asked about
their attitudes, expectations, and experiences related to the
novel retail service concept.
In the study, customers were able to test the first version of
the novel user interface to be used in digital service points
in stores and compare different screens. The goal was to
gather information about customer experience, their
expectations and needs, and also ideas of how to develop
the user interface further. A test setup of the study is
presented in Figure 4. The novel user interface was tested
with the big touch screen (on right). The other touch
screens were used to test with the web store content just to
get an idea about the usability of the screen.
Interviews and paper questionnaires. When asked
whether or not the customers were likely to use the novel
retail service on a scale of 1-5 (where 1 = not likely to use,
5 = likely to use), the average was 2.6, resulting in 16
interviewees responding not likely to use the service and
19 interviewees responding likely to use the service.
3.2.1 Research Approach of Case Study B
The work was carried out in the laboratory environment,
not in the real hypermarket. Consumers were invited to
participate in a personal interview where their attitudes
towards customer displays were clarified. The interviews
were divided into two phases. First, the background
information and purchase behavior was discussed and the
novel digital service concept was presented to the
customers. In the second phase they were able to test the
proof of concept version of the user interface and compare
different types of devices (two bigger touch screens and
Regarding those 16 customers stating not likely to use the
novel retail service, the age distribution was large, as this
customer group consisted of persons aged between 21 and
73 years, the average age being 44 years. The gender
distribution was very even; 10 men vs. 9 women (some
respondents comprised of couples who answered the
researchers‘ questions together as one household). Except
for one person, all the interviewees said they visited quite
regularly the nearest (over 150 kilometers) bigger cities for
shopping purposes. Of the 16 respondents 13 had either no
or only little experience with online shopping. This
Usability testing. A total of ten store customers
participated in the usability testing. The customers were
directed to go through a set of predetermined tasks with
the retail service interface, and they were asked to ―think
aloud‖ and give any comments, feedback and thoughts that
came to their mind during the interaction with the service.
Their task performance was observed by the researchers
and notes were taken during the customer‘s
experimentation with the service. The tasks included 1)
browsing the product selections available through the web
stores, 2) looking for more detailed product information,
and 3) ordering a product from two different web stores.
customer group gave the following reasons for not being
so eager to adopt the novel retail service in use (direct
quotes translated from Finnish):
“I do not need this kind of a service.”
“Everyone has an Internet connection at home. It is easier to
order [products] from home.”
“Might be good for someone else…”
On the other hand, 19 responders stated that they were
likely to use the retail service in the future. Also in this
customer group the gender distribution was very even, as
the responders consisted of 10 men vs. 11 women. The age
distribution was respectively diverse, from 29 to 72 years,
the average age being 51 years. In addition, everyone
regularly made shopping journeys to the closest bigger
cities. In this customer group, 11 respondents had some
experience with online shopping, with six respondents
stating they often ordered products online. These
customers justified their interest towards the novel retail
service in the following ways (direct quotes translated
from Finnish):
The biggest difficulty the customers encountered was
related to the touch-based interaction with the service
terminal. The terminal‘s touch screen appeared not to be
sensitive enough, resulting in six out of ten customers
experiencing difficulties in interacting with the touch
screen. In addition, it was not immediately clear for the
customers that the terminal indeed was a touch screen, as
six customers hesitated at first and asked aloud whether
the terminal had a touch screen: “Do I need to touch this? /
Should I touch this?”
“Everything [new services] that comes need to be utilized so that
the services also stay here [in Kolari].”
“We do not have much [product] selections here.”
“Really good… No need to visit [bigger cities] if we do not have
other businesses/chores there.”
“Sounds quite nice… If there would be some product offers.”
“If there [in the digital retail service] would be some specific
product that I would need, then I could use this.”
However, interestingly four customers out of ten changed
their initial answer regarding their willingness to use the
service (asked before actually experimenting with the
service UI) in a more positive direction after having a
hands-on experience with the service. Thus, after usability
testing, the average raised a bit from the initial 2.6 to 2.7
(on a scale of 1-5). None of the customers participating in
the usability testing changed their response in a negative
direction. Other valuable usability findings included
observation on the font size on the service UI, insufficient
service feedback to the customer, and unclear customer
journey path.
To conclude, age or gender did not seem to have an effect
on the store customers‘ willingness to use the retail
service. Neither did the shopping journeys to bigger cities
influence the willingness for service adoption, as most of
the customers made these shopping journeys regularly.
However, previous experience with online shopping
appeared to have a direct effect on the customers‘
willingness to use the retail service. If the customer did not
have, or had only little previous experience with ordering
products from web stores, the person in question often also
responded not likely to adopt the retail service into use.
However, if the customer was experienced with online
shopping, they had a more positive attitude and greater
willingness to use the novel retail service.
Automatic tracking of store customers’ behaviors. A
depth sensor-based system was used for detecting and
tracking objects (in this case people) in the scene, i.e.,
inside the physical store. Depth sensors are unobtrusive,
and as they do not provide actual photographic
information, any potential privacy issues can be more
easily handled. The sensor was positioned so that it could
observe the customer traffic at the store‘s entrance hall
where the service terminal was positioned. Sensor
implementation is described in more detail in [47].
Paper questionnaires were distributed to the customers who had ordered products through the retail service (either at home or through the store's customer terminal), with the goal of studying the customers' experiences of the ordering process. These customers identified the most positive aspects of the service as the following: 1) wider product selection, 2) an unhurried [order process], 3) ease of comparing the products and their prices, 4) fast delivery, and 5) free delivery.
The purpose of the depth sensor tracking implementation was to better understand in-store customer behavior and to gather more detailed data on 1) the number of customers using the service terminal, and 2) the duration of the service use. The data was recorded during a total of 64 days. Most of those days contain tracking information from all the hours the store was open. Some
hours are missing due to the instability of the people-tracking software. From the recorded data, all store customers who came into the near range of the service set-up were analyzed. The real-world position of the customers using the service terminal was mapped to the people-tracker coordinates, and all customers who came within a 30 cm radius of the user position and stayed still for more than three seconds were accepted. The radius around the user position was kept relatively small in order to minimize the distortion of the data caused by confusing users of the slot machine with service terminal users.

The results show that most of the users used the service for a relatively short time. On average, 0.54 store customers per hour used the service terminal. It is reasonable to assume that proper usage of the service system would most likely take more than 120 seconds; the shorter the usage period, the less serious or determined the user session has been. The average usage period was 58.4 seconds. Thus, the service usage appeared to be quite short-term, indicating that in most cases the usage was not so "goal-directed", but rather consisted of sessions where store customers briefly familiarized themselves with the novel service. During the hours the store was open, from 7am to 9pm, there were on average 7.56 service users per day. Over the week, Saturday and Sunday attracted the most service users, and the busiest times were at 1-2pm and 6-7pm.
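To make the acceptance rule concrete, the sketch below applies the same two thresholds (a 30 cm radius around the terminal's user position and a minimum dwell of three seconds) to hypothetical tracker output. The data layout, field names and coordinates are illustrative assumptions and not the actual output format of the people-tracking software described in [47].

```python
import math

TERMINAL_POS = (2.1, 0.8)   # hypothetical terminal user position in metres
RADIUS = 0.30               # 30 cm acceptance radius around that position
MIN_DWELL = 3.0             # minimum dwell time in seconds

def terminal_sessions(tracks):
    """tracks: dict of person id -> list of (timestamp_s, x_m, y_m) samples.
    Returns the duration of each accepted service-terminal session."""
    sessions = []
    for samples in tracks.values():
        # Timestamps of samples that fall inside the acceptance radius.
        inside = [t for t, x, y in samples
                  if math.hypot(x - TERMINAL_POS[0], y - TERMINAL_POS[1]) <= RADIUS]
        if inside and (max(inside) - min(inside)) >= MIN_DWELL:
            sessions.append(max(inside) - min(inside))
    return sessions

# Two hypothetical visitors: one stays about a minute at the terminal,
# the other only walks past it.
tracks = {
    "visitor_a": [(0.0, 2.15, 0.82), (30.0, 2.05, 0.78), (58.0, 2.12, 0.79)],
    "visitor_b": [(0.0, 1.00, 0.10), (1.0, 2.10, 0.80), (2.0, 3.00, 1.50)],
}
durations = terminal_sessions(tracks)
print(len(durations), "accepted session(s), mean duration",
      sum(durations) / len(durations), "s")
```

Averaging the accepted session durations over the recording period is, in principle, how figures such as the 58.4-second mean usage period and the 0.54 users per hour could be derived from the tracker data.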
4.1.2. Store Personnel
The goal of the group interviews was to investigate store personnel attitudes and expectations towards the novel service concept, as well as ideas for service improvement and further development. In addition, the store superior was contacted every other week with a phone call to enquire about the in-store service experiences, both from the viewpoint of the store customers and of the personnel.

Group interviews and phone calls. Two group interviews with six members of the store personnel were carried out at the same time as the personnel were introduced and familiarized with the service concept, alongside their new service-related work tasks. In general, the attitudes of the store personnel towards the novel service appeared enthusiastic and positive. Naturally, the novel service also invoked some doubts, mostly related to its employing effect on the personnel, the clearness and learnability of the order processes, and the formation of new routines around the service that would streamline the new work duties and thus ease the personnel's workload.

In addition, the following comments illustrate the general thoughts and expectations of the store personnel regarding the service:
"This is [indeed a useful service], since we have these long distances [to bigger cities]. Now a customer can buy the washing machine from us."
"Always the adding of more services should be a positive thing."
"More services also always mean more customers."
"When we get our own routines and set-up for this, I'm certain this will succeed!"
"…Should have distribution of [personnel's] work with this."

During the first two months of the case study, inquiry calls were made every two weeks to the store superior in order to keep records and obtain information on the progress of the service adoption at the store, as well as on possible problems encountered from the viewpoint of both the customers and the personnel. In general, the novel retail service appeared to have been quickly and well integrated into the personnel's work processes.

4.2 Results of Case Study B
The target group of Case Study B consisted of a working-age population. In total, 17 people were interviewed (8 women and 9 men). Their ages varied between 27 and 63 years. Most of the interviewees (88%) lived in the Helsinki metropolitan area. Over half of the interviewees (62%) commonly used the retailer's stores to buy daily consumer goods. The most remarkable factor affecting the selection of the store was location. Selection of goods, quality of the products, price level, the bonus system and other services besides the store location were also important criteria in the consumers' choice of which store to go to.

Consumers are mainly confident with the selection of goods in the retailer's stores. According to the customers, it is usually easy to find the different products in smaller and familiar stores, whereas in unfamiliar bigger hypermarkets it is sometimes a real challenge. If some product is not available in the store, the customer usually goes to some other store to buy it. Most of the interviewees (71%) also shop online, on average 1-2 times a month, and they usually buy clothes and electronics. Online shopping is liked mainly because of cheaper prices, a wider selection of consumer goods, and because it is easy and fast.

Generally, the customers (88%) liked the idea of novel digital service points in the Prisma stores. They felt that the customer displays sped up getting useful information and made shopping in the stores more effective. According to the interviewees, the most important services were the map service and product information. Especially in bigger hypermarkets, and if the customers are not familiar with
the store, it is sometimes challenging to find certain
products. The map service could include additional
information about the location of the department and the
shelf where the product can be found. In the hypermarkets there are usually limited possibilities to offer detailed information about the products. With this novel digital service, customers want to get more information about the products and to compare them.
5. Conclusions
Today the Internet and digital technologies are becoming
more and more integrated across all sectors of our
economy and society. Digitalization is everywhere; it is
changing the world and our everyday lives. Digital
services provide new services or enhanced services to
customers and end users. In a DIGILE Digital Services
program, 85 Finnish partners innovated and developed
novel digital services in 2012-2015 by recognizing the
need of enablers in their context. Work was conducted in a
true partnership model, in close co-operation with research
organizations and companies. During the whole program,
ecosystem thinking had a big role in innovating and
developing the solutions. The program exceeded the goals
by achieving 27 highlighted new services and 18 features.
In addition, three new companies were established as a
result of ecosystem thinking and companies shared and
innovated together new or enhanced digital services.
A proof of concept version of the novel user interface
received positive feedback; customers thought it was clear,
simple and easy to use. They also felt that it was
something new and different, compared to traditional web
sites. It was pointed out that there is too much content, e.g. in Prisma's web store, to be flipped through in the hypermarket. It is important to keep the content and layout of the novel user interface simple.
People are more willing to do online shopping at home. However, online shopping in the store was not rejected outright, and the interviewees identified several circumstances in which they
could utilize it. For example, it could be easy to do online shopping at the same time as other shopping in Prisma stores. If a certain product is no longer available in the store, customers could buy it online in the store, especially sale products.
In many cases in the DS program the role of consumers
and stakeholders was remarkable in the development
process. Narratives, illustrations and prototypes enhanced
the co-development of new solutions from ideas through
trials and evaluations to working prototypes. There is a
wide scale of tools, methods and knowledge available for
demonstrating ideas and opportunities enabled by
emerging technologies, and for facilitating co-innovation
processes. In this program, novel game-like tools were
developed to easily involve different groups of people in
the development process. The tools support efficient and
agile development and evaluation for identifying viable
concepts in collaboration between experts and users.
According to the customers, there should be several digital service points in Prisma stores, as customers are not willing to queue up for their turn. The service points should
be located next to the entrance and also in the departments,
next to the consumer goods. The service point should be a
peaceful place where they have enough privacy to
concentrate on finding information and shopping. Still,
there should be something interesting on the screen,
something that attracts the customers. The youngest
interviewees commented that they would like to go to test
the new device and find out what it is, whereas the eldest
interviewees said that they would like to know beforehand
what they could get from the new service. The screen has to be big enough and of good quality. The interviewees thought that the touch screen was modern. A tablet was considered too small to serve as the screen of a digital service point.
In this paper, two retail case studies were presented in
detail. Case Study A was conducted in the context of
developing and piloting a novel omnichannel service
concept in distant rural store. Case Study B concentrated
on novel digital service points in hypermarkets. The need for these kinds of novel digital services has different starting points; in Kolari the selection of goods
in the local store is limited and the distance to bigger cities
and stores is long. In the Helsinki area, the selection of
stores and goods is huge and people are also more
experienced in shopping online. Still, in both cases, people
expect more quality customer service, e.g., in terms of
value added shopping experience, easier shopping and
wider selection of goods. In both case studies customers
stated they were likely to use the novel digital retail
services in the future. The behavior of the consumer has
changed due to digitalization and this change must be
taken into consideration when developing services for the
customers.
In addition, the retailer got some new ideas for developing
customer service in the stores. For example, some of the
interviewees suggested a mobile application for customer
service and map service, and it could also be used as a
news channel. Customers could also create personalized
shopping lists with it. Authentication to the digital service
point could be executed by fidelity cards in order to
receive personalized news and advertisements and to
accelerate the service.
Digital service points are one option for offering digital services to retail customers. Still, others, e.g. mobile services, have unlimited possibilities to create added value for the customers. In this type of development work, when something new is developed for the consumer, it is essential to involve real customers from the beginning of the planning and development process. Customers are the best experts in user experience. In this study, consumers were involved from the very early stage to co-innovate and co-develop the retail services. It has been noted that active user involvement in the early stage of a development process increases the quality of the novel service, and that user involvement leads to better user acceptance and commitment.
Integrating novel digitally-enabled retail services with a physical store requires innovative thinking from retailers. Customers are interested in having these types of novel digital services in the stores; they regard them as modern and forward-looking options. Most of the customers see that the digital service points make shopping more effective, and they expect to get useful information faster compared to the current situation in the hypermarkets.
As introduced in this paper, user involvement and co-design have a central and very important role when developing novel digital services for customers. In fact, the feedback and opinions of end users can significantly improve or change the final results. The DS program facilitated the development of novel digital services by providing an ecosystem where companies could share and pilot their innovations. This kind of ecosystem thinking was seen as very useful and productive.
These retail-related case studies were implemented in
order to better understand the challenges and opportunities
in this domain. Based on these studies the most important
issues for retailers to take into account when implementing
digital services in the stores are:
• Keep it simple. Keeping the layout and user interface clear and easy makes it possible to serve all the user groups digitally.
• Central location. Digital service points should be situated in noticeable positions in the stores. Customers do not want to search for the service points or queue up to use the service. Enlarging and clarifying the instructional and informational texts is part of this issue. Also, the elements of the graphical user interface must be considered carefully so that they arouse customer interest when passing by.
• Adding more privacy. Despite the central location, privacy issues have to be taken into consideration when implementing the digital service point in the store. The service points should offer an undisturbed place for searching for information and shopping.
• High quality screens. Customers are nowadays experienced in using different kinds of screens. A touch screen was felt to be modern. The screens have to be of high quality to ensure the smoothness of interaction between the customer and the service terminal user interface.
• Going mobile. Customers also asked for mobile services connected to this novel digital service. This could bring the retailers an unlimited number of possibilities to offer their services also outside the stores.

Acknowledgments
This work was supported by Tekes – the Finnish Funding Agency for Innovation (http://www.tekes.fi/en) and DIGILE. We also want to thank all the tens of participants we have worked with in this program.
References
[1] Bojanova, I. (2014). "The Digital Revolution: What's on the Horizon?" IT Pro, January/February 2014, 12 p. IEEE.
[2] European Commission (2015a). "A Digital Single Market Strategy for Europe." 20 p. Available: http://ec.europa.eu/priorities/digital-single-market/docs/dsm-communication_en.pdf [29.6.2015].
[3] European Commission (2015b). "A Digital Single Market Strategy for Europe – Analysis and Evidence." Staff Working Document. 109 p. Available: http://ec.europa.eu/priorities/digital-single-market/docs/dsm-swd_en.pdf [29.6.2015].
[4] Fitzgerald, M., Kruschwitz, N., Bonnet, D. & Welch, M. (2013). "Embracing Digital Technology. A New Strategic Imperative." MIT Sloan Management Review. Available: http://sloanreview.mit.edu/projects/embracing-digital-technology/ [4.6.2015].
[5] Hernesniemi, H. (editor), (2010). "Digitaalinen Suomi 2020. Älykäs tie menestykseen." Teknologiateollisuus ry. 167 p.
[6] Mensonen, A., Grenman, K., Seisto, A. & Vehmas, K. (2015). "Novel services for publishing sector through co-creation with users." Journal of Print and Media Technology Research, 3(2014)4, pp. 277-285.
[7] Thomke, S. & von Hippel, E. (2002). "Customers as innovators: a new way to create value." Harvard Business Review, 80(4), pp. 74-81.
[8] Mensonen, A., Laine, J., & Seisto, A. (2012). "Brand experience as a tool for brand communication in multiple channels." Advances in Printing and Media Technology, IARIGAI Conference, Ljubljana, 9-12 September 2012.
[9] European Commission (2015a). "A Digital Single Market Strategy for Europe." 20 p. Available: http://ec.europa.eu/priorities/digital-single-market/docs/dsm-communication_en.pdf [29.6.2015].
[10] Evans, P. C., & Annunziata, M. (2012). "Industrial Internet: Pushing the Boundaries of Minds and Machines." GE Reports. Available: http://www.ge.com/docs/chapters/Industrial_Internet.pdf [18.5.2015].
[11] Hernesniemi, H. (editor), (2010). "Digitaalinen Suomi 2020. Älykäs tie menestykseen." Teknologiateollisuus ry. 167 p.
[12] European Commission (2015b). "A Digital Single Market Strategy for Europe – Analysis and Evidence." Staff Working Document. 109 p. Available: http://ec.europa.eu/priorities/digital-single-market/docs/dsm-swd_en.pdf [29.6.2015].
[13] European Commission (2015b). "A Digital Single Market Strategy for Europe – Analysis and Evidence." Staff Working Document. 109 p. Available: http://ec.europa.eu/priorities/digital-single-market/docs/dsm-swd_en.pdf [29.6.2015].
[14] Fitzgerald, M., Kruschwitz, N., Bonnet, D. & Welch, M. (2013). "Embracing Digital Technology. A New Strategic Imperative." MIT Sloan Management Review. Available: http://sloanreview.mit.edu/projects/embracing-digital-technology/ [4.6.2015].
[15] Kettunen, J. (editor), Vähä, P., Kaarela, I., Halonen, M., Salkari, I., Heikkinen, M. & Kokkala, M. (2012). "Services for Europe. Strategic research agenda and implementation action plan for services." Espoo, VTT. 83 p. VTT Visions; 1.
[16] Spohrer, J.C. (2011). Presentation: SSME+D (for Design) Evolving: "Update on Service Science Progress & Directions." RIT Service Innovation Event, Rochester, NY, USA, April 14th, 2011.
[17] Kettunen, J. (editor), Vähä, P., Kaarela, I., Halonen, M., Salkari, I., Heikkinen, M. & Kokkala, M. (2012). "Services for Europe. Strategic research agenda and implementation action plan for services." Espoo, VTT. 83 p. VTT Visions; 1.
[18] Bojanova, I. (2014). "The Digital Revolution: What's on the Horizon?" IT Pro, January/February 2014, 12 p. IEEE.
[19] Ahmed, N. (2012). "Retail Industry Adopting Change." Degree Thesis, International Business, Arcada – Nylands svenska yrkeshögskola.
[20] GS1 MobileCom (2010). "Mobile in Retail – Getting your retail environment ready for mobile." Brussels, Belgium: A GS1 MobileCom White Paper.
[21] Gehring, S., Löchtefeld, M., Magerkurth, C., Nurmi, P. & Michahelles, F. (2011). "Workshop on Mobile Interaction in Retail Environments (MIRE)." In MobileHCI 2011, Aug 30 - Sept 2 (pp. 729-731). New York, NY, USA: ACM Press.
[22] Reinartz, W., Dellaert, B., Krafft, M., Kumar, V. & Varadajaran, R. (2011). "Retailing Innovations in a Globalizing Retail Market Environment." Journal of Retailing, 87(1), pp. S53-S66. DOI: http://dx.doi.org/10.1016/j.jretai.2011.04.009.
[23] Aubrey, C. & Judge, D. (2012). "Re-imagine retail: Why store innovation is key to a brand growth in the 'new normal', digitally-connected and transparent world." Journal of Brand Strategy, April-June 2012, 1(1), pp. 31-39. DOI: http://henrystewart.metapress.com/link.asp?id=b05460245m4040q7.
[24] Doherty, N.F. & Ellis-Chadwick, F. (2010). "Internet Retailing; the past, the present and the future." International Journal of Retail & Distribution Management, Emerald, 38(11/12), pp. 943-965. DOI: 10.1108/09590551011086000.
[25] Marjanen, H. (2010). "Kauppa seuraa kuluttajan katsetta." (Eds. Taru Suhonen). Mercurius: Turun kauppakorkeakoulun sidosryhmälehti (04/2010).
[26] Doherty, N.F. & Ellis-Chadwick, F. (2010). "Internet Retailing; the past, the present and the future." International Journal of Retail & Distribution Management, Emerald, 38(11/12), pp. 943-965. DOI: 10.1108/09590551011086000.
[27] Frost & Sullivan (2012). "Bricks and Clicks: The Next Generation of Retailing: Impact of Connectivity and Convergence on the Retail Sector." Eds. Singh, S., Amarnath, A. & Vidyasekar, A.
[28] Frost & Sullivan (2013). "Delivering to Future Cities – Mega Trends Driving Urban Logistics." Frost & Sullivan: Market Insight.
[29] Reinartz, W., Dellaert, B., Krafft, M., Kumar, V. & Varadajaran, R. (2011). "Retailing Innovations in a Globalizing Retail Market Environment." Journal of Retailing, 87(1), pp. S53-S66. DOI: http://dx.doi.org/10.1016/j.jretai.2011.04.009.
[30] Frost & Sullivan (2012). "Bricks and Clicks: The Next Generation of Retailing: Impact of Connectivity and Convergence on the Retail Sector." Eds. Singh, S., Amarnath, A. & Vidyasekar, A.
[31] Aubrey, C. & Judge, D. (2012). "Re-imagine retail: Why store innovation is key to a brand growth in the 'new normal', digitally-connected and transparent world." Journal of Brand Strategy, April-June 2012, 1(1), pp. 31-39. DOI: http://henrystewart.metapress.com/link.asp?id=b05460245m4040q7.
[32] Anderson, H., Zinser, R., Prettyman, R. & Egge, L. (2013). "In-Store Digital Retail: The Quest for Omnichannel." Insights 2013, Research and Insights at SapientNitro. Available: http://www.slideshare.net/hildinganderson/sapientnitro-insights-2013-annual-trend-report [1.7.2015].
[33] Ahlert, D., Blut, M. & Evanschitzky, H. (2010). "Current Status and Future Evolution of Retail Formats." In Krafft, M. & Mantrala, M.K. (Eds.), Retailing in the 21st Century: Current and Future Trends (pp. 289-308). Heidelberg, Germany: Springer-Verlag.
[34] Aubrey, C. & Judge, D. (2012). "Re-imagine retail: Why store innovation is key to a brand growth in the 'new normal', digitally-connected and transparent world." Journal of Brand Strategy, April-June 2012, 1(1), pp. 31-39. DOI: http://henrystewart.metapress.com/link.asp?id=b05460245m4040q7.
[35] Anderson, H., Zinser, R., Prettyman, R. & Egge, L. (2013). "In-Store Digital Retail: The Quest for Omnichannel." Insights 2013, Research and Insights at SapientNitro. Available: http://www.slideshare.net/hildinganderson/sapientnitro-insights-2013-annual-trend-report [1.7.2015].
[36] Maestro (2012). "Kaupan alan trendikartoitus 2013: Hyvästit itsepalvelulle – älykauppa tuo asiakaspalvelun takaisin." Available: http://www.epressi.com/tiedotteet/mainonta/kaupan-alan-trendikartoitus-2013-hyvastit-itsepalvelulle-alykauppa-tuo-asiakaspalvelun-takaisin.html?p328=2 [20.2.2014].
[37] Frost & Sullivan (2012). "Bricks and Clicks: The Next Generation of Retailing: Impact of Connectivity and Convergence on the Retail Sector." Eds. Singh, S., Amarnath, A. & Vidyasekar, A.
[38] PSFK (2012). "The Future of Retail." New York, NY, USA: PSFK Labs.
[39] Turban, E., King, D., Lee, J., Liang, T-P. & Turban, D.C. (2010). "Electronic commerce: A managerial perspective." Upper Saddle River, NJ, USA: Prentice Hall Press.
[40] Rigby, D. (2011). "The Future of Shopping." New York, NY, USA: Harvard Business Review. December 2011.
[41] Payne, A. & Frow, P. (2004). "The role of multichannel integration in customer relationship management." Industrial Marketing Management, 33(6), pp. 527-538. DOI: http://dx.doi.org/10.1016/j.indmarman.2004.02.002.
[42] Oh, L-B., Teo, H-H. & Sambamurthy, V. (2012). "The effects of retail channel integration through the use of information technologies on firm performance." Journal of Operations Management, 30, pp. 368-381. DOI: http://dx.doi.org/10.1016/j.jom.2012.03.001.
[43] Goersch, D. (2002). "Multi-channel integration and its implications for retail web sites." In the 10th European Conference on Information Systems (ECIS 2002), June 6-8, pp. 748-758.
[44] Nyrhinen, J., Wilska, T-A. & Leppälä, M. (2011). "Tulevaisuuden kuluttaja: Erika 2020 -hankkeen aineistonkuvaus ja tutkimusraportti." Jyväskylä: Jyväskylän yliopisto, Finland. (N:o 370/2011 Working paper).
[45] Ervasti, M., Isomursu, M. & Mäkelä, S-M. (2014). "Enriching Everyday Experience with a Digital Service: Case Study in Rural Retail Store." 27th Bled eConference, June 1-5, Bled, Slovenia, pp. 1-16.
[46] Burke, R.R. (2002). "Technology and the customer interface: what consumers want in the physical and virtual store." Academy of Marketing Science, 30(4), pp. 411-432. DOI: 10.1177/009207002236914.
[47] Mäkelä, S-M., Sarjanoja, E-M., Keränen, T., Järvinen, S., Pentikäinen, V. & Korkalo, O. (2013). "Treasure Hunt with Intelligent Luminaires." In the International Conference on Making Sense of Converging Media (AcademicMindTrek '13), October 01-04 (pp. 269-272). New York, NY, USA: ACM Press.
M.Sc. Kaisa Vehmas received her M.Sc. in Graphic Arts
Technology from Helsinki University of Technology in 2003. She is
currently working as a Senior Scientist in the Digital services in
context team at VTT Technical Research Centre of Finland Ltd.
Since 2002 she has worked at VTT and at KCL (2007-2009). Her
background is in printing and media research focusing nowadays
on user centric studies dealing with participatory design, user
experience and customer understanding especially in the area of
digital service development.
Dr. Mari Ervasti received her M.Sc. in Information Networks from
the University of Oulu in 2007 and her PhD in Human-Centered
Technology from Tampere University of Technology in 2012. She
is currently working as a Research Scientist in the Digital services
in context team at VTT Technical Research Centre of Finland Ltd.
She has worked at VTT since 2007. Over the years, she has
authored over 30 publications. In 2014 she got an Outstanding
Paper Award in Bled eConference. Her research interests include
user experience, user-centered design and human computer
interaction.
Dr. Maarit Tihinen is a Senior Scientist in the Digital services in
context team at VTT Technical Research Centre of Finland. She
graduated from the department of mathematics at the University of Oulu in 1991. She worked as a teacher (mainly mathematics
and computer sciences) at the University of Applied Sciences
before coming to VTT in 2000. She completed her Secondary
Subject Thesis in 2001 and received her PhD in 2014 in
information processing science from the University of Oulu,
Finland. Her research interests include measurement and metrics,
quality management, global software development practices and
digital service development practices.
Lic.Sc. Aino Mensonen obtained her Postgraduate Degree in
Media Technology in 1999 from Helsinki University of Technology.
She is currently working as a Senior Scientist and Project Manager
in the Digital services in context team at VTT Technical Research
Centre of Finland Ltd. She started her career at broadcasting
company MTV Oy by monitoring the TV viewing and has worked
as a Research Engineer at KCL before coming to VTT in 2009. At
the moment she is involved in several projects, including Trusted Cloud services, Collaborative methods in city planning, User experience, and Service concepts and development.
GIS-based Optimal Route Selection for Oil and Gas Pipelines in
Uganda
Dan Abudu¹ and Meredith Williams²

¹ Faculty of Engineering and Science, University of Greenwich, Chatham, ME4 4TB, United Kingdom
[email protected]

² Centre for Landscape Ecology and GIS, University of Greenwich, Chatham, ME4 4TB, United Kingdom
[email protected]
Abstract
The Ugandan government recently committed to the development of a local refinery, benefiting from recently discovered oil and gas reserves and increasing local demand for energy supply. The project includes a refinery in Hoima district and a 205 kilometre pipeline to a distribution terminal at Buloba, near Kampala city. This study outlines a GIS-based methodology for determining an optimal pipeline route that incorporates Multi Criteria Evaluation and Least Cost Path Analysis. The methodology allowed for an objective evaluation of different cost surfaces for weighting the constraints that determine the optimal route location. Four criteria (Environmental, Construction, Security and Hybrid) were evaluated, used to determine the optimal route, and compared with the proposed costing and length specification targets issued by the Ugandan government. All optimal route alternatives were within 12 kilometres of the target specification. The construction criteria optimal route (205.26 km) formed a baseline route for comparison with the other optimal routes.
Keywords: GIS, MCE, LCPA, Oil & Gas, pipeline routing.

1. Introduction
The Lake Albertine region in Western Uganda holds large reserves of oil and gas that were discovered in 2006. Tests have been carried out continually to establish their commercial viability, and by August 2014, 6.5 billion barrels had been established in reserves [1, 2 & 3]. The Ugandan government plans to satisfy the country's oil demands through products processed at a local refinery to be built in Kabaale, Hoima district, and transported to a distribution terminal in Buloba, 14 kilometres from Kampala, the capital city [4]. Several options have been proposed for transporting the processed products from the refinery to the distribution terminal; this study explored one option, constructing a pipeline from Hoima to Kampala [5].

Determination of the optimal route for pipeline placement with the greatest cost effectiveness and the least impact upon the natural environment and safety has been noted by Yeo and Yee [6] as a controversial spatial problem in pipeline routing. Impacts on animal migration routes, the safety of nearby settlements, the security of installations and financial cost implications are all important variables considered in optimal pipeline routing. Jankowski [7] noted that pipeline routing has conventionally been carried out using coarse-scale paper maps, hand delineation methods and manual overlaying of elevation layers. Although conventional, this emphasises the importance spatial data play in determining where a pipeline is installed. It has also pioneered advancement in spatial-based pipeline planning, routing and maintenance.

The approaches used in this paper are presented as an improvement and refinement of previous studies such as those conducted by Anifowose et al. [8] in the Niger Delta, Nigeria, Bagli et al. [9] in Rimini, Italy, and Baynard [10] in the Venezuelan oil belts. This study was the first of its kind in the study area and incorporated both theory and practice from similar settings, with model scenarios for testing to support the decision-making process. The study recognised that evaluation of the best route is a complex multi-criteria problem with conflicting objectives that need balancing. A pairwise comparison matrix and Multi Criteria Evaluation (MCE) were used to weight and evaluate the different factors necessary for deriving optimal routes, and Least Cost Path Analysis (LCPA) was then used to derive alternative paths that are not necessarily the shortest in distance but are the most cost effective.

2. Study Area
Uganda is a landlocked country located in East Africa (Fig. 1). The refinery and distribution terminal locations define the start and end points respectively of the proposed pipeline route. The refinery is located near the shores of Lake Albert at Kabaale village, Buseruka sub-county in Hoima district, on a piece of land covering an area of 29 square kilometres. This location lies close to the country's largest oil fields in the Kaiso-Tonya area, which is 40 kilometres
by road from Hoima town. Kaiso-Tonya is also 260
kilometres by road from Kampala, Uganda’s capital. The
approximate coordinates of the refinery are 1°30'0.00"N, 31°4'48.00"E. The distribution terminal is located at Buloba town centre, approximately 14 kilometres by road west of Kampala city. The coordinates of Buloba are 0°19'30.00"N, 32°27'0.00"E. The geomorphology is characterised by a small sector of flat areas in the north-eastern region and rapidly changing terrain elsewhere, with elevations ranging from 574 to 4,877 metres above sea level. The most recent population census was carried out in 2014 and reported a total national population of 34.9 million, covering 7.3 million households with 34.4 million inhabitants [11]. This represented a population increment
agriculture is predominantly practiced throughout the
country as a major source of livelihood as well as fishing
and animal grazing. Temperature ranges between 20 - 30
ºC with annual rainfall between 1,000 and 1,800 mm.
Fig. 1: Location Map of Uganda, East Africa

3. Methodology
The methodology utilised a GIS to prepare, weight, and evaluate the environmental, construction and security factors used in the optimal pipeline routing. Estimates of local construction costs for specific activities, such as the actual costs of ground layout of pipes, building support structures in areas requiring above-ground installations, and maintenance costs, were beyond the scope of the available data. However, cost estimates averaged from published values for similar projects in the USA and China [12, 13 & 14] were used to estimate the total construction costs of the optimal route. Multi Criteria Evaluation of pairwise comparisons was used to calculate the relative importance of each of the three major criteria cost surfaces and of a hybrid cost surface comprising all criteria factors. Different cost surfaces for each of the criteria were generated and evaluated to identify the combination of factors for an optimal pipeline route, and the route alternatives were determined using Least Cost Path Analysis.
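As a rough illustration of the Least Cost Path Analysis step, the sketch below runs a Dijkstra-style search over an 8-connected cost raster. The toy friction values, grid size and start/end cells are hypothetical; the study itself used GIS cost-distance tooling rather than this hand-rolled routine.

```python
import heapq
import math

def least_cost_path(cost, start, end):
    """Dijkstra over an 8-connected cost raster.
    cost: 2-D list of per-cell friction values; start/end: (row, col) cells."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == end:
            break
        if d > dist.get(cell, math.inf):
            continue
        r, c = cell
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    # Step cost: mean friction of the two cells, scaled by the
                    # diagonal distance factor where applicable.
                    step = math.hypot(dr, dc) * (cost[r][c] + cost[nr][nc]) / 2
                    nd = d + step
                    if nd < dist.get((nr, nc), math.inf):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = cell
                        heapq.heappush(queue, (nd, (nr, nc)))
    # Walk back from the end cell to recover the route.
    path, cell = [end], end
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return list(reversed(path)), dist[end]

# Toy 4x4 friction surface (hypothetical values on a 30 m grid).
surface = [[1, 1, 5, 5],
           [9, 1, 5, 5],
           [9, 1, 1, 5],
           [9, 9, 1, 1]]
route, accumulated = least_cost_path(surface, (0, 0), (3, 3))
print(route, round(accumulated, 2))
```

The accumulated value returned for the end cell is the counterpart of the accumulated cost distances reported for the optimal routes later in the paper.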
3.1 Data
Achieving the study objectives required the use of both spatial and non-spatial data (Table 1). Data were obtained from government departments in Uganda and supplemented with other publicly available data. The choice of input factors was determined by the availability of data, their spatial dimensions and computational capacity. The study noted that there are many factors that can influence the routing of an oil and gas pipeline; however, only factors for which data were available were examined. Spatial consistency was attained by projecting all data to the Universal Transverse Mercator (UTM) projection, Zone 36N, for localised projection accuracy, and a spatial resolution of 30 m was maintained during data processing.

Table 1: Data used for designing the cost surface layers
Data type | Format | Scale | Date
Wellbores & Borehole data | Table & Points | 1:60,000 | 2008
Rainfall & Evapotranspiration | Table & Raster | 30 metre | 1990-2009
Soil map | Raster | 30 metre | 1970
Topography | Raster | 30 metre | 2009
Geology | Raster | 30 metre | 2011
Land cover | Raster | 30 metre | 2010
Soil | Raster | 30 metre | 2008
Population | Raster & Table | 30 metre | 2014
Wetlands | Raster | 30 metre | 2010
Streams (Minor & Major) | Raster | 30 metre | 2007
Urban centres | Vector | 1:60,000 | 2013
Protected sites | Vector | 1:60,000 | 2011
Boundary, source & destination | Vector | 1:60,000 | 2014
Linear features (Roads, Rail, Utility lines) | Vector | 1:60,000 | 2009
Construction costs | Table | 1:60,000 | 2009
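The projection step mentioned above can be sketched with pyproj, assuming the input layers arrive in WGS84 geographic coordinates; EPSG:32636 is the code for UTM Zone 36N, and the two points below are the approximate refinery and distribution terminal coordinates quoted in Section 2.

```python
from pyproj import Transformer

# WGS84 geographic coordinates -> UTM Zone 36N (EPSG:32636), the projection
# used for all layers in this study.
to_utm36n = Transformer.from_crs("EPSG:4326", "EPSG:32636", always_xy=True)

locations = {
    "refinery (Kabaale)": (31.08, 1.50),    # 31 deg 4' 48" E, 1 deg 30' 0" N
    "terminal (Buloba)": (32.45, 0.325),    # 32 deg 27' 0" E, 0 deg 19' 30" N
}
for name, (lon, lat) in locations.items():
    easting, northing = to_utm36n.transform(lon, lat)
    print(f"{name}: {easting:.0f} m E, {northing:.0f} m N")
```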
3.2 Routing Criteria
Pipeline route planning and selection is usually a complex
task involving simultaneous consideration of more than
one criterion. Criteria may take the form of a factor or a
constraint. A factor enhances or detracts from the
suitability of a specific alternative for the activity under
consideration. For instance, routing a pipeline within close distance of roads is considered more suitable than routing it far away from the road; in this case, distance from the road constitutes a factor criterion. Constraints, on the other hand, serve to limit the alternatives under consideration; for instance, protected sites and large water bodies are not preferred in any way for pipelines to be routed through them.
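The distinction between factors and constraints can be illustrated with a small sketch that rescales one factor (here, distance to the nearest road) into 1-9 friction values and masks out constraint cells such as protected sites. The arrays and the linear 1-9 rescaling are hypothetical stand-ins for the friction surfaces actually produced in the study.

```python
import numpy as np

def friction_surface(dist_to_road, constraint_mask, max_friction=9.0):
    """Rescale a factor raster to 1-9 friction values and apply constraints.
    dist_to_road: distances in metres; constraint_mask: True where routing
    is not allowed (e.g. protected sites, large water bodies)."""
    d = dist_to_road.astype(float)
    # Close to a road -> low friction (1); far from a road -> high friction (9).
    scaled = 1.0 + (max_friction - 1.0) * (d - d.min()) / (d.max() - d.min())
    # Constraint cells are made effectively impassable.
    scaled[constraint_mask] = np.inf
    return scaled

# Hypothetical 3x3 example: distances in metres and one protected cell.
dist = np.array([[0, 200, 800],
                 [100, 400, 900],
                 [300, 600, 1200]])
protected = np.array([[False, False, False],
                      [False, False, True],
                      [False, False, False]])
print(friction_surface(dist, protected))
```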
Routing a pipeline is therefore more complex than simply laying pipes from the source refinery to the final destination. Natural and man-made barriers along possible routes have to be considered, as well as the influence these barriers have on the pipeline after installation. Accurate determination of the impact of these factors and constraints on pipeline routes is usually a time-consuming task requiring a skilled and dedicated approach [15]. This study employed a criteria-based approach in order to consider the different barriers and factors required to perform optimal pipeline route selection. Datasets were selected, processed into friction surfaces and grouped into three separate strands of criteria for analysis. Fig. 2 shows the implementation methodology and the grouping of the criteria (environmental, engineering and security).

Fig. 2: Flow diagram of the implementation methodology

Environmental criteria
The environmental criteria were aimed at assessing the risks and impacts upon the environmental features found in potential corridors of the pipeline route. Two objectives were addressed, i.e. minimising the risks of ground water contamination (GWP) and maintaining the least degrading effect on the environment, such as the effects on land cover, land uses, habitats and sensitive areas (DEE). A GIS-based DRASTIC Model (Fig. 3) was used to assess areas of ground water vulnerability, while a weighted overlay model was used to determine areas with the least degrading environmental effects.

Fig. 3: DRASTIC Model
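A minimal sketch of a DRASTIC-style weighted overlay is shown below, using the standard parameter weights listed in Table 2; the per-cell ratings are hypothetical, and a higher resulting index indicates higher ground water vulnerability.

```python
import numpy as np

# Standard DRASTIC weights as listed in Table 2 (D, R, A, S, T, I, C).
WEIGHTS = {"depth_to_water": 5, "net_recharge": 4, "aquifer_media": 3,
           "soil_media": 2, "topography": 1, "vadose_zone_impact": 5,
           "hydraulic_conductivity": 3}

def drastic_index(ratings):
    """Weighted sum of the seven DRASTIC parameter ratings for each cell.
    ratings: dict of parameter name -> 2-D array of per-cell ratings."""
    return sum(WEIGHTS[name] * np.asarray(grid, dtype=float)
               for name, grid in ratings.items())

# Toy 2x2 rating grids (hypothetical values on a 1-10 rating scale).
ratings = {name: np.full((2, 2), rating)
           for name, rating in zip(WEIGHTS, [7, 6, 4, 5, 9, 3, 2])}
print(drastic_index(ratings))
```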
Construction criteria
The construction criteria considered factors and constraints that accounted for the costs of laying oil and gas pipelines along the route. Two objectives were addressed: maximising the use of existing rights of way around linear features such as roads and utility lines (ROW), and maintaining routing within areas of low terrain cost (HTC). Although the criteria aimed at minimising costs as much as possible, the maintenance of high levels of pipeline integrity was not compromised.

Security criteria
Oil and gas pipeline infrastructures have been vandalised and destroyed in unstable political and socio-economic environments [16]. Political changes in Uganda have often been violent, involving military takeovers leading to the destruction of infrastructure and resources. Therefore, the security of the proposed pipeline has always been a concern. Also, the proposed pipeline is projected to be laid above ground, traversing the different land cover types, administrative boundaries and cultural groupings comprising the study area. It is therefore imperative that security is kept at high importance in consideration of the pipeline route. Two objectives were addressed by the security criteria: first, facilitation of quick access to the pipeline facility (QCK), and secondly, protection of existing and planned infrastructures around the pipeline route (PRT). This is in line with the observation that pipeline infrastructure poses a high security risk to the environment and communities, and is of international concern [17]. Pipeline infrastructures suffer from illegal activities involving siphoning, destruction and sabotage, disrupting the supply of oil products. Similar studies, such as the Baku-Tbilisi-Ceyhan (BTC) pipeline [18] and the Niger Delta pipeline [19], reported significant effects of pipeline infrastructure vandalism and the need for proper security planning to counter such activities during pipeline route planning. It is also important that oil and gas pipelines are regularly monitored and maintained against wear and tear effects on the pipe materials, pressure, and blockages inside the pipeline. Routing in locations with ease of access for maintenance, emergency response and protection against vandalism was therefore addressed.

Table 2: DRASTIC Model Description and assigned Standard Weights
S/n | Factor | Description | Weight
1 | Depth to water table | Depth from the ground surface to the water table. | 5
2 | Net Recharge | Represents the amount of water per unit area of land that penetrates the ground surface and reaches the water table. | 4
3 | Aquifer media | Refers to the potential area for water logging; the contaminant attenuation of the aquifer inversely relates to the amount and sorting of the fine grains. | 3
4 | Soil media | Refers to the uppermost weathered area of the ground. | 2
5 | Topography | Refers to the slope of the land surface. | 1
6 | Impact of vadose zone | The ground portion between the aquifer and soil cover in which pores or joints are unsaturated. | 5
7 | Hydraulic conductivity | Indicates the ability of the aquifer to transmit water, thereby determining the rate of flow of contaminant material within the ground water system. | 3
Source: [21]
3.3 Weighting Criteria
The weighting criteria used were based on weights derived from a literature review and expert opinions. Questionnaires were used to collate responses from experts, and standard weights (Table 2) sourced from the literature were incorporated to weigh and derive the optimal routes.

Values were assigned to each criterion based on their degree of importance within the containing criteria. For example, gentle slopes provide solid foundations for laying pipelines, so slope received a higher weight (lower friction value) in the construction criteria, whereas steep slopes require levelling and/or support posts to raise the pipeline above ground and therefore received a lower weight (higher friction value). Based on linguistic measures developed by Saaty [20], weights were assigned on a 1 to 9 semantic differential scale to give a relative rating of two criteria, where 9 is highest and 1 is lowest. The scale of differential scoring presumes that the row criterion is of equal or greater importance than the column criterion; the reciprocal values (1/3, 1/5, 1/7, or 1/9) were used where the row criterion is less important than the column criterion. A decision matrix was then constructed using Saaty's scale, and factor attributes were compared pairwise in terms of the importance of each criterion to that of the next level. A summary of the normalised weights derived from expert opinion is shown in Table 10.
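The pairwise comparison step can be sketched as follows: each column of a Saaty-style comparison matrix is normalised and the rows are then averaged to approximate the priority weights. The 3x3 example matrix is purely illustrative and is not the matrix elicited from the experts.

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority weights: normalise each column of the
    pairwise comparison matrix, then average across each row."""
    m = np.asarray(pairwise, dtype=float)
    column_normalised = m / m.sum(axis=0)
    return column_normalised.mean(axis=1)

# Hypothetical 3x3 Saaty matrix (1-9 scale with reciprocals), comparing
# three illustrative factors, e.g. slope vs land cover vs distance to road.
matrix = [[1,     3,     5],
          [1 / 3, 1,     3],
          [1 / 5, 1 / 3, 1]]
weights = ahp_weights(matrix)
print(weights.round(3), "sum =", round(weights.sum(), 3))  # weights sum to 1
```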
3.4 Estimating the construction costs
The construction costs for each pipeline alternative were estimated using the economic model proposed by the Massachusetts Institute of Technology (MIT) Laboratory for Energy and the Environment (LEE) (MIT-LEE) [13]. MIT applied the model to estimate the annual construction cost of a carbon dioxide (CO2) pipeline. The cost data used were based on natural gas pipelines due to their relative ease of availability, and were used to estimate the pipeline construction costs. Although the rate of flow and pipe thickness of these two types of pipelines (natural gas and oil) may differ, the land construction costs do not differ much. The costs of acquiring pipeline materials such as pipes, pump stations, diversions and support structures were not included in the analysis. Equation 1 gives the formula used to estimate the total construction cost (TCC) over the operating life of the pipeline in British Pounds Sterling (BPD):

TCC = LCC × CCF + OMC        (1)

where LCC is the land construction cost in BPD, CCF is the capital charge factor, and OMC is the annual operation & management cost in BPD.
CCF values were defaulted to 0.15 and the OMC was estimated at BPD 5,208.83 per kilometre per year irrespective of the pipeline diameter [14]. LCC was obtained from two correlation equations which relate LCC to the diameter and length of the pipeline. Equations 2 and 3 give the formulas used to obtain LCC for the MIT and Carnegie Mellon University (CMU) correlation models respectively.

1. In the MIT correlation, it is assumed that the pipeline's LCC has a linear correlation with the pipeline's diameter and length:

LCC = α × D × (L × 0.62137) × i        (2)

where α = BPD 21,913.81 (a variable value specific to the user) per inch per kilometre, D is the pipeline diameter in inches, L is the least-cost pipeline route length in kilometres, and i is an optional cost fluctuation index reflecting inflation and cost increases in a given year; the study used the running average for the year 2007 (Table 3).

2. The CMU correlation model is similar to the MIT model. However, it is more recent and departs from the linearity restriction in the MIT correlation, allowing a double-log (nonlinear) relationship between pipeline LCC and pipeline diameter and length. In addition, the CMU correlation model takes into account regional differences in pipeline construction costs by using regional dummy variables. The two correlations provided comparative results for the study area.

LCC = β × D^x × (L × 0.62137)^y × z × i        (3)

where β = BPD 27,187.55, D is the pipeline diameter in inches and x = 1.035, L is the pipeline length in kilometres and y = 0.853, z represents the regional weights = 1 (since the regional weights are constant), and i is an optional cost fluctuation index reflecting inflation and cost increases in a given year; the study used the running-average index for the year 2007 (Table 4).
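For illustration, Equations 1-3 can be evaluated directly as in the sketch below, using the default CCF and OMC values quoted above and the 2007 running-average indices from Tables 3 and 4. Because the text does not state exactly which index year and terms were used to produce the published figures, the outputs are indicative only and will not necessarily match Tables 6 and 7.

```python
def lcc_mit(diameter_in, length_km, alpha=21913.81, index=2.92):
    """Eq. (2): MIT correlation, linear in diameter and length (converted to miles)."""
    return alpha * diameter_in * (length_km * 0.62137) * index

def lcc_cmu(diameter_in, length_km, beta=27187.55, x=1.035, y=0.853,
            z=1.0, index=2.07):
    """Eq. (3): CMU correlation, double-log in diameter and length."""
    return beta * diameter_in ** x * (length_km * 0.62137) ** y * z * index

def total_construction_cost(lcc, length_km, ccf=0.15, omc_per_km=5208.83):
    """Eq. (1): TCC = LCC x CCF + OMC, all amounts in BPD."""
    return lcc * ccf + omc_per_km * length_km

# Hypothetical evaluation for a 24-inch pipe on the 205.26 km baseline route.
length = 205.26
for label, lcc in [("MIT", lcc_mit(24, length)), ("CMU", lcc_cmu(24, length))]:
    tcc = total_construction_cost(lcc, length)
    print(f"{label} correlation: about {tcc / 1e6:.1f} million BPD")
```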
Table 3: MIT Correlation Price Index
Year | Index (i) | Running Average
2000 | 1.51 | 1.47
2001 | 1.20 | 1.48
2002 | 1.74 | 1.65
2003 | 2.00 | 2.01
2004 | 2.30 | 2.20
2005 | 2.31 | 2.30
2006 | 2.30 | 2.71
2007 | 3.53 | 2.92
Source: [13]

Table 4: CMU Correlation Price Index
Year | Index (i) | Running Average
2000 | 1.09 | 1.05
2001 | 0.99 | 1.08
2002 | 1.17 | 1.16
2003 | 1.33 | 1.35
2004 | 1.56 | 1.47
2005 | 1.52 | 1.57
2006 | 1.68 | 1.59
2007 | 2.46 | 2.07
Source: [13]

4. Results and Discussion
This section presents the results of the various analyses carried out in the study. Maps, figures and tables make up the content, together with detailed descriptions and discussion of the results shown.

4.1 Weights used in the study
The study employed both primary and secondary data. Primary data were obtained from a sample of experts in the fields of oil and gas, environment, plus cultural and political leaders. Questionnaires were used to collect expert opinions from 20 respondents from each of the three fields. Fig. 4 shows the categories of respondents and the percentage of responses obtained for each category. Table 10 shows the comparative responses normalised as percentages.

Fig. 4: Respondents collated from questionnaires
4.2 Environment cost surface
An environmental cost surface (Fig.5C) was obtained by
applying equal weighting on two objective-based cost
surfaces; that is maintaining least degrading effect on the
environment (DEE) and protection of ground water from contamination arising from pipeline-related activities (GWP), represented in Fig. 5 (A) and (B) respectively. Additionally, studies by Secunda et al. [22] revealed that assuming constant values for the missing layers in the DRASTIC Model produced the same results as when all seven layers were used. This study applied constant values to three cost layers (Net Recharge, Impact of vadose zone and Hydraulic conductivity) based on the literature, because these layers have values representing a country-wide extent [23].

4.3 Construction cost surface
A construction cost surface (Fig. 6C) was obtained by applying equal weighting to two objective-based cost surfaces: maintaining the use of areas with existing rights of way (ROW, Fig. 6A) and minimising areas with high terrain cost (HTC, Fig. 6B). The cost surfaces for both ROW and HTC show that the distribution of the costs covers the entire study area. Over 50% of the study area presented very low ROW costs, with a few areas in the western, central and eastern parts of the study extent recording high costs, indicating areas of urban concentration, Mount Elgon to the east, and protected sites covering the south-western and north-eastern parts of the study area. Similarly, one protected site (a site licensed for oil drilling purposes) and all major streams (lakes and rivers) presented higher costs to the construction criteria. Much of the central and northern parts of the country are cheaper. Moderate construction costs are observed around areas covered by protected sites such as national parks, cultural sites, wildlife reserves and sanctuaries. This is because the importance of these protected sites is evaluated entirely in economic terms (the ROW and HTC objectives).

4.4 Security cost surface
A security cost surface was obtained from equal weighting of the QCK and PRT cost surfaces, the two objective-based cost surfaces through which the security criteria were addressed. The results are shown in Fig. 7 (A), (B) and (C) for the QCK, PRT and security criteria cost surfaces respectively. In the three maps, costs are represented as continuous surfaces.

4.5 Hybrid cost surface
The final cost surface obtained is the hybrid cost surface, in which the six cost surfaces (DEE, GWP, ROW, HTC, QCK and PRT) were combined and equally weighted. A continuous surface was generated, as shown in Fig. 8 (A).

4.6 Optimal route
Table 5 shows the accumulated costs incurred by each route and the total distance traversed by the optimal routes. While the diameter of the actual pipes for the proposed pipeline has yet to be decided, a buffer of 1 kilometre was applied around the optimal routes to generate a strip accounting for the potential right-of-way. Also, no routing activities had been conducted for an oil and gas pipeline in the study area prior to this study. The Government's estimated total distance for the pipeline route, determined by neutral criteria, was 205 kilometres [4]. Therefore, this study considered the optimal route with the shortest length as a baseline route for comparisons with the other optimal routes.

Table 5: Costs and lengths of the optimal routes
Optimal route alternatives | Accumulated cost distance | Pipeline length (km) | Length difference from the proposed length
Environmental | 1,529,468.00 | 213.09 | +8.09
Construction | 1,363,801.75 | 205.26 | +0.26
Security | 1,393,417.50 | 209.52 | +4.52
Hybrid | 1,255,547.75 | 215.11 | +10.12

The construction criteria optimal route was the shortest, with a length of 205.26 kilometres, a 0.26 kilometre increase over the 205 km estimate proposed by the Ugandan government. From Table 5, the environmental, security and hybrid routes are respectively 8.09, 4.52 and 10.12 kilometres longer than the proposed route. The baseline route also has an accumulated cost cheaper than both the security and environmental criteria routes. However, the hybrid criteria optimal route is 1.95% cheaper than the baseline route. This suggests that the incorporation of multiple constraints and criteria in the optimal route selection minimises the resultant costs associated with routing.

4.7 The financial implications of each optimal route
Construction cost estimates from Tables 6 and 7 show that construction costs vary linearly with increases in both pipeline diameter and length across the two models. The shorter the route and the narrower the pipeline, the cheaper the construction costs. Fig. 10 shows a graphical representation of the linear relationship between pipeline construction costs and both pipeline diameter and length.
Table 6: TCC estimates for the optimal routes based on MIT Model
Pipeline
length
(km)
Optimal
Routes
Total construction cost (MIT Model)
in millions of BPD
Pipeline diameter in inches
8
16 18 24
30 36
Land uses such as roads, urban centres and protected sites
were crossed by at least one of the four optimal routes.
Linear features (Roads, Rail roads, utility lines) and minor
streams were among the most crossed features by the
optimal routes. No urban and protected sites were directly
crossed by the optimal routes. However, when a spatial
buffer of 200m was applied around the urban centres, five
urban centres and one protected site were crossed by the
optimal routes (Table 8). Of the affected urban centres,
four were crossed by security optimal route while hybrid
optimal route crossed one urban centre. The location of the
refinery is within a 1km buffer around one of the protected
sites (Kaiso-Tonya Community Wildlife Management
Area).
40 42
Environmental 213.09 10.2 20.3 22.9 30.5 38.1 45.8 50.8 53.4
Construction 205.26 9.8 19.6 22.0 29.4 36.7 44.1 49.0 51.4
Security
209.52 10.0 20.0 22.5 30.0 37.5 45.0 50.0 52.5
Hybrid
215.11 10.3 20.5 23.1 30.8 38.5 46.2 51.3 53.9
Table 7: TCC estimates for the optimal routes based on CMU Model
Pipeline
Optimal Routes length
(km)
Total construction cost (CMU
Model) in millions of BPD
Pipeline diameter in inches
8 16
18
24
30
36 40
42
4.9 Monitoring and maintenance planning along the
optimal routes
Environmental 213.09 7.0 14.4 16.3 21.9 27.6 33.4 37.2 39.2
Construction
205.26 6.8 14.0 15.8 21.3 26.8 32.3 36.1 37.9
Security
209.52 6.9 14.2 16.1 21.6 27.2 32.9 36.7 38.6
Hybrid
215.11 7.1 14.5 16.4 22.1 27.9 33.7 37.5 39.5
In order to properly monitor and maintain efficient
operation of the pipeline, pipeline routes were preferred to
be near linear features such as roads, rail roads and utility
lines since they provide quick and easy access to the
pipeline facility. Also, locations near streams were
preferred to allow access using water navigation means.
For planning purposes such as installation of monitoring
and maintenance facilities such as engineering workshops
and security installations, areas with clear line of sight are
recommended. The study therefore performed Viewshed
analysis [24] on the on topographical data to determine
visible areas. Fig. 9 (B) shows the locations visible from
each of the four optimal routes as determined from ArcGIS
Viewshed Analysis. Although, the Viewshed analysis
performed on DEM does not consider the above-ground
obstructions from land cover types such as vegetation and
buildings, it can be compensated by installing such
monitoring facilities at the appropriate height above
ground while maintaining the site location.
Considering the total construction cost for a 24-inch
diameter pipeline, The total construction costs for the
Government’s proposed pipeline route is 29.34 million
BPD, whereas for security, environmental and hybrid
routes are 30.0, 30.5 and 30.8 million BPD respectively
using the MIT Model. Also using the CMU Model similar
trend in results are shown where the baseline route (the
shortest) also doubling as the cheapest route estimated at
21.3 million BPD, followed by security, then
environmental and finally hybrid at 21.6, 21.9 and 22.1
million BPD respectively.
Therefore, the financial implication of each optimal route
shows the construction criteria optimal route as the
cheapest and most feasible. The other three optimal routes
(security, environmental and hybrid) although longer and
more expensive, are all under 1.59 and 2.54 million BPD
from the CMU and MIT models’ construction costs
estimates.
4.8 Effects of optimal routes on land cover and uses

Twelve different land cover types were considered in the study, seven of which (Table 9) were crossed by at least one of the four optimal routes. Woodland, grassland, small-scale farmland, wetlands and degraded tropical high forest were crossed by all of the optimal routes. The environmental and hybrid optimal routes were the only routes that crossed bushland, while the construction and security optimal routes were the only routes that crossed stocked tropical high forest.

5. Sensitivity testing of weighting schemes

5.1 The effect of equal weighting and weights obtained from expert opinion on the optimal routes
Equal weightings were applied to combine criteria
objectives and generate criteria cost surfaces as the first
stage of analysis. Weights normalised from expert opinions
were then used to provide comparative results of the
analysis for environmental, construction and security
criteria. The hybrid criteria was not affected because nonequal weightings were applied at the objectives evaluation
level. The significant result was shown in the
environmental criteria route where the 25% weight change
in the DEE objective resulted in a 7.79% (16.61 km)
increase in the overall pipeline length under environmental
criteria. This was the largest change in pipeline length, followed by the security criteria at 0.44 km and lastly the construction criteria at 0.05 km. The environmental criteria optimal route was also the longest route, with a total length of 229.70 km, followed by hybrid at 215.11 km, security at 210.18 km and lastly the construction criteria at 205.31 km. Although the environmental route was the longest, the security criteria route accumulated the highest cost while the construction route had the lowest accumulated cost distance.
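The weighted combination of objective cost surfaces described here can be sketched as a simple raster overlay; the arrays and the 75/25 split below are hypothetical stand-ins for the DEE and GWP rasters and the expert weights, not the study's actual data.

```python
# Illustrative weighted-overlay sketch (not the authors' ArcGIS workflow):
# objective cost surfaces are combined into a criteria cost surface using
# either equal weights or normalised expert weights.
import numpy as np

rng = np.random.default_rng(42)
shape = (200, 200)
dee = rng.random(shape)   # hypothetical DEE objective cost raster (0-1 scaled)
gwp = rng.random(shape)   # hypothetical GWP objective cost raster (0-1 scaled)

equal = {"dee": 0.5, "gwp": 0.5}
expert = {"dee": 0.75, "gwp": 0.25}   # e.g. a 25% shift toward the DEE objective

env_equal = equal["dee"] * dee + equal["gwp"] * gwp
env_expert = expert["dee"] * dee + expert["gwp"] * gwp
print("mean cost, equal weights:", round(env_equal.mean(), 3))
print("mean cost, expert weights:", round(env_expert.mean(), 3))
```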
5.2 Application of non-equal weighting on criteria to
generate hybrid route
Table 8: Number of crossings by the optimal routes through buffer zones

Features          Environmental   Construction   Security   Hybrid
Roads                  10              12            10        13
Lakes & Rivers          0               0             0         0
Minor Streams          14               9            13        16
Utility Lines           2               2             2         2
Rail roads              0               1             0         0
Urban centres           0               0             4         1
Protected sites         1               1             1         1
Total                  27              25            30        33
Figures 5 and 11 show the location of the hybrid optimal route generated from the application of equal weighting on the three criteria (environmental, construction and security). The route passes within 1.51 km south of Hoima town. By applying an un-equal weighting where the environmental criteria accounted for 50% of the total weight, with security and construction at 25% each, the route was shifted 12 km further south of Hoima town (Fig. 11). Other urban centres such as Kitoba and Butemba that were initially close to the equally weighted hybrid route (11.83 and 11.96 km respectively) were also shifted further away (50 and 20 km respectively) from the non-equal weighted route.
The length of the non-equal weighted hybrid route decreased from 215.11 km to 212.94 km, representing a construction cost decrement of 0.3 million BPD based on the MIT Model for a 24-inch pipeline. Using the CMU Model, the construction cost decrement is 0.2 million BPD for the same pipeline diameter. Similarly, increasing the security and construction criteria weights to 50% in turn, while keeping the environmental criteria weight at 25% in each case, resulted in cheaper routes but presented real risk to some urban centres. For instance, the 50% security criteria weighting resulted in the hybrid optimal route crossing the buffer zone of Ntwetwe town while missing Bukaya by only 0.2 km (Fig. 9C). Although applying un-equal weighting to the hybrid criteria optimal route had no incremental effect on the total length and cost of the pipeline, the potential effects on the other criteria routes are visible. In general, however, un-equal weighting had minimal adverse effects upon the environmental, construction and hybrid optimal routes.
Table 9: Areal coverage (square metres) of land cover type crossed by each pipeline route

Land cover                         Environmental   Construction   Security    Hybrid
Grassland                              2,223,000        386,100      27,900   2,014,200
Bushland                                 270,000              0           0     346,500
Woodland                                 957,600      1,208,700     600,300     560,700
Small-Scale Farmland                   2,219,400      4,383,900   4,161,600   3,029,400
Wetland                                   27,900        261,000     288,000      76,500
Tropical high forest (stocked)                 0         52,200     244,800           0
Tropical high forest (degraded)          253,800        231,300      15,300     278,100
Total                                  5,951,700      6,523,200   5,337,900   6,305,400
Fig. 5: Location of the optimal routes
Table 10: Summary of normalised factor weights used in determination of cost surface layers

Environmental Criteria
1. DEE Objective (factor/constraint: weight %): Urban centres 7.53; Land cover 50.92; Protected sites 26.30; Wetlands 15.25

Construction Criteria
2. ROW Objective (factor/constraint: weight %): Linear features 5.83; Population density 0.55; Protected sites 24.78; Cultural landmarks 14.38
3. HTC Objective (factor/constraint: weight %): Land cover 6.48; Soil 38.52; Topography 18.31; Linear features 10.88; Geology 25.18

Security Criteria
4. QCK Objective (factor/constraint: weight %): Linear features 20.16; Streams 30.62; Dense land cover 8.13; Urban centres 41.08
5. PRT Objective (factor/constraint: weight %): Urban centres 20.16; Protected sites 30.62; Linear features 8.13; Cultural landmarks 41.08
Fig. 6: Cost surface maps showing DEE (A), GWP (B) objectives and combined environmental criteria cost surface (C)
Fig. 7: Cost surface maps showing ROW (A) and HT (B) objectives and combined Construction criteria cost surface (C)
Fig. 8: Cost surface map showing the ROW objective (A) and the PRT objective (B) and combined Security criteria cost surface (C)
Fig. 9: Hybrid cost surface map (A), visible locations to optimal routes (B) and all five route alternatives (C)
6. Conclusions
This paper presented a GIS-based methodology for identifying an optimal and cost-effective route for an oil and gas pipeline while taking into consideration the environmental, economic and security concerns associated with oil and gas pipeline routing. The effects on land cover and land uses, ground water contamination, costs of investment, human and wildlife security, and emergency responses to adverse events such as oil spillage and pipeline leakages were considered in the routing activity.
Given that governments and religious affiliations of the
people can change any time, factors with long-term effects
upon the installation and operation of the oil and gas
pipelines were key in the decision making process. While
the analyses were successful and objectives achieved, the
study noted that community participation in pipeline routing is the most essential component of any complex multi-criteria study. Socio-political, socio-economic and religious factors, for which data are often unavailable or unreliable, are recommended to be incorporated in any future studies. Similarly, land price surveys should be conducted where compulsory land purchases are required, in order to estimate the pre-installation market values of land.
Fig. 10: Construction costs variation

Fig. 11: Location of visible areas to the optimal routes

Acknowledgments

The authors acknowledge the technical and software support obtained from the Faculty of Engineering and Science, University of Greenwich. The authors also thank the various departments of the Uganda government, the GISTITOS project, the Nile Basin Initiative and the USGS Earth Explorer Project, to name but a few, that provided the required data. Finally, the lead author's profound gratitude goes to the Tullow Group Scholarship Scheme for providing the scholarship funding.
References
[1] F.A.K. Kaliisa, "Uganda's petroleum resources increase to 6.5 billion barrels oil in place", Department of Petroleum, Uganda, 2014. Available at: http://www.petroleum.go.ug/news/17/Ugandaspetroleum-resources-increase-to-65-billion-barrels-oilin-place. Last accessed: 15/06/2015.
[2] C.A. Mwesigye, "Why Uganda is the Best Investment Location in Africa", State House, Uganda, 2014. Available at: http://www.statehouse.go.ug/search/node/Why%20Uganda%20is%20the%20Best%20Investment%20Location%20in%20Africa. Last accessed: 15/06/2015.
[3] US EIA, "Uganda: Country Analysis Note", 2014. Available at: http://www.eia.gov/beta/international/country.cfm?iso=UGA. Last accessed: 15/04/2015.
[4] PEPD, "Uganda's Refinery Project Tender Progresses to the Negotiations Phase", Department of Petroleum, Uganda, 2014. Available at: http://www.petroleum.go.ug/news/13/UgandasRefinery-Project-Tender-Progresses-to-theNegotiations-Phase. Last accessed: 15/06/2015.
[5] Business Week, "Oil boss says pipeline quickest option for Uganda", East African Business Week, 2014. Available at: http://www.busiweek.com/index1.php?Ctp=2&pI=2052&pLv=3&srI=53&spI=20. Last accessed: 05/11/2014.
[6] I. Yeo, and J. Yee, "A proposal for a site location planning model of environmentally friendly urban energy supply plants using an environment and energy geographical information system (E-GIS) database (DB) and an artificial neural network (ANN)", Applied Energy, Vol. 119, 2014, pp. 99 - 117.
[7] P. Jankowski, "Integrating geographical information systems and multiple criteria decision-making methods", International Journal of Geographical Information Systems, Vol. 9, No. 3, 1995, pp. 251-273.
[8] B. Anifowose, D.M. Lawler, V.D. Horst, and L. Chapman, "Attacks on oil transport pipelines in Nigeria: A quantitative exploration and possible explanation of observed patterns", Applied Geography, Vol. 32, No. 2, 2012, pp. 636 - 651.
[9] S. Bagli, D. Geneletti, and F. Orsi, "Routeing of power lines through least-cost path analysis and multicriteria evaluation to minimise environmental impacts", Environmental Impact Assessment Review, Vol. 31, 2011, pp. 234 - 239.
[10] C.W. Baynard, "The landscape infrastructure footprint of oil development: Venezuela's heavy oil belt", Ecological Indicators, Vol. 11, 2011, pp. 789 - 810.
[11] UBOS, "National Population and Housing Census 2014, Provincial Results", 2014. Available at: http://www.ubos.org/onlinefiles/uploads/ubos/NPHC/NPHC%202014%20PROVISIONAL%20RESULTS%20REPORT.pdf. Last accessed: 13/02/2015.
[12] B. Bai, X. Li, and Y. Yuan, "A new cost estimate methodology for onshore pipeline transport of CO2 in China", Energy Procedia, Vol. 37, 2013, pp. 7633 - 7638.
[13] CCSTP, "Carbon Management GIS: CO2 Pipeline Transport Cost Estimation. Carbon Capture and Sequestration Technologies Program, Massachusetts Institute of Technology", 2009. Available at: http://sequestration.mit.edu/energylab/uploads/MIT/Transport_June_2009.doc. Last accessed: 02/05/2015.
[14] G. Heddle, H. Herzog, and M. Klett, "The Economics of CO2 Storage. MIT LFEE 2003-003 RP", 2003. Available at: http://mitei.mit.edu/system/files/2003-03rp.pdf. Last accessed: 12/05/2014.
[15] Oil & Gas, "GIS leads to more efficient route planning", Oil & Gas Journal, Vol. 91, No. 17, 1993, pp. 81.
[16] S. Pandian, "The political economy of trans-Pakistan gas pipeline project: assessing the political and economic risks for India", Energy Policy, Vol. 33, No. 5, 2005, pp. 659-670.
[17] S.k.N. Hippu, S.K. Sanket, and R.A. Dilip, "Pipeline politics—A study of India's proposed cross border gas projects", Energy Policy, Vol. 62, 2013, pp. 145 - 156.
[18] G. Dietl, "Gas pipelines: politics and possibilities". In: I.P. Khosla, Ed. "Energy and Diplomacy", Konark Publishers, New Delhi, 2005, pp. 74-90.
[19] F. Onuoha, "Poverty, Pipeline Vandalisation/Explosion and Human Security: Integrating Disaster Management into Poverty Reduction in Nigeria", African Security Review, Vol. 16, No. 2, 2007, pp. 94-108, DOI: 10.1080/10246029.2007.9627420.
[20] T.L. Saaty, The Analytical Hierarchy Process. New York: Wiley, 1980.
[21] R.A.N. Al-Adamat, I.D.L. Foster, and S.N.J. Baban, "Groundwater vulnerability and risk mapping for the Basaltic aquifer of the Azraq basin of Jordan using GIS, Remote sensing and DRASTIC", Applied Geography, Vol. 23, 2003, pp. 303-324.
[22] S. Secunda, M.L. Collin, and A.J. Melloul, "Groundwater vulnerability assessment using a composite model combining DRASTIC with extensive agricultural land use in Israel's Sharon region", Journal of Environmental Management, Vol. 54, 1998, pp. 39 - 57.
[23] MWE, "WATER SUPPLY ATLAS 2010", 2012. Available at: http://www.mwe.go.ug/index.php?option=com_docman&amp;task=cat_view&amp;gid=12&amp;Itemid=223. Last accessed: 15/02/2015.
[24] E.E. Jones, "Using Viewshed Analysis to Explore Settlement Choice: A Case Study of the Onondaga Iroquois", American Antiquity, Vol. 71, No. 3, 2006, pp. 523-538.
Dan Abudu was awarded a BSc in Computer Science (First
Class) by Gulu University, Uganda in 2010 and continued
his career in Data Management, serving at organisations
such as Joint Clinical Research Centre and Ministry of
Finance, Planning and Economic Development in Uganda,
and at Kingsway International Christian Centre, UK. He
briefly served in Academia as Teaching Assistant (August
2010 – April 2011). Dan has been active in GIS and Remote
Sensing research since 2013 with keen interests in GIS
applications in Oil and Gas sector. He is a member of the
African Association of Remote Sensing of the Environment
(AARSE) and was awarded an MSc in GIS with Remote
Sensing from the University of Greenwich, UK on the 22nd
July 2015.
Dr. Meredith Williams is a Senior Lecturer in Remote
Sensing and GIS at the Centre for Landscape Ecology &
GIS, University of Greenwich, Medway Campus, UK, with
over 23 years' experience in applied GIS and Remote
Sensing. He specialises in the application of Remote
Sensing and GIS to the monitoring of vegetation health, land
cover change, and fluvial systems. He has supervised a
wide range of PhD and MSc projects, including several in
collaboration with the oil and gas industry.
Hybrid Trust-Driven Recommendation System
for E-commerce Networks
Pavan Kumar K. N1, Samhita S Balekai1, Sanjana P Suryavamshi1, Sneha Sriram1, R. Bhakthavathsalam2
1 Department of Information Science and Engineering, Sir M. Visvesvaraya Institute of Technology, Bangalore, Karnataka, India
[email protected], [email protected], [email protected], [email protected]
2 Supercomputer Education and Research Centre, Indian Institute of Science, Bangalore, Karnataka, India
[email protected]
Abstract
In traditional recommendation systems, the challenging issues
in adopting similarity-based approaches are sparsity, cold-start
users and trustworthiness. We present a new paradigm of
recommendation system which can utilize information from
social networks including user preferences, item's general
acceptance, and influence from friends. A probabilistic model,
particularly for e-commerce networks, is developed in this paper
to make personalized recommendations from such information.
Our analysis reveals that similar friends have a tendency to
select the same items and give similar ratings. We propose a
trust-driven
recommendation
method
known
as
HybridTrustWalker. First, a matrix factorization method is
utilized to assess the degree of trust between users. Next, an
extended random walk algorithm is proposed to obtain
recommendation results. Experimental results show that our
proposed system improves the prediction accuracy of
recommendation systems, remedying the issues inherent in
collaborative filtering to lower the user’s search effort by listing
items of highest utility.
Keywords: Recommendation system, Trust-Driven, Social
Network, e-commerce, HybridTrustWalker.
1. Introduction
Recommendation systems (RS) (sometimes replacing
"system" with a synonym such as platform or engine) are
a subclass of information filtering systems that seek to predict the 'rating' or 'preference' that a user would give to
an item. RSs have changed the way people find products,
information, and even other people. They study patterns
of behaviour to know what someone will prefer from
among a collection of things he has never experienced.
RSs are primarily directed towards individuals who lack
sufficient personal experience or competence to evaluate
the potentially overwhelming number of alternative items
that a Web site, for example, may offer. A case in point is
a book recommendation system that assists users to select
a book to read. The popular Web site Amazon.com, for example, employs an RS to personalize the online store for
each customer. Since recommendations are usually
personalized, different users or user groups receive
diverse suggestions. In addition there are also
non-personalized recommendations. These are much
simpler to generate and are normally featured in
magazines or newspapers. Typical examples include the
top ten selections of books, CDs etc. While they may be
useful and effective in certain situations, these types of
non-personalized recommendations are not typically
addressed by RS research.
1.1 Recommendation System Functions
First, we must distinguish between the roles played by the
RS on behalf of the service provider from that of the user
of the RS. For instance, a travel recommendation system
is typically
introduced by a travel intermediary (e.g.,
Expedia.com) or a destination management organization
(e.g., Visitfinland.com) to increase its turnover (Expedia),
i.e. sell more hotel rooms, or to increase the number of
tourists to the destination. The user's primary motivations for accessing the two systems, on the other hand, are to find a suitable hotel and interesting events/attractions when
visiting a destination [1]. In fact, there are various reasons
as to why service providers may want to exploit this
technology:
Increase in sales: This goal is achieved because the
recommended items are likely to satisfy users’ functional
preferences. Presumably the user will recognize this after
having tried several recommendations. From the service
providers’ point of view, the primary goal of introducing
a RS is to increase the conversion rate, i.e. the number of
users that accept the recommendation and consume an
item compared to the number of visitors browsing
through for information.
Exposure to a wider product range: Another major
function of a RS is to enable the user to select items that
might be hard to find without a precise recommendation.
For instance, in a movie RS such as Netflix, the service
provider is interested in renting all the DVDs in the
catalogue, not just the most popular ones.
Consolidating user satisfaction and fidelity: The user
will find the recommendations interesting, relevant, and
accurate, and when combined with a properly designed
human-computer interaction she will also enjoy using the
system. Personalization of recommendations improves
user loyalty. Consequently, the longer the user interacts
with the site, the more refined her user model becomes,
i.e., the system representation effectively customizing
recommendations to match the user’s preferences.
Demographic: This type of system recommends items
based on the demographic profile of the user. The
assumption is that different recommendations should be
generated for different demographic niches. Many Web
sites adopt simple and effective personalization solutions
based on demographics. For example, users are
dispatched to particular Web sites based on their language
or country. Or suggestions may be customized according
to the age of the user. While these approaches have been
quite popular in the marketing literature, there has been
relatively little proper RS research into demographic
systems.
Improve QoS through customer feedback: Another
important function of a RS, which can be leveraged to
many other applications, is the description of the user’s
preferences, either collected explicitly or predicted by the
system. The service provider may then decide to reuse
this knowledge for a number of other goals such as
improving the management of the item’s stock or
production.
1.2 Common Recommendation Techniques
In order to implement its core function, identifying the
useful items for the user, an RS must predict that an item
is worth recommending. In order to do this, the system
must be able to predict the utility of some of them, or at
least compare the utility of some items, and then decide
what items to recommend based on this comparison. The
prediction step may not be explicit in the
recommendation algorithm but we can still apply this
unifying model to describe the general role of a RS. Some
of the recommendation techniques are given below:
Collaborative filtering: The simplest and original
implementation of this approach recommends the items
that other users with similar tastes liked, to the target user.
The similarity of taste between two users is calculated
based on the rating history of the users. Collaborative
filtering is considered to be the most popular and widely
implemented technique in RS. Neighbourhood methods
focus on relationships between items or, alternatively,
between users. An item-item approach models the
preference of a user to an item based on ratings of similar
items by the same user. Nearest-neighbours methods
enjoy considerable popularity due to their simplicity,
efficiency, and their ability to produce accurate and
personalized recommendations. The authors of [2] address the essential decisions that are required when implementing a neighbourhood-based recommender system and provide practical guidance on how to make such decisions.
Content-based: The system learns to recommend items
that are similar to the ones that the user liked in the past.
The similarity of items is calculated based on the features
associated with the compared items. For example, if a
user has positively rated a movie that belongs to the horror genre, then the system can learn to recommend other movies from this genre.
Knowledge-based: Recommendation based on specific
domain knowledge about how certain item features meet
users’ needs and preferences and, ultimately, how the item
is useful for the user. In these systems a similarity function
estimates how well the user needs match the
recommendation. The similarity score can be directly
interpreted as the utility of the recommendation for the
user. Content-based systems are another type of knowledge-based RS. In terms of the knowledge used, both
systems are similar: user requirements are collected;
repairs for inconsistent requirements are automatically
proposed in situations where no solutions could be found;
and recommendation results are explained. The major
difference lies in the way solutions are calculated.
Knowledge-based systems tend to work better than others
at the beginning of their deployment but if they are not
equipped with learning components they may be surpassed
by other shallow methods that can exploit the logs of the
human/computer interaction (as in CF).
1.3 Problems in Existing Recommendation Systems
Sparsity problem: Although the volume of user-item rating data is extremely large, each user typically rates only a small fraction of the available items. As a result, the density of the available user
feedback data is often less than 1%. Due to this data
sparsity, collaborative filtering approaches suffer
significant difficulties in identifying similar users or
items via common similarity measures, e.g., cosine
measure, in turn, deteriorating the recommendation
performance.
Cold-start problem: Apart from sparsity, cold-start
problem, e.g., users who have provided only little
feedback or items that have been rated less frequently or
even new users or new items, is a more serious challenge
in recommendation research. Because of the lack of user
feedback, any similarity-based approaches cannot handle
such cold-start problem.
Trustworthiness problem: Prediction accuracy in
recommendation systems requires a great deal of
106
Copyright (c) 2015 Advances in Computer Science: an International Journal. All Rights Reserved.
ACSIJ Advances in Computer Science: an International Journal, Vol. 4, Issue 4, No.16 , July 2015
ISSN : 2322-5157
www.ACSIJ.org
consideration as it has a strong impact on customer experience. Noisy information and spurious feedback with malicious intent must be disregarded in recommendation considerations. Trust-driven recommendation methods refer to a selective group of users that the target user trusts and use their ratings while making recommendations. Employing 0/1 trust relationships, where each trusted user is treated as an equal neighbour of the target user, proves to be rudimentary as it does not encapsulate the underlying level of trust between users.
As a solution, the concept of Trust Relevancy [3] is introduced first, which measures the trustworthiness factor between neighbours, defining the extent to which the trusted user's rating affects the target user's predicted rating of the item. Next, the algorithm HybridTrustWalker performs a random walk on the weighted network. The result of each iteration is polymerized to predict the rating that a target user might award to an item to be recommended. Finally, we conduct experiments with a real-world dataset to evaluate the accuracy and efficiency of the proposed method.
2. Related Work
Since the first papers were published in 1998, research in recommendation systems has greatly improved the reliability of recommendations, which has been attributed to several factors. Paolo Massa and Bobby Bhattacharjee, in their paper Using Trust in Recommendation System: An Experimental Analysis (2004), show that any two users usually have few items rated in common. For this reason,
the classic RS technique is often ineffective and is not
able to compute a user similarity weight for many of the
users. In 2005, John O'Donovan and Barry Smyth
described a number of ways to establish profile-level and
item-level trust metrics, which could be incorporated into
standard collaborative filtering methods.
Shao et al (2007) proposed a user-based CF algorithm
using Pearson Correlation Coefficient (PCC) to compute
user similarities. PCC measures the strength of the
association between two variables. It uses historical item
ratings to classify similar users and predicts the missing
QoS values of a web service by considering QoS value of
service used by users similar to her [4].

Zheng et al furthered the collaborative filtering dimension of recommendation systems for web service QoS prediction by systematically combining both item-based PCC (IPCC) and user-based PCC (UPCC). However, these correlation methods face challenges in providing recommendations for cold-start users, as they consider users with similar QoS experiences for the same services to be similar [3].
The most common trust-driven recommendation
approaches make users explicitly issue trust statements
for other users. Golbeck proposed an extended-breadth
first-search method in the trust network for prediction
called TidalTrust [5]. TidalTrust finds all neighbours who
have rated the to-be recommended service/item with the
shortest path distance from the given user and then
aggregates their ratings, with trust values between the
given user and these neighbours as weights. MoleTrust [6] is similar to TidalTrust but only considers the
raters within the limit of a given maximum-depth.
The maximum-depth is independent of any specific user
and item.
3. Proposed System
In a trust-driven recommendation [7] paradigm, the trust
relations among users form a social network. Each user
invokes several web services and rates them according to
the interaction experiences. When a user needs recommendations, the system predicts the ratings that the user might provide and then recommends services with high predicted ratings. Hence, the target of the recommendation system is to predict users' ratings on services by analysing the social network and user-service
rating records.
There is a set of users U = {u1, u2, ...,um} and a set of
services S = {s1, s2, ..., sn} in a trust driven
recommendation system. The ratings expressed by users
on services/items are given in a rating matrix R = [Ru,s]m×n. In this matrix, Ru,s denotes the rating of user u on service (or item) s. Ru,s can be any real number, but often ratings are integers in the range [1, 5]. In this paper, without loss of generality, we map the ratings 1, …, 5 to the interval
[0,1] by normalizing the ratings. In a social rating
network, each user u has a set Su of direct neighbours, and
tu,v denotes the value of social trust u has on v as a real
number in [0, 1]. Zero means no trust, and one means full
trust. Binary trust networks are the most common trust
networks (Amazon, eBay, etc.). The trust values are
given in a matrix T = [Tu,v]m×m. Non-zero elements Tu,v in
T denote the existence of a social relation from u to v.
Note that T is asymmetric in general [8].
Fig. 1. Illustration of trust-driven recommendation approach (users in an e-commerce network, connected by trust relations, invoke and rate web services/items)
Thus, the task of a trust-driven service recommendation
system is as follows: Given a user u0 belonging to U and
a service s belonging to S for which Ru0,s is unknown, predict the rating for u0 on service s using R and T. This is done by first determining a degree of trust between users in the social network to obtain a weighted social network from the Epinions data, using the 0/1 trust relation from the
input dataset and cosine similarity measures of user and
service latent features. Then, a random walk performed
on this weighted network yields a resultant predicted
rating. Ratings over multiple iterations are polymerized to
obtain the final predicted ratings.
3.1 Trust-Driven Recommendation Approach
Incorporating trust metrics in a social network does not
absolutely affect the target user’s ratings because the
target user and trusted users might differ in interests,
preferences and perception. The concept of trust
relevancy considers both the trust relations between users
together with the similarities between users. This section
presents our approach in detail for trust-driven service
recommendations. First, we define the concept of trust
relevancy, on which our recommendation algorithm is
based.
Then,
we
introduce
the
algorithm
HybridTrustWalker by extending the random walk
algorithm in [7]. Lastly, the predicted ratings are returned.
The methodology is summarized as shown in Fig. 2.
t(u,v) is the degree of trust of u towards v. By computing
the trust relevancy between all connected users in a social
network, we can obtain a weighted trust network (SN+),
where the weight of each edge is the value of trust
relevancy. The aim of calculating trust relevancy is to
determine the degree of association between trusted
neighbours.
In RSs, the user-item/service rating matrix is usually very
large in terms of dimensionality but most of the score
data is missing. Therefore, matrix factorization (MF) has
been widely utilized in recommendation research to
improve efficiency by dimension reduction [9]. For an
m * n user-service rating matrix R, the purpose of matrix
factorization is to decompose R into two latent feature
matrices of users and items with a lower dimensionality d
such that

R ≈ PQ^T        (2)
where P ∈ R^(m×d) and Q ∈ R^(n×d) represent the user and item latent feature matrices, respectively. Each row of the respective matrix represents a user or service latent
feature vector. After decomposing the matrix, we use the
cosine similarity measure to calculate the similarity
between two users. Given the latent feature vectors of two
users, u and v, their similarity calculation is as follows:
simU(u, v) = cos(u, v) = (u · v) / (‖u‖ ‖v‖)        (3)
where u and v are latent feature vectors of users u and v.
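A minimal sketch of Eqs. (1)-(3), assuming toy latent feature vectors rather than factors learned from the Epinions matrix, is given below.

```python
# Sketch of Eqs. (1)-(3): user similarity from MF latent vectors via cosine
# similarity, multiplied by the trust value t(u,v). Vectors are toy values.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def trust_relevancy(p_u, p_v, t_uv):
    """tr(u, v) = simU(u, v) * t(u, v), Eq. (1)."""
    return cosine(p_u, p_v) * t_uv

p_u = np.array([0.8, 0.1, 0.3])   # latent feature vector of user u (assumed)
p_v = np.array([0.7, 0.2, 0.4])   # latent feature vector of user v (assumed)
print(trust_relevancy(p_u, p_v, t_uv=1.0))
```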
3.2 Recommendation Algorithm
The HybridTrustWalker algorithm attains a final result
through multiple iterations. For each iteration, the random
walk starts from the target user u0 in the weighted trust
network SN+. In the kth step of the random walk in the
trust network, the process will reach a certain node u. If
user u has rated the to-be-recommended service s, then
the rating of s from user u is directly used as the result for
the iteration. Otherwise, the process has two options, one
of which is:
• The random walk will stop at the current node u with a certain probability φu,s,k. Then, the service si is selected from RSu based on the probability Fu(si). The rating of si from u is the result for the iteration.
The probability that the random walk stops at user u in
the k-th step is affected by the similarity of the items that
u has rated and the to-be-recommended service s. The
more similar the rated items and s, the greater the
probability is to stop. Furthermore, a larger distance
between the user u and the target user u0 can introduce
more noise into the prediction. Therefore, the value of
probability φu,s,k should increase when k increases [10].
Thus, the calculation of φu,s,k is as follows:

φu,s,k = max(si ∈ RSu) simS(si, s) × 1 / (1 + e^(−k/2))        (4)
where simS(si, s) is the similarity between the services si
and s. The sigmoid function of k provides a value close to 1 for large values of k, and a small value for small values of k. In
contrast to collaborative filtering techniques [2], this
method can cope with services that do not have ratings
from common users. Service similarities are calculated
using Matrix Factorization [8]:
simS(si, sj) = cos(si, sj) = (si · sj) / (‖si‖ ‖sj‖)        (5)
When it is determined that user u is the terminating point
of the walk, the method will need to select one service
from RSu. The rating of si from u is the outcome for the
iteration. The probability of the chosen service Fu(si ) is
calculated according to the following formula:
Fu(si) = simS(si, s) / Σ(sj ∈ RSu) simS(sj, s)        (6)
Services are selected by Fu(si) through a roulette-wheel selection [11]; that is, services with higher values of Fu(si) are more likely to be selected. Also, adopting the "six degrees of separation" principle [12] and setting the maximum number of steps of each walk to 6 prevents infinite looping of the random walk.
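A minimal roulette-wheel selection sketch (a linear scan over cumulative probabilities, rather than the stochastic-acceptance variant of [11]) is shown below; the probability table is hypothetical.

```python
# Roulette-wheel selection: services with a higher Fu(si) get a
# proportionally higher chance of being picked.
import random

def roulette_select(probabilities):
    """probabilities: dict mapping service id -> Fu(si); values sum to 1."""
    r, cumulative = random.random(), 0.0
    for service, p in probabilities.items():
        cumulative += p
        if r <= cumulative:
            return service
    return service  # fallback for floating-point rounding

print(roulette_select({"s1": 0.5, "s2": 0.3, "s3": 0.2}))
```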
The other option during the walk, if the user u has not rated the to-be-recommended service s, is:

• The walk can continue with a probability of 1 − φu,s,k. In this case, a target node for the next step is selected from the set of trusted neighbours of the user u.
To distinguish different users’ contribution to the
recommendation prediction, we propose that the target
node v for the next step from the current user u is selected
according to the following probability:
Eu(v) = tr(u, v) / Σ(w ∈ TUu) tr(u, w)        (7)
where tr(u, v) is the trust relevancy introduced earlier. The trust relevancy guarantees that each step of the walk will choose the user that is more similar to the current user, making the recommendation more accurate and thus enhancing productivity and user acceptance.

3.3 HybridTrustWalker Algorithm

Input: U (user set), S (service set), R (rating matrix), SN+ (weighted social network), u0 (the target user), s (to-be-recommended service).
Output: r (predicted rating).
Pseudocode:

1   set k = 1 ;              // the step of the walk
2   set u = u0 ;             // set the start point of the walk as u0
3   set max-depth = 6 ;      // the max step of the walk
4   set r = 0 ;
5   while (k <= max-depth) {
6       u = selectUser(u) ;  // select v from TUu as the target of the next step based on the probability Eu(v)
7       if (u has rated s) {
8           r = ru,s ;
9           return r ;
10      }
11      else {
12          if (random(0,1) < φu,s,k || k == max-depth) {  // stop at the current node
13              si = selectService(u) ;  // service si is selected from RSu based on the probability Fu(si)
14              r = ru,si ;
15              return r ;
16          }
17          else
18              k++ ;
19      }
20  }
21  return r ;

Fig. 3. Example of HybridTrustWalker
Fig.3 shows an example to illustrate the algorithm clearly.
The weight of each edge represents the probability Eu(v).
Suppose the service s3 is to be recommended for the user
u1. For the first step of the walk, u2 is more likely to be
selected as the target node since the value of Eu(u2) is
larger. If u2 has rated s3 with the rating r, r will be
returned as the result of this walk (Line.7–9). Otherwise,
if the termination condition (Line.12) is not reached, the
walk would continue. For the second step, u5 is selected.
It should also check whether u5 has rated s3. If u5 has not
rated s3 but the termination condition is reached, it will
select the most similar service to s3 from the items u5 has
rated (Line.13). Then, the rating of the selected service by
u5 is returned as the result of this walk.
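The walk described above and in the pseudocode can be sketched in a few lines of Python; the rating, trust-relevancy and similarity tables below are toy stand-ins, and the stop case simply picks the most similar rated service instead of performing the roulette draw sketched earlier.

```python
# Simplified, self-contained sketch of one HybridTrustWalker iteration
# (Eqs. 4-7). All data structures are toy stand-ins, not the Epinions data.
import math
import random

ratings = {("u2", "s3"): 0.8, ("u5", "s1"): 0.6}        # (user, service) -> rating
trust_relevancy = {"u1": {"u2": 0.6, "u3": 0.2}, "u2": {"u5": 0.5}}
sim_s = {("s1", "s3"): 0.7}                              # service-service similarity

def service_sim(a, b):
    return sim_s.get((a, b), sim_s.get((b, a), 0.0))

def phi(u, s, k):
    """Stop probability, Eq. (4): max service similarity scaled by a sigmoid of k."""
    sims = [service_sim(si, s) for (ui, si) in ratings if ui == u]
    return max(sims, default=0.0) / (1.0 + math.exp(-k / 2.0))

def next_user(u):
    """Pick the next node with probability Eu(v) proportional to tr(u, v), Eq. (7)."""
    neighbours = trust_relevancy.get(u, {})
    if not neighbours:
        return None
    users, weights = zip(*neighbours.items())
    return random.choices(users, weights=weights)[0]

def single_walk(u0, s, max_depth=6):
    u = u0
    for k in range(1, max_depth + 1):
        u = next_user(u) or u
        if (u, s) in ratings:
            return ratings[(u, s)]
        if random.random() < phi(u, s, k) or k == max_depth:
            rated = {si: r for (ui, si), r in ratings.items() if ui == u}
            if not rated:
                return 0.0
            best = max(rated, key=lambda si: service_sim(si, s))
            return rated[best]
    return 0.0

print(single_walk("u1", "s3"))
```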
3.4 Ratings Prediction
The HybridTrustWalker algorithm attains a final result
through multiple iterations. The final predicted rating is
obtained by polymerizing the results returned from every
iteration:
pu0,s = (1/n) Σ(i=1..n) ri        (8)
where ri is the result of each iteration, n is the number of
iterations.
To obtain a stable predict result, the algorithm needs to
perform an adequate number of random walks. We can
decide the termination condition of the algorithm through
the calculation of the variance of the prediction values.
The variance of the prediction results after a random walk
is denoted and calculated as:
σi² = (1/i) Σ(j=1..i) (rj − r̄)²        (9)

where rj is the result of every iteration, i is the total number of iterations up to the current walk, and σi² is the variance obtained from the last i iterations, which will eventually tend to a stable value. When |σi+1² − σi²| ≤ ε, the algorithm terminates (ε = 0.0001).
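A sketch of this aggregation and variance-based stopping rule, assuming a walk() callable such as the single_walk sketch above, is given below.

```python
# Sketch of Eqs. (8)-(9): repeat random walks, average the returned ratings,
# and stop once the variance of the results stabilises (|var_{i+1} - var_i| <= eps).
import statistics

def predict_rating(walk, epsilon=1e-4, min_walks=10, max_walks=10000):
    results, prev_var = [], None
    for _ in range(max_walks):
        results.append(walk())
        if len(results) >= min_walks:
            var = statistics.pvariance(results)
            if prev_var is not None and abs(var - prev_var) <= epsilon:
                break
            prev_var = var
    return sum(results) / len(results)   # Eq. (8): mean of all iteration results

# Usage (assuming the single_walk sketch above is available):
# print(predict_rating(lambda: single_walk("u1", "s3")))
```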
4. Results and Discussion
We use the dataset of Epinions published by the authors
of [11]. The large size and characteristically sparse useritem rating matrix makes it suitable for our study. This
contains data of 49,290 users who have rated 139,738
items. There are a total of 664,824 ratings with 487,181
trust relations within the network.
We adopt the Root Mean Squared Error (RMSE), which
is widely used in recommendation research, to measure
the error in recommendations:
RMSE = √( Σ(u,s) (Ru,s − Ȓu,s)² / N )        (10)
where Ru,s is the actual rating the user u gave to the service s, Ȓu,s is the predicted rating of user u on service s, and N denotes the number of tested
ratings. The smaller the value of RMSE is, the more
precisely the recommendation algorithm performs. We
use the coverage metric to measure the percentage of
pairs
of
<user, service>, for which a predicted value can be
generated:
Coverage = S / N        (11)
where S denotes the number of predicted ratings and N
denotes the number of tested ratings. We have to convert
RMSE into a precision metric in the range of [0, 1]. The precision is denoted as follows:

precision = 1 − RMSE / 4        (12)

To combine RMSE and coverage into a single evaluation metric, we compute the F-Measure as follows:

F-Measure = 2 × precision × coverage / (precision + coverage)        (13)

Comparison analysis of performance measures for various RS paradigms, including collaborative filtering approaches:

Table 1: Comparing results for all users

Algorithms           RMSE    Coverage (%)   F-measure
Item based CF        1.345      67.58         0.6697
User based CF        1.141      70.43         0.7095
Tidal trust          1.127      84.15         0.7750
Mole trust           1.164      86.47         0.7791
Trust Walker         1.089      95.13         0.8246
HybridTrustWalker    1.012      98.21         0.8486

Fig. 4. Comparing results of all users.
The reduction of precision of the proposed model is
compensated by the increased coverage and F-measure as
shown in Table 1 and Table 2 (in the case of cold-start
users).
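The four metrics can be reproduced with a few helper functions; the sample ratings are made up, and the precision formula assumes the 1-5 rating scale, under which 1 − RMSE/4 matches the F-measure values reported in Tables 1 and 2.

```python
# Sketch of the evaluation metrics in Eqs. (10)-(13); sample data is made up.
import math

def rmse(actual, predicted):
    pairs = [(a, p) for a, p in zip(actual, predicted) if p is not None]
    return math.sqrt(sum((a - p) ** 2 for a, p in pairs) / len(pairs))

def coverage(predicted):
    return sum(p is not None for p in predicted) / len(predicted)

def f_measure(actual, predicted):
    precision = 1 - rmse(actual, predicted) / 4          # Eq. (12), 1-5 scale assumed
    cov = coverage(predicted)
    return 2 * precision * cov / (precision + cov)       # Eq. (13)

actual = [4, 5, 3, 2, 5]
predicted = [3.5, 4.8, None, 2.2, 4.1]                   # None = no prediction possible
print(round(rmse(actual, predicted), 3), round(f_measure(actual, predicted), 3))
```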
Table 2: Comparing results of cold-start users

Algorithms           RMSE    Coverage (%)   F-measure
Item based CF        1.537      23.14         0.3362
User based CF        1.485      18.93         0.2910
Tidal trust          1.237      60.75         0.6463
Mole trust           1.397      58.29         0.6150
Trust Walker         1.212      74.36         0.7195
HybridTrustWalker    1.143      79.64         0.7531
Fig. 5. Comparing results of cold-start users

This means that the ratings from the largest number of relevant users are considered during the rating prediction in each step of the walk in HybridTrustWalker. For cold-start users (Fig. 5), item-based and user-based CF perform poorly: they have the highest RMSE and the lowest coverage of all the algorithms considered in the analysis. Due to the introduction of the trust factor, TidalTrust, MoleTrust and TrustWalker have improved coverage compared to CF, whereas precision does not change much.
5. Conclusion

The proposed recommendation system has three main objectives: (1) tackling the problem of recommendations for cold-start users; (2) addressing the problem of recommendations with a large and sparse user-service rating matrix; and (3) handling trust relations in a recommendation system. The main contributions of the HybridTrustWalker approach presented in this paper include introducing the concept of trust relevancy, which is used to obtain a weighted social network.

Furthermore, for this model, we develop a hybrid random walk algorithm. Existing methods usually select the target node for each step at random when choosing to walk. By contrast, the proposed approach selects the target node based on trust and similarity, so the recommendation contribution from trusted users is more accurate. We also utilize a large-scale real data set to evaluate the accuracy of the algorithm. The experimental results show that the proposed method can be directly applied in existing e-commerce networks with improved accuracy. Personalized service recommendation systems have been heavily researched in recent years, and the proposed model provides an effective solution. We believe that there is scope for improvement. For example, here the trust relationships between users in the social trust network are considered to be invariant, but in reality trust relationships can change over time. In addition, user ratings are also time sensitive; as a result, ratings that are not up-to-date may become noise for recommendations. In large user communities, it is only natural that, besides trust, distrust also starts to emerge. Hence, the more users issuing distrust statements, the more interesting it becomes to also incorporate this new information. Therefore, we plan to include time sensitivity and the distrust factor in our future work.
Acknowledgments
The authors sincerely thank the authorities of
Supercomputer Education and Research Center, Indian
Institute of Science for the encouragement and support
during the entire course of this work.
References
[1] Ricci, Francesco, Lior Rokach, and Bracha Shapira. Introduction to Recommender Systems Handbook. Springer US, 2011.
[2] Koren, Yehuda. "Factorization meets the neighborhood: a multifaceted collaborative filtering model." In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 426-434. ACM, 2008.
[3] Deng, Shuiguang, Longtao Huang, and Guandong Xu. "Social network-based service recommendation with trust enhancement." Expert Systems with Applications 41, no. 18 (2014): 8075-8084.
[4] Shao, Lingshuang, Jing Zhang, Yong Wei, Junfeng Zhao, Bing Xie, and Hong Mei. "Personalized QoS prediction for web services via collaborative filtering." In Web Services, 2007. ICWS 2007. IEEE International Conference on, pp. 439-446. IEEE, 2007.
[5] Golbeck, Jennifer Ann. "Computing and applying trust in web-based social networks." (2005).
[6] Massa, Paolo, and Paolo Avesani. "Trust-aware recommender systems." In Proceedings of the 2007 ACM Conference on Recommender Systems, pp. 17-24. ACM, 2007.
[7] Jamali, Mohsen, and Martin Ester. "TrustWalker: a random walk model for combining trust-based and item-based recommendation." In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 397-406. ACM, 2009.
[8] Sarwar, Badrul, George Karypis, Joseph Konstan, and John
Riedl. Application of dimensionality reduction in
recommender system-a case study. No. TR-00-043.
Minnesota Univ Minneapolis Dept of Computer Science,
2000.
[9] Koren, Yehuda, Robert Bell, and Chris Volinsky. "Matrix
factorization
techniques
for
recommender
systems." Computer 8 (2009): 30-37.
[10] Salakhutdinov, Ruslan, and Andriy Mnih. "Probabilistic
Matrix Factorization Advances in Neural Information
Processing
Systems
21
(NIPS
21)."Vancouver,
Canada (2008).
[11] Lipowski, Adam, and Dorota Lipowska. "Roulette-wheel selection via stochastic acceptance." Physica A: Statistical Mechanics and its Applications 391, no. 6 (2012): 2193-2196.
[12] Milgram, Stanley. "The small world problem." Psychology
today 2, no. 1 (1967): 60-67.
Mr. Pavan Kumar K N obtained his B.E. degree with distinction
in Information Science and Engineering from Visvesvaraya
Technological University. Presently he is taking up the position of
Trainee Decision Scientist in Mu Sigma, Bangalore, India.
His areas of interests include Data Analytics and Cyber Security.
Ms. Samhita S Balekai received her B.E degree in Information
Science and Engineering from Visvesvaraya Technological
University. She secured an offer for the position of Software
Engineer in Accenture, Bangalore, India. Her areas of interests
are Data Analytics, Social Networks, Data Warehousing and
Mining.
Ms. Sanjana P Suryavamshi was awarded her B.E. degree with
distinction in Information Science and Engineering from
Visvesvaraya Technological University. Presently she is
employed as a Software Engineer in Tata Consultancy Services
(TCS), Bangalore, India. Her areas of interests are Networks and
Cyber Security.
Ms. Sneha Sriram earned her B.E. degree in Information
Science and Engineering from Visvesvaraya Technological
University. She is pursuing her M.S. degree in Information
Technology Management from University of Texas, Dallas.
Her areas of interests are Enterprise Systems and Information
Technology.
Dr. R. Bhakthavathsalam is presently working as a Principal
Research Scientist in SERC, Indian Institute of Science,
Bangalore. His areas of interests are Pervasive Computing and
Communication, Wireless Networks and Electromagnetics with a
special reference to exterior differential forms. Author held the
position of Fellow of Jawaharlal Nehru Centre for Advanced
Scientific Research during 1993 - 1995. He is a Member of IEEE
Communication Society, ACM and CSI.
Correlated Appraisal of Big Data, Hadoop and
MapReduce
Priyaneet Bhatia1, Siddarth Gupta2
1 Department of Computer Science and Engineering, Galgotias College of Engineering and Technology, Uttar Pradesh Technical University, Greater Noida, Uttar Pradesh 201306, India
[email protected]
2 Department of Computer Science and Engineering, Galgotias University, Greater Noida, Uttar Pradesh 203208, India
[email protected]
Abstract
Big data has been an imperative quantum globally.
Gargantuan data types starting from terabytes to
petabytes are used incessantly. But, to cache these
database competencies is an arduous task. Although,
conventional database mechanisms were integral
elements for reservoir of intricate and immeasurable
datasets, however, it is through the approach of NoSQL
that is able to accumulate the prodigious information in a
proficient style. Furthermore, the Hadoop framework is
used, which has numerous components. One of its foremost constituents is MapReduce, the programming model through which purposive knowledge is mined and extracted. In this paper, the
postulates of big data are discussed. Moreover, the
Hadoop architecture is shown as a master- slave
procedure to distribute the jobs evenly in a parallel style.
The MapReduce has been epitomized with the help of an
algorithm. It represents WordCount as the criterion for
mapping and reducing the datasets.
Keywords: Big Data, Hadoop, MapReduce, RDBMS,
NoSQL, Wordcount
1. Introduction
As immense amount of data is being generated day
by day, efficiency of storage capacity for this huge
information becomes a painful task [1]. Therefore
exabytes or petabytes of database known as big
data need to be scaled down to smaller datasets
through an architecture called Hadoop.
Apache Hadoop is an open source framework on
which the big data is processed with the help of
MapReduce [2]. A programming model,
MapReduce uses basic divide and conquer
technique to its own map and reduce functions. On
copious datasets it processes key/value pairs to
generate intermediate key/value pairs and then,
with the help of these pairs, merges the
intermediate values to form a smaller sets of
key/values sets [3][4]. The reduced data bytes of
massive information are produced. The rest of the
paper is formulated as follows. Section 2 covers the concepts of big data, its 4 V's and its applications in real world scenarios. Section 3 describes the comparison between RDBMS and NoSQL and why NoSQL rather than RDBMS is used in today's world. Section 4 explores Apache Hadoop in detail and its use in big data. Section 5 analyzes the MapReduce paradigm, its use in the Hadoop framework, and its significance in enormous data reduction. Section 6 explicates the table comparisons of big data and Hadoop from various survey papers. Finally, the paper is concluded.

2. Big Data Concepts

2.1 Outline of Big Data

Let's start with big data. What is big data? Why has it created a buzz? Why is big data so essential in our daily chores of life? Where is it used? All these questions have made everyone curious. Big data is a collection of large and complex datasets that has become very difficult to handle using traditional relational database management tools [5].
2.2 Four Vs‟ of Big Data
Figure 1: 4 Vs of Big Data (Volume, Velocity, Variety, Veracity)
Big data has its own 4 characteristics shown above
in Fig 1 as:
i. Volume: refers to the amount or quantity of data. Since data is huge and complex in size, often larger than 1024 terabytes, it becomes a challenge to extract relevant information using traditional database techniques. E.g. the world produces 2.5 quintillion bytes in a year.
ii. Variety: represents the state of being varied or diversified. All types of formats are available: structured, unstructured, semi-structured data, etc. These varieties of formats need to be searched, analyzed, stored and managed to get the useful information. E.g.: geospatial data, climate information, audio and video files, log files, mobile data, social media, etc. [6].
iii. Velocity: the rate or speed at which data arrives. Dealing with bulky, high-dimensional data that streams in at an erratic rate is still an eminent challenge for many organizations. E.g. Google produces 24 TB/day; Twitter handles 7 TB/day, etc.
iv. Veracity: refers to the accuracy of the information extracted. Here, data is mined for profitable purposes [7].
2.3 Big Data in Real World Scenarios
a) Facebook generates 10-20 billion photos, which is approximately equivalent to 1 petabyte.
b) Earlier, hard-copy photographs took around 10 gigabytes of space with a Canon camera; nowadays, digital cameras produce more than 35 times the photographic data that the old cameras used to, and this is increasing day by day [8].
c) Videos are uploaded to YouTube at a rate of 72 hours of footage per minute.
d) Data produced by Google is approximately 100 petabytes per month [9].
3. RDBMS VS NOSQL
3.1 RDBMS
For several decades, relational database
management system has been the contemporary
benchmark in various database applications. It
organizes the data in a well structured pattern.
Unfortunately, ever since the dawn of the big data era, information has come mostly in unstructured forms. As a result, the traditional database system is unable to handle such prodigious volumes of data, and it has therefore not been adopted as a scalable solution to meet the demands of big data [10].
Figure 2: CAP Theorem (consistency, availability and partitioning at the three vertices)
4.4 Hadoop Business Problems
i. Marketing analysis: market surveys are used to understand consumer behaviour and improve the quality of the product. Many companies use feedback surveys to study shopper attitudes.
ii. Purchaser analysis: it is better to understand the interests of the current customer than those of a new one. Therefore, the best approach is to collect as much information as possible to analyze what the buyer was doing before he left the shopping mall.
iii. Customer profiling: it is essential to identify specific groups of consumers having similar interests and preferences in purchasing goods from the markets.
iv. Recommendation portals: these online shopping sites collect data not only from your own activity but also from users whose profiles match yours, so that their search engines can recommend websites that are likely to be useful to you, e.g. Flipkart, Amazon, Paytm, Myntra etc.
v. Ads targeting: ads are a great nuisance when we are shopping online, but they stay with us. Ad companies put their ads on popular social media sites so they can collect large amounts of data about what we are doing when we are actually shopping [16].
5. MapReduce
5.1 Understanding MapReduce
MapReduce is a programming paradigm, developed by Google, which is designed to solve a single problem. It is basically used as an implementation procedure to process large datasets by using map and reduce operations [17].
5.2 Principles of MapReduce
a. Lateral computing: provides parallel data processing across the nodes of a cluster using the Java-based API. It keeps working on commodity hardware in case of any hardware failure.
b. Programming languages: Java, Python and R can be used for coding, creating and running jobs for the mapper and reducer executables.
c. Data locality: the ability to move the computation close to where the data is. That means Hadoop schedules MapReduce tasks close to where the data exists, and the node works on that local data. The idea of bringing the compute to the data rather than bringing the data to the compute is the key to understanding MapReduce.
d. Fault tolerance with shared nothing: the Hadoop architecture is designed so that tasks have no dependency on each other. When a node failure occurs, the MapReduce jobs are retried on other healthy nodes to prevent any delay in the performance of the task. Moreover, node failures are detected and handled automatically and programs are restarted as needed [18].
5.3 Parallel Distributed Architecture
MapReduce is designed as a master-slave framework, shown in Fig 3, which works with job and task trackers. The master is the JobTracker, which coordinates execution across the mappers and reducers over the set of data. On each slave node, a TaskTracker executes either a map or a reduce task. Each TaskTracker reports its status to its master.
Figure 3: Master Slave Architecture (JobTracker as master, TaskTrackers on slaves 1-3)
5.4 Programming Model
MapReduce consists of 2 parts: map and reduce functions.
a) Map part: This is the first part of MapReduce. When MapReduce runs as a job, the mapper runs on each node where the data resides. Once it executes, it will
create a set of <key/value> pairs on each
node.
b) Reduce part: In the 2nd part of MapReduce,
the reducer will execute on some nodes, not
all the nodes. It will create aggregated sets of
<key, value> pairs on these nodes. The output
of this function is a single combined list.
5.5 Word count
The Hello World of MapReduce programs is WordCount. It comes from Google trying to solve the data problem of counting all the words on the Web, and it is the de facto standard for starting with Hadoop programming. It takes some text as input and produces a list of words and their counts.
Following is the pseudocode of WordCount [19][20]:
Mapper (filename, file contents):
    For each word in file contents:
        Emit (word, 1)
Reducer (word, values):
    Sum = 0
    For each value in values:
        Sum += value
    Emit (word, Sum)
The pseudocode of WordCount contains the mapper and the reducer. The mapper receives the filename and the file contents and loops over each word in the contents: the word is emitted with the value 1, i.e. the input is split into (word, 1) pairs. The reducer takes the mapper's output and produces a list of keys and values; here the keys are the words and the value is the total occurrence count of each word. It initializes the sum to zero, loops over the values, adds each value to the sum, and finally emits the aggregated count.
Figure 4: MapReduce Paradigm
Figure 4 displays the MapReduce prototype, comprising three nodes under Map. Different categories of data are represented by different colors in the Map. In essence, these nodes run on three separate machines, i.e. the commodity hardware, so the chunks of information are processed on discrete machines. The intermediate portion of the model is the shuffle, which is in fact quite complicated and hence a key aspect of MapReduce. How does the list come out of the Map function and get aggregated into the Reduce function? Is this an automated process or does code have to be written? In reality it is a mixture of both: whenever MapReduce jobs are written, default implementations of shuffle and sort are supplied. All of these mechanisms are tunable; one may accept the defaults or tune and change them according to one's own needs.
5.6 Example [21]
Consider the problem of counting the occurrences of each word in a large collection of documents. Take 2 input files and perform the MapReduce operation on them.
File 1: bonjour sun hello moon goodbye world
File 2: bonjour hello goodbye goodluck world earth
Map:
First map: <bonjour, 1> <sun, 1> <hello, 1> <moon, 1> <goodbye, 1> <world, 1>
Second map: <bonjour, 1> <hello, 1> <goodbye, 1> <goodluck, 1> <world, 1> <earth, 1>
Reduce:
<bonjour, 2> <sun, 1> <hello, 2> <moon, 1> <goodluck, 1> <goodbye, 2> <world, 2> <earth, 1>
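To make the map, shuffle/sort and reduce stages above concrete, the following is a minimal Python sketch run over the two example files. It only simulates the paradigm in a single process; it is not Hadoop's actual Java API, and the function names are illustrative.

from collections import defaultdict

def mapper(filename, contents):
    # Emit (word, 1) for every word in the file, as in the pseudocode above.
    for word in contents.split():
        yield (word, 1)

def shuffle(mapped_pairs):
    # Group all values by key; Hadoop performs this shuffle/sort step automatically.
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups.items()

def reducer(word, values):
    # Sum the occurrence counts for one word.
    return (word, sum(values))

if __name__ == "__main__":
    files = {
        "file1": "bonjour sun hello moon goodbye world",
        "file2": "bonjour hello goodbye goodluck world earth",
    }
    mapped = [pair for name, text in files.items() for pair in mapper(name, text)]
    reduced = [reducer(word, values) for word, values in shuffle(mapped)]
    for word, count in sorted(reduced):
        print(word, count)   # e.g. bonjour 2, goodbye 2, world 2, earth 1, ...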
5.7 Algorithm Steps [22]:
a) Map step: in this step, the map function takes key/value pairs of input data and transforms them into an intermediate output list of key/value pairs:
map (k1, v1) -> list (k2, v2)    (1)
b) Reduce step: in this step, after being shuffled and sorted, the intermediate key/value pairs are passed through a reduce function, where the values for each key are merged together to form a smaller set of values:
reduce (k2, list (v2)) -> list (v2)    (2)
6. Tabular Comparisons on Big Data and Hadoop
Table 1: Approach on Big Data
S.No | Author's name | Year | Approach on Big Data | Results / Conclusion
1 | Puneet Singh Duggal et al | 2013 | Big Data analysis tools, Hadoop, HDFS, MapReduce | Used for storing and managing Big Data. Helps organizations to understand customers & market better.
2 | Min Chen et al | 2014 | Cloud computing, Hadoop | Focus on 4 phases of the value chain of Big Data, i.e., data generation, data acquisition, data storage and data analysis.
3 | P. Sarada Devi et al | 2014 | Hadoop, extract transform load (ETL) tools like ELT, ELTL | Introduces the ETL process for taking business intelligence decisions in Hadoop.
4 | Poonam S. Patil et al | 2014 | RDBMS, NoSQL, Hadoop, MapReduce | Studies challenges in the analysis of big data. Gives flexibility to use any language to write algorithms.
5 | K. Arun et al | 2014 | Mining techniques like association rule learning, clustering, classification | Studies big data classifications for business needs. Helps in decision making in the business environment by implementing data mining techniques.
Table 2: Approach on Hadoop
S.No | Author's Name | Year | Approach on Hadoop | Results / Conclusion
1 | Mahesh Maurya et al | 2011 | MapReduce, Linkcount, WordCount | Experimental setup to count the number of words & links (double square brackets) available in a Wikipedia file. Results depend on data size & Hadoop cluster.
2 | Puneet Duggal et al | 2013 | HDFS, MapReduce, joins, indexing, clustering, classification | Studied MapReduce techniques implemented for Big Data analysis using HDFS.
3 | Shreyas Kudale et al | 2013 | HDFS, MapReduce, ETL, Associative Rule Mining | Hadoop is not an ETL tool but a platform that supports ETL processes in parallel.
4 | Poonam S. Patil et al | 2014 | HDFS, MapReduce, HBase, Pig, Hive, Yarn, k-means clustering algorithms | Parallelizes & distributes computations in a fault-tolerant way.
5 | Prajesh P. Anchalia et al | 2014 | - | Experimental setup for the MapReduce technique on the k-means clustering algorithm, which clustered over 10 million data points.
6 | Radhika M. Kharode et al | 2015 | HDFS, MapReduce, k-means algorithms, cloud computing | Combination of data mining & K-means clustering algorithm makes data management easier and quicker in the cloud computing model.
7. Conclusion
To summarize, recent literature on various architectures that help reduce big data to simple data, composed of immense knowledge in gigabytes or megabytes, has been surveyed. The concept of Hadoop and its use in big data have been analyzed, and its major components HDFS and MapReduce have been exemplified in detail. The MapReduce model is illustrated with its algorithm and an example so that readers can understand it clearly. Finally, applications of big data in real world scenarios have been elucidated.
Acknowledgements
Priyaneet Bhatia and Siddarth Gupta thank Mr. Deepak Kumar, Assistant Professor, Department of Information Technology, and Rajkumar Singh Rathore, Assistant Professor, Department of Computer Science and Engineering, Galgotia
College of Engineering and Technology, Greater Noida, for their constant support and guidance throughout the course of the whole survey.
References
[1] Shreyas Kudale, Advait Kulkarni and Leena A. Deshpande, "Predictive Analysis Using Hadoop: A Survey", International Journal of Innovative Research in Computer and Communication Engineering, Vol. 1, Issue 8, 2013, pp. 1868-1873.
[2] P. Sarada Devi, V. Visweswara Rao and K. Raghavender, "Emerging Technology Big Data Hadoop Over Datawarehousing, ETL", in International Conference (IRF), 2014, pp. 30-34.
[3] Jeffrey Dean and Sanjay Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters", Google, Inc., in USENIX Association OSDI '04: 6th Symposium on Operating Systems Design and Implementation, 2004, pp. 137-149.
[4] Jeffrey Dean and Sanjay Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters", in OSDI '04: Sixth Symposium on Operating System Design and Implementation, CS 739 Review Blog, 2004, http://pages.cs.wisc.edu/~swift/classes/cs739sp10/blog/2010/04/mapreduce_simplified_data_proc.html
[5] Munesh Kataria and Pooja Mittal, "Big Data and Hadoop with Components like Flume, Pig, Hive and Jaql", International Journal of Computer Science and Mobile Computing (IJCSMC), Vol. 3, Issue 7, 2014, pp. 759-765.
[6] Jaseena K.U. and Julie M. David, "Issues, Challenges, and Solutions: Big Data Mining", NeTCoM, CSIT, GRAPH-HOC, SPTM - 2014, 2014, pp. 131-140.
[7] K. Arun and L. Jabasheela, "Big Data: Review, Classification and Analysis Survey", International Journal of Innovative Research in Information Security (IJIRIS), Vol. 1, Issue 3, 2014, pp. 17-23.
[8] T. White, Hadoop: The Definitive Guide, O'Reilly Media, Yahoo! Press, 2009.
[9] Min Chen, Shiwen Mao and Yunhao Liu, "Big Data: A Survey", Springer, New York, 2014, pp. 171-209.
[10] Poonam S. Patil and Rajesh N. Phursule, "Survey Paper on Big Data Processing and Hadoop Components", International Journal of Science and Research (IJSR), Vol. 3, Issue 10, 2014, pp. 585-590.
[11] Leonardo Rocha, Fernando Vale, Elder Cirilo, Dárlinton Barbosa and Fernando Mourão, "A Framework for Migrating Relational Datasets to NoSQL", in International Conference on Computational Science, Elsevier, Vol. 51, 2015, pp. 2593-2602.
[12] Apache Hadoop, Wikipedia, https://en.wikipedia.org/wiki/Apache_Hadoop
[13] Ronald C. Taylor, "An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics", in Bioinformatics Open Source Conference (BOSC), 2010, pp. 1-6.
[14] Puneet Singh Duggal and Sanchita Paul, "Big Data Analysis: Challenges and Solutions", in International Conference on Cloud, Big Data and Trust, RGPV, 2013, pp. 269-276.
[15] Lynn Langit, Hadoop Fundamentals, ACM, 2015, Lynda.com, http://www.lynda.com/Hadooptutorials/Hadoop-Fundamentals/191942-2.html
[16] Shaokun Fan, Raymond Y.K. Lau and J. Leon Zhao, "Demystifying Big Data Analytics for Business Intelligence through the Lens of Marketing Mix", Elsevier, ScienceDirect, 2015, pp. 28-32.
[17] Mahesh Maurya and Sunita Mahajan, "Comparative analysis of MapReduce job by keeping data constant and varying cluster size technique", Elsevier, 2011, pp. 696-701.
[18] Dhole Poonam and Gunjal Baisa, "Survey Paper on Traditional Hadoop and Pipelined Map Reduce", International Journal of Computational Engineering Research (IJCER), Vol. 3, Issue 12, 2013, pp. 32-36.
[19] MapReduce, Apache Hadoop, Yahoo Developer Network, https://developer.yahoo.com/hadoop/tutorial/module4.html
[20] Mahesh Maurya and Sunita Mahajan, "Performance analysis of MapReduce programs on Hadoop Cluster", IEEE, World Congress on Information and Communication Technologies (WICT 2012), 2012, pp. 505-510.
[21] Vibhavari Chavan and Rajesh N. Phursule, "Survey Paper on Big Data", International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 5, No. 6, 2014, pp. 7932-7939.
[22] Radhika M. Kharode and Anuradha R. Deshmukh, "Study of Hadoop Distributed File system in Cloud Computing", International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE), Vol. 5, Issue 1, 2015, pp. 990-993.
First Author: Priyaneet Bhatia received her B.Tech in IT from RTU, Jaipur, Rajasthan, India in 2012. She is currently pursuing an M.Tech in CSE at Galgotia College of Engineering and Technology, UPTU, Greater Noida, Uttar Pradesh, India. She is working on the project "Big Data in Hadoop MapReduce".
Second Author: Siddarth Gupta received his B.Tech in CSE from UPTU, Lucknow, Uttar Pradesh, India in 2012. He completed his M.Tech in CSE at Galgotias University, Greater Noida, Uttar Pradesh, India in May 2015. He is currently working on "Big Data optimization in Hadoop".
Combination of PSO Algorithm and Naive Bayesian
Classification for Parkinson Disease Diagnosis
Navid Khozein Ghanad 1, Saheb Ahmadi 2
1 Islamic Azad University of Mashhad, Faculty of Engineering, Department of Computer, Mashhad, Iran
[email protected]
2 Islamic Azad University of Mashhad, Faculty of Engineering, Department of Computer, Mashhad, Iran
[email protected]
Abstract
Parkinson is a neurological disease which quickly affects the human motor organs. Early diagnosis of this disease is very important for its prevention. Using optimum training data and omitting noisy training data will increase classification accuracy. In this paper, a new model based on the combination of the PSO algorithm and Naive Bayesian Classification is presented for diagnosing Parkinson disease, in which the optimum training data are selected by the PSO algorithm for the Naive Bayesian Classification. According to the obtained results, Parkinson disease diagnosis accuracy reaches 97.95% using the presented method, which is indicative of the superiority of this method over previous models of Parkinson disease diagnosis.
Keywords: Parkinson disease diagnosis, Naive Bayesian Classification, PSO algorithm
1. Introduction
Parkinson disease is one of the nervous system diseases, causing quivering and loss of motor skills. Usually this disease occurs more in people over 60 years old, and 1 out of 100 individuals suffers from it. However, it is also observed in younger people: about 5 to 10% of patients are at younger ages. After Alzheimer, Parkinson is the second most destructive disease of the nerves, and its cause has not been recognized yet. In the first stages, this disease has significantly low symptoms [1]. It is claimed that 90% of Parkinson patients can be recognized through voice disorders [2]. Parkinson patients have a set of voice disorders by which their disease can be diagnosed. These voice disorders have indices whose measurement can be used for diagnosing the disease [3][4]. In previous studies, the problems of Parkinson disease diagnosis were considered. Using SVM Classification with a Gaussian kernel, the obtained result was 91.4% at best [4]. In order to diagnose Parkinson disease, a new non-linear model based on Dirichlet process mixtures was presented and compared with SVM Classification and decision trees; at best, the obtained result was 87.7% [5]. In [6], different methods were used to diagnose Parkinson disease, in which the best result pertained to classification using a neural network with 92.9% accuracy. In [7], the best features were selected for SVM Classification, through which 92.7% accuracy could be obtained at best. In [8], using a sampling strategy and an improved multi-class multi-kernel relevance vector machine method, 89.47% accuracy could be achieved. In [9], the combination of a Genetic Algorithm and the Expectation Maximization Algorithm could bring 93.01% accuracy for Parkinson disease diagnosis. In [10], using fuzzy entropy measures, the best feature was selected for classification and thereby 85.03% accuracy could be achieved for
classification. In [11], the combination of a non-linear fuzzy method and SVM Classification could detect the speaker's gender with 93.47% accuracy. In [12], the combination of RF and CFS algorithms could diagnose Parkinson disease with 87.01% accuracy. In [13], using a parallel feed-forward neural network, Parkinson disease was diagnosed with 91.20% accuracy. In [14], with improvements in OPF Classification, Parkinson disease was diagnosed with 84.01% accuracy. In [15], a fuzzy combination with the Nearest Neighbor Algorithm could achieve 96.07% accuracy. In [16] and [17], by focusing on voice analysis, the authors attempted to gain 94% accuracy. In the previously presented methods, attempts have been made to offer the best classification methods, and no attention has been paid to the quality of the training data. In this paper, we present a new model based on the combination of the PSO algorithm and Naive Bayesian Classification for diagnosing Parkinson disease. This algorithm selects the best training data for the Naive Bayesian Classification, so that non-optimal training data are not used. Due to using optimum training data and not using non-optimal training data, the new model presented in this paper increases the classification accuracy of Parkinson disease diagnosis to 97.95%.
First we consider the Naive Bayesian Classification and the PSO algorithm. Then the presented algorithm, the results and the references are given.
1.1. Naive Bayesian Classification
One very practical Bayesian learning method is the naive Bayesian learner, generally called the Naive Bayesian Classification method. In some contexts, it has been shown that its efficiency is analogous to that of methods such as neural networks and decision trees.
Naive Bayesian classification can be applied in problems in which each sample x is described by a set of trait values and the objective function f(x) takes values from a set V. The Bayesian method for classifying a new sample detects the most probable class, or target value vMAP, given the trait values <a1, a2, ..., an> that describe the new sample:
vMAP = argmax_{vj∈V} P(vj | a1, a2, ..., an)    (1)
Using Bayes' theorem, term (1) can be rewritten as term (2):
vMAP = argmax_{vj∈V} P(a1, a2, ..., an | vj) P(vj) / P(a1, a2, ..., an) = argmax_{vj∈V} P(a1, a2, ..., an | vj) P(vj)    (2)
Now, using the training data, we attempt to estimate the two terms of the above equation. Computing the repetition rate of vj in the training data is easy. However, computing the different terms P(a1, a2, ..., an | vj) this way will not be acceptable unless we have a huge amount of training data available. The problem is that the number of these terms is equal to the number of possible samples multiplied by the number of objective function values; therefore, we would have to observe each sample many times in order to obtain an appropriate estimation.
The naive assumption is that the probability of observing the traits a1, a2, ..., an equals the multiplication of the separate probabilities of each trait. Replacing this in Equ. 2 yields the Naive Bayesian Classification, i.e. Equ. 3:
vNB = argmax_{vj∈V} P(vj) ∏i P(ai | vj)    (3)
where vNB is the Naive Bayesian Classification output for the objective function. Note that the number of terms P(ai | vj) that must be computed in this method is equal to the number of traits multiplied by the number of output classes of the objective function, which is much lower than the number of terms P(a1, a2, ..., an | vj).
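As a concrete illustration of Equ. 3, the sketch below estimates P(vj) and P(ai | vj) from repetition rates in a toy set of discrete traits and picks the class with the highest product. It is only an illustration of the classifier described here; the toy traits and labels are assumptions and this is not the authors' implementation for the continuous voice features.

from collections import Counter, defaultdict

def train_naive_bayes(samples, labels):
    # Estimate P(v_j) and P(a_i | v_j) from repetition rates in the training data.
    class_counts = Counter(labels)
    prior = {v: c / len(labels) for v, c in class_counts.items()}
    cond = defaultdict(lambda: defaultdict(Counter))   # cond[v][i][a] = count
    for sample, v in zip(samples, labels):
        for i, a in enumerate(sample):
            cond[v][i][a] += 1
    return prior, cond, class_counts

def classify(sample, prior, cond, class_counts):
    # v_NB = argmax_v P(v) * prod_i P(a_i | v)   (Equ. 3)
    best_v, best_p = None, -1.0
    for v, p_v in prior.items():
        p = p_v
        for i, a in enumerate(sample):
            p *= cond[v][i][a] / class_counts[v]
        if p > best_p:
            best_v, best_p = v, p
    return best_v

# toy usage: traits stand in for discretized voice indices, label 1 = Parkinson, 0 = healthy
X = [("high", "low"), ("high", "high"), ("low", "low"), ("low", "high")]
y = [1, 1, 0, 0]
model = train_naive_bayes(X, y)
print(classify(("high", "low"), *model))   # -> 1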
We conclude that naive Bayesian learning attempts to estimate the different values P(vj) and P(ai | vj) using their repetition rates in the training data. This set of estimations corresponds to the learnt hypothesis, which is then used for classifying new samples through the above formula. When the conditional independence assumption of the Naive Bayesian Classification method holds, the naive Bayesian class is equal to the MAP class.
1.2. PSO algorithm
Each particle searches for the optimum point. Each particle is moving, thus it has a speed. PSO is based on the particles' motion and intelligence. Each particle, at every stage, remembers the state that has produced the best result.
A particle's motion depends on 3 factors:
1- The particle's current location
2- The particle's best location so far (pbest)
3- The best location that the whole set of particles has reached so far (gbest)
In the classical PSO algorithm, each particle i has two main parts: its current location Xi and its current speed Vi. In each iteration, the particle's change of location in the search space is based on its current location and its updated speed. Particles' speeds are updated according to three main factors: the particle's current speed, the particle's best experienced location (individual knowledge), and the location of the best particle of the group (social knowledge), as in Equ. 4:
Vi+1 = K (w Vi + C1i (pbesti - Xi) + C2i (gbesti - Xi))    (4)
where w is the ith particle's inertia coefficient for moving with the previous speed, and C1i and C2i are respectively the individual and group learning coefficients of the ith particle, which are selected randomly in the range [0, 2] for the sake of maintaining the algorithm's probabilistic property. Each particle's next position is obtained by Equ. 5:
Xi+1 = Xi + Vi+1    (5)
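The two update rules can be sketched as a small single-variable PSO, as below. The clamping factor, inertia weight and coefficient range mirror the values later given in Table 1, but the code is only an illustrative sketch under those assumptions, not the authors' implementation.

import random

def pso_minimize(fitness, n_particles=50, low=2, high=46, iterations=100, k=2.0, w=0.4):
    # One-dimensional PSO: each particle i keeps a position X_i, a velocity V_i,
    # its personal best (pbest), and the swarm shares a global best (gbest).
    X = [random.uniform(low, high) for _ in range(n_particles)]
    V = [0.0] * n_particles
    pbest = X[:]
    gbest = min(X, key=fitness)
    for _ in range(iterations):
        for i in range(n_particles):
            c1, c2 = random.uniform(0, 2), random.uniform(0, 2)   # learning coefficients in [0, 2]
            # Equ. 4: velocity from current speed, individual and social knowledge
            V[i] = k * (w * V[i] + c1 * (pbest[i] - X[i]) + c2 * (gbest - X[i]))
            # Equ. 5: position update, kept inside the search range
            X[i] = min(max(X[i] + V[i], low), high)
            if fitness(X[i]) < fitness(pbest[i]):
                pbest[i] = X[i]
            if fitness(X[i]) < fitness(gbest):
                gbest = X[i]
    return gbest

print(pso_minimize(lambda x: (x - 10) ** 2))   # converges near 10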
2. Considering the presented algorithm
In the introduction we noted that different methods have been presented for Parkinson disease diagnosis, but no attention has been paid to the quality of the training data. In this paper, we attempt to select the best training data for the Naive Bayesian Classification using the PSO algorithm. The selection of the best training data is the most important part of training the Naive Bayesian Classification. This is due to the fact that, in our studies, we observed that adding or omitting two training data items in the whole training set changed the disease diagnosis accuracy by 4 to 5%. The suggested method is introduced in detail in the following.
The diagram below shows the general procedure of the newly presented algorithm.
Start -> Selecting the best training data and the intended parameters for Naive Bayesian training using the PSO algorithm -> Naive Bayesian Classification training using the best training data and forming the Parkinson disease diagnosis model -> Parkinson disease diagnosis through the formed model -> End
Fig1. The procedure of the suggested method for Parkinson disease diagnosis
The general procedure is very simple. First, the best data for the Naive Bayesian Classification are selected using the PSO algorithm, and the Naive Bayesian Classification is trained with the optimum training data. Thereby, the Parkinson disease diagnosis model is formed. After the formation of the intended model, Parkinson disease is diagnosed and identified.
The PSO algorithm fitness function for the selection of the optimum training data is expressed in Equ. 6:
Fitness = Σi | yi - ŷi |    (6)
where yi is the real value of the test data, and ŷi is the value that has been determined using the Naive Bayesian Classification.
Values of the primary parameters of the PSO algorithm for the selection of the optimum training data are presented in Table 1.
Table1. Primary values given to PSO algorithm parameters
No. | Parameter title | The used parameter value
1 | Bird in swarm | 50
2 | Number of Variable | 1
3 | Min and Max Range | 2-46
4 | Availability type | Min
5 | Velocity clamping factor | 2
6 | Cognitive constant | 2
7 | Social constant | 2
8 | min of inertia weight | 0.4
9 | max of inertia weight | 0.4
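A rough sketch of how the fitness of Equ. 6 could drive the search for a small training subset is shown below. It substitutes a simple random search for the PSO particles and uses scikit-learn's GaussianNB for the Naive Bayesian Classification, so the helper names and the synthetic 22-feature data are illustrative assumptions rather than the authors' exact procedure.

import numpy as np
from sklearn.naive_bayes import GaussianNB

def fitness(train_idx, X_train, y_train, X_test, y_test):
    # Equ. 6: sum of absolute differences between real test labels and NB predictions.
    model = GaussianNB().fit(X_train[train_idx], y_train[train_idx])
    return np.abs(y_test - model.predict(X_test)).sum()

def select_training_subset(X_train, y_train, X_test, y_test,
                           subset_size=8, iterations=200, seed=0):
    # Stochastic search standing in for the PSO particles of Table 1:
    # each candidate is a set of training indices, scored by the fitness above.
    rng = np.random.default_rng(seed)
    best_idx, best_fit = None, np.inf
    for _ in range(iterations):
        candidate = rng.choice(len(X_train), size=subset_size, replace=False)
        f = fitness(candidate, X_train, y_train, X_test, y_test)
        if f < best_fit:
            best_idx, best_fit = candidate, f
    return best_idx, best_fit

# toy usage with synthetic data shaped like the 22-feature UCI voice set
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 22)), rng.normal(1, 1, (60, 22))])
y = np.array([0] * 60 + [1] * 60)
order = rng.permutation(120)
X, y = X[order], y[order]
idx, fit = select_training_subset(X[:80], y[:80], X[80:], y[80:])
print(len(idx), fit)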
3. Experiments and results
3.1. Dataset descriptions
In this article, we used the Parkinson disease dataset belonging to UCI, which is accessible through the link in [18]. The number of items in this dataset is 197, and it has 23 features. The features used in Parkinson disease diagnosis are presented in Table 2:
Table2. Features used in Parkinson disease diagnosis
1 | MDVP: Fo (Hz) | Average vocal fundamental frequency
2 | MDVP: Fhi (Hz) | Maximum vocal fundamental frequency
3 | MDVP: Flo (Hz) | Minimum vocal fundamental frequency
4 | MDVP: Jitter (%) | Several measures of variation in fundamental frequency
5 | MDVP: Jitter (Abs) |
6 | MDVP: RAP |
7 | MDVP: PPQ |
8 | Jitter: DDP |
9 | MDVP: Shimmer |
10 | MDVP: Shimmer (dB) |
11 | Shimmer: APQ3 |
12 | Shimmer: APQ5 |
13 | MDVP: APQ |
14 | Shimmer: DDA |
15 | NHR | Two measures of ratio of noise to tonal components in the voice
16 | HNR |
17 | RPDE | Two nonlinear dynamical complexity measures
18 | DFA |
19 | Spread 1 |
20 | Spread 2 |
21 | D2 |
22 | PPE |
3.2. The optimum training data selected for Naive Bayesian Classification using PSO algorithm
As stated in the previous sections, selecting the best training data is the most important part of the Naive Bayesian Classification for increasing the accuracy of Parkinson disease diagnosis. In Table 3, the number of the optimum training data items selected by the PSO algorithm can be observed:
Table3. The accuracy of the Parkinson disease diagnosis class using the optimum training data selected by PSO algorithm
No. | The number of the optimum training data selected for Naive Bayesian Classification using PSO algorithm | Classification accuracy
1 | 8 | 97.95%
2 | 10 | 96.93%
3 | 12 | 97.95%
In Table 3, some of the optimum training data selected using the PSO algorithm, along with the classification accuracy obtained through those optimum training data, can be found. As can be seen in No. 2 of Table 3, by adding two training data items, classification accuracy decreased by 1.02%. Therefore, it can be concluded that increasing the training data gives no guarantee that classification accuracy will increase. The important point in increasing the classification accuracy is the use of optimum training data and the avoidance of noisy training data, which decrease the classification accuracy. We increased the number of training data respectively to 50, 60, 70, 80 and 90 items. The accuracy obtained with these larger numbers of training data can be observed in Table 4.
Table4. The relationship between Parkinson disease diagnosis accuracy and training data increase
No. | The number of the training data | Classification accuracy
1 | 50 | 88.79%
2 | 60 | 77.55%
3 | 70 | 76.53%
4 | 80 | 69.38%
5 | 90 | 67.54%
In Table 4, we can see that increasing the training data for the Naive Bayesian Classification decreases the classification accuracy.
According to the optimum training data selected by the PSO algorithm, it is concluded that with only 8 training data items the highest possible classification accuracy can be obtained for Parkinson disease diagnosis.
In Table 5, the result of the algorithm presented in this paper is compared with the results of previous works:
Table5. Comparison of the suggested method's accuracy and previous models of Parkinson disease diagnosis
No. | Presented works | Result and accuracy of the presented model
1 | [9] | 93.01%
2 | [11] | 93.01%
3 | [13] | 91.20%
4 | [15] | 96.01%
5 | [16][17] | 94%
6 | Proposed Algorithm | 97.95%
According to the comparison made between the suggested method and the previous models of Parkinson disease diagnosis in Table 5, the suggested method is superior to the previous models. Based on this comparison it can be concluded that, in order to increase classification accuracy, it is not always necessary to present a new classification method; rather, by selecting the best training data and omitting the inappropriate training data, classification accuracy can be significantly increased.
4. Conclusion
In this paper, we suggested a new model for Parkinson disease diagnosis based on the combination of the PSO algorithm and Naive Bayesian Classification. Using the PSO algorithm, the best training data were selected for the Naive Bayesian Classification. Because the presented algorithm selects the best training data and avoids choosing those that cause a drop in classification accuracy, it raises the classification accuracy of Parkinson disease diagnosis to 97.95%. This classification accuracy shows the superiority of the suggested method over previous models of Parkinson disease diagnosis. Also, according to the results obtained in this paper, it should be remembered that in order to increase classification accuracy it is not always necessary to present a new classification method; rather, by selecting the best training data and omitting the inappropriate training data, classification accuracy can be significantly increased.
References
[1] Singh, N., Pillay, V., & Choonara, Y. E. (2007). Advances in the treatment of Parkinson's disease. Progress in Neurobiology, 81, 29-44.
[2] Ho, A. K., Iansek, R., Marigliani, C., Bradshaw, J. L., & Gates, S. (1998). Speech impairment in a large sample of patients with Parkinson's disease. Behavioural Neurology, 11, 131-138.
[3] Little, M. A., McSharry, P. E., Hunter, E. J., Spielman, J., & Ramig, L. O. (2009). Suitability of dysphonia measurements for telemonitoring of Parkinson's disease. IEEE Transactions on Biomedical Engineering, 56(4), 1015-1022.
[4] Rahn, D. A., Chou, M., Jiang, J. J., & Zhang, Y. (2007). Phonatory impairment in Parkinson's disease: evidence from nonlinear dynamic analysis and perturbation analysis. Journal of Voice, 21, 64-71.
[5] Shahbaba, B., & Neal, R. (2009). Nonlinear models using Dirichlet process mixtures. The Journal of Machine Learning Research, 10, 1829-1850.
[6] Das, R. (2010). A comparison of multiple classification methods for diagnosis of Parkinson disease. Expert Systems with Applications, 37, 1568-1572.
[7] Sakar, C. O., & Kursun, O. (2010). Telediagnosis of Parkinson's disease using measurements of dysphonia. Journal of Medical Systems, 34, 1-9.
[8] Psorakis, I., Damoulas, T., & Girolami, M. A. (2010). Multiclass relevance vector machines: sparsity and accuracy. Neural Networks, IEEE Transactions on, 21, 1588-1598.
[9] Guo, P. F., Bhattacharya, P., & Kharma, N. (2010). Advances in detecting Parkinson's disease. Medical Biometrics, 306-314.
[10] Luukka, P. (2011). Feature selection using fuzzy
entropy measures with similarity classifier. Expert
Systems with Applications, 38, 4600–4607.
[11] Li, D. C., Liu, C. W., & Hu, S. C. (2011). A fuzzy-based data transformation for feature extraction to increase classification performance with small medical data sets. Artificial Intelligence in Medicine, 52, 45-52.
[12] Ozcift, A., &Gulten, A. (2011). Classifier ensemble
construction with rotation forest to improve medical
diagnosis performance of machine learning algorithms.
Comput Methods Programs Biomed, 104, 443–451.
[13] Åström, F., & Koker, R. (2011). A parallel neural
network approach to prediction of Parkinson’s Disease.
Expert Systems with Applications, 38, 12470–12474.
[14] Spadoto, A. A., Guido, R. C., Carnevali, F. L.,
Pagnin, A. F., Falcao, A. X., & Papa, J. P. (2011).
Improving Parkinson’s disease identification through
evolutionarybased feature selection. In Engineering in
Medicine and Biology Society, EMBC, 2011 Annual
International Conference of the IEEE (pp. 7857–7860).
[15] Hui-Ling Chen a, Chang-Cheng Huang a, Xin-Gang
Yu b, Xin Xu c, Xin Sun d, Gang Wang d, Su-Jing
Wang(2013). An efficient diagnosis system for detection
of Parkinson’s disease using fuzzy k-nearest neighbor
approach. In Expert Systems with Applications 40 (2013)
263–271
[16] Yoneyama, M., Kurihara, Y., Watanabe, K., & Mitoma, H. Accelerometry-Based Gait Analysis and Its Application to Parkinson's Disease Assessment - Part 1: Detection of Stride Event. (Volume 22, Issue 3), pp. 613-622, May 2014.
[17] Yoneyama, M., Kurihara, Y., Watanabe, K., & Mitoma, H. Accelerometry-Based Gait Analysis and Its Application to Parkinson's Disease Assessment - Part 2: New Measure for Quantifying Walking Behavior. (Volume 21, Issue 6), pp. 999-1005, Nov. 2013.
[18] UCI machine learning repository. (http://archive.ics.uci.edu/ml/datasets/Parkinsons)
Automatic Classification for Vietnamese News
Phan Thi Ha1, Nguyen Quynh Chi2
1
Posts and Telecommunications Institute of Technology
Hanoi, Vietnam
[email protected]
2
Posts and Telecommunications Institute of Technology
Hanoi, Vietnam
[email protected]
Abstract
This paper proposes an automatic framework to classify Vietnamese news from news sites on the Internet. In this proposed framework, the main content of Vietnamese news is extracted automatically by applying the improved-performance extraction method from [1]. This information is then classified by using two machine learning methods: support vector machines and the naïve Bayesian method. Our experiments, implemented with Vietnamese news extracted from some sites, showed that the proposed classification framework gives acceptable results with a rather high accuracy, which allows applying it to real information systems.
Keywords: news classification; automatic extraction; support vector machine, naïve bayesian networks
1. Introduction
In modern life, the need to access and use information is essential for human activities, and the role of information is clear in work, education, business and research. In Vietnam, with the explosion of information technology in recent years, reading newspapers and searching for information on the Internet has become a routine for many people. Because of the many advantages of Vietnamese documents on the Internet, such as compactness, long-time storage, convenient exchange over the Internet and easy modification, the number of documents has been increasing dramatically. On the other hand, communication via printed books has gradually become obsolete and the storage time of such documents can be limited.
From these facts arises the requirement to build a system for storing electronic documents to meet the needs of academic research based on the rich Vietnamese data sources on the Web. However, to use and search these massive amounts of data and to filter the text, or the parts of the text containing the data, without losing the complexity of natural language, we cannot manually classify text by reading and sorting each topic. An urgent issue to solve is how to automatically classify documents on Vietnamese sites. Basically, the sites contain pure text information, and processing and classifying documents by topic has attracted research interest worldwide [2]; methods of text classification are therefore built to strongly support Internet users in finding information.
This paper proposes an automatic framework to classify Vietnamese news from electronic newspapers on the Internet into the Technology, Education, Business, Law and Sports fields, in order to build archives which serve the construction of an Internet electronic library of Vietnam. In this proposed framework, the main content of Vietnamese news is extracted automatically by applying the improved-performance extraction method from [1]. This information is then classified by using two machine learning methods: support vector machines and the naïve Bayesian method. Our experiments, implemented with Vietnamese news extracted from some sites, showed that the proposed classification framework gives acceptable results with a rather high accuracy, which allows applying it to real information systems.
The rest of the paper is presented as follows. In section 2, methods of news classification based on automatically extracted contents of Web pages on the Internet are considered. The main content of the automatic classification method is presented in section 3. Our experiments and results are analyzed and evaluated in section 4. The conclusions and references form the last section.
2. Related works and motivation
In recent years, natural language processing and document content classification have produced a lot of works with encouraging results from the research community inside and outside Vietnam.
The relevant works outside Vietnam have been published widely. In [3], a clustering algorithm was used to generate sample data, focusing on optimization for active machine learning. An author at the University of Dortmund,
Germany, presented that the use and improvement of the support vector machine (SVM) technique has been highly effective in text classification [4]. The stages of a text classification system, including indexing text documents using Latent Semantic Indexing (LSI), learning text classification using SVM, and boosting and evaluating text categorization, are shown in [5]. "Text categorization based on regularized linear classification methods" [6] focused on methods based on linear least squares fitting, logistic regression and support vector machines (SVM). Most researchers have focused on machine learning for foreign languages, English in particular; when applied to Vietnamese documents, the results may not reach the desired accuracy.
In recent years, extracting the content of web sites has been researched by many groups, with rather good results [16, 17, 18, 19, 1]. These approaches include: HTML code analysis; pattern framework comparison; and natural language processing. The method of pattern framework comparison extracts information from two sites.
pattern framework extracts information from two sites.
This information is then aligned together based on the
foundation of pattern recognition applied by Tran Nhat
Quang [18]. This author extracted content on web sites
aiming to provide information on administration web sites.
The method of natural language processing considers the
dependence of syntax and semantics to identify relevant
information and extract needed information for other
processing steps. This method is used for extracting
information on the web page containing text following
rules of grammar. HTML method accesses directly content
of the web page displayed as HTML then performs the
dissection based on two ways. The first is based on
Document Object Model tree structure (DOM) of each
HTML page, data specification is then built automatically
based on the dissected content. The second is based on
statistical density in web documents. Then, dissect the
content, data obtained will become independent from the
source sites, it is stored and reused for different purposes.
The work of Vietnamese text categorization can be
mentioned by Pham Tran Vu et al. They talked about how
to compute the similarity of text based on three aspects:
the text, user and the association with any other person or
not [7]. The authors applied this technique to compute the
similarity of text compared with training dataset. Their
subsequent work referred matching method with profiles
based on semantic analysis (LSA). The method presented
in [8] was without the use of ontology but still had the
ability to compare relations on semantics based on the
statistical methods. The research works in Vietnam
mentioned have certain advantages but the scope of their
text processing is too wide, barely dedicated for a
particular kind of text. Moreover, the document content
from Internet is not extracted automatically by the method
proposed in [1]. Therefore, the precision of classification
is not consistent and difficult to evaluate in real settings.
To automatically extract text content from the web with
various sources, across multiple sites with different
layouts, the authors [1] studied a method to extract web
pages content based on HTML tags density statistics. With
the current research stated above, we would like to
propose a framework for automatic classification of news
including Technology, Education, business, Law, Sports
fields. We use the method in [1] which is presented in the
next section.
To extract document content on the Internet, we must
mention to the field of natural language processing – a key
field in science and technology. This field includes series
of Internet-related applications such as: extracting
information on the Web, text mining, semantic web, text
summarization, text classification... Effective exploitation
of information sources on the Web has spurred the
development of applications in the natural language
processing. The majority of the sites is encoded in the
format of Hyper Text Mark-up Language (HTML), in
which each Web page’s HTML file contains a lot of extra
information apart from main content such as pop-up
advertisements, links to other pages sponsors, developers,
copy right notices, warnings…Cleaning the text input here
is considered as the process of determining content of the
sites and removing parts not related. This process is also
known as dissection or web content extraction (WCE).
However, structured websites change frequently leading to
extracting content from the sites becomes more and more
difficult [9]. There are a lot of works for web content
extraction, which have been published with many different
applications [10, 11, 12, 13, 14, 15].
3. Automatic News Classification
3.1 Vietnamese web content extraction for classification
The authors have automatically collected news sites under
5 fields from the Internet and used content dissection
method based on word density and tag density statistics of
the site. The extracting text algorithm was improved from
the algorithm proposed by Aidan Finn [11] and the results
were rather good.
Aidan Finn proposed the main idea of the BTE algorithm as follows: identify two points i and j such that the number of HTML tag-tokens below i and above j is maximal and the number of text-tokens between i and j is maximal. The extraction result is the text between positions i and j.
Aidan Finn did experiments using the BTE algorithm to extract text content for textual content classification in
digital libraries, mainly collecting news articles in the fields of sports and politics from news websites. This algorithm has the advantage that the dissection does not depend on a given threshold or language, but it is not suitable for some Vietnamese news sites containing advanced HTML tags.
By observing several different Vietnamese news sites, the paper [1] showed that news sites in general share a main characteristic: in each page's HTML code, the text body contains few tags and many text signs. The authors improved the BTE algorithm (by adding step 0) to extract the text body from Vietnamese news sites in order to build a Vietnamese vocabulary research corpus.
Construction algorithm: The experimental observations show that the text body of Web pages always belongs to a parent tag located inside the pair (<body> ... </body>), in which HTML tags or scripts such as <img> <input> <select> <option> ... are embedded. In addition, some content that is not related to the text body appears inside advanced HTML tags like (<style> ... </style>, <script> ... </script>, <a>...</a>, ...). Therefore, the initial step is to remove the HTML code which certainly does not contain the content of the web page (Step 0). Then binary encoding of the remaining content is performed (HTML tags corresponding to 1, text signs corresponding to -1) and the totals of identical adjacent values are computed. Next, the segment with the most negative values (-1) inside and the fewest positive values (1) is extracted. The complexity of this algorithm is O(n2).
Here are the main steps of the algorithm:
Step 0: Each site corresponds to one HTML file. Clean the HTML code by removing tags and HTML code that does not contain information relating to the content, such as the tags <input>, <script>, <img>, <style>, <marquee>, <!--...-->, <a> ... and the content outside the HTML tags <body>, </body> of each web page. The HTML tag library is collected from the web site addresses [22, 23].
Step 1: For the remaining part of the web site, build two arrays, binary_tokens[] and tokens[]. Binary_tokens[] contains 1 and -1:
- Binary_tokens[i] = 1 corresponds to the ith element being an HTML tag. This includes beginning tags <?...>, for example <html>, <p color=red>, and end tags </?...>, for example </html>, </p>.
- Binary_tokens[i] = -1 corresponds to the ith element being a text sign.
Tokens[] is an array whose elements hold the value of the text sign or tag corresponding to each element of binary_tokens[]. For example, at position 23, binary_tokens[23] = 1 and tokens[23] = <td...>.
Merge adjacent elements which have the same value in the binary_tokens[] array to make a single element in the encode[] array, which significantly reduces the size of the binary_tokens[] array. The complexity of this step is O(n).
Step 2: Locate two points i, j in the binary_tokens[] array obtained in step 1 so that the total number of elements with value -1 inside [i, j] plus the number of elements with value 1 outside [i, j] is the largest. Perform data dissection in the range [i, j] and remove the HTML tags. The complexity of this step is O(n3).
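Assuming the HTML has already been cleaned as in Step 0, the following Python sketch illustrates the binary encoding of Step 1 and the segment search of Step 2 (without the merging optimization). The tokenizer regex and the scoring loop are simplified assumptions, not the authors' code.

import re

TAG = 1
TEXT = -1

def encode(html):
    # Step 1: split into HTML tags and text signs, then binary-encode them.
    tokens = re.findall(r"<[^>]+>|[^<\s]+", html)
    binary = [TAG if t.startswith("<") else TEXT for t in tokens]
    return tokens, binary

def extract_body(html):
    # Step 2: choose i, j so that the count of text tokens (-1) inside [i, j]
    # plus the count of tag tokens (1) outside [i, j] is maximized.
    tokens, binary = encode(html)
    n = len(tokens)
    total_tags = binary.count(TAG)
    best, best_ij = -1, (0, 0)
    for i in range(n):
        inside_text = inside_tags = 0
        for j in range(i, n):
            if binary[j] == TEXT:
                inside_text += 1
            else:
                inside_tags += 1
            score = inside_text + (total_tags - inside_tags)
            if score > best:
                best, best_ij = score, (i, j)
    i, j = best_ij
    return " ".join(t for t in tokens[i:j + 1] if not t.startswith("<"))

print(extract_body("<html><body><div>menu</div><p>main article text body here</p>"
                   "<div>footer links</div></body></html>"))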
The improved BTE algorithm was tested and compared with the original algorithm proposed by Aidan Finn on the same number of sites in the test set. The experiments and results are as follows:
First run: run Aidan Finn's BTE algorithm on the HTML files obtained from the URLs.
Second run: run the improved BTE on the same HTML files.
The ratio of the needed text body over the total extracted text for 3 types of sites popular with users is shown in Table 1, in which each type of collected site contributes 100 files.
Table 1. Comparing the ratio of text body needed to take / total extracted text
Type of site | Improved algorithm | Aidan Finn's algorithm
Dantri.com.vn | 99.02% | 47.12%
Vietnamnet.vn | 99.67% | 65.71%
VnExpress.net | 99.00% | 48.87%
3.2 News web content classification
We use a machine learning method called the support vector machine (SVM) to train a 5-class classifier for news classification on the web. This is a multi-class classification problem. The idea for solving a multi-class classification problem is to convert it into two-class problems by constructing multiple classifiers. The common multi-class classification strategies are One-Against-One (OAO) and One-Against-Rest (OAR).
With the OAR strategy (Fig 1), we use K-1 binary classifiers to build a K-class classifier: the K-class classification problem is converted into K-1 two-class classification problems. In particular, the ith two-class classifier is built on the ith class against all the remaining classes. The ith decision function separating the ith class from the remaining classes has the form
yi(x) = wi·x + bi
The hyper-plane yi(x) = 0 forms the optimal separating hyper-plane; the support vectors of class i satisfy yi(x) = 1 and the support vectors of the remaining classes satisfy yi(x) = -1.
If a data vector x satisfies the condition yi(x) > 0 for only one i, x is assigned to the ith class.
Fig 1: OAR Strategy (a cascade of binary classifiers separating Technology, then Education, then Business from the remaining classes, with Sports as the final remainder)
The OAO strategy (Fig 2) uses K*(K-1)/2 binary classifiers constructed by pairing two classes, so this strategy is also referred to as pairwise; a method of combining the outputs of these pairwise classifiers determines the final classification result. The number of classifiers never exceeds K*(K-1)/2.
Fig 2: OAO Strategy (one binary classifier per pair of topics: Tech-Edu, Tech-Bus, Tech-Spo, Bus-Edu, Edu-Spo, Bus-Spo)
Compared with the OAR strategy, the advantage of the OAO strategy is not only to reduce the region which cannot be classified, but also to increase the accuracy of the classification. The OAR strategy needs only K-1 classifiers while the OAO strategy needs K*(K-1)/2 classifiers. However, the number of training records for each classifier in OAO is smaller than in OAR and each classification is also simpler, so the OAO strategy has higher accuracy at a build cost equivalent to the OAR strategy. The decision function separating class i from class j in the OAO strategy is:
yij(x) = wij·x + bij
However, both strategies lead to a vague region in the classification (Fig 3). We can avoid this problem by building the K-class classifier from K linear functions of the form
yk(x) = wk·x + bk
and assigning a point x to the class Ck if yk(x) > yj(x) for every j ≠ k.
Fig 3: The vague region in subclass
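For orientation only, the OAO and OAR strategies described above correspond to the standard one-vs-one and one-vs-rest wrappers in scikit-learn. The sketch below uses synthetic 5-class data as a stand-in for the TF.IDF news vectors of the five topics (cf. Table 2 below); it is not the SVMMulticlass setup used in the paper.

from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.datasets import make_classification

# synthetic stand-in for the TF.IDF news vectors: 5 classes, 50 features
X, y = make_classification(n_samples=500, n_features=50, n_informative=20,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

ovo = OneVsOneClassifier(LinearSVC())    # K*(K-1)/2 = 10 pairwise (OAO) classifiers
ovr = OneVsRestClassifier(LinearSVC())   # K = 5 one-against-rest classifiers in scikit-learn's formulation

print(ovo.fit(X[:400], y[:400]).score(X[400:], y[400:]))
print(ovr.fit(X[:400], y[:400]).score(X[400:], y[400:]))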
Table 2. Category label of each topic
Topic | Label
Technology | 1
Education | 2
Business | 3
Laws | 4
Sports | 5
The specific steps in the training phase are the following:
Step 1: Download the HTML pages corresponding to the news page links, filter and retrieve the content saved in plain text format (file.txt), and remove documents which do not meet the size requirement (1KB) or have duplicate contents.
Step 2: Separate words (integrating VnTokenize) according to [20], remove the stop words and select features [4] (the selection of features is presented in detail in section 4).
Step 3: Represent a news article as a vector as follows:
<classi> <label1>:<value1> <label2>:<value2> ... <labeln>:<valuen>
Where:
- Classi is the category label of each topic, with i = 1..5 (Table 2).
- Labelj is the index of the jth feature word in the feature space which may appear in the training news, with j = 1..n.
- Valuej is the weight of feature indexj, calculated by the TF.IDF formula; if valuej = 0, that feature is not written. This format complies with the input data format of the program SVMMulticlass [21].
Step 4: Train the classification model based on the multi-class SVM algorithm applying the OAO strategy with the optimal model parameters (found by experiment and using methods such as GridSearch and Genetics).
The specific steps in the classifying phase are the following:
Step 1: Allow the user to select seed words, then generate queries randomly.
Step 2: Perform a search for each query through Google and store the links of the found news sites after filtering out invalid links such as music, video and forums.
Step 3: Use the method of [1] to extract text from the downloaded links, and check and remove the text files which do not meet the size requirement (size < 1KB) or have duplicate content.
Step 4: Separate words (integrating VnTokenize) and remove stop words from the text. Represent the text as a feature vector in the input format of the SVM algorithm.
Step 5: Perform classification (under 5 labels), and save the results in a database.
4. Experiments and Evaluation
4.1. Pre-processing data and experiments on classification model training
Training and testing data were built semi-automatically by the authors. We developed software to automatically extract the content of the text by following the RSS links of two electronic news sites, VietNamnet.net and Vnexpress.net, by date for each topic (Technology, Education, Business, Laws, Sport). The data obtained are the links of news sites after removing duplicate and invalid links. The content of the news sites is extracted based on the method described in [1]. Then, the preprocessing steps of word separation and stop-word removal are performed as in step 2 of the training phase presented in section 3.2. After word separation, the words are re-weighted to carry out the selection of the feature vector for each article. Each article in the data set is represented by an n-dimensional vector, where each dimension corresponds to a feature word. We choose the n dimensions corresponding to the n words with the highest weights. All news articles are represented as described in step 3 of Section 3.2, in which Valuei of the ith word in the vector representing the jth news article is the TF.IDF weight of the word, given by formula (4) and based on the following quantities:
Where:
- tfij (Term frequency): the number of occurrences of the word wi in document dj.
- dfi (Document frequency): the number of documents that contain the word wi.
- cfi (Collection frequency): the number of occurrences of the word wi in the whole corpus.
If Valuei = 0, the feature does not need to be kept.
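A minimal sketch of TF.IDF weighting in the sparse index:value form described in Step 3 is given below. The log-based weighting and the toy token lists are assumptions, since the exact normalization of formula (4) is not reproduced here.

import math
from collections import Counter

def tfidf_vectors(docs):
    # docs: list of token lists (one per news article, already word-segmented)
    n_docs = len(docs)
    df = Counter()                      # df_i: number of documents containing word w_i
    for doc in docs:
        df.update(set(doc))
    vocab = sorted(df)                  # feature index = position in the sorted vocabulary
    vectors = []
    for doc in docs:
        tf = Counter(doc)               # tf_ij: occurrences of word w_i in document d_j
        vec = {}
        for i, word in enumerate(vocab):
            if tf[word]:
                vec[i] = tf[word] * math.log(n_docs / df[word])
        vectors.append(vec)             # sparse {feature index: weight}, zero weights omitted
    return vocab, vectors

docs = [["gia", "dinh", "bong", "da"], ["bong", "da", "world", "cup"], ["kinh", "te", "gia"]]
vocab, vectors = tfidf_vectors(docs)
print(vectors[1])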
The data set for the training and testing phases includes
2114 articles in total in which 1000 articles belong to the
training data set and all remaining 1114 the testing belong
to data set. Table 3 lists the number of the training data set
and testing on each topic.
Table 3. Number of training and testing data per topic

Topic        Training data set   Test data set
Technology   200                 123
Education    200                 128
Business     200                 236
Sports       200                 62
Laws         200                 565
To choose the dimension of the feature vectors, we performed experiments with the different values of n listed in Table 4. Evaluation results of the SVM and Bayes classifiers with different feature-vector dimensions on the same training and testing data are shown in Table 4, with accuracy evaluated by formulas (5) and (6). This is the basis for selecting the dimension of the feature space for the classifiers: the criteria are high classification accuracy and small fluctuations within a certain range of dimensions. Based on Table 4, the authors chose a dimension of n = 2000 for the SVM method and n = 1500 for the Bayes method.
Pre = (TD / SD) × 100%    (5)
Where:
- Pre: Classification accuracy for a topic.
- TD: The number of correctly classified documents.
- SD: Total number of documents to be classified.
The per-topic test data and evaluation results are given in Table 5 according to formulas (5) and (6). The classification accuracy differs from topic to topic: Technology has the lowest accuracy and Sports the highest.
Tpre = (Σi TDCi / Σi SDCi) × 100%    (6)
Where:
- Tpre: Total classification accuracy over all topics.
- TDCi: The number of correctly classified documents belonging to topic Ci.
- SDCi: Total number of classified documents belonging to topic Ci.
- ST: Total number of topics (the summations run over i = 1..ST).

Table 5. Results of evaluation categorized by topic

Topic        Number of news sites   NB Method   SVM Method
Technology   123                    77.23%      87%
Education    128                    92.96%      96.88%
Business     236                    94.91%      83.47%
Laws         62                     83.87%      96.77%
Sports       565                    94.51%      98.58%
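The following sketch computes the two measures as reconstructed in formulas (5) and (6); the counts in the example are placeholders, not the paper's results.

```python
def pre(td, sd):
    """Formula (5): classification accuracy for one topic, in percent."""
    return 100.0 * td / sd

def tpre(per_topic):
    """Formula (6): total accuracy; per_topic maps topic -> (TDCi, SDCi)."""
    correct = sum(tdc for tdc, _ in per_topic.values())
    total = sum(sdc for _, sdc in per_topic.values())
    return 100.0 * correct / total

# Placeholder counts, for illustration only (not the paper's results):
counts = {"Technology": (100, 123), "Sports": (550, 565)}
print(round(pre(*counts["Sports"]), 2), round(tpre(counts), 2))
```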
4.2 Classification experiment and evaluation
In order to automatically classify information on the web, the authors built an application which automatically classifies news into 5 topics: Technology, Education, Laws, Business and Sports. The classification models are trained with the SVM and Bayes algorithms using the feature-vector dimensions selected in Section 4.1. The application follows the 5 specific steps of the classification phase presented in Section 3.2. To evaluate the classification model obtained after training, we tested it on the 1,114 test documents in the different categories.
5. Conclusion
This paper describes an automatic classification framework for Vietnamese news collected from news sites on the Internet. In the proposed method, the contents of Vietnamese news are extracted automatically by applying the improved extraction method of the author group [1]. The news is then classified using two machine learning methods, Naïve Bayes and SVM. Experiments on news sites (vietnamnet.vn and vnexpress.net) show that the SVM method gives a higher accuracy (about 94%), while the Naïve Bayes method gives lower results. However, we find that the classification accuracy differs between topics, and Sports news has the highest accuracy. In the future, we aim to improve the automatic classification methods to increase classification accuracy for the different news topics and to widen the types of sites covered, with different and more complicated content than news.
Table 4. Results of evaluation with different lengths of feature vectors

Number of dimensions   Accuracy of SVM   Accuracy of Naïve Bayes
500                    91.56%            89.77%
800                    91.92%            90.04%
1000                   92.01%            90.39%
1200                   92.37%            90.57%
1500                   93.00%            91.92%
1800                   93.63%            90.48%
2000                   93.81%            90.84%
2500                   93.72%            90.79%
Acknowledgments
We would like to thank Dr. Phuong Le Hong for providing useful tools for Vietnamese word processing on the Web.
References
[1] Phan Thi Ha and Ha Hai Nam, “Automatic main text extraction from
web pages”, Journal of Science and Technology, Vietnam, Vol. 51,
No.1, 2013.
[2] Yiming Yang and Xin Liu, “A re-examination of text categorization
methods”, Proceedings of ACM SIGIR Conference on Research and
Development in Information Retrieval (SIGIR’99), 1999.
[3] Rong Hu, “Active Learning for Text Classification”, Ph.D Thesis,
Dublin Institute of Technology, Dublin, Ireland, 2011.
[4] Joachims T., “Text categorization with Support Vector Machines:
Learning with many relevant features”, in Proc. of the European
Conference on Machine Learning (ECML), 1998, pages 137–142.
The results showed that the SVM method achieves an accuracy of approximately 94% and the Naïve Bayes (NB) method approximately 91%, both calculated by formulas (5) and (6). For each topic, the test data and evaluation results are given in Table 5.
[5] Fabrizio Sebastiani, “Text categorization”, In Alessandro Zanasi
(ed.), Text Mining and its Applications, WIT Press, Southampton,
UK, 2005, pp. 109-129.
First Author: Dr. Phan Thi Ha is currently a lecturer of the Faculty of Information Technology at the Posts and Telecommunications Institute of Technology in Vietnam. She received a B.Sc. in Mathematics & Informatics, an M.Sc. in Mathematical Guarantee for Computer Systems, and a Ph.D. in Information Systems in 1994, 2000 and 2013, respectively. Her research interests include machine learning, natural language processing and mathematical applications.
[6] Tong Zhang and Frank J. Oles. “Text categorization based on
regularized linear classification methods”, Information Retrieval,
Vol. 4:5-31, 2001
[7] Tran Vu Pham, Le Nguyen Thach, “Social-Aware Document
Similarity Computation for Recommender Systems”, in Proceedings
of the 2011 IEEE Ninth International Conference on Dependable,
Autonomic and Secure Computing, 2011, Pages 872-878
Second Author: M.Sc. Nguyen Quynh Chi is currently a lecturer of the Faculty of Information Technology at the Posts and Telecommunications Institute of Technology in Vietnam. She received a B.Sc. in Information Technology from Hanoi University of Technology in Vietnam and an M.Sc. in Computer Science from the University of California, Davis (UCD), USA, and became a Ph.D. candidate at UCD, in 1999, 2004 and 2006, respectively. Her research interests include machine learning and data mining.
[8] Tran Vu Pham, “Dynamic Profile Representation and Matching in
Distributed Scientific Networks”, Journal of Science and Technology
Development, Vol. 14, No. K2, 2011
[9] David Gibson, Kunal Punera, Andrew Tomkins, “The Volume and
Evolution of Web Page Templates”. In WWW\'05: Special interest
tracks and posters of the 14th international conference on World
Wide Web, 2005.
[10] Aidan Finn, Nicholas Kushmerick, Barry Smyth, “Fact or Fiction:
Content Classification for Digital Libraries”, Proceedings of the
Second DELOS Network of Excellence Workshop on
Personalisation and Recommender Systems in Digital Libraries,
Dublin City University, Ireland, 2001.
[11] Aidan. Fin, R. Rahman, H. Alam and R. Hartono, “Content
Extraction from HTML Documents”, in WDA: Workshop on Web
Document Analysis, Seattle, USA, 2001.
[12] C.N. Ziegler and M. Skubacz, “Content extraction from news pages
using particle swarm optimization on linguistic and structural
features,” in WI ’07: Proceedings of the IEEE/WIC/ACM
International Conference on Web Intelligence. Washington, DC,
USA: IEEE Computer Society, 2007, pp. 242–249.
[13] Ion Muslea, Steve Minton, and Craig Knoblock, “A hierarchical
Approach to Wrapper Induction”, in Proceedings of the third annual
conference on Autonomous Agents, 1999, Pages 190-197.
[14] Tim Weninger, William H. Hsu, Jiawei Han, “CETR-Content
Extraction via Tag Ratios”. In Proceedings of the 19th international
conference on World wide web, 2010, Pages 971-980
[15] Sandip Debnath, Prasenjit Mitra, Nirmal Pal, C. Lee Giles,
“Automatic Identification of Informative Sections of Web-pages”,
Journal IEEE Transactions on Knowledge and Data Engineering,
Vol. 17, No. 9, 2005, pages 1233-1246
[16] http://nhuthuan.blogspot.com/2006/11/s-lc-v-k-thut-trong-vietspider3.html
[17] www.majestic12.co.uk/projects/html_parser.php
[18] Vu Thanh Nguyen, Trang Nhat Quang, “Ứng dụng thuật toán phân lớp rút trích thông tin văn bản FVM trên Internet” (Applying the FVM classification algorithm for text information extraction on the Internet), Journal of Science and Technology Development, Vol. 12, No. 05, 2009.
[19] Ngo Quoc Hung, “Tìm kiếm tự động văn bản song ngữ Anh-Việt từ Internet” (Automatic retrieval of English-Vietnamese bilingual texts from the Internet), M.S. thesis, Ho Chi Minh City University of Science, Vietnam, 2008.
[20] http://mim.hus.vnu.edu.vn/phuonglh/softwares/vnTokenizer
[21] http://www.cs.cornell.edu/people/tj/svm_light/svm_multiclass.html
[22] http://mason.gmu.edu/~montecin/htmltags.htm#htmlformat
[23] http://www.w3schools.com/tags/
Practical applications of spiking neural network in information processing and learning
Fariborz Khademian 1, Reza Khanbabaie 2
1 Physics Department, Babol Noshirvani University of Technology, Babol, Iran, [email protected]
2 Physics Department, Babol Noshirvani University of Technology, Babol, Iran, [email protected]
Abstract
Historically, much of the research effort devoted to the neural mechanisms of information processing in the brain has been spent on neuronal circuits and synaptic organization, largely neglecting the electrophysiological properties of the neurons. In this paper we present an instance of a practical application that uses spiking neurons and temporal coding to process information, building a spiking neural network (SNN) to perform a clustering task. The input is encoded by means of receptive fields. The delay and weight adaptation uses a multiple-synapse approach, dividing each synapse into sub-synapses, each with a different fixed delay. The delay selection is then performed by a Hebbian reinforcement learning algorithm, which also keeps a resemblance to biological neural networks.
Keywords: Information processing, Spiking neural network, Learning.
Fig.1 simple temporal encoding scheme for two analog variables:
x1=3.5 and x2=7.0, with x3 as the reference, firing always at 0 in a
coding interval of 10ms.
A very simple temporal coding method, suggested in [13],
[14], is to code an analog variable directly in a finite time
interval. For example, we can code values varying from 0
to 20 simply by choosing an interval of 10ms and
converting the analog values directly into proportional
delays inside this interval, so that an analog value of 9.0
would correspond to a delay of 4.5ms. In this case, the
analog value is encoded in the time interval between two
or more spikes, and a neuron with a fixed firing time is
needed to serve as a reference. Fig. 1 shows the output of
three spiking neurons used to encode two analog variables.
Without the reference neuron, the values 3.5 and 7.0 would be indistinguishable from the values 6.0 and 2.5, since both pairs have the same inter-spike interval. If we now fully connect these three input neurons to two output neurons, we obtain an SNN like the one shown in fig. 2, which is capable of correctly separating the two clusters shown on the right side of the figure. Although this is a very simple example, it is quite useful to illustrate how real spiking neurons possibly work. The clustering here was made using only the axonal delays between the input and output neurons, with all the weights equal to one.
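As a small illustration of this encoding, the sketch below maps analog values in [0, 20] onto proportional delays within a 10 ms coding interval and reproduces the ambiguity that the reference neuron resolves; the function name and constants are assumptions drawn only from the example in the text.

```python
CODING_INTERVAL_MS = 10.0
VALUE_RANGE = (0.0, 20.0)

def encode(value):
    """Convert an analog value in [0, 20] into a proportional delay in ms."""
    lo, hi = VALUE_RANGE
    return (value - lo) / (hi - lo) * CODING_INTERVAL_MS

# x1 = 3.5 and x2 = 7.0 as in fig. 1, plus the reference neuron x3 firing at 0 ms:
print({"x1": encode(3.5), "x2": encode(7.0), "x3": 0.0})   # {'x1': 1.75, 'x2': 3.5, 'x3': 0.0}

# Without the reference, (3.5, 7.0) and (6.0, 2.5) give the same inter-spike interval:
print(abs(encode(3.5) - encode(7.0)), abs(encode(6.0) - encode(2.5)))  # 1.75 1.75
```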
1. Information Encoding
When we are dealing with spiking neurons, the first
question is how neurons encode information in their spike
trains, since we are especially interested in a method to
translate an analog value into spikes [1], so we can process
this information in an SNN. This fundamental issue in neurophysiology is still not completely solved and is extensively discussed in several publications [2], [3], [4],
[5]. Although a line dividing the various coding schemes
cannot always be clearly drawn [6], it is possible to
distinguish essentially three different approaches [7], [8],
in a very rough categorization:
1. rate coding: the information is encoded in the
firing rate of the neurons [9].
2. temporal coding: the information is encoded by
the timing of the spikes. [10]
3. population coding: information is encoded by the
activity of different pools (populations)
of neurons, where a neuron may participate in several pools [11], [12].
In an SNN with input and output neurons, the center of each RBF-like (Radial Basis Function) output neuron is given by the vector of the delays on its incoming connections, and the input vectors are defined by the firing times of the input neurons [10]. The SNN used here has an EPSP (excitatory postsynaptic potential) with a fixed time constant and a fixed firing threshold. The lateral connection between the two output neurons is strongly inhibitory.
Fig. 3 Continuous input variable encoded by means of local receptive fields.
neurons as if they were some sort of sensory system
sending signals proportional to their excitation, defined by
the Gaussian receptive fields. These neurons translate the
sensory signals into delayed spikes and send them forward
to the output neurons. In this work, the encoding was made
with the analog variables normalized in the interval [0,1]
and the receptive fields equally distributed, with the
centers of the first and the last receptive fields laying
outside the coding interval [0,1], as shown in fig. 3, there
is another way to encode analog variables, very similar to
the first, the only difference being that no center lays
outside the coding interval and the width of the receptive
fields is broader.
Fig.2 Left: SNN with a bi-dimensional input formed by three
spiking neurons and two RBF-like output neurons. Right: two
randomly generated clusters (crosses and circles), correctly
separated by the SNN.
This encoding method can work perfectly well when the number of clusters is less than or equal to the number of dimensions, but its performance decreases when the number of clusters exceeds the number of input dimensions. The solution to this problem [15] implemented here uses an encoding method based on population coding [16], which distributes an input variable over multiple input neurons. With this method, the input variables are encoded with graded and overlapping activation functions, modeled as local receptive fields. Fig. 3 shows the encoding of the value 0.3. In this case, assuming that the time unit is milliseconds, the value 0.3 was encoded with six neurons by delaying the firing of neurons 1 (5.564 ms), 2 (1.287 ms), 3 (0.250 ms), 4 (3.783 ms) and 5 (7.741 ms). Neuron 6 does not fire at all, since its delay is above 9 ms and lies in the no-firing zone. It is easy to see that values close to 0.3 will cause neurons 2 and 3 to fire earlier than the others, meaning that the better a neuron is stimulated, the nearer to 0 ms it will fire. A delay of up to 9 ms is assigned to the less stimulated neurons, and above this limit the neuron does not fire at all (see fig. 4).
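Since Eq. (1) and (2) are not legible in this extraction, the sketch below uses one plausible parameterization of the Gaussian receptive fields (evenly spaced centres with the outermost centres outside [0, 1], width (hi − lo)/(m − 3), and delay 10·(1 − activation) ms with a 9 ms no-firing cutoff). Under these assumptions it reproduces the delays quoted above for the value 0.3, but the paper's exact equations may differ.

```python
import math

def receptive_field_delays(x, m=6, lo=0.0, hi=1.0, t_max=10.0, no_fire_after=9.0):
    """Encode x in [lo, hi] as the firing delays of m Gaussian receptive fields."""
    spacing = (hi - lo) / (m - 2)
    centres = [lo + (i - 1.5) * spacing for i in range(1, m + 1)]  # outermost centres lie outside [lo, hi]
    sigma = (hi - lo) / (m - 3)               # width assumed so the worked example is reproduced
    delays = []
    for c in centres:
        activation = math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))
        delay = t_max * (1.0 - activation)    # strongly stimulated neurons fire early
        delays.append(round(delay, 3) if delay <= no_fire_after else None)  # None: no spike
    return delays

print(receptive_field_delays(0.3))
# -> [5.564, 1.287, 0.25, 3.783, 7.741, None], matching the delays quoted for 0.3
```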
Fig. 4 Spikes generated by the encoding scheme of the first type
shown in figure 3.
Both types of encoding can be used simultaneously, as in fig. 5, to enhance the range of detectable detail and provide multi-scale sensitivity [17]. The width and the
centers are defined by Eq. (1) and (2) for the first and second types, respectively. Unless otherwise mentioned, the value of λ used for the first type is 1.5 and 0.5 for the second.
Fig. 6 The SNN at left was capable of correctly separating ten clusters. The lateral connections linking the output neurons are strong inhibitory synapses, preventing all other neurons from firing after the first neuron has fired, thus implementing the winner-takes-all process. Each dimension of the input was coded by a receptive field with 12 neurons.
The learning function used here, shown in fig. 7, is a Gaussian curve defined by Eq. (4) [23]. It reinforces the synapse between neurons i and j if the difference between their firing times falls inside the neighborhood of the curve, and depresses the synapse if it falls outside.
Fig. 5 Two ways to encode a continuous input variable by means of local receptive fields. The dotted lines are wide receptive fields of the second type (λ = 0.5).
2. Learning
Giving some background information and instances of applying the models to simulate real neurons, these examples demonstrate the existence of a relationship between electrophysiology, bifurcations, and the computational properties of neurons, and also show the foundations of the dynamical behavior of neural systems. The approach presented here implements the Hebbian reinforcement learning method [18] through a winner-takes-all algorithm [19]; its practical application in SNNs is discussed in [20] and a more theoretical treatment is presented in [21]. In a clustering task, the learning process consists mainly of adapting the time delays, so that each output neuron represents an RBF center. This is achieved using a learning window or learning function [22], defined as a function of the time interval Δt between the pre- and postsynaptic firing times. This function controls the learning process by updating the weights based on this time difference, as shown in Eq. (3), where Δw is the amount by which the weights are increased or decreased and η is the learning rate.
Fig. 7 Gaussian learning function, with parameters α, β and ν.
The learning window is defined by the following parameters:
- ν: this parameter, called here the neighborhood, determines the width of the learning window where it crosses the zero line and affects the range of Δt inside which the weights are increased.
[4] Araki, Osamu, and Kazuyuki Aihara. "Dual coding in a
network of spiking neurons: aperiodic spikes and stable firing
rates." Neural Networks, 1999. IJCNN'99. International Joint
Conference on. Vol. 1. IEEE, 1999.
[5] Koch, Christof. Biophysics of computation: information
processing in single neurons. Oxford university press, 1998.
[6] Dayan, Peter, and L. F. Abbott. "Theoretical neuroscience: computational and mathematical modeling of neural systems." Philosophical Psychology (2001): 563-577.
[7] Maass, Wolfgang, and Christopher M. Bishop. Pulsed neural
networks. MIT press, 2001.
[8] Gerstner, Wulfram, and Werner M. Kistler. Spiking neuron
models: Single neurons, populations, plasticity. Cambridge
university press, 2002.
[9] Wilson, Hugh Reid. Spikes, decisions, and actions: the
dynamical foundations of neuroscience. Oxford University
Press, 1999.
[10] Ruf, Berthold. Computing and learning with spiking
neurons: theory and simulations. na, 1998.
[11] Snippe, Herman P. "Parameter extraction from population
codes: A critical assessment." Neural Computation 8.3
(1996): 511-529.
[12] Gerstner, Wulfram. "Rapid signal transmission by
populations of spiking neurons." IEE Conference Publication.
Vol. 1. London; Institution of Electrical Engineers; 1999,
1999.
[13] Hopfield, John J. "Pattern recognition computation using action potential timing for stimulus representation." Nature 376.6535 (1995): 33-36.
[14] Maass, Wolfgang. "Networks of spiking neurons: the third
generation of neural network models." Neural networks 10.9
(1997): 1659-1671.
[15] Avalos, Diego, and Fernando Ramirez. "An Introduction to Using Spiking Neural Networks for Traffic Sign Recognition." Sistemas Inteligentes: Reportes Finales Ene-May 2014 (2014): 41.
[16] de Kamps, Marc, and Frank van der Velde. "From artificial
neural networks to spiking neuron populations and back
again." Neural Networks 14.6 (2001): 941-953.
[17] Avalos, Diego, and Fernando Ramirez. "An Introduction to Using Spiking Neural Networks for Traffic Sign Recognition." Sistemas Inteligentes: Reportes Finales Ene-May 2014 (2014): 41.
[18] Morris, R. G. M. "DO Hebb: The Organization of Behavior,
Wiley: New York; 1949." Brain research bulletin 50.5
(1999): 437.
[19] Haykin, Simon. "Adaptive filters." Signal Processing
Magazine 6 (1999).
[20] Bohte, Sander M., Han La Poutré, and Joost N. Kok.
"Unsupervised clustering with spiking neurons by sparse
temporal coding and multilayer RBF networks."Neural
Networks, IEEE Transactions on 13.2 (2002): 426-435.
[21] Gerstner, Wulfram, and Werner M. Kistler. "Mathematical formulations of Hebbian learning." Biological Cybernetics 87.5-6 (2002): 404-415.
[22] Kempter, Richard, Wulfram Gerstner, and J. Leo Van
Hemmen. "Hebbian learning and spiking neurons." Physical
Review E 59.4 (1999): 4498.
Inside the neighborhood the weights are increased; outside it they are decreased.
- β: this parameter determines the amount by which the weights are reduced and corresponds to the part of the curve lying outside the neighborhood and below the zero line.
- α: because of the time constant of the EPSP, a presynaptic neuron firing exactly together with the postsynaptic neuron does not contribute to its firing, so the learning window must be shifted slightly to take this time interval into account and to avoid reinforcing synapses that do not stimulate the postsynaptic neuron.
Since the objective of the learning process is to approximate the firing times of all the neurons related to the same cluster, it is quite clear that a less stimulated neuron (large firing delay and thus low weight) must also have a lower time constant, so that it can fire faster and compensate for the large delay. Similarly, a more stimulated neuron (small firing delay and thus high weight) must also have a higher time constant, so that it can fire slower and compensate for the small delay.
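Because Eq. (3) and (4) are not legible in this extraction, the sketch below assumes a commonly used form of the Gaussian learning window (cf. [20], [23]), L(Δt) = (1 + β)·exp(−(Δt − α)²/(2ν²)) − β, with ν as the neighborhood width, β as the depression outside it and α as the small shift that compensates for the EPSP time constant; the numeric parameter values are placeholders, not those of fig. 7.

```python
import math

def learning_window(dt, alpha=0.0, beta=0.2, nu=2.0):
    """Assumed Gaussian learning window L(dt); positive inside the
    neighborhood around alpha, saturating at -beta far outside it."""
    return (1.0 + beta) * math.exp(-((dt - alpha) ** 2) / (2.0 * nu ** 2)) - beta

def update_weight(w, dt, eta=0.05, w_min=0.0, w_max=1.0):
    """Eq. (3)-style Hebbian update: increase the weight of sub-synapses whose
    delay places the presynaptic spike inside the neighborhood, decrease it otherwise."""
    w += eta * learning_window(dt)
    return min(max(w, w_min), w_max)   # keep the weight bounded

# A sub-synapse firing close to the postsynaptic spike is reinforced,
# one firing far from it is depressed:
print(round(update_weight(0.5, dt=0.3), 3), round(update_weight(0.5, dt=6.0), 3))  # 0.549 0.491
```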
3. Conclusions
All the experimental results obtained in this work indicate that the simultaneous adaptation of weights and time constants (or axonal delays) must be submitted to a far more extensive theoretical analysis. Given the high complexity of the problem, this is not encompassed by the scope of the present work and is left for future work. We presented a practical application of a neural network, built with more biologically inspired neurons, to perform what we could call a real neuroscience task. In this application we demonstrated how analog values can be temporally encoded and how a network can learn using this temporal code. Even with these very short steps towards the realm of neuroscience, it is not difficult to realize how intricate things can get if we try to descend deeper into the details of neural simulation. However, this apparent difficulty should rather be regarded as an opportunity to use spike timing as an additional variable in information processing by neural networks [24].
References
[1] Hernandez, Gerardina, Paul Munro, and Jonathan Rubin.
"Mapping from the spike domain to the rate-based
domain." Neural Information Processing, 2002. ICONIP'02.
Proceedings of the 9th International Conference on. Vol. 4.
IEEE, 2002.
[2] Aihara, Kazuyuki, and Isao Tokuda. "Possible neural coding
with interevent intervals of synchronous firing." Physical
Review E 66.2 (2002): 026212.
[3] Yoshioka, Masahiko, and Masatoshi Shiino. "Pattern coding
based on firing times in a network of spiking
neurons." Neural Networks, 1999. IJCNN'99. International
Joint Conference on. Vol. 1. IEEE, 1999.
[23] Leibold, Christian, and J. Leo van Hemmen. "Temporal
receptive fields, spikes, and Hebbian delay selection." Neural
Networks 14.6 (2001): 805-813.
[24] Bohte, Sander M. "The evidence for neural information
processing with precise spike-times: A survey." Natural
Computing 3.2 (2004): 195-206.
First Author: Master of Science in neurophysics at Noshirvani Institute of Technology; analyzing visual information transmission in a network of neurons with a feedback loop.
Second Author: Post-doctoral fellow at the University of Ottawa, Canada; synaptic plasticity, dynamic synapses, signal processing in the electric fish brain. Ph.D. at Washington University in St. Louis, USA; synaptic plasticity, dynamic synapses, signal processing in the bird brain.